Increase Traffic to Your Website
Linux provides proxy, news, and search services through several different software packages. Linux provides a proxy-caching server that caches Web information, which increases clients' access speed and reduces a company's bandwidth costs. The most popular proxy-caching server for Linux is called Squid. Squid is a high-performance proxy-caching server that supports FTP, Gopher, and HTTP data objects. It is used to cache (store) information that was previously accessed by clients. When a user requests a Web page, Squid checks to see if it has a copy of the requested information in its cache. If Squid has this information, it checks whether the information is current, updates it if necessary, and then forwards it to the client. This saves the client time if the information was previously accessed, and it saves the company money and resources because Squid removes the need to fetch the requested information from the Internet. Squid also allows the...
Here three Web servers are providing Web services to the Internet, and they share a network with an NFS server and a database server. What's wrong with this picture? Well, several things are wrong. First of all, these machines are still using a dumb hub instead of a switch. Second, the NFS and database traffic is competing with the incoming and outgoing Web traffic. If a Web application needs database access, it generates database requests in response to a Web request from the Internet, which in turn reduces the bandwidth available for other incoming or outgoing Web requests, effectively making the network unnecessarily busy or less responsive. How can you solve such a problem? Using a traffic control mechanism, of course. First, determine what traffic can be isolated in this network. Naturally, the database and NFS traffic is needed only to service the Web servers. In that case, NFS and database traffic should be isolated so that they don't compete with Web traffic....
As the SYSM module is a large embedded system, it will boot off a hard disk and use the full features available to a conventional workstation or server, including swapping. The server may be a Sun, a PowerPC, an ARM, or a conventional PC. It makes little difference which type of architecture is actually used for the SYSM module, since most of its functionality is fairly high level. Because it needs to serve many different applications in parallel while responding rapidly to incoming traffic, the SYSM module runs a kernel configured for preemption. Also, as it serves as a central point for management, it is a multiuser system with an extensive user toolset. The root filesystem on the SYSM module will look similar to the one found on common workstations and servers. In fact, we may even install a conventional server distribution on the SYSM module and configure it to our needs.
A standard practice is to place the e-mail server in the DMZ. A DMZ usually comprises a screening router that blocks most attacks (denial-of-service, system scanning, attacks against Microsoft NetBIOS ports, etc.) and a firewall device that authoritatively blocks incoming traffic, effectively separating the internal network from the world. The DMZ exists between the screening router and the firewall. However, it is often a best practice to place the e-mail server behind the firewall itself. If you do this, however, you must make sure your firewall is configured correctly. Otherwise, a malicious user can take advantage of a misconfigured firewall and gain access to your internal network.
Figures 5-1 and 5-2 show an IDS as the traffic monitoring device. While it is obviously advisable for networks to include an IDS, many other sophisticated tools are available that combine IDS functionality with a host of other features, such as advanced traffic analysis and profiling tools. IPTraf is another good utility; it provides the same type of statistical traffic analysis, but does so in an ASCII-friendly format. Employing an effective method for monitoring network traffic and averages by the categories just mentioned, along with established normal baselines, will make unusual traffic patterns more apparent and easier to recognize. For example, during a typical evening between midnight and 2:00 a.m., the only significant traffic experienced on a network is network backup traffic on various servers. However, in the morning during business hours, the traffic monitoring interface indicates an unusually high amount of SQL traffic from the customer...
Other firewalls are embedded in specific applications and provide access controls on the ports with which they communicate. ModSecurity is a perfect example of an embedded application firewall. It is a module in Apache that provides a wide variety of functions customized for web traffic, such as intrusion prevention, event correlation, anomaly detection, and so on.
Now that you've given your machine an identity on the network, you need to tell it where to send all its traffic. Ideally this would be a switch or a router that takes all your traffic in the direction of the Internet and the rest of your network, otherwise known as your default gateway.
To detect and prevent reverse tunneling, all of the same principles apply as in detecting tunneling, but you are hampered significantly by the likelihood that encryption is being employed. Keep in mind that reverse tunneling could go across any port and may look like encrypted web traffic if sent across port 80.
Deploy appropriate physical controls. This is especially important in a VoIP environment. Unless the voice traffic is encrypted, anyone with physical access to the LAN could potentially tap into telephone conversations. Even when encryption is used, physical access to VoIP servers and gateways may allow an attacker to perform traffic analysis to some extent, depending on configuration. Organizations should, therefore, ensure that adequate physical security is in place to restrict access to VoIP network components. Furthermore, additional security measures such as authentication, address filtering, and alarms for notifying the administrator when devices are disconnected can mitigate the risks involved in physical security.
We can expand on the second point a little to give us a different view of the data on our link. Let's now imagine that we class FTP, SMTP, and World Wide Web traffic as essential traffic, and all other traffic as nonessential. If we were interested in seeing the ratio of essential traffic to nonessential traffic, we could do something like
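One way to get that ratio is with tcpdump filters. The following is a minimal sketch, assuming a previously saved capture file (the file name and the use of standard ports 20/21, 25, and 80 are assumptions, not from the original text):

```shell
# Split a saved capture into "essential" (FTP, SMTP, WWW) and
# "nonessential" traffic, assuming standard ports.
tcpdump -r link.pcap -w essential.pcap \
    'tcp port ftp or tcp port ftp-data or tcp port smtp or tcp port http'
tcpdump -r link.pcap -w nonessential.pcap \
    'not (tcp port ftp or tcp port ftp-data or tcp port smtp or tcp port http)'

# Compare packet counts to estimate the essential/nonessential ratio.
tcpdump -r essential.pcap | wc -l
tcpdump -r nonessential.pcap | wc -l
```

Counting packets is only a rough measure; counting bytes (for example, with tcpdump's -e flag and a small script) gives a truer picture of bandwidth share.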
Let's say you're running a 30-person insurance claims-processing office, with lots of medical and financial records stored on four different file servers (all four running Windows 2003 Server). The company processes its own e-mail and Web services, outsources spam and virus filtering to a third party, and runs a few custom applications to handle the ordinary course of business. Staff members also use Windows XP Professional computers and laptops for daily activities. The office does not have any Macintosh or Unix computers (although e-mail may be moved to an open-source solution, time permitting). There's a firewall for NAT. Access Control Lists (ACLs) block everything except inbound mail to the mail server and Web traffic to the Web server.
You can make your Web site public by renting server space from an ISP or by obtaining your own domain name and hosting the site from your Apache server. When you rent machine space from an ISP, you pay not only for the room your data takes up, but also for the Web traffic to your site. If you have a small and simple site with little traffic, that's not a problem. If you have a large or complex site with a lot of traffic, renting server space can get costly.
With apachetop, you can see the host machine that your visitors are using, determine whether they came by way of a search engine, and find out what pages they visited while they were there. You can also use apachetop to see if your visitors are repeatedly asking for documents that don't exist. You may want to rename your Web pages if the same page name is mistyped over and over.
Internet keywords are words or phrases typed into the Location bar that redirect your browser to a related site. Companies often pay for Internet keywords to drive traffic to their sites with these redirects. For example, you could simply type CNN in an empty Location bar and be taken directly to http://www.cnn.com. If Konqueror fails to match an Internet keyword, however, it will automatically use the search engine you have designated as the default in the Configure Konqueror menu (to be explained shortly).
The internal firewall protects the internal network from outside access. It forwards non-local traffic to the DMZ and restricts incoming traffic. This firewall should be configured to do the following: allow only incoming traffic that has been generated in response to internal requests. Incoming packets to this firewall must not have the SYN bit set.
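That policy might be sketched with iptables roughly as follows; the interface name eth0 and the use of the state match are assumptions, not from the original text:

```shell
# Default-deny, then admit only reply traffic from the outside.
iptables -P INPUT DROP
iptables -P FORWARD DROP

# Allow packets belonging to (or related to) connections that
# were initiated from the inside.
iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

# Drop unsolicited connection attempts: new TCP connections
# arrive with the SYN bit set.
iptables -A FORWARD -i eth0 -p tcp --syn -j DROP
```

The state match does the heavy lifting here; the explicit --syn rule is redundant under a DROP policy but makes the intent visible in the rule set.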
Throughout this section, I'll be using a few different programs to generate traffic for TCPDump to capture. My primary tool for TCP-related captures will be telnet. I'll use telnet to generate traffic and mirror what the real protocol (or close to it) does in the real world. Generation of DNS queries will be accomplished using both the dig command and the host command. The ping and traceroute commands will be used. Finally, the hping2 command will be used to generate ICMP traffic as well as other interesting packets, especially in the attack section. With the exception of hping2, all of these programs are installed on most major Linux distributions.
Here's another place where the open-source model comes to the rescue. Because the developers of Snort concentrate their efforts on making Snort the best traffic analysis system possible, they don't worry about other tasks that folks might like Snort to handle. Instead, they make the source code freely available for anyone to tinker with and leave the optional extras up to other folks on the Net. One result is the SnortSam tool for blocking attacks.
As far as incoming traffic is concerned, you have many options. Many systems administrators want to create a firewall that forbids all incoming traffic, except for the TCP and UDP packets necessary when building up and tearing down a network connection. For example, if you want to allow internal clients access to the Web, you will need to allow remote hosts to make connections to your firewall. This involves allowing remote hosts to open up their local ports above 1023 to access your systems at ports above 1023. Therefore, you should take the following steps: 1. Configure your firewall to prohibit all incoming traffic from accessing any services below port 1024. The most secure firewall will not allow any connections to these ports. 2. Forbid all incoming traffic unless it is part of an already established session. In ipchains, the -y option will do this. In iptables, you would use the --syn option. Each of these options will have the firewall match and discard any incoming...
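The two steps above could be sketched in iptables as follows; the interface name eth0 is an assumption:

```shell
# Step 1: prohibit incoming traffic to privileged ports (0-1023),
# both TCP and UDP.
iptables -A INPUT -i eth0 -p tcp --dport 0:1023 -j DROP
iptables -A INPUT -i eth0 -p udp --dport 0:1023 -j DROP

# Step 2: forbid incoming TCP traffic unless it belongs to an
# already established session; --syn matches new connection attempts.
iptables -A INPUT -i eth0 -p tcp --syn -j DROP
```

Rule order matters: iptables evaluates rules top to bottom, so any ACCEPT exceptions (for services you do host) must come before these DROP rules.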
As with any server service running on any computer, some monitoring must take place so you have warning signs when something isn't working correctly. This mostly occurs through the log files. All log files reside in the /var/log/apache directory. The logs give you signs of attack, help to diagnose improper configuration settings, and provide valuable information about site traffic.
INPUT This chain applies to all incoming traffic that is destined for the server itself. Figure 12-1 is a schematic that provides an overview of where the three default chains function. The NIC positions in the figure indicate that the network interface card is involved. As you can see in the figure, the INPUT chain applies to incoming traffic before it encounters server processes, the OUTPUT chain involves outgoing traffic after it leaves the server processes, and the FORWARD chain involves traffic that goes from one network card directly to another.
Assuming that our old ISP isn't particularly cooperative but the new one is, we can use a configured or 6in6 tunnel to the new ISP while the old ISP still operates our uplink. We need to route all outbound traffic through this tunnel in case the old ISP uses ingress filtering and drops our outbound packets if their source address is from the new prefix. Incoming traffic to the old prefix will arrive untunneled, and traffic to the new prefix will be tunneled to us. This provides us with the necessary transition period to set up the new prefix. When we use only the new prefix and the tunnel as our uplink, we can switch our uplink to the new ISP, deconfigure the tunnel, and resume normal operations. As long as the old ISP doesn't delay the actual switch to the new uplink (and such cases have been reported), we can switch to the new service provider with a scheduled, very short loss of external network connectivity.
Ordinarily, when using Squid on a network to cache Web traffic, browsers must be configured to use the Squid system as a proxy. This type of configuration is known as traditional proxying. In many environments, this is simply not an acceptable method of implementation. Therefore, Squid provides a method to operate as an interception proxy, or transparently, which means users do not even need to be aware that a proxy is in place. Web traffic is redirected from port 80 to the port where Squid resides, and Squid acts like a standard Web server for the browser. Using Squid transparently is a two-part process, requiring first that Squid be configured properly to accept non-proxy requests, and second that Web traffic gets redirected to the Squid port. The first part of the configuration is performed in the Squid module, while the second part can be performed in the Linux Firewall module. That is, assuming you are using Linux; otherwise, you should consult the Squid FAQ Transparent Caching...
When users are sending ordinary web traffic, Squid can analyze the traffic. With SSL traffic this is much more difficult, because most of the data sent between the user and web site is encrypted. Therefore, the proxy cannot analyze the SSL traffic for forbidden content. Also, SSL-encrypted data that comes back to the proxy cannot be kept in its cache. For users, however, it is important that SSL traffic is sent through the proxy server to the Internet.
In corporate environments, it is not a good idea to configure all the clients in the network manually to use the proxy. In these cases, you can configure a transparent proxy. With a transparent proxy, you don't need any configuration on the client, because all web traffic is sent through the proxy automatically. The best way to configure a transparent proxy is by configuring the packet filter on your router. If iptables on a Linux router is used, you can configure a transparent proxy with the following rule
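One possible form of that rule is shown below; the interface name eth1 and Squid's default port 3128 are assumptions, not taken from the original text:

```shell
# Redirect web traffic arriving on the inside interface to the
# local Squid port, making the proxy transparent to clients.
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128
```

Squid must also be configured to accept intercepted (non-proxy) requests for this to work; redirecting the packets alone is not enough.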
Masquerading sits on top of forwarding as a separate kernel service. Traffic is masqueraded in both directions, but not symmetrically. Masquerading is unidirectional. Only outgoing connections can be initiated. As traffic from local machines passes through the firewall to a remote location, the internal machine's IP address and source port are replaced with the address of the firewall machine's external network interface and a free source port on the interface. The process is reversed for incoming responses. Before the packet is forwarded to the internal machine, the firewall's destination IP address and port are replaced with the real IP address and port of the internal machine participating in the connection. The firewall machine's port determines whether incoming traffic, all of which is addressed to the firewall machine, is destined to the firewall machine itself or to a particular local host.
Incoming TCP packet acceptance rules can make use of the connection state flags associated with TCP connections. All TCP connections adhere to the same set of connection states. These states differ between client and server because of the three-way handshake during connection establishment. As such, the firewall can distinguish between incoming traffic from remote clients and incoming traffic from remote servers.
The gufw tool provides several tabs that make it easy for you to create various types of rules without having to understand the syntax of the kernel's iptables rules. Once you enable a firewall, all incoming traffic is disabled by default, which is secure but probably not what you want if you plan to support network services such as incoming SSH connections, incoming FTP, and so on. Note that you do not need to select the Allow incoming radio button shown in Figure 25-27 to enable incoming traffic. The radio buttons at the top left of the gufw dialog identify the default behavior of your system. Any rules that you subsequently define represent exceptions to that default. For example, to quickly create a rule that allows incoming SSH connections, make sure that the Simple tab is selected in the Add a new rule section, enter ssh in the text entry field for this rule, and make sure that you are allowing traffic in both directions, as shown in Figure 25-28.
We recommend installing and using the http_inspect preprocessor. Using http_inspect normalizes all packets containing different forms of HTTP communication into a state that Snort can easily compare and scan through its rules. A huge amount of Web traffic crosses the Net, and many attacks rely on the HTTP protocol as their transmission medium. To configure your Snort system so that it normalizes Web traffic, you need to put a few lines in your snort.conf configuration file that look something like the following
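A minimal sketch of those snort.conf lines follows; the exact options vary between Snort versions, so treat this as an illustration rather than a drop-in configuration:

```
# Enable HTTP normalization globally, then apply a default
# server profile to the common web ports.
preprocessor http_inspect: global iis_unicode_map unicode.map 1252
preprocessor http_inspect_server: server default profile all ports { 80 8080 }
```

With these lines in place, Snort decodes encodings such as IIS Unicode and URL escaping before rule matching, so attacks cannot hide behind alternate representations of the same request.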
In this way, the firewall hides from the Internet server the fact that your computer even exists. Furthermore, a proxy firewall will commonly handle all incoming connections from the Internet (such as Web traffic, FTP downloads, and e-mail deliveries). Again, this is to minimize the visibility of your internal network to the outside world.
So I set out to answer some of those questions with a thoroughness that I hoped would severely reduce the number of repetitive questions on the mailing list, as well as answer some of the questions I found my clients often asked about Webmin on our servers. According to my revision information, I posted the first 0.01 draft on October 6, 2000 on a back corner of my personal website. It contained four chapters, none of which was more than ten pages. It covered Apache, Squid, and most of the Webmin-related configuration options. I had taken off a long weekend from Friday to Monday to write it and another couple of days to figure out how to process DocBook SGML. Within three days of mentioning it on the Webmin list, my book, if it could be called that in its diminutive early form, was receiving 1,000 hits and a few hundred unique visitors each day. Our company webserver, where my homepage is hosted, had never seen that much traffic in its entire existence. Interesting. Free stuff draws...
If you are running a server, you should take care to check the boxes next to the services that you plan to provide; incoming requests for network services that you do not check now will be blocked by default. For example, if you want to run a Web server, you should check the WWW (HTTP) box to indicate that incoming Web traffic should not be filtered out.
As part of its reporting on embedded Linux, LinuxDevices.com set up a web-based survey in 2000 and 2001 that site visitors could fill out to provide information regarding their use of Linux in embedded systems. In both years, a few hundred respondents participated in the survey. Though there were no control mechanisms to screen respondents, the results match those of other, more formal surveys. Both surveys contained a lot of information. For the sake of simplicity, I will mention only the surveys' most important findings.
The following is a simplified configuration file. The gist of this configuration is that all incoming traffic to privileged ports (those below 1024) is dropped except for ssh traffic. The first line accepts all traffic from the loopback interface. The second line accepts all incoming TCP traffic to the ssh port. The third line drops all incoming TCP traffic to ports between 1 and 1024. The last line drops all incoming UDP traffic to ports between 1 and 1024.
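The file itself is not reproduced here, but the four rules described above might look like this in iptables-restore format (a reconstruction, not the author's exact configuration):

```
*filter
# 1. Accept all traffic from the loopback interface.
-A INPUT -i lo -j ACCEPT
# 2. Accept incoming TCP traffic to the ssh port.
-A INPUT -p tcp --dport 22 -j ACCEPT
# 3. Drop incoming TCP traffic to ports 1-1024.
-A INPUT -p tcp --dport 1:1024 -j DROP
# 4. Drop incoming UDP traffic to ports 1-1024.
-A INPUT -p udp --dport 1:1024 -j DROP
COMMIT
```

The ssh ACCEPT rule must precede the TCP DROP rule, since port 22 falls inside the dropped range and iptables uses the first matching rule.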
This option puts the interface in promiscuous mode. On a broadcast network, this makes the interface receive all packets, regardless of whether they were destined for this host or not. This allows network traffic analysis using packet filters and such, also called Ethernet snooping. Usually, this is a good technique for hunting down network problems that are otherwise hard to detect. Tools such as tcpdump rely on this.
As you'll recall from last month, a Web proxy provides a control point for restricting which external Web sites your users can reach. It allows you to permit Web access without allowing non-Web traffic (or even publishing a default route to the Internet), and it provides a convenient place to perform content filtering and transaction logging. As you also may recall, unlike a firewall, a Web proxy doesn't need to be a physical choke point through which all traffic must pass for a physical path to the outside. Instead, you can use firewall rules or router ACLs that allow only Web traffic, as a means of ensuring your users will use the proxy. Accordingly, your Web proxy can be set up like any other server, with a single network interface.
The new Varnish 2.0 from Linpro is an open-source reverse Web accelerator for high-content Web sites that was designed from the ground up for incoming traffic, not as a client-side proxy or origin server. Varnish temporarily stores the most frequently requested pages in cache memory and offers tools for identifying which pages should and should not be cached and, if they are cached, when to delete them and present fresh content. The result, says Linpro, is a 90% reduction in server requirements. Varnish 2.0 offers new features like improved compression, expanded support for filtering Web content for caching, ESI language support, tighter integration with CMS solutions, load-balancing support, better scaling, and improved accelerator tuning. Varnish runs on Linux, Solaris, and FreeBSD. varnish-cache.com
It is even possible to do a form of disk mirroring over the network. For example, a Web server may store a duplicate copy of its data on another server. If the first server crashes, a simple TCP/IP host name change can redirect the Web traffic to the second server. When the original server is rebuilt, it can recover all of its data from the backup server and be back in business.
I used the hostname of a local workstation. To generate traffic, I pinged the host from my system. In Figure 5.10, you can see an ARP request and reply, the echo request and reply, and Windows Network Basic Input Output System (NetBIOS) User Datagram Protocol (UDP) packets. As you can see, we are
The Secure Sockets Layer standard was developed to encrypt web traffic, and it's been critical in enabling the explosion of commercial sites and e-commerce on the Web. Apache has the ability to encrypt web traffic with SSL, which, with slight modifications, is known as Transport Layer Security. You get this encryption when you access a site with the https prefix. Think of encrypted web traffic as a sealed envelope.
For example, let's say employees aren't allowed to use the eBay auction Web site on the job. A particular employee has been reprimanded for spending hours browsing eBay, and HR wants to monitor his behavior. The following rule logs all Web traffic that contains ebay.com coming from the host
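A Snort rule matching that description might look like the following; the workstation address shown is a made-up placeholder, since the original text does not give the host's address:

```
# Hypothetical Snort rule: log outbound web traffic containing
# "ebay.com" from the employee's workstation (10.1.1.50 is a
# placeholder address).
log tcp 10.1.1.50 any -> $EXTERNAL_NET $HTTP_PORTS (msg:"eBay web traffic"; content:"ebay.com"; nocase; sid:1000001; rev:1;)
```

The log action records matching packets without raising an alert, which suits a quiet monitoring task like this one; nocase makes the content match case-insensitive.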
Note that changing interface IDs doesn't solve the similar problem that small sites and home users have with regard to their network prefix. Changing that prefix to hide which site certain traffic originates from is an entirely different issue: either use anonymizing proxies for your traffic or have your ISP dynamically assign you a different prefix every time you connect to the Internet.
With literally millions of web pages on the Internet, and more springing up every day, it's getting harder to attract and impress online visitors. Having outdated information on your web page will not only frustrate your visitors, but it might even damage your company or organization. To avoid this problem, web pages must be dynamic.
It is possible for one web server to serve web pages for more than one site, even if the sites share the same IP address. This has been available since version 1.1 of the HTTP protocol (supported by all main browsers), which includes the domain name in the request as well as the IP address. In the home environment it's quite uncommon, but it is useful because it allows you to split the incoming web traffic
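In Apache this is done with name-based virtual hosts; a minimal sketch follows, with hypothetical site names and directory paths:

```apache
# Two sites sharing one IP address; Apache uses the HTTP/1.1
# Host: header to decide which one answers.
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example-one.com
    DocumentRoot /var/www/site1
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example-two.com
    DocumentRoot /var/www/site2
</VirtualHost>
```

A request whose Host: header matches neither ServerName is served by the first virtual host listed, so the first block doubles as the default site.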
Segmented networks introduce complexity to the process of handling broadcast-style services such as Address Resolution Protocol (ARP) or DHCP. For a growing network, however, segmenting is likely to be a necessity. By segmenting your traffic into two or more discrete networks, you can better keep traffic on each segment down to a controllable level. Furthermore, this arrangement helps you impose tighter security for inside systems. For instance, you can put Accounting and Human Resources onto another subnet to make it harder for engineering to put sniffers on the network and get to confidential information.
Any kind of network traffic not explicitly allowed by the filtering rule set is suppressed by iptables. Therefore, each of the interfaces with incoming traffic must be placed into one of the three zones. For each of the zones, define the services or protocols allowed. The rule set is only applied to packets originating from external hosts. Locally generated packets are not captured by the firewall.
Next, it is time to do what you want to do on your server allow incoming SSH and HTTP traffic and permit replies to the allowed incoming traffic to get out. Note that these two requirements consist of two parts a part that is configured in the INPUT chain and a part that is configured in the OUTPUT chain. Let's start with some nice rules that define the INPUT chain
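Those INPUT rules might be sketched as follows; the default-deny policies and the use of the state match are assumptions layered on top of the requirements above:

```shell
# Default-deny in both directions.
iptables -P INPUT DROP
iptables -P OUTPUT DROP

# INPUT: accept new SSH and HTTP connections, plus any traffic
# belonging to connections already accepted.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT

# OUTPUT: permit replies to the allowed incoming traffic.
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```

Splitting the policy this way mirrors the two-part requirement in the text: the INPUT chain admits the services, and the OUTPUT chain lets their replies leave.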
Although every ISP expects to see outgoing traffic on your Internet connection, not all of them expect incoming traffic (that is, traffic that someone initiates from outside your network). Although your ISP (which may be your local phone company, your cable TV company, or an independent ISP) may expect you to download files from remote servers, it may not expect people to download large files from you.
Just writing a couple of lines of HTML code is not enough for most web sites; dynamic content is what people want today. Well, web-site visitors don't usually want as much as web designers are eager to give them, but the designers are in control, so dynamic content is what we are going to talk about in this chapter.
Standard proxy server Basically, you can use Squid as a proxy cache. This serves two goals: it makes transmitting traffic to and from the Internet faster, and it adds security. In this role, Squid is a proxy that sits between a user and the Internet. The user sends all HTTP requests to the proxy and not directly to the Internet. The users' computers do not even need a direct connection to the Internet to generate web traffic. The proxy handles all the traffic for the user and fetches the required data from the Internet. Then the proxy caches this data locally. The next time another user needs the same data, it doesn't need to be fetched from the Internet but can be provided from the cache that is maintained by the proxy. Used in this way, the main advantage is that the Squid proxy increases speed for clients that need to get data from the Internet. This chapter describes how you can use Squid in this way.
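A minimal squid.conf for this standard proxy role might look like the following; the listening port, cache size, and client network range are assumptions for illustration:

```
# Listen on Squid's conventional proxy port.
http_port 3128

# On-disk cache: 1024 MB in the default directory layout.
cache_dir ufs /var/spool/squid 1024 16 256

# Allow only clients on the local network to use the proxy.
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all
```

The final deny rule matters: without it, Squid could be abused as an open proxy by anyone who can reach port 3128.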
Let's face it, DNS is not the most sexy component of the Internet's infrastructure. It is an old technology and doesn't get the same attention as newer, more flashy tools and software. Your Web site visitors may comment on how cool your new AJAX widget is, but I guarantee they will never tell the world how pleased they are with your DNS response time.