Testing Throughput

The best measure of throughput is a large continuous data transfer. Ideally, you want the sending system to generate data instantly and the receiving system to discard it, to avoid complications such as disk-access bottlenecks. In practice, such tests are hard to run unless you control both computers. For this reason, I describe a good but imperfect stand-in: using a File Transfer Protocol (FTP) exchange to measure throughput.

To perform an FTP test, you must have an FTP client installed on one system and an FTP server on another system. This second system might not be under your direct control; for instance, you might use this technique to test data transfer between your network and some remote site on the Internet. If you want to perform a local speed test, consult Chapter 24, which includes information on FTP server configuration. Your task will be greatly simplified if the FTP client reports data transfer speeds, as do most text-based FTP clients. (A few GUI clients omit this report, forcing you to time transfers with a stopwatch.)

Warning FTP programs report speeds in a variety of units. Megabytes per second (MB/s) and kilobytes per second (KB/s) are both common. Ethernet speeds are often stated in megabits per second (Mbps). If you miss the distinction between bits (b) and bytes (B), you can easily misinterpret your data transfer speeds by a factor of eight. Also, a few programs don't use the correct abbreviations for bits and bytes. If your speeds seem to be an eighth or a tenth of what you expect, it may be that you're misinterpreting (or the FTP client is misreporting) the units.
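As a quick sanity check on the units, divide a rated link speed in megabits per second by eight to get its ceiling in megabytes per second. The figures here are purely illustrative:

$ echo "scale=1; 100/8" | bc    # a 100Mbps link tops out near 12.5MB/s
12.5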

Connect from the FTP client to the server and locate a file to transfer. It's important that the test file be fairly large—at least a megabyte, and possibly larger if you're testing transfers on a fast network. Smaller files are likely to transfer so quickly that minor random variations in transfer times will translate into large differences in apparent throughput.
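If no suitably large file exists on the server and you have shell access to it, you can create one yourself; this is just a sketch, using the megfile.bin name from the following example and an arbitrary 16MB size (random data has the side benefit of defeating any on-the-fly compression):

$ dd if=/dev/urandom of=megfile.bin bs=1M count=16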

Once you've located a file of suitable size, try transferring it. If possible, direct the client to store the file as /dev/null. For instance, to transfer megfile.bin using the standard text-mode Linux ftp program, type get megfile.bin /dev/null in ftp. This practice will remove local disk-access delays from the equation. The first time you transfer a file, there may be delays due to disk accesses on the server system. You can't completely eliminate the possibility of server-side problems interfering with throughput estimates, but if you perform multiple transfers, chances are good that the second and subsequent transfers will be from an in-memory disk cache rather than from the disk itself. For this reason, you should disregard the first transfer speed and instead use the average of the next few transfers. In most cases, a total of just two or three estimates (not counting the initial one) should be sufficient to give you some idea of what the throughput is. If your estimates vary wildly, though (say, by a factor of 2 or more), you might want to try increasing the file size. It's also possible that varying network load or a flaky router is causing erratic performance. In addition to retrieving files, you should try sending files using put, as in put megfile.bin. This test will be possible only if you have write access on the server, of course.
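A session with the standard ftp client might therefore look something like this sketch; after each transfer, the client prints a summary line giving the byte count, elapsed time, and transfer rate, although the exact wording varies from one client to another:

ftp> get megfile.bin /dev/null
ftp> get megfile.bin /dev/null
ftp> get megfile.bin /dev/null
ftp> put megfile.bin

Discard the first get result and average the remaining runs.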

On a lightly loaded 100Mbps Ethernet LAN, you should be able to achieve speeds of at least 7MB/s. In tests on my own LAN, most transfers run at between 8MB/s and 11MB/s. Systems with weak CPUs, sluggish hard disks, or heavy local or network loads may produce lower values than this. If a test comes in substantially below this value (or an adjusted value if your network runs at a different speed), you may want to investigate further. For instance, try reversing the client and server roles, or try another FTP client or server; the problem could be due to a misconfigured or poorly performing program. You can also try using a protocol other than FTP.
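For example, if the server also happens to run SSH or a web server, scp and wget offer quick cross-checks, and both report transfer rates. The host and path names here are only placeholders, and keep in mind that scp's encryption overhead can itself depress throughput on a slow CPU:

$ scp server:/path/to/megfile.bin /dev/null
$ wget -O /dev/null http://server/megfile.bin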

Note Gigabit Ethernet is fast enough that its theoretical maximum transfer speed is faster than most hard disks in 2003. Thus, if you're using gigabit Ethernet, you must pay careful attention to your disk subsystem and perhaps implement a redundant array of independent disks (RAID) configuration. If you don't, you won't fully utilize the potential of gigabit Ethernet, at least not on individual transfers.
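As a rough check on whether the disk rather than the network is the limiting factor, you can time raw disk reads with hdparm (run as root); the device name is only an example and will differ on your system:

# hdparm -t /dev/hda

The buffered disk read figure this reports can be compared directly against your network's theoretical maximum.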

You might also want to check the output of ifconfig, which includes error counts:

$ /sbin/ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:05:02:A7:76:DA
          inet addr:192.168.1.2  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:196432 errors:2 dropped:0 overruns:0 frame:0
          TX packets:100753 errors:3709 dropped:0 overruns:3707 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:220874752 (210.6 MiB)  TX bytes:59365177 (56.6 MiB)
          Interrupt:42 Base address:0xd000

Two receive (RX) errors and 3,709 transmission (TX) errors appear in this example's output. Such errors may be the result of driver bugs, a flaky NIC, bad cables, or defective switches, among other things. The sixth line of output includes a count of collisions (0 in this example). Collisions are an inevitable occurrence when your network uses a hub, but a network with switches should have few or no collisions. If the number of collisions is more than a few percent of the total number of packets sent and received, chances are you have too many computers connected on a hub. Try replacing the hub with a switch, or break the network into two parts, with a router or at least a small switch between the parts.
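If you suspect a transfer is racking up errors or collisions, one simple way to watch these counters while a test runs is with watch; eth0 here is just the interface from the preceding example:

$ watch -n 5 '/sbin/ifconfig eth0 | grep -E "errors|collisions"'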

Internet throughput varies with a number of factors, including the speed of the server, the number of other connections to the server, the type of Internet connection used by the server, the load on the routers between you and the server, the nature of your own Internet connection, and the demand for Internet bandwidth from other users at your site. Assuming your LAN is involved in your Internet access, you won't exceed your LAN speeds for Internet throughput. In most cases, Internet transfer speeds are likely to be far lower than those on the LAN. Some sites may crawl along at 1KB/s or less, but others may exceed 1MB/s, assuming you have a fast enough Internet connection. In many cases, your own Internet connection will be a limiting factor. For instance, if you have an Asymmetric Digital Subscriber Line (ADSL) connection rated at 604Kbps, you probably won't get more than 60 to 70KB/s transfer rates from good sites.
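To translate a rated connection speed into a realistic transfer rate, divide by eight and then allow something like 10 to 20 percent for TCP/IP (and, on many DSL lines, ATM) overhead. A quick check with bc for the 604Kbps figure just mentioned:

$ echo "scale=1; 604/8" | bc
75.5

Knocking 10 to 20 percent off that 75.5KB/s ceiling is how the 60 to 70KB/s estimate arises.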

Note Some Internet connections, including ADSL and cable modems, yield asymmetric speeds: upstream speeds are much lower than downstream speeds. If you measure speeds in both directions, you may see a five- or tenfold difference in throughput.

One common problem is that some sites are much slower than others. Any of the Internet speed factors can be the cause. One issue can be distance. Accessing sites from the other side of the planet can be very sluggish because of the large number of routers between the sites. Each router is a chance for packets to be lost, which degrades overall performance. Large distances also increase latencies. Ordinarily, latencies aren't strongly correlated with throughput, but if latencies rise too high, the systems may pause their transmissions while they wait for acknowledgement of sent data, as described in the upcoming section, "Setting the Send and Receive Window Sizes."
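The underlying arithmetic is the bandwidth-delay product: to keep a link busy, the TCP window must be at least the link speed multiplied by the round-trip time. As a rough sketch, measure the round-trip time with ping (the host name is a placeholder) and plug in an illustrative 1.5Mbps link speed:

$ ping -c 4 ftp.example.com
$ echo "1500000 * 0.2 / 8" | bc
37500

With a 200ms (0.2-second) round trip, anything much less than about 37KB of window leaves such a link idle part of the time.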
