Normalizing network traffic

The decoding and normalization of certain types of network traffic is an important preprocessing task. Pattern-matching systems like Snort can fail when an attacker introduces subtle variations into the traffic. These variations are perfectly legitimate, and even expected, in most cases, but attackers can misuse them to evade detection.

There are many ways to encode a Web page address into a URL. Two encoded URLs can look completely different to the human eye yet be identical as far as a Web server is concerned. For example, the following two URLs refer to exactly the same resource, yet they look entirely different.

http://www.somewhere.tld/cgi-bin/form-mail.pl?execstuff

http://0/%63g%69%2d%62in/%66%6fr%6d%2d%6d%61%69l%2e%70l?%65%78%65%63%73%74uf%66

Of course, these different notations can easily trip up a Snort rule, which matches against an exact pattern. In the preceding example, suppose a rule matches the form-mail.pl string of bytes and generates a Snort alert. The second URL would sneak past the detection engine, though it's just as lethal as the first.
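To make this concrete, a rule along these lines would catch the first URL but miss the percent-encoded second one unless the traffic is normalized first. (The msg text and SID here are illustrative, not a rule from any shipped ruleset.)

```
alert tcp any any -> any 80 (msg:"form-mail.pl access"; content:"form-mail.pl"; nocase; sid:1000001; rev:1;)
```

The content option performs a literal byte match against the packet payload, so the bytes %66%6fr%6d... never match the string form-mail.pl.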

Snort's normalization preprocessors are an excellent way to close these open doors. The following sections cover three of these preprocessors in more depth: HTTP, telnet, and RPC (Remote Procedure Call).
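The core of what an HTTP normalizer does can be sketched with Python's standard library: percent-decode the obfuscated URL from the earlier example into the canonical form a pattern matcher should inspect. This is a minimal illustration, not Snort's actual implementation.

```python
from urllib.parse import unquote, urlsplit

# The percent-encoded URL from the example above.
obfuscated = ("http://0/%63g%69%2d%62in/%66%6fr%6d%2d%6d%61%69l"
              "%2e%70l?%65%78%65%63%73%74uf%66")

parts = urlsplit(obfuscated)

# Decode the percent-encoded path and query into their canonical
# bytes -- the form a signature engine should match against.
path = unquote(parts.path)
query = unquote(parts.query)

print(path)   # /cgi-bin/form-mail.pl
print(query)  # execstuff
```

After decoding, a naive substring match for form-mail.pl succeeds against both URLs, which is exactly the point of normalizing before the detection engine runs.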
