System Monitoring and Auditing

Monitoring your system for abnormal behavior is an essential task in both system administration and information security. Most attackers leave their "fingerprints" in the system log files, and examining these logs is a fundamental step in the process of network forensics. More importantly, examining log files on a regular basis, looking for erratic or suspicious user behavior, can prevent attacks and enhance the overall security of your server.

There are attackers who may be able to penetrate your system without leaving any evidence in the log files. However, even the best attackers will not be able to delete or modify a history file without being noticed. Examining log files is just one critical part of your overall security plan.

You should be looking at the system log files for

■ Repeated authentication failures

■ Unusual shutdowns

■ Attempts to relay messages through your mail server

■ Any events that seem out of the ordinary

A single authentication failure for a single user can be attributed to a mistyped password. Multiple authentication failures for a single user can be attributed to an acute case of "fat-finger" syndrome. Multiple authentication failures for multiple users, however, are a clear sign of an attack in the making, or a sign that one of your internal users may be trying to guess the root password in an effort to gain greater access to your server.
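To make this sort of triage concrete, here is a small sketch that tallies failures per user with standard text tools. The sample entries are made up, and the "FAILED LOGIN ... FOR user" format is an assumption modeled on the messages Red Hat's login writes to /var/log/secure:

```shell
# Count authentication failures per user. The sample file below is a
# stand-in for /var/log/secure; the entry format is an assumption based
# on Red Hat's login messages.
cd "$(mktemp -d)"
cat > sample-secure <<'EOF'
Oct  1 17:02:11 redhat login: FAILED LOGIN 1 FROM console FOR ramon, Authentication failure
Oct  1 17:02:19 redhat login: FAILED LOGIN 2 FROM console FOR ramon, Authentication failure
Oct  1 17:03:40 redhat login: FAILED LOGIN 1 FROM console FOR root, Authentication failure
EOF

# Extract the user name after "FOR" and tally the failures per user.
grep -i 'authentication failure' sample-secure |
    sed 's/.*FOR \([^,]*\),.*/\1/' |
    sort | uniq -c | sort -rn
```

A sorted tally like this makes the dangerous pattern, multiple failures across multiple users, easy to spot at a glance.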

Through the use of the centralized syslog facility, Linux offers a wealth of real-time diagnostics that are often not used to their full extent, not because they don't offer the right kind of information, but because most system administrators don't know exactly how to use these diagnostics. The problem is that the standard Linux syslog files are too verbose to be of any value. A typical /var/log/messages file on a standard Red Hat or Caldera distribution can grow by 500 to 600KB in a typical day on an average-load server.

Human inspection of log files has two major shortcomings:

■ The task is tedious and expensive because it has to be performed by a (highly paid) system administrator who is trained to recognize the trouble signs.

■ Humans are prone to error, especially when scanning large text files full of repetitive sequences.

Luckily, the Linux community has responded to the challenge and has developed a number of useful log-monitoring tools that can be run batch style to identify certain keywords or patterns that are known to spell trouble. The section "System Log Monitoring" later in this chapter illustrates the concept of log monitoring using two such tools: swatch and logcheck.

System Logging with syslog

The syslog utility allows the system administrator to configure and manage log files generated by heterogeneous system utilities and applications from a central location. Linux comes standard with the syslog utility configured and the syslogd daemon active upon startup. The system administrator specifies in the /etc/syslog.conf file which messages should be logged, and the syslogd daemon directs each message to the appropriate log file. Most generic messages are directed to the /var/log/messages file, although the system administrator can choose any arbitrary file as the destination. In general, Linux uses the following initial log files:

/var/log/messages This is the primary Linux log file, used for recording most common system events.

/var/log/secure Authentication failures for daemons started from /etc/inetd.conf are logged here. (Chapter 4, "Network Services Configuration," discusses the inetd utility.)

/var/log/maillog The sendmail daemon is notoriously verbose (and often very busy), so it makes sense to dedicate this separate log file to keep track of mail-delivery messages.

/var/log/spooler This is the log file where other daemons such as UUCP and News (NNTP) log their messages.

/var/log/boot.log The boot-up messages displayed when the system is coming up are typically logged in this file.

These five log files are automatically rotated by the system via the /etc/logrotate.d/syslog script. Rotating essentially means renaming the current log file to something like logfile.1 while creating a new (empty) file called logfile to record only those events that took place since the last rotation. At the next rotation, logfile.1 is renamed to logfile.2, and so on; the oldest file (logfile.10, if you're only keeping 10 rotations) is deleted.
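The renaming scheme can be sketched in a few lines of shell. This is a toy illustration with throwaway files in a temporary directory; on a real system the shifting is driven by logrotate's rotate count and runs from cron:

```shell
# Toy illustration of log rotation: shift logfile.N to logfile.N+1,
# drop the oldest copy, and start a fresh empty logfile.
cd "$(mktemp -d)"
echo "current events"  > logfile
echo "previous events" > logfile.1

keep=10
rm -f "logfile.$keep"            # the oldest rotation falls off the end
i=$((keep - 1))
while [ "$i" -ge 1 ]; do
    if [ -f "logfile.$i" ]; then
        mv "logfile.$i" "logfile.$((i + 1))"
    fi
    i=$((i - 1))
done
mv logfile logfile.1             # the current log becomes logfile.1
: > logfile                      # new, empty log for events after the rotation
```

After one pass, "current events" lives in logfile.1, "previous events" has moved to logfile.2, and logfile is empty, ready for new entries.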

syslog.conf File

Listing 3.1 shows a typical syslog.conf file on a Linux system. It includes references to the five log files described earlier in this section.

Listing 3.1 A typical Linux /etc/syslog.conf file

# Log all kernel messages to the console.

# Logging much else clutters up the screen.

kern.* /dev/console

# Log anything (except mail) of level info or higher.

# Don't log private authentication messages!

*.info;mail.none;authpriv.none /var/log/messages

# The authpriv file has restricted access.

authpriv.* /var/log/secure

# Log all the mail messages in one place.

mail.* /var/log/maillog

# Everybody gets emergency messages, plus log them on another

# machine.

*.emerg @loghost


# Save mail and news errors of level err and higher in a

# special file.

uucp,news.crit /var/log/spooler

# Save boot messages also to boot.log

local7.* /var/log/boot.log

The general format of the entries in the syslog.conf file is as follows:

facility.priority[;facility.priority][TAB]action

Each field is defined as follows:

facility The facility field is the subsystem that you want to log events from. The current supported subsystems are

■ auth: secure and authentication messages

■ authpriv: private secure and authentication messages

■ cron: clock/scheduler daemon

■ daemon: other system daemons

■ kern: Linux kernel messages

■ lpr: line printer system messages

■ mail: mail subsystem messages

■ news: network news subsystem

■ syslog: internal syslog messages

■ user: generic user-level messages

■ uucp: Unix-to-Unix copy subsystem

■ local0-local7: reserved for local use

The special character * can be used to denote any (and all) facilities.

priority The priority field specifies the priority level of the message within a given subsystem. The current priorities are (in ascending order of criticality)

■ debug: debug-level messages for troubleshooting

■ info: informational messages

■ notice: normal messages of special interest

■ warning: abnormal messages signaling impending failure

■ err (error): failure condition messages

■ crit: critical error-condition messages

■ alert: condition needing immediate action

■ emerg: error condition leading to system failure

The special character * can be used to denote any (and all) priorities.

destination The destination field defines the destination for log messages that match the given facility and priority combination. This can be either a filename, or a host name when preceded by the @ sign. The * character takes on a special meaning here, denoting all users; in other words, all users who are currently logged on will receive a copy of the message on their active terminals. Needless to say, you should only use this for absolutely critical messages that might impact the users' ability to continue to work on the system.

action The action field can be one of the following:

■ A regular file. In this case, all messages will be appended to the end of the file. The file must be specified with a full path name, starting with the / character.

■ A named pipe. In this case, the messages are used as the standard input to the pipe.

■ A terminal or console. In this case, the messages are displayed over the given device.

■ A remote machine. In this case, the messages are sent to a remote host's syslogd daemon.

■ A list of users. In this case, the messages are broadcast to the terminals owned by any active users. The special character * is used to send messages to all users that are currently logged on.
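To illustrate how a selector and an action combine, consider a hypothetical pair of entries (the file name /var/log/daemon.log and the loghost alias are invented for illustration; they are not part of Listing 3.1). Remember that the separator shown as [TAB] earlier must traditionally be a tab character:

```
# Append daemon notices (and higher) to a dedicated local file:
daemon.notice		/var/log/daemon.log
# Copy kernel alerts to a remote log server and to all logged-in users:
kern.alert		@loghost
kern.alert		*
```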

Applying these facility, priority, destination, and action values, you can see that the syslog.conf file in Listing 3.1 is instructing the syslogd daemon to:

■ Log all kernel messages to the console.

■ Log authpriv and mail messages to secure and maillog respectively.

■ Log emergency messages to all the users' terminals and to the remote system loghost.

■ Log uucp and news messages (critical and above) to spooler.

■ Log local7 (boot) messages to boot.log.

■ Log the remaining messages (info and above) to messages.

Some messages are critical enough to the system that they should be stored outside the system itself. This is necessary to make sure that you can continue to log events, even when the local disk is full or is otherwise unavailable. Keeping a set of log files on a separate system also has clear security advantages, since a sophisticated attacker will most likely delete or otherwise tamper with the log files on the attacked system. To completely erase his trail, an attacker would have to repeat all the steps used to penetrate the first server in order to compromise the log server as well.

The next section describes the operation of remote syslog and discusses the security implications of running this service on the remote server.

syslog Server Security

The Linux model is one where services can easily transcend physical servers, and the syslog facility is no exception. When the action @loghost is specified for emergency messages in Listing 3.1, the implication is that there is a system named (or aliased to) loghost that is actively listening on UDP port 514 for incoming syslog messages.

While this facility makes for a more robust logging mechanism, it also introduces a known vulnerability: attackers can target UDP port 514, flooding the service with bogus messages to fill up the disks and make the server unavailable. This can then be followed by a stealth attack on the servers that this loghost was logging for in the first place.

If you deploy remote syslogd servers, I recommend that you take at least one of the following precautions:

■ Protect access to port 514 UDP by ensuring that only authorized servers are allowed to write to that service (see Chapter 9, "Network Layer Firewalls").

■ Deploy your remote syslogd server on a separate, dedicated network segment that is exclusive for this use (e.g., a 10.0.0.0/24 management network).

■ Use a dedicated partition to write remote syslogd messages. This partition could fill up completely without having any other adverse effect on the system. (Note, however, that when the log files fill up, the system is really no longer useful.)

TIP In order to allow your remote syslogd server to accept messages from other machines, don't forget to use the -r flag when invoking syslogd.
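On Red Hat-style distributions, a convenient place to add this flag is the /etc/sysconfig/syslog file, which the syslog init script reads at startup. This is a sketch; the file name and variable are Red Hat conventions and may differ on other distributions:

```
# /etc/sysconfig/syslog
# -r : accept remote messages on UDP port 514
SYSLOGD_OPTIONS="-r -m 0"
```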

System Log Monitoring

The basic idea behind log file monitoring is to define a set of system log files of interest, and a set of triggers, or regular expressions, to monitor for. When the tool finds a trigger in the appropriate log file, it executes a specified action (sends an e-mail, executes a command, pages the administrator, etc.). Two full-featured system log monitoring tools are swatch and logcheck.

swatch

The Simple WATCHer (swatch) is a monitoring tool developed by Todd Atkins of Stanford University. It is composed of a large Perl script that does all the monitoring and a configuration file where you can specify your triggers and a set of actions for each trigger. The following sections describe the installation, configuration, and use of swatch.

Installing swatch

While swatch is readily available in RPM format, be aware that most Linux distributions lack three of the Perl packages required by swatch:

■ perl-File-Tail: a Perl module for reading from continuously updated files

■ perl-Time-HiRes: a Perl module for handling high-resolution time variables

■ perl-Date-Calc: a Perl module for Gregorian calendar date calculations

These modules are part of the CPAN Perl archive that is also available in RPM format. Once you have downloaded these three Perl packages, as well as the latest swatch RPM, then install all four packages in the order shown in Listing 3.2.

Listing 3.2 The swatch installation process

[ramon]$ ls -l *.rpm
-rw-r--r--   1 ramon users 55922 Sep 30 22:49 perl-Date-Calc-4.2-2.i386.rpm
-rw-r--r--   1 ramon users 12280 Sep 30 22:49 perl-File-Tail-0.91-2.i386.rpm
-rw-r--r--   1 ramon users 12500 Sep 30 22:49 perl-Time-HiRes-01.20-2.i386.rpm
-rw-r--r--   1 ramon users 28279 Sep 30 22:45 swatch-3.0b4-1.noarch.rpm
[ramon]$ sudo rpm -i perl-Date-Calc-4.2-2.i386.rpm
[ramon]$ sudo rpm -i perl-File-Tail-0.91-2.i386.rpm
[ramon]$ sudo rpm -i perl-Time-HiRes-01.20-2.i386.rpm
[ramon]$ sudo rpm -i swatch-3.0b4-1.noarch.rpm

If there are no errors during these installation steps, you are now ready to configure and use the swatch utility.

Configuring swatch

There is only one swatch configuration file, typically called .swatchrc or swatchrc. This file contains a definition of the triggers that you'd like to monitor, as well as the appropriate action that you would like taken in the presence of each of the triggers. The file should have one keyword per line, with an optional equal sign (=) and an optional value for the keyword.


The following section defines the keywords used in the swatchrc configuration file.

Pattern-Matching There are two options available in swatchrc for specifying triggers, or patterns, to watch out for:

watchfor regex Take the appropriate action when the regular expression specified in regex is found within the file or command being monitored.

ignore regex Take the appropriate action when there is any activity within the file or command being monitored, except for events that match the expression specified in regex.

Pattern-Matching Action Once the pattern has been identified, you must specify the action to be taken. Here are the options available:

echo Write the event that matches the pattern being monitored.

bell Ring a bell by printing the appropriate ASCII character. This action is only appropriate if you're directing the output of swatch to stdout.

exec command Execute the specified command as if it were typed in at the command line. You can pass positional variables from the matched line to the command: $1 is the first field in the line, $2 is the second field, and so on. $* is the entire line.

mail[=address:address:...][,subject=your_subject] Send an electronic mail message to the address you specify (address), with an optional subject header (your_subject). Note that if the recipient is omitted, the message is sent to the user who owns the swatch process on the local server.

pipe command Pipe the matched lines from the monitored file as input to the specified command.

throttle hours:minutes:seconds This option is useful for patterns that appear repetitively. Rather than taking action on each appearance of the trigger, swatch can signal the event only periodically. However, you will get a report of how many times the event occurred during that time.

when day:hour This option is used to restrict the use of an action to a day of the week and an hour of the day. It's useful to configure swatch to page the system administration staff in some instances or to simply e-mail them in other instances, depending on their availability.

swatch Configuration File Examples

The following is a sample swatch configuration file entry that alerts the system administrator to any user authentication failures. It instructs swatch to look for the expression "authentication failure" and specifies an e-mail message as the action to take. The e-mail message will have the subject "Auth Failure Report." The echo directive instructs swatch to include the offending message in the e-mail, and the throttle directive ensures that you are only alerted once every 30 minutes, regardless of how often the message appears within that 30-minute period.

watchfor /authentication failure/
        echo
        bell
        throttle 00:30
        mail=[email protected],subject=Auth Failure Report

Now suppose you'd like to be notified whenever one of the filesystems has reached capacity and, if it's after hours, the staff should also be paged. The following configuration file entry looks for occurrences of the string "filesystem full" and sends two separate e-mail messages, one to [email protected], and another to sysadmin-[email protected], the latter only during the hours of 5 P.M. to 9 A.M.

watchfor /filesystem full/
        echo
        mail=[email protected],subject=Filesystem Full
        mail=[email protected],when=1-7:17-9

Finally, consider the following example of a swatch configuration file entry that triggers corrective action at the same time that the administration staff is informed of the condition. This example is similar to the previous two, except that it invokes the execution of a script (cleanup_old_files) whenever the string "filesystem full" is encountered.

watchfor /filesystem full/
        echo
        mail=[email protected],subject=Cleaning Files
        exec "cleanup_old_files"

Running swatch

The swatch script accepts several command-line options. Table 3.1 describes the ones that you need to be most interested in.

Table 3.1 swatch Command-Line Options

Option                      Description                                            Default
--config-file=filename      Location of the configuration file.                    $HOME/.swatchrc
--help                      Display a short help summary.                          N/A
--version                   Display the swatch script version.                     N/A
--tail-file=filename        Examine lines of text as they're added to the file.    See note
--read-pipe=command         Examine input piped in from the specified command.     See note
--examine=filename          Examine the specified file in a single pass.           See note

Note: Only one of --tail-file, --read-pipe, or --examine can be specified at any one time. The default is --tail-file=/var/log/messages.

I recommend that you always include the --config-file option, and that you place the configuration file in an obvious place, like /etc/swatchrc, where it's easier to track and manage. When no command-line options are specified, the default swatch command is

swatch --config-file=~/.swatchrc --tail-file=/var/log/messages

More typical invocations of the swatch command would be

swatch --config-file=/etc/swatchrc.messages --tail-file=/var/log/messages
swatch --config-file=/etc/swatchrc.htmlaccess.log --tail-file=/var/log/htmlaccess.log
swatch --config-file=/etc/swatchrc.authlog --tail-file=/var/log/authlog

Note that these commands specify separate configuration files for each system log file to be monitored. I recommend that you invoke each of these swatch commands (or whichever commands you find useful) on a separate virtual terminal (or xterm in the system console) and review the output periodically. As always, the severity of the alert (echo, mail, page, etc.) should be in accordance with the type of event and its recommended handling as stated in your security policy.

logcheck

Maintained by Craig Rowland of Psionic, logcheck is an adaptation of frequentcheck.sh, a log-monitoring package that once accompanied the Gauntlet firewall from Trusted Information Systems, although some of the most important components have been totally rewritten. Unlike swatch, the logcheck design is such that you don't have to have a constantly running process scrutinizing your log files, which should cut down on overhead on the server. In addition, logcheck can alert you of unusual events, even if you have not defined them as triggers to look for.

The logcheck package contains two executables: logtail and logcheck.sh. logtail keeps track of how much of the log file was monitored last time, and it is written in C for performance reasons. logcheck.sh controls all processing and inspects the contents of the log files. It is meant to be invoked from cron and should be configured to run at least hourly.
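The offset-tracking idea behind logtail can be sketched in shell. This is a simplified stand-in; the real logtail is written in C and also notices when the file has been rotated out from under it:

```shell
# Remember how many bytes of the log we processed last run, and print
# only the lines added since then on the next run.
cd "$(mktemp -d)"
logfile=demo.log
offset_file=demo.offset

printf 'line1\nline2\n' > "$logfile"
echo 0 > "$offset_file"

read_new() {
    offset=$(cat "$offset_file")
    size=$(wc -c < "$logfile")
    tail -c "+$((offset + 1))" "$logfile"   # everything after the old offset
    echo "$size" > "$offset_file"           # remember the new end of file
}

read_new                          # first run: prints line1 and line2
printf 'line3\n' >> "$logfile"
read_new                          # second run: prints only line3
```

Because only the bytes past the saved offset are reread, an hourly cron run inspects just the last hour's worth of messages, no matter how large the log has grown.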

Installing logcheck

Fortunately, the RPM version of the logcheck package is ubiquitous, and I recommend that you use the following installation method, since there is no need for special configuration of the sources. Once you have obtained the .rpm file, simply install it using the rpm utility:

-rw-r--r--   1 ramon users 33707 Sep 28 20:04 logcheck-1.1.1-1.i386.rpm
[ramon]$ sudo rpm -i logcheck-1.1.1-1.i386.rpm


TIP Don't forget to erase any previous or outdated versions of logcheck before installing the new package.

Configuring logcheck

Upon installation, logcheck writes a number of reference files in the /etc/logcheck directory, which it uses for default pattern matching. The most interesting of these files is logcheck.hacking (shown in Listing 3.3), in which the author has placed a number of regular expressions that are often associated with documented attacks on Linux servers, such as

■ sendmail commands that are used to obtain more information about legitimate mail users (EXPN, VRFY), as well as login failures from accounts that enjoy system privileges (root) or that should not be attempting interactive logins (uucp, bin, sync)

■ The pattern login.*: .*LOGIN FAILURE.* FROM .*root, which signals a root login attempt with the wrong password

Listing 3.3 The standard /etc/logcheck/logcheck.hacking reference file

"wiz"
"WIZ"
"debug"
"DEBUG"
ATTACK
nested
VRFY bbs
VRFY decode
VRFY uudecode
VRFY lp
VRFY demo
VRFY guest
VRFY root
VRFY uucp
VRFY oracle
VRFY sybase
VRFY games
vrfy bbs
vrfy decode
vrfy uudecode
vrfy lp
vrfy demo
vrfy guest
vrfy root
vrfy uucp
vrfy oracle
vrfy sybase
vrfy games
expn decode
expn uudecode
expn wheel
expn root
EXPN decode
EXPN uudecode
EXPN wheel
EXPN root
LOGIN root REFUSED
rlogind.*: Connection from .* on illegal port
rshd.*: Connection from .* on illegal port
sendmail.*: user .* attempted to run daemon
uucico.*: refused connect from .*
tftpd.*: refused connect from .*
login.*: .*LOGIN FAILURE.* FROM .*root
login.*: .*LOGIN FAILURE.* FROM .*guest
login.*: .*LOGIN FAILURE.* FROM .*bin
login.*: .*LOGIN FAILURE.* FROM .*uucp
login.*: .*LOGIN FAILURE.* FROM .*adm
login.*: .*LOGIN FAILURE.* FROM .*bbs
login.*: .*LOGIN FAILURE.* FROM .*games
login.*: .*LOGIN FAILURE.* FROM .*sync
login.*: .*LOGIN FAILURE.* FROM .*oracle
login.*: .*LOGIN FAILURE.* FROM .*sybase
kernel: Oversized packet received from
attackalert

In addition, the /etc/logcheck directory includes a file named logcheck.violations that contains patterns that, although they should be flagged as suspicious, don't quite constitute evidence of an attack in progress. For example, a line such as

RETR passwd

indicates that someone tried to retrieve the password file via FTP. While this is not an illegal action per se, you should at least question the motive of the user who performed this transfer.

The logcheck.violations file also contains references to failed login attempts and other system access diagnostics, such as file transfers and kernel warnings, that are less critical than the logcheck.hacking file but still warrant further investigation.
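At its core, this pattern matching is what grep -f does with a file of keywords. The following minimal sketch uses a made-up two-line pattern file and two invented log lines:

```shell
# Flag log lines that contain any pattern from a violations-style file.
cd "$(mktemp -d)"
printf 'RETR passwd\nLOGIN FAILURE\n' > violations.patterns
cat > messages.sample <<'EOF'
Oct  1 17:28:14 redhat ftpd[1042]: RETR passwd
Oct  1 17:30:02 redhat ftpd[1042]: RETR index.html
EOF

# Every line matching a pattern is a candidate for the periodic report.
grep -f violations.patterns messages.sample
```

Only the RETR passwd line is flagged; the innocuous RETR index.html transfer passes through unreported.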

Listing 3.4 shows the contents of the logcheck.violations file.

Listing 3.4 The standard /etc/logcheck/logcheck.violations reference file

-ERR Password
ATTACK
CWD etc
DEBUG
EXPN
FAILURE
ILLEGAL
LOGIN FAILURE
LOGIN REFUSED
PERMITTED
REFUSED
RETR group
RETR passwd
RETR pwd.db
ROOT LOGIN
SITE EXEC
VRFY
"WIZ"
admin
alias database
debug
denied
deny
deny host
expn
failed
illegal
kernel: Oversized packet received from
nested
permitted
reject
rexec
rshd
securityalert
setsender
shutdown
smrsh
su root
su:
sucked
unapproved
vrfy
attackalert

You should update the logcheck.hacking and the logcheck.violations files regularly with newly discovered log message patterns, or, better yet, make sure that you always have an up-to-date logcheck installation on all your servers. In addition, take the time to add items to these files that are specific to your server and your system needs. For example, add message patterns from Apache if you're running an HTTP server, or from sendmail if you're running a mail server.


Note that by default, the Linux version of logcheck scans the /var/log/messages file for all events, so make sure that this file is indeed seeing all application log messages. Your /etc/syslog.conf file should have the following line in it:

*.info /var/log/messages

While there are no real configuration files, the actual logcheck shell script (/usr/sbin/logcheck) is very readable, and it has a few variable definitions toward the front. I recommend that you only change the line

SYSADMIN=root

to a username other than root, if you have a special account to which the logs should be e-mailed instead.

Running logcheck

logcheck uses cron to schedule periodic runs. Make sure that your system is indeed running crond (the cron daemon) by issuing the following command:

[ramon]$ ps aux | grep cron
root       403  0.0  0.4  1284  304 ?  S  Sep24  0:00 crond

Upon installation, the logcheck RPM distribution creates the following file in the /etc directory:

/etc/cron.hourly/logcheck

This file forces the cron job on your server to execute logcheck at the top of the hour, resulting in hourly e-mail messages to the root account on the server. This e-mail message includes a report on the security violations and any unusual system events recorded during the last hour.
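If a system lacks the /etc/cron.hourly convention, an equivalent entry in root's crontab (assuming the /usr/sbin/logcheck path mentioned earlier) achieves the same hourly schedule:

```
# min hour day month weekday  command
0    *    *   *     *         /usr/sbin/logcheck
```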

Listing 3.5 shows an example of this hourly report. In this report, logcheck reports that there was a SOCKS5 client that failed to properly authenticate to the local server, as well as a regular user (ramon) who used sudo to inspect the /var/log/messages file.

Listing 3.5 Sample e-mail sent by logcheck to the local root user

From: root <[email protected]>
Message-Id: <[email protected]>
To: [email protected]
Subject: redhat.example.com 10/01/00:18.01 system check
Status: RO
Content-Length: 222
Lines: 8

Security Violations
Oct 1 17:28:14 redhat Socks5[16296]: Auth Failed: (204.168.33.2:1196)

Unusual System Events
Oct 1 17:28:14 redhat Socks5[16296]: Auth Failed: (204.168.33.2:1196)
Oct 1 18:17:15 redhat sudo: ramon : TTY=pts/3 ; PWD=/usr/doc/logcheck-1.1.1 ; USER=root ; COMMAND=/usr/bin/tail /var/log/messages

swatch vs. logcheck

Both swatch and logcheck have a place in your system defense. While swatch makes for a better real-time log notification tool, logcheck provides a straightforward, easy-to-install tool that you can use to create custom reports that can be periodically examined by the administration staff.

Whichever tool you choose to implement, it's crucial to choose a recipient who will have enough time and dedicate enough energy to read the reports thoroughly. These tools are meant to help in one of the most tedious aspects of systems security, but ultimately, there is no substitute for human scrutiny and appropriate response.

File Integrity Auditing

Ensuring the integrity of system executables, configuration files, and log files is paramount to a successful security defense. Consider the case where an intruder manages to replace a commonly executed file (like login) with a Trojan horse version. If the attacker manages to create a Trojan horse login that performs just like the original, this attack could go undetected forever. Since you can't easily "look" inside executables, the only way to flag such a compromise is to compare the current signature of the login utility to the signature taken when the system was first installed.

These signatures are cryptographic hashes whose properties make it computationally infeasible for two different files to yield the same hash value. Therefore, the slightest modification to one of these files causes the signature to be drastically different. This is the fundamental principle behind most file integrity assurance tools. By taking a signature snapshot of all the executable files before the system goes on the network, you create a baseline that can be stored in a database, preferably on a write-once, read-many-times medium (to ensure that it cannot be tampered with). You can then periodically compare these signatures to the current state of the files and deal with any changes appropriately.
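The snapshot-and-compare cycle can be sketched with an ordinary checksum tool. This is a toy stand-in for tripwire, using sha1sum on a throwaway file rather than a real system binary:

```shell
# Take a baseline signature of a file, tamper with it, and detect the change.
cd "$(mktemp -d)"
echo "pretend this is /bin/login" > login

sha1sum login > baseline.sha1     # snapshot taken while the system is clean
echo "trojan horse" >> login      # simulate a tampered executable

if sha1sum -c baseline.sha1 >/dev/null 2>&1; then
    echo "login: unchanged"
else
    echo "login: MODIFIED"        # the slightest change alters the hash
fi
```

tripwire automates exactly this loop across thousands of files, while also signing its own database so the baseline itself cannot be silently altered.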

The following section introduces tripwire, the most popular Linux file integrity assurance utility, and explains its installation, configuration, and use as part of a comprehensive system security strategy.

tripwire

Born in 1992 under the auspices of Dr. Gene Spafford at Purdue University, tripwire was the first tool to be offered in the field of file integrity assurance. In 1999, the maintenance and development of the tool was taken over by Tripwire, Inc., a commercial endeavor spearheaded by Gene Kim, one of tripwire's original developers, while working for Dr. Spafford back at Purdue.

Installing tripwire

Although they offer a fully supported line of commercial products based on the tripwire concept, Tripwire, Inc. makes the Linux version of their tool available for free download. You can obtain a gzipped file containing the distribution at www.tripwire.com/downloads. Once you have downloaded the archive file, decompress it and extract the contents of the resulting .tar file to the current directory using the following commands:

[ramon]$ gunzip Tripwire_221_for_Linux_x86.tar.gz
[ramon]$ tar xf Tripwire_221_for_Linux_x86.tar
[ramon]$ rm Tripwire_221_for_Linux_x86.tar
[ramon]$ ls -l
total 8814
-r--r--r--   1 ramon    users     9825 Jan 11  2000 License.txt
-r--r--r--   1 ramon    users     7060 Jan 11  2000 README
-r--r--r--   1 ramon    users    23065 Jan 11  2000 Release_Notes
-r--r--r--   1 ramon    users     3300 Jan 11  2000 install.cfg
-r-xr-xr-x   1 ramon    users    31919 Jan 11  2000 install.sh
drwxr-xr-x   2 ramon    users     1024 Jan 11  2000 pkg

Before proceeding with the installation, take a look at the install.cfg file, which contains a number of environment variables that control the installation process. Table 3.2 shows the options that you should examine, along with their default settings. I recommend that you change the value of TWROOT to the location where you normally install system tools (for example, /usr/local/tripwire) because most other environment variables build on this base directory.

NOTE Despite my suggestion, I use /usr/TSS in these examples to make the text more compatible with other tripwire documentation.

Table 3.2 tripwire install.cfg Environment Variables

Variable         Description                                                             Default
TWROOT           The root directory                                                      /usr/TSS
TWBIN            Location of the program executables and configuration files             ${TWROOT}/bin
TWPOLICY         Location of policy files                                                ${TWROOT}/policy
TWMAN            Location of the man pages                                               ${TWROOT}/man
TWDB             Location of the databases                                               ${TWROOT}/db
TWSITEKEYDIR     Location of the key used to secure the configuration and policy files   ${TWROOT}/key
TWLOCALKEYDIR    Location of the key used to secure database files and reports           ${TWROOT}/key
TWREPORT         Location of the results of integrity checks                             ${TWROOT}/report

Next, simply execute the supplied install.sh script as the root user:

[ramon]$ sudo ./install.sh

During the installation process, you are prompted to enter a passphrase to protect the confidentiality and integrity of the configuration and policy files. You are also asked to enter the name of the database that will eventually contain the signatures to the system files that you choose to monitor. Choose a good passphrase composed of at least 16 characters. (tripwire accepts passphrases of up to 1023 characters!)

NOTE If you choose to accept the installation directory default of /usr/TSS, you should at least include /usr/TSS/bin in your PATH environment variable if you want to execute tripwire commands without supplying a fully qualified file path. Do not include this path in root's PATH variable, however. You may also want to include /usr/TSS/man in your MANPATH environment variable so you can display the supplied man pages.

Configuring tripwire

Once the package has been installed in the appropriate directory, you must create a configuration file, as well as a default policy, that will also be stored in a text file. After these two files are created, use the twadmin utility to encode and sign both files to ensure that their contents are not modified. The next step is to initialize the signature database, which allows you to run your first integrity check.

The following sections explain in more detail the process of creating a tripwire configuration file and a policy file.

The tripwire Configuration File The tripwire configuration file is typically found at the following location:

${TWROOT}/bin/twcfg.txt

The purpose of this configuration file is to control the location of all the other files in the tripwire distribution after it has been installed. Listing 3.6 contains the default contents of the file.

Listing 3.6 Initial contents of the tripwire configuration file (twcfg.txt)

ROOT =/usr/TSS

POLFILE =/usr/TSS/policy/tw.pol

DBFILE =/usr/TSS/db/$(HOSTNAME).twd

REPORTFILE =/usr/TSS/report/$(HOSTNAME)-$(DATE).twr

SITEKEYFILE =/usr/TSS/key/site.key

LOCALKEYFILE =/usr/TSS/key/redhat.example.com-local.key

EDITOR =/bin/vi

LATEPROMPTING =false

LOOSEDIRECTORYCHECKING =false

MAILNOVIOLATIONS =true

EMAILREPORTLEVEL =3

REPORTLEVEL =3

MAILMETHOD =SENDMAIL

SYSLOGREPORTING =false

MAILPROGRAM =/usr/lib/sendmail -oi -t

PART 1

If you specified a custom value for TWROOT before the installation, the paths in this file should reflect that fact. Note that the first six variables in the twcfg.txt file are needed for tripwire operation, while the rest are optional. Table 3.3 contains a description of the environment variables in this file that are different from the ones in the installation configuration (see Table 3.2 for those). Unless you have a very specific need to do so, I recommend that you do not change the default contents of this configuration file.

Table 3.3 tripwire twcfg.txt Environment Variables

Variable                 Description
EDITOR                   Editor used for interactive reports.
LATEPROMPTING            Delay prompting for the passphrase to minimize its exposure.
LOOSEDIRECTORYCHECKING   Don't check directories for properties that are likely to change often.
MAILNOVIOLATIONS         Send an e-mail report even if no violations occurred.
EMAILREPORTLEVEL         Default verbosity level of the e-mail report (0 through 4).
REPORTLEVEL              Default verbosity level of the printed report (0 through 4).
MAILMETHOD               Choice of mail transport (SMTP or SENDMAIL).
SYSLOGREPORTING          Send user.notice reports through syslog.
MAILPROGRAM              Program invoked for mailing tripwire violation reports.

The next section describes the syntax and maintenance of the policy file and the last steps needed to get tripwire up and running.

The tripwire Policy File Now that you have defined the operational parameters of tripwire, you're ready to tell it which files to watch. This is done in the policy file, which is located by default in /usr/TSS/policy/twpol.txt.

A typical tripwire policy file has three distinct sections:

■ Global definitions of variables whose scope includes the entire policy file

■ File severity levels that allow you to prioritize the execution of policies

■ Rules or group of properties to be checked for each file or object

Consider the sample (partial) policy of the tripwire policy file (twpol.txt) in the examples in this section. This example contains the global variables that determine the location of the tripwire root directory (TWROOT), binary directory (TWBIN), policy (TWPOL), database (TWDB), key files (TWSKEY, TWLKEY), and report (TWREPORT).

@@section GLOBAL
TWROOT="/usr/TSS";
TWBIN="/usr/TSS/bin";
TWPOL="/usr/TSS/policy";
TWDB="/usr/TSS/db";
TWSKEY="/usr/TSS/key";
TWLKEY="/usr/TSS/key";
TWREPORT="/usr/TSS/report";
HOSTNAME=redhat.example.com;

The next example shows the file severity level section of the tripwire policy file. This example defines a number of variables to be used later in the policy. It uses several built-in property masks that tell the policy which properties to examine or ignore (IgnoreNone, ReadOnly, Dynamic, Growing), as well as a user-defined property mask (+pug) and numeric severity values (33, 66, 100). Property masks direct tripwire to look for certain types of changes in a given file or directory. (See Table 3.4 for a complete list of the characters used in tripwire's property masks.)

@@section FS
SEC_CRIT      = $(IgnoreNone)-SHa ; # Critical files - we can't
                                    # afford to miss any changes.
SEC_SUID      = $(IgnoreNone)-SHa ; # Binaries with the SUID or SGID
                                    # flags set.
SEC_TCB       = $(ReadOnly) ;       # Members of the Trusted
                                    # Computing Base.
SEC_BIN       = $(ReadOnly) ;       # Binaries that shouldn't
                                    # change.
SEC_CONFIG    = $(Dynamic) ;        # Config files that are changed
                                    # infrequently but accessed often.
SEC_LOG       = $(Growing) ;        # Files that grow, but that
                                    # should never change ownership.
SEC_INVARIANT = +pug ;              # Directories that should
                                    # never change permission or
                                    # ownership.
SIG_LOW       = 33 ;                # Non-critical files that are
                                    # of minimal security impact.
SIG_MED       = 66 ;                # Non-critical files that are
                                    # of significant security impact.
SIG_HI        = 100 ;               # Critical files that are
                                    # significant points of
                                    # vulnerability.

Table 3.4 tripwire Policy File Property Mask Characters

Property   Description
-          Ignore properties
+          Record and check properties
p          File permissions
i          Inode number
n          Number of links
u          User id of file owner
g          Group id of file owner
t          File type
s          File size
l          File is expected to grow
d          Device number of disk
r          Device number to which inode points

Table 3.4 tripwire Policy File Property Mask Characters (continued)

Property   Description
b          Number of blocks allocated
a          Access timestamp
m          Modification timestamp
c          Inode creation/modification timestamp
C          Cyclic redundancy check
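Putting Table 3.4 to work, a property mask is simply a +/- switch followed by the property characters to record or to ignore. A hypothetical rule (the file and the mask here are illustrative, not part of the default policy) might read:

```
# Watch /etc/passwd for any change except its access timestamp:
# record permissions, inode, links, owner, group, type, size,
# and modification time; ignore the access time.
/etc/passwd -> +pinugtsm-a ;
```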

The Tripwire Binaries rule instructs tripwire to watch the binaries themselves and tag any violation with the highest severity value:

# Tripwire Binaries
(rulename = "Tripwire Binaries", severity = $(SIG_HI))
{
    $(TWBIN)/siggen   -> $(ReadOnly) ;
    $(TWBIN)/tripwire -> $(ReadOnly) ;
    $(TWBIN)/twadmin  -> $(ReadOnly) ;
}

The Tripwire Data Files rule also instructs tripwire to watch the policy, configuration, and key files. Note that in the file severity section earlier, SEC_BIN was defined to be ReadOnly, so you're telling tripwire that the policy, configuration, and key files should not change at all. Here's an example of the Tripwire Data Files rule:

# Tripwire Data Files - Configuration Files, Policy Files, Keys, Reports, Databases
(rulename = "Tripwire Data Files", severity = $(SIG_HI))
{
    $(TWDB)                         -> $(Dynamic) -i ;
    $(TWPOL)/tw.pol                 -> $(SEC_BIN) -i ;
    $(TWBIN)/tw.cfg                 -> $(SEC_BIN) -i ;
    $(TWLKEY)/$(HOSTNAME)-local.key -> $(SEC_BIN) ;
    $(TWSKEY)/site.key              -> $(SEC_BIN) ;

    # don't scan the individual reports
    $(TWREPORT)                     -> $(Dynamic) (recurse=0) ;
}

Make sure to define rules to monitor the integrity of the policy directory, the key files, and the reports, as in the previous example. Note that the content of the report files is considered to be dynamic, and the recurse=0 directive instructs tripwire not to go into any of the report subdirectories.

The following is an example of a medium severity rule that defines a set of files that can change on a regular basis, but must retain their user and group ownership:

# Commonly accessed directories that should remain static with regards to owner and group
(rulename = "Invariant Directories", severity = $(SIG_MED))
{
    /etc -> $(SEC_INVARIANT) (recurse = 0) ;
}

After you have edited the textual policy file to make any necessary additions or modifications, encrypt it and sign it with the following command:

[ramon]$ sudo /usr/TSS/bin/twadmin --create-polfile /usr/TSS/policy/twpol.txt

This results in the creation of a binary file called /usr/TSS/policy/tw.pol. At this point, you have a valid configuration file and an initial policy file in place.

The next section explains how to initialize the tripwire database and how to start using the application.

Running tripwire

You're almost ready, but before you start comparing file signatures to look for tampering, you need to take a baseline snapshot of what the signatures should look like on all the files in your policy. This takes place when you first initialize the signature database using the following command:

[ramon]$ /usr/TSS/bin/tripwire --init

Whenever you want to check the integrity of the current state of the files, simply issue the command

[ramon]$ /usr/TSS/bin/tripwire --check

which results in a report being written to stdout and saved to the file location specified in the REPORTFILE environment variable (see Listing 3.6).

If you would like the report to be e-mailed to you instead, use the following variant of the previous command:

[ramon]$ /usr/TSS/bin/tripwire --check --email-report

Once you have reviewed the report, you can confirm that any integrity differences have been acknowledged by writing the new integrity results to the database. Do this by using the following command:

[ramon]$ /usr/TSS/bin/tripwire --update

Finally, as you make changes to the policy file, you need to force tripwire to reload its policy definitions using the following command:

[ramon]$ /usr/TSS/bin/tripwire --update-policy


TIP I recommend that you run tripwire at least twice a day and examine the integrity reports carefully before updating the signature database. Ideally, you should simply add the tripwire --check --email-report command to a cron job. This command should be executed first thing in the morning and shortly before the system administration staff goes home for the day.
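Under a schedule like the one suggested in the TIP, the crontab entries might look like this (a sketch: the specific run times, and the assumption that the job runs as root out of the system crontab, are yours to adjust):

```
# /etc/crontab - integrity checks first thing in the morning and
# shortly before the staff leaves; reports arrive by e-mail
0 6 * * *     root    /usr/TSS/bin/tripwire --check --email-report
30 16 * * *   root    /usr/TSS/bin/tripwire --check --email-report
```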

Password Auditing

Keeping track of logs and ensuring the integrity of your files can significantly strengthen your server, but it's easy to fall prey to the "crunchy on the outside, chewy on the inside" syndrome. The weakest link in most of today's Linux servers is the users and, more specifically, their choice of passwords. While your security policy should always include a section outlining the properties of good passwords, you should also take a proactive approach to password security and conduct periodic audits of all your users' passwords.

As with any other Unix system, the Linux passwords are stored as the result of a one-way DES encryption operation on the original cleartext password. (Some Linux systems also support MD5 hashing for password protection.) This means that the actual password is never stored in the clear. Whenever a user attempts to log on to the system, the login program encrypts the password entered and compares the result to the one found in /etc/passwd (or /etc/shadow). Only if the two passwords match is the user allowed access to the system.

There are several password-auditing tools available for Linux. These tools take as input the encrypted password file, and they attempt to guess each user's password by staging a dictionary attack, where a collection of commonly used words is DES-encrypted one by one and the results are compared to the /etc/passwd entries. If a match is found, the username is recorded and included in the output report for the security administrator.
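The dictionary attack just described can be sketched in a few lines of shell. The example below uses perl's crypt() function (a thin wrapper around the libc call) so it runs anywhere perl is installed; the password, salt, and three-word "dictionary" are of course contrived for illustration:

```shell
# The hash as it might appear in /etc/passwd (salt "ab", password "orange")
stored=$(perl -e 'print crypt("orange", "ab")')

# A real attack reads the salt from the stored hash itself: its first two characters
salt=$(printf %s "$stored" | cut -c1-2)

# Encrypt each candidate word with the same salt and compare
for word in apple banana orange; do
    if [ "$(perl -e "print crypt(\"$word\", \"$salt\")")" = "$stored" ]; then
        echo "password found: $word"
    fi
done
```

Tools like John the Ripper do exactly this, only with much larger word lists, mangling rules that generate variations of each word, and hand-tuned encryption routines in place of the libc call.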

This section describes one of these tools, John the Ripper, developed as part of the Openwall project (www.openwall.com).

John the Ripper

An alternative to the original crack program, John the Ripper is a robust password-guessing tool that uses its own routines to attempt to crack passwords, rather than using the Linux crypt(3) system call. This results in a noticeable performance advantage over the earlier version of the crack tool. Another advantage is that John the Ripper runs on a variety of platforms (most Unix, DOS, and Windows systems), so you can use other machines in your network to try and crack your Linux servers' passwords.

By using John the Ripper to try to guess your own users' passwords, you can alert them that they have chosen too short or too weak a password and exhort them to change it before an attacker with the same tool guesses it and breaks into their account.

Installing john

The john installation is trivial. Download it from www.openwall.com/john (it's not available in RPM form), decompress it, and expand the archive (see the steps in Listing 3.7). Once you have expanded the sources into their own directory, go to the src directory and run the make command. Note that you have several options for the target executables (type make with no arguments to see the options). Listing 3.7 shows a build for an ELF format executable on the i386 architecture.

Listing 3.7 Installing and compiling john

[ramon]$ ls -l john-1.6.tar.gz
-rw-r--r--   1 ramon    users    497354 Oct  3 19:27 john-1.6.tar.gz
[ramon]$ gunzip john-1.6.tar.gz
[ramon]$ tar xf john-1.6.tar
[ramon]$ cd john-1.6
[ramon]$ ls
README  doc  run  src
[ramon]$ cd src
[ramon]$ make
To build John the Ripper, type:
        make SYSTEM
where SYSTEM can be one of the following:
linux-x86-any-elf      Linux, x86, ELF binaries
linux-x86-mmx-elf      Linux, x86 with MMX, ELF binaries
linux-x86-k6-elf       Linux, AMD K6, ELF binaries
linux-x86-any-a.out    Linux, x86, a.out binaries
linux-alpha            Linux, Alpha
linux-sparc            Linux, SPARC
generic                Any other UNIX system with gcc
[ramon]$ make linux-x86-any-elf
ln -sf x86-any.h arch.h
make ../run/john ../run/unshadow ../run/unafs ../run/unique \
    JOHN_OBJS="DES_fmt.o DES_std.o BSDI_fmt.o MD5_fmt.o MD5_std.o BF_fmt.o BF_std.o AFS_fmt.o LM_fmt.o batch.o bench.o charset.o common.o compiler.o config.o cracker.o external.o formats.o getopt.o idle.o inc.o john.o list.o loader.o logger.o math.o memory.o misc.o options.o params.o path.o recovery.o rpp.o rules.o signals.o single.o status.o tty.o wordlist.o unshadow.o unafs.o unique.o x86.o" \
    CFLAGS="-c -Wall -O2 -fomit-frame-pointer -m486"
make[1]: Entering directory `/home/ramon/john-1.6/src'
gcc -c -Wall -O2 -fomit-frame-pointer -m486 -funroll-loops DES_fmt.c
gcc -s DES_fmt.o DES_std.o BSDI_fmt.o MD5_fmt.o MD5_std.o BF_fmt.o BF_std.o AFS_fmt.o LM_fmt.o batch.o bench.o charset.o common.o compiler.o config.o cracker.o external.o formats.o getopt.o idle.o inc.o john.o list.o loader.o logger.o math.o memory.o misc.o options.o params.o path.o recovery.o rpp.o rules.o signals.o single.o status.o tty.o wordlist.o unshadow.o unafs.o unique.o x86.o -o ../run/john
ln -s john ../run/unique
make[1]: Leaving directory `/home/ramon/john-1.6/src'
[ramon]$ ls -l ../run/john
-rwxr-xr-x   1 ramon    users    148428 Oct  3 19:59 ../run/john

There is no make installation step, so simply copy the executable to the appropriate place in your filesystem using the following command:

[ramon]$ sudo cp ../run/john /usr/local/bin

Configuring john

All configuration parameters are kept in the file john.ini, which must be present in the current directory when john is invoked. There are four variables in this file that you are most likely to configure:

wordfile The wordfile variable is the file that contains the word list to be used to crack passwords in batch style.

idle When set to Y, the idle variable forces john to use only idle CPU cycles. I recommend this setting if you need to run john on a production server. The default value is N.

save The save variable is the delay (in seconds) of the crash recovery file. This file contains checkpoints of the work done so far in case of an interruption. The default value is 600.

beep When the beep variable is set to Y, john beeps every time a password is successfully cracked. The default value is N.

The following is the [options] portion of a sample john.ini file containing these environment variables:

[ramon]$ more john.ini
# This file is part of John the Ripper password cracker,
# Copyright (c) 1996-98 by Solar Designer
[Options]
# Wordlist file name, to be used in batch mode
Wordfile = ~/password.lst
# Use idle cycles only
Idle = N
# Crash recovery file saving delay in seconds
Save = 600
# Beep when a password is found (who needs this anyway?)
Beep = N

Running john

By invoking john with no options, you let it run through its default sequence of cracking modes against the password file:

[ramon]$ john /etc/passwd

Alternatively, you can supply a list of commonly used words for john to try. The word list can be a file; the following example uses the file my_guesses:

[ramon]$ john -wordfile:my_guesses /etc/passwd

As a third option, the word list can be piped in via stdin, as in the following command:

[ramon]$ cat my_guesses | john -stdin /etc/passwd

Note that in general, john runs for a long time, and it writes its findings as it goes along into the ~/john.pot file. If you wish to see the passwords that john has cracked so far, issue the following command:

[ramon]$ john -show /etc/passwd

Also, if you are using shadow passwords, you need to merge /etc/passwd and /etc/shadow into a single file for john to work properly. (See Chapter 2, "System Installation and Setup," for information about shadow passwords.) The john executable can do this conversion when invoked as unshadow. Start by creating a symbolic link using the command

[ramon]$ sudo ln -s /usr/local/bin/john /usr/local/bin/unshadow

and then invoke the unshadow command with both files:

[ramon]$ unshadow /etc/passwd /etc/shadow > mypasswd

You now have a file (mypasswd) that you can try to crack with john:

[ramon]$ john mypasswd

Although it's very stable in its current form, the john utility is still being actively developed, and new versions often introduce a dramatic increase in performance. Check the Web site www.openwall.com periodically to look for updates.


In Sum

Security is a process, and putting up a strong defense is virtually useless unless you're willing to monitor your system for intrusions and stay vigilant. The syslog facility is a powerful system security tool, especially when the resulting log files are periodically monitored for abnormal behavior. This chapter describes two such tools: swatch and logcheck.

The value of examining log files, however, is greatly diminished if you can't be sure that their contents haven't been modified by an intruder in order to cover his/her tracks. Part of your security-monitoring procedures should include an integrity check on both log files and important system executables. This serves to assure you that the log files haven't been tampered with and that the executables haven't been replaced by Trojan horse versions.

Finally, it's easy to underestimate the importance of using good passwords that can stand up to naïve intrusion attempts. Among the password-auditing tools currently available for Linux, John the Ripper stands out as a clear winner because of its performance and its ease of use.

Once you have implemented your system security measures and established procedures for system monitoring and auditing, it's time to consider your network layer defense. The next part of this book, Part 2, "Network Security," describes the server configuration issues that impact network security and introduces monitoring and auditing tools and procedures that apply to the network layer.
