There are several automated and manual data integrity tools in the marketplace. Some come free with Linux distributions and some are offered as enterprise solutions and can cost tens of thousands of dollars. Some require $3,000 training courses and others have man pages. There is even well-known forensic software capable of running against Linux nodes, which offers the benefit of being able to profile systems before an attack and identify uploaded, malicious, altered, or hidden files or processes after an attack.
The sky is the limit concerning functionality. Whatever method you use to oversee an environment, or recommend or implement as part of an audit, you should follow some basic guidelines to ensure that the data integrity system is functioning properly.
First, double-check the files being monitored and make sure they encompass all of the critical system files. Also verify that no additional critical system files have been added as a result of an upgrade, security patch, or installation of new software; review the monitored list whenever any of these events occurs.
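As a sketch of this coverage check, the following compares a tool's configured file list against the contents of critical directories; the monitored set and directory list here are hypothetical examples, not any particular tool's configuration:

```python
import os

def find_unmonitored(monitored, critical_dirs):
    """Return critical files that are absent from the monitored set."""
    missing = []
    for d in critical_dirs:
        for root, _dirs, files in os.walk(d):
            for name in files:
                path = os.path.join(root, name)
                if path not in monitored:
                    missing.append(path)
    return missing

# Illustrative only: a real configuration would list many more paths
monitored = {"/etc/passwd", "/etc/shadow", "/etc/group"}
critical_dirs = ["/etc"]
```

Running `find_unmonitored(monitored, critical_dirs)` after each patch or install surfaces any critical file the integrity tool has silently missed.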
Second, and more specifically, ensure that only critical system files are being monitored. Many organizations and administrators have a bad habit of performing data integrity checking on too many files; the resulting noise leads them to ignore the scan results altogether.
Third, ensure that the data integrity process is run and updated with reasonable frequency. Scans should happen often enough to catch problems before they get too big, but not be so overly burdensome as to cause them to be ignored. Furthermore, run an integrity verification scan immediately before patches or new software installations (to verify the system is in a clean state) and immediately afterward (to update the database with the new data regarding the updated files).
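The before-and-after scan workflow can be sketched with standard library hashing. This is a minimal illustration, not any particular product's database format; the paths and on-disk layout are assumptions:

```python
import hashlib
import os

def hash_file(path, algo="sha256"):
    """Return the hex digest of a file, read in chunks to bound memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(paths):
    """Build a {path: digest} database for the given files."""
    return {p: hash_file(p) for p in paths if os.path.isfile(p)}

def compare(before, after):
    """Report files whose hashes changed between two scans."""
    return {p for p in before if p in after and before[p] != after[p]}
```

Run `scan()` immediately before a patch to confirm `compare()` against the stored database comes back empty, then run it again afterward and store the new result as the updated baseline.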
Finally, ensure the hash database is backed up and stored off the system being monitored. Attackers who gain access to the system (for example, because administrators are careless with their password choices) can alter the file hash database, or corrupt or delete it and render it useless.
Gold Image Baselines

The next step in data integrity is to incorporate all of the measurable critical and functional aspects of a system into a single profile. This profile includes all the items in traditional data integrity but needs to be much more comprehensive.
In addition to hash sets in the gold image baseline, the following should also be included:
• All running processes (including full path)
• Process accounts
• System libraries (including full path)
• Open files (including full path)
• User accounts (/etc/passwd) and groups (/etc/group)
• Loaded modules
• Installed devices
• File permissions
• File flags (such as immutable)
• A bit stream image of the operating system drive(s)
• The files contained in the /etc/init.d directory
• A record of the symbolic links associated with the files in /etc/init.d
• Any other configuration files (of which there are sure to be many)
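A few of the items above can be gathered with a short custom script. This partial sketch assumes Linux-style /etc/passwd and /etc/init.d layouts and covers only user accounts and init scripts (including their symbolic link targets); a real gold image profile would cover every item in the list:

```python
import os

def read_accounts(passwd_path="/etc/passwd"):
    """Parse account names from a passwd-format file."""
    with open(passwd_path) as f:
        return [line.split(":")[0] for line in f if line.strip()]

def list_init_scripts(init_dir="/etc/init.d"):
    """Map init script names to their symlink targets (None if regular files)."""
    entries = {}
    if os.path.isdir(init_dir):
        for name in sorted(os.listdir(init_dir)):
            path = os.path.join(init_dir, name)
            entries[name] = os.readlink(path) if os.path.islink(path) else None
    return entries

def collect_baseline(passwd_path="/etc/passwd", init_dir="/etc/init.d"):
    """Assemble a partial profile; store the result off-system for comparison."""
    return {"accounts": read_accounts(passwd_path),
            "init_scripts": list_init_scripts(init_dir)}
```

Serializing the returned dictionary (for example, as JSON) and storing it off the monitored system gives you a snapshot to diff against later.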
This image provides a comprehensive picture of the state of the system before any changes are made so you can use it for comparison at a later time. Although you can't assume that everything that has changed on the respective system is malicious, this gives you a good place to start and will at least eliminate certain files that you know are good.
Furthermore, if the state of the system is captured in a known good state, you can use it for more than a malicious incident response. Gold image baselines are often very useful in correcting simple misconfigurations, rather than more dramatic attacks. They are actually part of a more comprehensive disaster recovery and business continuity plan.
Gold image baselines should be stored in a secure location to prevent tampering or snooping, just as with other data integrity packages. They should also be included in your incident response kit for respective systems.
Probably the most well-known and tested resource for creating baselines is Tripwire. However, it does not perform all of the baselines required to create a true, gold image baseline. You can supplement with other tools, native utilities, or custom scripts to make up the difference, or you can use a comprehensive forensic and incident response tool like EnCase Enterprise Edition to perform all tasks within a single utility that you can store for later comparison in a single location.
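When supplementing with custom scripts, the comparison step reduces to diffing a freshly collected profile against the stored baseline. This generic sketch compares two profile dictionaries of any shape and is an illustration, not part of Tripwire or EnCase:

```python
def diff_profiles(baseline, current):
    """Report keys added, removed, or changed between two profile dicts."""
    added = {k: current[k] for k in current.keys() - baseline.keys()}
    removed = {k: baseline[k] for k in baseline.keys() - current.keys()}
    changed = {k: (baseline[k], current[k])
               for k in baseline.keys() & current.keys()
               if baseline[k] != current[k]}
    return {"added": added, "removed": removed, "changed": changed}
```

Anything reported here is a starting point for investigation, not proof of compromise, since legitimate patches and upgrades also change the profile.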