Reading the Tapes

The first tape contained the Infostat application source code already in The Walton Centre's possession: the data entry screens and reports built within the application itself.

cpio is a standard UNIX archiving utility used for machine-independent file transfers (see the man pages for more details of this utility). Since this tape was in cpio's standard format, it was easily read.

However, the tape containing the C language source code for the 4thWrite database engine, the one needed for the year 2000 compliance coding, could not be read so easily. This was not a media problem with the tape itself, as the contents could be read raw using the HP-UX dd utility, which is explained in detail in Appendix C in the section on Reconfiguring Swap Space. dd can convert a file from one format to another while copying it, and is also available as a GNU utility for Linux.
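A minimal sketch of dd's two roles here, reading a device raw and converting while copying. The tape-device name is a typical HP-UX one, assumed purely for illustration; the runnable part below demonstrates the convert-while-copying behaviour on an ordinary file instead:

```shell
set -e
cd "$(mktemp -d)"

# Raw-reading a DAT tape would look something like this (device name
# assumed; not the exact command used at the Centre):
#   dd if=/dev/rmt/0m of=datadump.dd bs=2048

# dd converting a file while copying it, e.g. mapping to lower case:
printf 'HELLO' > upper.txt
dd if=upper.txt of=lower.txt conv=lcase 2>/dev/null
cat lower.txt
```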

Two external data recovery agencies were sent copies of the HP-UX dd output file, and staff at NCC's ESCROW department (ex-CHC employees) also assisted, but a solution to the problem could not be found. Hex dumps of the tapes suggested that they had been created on a DEC Alpha running OSF-1, and various Internet newsgroups were contacted. Through this medium, a kind person in the United States offered to read the dd file on his DEC Alpha, but this also failed. Eric Taylor, The Walton Centre's UNIX Guru, was assigned the case. Eric could see from the hex dumps that the files stored were clearly not streamed. He then spent a few weeks trying to work out the data structure used; he was certain it was a file system of some kind, but without any indication of which file system, this proved an impossible task. At this point what was needed was a bit of luck!

The file was transferred to a Linux machine to see if Linux had any utilities that would provide the extra information needed to read it. This proved to be the luck that was needed. The Linux file command checks a file's magic number, which records what utility or application created the file; run on the output of the HP-UX dd, it identified the file as a Little Endian New File System Dump. This supported the belief that the tape had been produced on a DEC Alpha. Big Endian and Little Endian systems store integers differently, and this is what had been causing the problems. The HP-UX server had dump and restore utilities, but these could not work with the file, as the HP9000 is a Big Endian machine; Intel-based systems such as those running Linux, on the other hand, are Little Endian. This is a common problem with such dumps, and utilities such as file and strings can be useful in discovering whether it is the case.
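The endianness difference can be seen directly with a couple of standard tools. This is an illustrative sketch, not part of the original recovery: two raw bytes on disk are the same everywhere, but the integer they represent depends on the byte order of the machine reading them.

```shell
set -e
cd "$(mktemp -d)"

# Two raw bytes: 0x01 then 0x02.
printf '\001\002' > bytes.bin
od -An -tx1 bytes.bin
# A Big Endian machine reads these as the 16-bit integer 0x0102 (258);
# a Little Endian machine reads 0x0201 (513). Same bytes on the tape,
# different numbers in memory -- the root of the HP-UX restore failure.

# file(1) compares a file's leading bytes against its magic database;
# run on the dump file it reported a little-endian new-fs dump.
file bytes.bin
```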

Now that the format of the tape was known (i.e. a file created by the dump utility from a DEC Alpha file system), all that was needed was to restore it to the correct structure. A new version of the BSD-derived dump and restore source code was obtained and compiled on the Linux server. Feeling sure that this utility would solve the problem, it was applied to the file using the commands:

cd /hdb1                       # Change to an unused but mounted partition.

restore -rf /hda1/datadump.dd  # Restore the dump from file /hda1/datadump.dd.


The restore utility needs a pristine file system to operate correctly; that is why an unused disk partition was used.

Dismay followed when this utility reported errors and could not process the dd file. What was wrong now? All the tools necessary to process the file were in place, so some part of the process was obviously feeding it bad information. After a few days of head-scratching, it was discovered that, by default, the HP-UX dd command was reading the tape in 2048-byte blocks but writing only the first 512 bytes of each block to the output file, throwing the rest away. Is it any wonder no one had got anywhere with the dd file!

Once this problem had been solved, the source code could be read from the HP server's DAT drive using dd with the appropriate output block size set, transferred to the Linux server using FTP, and processed with the Linux restore utility. This produced 80 MB of C language source code and configuration files. Now the real fun could begin!
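A hedged sketch of the final sequence (device and file names are illustrative; the article does not quote the exact commands used), followed by a runnable check that matching input and output block sizes preserve every byte:

```shell
set -e
cd "$(mktemp -d)"

# Sketch of the corrected pipeline:
#   HP-UX:  dd if=/dev/rmt/0m of=datadump.dd ibs=2048 obs=2048
#   (FTP datadump.dd to the Linux server)
#   Linux:  cd /hdb1 && restore -rf /hda1/datadump.dd

# That re-blocking with ibs/obs loses nothing can be checked on an
# ordinary file: 3 x 2048 bytes in, 6144 bytes out.
dd if=/dev/zero of=tape.img bs=2048 count=3 2>/dev/null
dd if=tape.img of=copy.img ibs=2048 obs=2048 2>/dev/null
wc -c < copy.img
```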
