Computer Viruses In Unix Networks

Peter Radatti
CyberSoft, Incorporated
August 1995

Copyright © August 1995, February 1996 by Peter V. Radatti.

Permission is granted to any individual or institution to use, copy, or redistribute this document so long as it is not sold for profit, and provided that it is reproduced whole and this copyright notice is retained.

Unix systems are as susceptible to hostile software attacks as any other system; however, the Unix community zealously believes that it is immune. This belief persists in the face of historical reality. The first computer viruses created were on Unix systems. The Internet Worm, Trojan horses and logic bombs are all milestones ignored by this belief. Notwithstanding these beliefs, there is a growing concern among computer security professionals about these problems. This concern is based on recognition of the complex nature of the problem and the increasing value of Unix-based networks. When the Internet Worm disrupted the Internet in 1988, the cost was relatively low. If that attack were repeated today, the cost would be very high because of the newfound importance of the Internet, electronic business networks using EDI, and private networks, all of which are Unix based.

Traditional methods used against attacks in other operating system environments such as MS-DOS are insufficient in the more complex environment provided by Unix. Additionally, Unix presents a special and significant problem in this regard due to its open and heterogeneous nature. These problems are expected to become both more common and more pronounced as 32-bit, multitasking network operating systems such as Microsoft NT become popular. Therefore, the problems experienced today are good indicators of the problems, and the solutions, that will be experienced in the future, no matter which operating system becomes predominant.

The Existence of the Problem and Its Nature

The problem of software attacks exists in all operating systems. These attacks take different forms according to the function of the attack. In general, all forms of attack contain a method of self-preservation, which may be propagation or migration, and a payload. The most common method of self-preservation in Unix is obscurity. If the program has an obscure name or storage location, then it may avoid detection until after its payload has had the opportunity to execute. Computer worms preserve themselves by migration, while computer viruses use propagation. Trojan horses, logic bombs and time bombs protect themselves by obscurity.

While the hostile algorithms that have captured the general public's imagination are viruses and worms, the more common direct problems on Unix systems are Trojan horses and time bombs. A Trojan horse is a program that appears to be something it is not. An example of a Trojan horse is a program that appears to be a calculator or other useful utility but has a hidden payload of inserting a back door onto its host system. A simple Trojan horse can be created by modifying any source code with the addition of a payload. One of the favorite payloads observed in the wild is "/bin/rm -rf / >/dev/null 2>&1". This payload will attempt to remove all accessible files on the system as a background process with all messages redirected to waste disposal. Since system security is lax at many sites, there are normally thousands of files with permission bit settings of octal 777. All files on the system with this permission setting will be removed by this attack. Additionally, all files owned by the user, their group, or anyone else on the system whose files are write-accessible to the user will be removed. This payload is not limited to use by Trojan horses but can be utilized by any form of attack. Typically, a time bomb can be created by using the "cron" or "at" utilities of the Unix system to execute this command directly at a specified time.
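
As a hedged illustration of the mechanism only (the file name /tmp/.payload and the schedule are hypothetical placeholders, and at's time syntax varies slightly between systems), the scheduling side of such a time bomb is nothing more than an ordinary crontab entry or at job:

    # hypothetical crontab entry: run whatever command the attacker supplies at 02:00 on January 1st
    0 2 1 1 * /tmp/.payload >/dev/null 2>&1

    # the same effect with at(1); the payload command is read from standard input
    echo "/tmp/.payload" | at 2am Jan 1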

While the bin remove payload is a favorite of many authors, there are other traditional attacks which are not as overt in their destruction. These other attacks are more important because they bend the operation of the system to the purposes of the attacker while not revealing themselves to the system operator. Attacks of this form include appending an account record to the password file, copying the password file to an off-site email address for leisurely cracking, and modifying the operating system to include back doors or cause the transfer of money or property. It is extremely simple to email valuable information off site in such a manner as to ensure that the recipient cannot be traced or located. Some of these methods are path dependent; however, the path selected is at the discretion of the attacker.

One of the simplest methods of inserting a back door is the well-known suid bit shell attack. In this attack, a trojanized program is used to copy a shell program to an accessible directory. The shell program is then set with permission bits that allow it to execute with the user id and permissions of its creator. A simple one-line suid bit shell attack can be created by adding the following command to a user's ".login" or any other file that they execute. Example: cp /bin/sh /tmp/gotu ; chmod 4777 /tmp/gotu
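
A hedged defensive counterpart, assuming only the standard find(1) utility: administrators commonly sweep for the artifacts this attack leaves behind by listing every set-uid file on the system and investigating anything unexpected.

    # list all set-uid executables; a set-uid copy of the shell such as /tmp/gotu stands out immediately
    find / -type f -perm -4000 -exec ls -l {} \; 2>/dev/null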

Trojan horses and time bombs can be located using the same methods required to locate viruses in the Unix environment. There are many technical reasons why these forms of attack are not desirable, the foremost being their immobility. A virus or worm attack is more important because these programs are mobile and can integrate themselves into the operating system. Of these two forms of attack, the virus attack is the hardest to detect and has the best chance of survival. Worms can be seen in the system process tables and eliminated since they exist as individual processes while virus attacks are protected from this form of detection by their host programs. All of the methods used to detect and prevent viruses are also effective against the other forms of attack, therefore, the remainder of this paper will deal with the more serious problem of viral attacks.

Unix Virus Attacks

The promotion of the concept of "magical immunity" to computer viral attacks surfaces on a regular basis. This concept, while desirable, is misleading and dangerous since it tends to mask a real threat. Opponents of the possibility of viral attacks in Unix state that hardware instructions, operating system concepts such as supervisor mode or permission settings, and security ratings like C2 or B1 provide protection. These ideas have been proven wrong by real life. Defeating supervisor mode, the additional levels of protection provided by C2, or the mandatory access control provided by security level B1 is not necessary for viral activity, and these mechanisms are therefore moot as methods of protection. This fact is supported by the existence of viruses that infect Unix systems as both scripts and binaries.

In fact, virus attacks against Unix systems will eventually become more popular as simpler forms of attack become obsolete. Computer viruses have significantly more virulence, more methods of protection and more opportunities for infection. Methods of protection have been highly refined in viruses, including rapid reproduction by infection, migration through evaluation of their environment (boot viruses look for uninfected floppy diskettes), armor, stealth and polymorphism. In addition, the host system itself becomes a method of protection and propagation. Virus-infected files are protected just as much by the operating system as are non-infected files. The introduction of viruses into systems has also been refined using technology called "droppers". A dropper is a Trojan horse that has a virus or viruses as a payload. Finally, extensive networking technology such as NFS (Network File System) allows viruses to migrate between systems without effort.

All of these reasons point to viruses as the future of hostile algorithms, however, the most significant reason for this determination is the effectiveness of the virus as a form of attack. Past experiments by Doctor Fred Cohen [1984] used a normal user account on a Unix system, without privileged access, and gained total security penetration in 30 minutes. Doctor Cohen repeated these results on many versions of Unix, including AT&T Secure Unix and over 20 commercial implementations of Unix. The results have been confirmed by independent researchers worldwide. Separate experiments by Tom Duff [1989] demonstrated the tenacity of Unix viruses even in the face of disinfectors. The virus used in Mr. Duff's experiment was a simple virus written in script. The virus was believed to have been reintroduced by the operating system from the automated backup and restore system. Reinfection took place after the system had been virus free for one year.

Heterogeneous Virus Attacks

I have observed non-Unix personal computers attached to a heterogeneous network that were infected with computer viruses originating from Unix servers [1987]. The Unix systems were not the original point of entry for the viruses. The viruses were dormant while on the Unix systems but became harmful when they migrated to their target systems. The Unix systems acted as unaffected carriers of computer viruses for other platforms. For the sake of simplicity, I have named this effect after a historical medical problem of similar nature: "Typhoid Mary Syndrome" [1991]. Networks, and specifically Unix servers that provide network file systems, are very susceptible to this problem.

I first observed this problem while investigating an infection of personal computers attached to a network with a large population of Unix servers and workstations. The viruses were attacked manually on the personal computers using virus scanners. Throughout the disinfection period, all of the personal computers were disconnected from the network and idle. Once all the computers were disinfected, all removable media had been tested, and the infection had gone unobserved for a period of time, the computers were reattached to the network. A few weeks later, a test of the computers using the same virus scanner indicated they had become reinfected with the same viruses. The source of infection was then identified as repositories of executables stored on the Unix file servers.

These repositories had grown organically into centralized resources for all the personal computers because the Unix servers were effective at providing these shared services via NFS. In retrospect, this problem had to exist. The use of network file systems exported from the Unix platforms provided an easy, powerful method of transferring data, including executables. Some network designs provide all third-party software from a network disk for ease of maintenance and reduced storage requirements. This easy access provides an open door for viruses.
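
The sharing mechanism itself is ordinary NFS administration; the host name, paths and option syntax below are hypothetical examples (export file formats differ between Unix flavors), but a minimal sketch of such a repository looks like this:

    # on the server, an /etc/exports entry publishes the shared software repository read-only
    /export/pcapps  -ro

    # on a client, one mount makes every executable in the repository locally available
    mount server:/export/pcapps /usr/local/pcapps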

Transplatform Viruses Attack Unix

During late 1994 and early 1995, I observed multiple instances of at least three transplatform virus attacks on Unix systems. All of these attacks involved MS-DOS viruses that attacked PC-based Unix systems. The first attack involved a virus that corrupted the Unix file system every night. The attack was located using a virus scanner, which pointed to a Unix binary that was executed at midnight by cron. The MS-DOS virus had become embedded in the Unix executable, where it was executed. The virus did not perform as designed, in that the corruption was the result of the virus attempting to infect other files and was not an intended effect. The virus was reinstalled every morning when the system was restored. The second attack involved an MS-DOS virus that executed and was successful in infecting other files. Once again the file system was corrupted, but the corruption took longer to occur, thereby allowing the virus to propagate. The final infection involved a boot sector virus. Since this type of virus executes prior to the loading of the operating system, the differences between Unix and MS-DOS are moot. The PC BIOS and processor chips are the same in both cases, and the virus is able to execute according to design. In fact, two different viruses were observed performing in this way. The first virus was spread by an MS-DOS setup diskette, while the second virus was transmitted using a still undiscovered method. While we observed no boot sector infections of PC-based Unix systems during 1994, we received reports from system administrators who were requesting information on our Unix anti-virus product because they had experienced hundreds of infections during 1995. In one instance, a single multinational company lost its entire international network overnight. The estimated cost in lost time, resources, and sales was in the millions of dollars.

Once it is understood that the BIOS and processor functions are the same for both operating systems, it is very easy to see how a transplatform virus could be designed intentionally. The virus would be able to behave correctly by identifying the operating system using only common BIOS calls and then modifying its basic behavior using a simple "if" structure.

Traditional Categories of Protection and Their Failure

There are three traditional categories of protection, none of which provide complete or significant protection as stand-alone methods of implementation. The categories are Control, Inspection and Integrity. Each of these methods has traditionally been used separately.

Control has been the primary intent of the U.S. national standards on computer security. They deal with the control of access to the system, its functions, its resources and the ability to move or share data in the system. These national standards are codified in a library generally referred to as the Rainbow series. (The name was given because the books have different color covers, making a library shelf look like a rainbow.) While these standards are a valuable and important aspect of computer security, they do not provide a deterrent against software attack. A virus is an effective way of gaining control over a system, even a highly controlled system such as a B1-rated version of Unix. In this case, control does not provide protection against software attacks because of a virus's ability to change permission sets with each new owner that is infected. A virus attack gains access to multiple users through shared files. Access control is designed to allow the sharing of files. The ability to share files is a basic need of the user and cannot be eliminated without destroying the usefulness of the system.

Discretionary Access Control (DAC) is not protection against software attacks because it is a weak form of protection that can be bypassed and, being discretionary, is under the control of the end users, who very often ignore it. Sites where the majority of the files on the system have no DAC protection are normal. (Many Unix sites have permission bit settings of 777, which allow anyone to read, write, execute or modify the file.) Mandatory Access Control (MAC) also has little effect on virus activity for the same reasons, although MAC can be configured to be neither weak nor easy to bypass. Each time a virus attacks an executable file owned by a different user, it takes on the full privileges of that user, including access to the files of other users whose permissions intersect the DAC and MAC permission sets of the infected user.

On all systems, the need to share files forces the creation of users who exist in multiple permission sets. This multiple membership allows viruses to move between MAC compartments and levels. The reduction of multiple-membership users will slow the advance of a virus but will not eliminate it. Finally, once a virus gains access to an operator account (root, operator, isso), it cannot be stopped by any form of control.
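
A minimal sketch of how an administrator might measure the DAC exposure described above (unprotected and fully open mode-777 files), using only the standard find(1) utility; the starting directory is an example:

    # count the world-writable regular files that a "rm -rf" payload or an infected program can reach
    find / -type f -perm -0002 -print 2>/dev/null | wc -l

    # list the fully open mode-777 files discussed above
    find / -type f -perm -0777 -print 2>/dev/null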

Inspection is the traditional way of locating both known holes in operating systems and known viruses. The key word here is "known". System audit tools such as COPS, SATAN and others can only locate holes that are known to them. Virus scanners can only locate viruses that are known to them. This means that a virus scanner or inspection tool is obsolete even before it is shipped from the factory. It can only deal with the past, never the present or future, since the conditions searched for must exist at the time of coding. Virus scanners have to be constantly updated. This is becoming a problem with the explosion of viruses being created by new authors and by virus computer-aided design and manufacturing tools (V-CAD/CAM).
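
The limitation is easy to see in a stripped-down sketch of signature scanning (the signature file, its contents, and the directory scanned are hypothetical; a real scanner matches binary patterns rather than plain strings):

    # signatures.txt holds one known pattern per line; any file containing one is reported.
    # A virus whose pattern is not in signatures.txt is, by definition, never reported.
    find /usr/local/bin -type f -exec fgrep -l -f signatures.txt {} \; 2>/dev/null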

It has been proposed that audit tools such as COPS can be used to deter virus infections because they strengthen the system's ability to control access and data movement. However, these inspection tools only improve control, and as stated above, control does not provide protection against virus attacks; it attempts to keep outside people out and inside people within their areas of authorization.

The third category of protection is Integrity. Integrity systems are intended to detect change. In the MS-DOS world, early integrity systems used cyclic redundancy check (CRC) values to detect change. A virus was then created which countered this protection. The virus determined the CRC value of the target file, infected it, and then padded the file until the CRC computed to the same value. Many Unix users still use this method of change detection, or worse, they attempt to use the date of last modification as an indication of change. The date of last modification can be changed to any value on Unix systems with a simple user command. On many systems an option of the "touch" command provides this ability.
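
A brief illustration of why modification dates prove nothing (the file name is hypothetical, and the exact option syntax of touch varies between Unix versions):

    # back-date a file so it appears untouched since noon on 1 January 1995
    touch -t 199501011200 /usr/local/bin/utility
    ls -l /usr/local/bin/utility    # now reports the forged modification time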

Any integrity tool that does not use cryptographic methods is of little value. In fact, if the integrity system fails to detect critical changes, then the false sense of security created in the system operator can be devastating to the system. CyberSoft created an integrity tool, CIT, using the MD5 cryptographic hash algorithm from RSA Data Security. Since the algorithm is cryptographic, it can detect even a single bit flip and cannot be misled by any known means. In addition, during the development of CIT, it was determined that it was necessary to detect additions and deletions to the file system, since these could be indications of non-infectious attacks such as those performed by Trojan horses, worms and hackers. In this way, a rolling baseline can be created that will allow the system operator to quickly recover from any form of file system attack. Modifications to the protected file system created by unauthorized users or software attacks can be detected and removed. Using a tool of this type allows the administrator to locate the approximate time of attack, since the modification will have taken place between two known timed events: the last and current executions of the integrity tool. Finally, integrity tools can be used to determine if a third-party file has been modified or tampered with prior to use. Some manufacturers of Unix operating systems now publish MD5 digests of their systems. Using these digests, it is possible to determine that a file on your system is exactly as it should be: there has been no degradation from misreading the installation media, deterioration of the disk system, or intentional modification. If a manufacturer does not publish a list, then end users can create their own by installing the operating system on multiple systems from different media sources. The resulting digests from each system should agree.
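
CIT itself is proprietary, but the underlying idea of a rolling cryptographic baseline can be sketched with ordinary tools, assuming an MD5 digest utility is installed (called md5 or md5sum depending on the system; the directories and file names below are examples):

    # build a fresh digest baseline for the protected directories
    find /usr/bin /usr/sbin -type f -print | sort | xargs md5sum > /var/secure/baseline.new

    # any file that was modified, added or deleted since the last run appears in the difference
    diff /var/secure/baseline.old /var/secure/baseline.new

    # if every difference is authorized, roll the baseline forward
    mv /var/secure/baseline.new /var/secure/baseline.old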

Nontraditional Categories of Protection and Their Failure

In the past, fencing systems were sold as a popular method of virus protection on PC platforms. A fencing system write-protects parts of the disk using a hardware board that is added to the system bus. Since a virus cannot infect a file that is write-protected in hardware, this appears to be a good method. The obvious drawback is that the user cannot write to the disk if it is write protected. The fencing system therefore had to create zones of protection so that the user could perform useful work. Viruses happily infected the unprotected zones. Fencing systems appear never to have been marketed for Unix systems. CyberSoft did provide fencing as a custom solution to an Internet service provider a few years ago. We suggested that their boot disk have the write-enable line cut and a shunt installed. The operating system was installed, and logical links to a second, write-enabled disk were created for all files that required constant modification. This method has been very successful against hacker attacks. The service provider has never had a write-protected file modified by an attack. Many people have tried, but the method has stood the test of time. This implementation method also suffers from the problem of zones of protection.
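
A sketch of the file-system side of that arrangement, with hypothetical paths: while the shunt still allows writes, the directories that must change at run time are relocated to a second, writable disk mounted at /rw and replaced with links; once the boot disk is write-protected, everything not linked this way cannot be modified, even by root.

    # performed once, before the boot disk is write-protected
    mv /var /rw/var && ln -s /rw/var /var
    mv /tmp /rw/tmp && ln -s /rw/tmp /tmp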

Currently Available Methods of Protection

CyberSoft, Inc. manufactures the first and oldest [1991] product in this category. The product is called VFind and runs on most Unix systems. Since I have not studied the other products available for Unix, I will deal with the product that I am qualified to discuss, VFind.

The VFind product provides protection in all three categories. It provides Control by supplying the COPS audit tool along with a proprietary audit tool called THD (Trojan Horse Detector). COPS was not developed by CyberSoft and is available free on the Internet; however, CyberSoft believes it is necessary to provide a certificate of traceability for COPS. It receives the program directly from the author, Dan Farmer, and supplies it to the end user without modification other than packaging. This ensures that the end user does not receive a trojaned or corrupt copy of the program. The THD program makes use of the fact that many Trojan attacks use duplicate file names, where the file name of the Trojan is the same as that of a popular Unix command, in order to get executed. The "ls" command is normally stored in the "/usr/bin" directory. Since many users allow world read permission on their account control files (a.k.a. dot-files), it is easy to learn the search path that user has selected for locating system commands. If a writable area appears in the search path before "/usr/bin", then a Trojan or virus-infected version of the ls command can be placed in that directory and will be executed. The THD program looks for duplicate file names throughout the system. It also detects known high-risk file names such as "/tmp/gift", which is the result of the Unix Usenix Virus (aka AT&T Attack Virus) running on the system.
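
The core of the duplicate-name check can be sketched in a few lines of Bourne shell (a simplified, hypothetical stand-in for what THD automates across the whole system):

    # print every command name that appears in more than one directory of the current search path;
    # a duplicate that precedes /usr/bin in the path deserves immediate investigation
    echo "$PATH" | tr ':' '\n' |
    while read dir
    do
        ls "$dir" 2>/dev/null
    done | sort | uniq -d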

Inspection is provided by a standard virus scanner. Since the Typhoid Mary problem affects Unix systems, the scanner simultaneously searches for Unix, MS-DOS, Macintosh and Amiga viruses on the Unix system. It has a user accessible pattern matching language called CyberSoft Virus Description Language, CVDL, which can be used to keep the scanner up to date. In fact, the end user can use legally obtained scan codes from other vendors or of their own creation in order to provide independence from the vendor.

There were multiple reasons why CyberSoft felt it was necessary to develop a virus description language. The increasing sophistication of the problem was becoming difficult to handle using standard scanning technology. Many of the viruses that attack UNIX are written entirely in source code and executed in interpreted languages such as shell script. Scan codes cannot easily be designed to find a virus in which the white space, the use of tabs and the variable names change. Normal scan codes depend on the fact that binary executables contain stable strings of code that can be searched for at specific addresses (excluding polymorphic and stealth viruses). This is only partially true in the UNIX environment. Since VFind was designed to search for UNIX, MS-DOS, Apple Macintosh and Commodore Amiga viruses on the UNIX platform, addresses could no longer be specific, since the infected file might exist within a pseudo-disk or a compound file such as a tar file. In addition, the sequences of stable code values had to increase in size to retain statistical validity and not generate false hits.

Scanning for viruses written in source code required several innovations in virus scanners. Many of the features required are normal parts of compiler parsers. Compiler parsers are the first step in the process of taking a computer program written in a source language and producing a binary executable. CyberSoft felt that a compiler parser could provide a solution to its technical goals, however, it would be necessary to define an entire language for the parser to work correctly. At the time this decision was being made, 1991, CyberSoft was unable to locate any standards for a virus description language. The language was defined in January 1992 and named the CyberSoft Virus Description Language, CVDL.

During the design of CVDL, several goals were defined. The first was to design a universal way of describing pattern matching. The second was that the language incorporate enough features that unforeseen future requirements could be resolved without changing the language or code. The grammar and versatility of the language had to allow general programming within the pattern-matching framework. These goals dictated many of the intrinsic features within CVDL, including the necessity to process any character or hex stream. Originally, we desired the capability of processing pattern descriptions of any length; however, practical limits prevailed and a limit of 32,000 bytes per description was defined. A description 32,000 bytes long can yield an actual pattern thousands of times longer, so the constraint was considered non-binding. Boolean operators were defined, and case sensitivity or insensitivity was made a user-selectable option. One of the hardest requirements to design efficiently was the ability to provide forward-reference proximity scanning. This feature was a necessity for locating source code viruses. Proximity scanning allows the definition of a pattern that will not be affected by the ambiguity of the typist or by white space.

One of the design features of CVDL is its ability to be used for the cleanup of data spills by searching a system for predefined patterns. While data spills are not a common problem with software attacks, they are a common problem with hacker attacks. A hacker will store interesting files in obscure locations. Many organizations flag "interesting" files using document headers such as "TOP SECRET" or "COMPANY CONFIDENTIAL". Using CVDL, many different possible patterns of actual code can be matched within defined constraints. In this way, CVDL is able to produce a basic model of a pattern that can match with a high percentage of accuracy and integrity.
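
With ordinary tools the same idea reduces to a simple sweep (the markers and the directory are examples; CVDL adds the proximity, Boolean and case-handling features described above):

    # report every file under /home whose text carries a classification marker
    find /home -type f -exec egrep -l -i 'TOP SECRET|COMPANY CONFIDENTIAL' {} \; 2>/dev/null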

Finally, an MD5 cryptographic integrity tool called CIT provides integrity for the entire file system. CIT identifies all files which have been modified in, added to, or deleted from the file system. A side benefit of this ability is a reduction in help desk repair time when correcting system problems.

The use of tools from all three categories of protection, along with sensible policies and procedures, provides maximum protection against software attacks in Unix, because each tool supports the areas in which the other tools are deficient.

Projection of Future Problems

I believe that the problem of attack software written for and targeted against Unix systems will continue to grow, especially now that the Internet has gained popularity. Unix systems are the backbone of the worldwide Internet. Viruses will become more prevalent because they provide all of the benefits of other forms of attack while having few drawbacks. Transplatform viruses may become common as an effective attack. All of the methods currently used in creating MS-DOS viruses can be ported to Unix. This includes the creation of automated CAD/CAM virus tools, stealth, polymorphism and armor. The future of viruses on Unix is already hinted at by the widespread use of Bots and Kill-Bots (slang terms referring to software robots). These programs are able to move from system to system performing their function. Using a Bot as a dropper, or creating a virus that includes bot-like capability, is simple. With the advent of global networks, the edge between viruses, bots, worms and Trojans will blur. Attacks will be created that use abilities from all of these forms and from others yet to be developed. There have already been cases where people have used audit tools such as COPS and SATAN to attack a system. Combining these tools with a virus CAD/CAM program will allow a fully functional virus factory to create custom viruses and attacks against specific targets, such as companies that are disliked by the perpetrator. The information services provided by the Internet already supply sufficient information, in the form of IP addresses and email domain addresses, to identify, locate and attack systems owned by specific entities.

Finally, viruses and worms can provide the perfect format for a hostage-shielded denial-of-service attack. It is well known that an Internet-attached system can be made to "disappear" or crash by flooding it with IP packets. Site administrators can protect their systems from crashing by programming their local router to filter out packets from the attacking source. The system will still disappear, because legitimate users will be squeezed out by the flood of attack packets, but filtering at the router can at least save the system from crashing. Unfortunately, anyone can masquerade as someone else on the Internet by merely using their IP address. This attack can send a barrage of packets to the target site, each of which has a different source IP address. It is not possible to use a router to filter out this type of attack, but the Internet service provider can trace the source of the attack by physical channel without relying upon the IP address. In cooperation with other Internet providers, the attacker can be isolated from the Internet for a short time. Hopefully, the attacker will become bored and go away, or can be identified for action by law enforcement. Another possibility is to use viruses to generate the attack. If a virus is successful in spreading to thousands of sites on the Internet and is programmed to start an IP attack against a specific target on the same day at the same time, then there is no way to stop the attack, because it originates from thousands of sites, all of which are live hostages. The site under attack will have to go offline, since the Internet service providers will be helpless in the face of a coordinated, dispersed attack. Since the impact on each individual hostage system is low, the hostages may not even notice that there is a problem. The Internet service provider attached to the target system is in the best position to detect the attack; however, they are as subject to this attack as the target, since they may "crash" from the excessive bandwidth usage flooding their network from multiple sources.

Scenario of a Virus Attack Against a Secure Unix Network

The military and many other organizations believe that they are protected against focused attacks because they employ a closed network configuration. In some cases these networks may also use highly secure "B"-rated operating systems [NCSC-TG-006]. Typically, the network will not allow modems, Internet connections or any electronic connections to organizations outside of the immediate need. In addition, the networks are almost always heterogeneous because of legacy equipment, primarily PC systems. The network designers normally allow the PC systems to retain their floppy disk drives even though their attachment to a network renders them nonessential. Networks of this type have been considered secure; however, they are open to information warfare attacks via a focused virus. Assuming that the perpetrator is an outsider without access to the equipment or premises, one possible method of attack against this type of network would take advantage of both the Typhoid Mary Syndrome and transplatform viruses to produce an attack that is targeted against the Unix systems but originates from an attached PC. A virus can be created whose payload is triggered by executing on a PC that is attached to the target network. This is not hard with a little inside information about the configuration of the network. The perpetrator would then install the virus at all of the local universities in the hope that someone working at the installation is taking a night class, or that one of their children will unknowingly infect a commonly used home computer. At that point, the virus has a good chance of entering the target network. This is a well-known vector and is enhanced because the virus will not reveal itself. Once on the target system, the PC virus will act like a dropper, releasing a Unix virus into the backbone. The payload virus may be necessary because many Unix backbone systems are not PC compatible. The Unix virus payload can then install a back door which can be remotely directed. In addition, the virus can create a covert channel by making use of messenger viruses. While messenger viruses are slow and have low bandwidth, they are bidirectional and can be used for command and control of more complex attacks.

Conclusion

I believe that the problem of attack software targeted against Unix systems will continue to grow. Viruses may become more prevalent because they provide all of the benefits of other forms of attack while having few drawbacks. Transplatform viruses may become common as an effective attack. All of the methods currently used in creating MS-DOS viruses can be ported to Unix. This includes the creation of automated CAD/CAM virus tools, stealth, polymorphism and armor. The future of viruses on Unix is already hinted at by the widespread use of Bots and Kill-Bots (slang terms referring to software robots). These programs are able to move from system to system performing their function. Using a Bot as a dropper, or creating a virus that includes bot-like capability, is simple. With the advent of global networks, the edge between viruses, bots, worms and Trojans will blur. Attacks will be created that use abilities from all of these forms and from others yet to be developed. There have already been cases where people have used audit tools such as COPS and SATAN to attack a system. Combining these tools with a virus CAD/CAM program will allow a fully functional virus factory to create custom viruses to attack specific targets.

As these problems unfold, new methods of protection must be created. Research has hinted at several promising methods of protection, including real-time security monitors that use artificial intelligence for simple decision making. It is my hope that these problems never come into existence, but I am already testing them in an attempt to devise methods of counteracting them. If I can create these programs, so can others.

Even with the current problems, and the promise of more sophisticated problems and solutions in the future, the one thing that I believe to be certain is that Unix or Unix-like systems will continue to provide a payback that is well worth the cost of operating them.

Post Notes

Versions of this document were presented at the following conferences as an invited paper:

Virus Bulletin International 1995, September 21, 1995 (Virus Bulletin Journal), Boston Park Plaza Hotel, Boston, Mass.

Eighth Annual CALS Expo 1995, October 24, 1995 (National Security Industrial Association), Long Beach Convention Center, Long Beach, CA.

Photonics East 1995, October 25, 1995 (SPIE, the International Society for Optical Engineering), Pennsylvania Convention Center, Philadelphia, PA.

Open Systems Security 1996, March 5, 1996 (MIS Training, OSF, ISSA, Bellcore, InfoSecurity News), Hilton Disney World Village, Lake Buena Vista, FL.

This paper is not the exact version presented at the conferences. The conference papers were tailored for each conference. This paper is a work created from the same root document as those presented. For copies of the presented documents please contact the individual conference sponsors for back issue pricing.
