

Rootkits: Hiding a Successful System Compromise

Geoff Galitz
Research Computing, College of Chemistry
UC Berkeley

Abstract:

Computer security incidents continue to escalate into 2001. Nothing captures how difficult it is to detect successful compromises more than these excerpts:
The Air Force Information Warfare Center (AFIWC) estimated that the attacks cost the government over $500,000 at the Rome Laboratory alone. Their estimate included the time spent taking systems off the networks, verifying systems integrity, installing security patches, and restoring service, and costs incurred by the Air Force's Office of Special Investigations and Information Warfare Center. It also included estimates for time and money lost due to the Laboratory's research staff not being able to use their computer systems.

Information Security: Computer Attacks at Department of Defense Pose Increasing Risks (Chapter Report, 05/22/96, GAO/AIMD-96-84)

U.S. companies spent $118 million on computer forensics and other incident response services in 2000, and are expected to more than double that to $277 million by 2004, according to IDC.

San Francisco Chronicle

Computer security can be just as much a detective story, or a game of cat and mouse, as it is a science and an industry. Few fields in computer security better illustrate this than tracking down a successful exploit of a server, after the fact, and keeping the intruder out for good.

Table of Contents

  • BACKGROUND
  • FOCUS
  • THE BASICS
  • RISK AND VULNERABILITY
  • GENERAL PRINCIPLES
  • TRENDS IN ROOTKIT DEVELOPMENT
  • AVAILABLE ROOTKITS
  • AVAILABILITY OF ROOTKITS
  • PREVENTION
  • DETECTION
  • CAUTION
  • CONCLUSION
  • BIBLIOGRAPHY

    BACKGROUND

    Computer security is a big industry. Even so, the majority of Internet-connected sites do not have enough experience or staff to properly counter the threat posed by intruders, whether they be human or autonomous software agents (the dreaded worms, such as ramen).

    Inevitably, computer systems are infiltrated by an intruder or an agent, and many of these systems are then put to nefarious use. In these cases, the intruder has spent time and resources scouting for good targets and for the capabilities needed to achieve his goals. The intruder will attempt to keep the infiltration hidden from the system owners in order to protect that investment of time and resources, and to maximize the exploitation of his ultimate goal.

    Typically, an intruder evades detection by erasing information that was generated during entry of the target system, then installs a software package, generally known as a rootkit, which is designed to prevent the legitimate system owners from discovering that another party has access to their systems.

    FOCUS

    We will discuss, in general terms, what a rootkit is and the principle of operation. We will not discuss any particular rootkit in detail, except where certain modules are noteworthy. We will not discuss how to secure a server. There are plenty of resources for securing servers on the net. Refer to the bibliography at the end for more information.

    THE BASICS

    Why are rootkits even necessary? If a system is infiltrated and later used by the intruder, it is safe to say that the unusual activity would be noticed by a systems administrator in the course of routine troubleshooting, or that the intruder would be discovered by the automated processes the administrator runs on the server to keep tabs on users and on the general health of the system.

    Entry of the system by an intruder is generally a noisy event. That is, whatever means the intruder used to break in will leave various messages in the log files. Most rootkits include utilities which automatically clean suspicious messages from the log files; we will discuss this below.
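
    Log cleaners often leave their own traces, such as gaps or out-of-order timestamps. As a rough, hedged sanity check (the log paths are common defaults and vary with the syslog configuration):

    # Recent logins recorded in wtmp; look for suspicious gaps.
    last -20

    # Count log lines per timestamp prefix; missing intervals or
    # sudden jumps in the system log are worth a closer look.
    tail -200 /var/log/messages | awk '{print $1, $2, $3}' | uniq -c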

    First, let us discuss why an intruder may try to gain access in the first place. The goal of the intruder will, in part, dictate what rootkit is to be used. How the target systems will be used depends on:

  • The goal of the intruder
  • The platform compromised

    Typical uses for a compromised system include:

  • Taking part in a distributed denial of service attack
  • Taking part in a distributed application environment
  • Misappropriation of data for monetary gain
  • Misappropriation of resources for personal gain

    Taking part in a distributed denial of service attack

    DDoS attacks are not new. However, in late 2000, a large wave of attacks hit high-profile sites on the Internet. The motives and techniques of these attacks are mostly outside the scope of this paper, except where a technique may lead to detection of the intrusion on the local target.

    Taking part in a distributed application environment

    There are several contests where contestants race each other in an effort to achieve some goal first. These contests typically involve intensive number crunching and the need for more CPU cycles. Applications in these contests tend to allocate blocks of data to be analyzed or otherwise processed independently of each other. Intruders have been known to install these applications on systems without the permission of the system owner, in order to solve the puzzle and claim the prize, sometimes in the realm of $10,000 USD.

    Misappropriation of data for monetary gain

    Intruders have stolen sensitive information with the intention of profiting either through the sale of the stolen information to a competitor, or through extortion.

    In late 2000 and early 2001, CD Universe was threatened. An intruder informed the company that he had obtained the credit card numbers of CD Universe customers from a CD Universe server. The ransom was $100,000 USD.

    More information on this particular case can be found at:
    Rebuffed Internet extortionist posts stolen credit card data

    Misappropriation of resources for personal gain

    Some intruders simply delight in compromising a system and defacing the target's web site, usually also proudly posting their own handle to gain stature in the blackhat community.

    The web site http://attrition.org/mirror/attrition catalogs dozens of defaced web sites every day.

    How might these uses lead to the discovery of the intruder on a system?

    In the case of a DDoS, an intruder may not correctly throttle the bandwidth used in an attack from any individual attack node. Ideally for the intruder, the node would attack its target by sending a small but steady stream of data. The traffic may be ICMP traffic, or it can potentially be seemingly valid HTTP traffic. The details depend on the logistics of the given network.

    DDoS nodes are often installed en masse, on a wide variety of systems. Some compromised systems have the ability to generate much more network traffic than others. Even relatively small amounts of traffic may make a noticeable impact on a busy server, however.

    If a system administrator receives a report of a slow server, one of the first things he will do is determine if an individual process or job is causing the entire system to slow down.
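
    As a rough illustration, a first pass at that triage might look like the following sketch. It assumes a Linux system with GNU procps; other platforms use different ps options (or plain top).

    # Ten processes consuming the most CPU, plus the header line.
    ps aux --sort=-%cpu | head -n 11

    # Ten processes with the largest resident memory footprint.
    ps aux --sort=-rss | head -n 11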

    RISK AND VULNERABILITY

    What is the risk incurred? How does a rootkit installed on a server affect the organization?

    The essential point is that the server is no longer under the control of the legitimate system owners. This further raises questions of:

    Liability: Is the server now taking part in an attack on an innocent third party? What recourse do they have against you, the unwilling intermediary?
    Resource availability: Are the CPU cycles you paid for working on your project?
    Stability: Are intruders running code on your machine which can cause it to become unstable or flood the network?
    Data integrity: Is your data accurate? Was a fix, a patch, or an important calculation removed to make room for an mp3 file?
    Project integrity: Will delays in finishing a project due to the above issues result in killing it?

    GENERAL PRINCIPLES

    The general principles are exceedingly simple. Rootkits rely on inserting a layer of indirection between actual system activity and what users and administrators see. The layer mimics standard system behavior while implementing new functionality.

    The Root of the Problem.

    This brings up the topic of architecture in modern UNIX systems. We will not delve into a complete discussion of system security here. UNIX does have an effective method of separating dangerous from non-dangerous types of system behavior; the real problem we face is that the root user is exempt from all of these safeguards.

    The concept of a single system account that has no restrictions associated with it seems counter to the notion of security that was associated with UNIX system design in the beginning. There are many reasons why most systems are insecure upon initial install, but the actual design of the operating system is solid. The weak link in the chain is a lack of secure coding guidelines by programmers and the single system account that has no safeguards associated with it.

    Through buffer overflow attacks or simple misconfiguration, an intruder can gain root access due to human error. It should be noted that there is more than enough blame to go around. Many vendors, commercial and free alike, release code with buffer overflows. Most operating system vendors ship their systems with vulnerable applications enabled by default. Most vendors do not enforce safe implementation guidelines at runtime. Plenty of blame. This paper, however, is not about blame. This paper is about dealing with the reality that any system can potentially be breached, and that the intruder will then need to be ferreted out of the system. To approach this issue with the right frame of mind, entertain the following hypothesis.

    Imagine if there were no single system account free of all restrictions. Suppose certain permissions were required to open network sockets (needed for outgoing attacks and for setting up backdoor access), but the user holding them could not manipulate special files on the filesystem; and suppose an entirely different set of permissions were required to edit files marked as special in some way, such as files created at system install time. The intruder would then have an additional hurdle to overcome to hide his presence effectively. In this particular case, the intruder could establish a method of entry into the system, but keeping that entry method covert would not be a trivial matter.

    In a layered structure of this type, intrusions and a complete backdoor solution would not be impossible, but would raise the level of effort required to gain complete access. Currently, the intruder is faced with the challenge of:

  • Locating a suitable target
  • Gaining access to the target
  • Gaining access to a privileged system account

    Fairly simple. No wonder there are so many security incidents. If the procedures for administering resources on a server were broken into different modules, the intruder would have to subvert additional levels of authorization or access. That in and of itself would raise the level of effort required of the intruder. The effort would become even more complicated because there would be no single all-powerful system account that would enable the intruder to burrow down into the logical layers of the server. Each service that the intruder wanted to breach would have to be breached independently. In the current model, the services to breach are access to the host and access to a single system account. Once that single system account is compromised, little further effort is required to gain access and keep it.

    A more complete resource administration solution would require completely separate permissions to:

  • Enable network inbound connections
  • Open outbound network connections
  • Edit files in non-system/system space
  • Execute files in non-system/system space

    In order to grant the relevant permissions, separate system accounts would need to be used. If there were a single system account capable of granting these permissions, an intruder could break into that account and merely give himself every possible permission. However, a happy compromise may be to allow a single system account to grant these permissions, but to log all activity associated with that user and those permissions to a facility that cannot be accessed by any account but one, a special system audit user. The facilities associated with the system audit user could not be granted to any other user, without exception.

    Back to the Present

    The brief discussion above is not necessarily meant as a treatise on security architecture or as a suggestion to future engineers, but to impress upon the reader the extent of the problem with root user and the associated lack of controls.

    The original idea was to have a single user that could fix problems that were associated with misconfigured or malfunctioning system controls. The idea is valid, but the approach has had unintended consequences as server class systems proliferate beyond the lab.

    Now that we have discussed how easy it is, philosophically, for an intruder who gains root access to insert a layer of indirection, we shall discuss what the intruder does with it.

    This is generally accomplished by creating utilities that look and act like standard system utilities, with additional code to achieve whatever aim the intruder wishes. Usually, this means providing a covert means of manipulating the server.

    To put this into practice, we need to discuss what tools are typically used which could potentially lead to the discovery of an intruder, and hence, what tools are likely to be subverted by a rootkit.

    These tools include:

    Tool     Use
    ps       View system process status
    top      Display and update information about the top CPU-consuming processes
    vmstat   Report virtual memory statistics
    sar      System Activity Reporter (availability depends on the platform)
    netstat  Network statistics
    du       Disk usage

    The ps command includes fields for the running time of a given process and the amount of CPU time taken by the process. Any process taking a disproportionately large amount of CPU time will become immediately suspect. Therefore, hiding attack nodes from ps and related utilities is vital to prevent discovery when attack programs run for long periods of time.

    In the event that the ps or top commands do not reveal any leads for the system administrator, the next step may be to take a look at open network connections on the server to determine if a misbehaving service is causing a blockage of system resources.

    In this case, the netstat program is used to determine active network connections. However, these connections also include any backdoors the intruder has left in order to gain access quickly and easily in the future. Therefore, altering the output of netstat is necessary to safeguard access to the compromised server.

    The actual backdoor into the compromised system can take many shapes. The normal system access and authentication mechanisms may be subverted by merely replacing the normal mechanisms (such as /bin/login) with look-alikes which continue to provide normal system services, but also provide undocumented functionality when certain criteria are met. These criteria may include incoming connections from certain networks (again, we need to alter netstat to prevent detection of this particular connection) or merely a special keyword or syntax provided as an argument.

    For example, the /bin/login replacement in the lrk (Linux rootkit) package allows root to login merely by using the special account "rewt" and whatever password was defined during build time. The cancerserver rootkit allows root to login by specifying a particular terminal type in the user environment when connecting from a remote server.

    If a system suddenly becomes low on disk space on a filesystem that is normally static in size, the inquisitiveness of a systems administrator may be aroused. In this case, the system administrator may navigate around the filesystem using the usual commands such as "ls" and standard utilities such as "du" to determine where the disk space has suddenly gone.

    In the case of misappropriation of data, the sensitive data may need to be collated or centralized before transmission. This is especially true when an intruder is intercepting passwords, credit card numbers or even social security numbers over the network from the compromised system.

    The logs containing this data will grow in accordance with network utilization. Potentially, the logs may grow fast enough to trigger disk space alarms or even fill up available disk space causing normal operations to fail.

    Therefore, the ls, du and associated utilities must also be replaced by a rootkit to avoid detection.

    For example, the t0rnkit, a widely used rootkit, typically creates a directory called .puta in the /usr/info directory. However, with the trojanized ls command, the system administrator will never see the .puta directory when he looks for it with ls.
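
    A simple cross-check for entries hidden by a trojaned ls, sketched below against /usr/info purely because of the t0rnkit example above, relies on the fact that echo is a shell builtin and so does not depend on the /bin/ls binary. Note that kernel-level rootkits can still hide entries from the shell itself.

    # Change into the directory under suspicion.
    cd /usr/info

    # List all directory entries, including dot files, via shell
    # globbing rather than the (possibly trojaned) ls binary.
    echo .* *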

    Apart from evading detection, most rootkits also come with tools that allow quick reentry and surveillance.

    TRENDS IN ROOTKIT DEVELOPMENT

    There is a new threat to the attentive system administrator.

    Of primary concern when looking for rootkits and trojans is a new ability called binary redirection.

    Using binary redirection, the rootkit has the ability to intercept calls to a valid binary on disk and redirect those calls, completely transparently, to another binary located elsewhere on disk, or theoretically, even on a remote server.

    The primary threat here to the system administrator is that neither a visual inspection (via strings [see below]) nor a file integrity check will reveal the presence of the subversion, with certain exceptions.

    In the case of knark version 0.59, the key component is called ered and relies upon the knark kernel module being loaded. The ered utility is very easy to use. Merely call ered with two arguments, the first being the binary which you wish to subvert, the second being your replacement. For example:

    ./ered /bin/login /usr/lib/login.bogus
    

    At this point, any time /bin/login is called, the /usr/lib/login.bogus file will actually be executed. Any amount of inspection on the /bin/login file will reveal no signs of intrusion.

    However, ered and knark are not foolproof. In lab conditions, the legitimate binary which is supposed to be executed, but has been subverted, does not show up as a running process. In other words, if you run /bin/login manually, /bin/login itself will never show in the process table.

    This has been replicated using "ps" and "fuser." For example, redirecting the command /usr/bin/yes to a harmless shell script called redirect.sh, which simply prints "Hello World" on standard out, will yield the following results:

    # ered /usr/bin/yes /usr/src/redirect.sh
    # /usr/bin/yes
    Hello World
    #
    

    If you are playing along at home, you may notice that the binary which is running in place of the legitimate binary shows up in the process table. In the above example, while the user executed the yes command, it is the redirect.sh shell script that appears in the process table.

    Of course, this could be hidden by the intruder using standard practices, but it is yet another detail to keep track of on the part of the intruder; again raising the level of effort on the part of the rootkit author and the intruder.

    All of this means that without additional tools it is still possible to determine if redirection is being employed on a system without approval of the system administrator.

    If a system administrator becomes suspicious, all he has to do is run any given utility by hand and watch for it in the process table. If the utility never shows up, or the running file does not reflect its properties correctly via fuser or lsof, then the system administrator has a lead. This is not a perfect solution, and system administrators are always advised to run file integrity checkers from read-only media and on an alternate kernel on read-only media, without the ability to load kernel modules.
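
    A manual spot check along these lines might look like the sketch below. The choice of /usr/bin/yes and the one-second pause are arbitrary, and lsof must be installed.

    # Run a known binary in the background and note its process ID.
    /usr/bin/yes > /dev/null &
    PID=$!
    sleep 1

    # If redirection is in place, the command name or open files
    # reported here may point at a different file than the one launched.
    ps -p "$PID" -o pid,comm,args
    lsof -p "$PID"

    kill "$PID"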

    There are situations where manual checks are preferable for short periods. Many systems in a corporate or high use environment are considered mission critical, where any downtime is considered detrimental. Unfortunately, this means booting an alternate kernel and running a file integrity checker is problematic, especially since the process can take some time on a large server. In these cases, a balance has to be struck between periodic rebooting and checking the filesystem and manual checks.

    We discuss rootkit detection by use of data integrity checkers and manually in the section DETECTION.

    AVAILABLE ROOTKITS

    Currently available rootkits include:
  • knark
  • t0rn
  • cancerserver

    NO EXPERIENCE REQUIRED

    For the most part, the rootkits available on the net are easily compiled and installed with the traditional "make" and "make install".

    Coding skills become important only when the rootkit in question does not compile on a specific host or platform. Traditional rootkits are easily installed and come with adequate documentation.

    Inexperienced crackers do slip up, however. They sometimes leave the tar package of the rootkit lying around, not giving any thought to the fact that the rootkit will not make the tar package invisible.
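
    A quick, hedged sweep for such leftovers might look like the following; the directories and the seven-day window are arbitrary choices and should be widened to suit the site.

    # Look for recently created archive files in likely hiding spots.
    find /tmp /var/tmp /dev /usr/info -type f \
        \( -name '*.tar' -o -name '*.tar.gz' -o -name '*.tgz' \) \
        -mtime -7 -ls 2>/dev/null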

    AVAILABILITY OF ROOTKITS

    Just like any other piece of software, rootkits are merely a search engine away. For example, searching for "linux rootkit" turned up:

    Search Engine Number of references
    www.google.com "...about 241"
    www.altavista.com 84 pages found
    www.search.com (CNET) 2 hits
    www.lycos.com 208 Web sites were found in a search of the complete Lycos Web catalog

    And the list goes on. Of course, many of the hits were merely references from people asking for information because they were infected by a rootkit, and from others wishing to learn about rootkits even though they have not infected any machines themselves.

    Still, even that information is a resource for the bad guys just as much as for the good guys. Some of the hits included in those searches actually contained software archives of rootkits.

    Additionally, the cracker underground circulates rootkits and software archives among themselves via irc, email, and non-archived web sites.

    A little bit of time is all that is required to get the more widespread rootkits, and a little persistence is all that is required for the more specialized or non-public ones.

    PREVENTION

    We will not discuss how to prevent a system from being compromised, but we will discuss steps a system administrator can take to mitigate damage during the break-in process and prevent the rootkit from being successfully deployed.

    Kernel Modules

    Some operating systems support kernel modules. The notion of a kernel module is quite useful. The kernel loads device drivers of various types only when they are needed and unloads them when they are not in use. This results in a less complicated and more efficient kernel. However, as we have seen with binary redirection, kernel modules can also be dangerous.

    To avoid this particular vulnerability, disable loadable kernel modules on the system. This is usually done at kernel compile-time.
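
    As a sketch, on a Linux 2.4-era system this amounts to building a monolithic kernel with module support switched off; exact menu entries and build steps vary by kernel version and platform.

    cd /usr/src/linux
    make menuconfig        # answer "Loadable module support" with N
    make dep && make bzImage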

    Immutable Files

    Some operating systems support immutable files. By setting the immutable attribute on a file, system administrators can prevent unsophisticated intruders from subverting any given file.

    The chattr(8) man page on setting the immutable flag says:

           A file with the `i' attribute cannot be modified: it  can­
           not  be deleted or renamed, no link can be created to this
           file and no data can be written  to  the  file.  Only  the
           superuser can set or clear this attribute.
    
    If an intruder gains root access, they can simply remove the attribute. However, most attacks are scripted and many script attacks will not check the immutable attribute, thus the script will fail.
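
    For example, on a Linux system with ext2/ext3 attributes, setting and inspecting the flag on a frequently trojaned binary looks like this (root privileges required):

    chattr +i /bin/login
    lsattr /bin/login

    # The flag must be cleared again before legitimate upgrades:
    # chattr -i /bin/login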

    Read-Only Filesystems

    Some filesystems can be mounted read-only without significant trouble. The /bin, /sbin, /usr/bin, /usr/sbin, and /lib directories can be mounted read-only. The /etc filesystem, by contrast, contains files that need to be modified at run time, such as /etc/mtab, and so cannot easily be made read-only.

    An example /etc/fstab file may resemble:

    
    Device                  Mountpoint      FStype  Options         Dump    Pass#
    /dev/da0s1b             none            swap    sw              0       0
    /dev/da0s1a             /               ufs     rw              1       1
    /dev/da0s1f             /usr            ufs     ro              2       2
    /dev/da0s1c             /sbin           ufs     ro              2       2
    /dev/da0s1d             /lib            ufs     ro              2       2
    /dev/da0s1e             /var            ufs     rw              2       2
    /dev/da0s2e             /usr/ports      ufs     rw              2       2
    /dev/da0s3e             /usr/local/     ufs     rw              2       2
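
    On a running system, a filesystem can be toggled between read-only and read-write when patches need to be applied. The Linux remount syntax is sketched below; BSD systems use "mount -u -o ro /usr" and "mount -u -o rw /usr" instead.

    # Lock /usr down for normal operation.
    mount -o remount,ro /usr

    # ...later, before installing software or applying patches:
    mount -o remount,rw /usr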
    

    DETECTION

    As we have discussed, most contemporary rootkits merely replace the standard utilities with special versions. Rootkit installations often falsify timestamp and file size information to prevent system administrators from visually confirming the integrity of the utilities via the ls command.

    However, systems administrators have several additional utilities at their disposal. These are:

  • Manual Inspection
  • Data Integrity Checkers/Database

    MANUAL INSPECTION

    Typically, a manual inspection is carried out using the strings command. The strings utility is standard on all modern UNIX platforms. Its purpose is merely to display the human readable (ASCII) portions of a binary file. Fortunately for the experienced systems administrator, this human readable data includes the names of files where intruder passwords are kept, library versions the trojan was compiled with, and other information which does not normally correlate with the original data in the target file.
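
    A quick sweep with strings might look like the sketch below. The grep patterns are only illustrations drawn from the rootkits mentioned in this paper (the lrk "rewt" account and the t0rn ".puta" directory); a full manual review with a pager is still worthwhile.

    # Search the readable strings of a suspect binary for known oddities.
    strings /bin/login | egrep -i 'rewt|\.puta'

    # Page through everything and look for paths, passwords or library
    # versions that do not belong in a stock /bin/login.
    strings /bin/login | less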

    DATA INTEGRITY

    In addition to visual inspection, there are various approaches to taking snapshots of key files on the system and calculating special checksums to come up with a specific signature for a file that cannot be falsified without great effort.

    It is outside of the scope of this paper to discuss each file integrity tool in detail. Rather we will discuss the general approach.

    There are several factors to consider:

  • keeping the database of signatures secure from tampering
  • raising the work effort required of the intruder so that falsifying signatures is impractical

    Most data integrity checkers keep their information in a database (possibly little more than a text file). If the database is stored on the same machine that is compromised, the signatures which are used to verify the integrity of your data are also compromised. While there are no known rootkits which automatically seek out and alter the signatures in one of these databases, it is not difficult to conceptualize one.

    Therefore, it is important to keep the signatures where they cannot be tampered with. Generally, it is best to take the signatures and burn them onto a CD-ROM or other read-only media for storage. Simply load the CD-ROM whenever the system needs to be checked. With the advent of binary redirection, it is also important that the integrity checker be run from a clean kernel. This means creating boot disks or a bootable CD-ROM and running the entire check from the read-only media. This also means shutting the system down for general use, which can introduce disruption into high-availability environments.

    While booting from a clean kernel is strongly recommended, a site may be able to reach an adequate compromise by running the integrity checker from read-only media on the running kernel periodically, perhaps once an evening. The server can then be checked using a clean kernel (and a reboot) once a month.

    In the end, downtime from periodic security sweeps will be less damaging than downtime from a successful compromise. The single most well-known implementation is tripwire. While tripwire is effective, there are also other approaches which should be used in conjunction with it. Of particular interest are:

  • AIDE
  • RPM (Redhat Linux)
  • spfDB (Sun Solaris)

    AIDE is similar to tripwire and does not differ radically from it in function. More information on AIDE can be found at: AIDE.
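
    A minimal AIDE workflow, sketched under the assumption of a standard AIDE installation (database locations depend on aide.conf, and copying the baseline to read-only media is a manual step):

    # 1. On a known clean system, build the baseline database.
    aide --init

    # 2. Copy the resulting database to read-only media such as a CD-R
    #    so an intruder cannot quietly rebuild the signatures.

    # 3. Later, compare the running system against the trusted copy.
    aide --check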

    RPM is the Redhat Package Manager. File integrity checking is a small part of what RPM is meant for, and it is not as flexible as tripwire or AIDE. RPM should not be relied upon as the sole file integrity checker for a system, but in a pinch it can be quite useful. In order to use RPM to verify the integrity of an RPM package, the syntax is:

    rpm -V [package name]

    If a file within the package fails the integrity test, rpm will report output such as: 5......T c /bin/login
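
    Two related checks are worth noting: verifying the package that owns a specific suspect file, and verifying every installed package as a periodic sweep. A clean file produces no output; each printed character flags a failed test (for example 5 = MD5 mismatch, S = size change, T = mtime change).

    # Verify whichever package owns /bin/login.
    rpm -Vf /bin/login

    # Verify every installed package (slow, but useful periodically).
    rpm -Va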

    More information on RPM can be found at: RPM.org

    spfDB is the Solaris Fingerprint Database. Most files that ship on Solaris media are cataloged in it. For files which should not change during normal operation (/bin/login, as opposed to /etc/motd), the catalog includes every version that shipped on Solaris media, including all patched versions. This methodology will alert the system administrator when the given files have been updated by patches that did not originate from a Sun-certified source.

    The downside to the SPF is the less than elegant interface via an HTTP connection.
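
    The workflow essentially amounts to generating MD5 checksums locally and pasting them into the web form. The sketch below uses the GNU md5sum tool; Sun also distributes its own MD5 binaries for Solaris, and any trustworthy MD5 implementation, ideally run from read-only media, will do.

    # Checksums of a few frequently trojaned binaries, ready to submit.
    md5sum /bin/login /usr/bin/ps /usr/bin/netstat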

    More information on spfDB can be found at: spfDB.

    NETWORK DETECTION

    It is also possible to detect rootkits remotely. There are two approaches:

  • Active Detection
  • Passive Detection

    In active detection, a port scanner or security scanner is run against your own hosts to look for anything abnormal. The ability of a rootkit to guard against this type of detection is minimal, hence this is an important approach for security organizations to consider.

    The criteria to look for in scanning a network for rootkit installations would be:

  • SSH servers on unusual ports
  • telnet servers on unusual ports
  • HTTP servers on unusual ports

    Some freely available tools to conduct sweeps are:

  • NMAP
  • Nessus

    Every organization should have a security policy which stipulates that security scans be performed periodically. Only by checking the system from the inside, on the console, and from the outside, via a scan from the network, can an organization be reasonably sure that a server is still clean.
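
    One way to produce a sweep like the output shown below is a TCP connect scan of every port, logged to a file for later comparison. The hostname is a placeholder; scan only systems you are authorized to test.

    nmap -sT -p 1-65535 -oN scan-testbox.txt testbox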

    An example of a network scan detecting unauthorized backdoors would include the following output of nmap:

    Starting nmap V. 2.54BETA4 ( www.insecure.org/nmap/ )
    Interesting ports on testbox (xx.xx.xx.xx):
    (The 65535 ports scanned but not shown below are in state: closed)
    Port       State       Service             Protocol     Version
    21/tcp     open        ftp                 FTP          6.00LS
    22/tcp     open        ssh                 SSH          1.99-2.4.0 SSH Secure Shell (non-commercial)
    23/tcp     open        telnet                           
    25/tcp     filtered    smtp                             
    79/tcp     open        finger                           
    80/tcp     open        http                HTTP         Apache/1.3.14 (Unix)
    111/tcp    open        sunrpc              RPC          
    513/tcp    open        login                            
    514/tcp    open        shell                            
    515/tcp    open        printer                          
    1241/tcp   open        msg                              
    3001/tcp   open        nessusd                          
    3306/tcp   open        mysql                            
    5432/tcp   open        postgres            PostgreSQL   
    6000/tcp   open        X11                              
    6112/tcp   open        dtspc                            
    30299/tcp  open        ssh                 SSH          1.99-2.4.0 SSH Secure Shell (non-commercial)
    
    It is unusual, to say the least, to have a daemon like SSH running on a high port number such as 30299.

    Passive detection involves running a network monitor to listen for network traffic which commonly indicates a rootkit install. These items would include:

  • IRC traffic
  • telnet traffic
  • unusual amounts of UDP traffic
  • unusual amounts of ICMP traffic

    IRC (Internet Relay Chat) is a favorite means of communication for crackers, and also a favorite toy for some crackers to create bots (autonomous agents) which harass valid IRC channels with seemingly random IRC traffic (spam or inflammatory language, for example).

    UDP (User Datagram Protocol) is a popular method of remotely controlling attack agents. UDP traffic is more easily spoofed than TCP (Transmission Control Protocol) and does not involve the same level of accountability as TCP connections. It is easier to hide UDP connections in the noise of an active network.

    ICMP (Internet Control Message Protocol) is often used to disguise remote control channels to rootkits and attack agents as normal network diagnostic traffic. Fortunately, it is fairly easy to pick out unusual ICMP traffic on a network compared to UDP or TCP traffic.
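
    A rough passive-detection sketch with tcpdump, watching for the traffic classes listed above: port 6667 is the conventional IRC port and 23 is telnet, the interface name is an assumption, and unusual UDP or ICMP volume is better judged over time than per packet.

    tcpdump -n -i eth0 'tcp port 6667 or tcp port 23 or icmp'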

    CAUTION

    Once a rootkit is detected, a proactive security organization may wish to track down the culprit. At that point the jig is up, as they say. That does not mean the intruder is going to give up, however.

    Some intruders include logic bombs with their rootkits. Upon detection, a rootkit may go into self-destruct mode. It may actually be as innocent as deleting the rootkit itself and not affecting the rest of the server, or it may be as serious as destroying the entire filesystem and the boot block.

    The trigger to self-destruct can be anything. The self-destruct code could potentially be activated by:

  • Incoming NMAP packets on a specific port
  • The ls command being executed in a specific directory
  • A reboot.

    Not all intruders are experts. It is just as feasible that a cracker damages the system during the intrusion itself, causing unintended damage.

    Make sure you have clean backups handy.

    CONCLUSION

    We have discussed why you may find rootkits on your servers, how easily rootkits can be obtained, and the varying levels of risk across different rootkits.

    BIBLIOGRAPHY

    SecurityFocus
    Packetstorm
    SANS