

Congressman Stephen Horn, R-CA Chairman

Oversight  hearing on

"What Can be Done to Reduce the Threats Posed by Computer Viruses and Worms to the Workings of Government?"

August 29, 2001

Testimony of 

Marc Maiffret
Chief Hacking Officer
eEye Digital Security

before the 

Subcommittee on Government Efficiency, 
Financial Management 
and Intergovernmental Relations 


I would like to thank you for giving me the chance to write this testimony and to share with you some of the knowledge I have gained over the past few years as both a hacker and a security professional.

My first three years in the field of computer security were spent as a hacker. That part of my life allowed me to gain insight into the types of security threats we face, as well as an understanding of what can be done to defend against those threats.

One day, when I was 17, I had a "wake up call" of sorts that motivated me to turn my life around and to put my knowledge towards something that would help people in their quest for security. That something is now what is known as eEye Digital Security.

eEye Digital Security was started a little over 3 years ago by Firas Bushnaq and myself. We formed eEye with the intention of creating software products that would help protect companies against the growing threat of cyber attacks.

In addition to building software, one of the ways that we are able to help keep systems secure is through vulnerability research. Vulnerability research is the process of looking for ways that someone could potentially manipulate a software product or hardware device, in order to gain access to a system or network.

Since its inception, eEye has researched and published some of the largest software vulnerabilities to date. In fact, within the last 3 months alone, eEye has discovered 3 vulnerabilities within software products that are installed on more than 8 million servers around the world. When eEye finds a vulnerability, we work closely with the software manufacturer (vendor) in order to help them create a "patch" which is then installed by computer administrators in order to protect their systems from the newly discovered vulnerability.

In May of this year (2001), eEye discovered a vulnerability within Microsoft’s Internet Information Services Web Server software. Microsoft IIS is a software product that is installed on roughly 6 million Web servers around the world. The vulnerability allowed an attacker to gain complete control of a Microsoft IIS Web Server within a matter of a few seconds from anywhere in the world. When we discovered the vulnerability (which we termed the .ida buffer overflow) we followed the same process of contacting the software vendor and working with them to have a patch released. eEye and Microsoft worked together to make sure that system administrators were aware of this serious vulnerability and protected themselves accordingly.

In a perfect world, every system administrator would have installed the security patch and all 6 million systems would have been protected from this vulnerability; however, computer security is not perfect, and the consequences resulting from systems remaining unpatched were far worse than anyone expected.

The Creation and Release of the CodeRed Worm.

The CodeRed worm has become a great example of just how fragile the Internet really is. I believe that the CodeRed worm contains many key elements to make for a serious discussion on the current types of threats the Internet and the United States are facing on the digital frontier.

A computer worm is one of the most dangerous types of attacks that threaten the Internet today, often more dangerous than any virus. A virus can only infect new systems if a computer user performs a certain action (e.g., executing an email attachment) whereas a worm, once planted on the Internet, is completely self-propagating. This ability allows a worm program to infect a very large number of systems in a very short period of time. Once the spreading has begun, the author of the worm can conceivably have control over thousands if not millions of systems, which can then be used to perform attacks against the Internet or specific parts of the Internet.
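The difference between user-triggered virus spread and self-propagating worm spread can be made concrete with a toy growth model. This is an illustrative sketch only: the per-step infection rates, step count, and starting values below are made-up assumptions, not figures from any analysis; the 6 million figure echoes the IIS server count cited in this testimony.

```python
# Toy model (illustrative assumptions throughout): compare how a
# user-triggered virus and a self-propagating worm grow over time.

def spread(population, initial, rate, steps):
    """Each step, every infected host compromises `rate` new hosts on
    average, capped by the remaining susceptible population."""
    infected = initial
    for _ in range(steps):
        new = min(int(infected * rate), population - infected)
        infected += new
    return infected

POPULATION = 6_000_000  # roughly the number of IIS Web servers cited above

# A virus needs a human action per infection, so its per-step rate is low.
virus = spread(POPULATION, initial=2, rate=0.5, steps=24)

# A worm scans and exploits on its own, so each host infects several others.
worm = spread(POPULATION, initial=2, rate=3.0, steps=24)

print(f"virus after 24 steps: {virus:,}")
print(f"worm after 24 steps:  {worm:,}")
```

Under these assumed rates the worm saturates the entire vulnerable population within the 24 steps, while the virus reaches only a tiny fraction of it, which is the qualitative point of the paragraph above.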

As I said earlier, the best real-world example of all of this is the CodeRed worm.

On Friday, July 13th, eEye Digital Security was contacted by a network administrator who was experiencing a stream of .ida buffer overflow attacks being sent against his computer network. At first the administrator felt that it was simply a few hackers on the Internet attempting to break into his network. Later that day one of his website's pages was replaced with a message that said "Hacked by Chinese. Welcome to http://www.worm.com". That Web server then proceeded to attempt to connect to other Web servers on the Internet. All of this information was available to the network administrator because he was running a network intrusion detection system, which was able to detect the .ida attacks. At this point, when his server started trying to connect to other Web servers, the administrator began to suspect that someone had written a worm program for the .ida vulnerability.

Since eEye Digital Security was the company that discovered the .ida vulnerability, the administrator turned to us for help. On Friday, July 13th, he sent us an email containing the details of what he was experiencing. We worked most of Friday evening to try to decipher what was happening, but without the actual code of the worm the work was difficult. On Sunday, July 16th, a second network administrator, who had been in contact with the first, was able to give us the complete binary capture (attack code) of the worm, which was also attacking his network. We then worked through Monday and early Tuesday until we released our initial worm analysis on the morning of Tuesday, July 17th. The initial analysis was sent to various security mailing lists and also to government cyber watch agencies such as NIPC.

We named the worm "CodeRed" after the type of soda that Ryan Permeh (the other researcher at eEye who dissected CodeRed) and I were drinking during the late-night hours of work on the worm.

Over the next few days we worked closely with NIPC to explain to them how CodeRed worked and to make sure they had all of the information that they needed to release an alert.

On Wednesday, July 18th, 2001, we released our second and more detailed analysis of the CodeRed worm. In this analysis we outlined that between the 20th and 27th (UTC) of the month, the CodeRed worm was going to stop trying to infect new Web servers and instead start attacking by flooding the www.whitehouse.gov Web server with very large amounts of data (much like the yahoo.com DDoS attacks). We then pointed out that on the 28th the worm was supposed to go to "sleep" and never try to infect a new server again. At this point the CodeRed worm had infected nearly 400 thousand systems. That meant that 400 thousand Web servers around the world would be sending terabytes of data through the Internet towards the White House's Web server.
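The day-of-month schedule described above can be sketched as a simple state function. This is an illustrative reconstruction based on the behavior outlined in our analysis, not the worm's actual code; note that the same day-of-month check is also why, as discussed later in this testimony, a single machine with a wrong clock could restart the spread on the 1st of the next month.

```python
# Sketch of CodeRed's date-driven behavior as described in the analysis
# above (day boundaries from the testimony; the function itself is an
# illustrative reconstruction, not the worm's real code).

def codered_mode(utc_day_of_month: int) -> str:
    if 1 <= utc_day_of_month < 20:
        return "propagate"  # scan for and infect new IIS Web servers
    if 20 <= utc_day_of_month < 28:
        return "attack"     # flood www.whitehouse.gov with data
    return "sleep"          # days 28-31: go dormant

print(codered_mode(13))  # "propagate" -- the mode when the worm was found
print(codered_mode(20))  # "attack"
print(codered_mode(28))  # "sleep"
print(codered_mode(1))   # "propagate" -- a skewed clock restarts spreading
```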

We talked with NIPC between July 18th and July 19th to help them further understand CodeRed and the impact it was going to have on the White House Web server and the Internet as a whole. Time was short: it was already the 19th, and only a matter of a few hours remained before infected CodeRed servers would stop trying to infect new machines and start attacking the White House Web server.

We received a phone call on July 19th from Erkan Chase of the FBI, to whom Vince Rowe (our contact at the time to NIPC) had introduced us. Erkan Chase asked if Ryan Permeh and I would be able to send his superior an email within 10 minutes detailing what effect CodeRed would have on the White House's Web server and the Internet itself, and what they could do, if anything, to keep the worst from happening.

In our email we outlined that the minor effect of CodeRed would be the White House’s Web server going offline. The more significant effect would be that the Internet itself, in some parts of the world, could actually stop working or slow to a crawl because so many hundreds of thousands of systems were going to be pushing large amounts of data through the Internet pipes. We then outlined that the best course of action would be to take the White House’s Web server offline because if none of the worms could connect to the server then they wouldn’t be able to send the floods of data.

NIPC released their CodeRed worm alert on Thursday July 19, 2001. The conversation with Erkan Chase was probably one of the last communications we had with NIPC for reasons unknown to us.

A few hours later the original IP address of www.whitehouse.gov was "black holed", the website was moved to a new address, and the thousands of infected servers were unable to connect to the old address, thus preventing the flood of data from being sent. In the end the Internet was still standing, and the aftermath of CodeRed was simply that roughly half a million Web servers were still infected by the worm.

Between July 20th and July 28th there was not much CodeRed activity, nor were there many organizations actively warning people of what was yet to come.

As stated earlier, the CodeRed worm was written to basically go to sleep on the 28th (UTC) of the month. In a perfect world, CodeRed should have completely died on the 28th and we should have never heard about it again. However, since our original analysis we at eEye had warned that all it would take for the spreading to continue would be one infected system with its internal clock set incorrectly. On such a system the worm would have never gone to "sleep" and on August 1st would start infecting new systems, essentially starting the CodeRed worm all over again.

In a last-minute effort, Microsoft, NIPC, FedCIRC, ITAA, CERT, SANS, ISS, and ISA got together and released a press release on July 29, 2001, stating that CodeRed was going to return and emphasizing that all vulnerable systems needed to be patched. A large press conference was held, and eEye Digital Security, despite significant involvement at the beginning, was not included or recognized.

On Saturday, August 4th, 2001, eEye Digital Security was contacted by the security firm SecurityFocus.com because they had knowledge of a new worm that had been released. Within the binary data of this new worm was the word "CodeRedII", so it was obviously written after the discovery of the original worm on July 13th. We analyzed this new worm and found that it was much smarter than the original CodeRed. It propagated in a way that infected servers at a much faster rate than the first CodeRed. This new CodeRedII worm also installed a backdoor/trojan on infected Web servers. This backdoor/trojan program would allow an attacker to remotely break into any server infected with CodeRedII even after an administrator installed the security patch. The speed at which this new version of CodeRed could infect systems, and the malicious backdoor that it placed on those systems, seemed to indicate that this new worm was written by someone more technically cunning than the original CodeRed worm author.

In the end, when the dust settled, the Internet was still standing. There were, however, a total of about half a million Web servers that were compromised by CodeRed and CodeRedII. A few smaller computer networks were also disabled intermittently due to the influx of network traffic caused by CodeRed and its variants.

The CodeRed worm was in some ways one of the best things to happen to computer security in a long time. It was a much needed wake-up-call for software vendors and network administrators alike.

CodeRed could have caused much more damage than it did, and if the authors of CodeRed had really wanted to attempt to take down the Internet then they could most likely have easily done so.

Below are a few ways in which CodeRed could have been more devastating.

        1. Before CodeRed was released, information had spread throughout the Internet underground exposing ways in which a worm like CodeRed could spread itself across the Internet without any Intrusion Detection System being able to detect it. This possibility existed because of a design flaw in most IDS systems. If an attacker had written CodeRed in a way that exploited this design flaw, then CodeRed would have had much more time to spread before being detected, resulting in many more compromised machines under the worm's control that could be used to bring down the Internet. 

        2. CodeRed and CodeRedII were only able to infect Microsoft Windows 2000 Web servers, which make up only part of the 6 million IIS Web servers on the Internet. The rest of that 6 million figure is made up of Microsoft Windows NT 4.0 Web servers. If the attackers had written the worm to infect Windows NT 4.0 systems as well, then the worm would most likely have at least doubled the number of servers it was able to infect, bringing the number closer to 1 million. 

        3. The payload of the worm (the code left on a compromised machine) could have been much more devastating. Instead of simply attacking the White House's Web server, the worm could have performed a true DDoS (Distributed Denial of Service) attack against various websites or Internet backbones. Unlike most DDoS attacks that have taken place before, this worm could have had close to a million servers at its disposal instead of just a handful.

The scenarios are endless, but the point is that CodeRed was actually not nearly as devastating as it could have been.

What made all of this possible? What steps can be taken to help prevent things like this in the future? These are the most important questions, and luckily there is much we can learn from CodeRed to improve our current security standing.

There were two things that made the CodeRed worm possible: the vulnerability within the Microsoft Internet Information Web Server software and the fact that not nearly enough administrators installed the Microsoft supplied security patch. If the vulnerability had not existed within the software, or if administrators had installed the patch, then CodeRed would have never existed.

One of the first areas that needs improvement is the way that software vendors test their code for stability and security. I am a software engineer, so I know that mistakes do happen and programmers will now and then accidentally write vulnerable code. Software vendors, however, are usually not very motivated to take security seriously.

Most software vendors will take security just seriously enough to curb bad PR or news stories about vulnerabilities within their products. When a vulnerability is found within a software product used on millions of servers, the press will typically write articles to expose the information to a larger audience. Software vendors should not wait until they have been publicly embarrassed to take security more seriously. Anything that a software vendor does to make their software more secure after a PR fiasco is something they should have done before the fact.

Also, a lot of the time security is the last thing discussed when companies map out a product development cycle. Typically, a company will put more focus on making their products perform better and have more features than on how secure they will be in the end. Therefore security is usually made to fit around the current architecture of products. Security needs to be of the greatest importance when designing software to be run on thousands of servers. Software products must be made so that security is designed first and then everything else (features/functionality) is made to fit around the security architecture. This is typically a problem for most software development firms because in most product markets there is a race to get new features to market before the competitors do. Usually the race to get those new features out results in new vulnerabilities to exploit.

Software vendors should also take a better approach to notifying customers of vulnerabilities and patch releases. Software products should refuse to run if a critical security patch is missing, and every software vendor should have an email alert system that clients can subscribe to in order to receive notification anytime a new patch is released.

When it comes to installing patches, many administrators are actually sometimes more afraid of the security patch than of the vulnerability itself. The reason is that some software vendors have had bad track records in releasing security patches; in fact, a lot of vendors have had to re-release security patches numerous times because the original security patch did not function correctly, and in some cases broke a system component that had nothing to do with the component the patch was supposed to be fixing. It is for this reason, because patches can sometimes lead to system instability, that administrators have grown hesitant to install security patches, and sometimes will wait as long as two weeks in order to make sure a patch is safe to install.

Software vendors are not the only ones at fault here, though. Network administrators and managers at various corporations are also to blame for faulty security. Going back to CodeRed as our example, we can see that the largest reason for CodeRed spreading as it did was that a lot of network administrators did not install the Microsoft security patch. Microsoft has an email notification system that will notify administrators anytime Microsoft releases a new security patch. Last time I checked, there were roughly two hundred thousand people subscribed to Microsoft's security mailing list. That is obviously a very small number of people compared to the number of administrators who run Microsoft software within their networks. As an example, there are roughly 6 million Microsoft IIS Web servers on the Internet. Only two hundred thousand administrators being subscribed to Microsoft's security mailing list is unacceptable. Administrators need to be proactive in finding ways to stay up to date with the latest security patch releases. Software vendors also need to be more proactive in doing everything possible to let users know what they can do to stay up to date with the latest security patches.

It should also be noted that many companies have a very small budget for an IT staff, or do not even have an IT staff. This leads to a lot of problems for administrators when it comes to securing a company's network. Many administrators are already overworked with their day-to-day tasks without having to worry about security. Companies need to make sure they have the staff needed to maintain the security of their network. Companies must also do their best to provide their IT staff with the budgets they need to be able to maintain the tools that will help them keep their network secure. Also, corporate managers need to understand that security must be taken seriously. For example, administrators are usually caught in the dilemma of not being able to install a newly released security patch, which requires their Web server to go offline for a few minutes, for fear they may get in trouble with management for having server downtime.

To help get security messages out to the public, there needs to be a centralized organization for vulnerability alerting. There are a few cyber watch organizations (NIPC, SANS, CERT) that currently watch for large scale attacks (i.e. worms, larger vulnerabilities, viruses) however I feel these organizations would be able to accomplish a lot more if they sent alerts about all vulnerabilities instead of only vulnerabilities deemed "serious enough" to cover. There should be a website or email alert system that administrators could join that would allow them to find out about all vulnerabilities and patches.

In my opinion, a government-run organization, like NIPC, has the best chance of succeeding because it will not have the financial motivations of a corporate entity. Whether it is through the release of security auditing tools for issues such as CodeRed, or initiating a system of notification about all vulnerabilities, these are just a couple of small things that would make an organization very useful to the average administrator trying to keep his systems secure.

Also, an organization such as NIPC could perform real-world technical research on a regular basis. One example of how such an organization could discover and alert about worms almost as soon as they are released would be to set up a large-scale "honeypot." A honeypot is a term that security professionals use to describe a dummy network that has been set up, typically to trap and study hackers. If an organization were to own a large enough block of IP addresses (computer Internet addresses) from various Internet providers around the world, then it could build and maintain specially designed honeypots that are able to detect new worms/viruses almost as soon as they are released, or at least much faster than they are currently detected.
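The honeypot idea above can be sketched in miniature: a listener on an otherwise unused address that accepts every connection, logs it, and flags requests that look like .ida-style exploit attempts. This is an illustrative sketch only; the port number, the filler-length threshold in the heuristic, and the log format are all assumptions for the example, not part of any real deployment.

```python
# Minimal honeypot sketch (illustrative assumptions throughout): log
# incoming requests and flag ones resembling CodeRed's .ida probes.

import socket
from datetime import datetime, timezone

def looks_like_ida_probe(request: bytes) -> bool:
    """Crude heuristic: CodeRed's requests hit the .ida extension with a
    long run of filler bytes ("N" or "X") to trigger the buffer overflow.
    The 100-byte filler threshold is an assumption for this sketch."""
    return b".ida?" in request and (b"N" * 100 in request or b"X" * 100 in request)

def run_honeypot(host: str = "0.0.0.0", port: int = 8080, max_requests: int = 10) -> None:
    """Accept `max_requests` connections and log each one; a real deployment
    would listen indefinitely across a large block of dark IP addresses."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        for _ in range(max_requests):
            conn, addr = srv.accept()
            with conn:
                data = conn.recv(4096)
                stamp = datetime.now(timezone.utc).isoformat()
                print(f"{stamp} {addr[0]} worm-like={looks_like_ida_probe(data)}")
```

Because every address in a honeypot block is unused, any connection at all is suspicious, and a sudden surge of flagged requests from many distinct sources is exactly the early-warning signal for a new worm that the paragraph above describes.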

I referenced the CodeRed worm heavily in this document because I feel that by analyzing it closely we can learn a lot about what went wrong and what we can do in the future to prevent things like CodeRed from taking a major toll on security and the Internet.

In conclusion, the biggest problem facing security today is that there are too many people talking about what we could do or what the threat is, and not enough people doing real work that will result in the mitigation or elimination of those threats.


Initial CodeRed Analysis - http://www.eeye.com/html/Research/Advisories/AL20010717.html
CodeRedII Analysis - http://www.eeye.com/html/Research/Advisories/AL20010804.html