Whenever a new virus threat appears, there is a period during which users are exposed to infection: the gap between the moment the virus appears and the moment antivirus laboratories provide their customers with an update to neutralize the new code gives it time to spread.
This period is usually short – a few hours in the most difficult cases – but the authors of today's viruses and worms, aware of this unprotected window, have managed to make their creations spread far more quickly.
If we focus on the most rapid threats to have appeared to date – such as Sasser, Blaster or SQLSlammer – all of these managed to cause damage within a matter of minutes, well before any virus laboratory had time to react.
To alleviate this problem, some antivirus companies have tried automatic systems for detecting new viruses: the user sends in a suspicious file, and a disinfection routine for the suspected virus is generated automatically. The major drawback of this approach is that the user needs to recognize that a file is suspicious in the first place, which is a lot to ask of users with limited IT knowledge.
One way of detecting unknown threats has been to use heuristics: analyzing the internal code of a program to detect possibly malicious instructions. This works well when the code performs actions that are directly harmful to the system, such as overwriting the boot sector of a hard disk. Unfortunately, today's threats don't use instructions as obvious as “format c:”. Virus creators are well aware that even the most basic heuristic engines will quickly detect this sort of attack, so they use techniques that let their creations go unnoticed by the classical detection methods. SQLSlammer and Sasser, for example, entered computers through instructions carried directly in TCP/IP traffic.
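The kind of basic heuristic engine described above can be sketched as a scan for known-dangerous byte patterns inside a file. This is a deliberately minimal illustration; the pattern list and function names are assumptions for the example, and real engines use far richer rules than simple substring matching.

```python
# Minimal heuristic-scan sketch: flag file contents that include byte
# patterns associated with directly harmful actions. The pattern list
# below is illustrative only, not a real detection database.
SUSPICIOUS_PATTERNS = [
    b"format c:",               # destructive shell command
    b"\\\\.\\physicaldrive0",   # raw disk access path on Windows
]

def heuristic_scan(data: bytes) -> list:
    """Return the suspicious patterns found in the given file contents."""
    lowered = data.lower()
    return [p.decode() for p in SUSPICIOUS_PATTERNS if p in lowered]

# A sample "program" embedding a destructive command is flagged:
sample = b"echo hello && FORMAT C: /q"
print(heuristic_scan(sample))  # ['format c:']
```

As the article notes, this only catches code that uses obvious instructions; a worm that spreads purely over the network never presents such a pattern to the scanner.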
By exploiting a vulnerability (in SQL Server in one case, in Windows in the other), both SQLSlammer and Sasser entered computers without arousing suspicion. Because they did not arrive in a file by e-mail or on a disk, and did not use a potentially harmful instruction, classic antivirus programs did not detect them. What possible solution can there be to this problem?
In principle, once a malicious code has entered the system it must perform some kind of action in order to reproduce itself, such as exploiting a vulnerability that defeats the system's security. By using a dedicated process to monitor the basic elements of the system, it is possible to detect such anomalies. For example, a system that monitors the number of outgoing e-mails would register a spike in activity if a worm were resending copies of itself on a massive scale. Faced with a sharp rise in e-mail activity, there would be no doubt that something strange was happening, and finding the process generating those e-mails would be enough to locate a probable e-mail worm.
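The outgoing-mail monitor described above can be sketched as a sliding-window rate check per process: count recent sends and flag any process whose rate exceeds a baseline. The window size, threshold and process name here are assumed values for illustration, not parameters from any real product.

```python
# Sketch of a per-process outgoing-mail monitor: keep timestamps of
# recent sends in a sliding window and flag a process once its rate
# exceeds an assumed "normal" threshold.
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # assumed observation window
THRESHOLD = 20        # assumed sends-per-window considered anomalous

class MailMonitor:
    def __init__(self):
        self.events = defaultdict(deque)  # process name -> send timestamps

    def record_send(self, process: str, now: float) -> bool:
        """Record one outgoing mail; return True if the rate is anomalous."""
        q = self.events[process]
        q.append(now)
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > THRESHOLD

mon = MailMonitor()
# A worm-like burst: 30 mails in under one second from the same process.
flags = [mon.record_send("wormproc.exe", t * 0.03) for t in range(30)]
print(any(flags))  # True: the burst trips the threshold
```

Once a process trips the threshold, the monitor has exactly the lead the article describes: a candidate process to examine as a probable e-mail worm.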
In the same way, monitoring certain actions within vital elements of the system makes it possible to avoid some of the typical problems associated with viruses. To return to the example of SQLSlammer, the only way to stop it is through deep inspection of every TCP/IP packet in a communication, both incoming and outgoing. And in this case inspection must go much further than signature matching: if a virus of this type can attack within a few minutes, it is not enough to look for a simple pattern that matches a virus signature; the reason for the transmission of the packet and its contents must be analyzed in order to identify it as a dangerous action. The analysis must cover TCP, UDP and ICMP, in order to detect attacks at levels above the network layer.
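One way to picture content-aware inspection, as opposed to signature matching, is a rule that reasons about why a packet looks wrong. The sketch below flags oversized UDP payloads sent to the SQL Server Resolution port (1434, the port SQLSlammer targeted); the size limit and the `Packet`/`inspect` names are assumptions made for this example, and a real engine would apply many such rules across TCP, UDP and ICMP.

```python
# Sketch of content-aware packet inspection: instead of matching a fixed
# virus signature, examine protocol, destination port and payload
# characteristics to judge whether the packet makes sense.
from dataclasses import dataclass

@dataclass
class Packet:
    proto: str       # "tcp", "udp" or "icmp"
    dst_port: int
    payload: bytes

# Assumed size limit: legitimate SQL Server Resolution queries are tiny.
MAX_SAFE_RESOLUTION_PAYLOAD = 100

def inspect(pkt: Packet) -> bool:
    """Return True if the packet looks dangerous and should be blocked."""
    if pkt.proto == "udp" and pkt.dst_port == 1434:
        # A large payload to this port suggests a buffer-overflow
        # attempt rather than a normal resolution request.
        return len(pkt.payload) > MAX_SAFE_RESOLUTION_PAYLOAD
    return False

print(inspect(Packet("udp", 1434, b"\x04" + b"A" * 375)))  # True
print(inspect(Packet("udp", 1434, b"\x02query")))          # False
```

The point of the rule is the one the article makes: the verdict comes from analyzing the packet's purpose and contents, so it can block a brand-new worm for which no signature exists yet.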
With a detection system like the one described, it is possible to avert denial-of-service attacks, port scanning, direct hacker attacks, IP spoofing, MAC spoofing, and so on. However, technology that performs this detection process cannot on its own offer users or network administrators an adequate level of protection against viruses and intruders. Classic antivirus protection should not be neglected, as the protection it provides is highly effective against known threats, even if it needs to be complemented with detection systems that go beyond simple scanning to the analysis of processes being executed.
The arrival of these technologies is imminent, and this will certainly boost the level of protection that all Internet users in 2004 need in the face of new, unknown threats.