Throttling Viruses: Restricting Propagation to Defeat Malicious Mobile Code, by Matthew M. Williamson. 18th Annual Computer Security Applications Conference (ACSAC 2002), Las Vegas, Dec 2002.

This paper presents a very simple technique to slow the propagation of malicious code. It relies on the observation that such programs have a "propagation phase" during which they connect rapidly to a large number of hosts, behaviour that is quite distinct from the "normal" behaviour of most hosts. Detecting and punishing this behaviour can therefore be automated: connections to more than a certain number of "new" hosts within a short time interval are delayed. This blunts the rapid propagation shown by worms like CodeRed and Nimda, leaving human intervention to deal with the underlying problem.
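
The mechanism can be pictured as a small rate limiter in front of outgoing connections. The following is a minimal sketch of that idea (our illustration, not the paper's implementation), assuming a fixed-size working set of recently contacted hosts and a delay queue drained at the allowed rate; class, method, and parameter names are made up for the example:

    from collections import deque

    class Throttle:
        """Illustrative connection throttle: connections to "known" hosts pass
        straight through; connections to "new" hosts are queued and released
        at a fixed rate (e.g. 1-2 per second)."""

        def __init__(self, working_set_size=5, rate=1.0):
            self.working_set = deque(maxlen=working_set_size)  # recently contacted hosts
            self.delay_queue = deque()                         # pending connections to new hosts
            self.interval = 1.0 / rate                         # seconds between releases

        def request(self, host, connect):
            """Handle an outgoing connection attempt; `connect` opens the connection."""
            if host in self.working_set:
                connect()                                  # known host: no delay
            else:
                self.delay_queue.append((host, connect))   # new host: queue it

        def tick(self):
            """Call once every `interval` seconds: release one queued connection."""
            if self.delay_queue:
                host, connect = self.delay_queue.popleft()
                self.working_set.append(host)              # host becomes "known"
                connect()

Under normal use the delay queue stays short, so legitimate traffic is barely delayed; a worm scanning many addresses fills the queue and is held to the release rate.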

The main advantage of this scheme is that it is tolerant of false positives. The author finds that even restricting the allowed number of connections to new hosts to as few as 1-2 per second does not disturb normal behaviour, but does restrict viruses and worms.

Simulations based on trace data show that a) "bad" behaviour is penalized and b) "good" behaviour is largely unaffected by the scheme. The paper suggests the network driver as a suitable place to implement the throttle.

Discussion/pros/cons:

* An advantage is that the scheme is very simple and has low overhead; it could easily be integrated into "personal firewalls". Even if viruses adapt by propagating more slowly (so as not to be caught), that is still a win, since slowing propagation is the goal.

* The scheme has tunable parameters, so it can be adjusted to match the normal usage pattern.

* It might be particularly useful for servers, which don't usually initiate outgoing connections.

* The traffic sample used in the analysis is very limited; it is not enough to establish that "good" traffic would remain unaffected in the general case.

* The scheme targets a particular type of worm, one that rapidly scans for new hosts; it does not solve the email-virus type of problem.

* Another strength is that it is automatic and reasonably benign, which makes it practical to deploy.

* On the negative side, even throttled to 1 or 2 new connections per second (an order of magnitude or two slower), propagation may still be too fast for human intervention in many cases.

* The main drawback is the lack of an implementation, without which it is hard to tell how well it actually works. (The paper acknowledges this.)

* While the present form is suitable for things like Web traffic, a normal profile per type of connection (per application or port) might be more appropriate than a single host-wide connections-per-time limit. This would avoid penalizing applications that legitimately make relatively large numbers of connections. (A sketch of such a per-port variant follows this list.)
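
One way this per-port variant could look, reusing the Throttle class sketched above; the ports, rates, and names here are purely illustrative assumptions, not anything proposed in the paper:

    class PerPortThrottle:
        """Illustrative variant: each destination port (roughly, each application
        protocol) gets its own working set, delay queue, and allowed rate."""

        def __init__(self, rates=None, default_rate=1.0):
            self.rates = rates or {}        # e.g. {80: 3.0, 25: 0.5} -- made-up values
            self.default_rate = default_rate
            self.throttles = {}             # destination port -> Throttle

        def request(self, host, port, connect):
            if port not in self.throttles:
                self.throttles[port] = Throttle(rate=self.rates.get(port, self.default_rate))
            self.throttles[port].request(host, connect)

        # In a real implementation each per-port throttle's queue would be
        # drained on its own timer, according to its own allowed rate.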

Voting: 2 weak rejects and 2 strong accepts, the rest all weak accepts.