M. Costa, J. Crowcroft, M. Castro, A. Rowstron, L. Zhou, L. Zhang, and P. Barham. Vigilante: End-to-end containment of Internet worms. In Proc. of the 20th ACM Symp. on Operating Systems Principles (SOSP), Brighton, UK, October 2005.

Automating worm containment is essential, since worms spread too fast for humans to respond. This paper addresses the problem of automatic worm containment. Its most interesting contribution is a framework for creating verifiable alerts, together with automatic verification of those alerts by other parties. From a very high-level perspective, using the approach presented in this paper, one can receive a message from another party that *proves* that one's own machine has a vulnerability. The machine can then automatically create a filter that protects it from worms aiming to exploit that specific vulnerability, and it can (automatically) spread the word to other hosts.

More specifically, the paper introduces the concept of self-certifying alerts (SCAs), which enable an end-to-end containment architecture. An SCA is a machine-verifiable proof of the existence of a vulnerability. SCAs can be passed to other hosts, where they can be automatically verified even when the receiver has no trust in the sender. Once an SCA is verified, hosts can automatically generate host-based filters to block the worm traffic, as well as pass the SCA on to other hosts.

1. Self-certifying alerts (SCAs) can be of three types:
- Arbitrary Execution Control alerts identify vulnerabilities that allow worms to redirect execution to arbitrary pieces of code in a service's address space.
- Arbitrary Code Execution alerts identify code-injection vulnerabilities; they describe how to execute an arbitrary piece of code that is supplied in a message sent to the vulnerable service.
- Arbitrary Function Argument alerts identify data-injection vulnerabilities that allow worms to change the value of arguments to critical functions.

2. Alert generation can be done by any detection engine, provided that it generates an SCA of a supported type. The authors demonstrate this with two types of detection engines:
- Engines that identify attempts to execute code in a protected page.
- Engines that detect infections by tracking the flow of dirty data (data received in certain input operations), a technique called *dynamic dataflow analysis* (see the sketch after the pros and cons below).

3. Alert distribution: the authors propose using a secure overlay network to broadcast SCAs.

Pros:
- The approach is host-based, not network-based, which increases the chance of containing worms.
- The use of the emerging technology of VMs to eliminate the risk of getting infected while verifying an SCA is very appropriate.
- Detection, alert generation, verification, and filter generation are all automated.
- The evaluation was strong; in particular, using real worms and simulating a large topology was convincing.
- Support for detection-engine diversity (which reduces the false negative rate).

Cons:
- The cost of keeping VMs running for verification.
- The need for everyone to join an overlay.
- The need for thousands of super-peers and detectors across the Internet to be really effective.
(Some of the cons are elaborated in the discussion points below.)
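To make the detection step concrete, here is a minimal sketch of dynamic dataflow analysis. It is not Vigilante's implementation: the toy instruction set, the TaintVM class, and the method names (recv, load, store, jump_reg) are all illustrative assumptions. What it demonstrates is the idea from the paper: bytes arriving from input operations are marked dirty, the dirty bit propagates with data movement, and an alert is raised when control flow is transferred to a dirty value.

```python
# Toy sketch of dynamic dataflow ("taint") analysis. All names and the
# instruction set are illustrative assumptions, not the paper's design.

DIRTY = True

class TaintVM:
    def __init__(self):
        self.regs = {}   # register name -> (value, dirty?)
        self.mem = {}    # address       -> (value, dirty?)

    def recv(self, addr, payload):
        """Model an input operation: every byte written is marked dirty."""
        for i, b in enumerate(payload):
            self.mem[addr + i] = (b, DIRTY)

    def load(self, reg, addr):
        """reg <- [addr]: the value and its dirty bit propagate together."""
        self.regs[reg] = self.mem.get(addr, (0, False))

    def store(self, addr, reg):
        """[addr] <- reg: propagation works the same way on stores."""
        self.mem[addr] = self.regs.get(reg, (0, False))

    def jump_reg(self, reg):
        """Indirect jump: transferring control to a dirty value is the
        signal used to generate an execution-control alert."""
        value, dirty = self.regs.get(reg, (0, False))
        if dirty:
            raise RuntimeError(
                f"ALERT: control flow redirected to dirty address {value:#x}")
        return value

vm = TaintVM()
vm.recv(0x1000, b"\x41\x41\x41\x41")   # worm bytes arrive over the network
vm.load("eax", 0x1000)                 # dirty data flows into a register
try:
    vm.jump_reg("eax")                 # jump through it -> alert fires
except RuntimeError as alert:
    print(alert)
```

In the paper's architecture, the information behind such an alert is distilled into an SCA, which a receiving host can then replay against its own sandboxed (VM-hosted) copy of the vulnerable service to verify it independently.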
Summary of Discussion Points:

1. Although the presented approach seems effective and fast, it appears to create an arms race, and no matter how fast the containment process is, worm writers may keep the upper hand because they move first.

2. The current SCA format requires the exact address, code, or argument to appear in the message. A worm could perform some computation on the input, or transform it in some way, before it actually performs the malicious action. This calls for a possible improvement to the SCA format: making it more complex and expressive.

3. Deployability concerns:
3.1. Is it reasonable to assume that people are willing to join an overlay just for the purpose of worm containment?
- Most people said no. Even if they were, all of them would need certificates from a trusted third party, which may not be easily feasible.
3.2. What is the incentive for being a super-peer or a detection engine, given that a huge number of them are required for the overlay? What could be the business model behind this?
- Some people said companies like Microsoft or Symantec might have an interest in this. Others thought that if it could be set up to use unused cycles of machines around the world, many people might be willing to contribute, something like "Worm-Containment at Home!" One criticism was that the people willing to become super-peers are probably very computer-savvy and unlikely to be running older versions of software, so they may not be able to help.

4. Can the three types of alerts supported now cover most infections?
- They seem reasonably effective at containing attacks such as buffer overflows and code injection, but they might fall short against attacks that exploit a semantic bug in the software, or attacks like SQL injection.

5. Even with the rate-limiting scheme, can DoS still be an issue?
- There are certainly some subtle points here that are not addressed in the paper. If rate limiting is based on the total size of SCAs, you are vulnerable to DoS from a flood of small SCAs; if it is based on the number of SCAs, you can be DoSed by a few very large ones (see the sketch below).
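One way out of the size-versus-count dilemma in point 5 is to budget both dimensions at once. Below is a minimal sketch of such a per-peer limiter; the class name, limits, and window length are illustrative assumptions, not something from the paper.

```python
# Toy per-peer SCA rate limiter that enforces both a count budget and a
# byte budget per time window, so neither many small SCAs nor a few huge
# ones can exhaust the verifier. All limits here are made-up examples.
import time

class SCARateLimiter:
    def __init__(self, max_count=10, max_bytes=1 << 20, window=60.0):
        self.max_count = max_count   # cap on SCAs per peer per window
        self.max_bytes = max_bytes   # cap on total SCA bytes per peer per window
        self.window = window         # window length in seconds
        self.state = {}              # peer -> (window_start, count, bytes)

    def allow(self, peer, sca_size):
        now = time.monotonic()
        start, count, size = self.state.get(peer, (now, 0, 0))
        if now - start > self.window:   # window expired: reset both budgets
            start, count, size = now, 0, 0
        if count + 1 > self.max_count or size + sca_size > self.max_bytes:
            return False                # over either budget: drop or queue
        self.state[peer] = (start, count + 1, size + sca_size)
        return True

limiter = SCARateLimiter()
print(limiter.allow("peer-a", 4096))    # True: within both budgets
print(limiter.allow("peer-a", 2 << 20)) # False: would blow the byte budget
```

An SCA rejected by either budget is simply not verified in that window, which bounds the verification work any single peer can impose regardless of whether it floods small alerts or sends a few enormous ones.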