Artificial Neural Networks for Misuse Detection
James Cannady

Summary
-------
The author attempts to use neural networks to learn the characteristics of typical network attacks and then to use the trained network to detect new attacks. This is a novel idea because, previously, only rule-based expert systems were used for misuse detection. The author claims that, owing to the analytical strength of neural networks, the approach proposed in this paper is better. He claims that rule-based systems suffer from an inability to detect extended attacks, attacks occurring in isolation, or attacks with multiple attackers working together. Neural networks have been used previously in anomaly detection; this is the first known use of neural networks in misuse detection. The author discusses the advantages and disadvantages involved in using neural networks and also discusses various implementation options. He then describes his own approach: the characteristics of the neural network he used, the characteristics of the data used to train and test it, and the results he obtained.

Discussion
----------
- Anomaly detection would be an easier and more likely application for neural networks, because it should be easier to train the network for that task.
- Misuse detection, however, involves a smaller set of behaviors.
- A performance comparison with expert systems is not done in the paper, probably because the aim is simply to show that using neural networks for this purpose actually works.
- Discussion about the product used to configure the neural network (RealSecure).
- Problem with using an expert system for security, e.g. as an anti-virus: the expert system must be frequently updated with information on specific attack patterns. It was thought that a neural network is a more general solution (i.e. one that does not need to be fed specific attack patterns) and is self-learning, and hence superior to expert systems. However, it was also mentioned that while feeding specific attack patterns into an expert system ensures for sure that the system is now capable of detecting those attacks, we cannot say for sure that the neural network will *definitely* learn them, and hence we are not sure it will definitely detect a similar attack in the future.
- Discussed "poisoning" the learning process of the neural network, i.e. changing the baseline.
- It is difficult to analyze the working of the neural network: the "Black Box Problem" mentioned in the paper. We don't know which node does what in the learning/detection process. Comment about a "neural network lobotomy". :-)
- It was mentioned that the neural network should be able to detect ALL 360 attack signatures, and that this is what the training-data correlation in the results section refers to.
- The motivation behind using the source and destination addresses as event-record data elements was questioned. It was mentioned that this was to detect floods.
- It was suggested that some mechanism to record the lapse of time, as well as incorrect IP flags, be used in training the neural network so that it can detect SYN floods, for example (see the sketch after this list).
- Comments on the results (root mean square error, correlation, etc.): how would these results have compared to those for expert systems? Also, the ability to detect attacks represented by the known attack signatures used in training is reported, but the whole point of the paper is to be able to detect NEW types of attacks based on the signatures of older attack types; this is not evaluated.
- Confusion over what the graphs mean - axes not labelled!!!
- Details about the neural network - how many internal nodes?
- The configuration in which the neural network is used as a front end for the expert system was discussed.
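To make the data-element and results points above more concrete, below is a minimal sketch in Python/NumPy of how an event record might be encoded as a feature vector for a small feedforward network, and of how root-mean-square error and correlation between network outputs and desired outputs could be computed. The field list, scaling, hidden-layer size, and weights here are illustrative assumptions, not the configuration used in the paper; the inter-arrival-time and flag features reflect the suggestions made in the discussion rather than anything reported by the author.

    import numpy as np

    # Hypothetical event-record encoding, loosely following the kinds of data
    # elements discussed above (protocol, ports, addresses, flags, timing).
    # The exact fields and scaling used in the paper are not reproduced here.
    def encode_event(protocol_id, src_port, dst_port, src_addr, dst_addr,
                     tcp_flags, secs_since_last):
        """Map one event record to a feature vector scaled roughly into [0, 1]."""
        def addr_to_float(addr):
            # Collapse a dotted-quad address into a single scaled number.
            octets = [int(o) for o in addr.split(".")]
            return sum(o * 256 ** (3 - i) for i, o in enumerate(octets)) / 2 ** 32

        return np.array([
            protocol_id / 255.0,                 # e.g. 6 = TCP, 17 = UDP, 1 = ICMP
            src_port / 65535.0,
            dst_port / 65535.0,
            addr_to_float(src_addr),
            addr_to_float(dst_addr),
            tcp_flags / 255.0,                   # raw flag byte; odd combinations stand out
            min(secs_since_last, 60.0) / 60.0,   # crude inter-arrival-time feature
        ])

    def forward(x, w1, b1, w2, b2):
        """One hidden layer, sigmoid activations, single 'attack likelihood' output."""
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        hidden = sigmoid(x @ w1 + b1)
        return sigmoid(hidden @ w2 + b2)

    def evaluate(outputs, targets):
        """The two figures of merit mentioned in the results discussion."""
        rms_error = float(np.sqrt(np.mean((outputs - targets) ** 2)))
        correlation = float(np.corrcoef(outputs, targets)[0, 1])
        return rms_error, correlation

    # Example: score one simulated event with random (untrained) weights.
    rng = np.random.default_rng(0)
    w1, b1 = rng.normal(size=(7, 9)), np.zeros(9)   # 9 hidden nodes: an arbitrary guess
    w2, b2 = rng.normal(size=9), 0.0
    x = encode_event(6, 40321, 80, "10.0.0.5", "192.168.1.20", 0x02, 0.01)
    print(forward(x, w1, b1, w2, b2))

The evaluate() helper only shows the shape of the computation behind the reported numbers (comparing network outputs against desired outputs over a set of records); it says nothing about how well the actual network in the paper performed, or about detection of attack types absent from training.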
Voting results
--------------
Strong accept - 0
Weak accept - 5
Weak reject - 7
Strong reject - 0

--------------------------------------------------------------------------------
Summary by Samarth Harish Shah