
IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans

Issue 4 • July 2001

Displaying Results 1 - 12 of 12
  • Editorial: Protecting the network: research directions in information assurance

    Publication Year: 2001, Page(s): 249 - 252
    PDF (45 KB) | Freely Available from IEEE
  • Probabilistic techniques for intrusion detection based on computer audit data

    Publication Year: 2001, Page(s): 266 - 274
    Cited by: Papers (57) | Patents (8)
    PDF (260 KB) | HTML

    This paper presents a series of studies on probabilistic properties of activity data in an information system for detecting intrusions into the information system. Various probabilistic techniques of intrusion detection, including decision tree, Hotelling's T² test, chi-square multivariate test, and Markov chain, are applied to the same training set and the same testing set of computer audit data for investigating the frequency property and the ordering property of computer audit data. The results of these studies provide answers to several questions concerning which properties are critical to intrusion detection. First, our studies show that the frequency property of multiple audit event types in a sequence of events is necessary for intrusion detection; a single audit event at a given time is not sufficient for intrusion detection. Second, the ordering property of multiple audit events provides additional advantage over the frequency property for intrusion detection. However, unless the scalability problem of complex data models taking into account the ordering property of activity data is solved, intrusion detection techniques based on the frequency property provide a viable solution that produces good intrusion detection performance with low computational overhead.
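
    As a rough illustration of the frequency-property techniques the paper compares, the sketch below (in Python) computes a chi-square statistic over audit event-type counts in a window against a baseline profile of normal behaviour. The event names, the baseline probabilities, and the idea of a tuned alarm threshold are assumptions for illustration only, not the authors' data or procedure.

        from collections import Counter

        def chi_square_score(window_events, baseline_freq):
            """Compare event-type counts in an audit window with the counts
            expected under a baseline (normal-behaviour) frequency profile."""
            n = len(window_events)
            observed = Counter(window_events)
            score = 0.0
            for event_type, p in baseline_freq.items():
                expected = n * p
                if expected > 0:
                    score += (observed.get(event_type, 0) - expected) ** 2 / expected
            return score

        # A window dominated by privilege-related events scores far above normal;
        # an alarm threshold would be tuned on the training data.
        baseline = {"login": 0.40, "file_read": 0.50, "su": 0.10}   # assumed profile
        window = ["file_read", "su", "su", "su", "login"]           # assumed audit slice
        print(chi_square_score(window, baseline))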

  • Simulation of self-similarity in network utilization patterns as a precursor to automated testing of intrusion detection systems

    Publication Year: 2001, Page(s): 327 - 331
    Cited by: Papers (6)
    PDF (160 KB) | HTML

    The behavior of a certain class of automatic intrusion detection systems (IDSs) may be characterized as sensing patterns of network activity which are indicative of hostile intent. An obvious technique to test such a system is to engage the IDSs of interest, and then use human actors to introduce the activities of a would-be intruder. While having the advantage of realism, such an approach is difficult to scale to large numbers of intrusive behaviors. Instead, it would be preferable to generate traffic which includes these manifestations of intrusive activity automatically. While such traffic would be difficult to produce in a totally general way, there are some aspects of network utilization which may be reproducible without excessive investment of resources. In particular, real network loading often exhibits patterns of self-similarity, which may be seen at various levels of time scaling. These patterns should be replicated in simulated network traffic as closely as is feasible, given the computational ability of the simulator. We propose the use of multiresolution wavelet analysis as a technique which may be used to accomplish the desired detection, and subsequent construction, of self-similarity in the simulated traffic. Following a multiresolution decomposition of the traffic using an orthogonal filterbank, the resulting wavelet coefficients may be filtered according to their magnitude; some of the coefficients may be discarded, yielding an efficient representation. We investigate the effect of compression upon the reconstructed signal's self-similarity, as measured by its estimated Hurst parameter.
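
    A minimal sketch of the pipeline the abstract outlines, assuming the NumPy and PyWavelets packages: decompose a traffic trace with an orthogonal wavelet filterbank, zero out the smaller coefficients, reconstruct, and compare Hurst-parameter estimates (here via the wavelet-energy slope) before and after compression. The trace, wavelet, retention fraction, and estimator details are illustrative assumptions, not the authors' implementation.

        import numpy as np
        import pywt

        def wavelet_hurst(trace, wavelet="db4", levels=6):
            """Estimate H from the slope of log2(detail energy) versus scale."""
            coeffs = pywt.wavedec(trace, wavelet, level=levels)
            details = coeffs[1:]                       # cD_levels (coarse) ... cD_1 (fine)
            scales = np.arange(levels, 0, -1)
            log_energy = [np.log2(np.mean(d ** 2)) for d in details]
            slope, _ = np.polyfit(scales, log_energy, 1)
            return (slope + 1) / 2                     # slope ~ 2H - 1 for LRD traffic

        def compress(trace, wavelet="db4", levels=6, keep=0.2):
            """Keep only the largest `keep` fraction of coefficients, then reconstruct."""
            coeffs = pywt.wavedec(trace, wavelet, level=levels)
            cutoff = np.quantile(np.concatenate([np.abs(c) for c in coeffs]), 1 - keep)
            kept = [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]
            return pywt.waverec(kept, wavelet)

        trace = np.abs(np.random.standard_normal(4096)).cumsum() % 50   # stand-in trace
        print(wavelet_hurst(trace), wavelet_hurst(compress(trace)[: len(trace)]))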

  • Using clustering to discover the preferences of computer criminals

    Publication Year: 2001, Page(s): 311 - 318
    PDF (184 KB) | HTML

    The ability to predict computer crimes has become increasingly important. The paper describes a method for discovering the preferences of computer criminals. This method involves sequential clustering based on the variance of clusters discovered in higher order clustering. These discovered preferences can be used for the direct protection of computer systems against ongoing attacks or for the construction of simulations of future attacks.
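
    The sketch below, assuming NumPy and scikit-learn, shows one plausible reading of sequential clustering based on cluster variance: cluster the attack records, then repeatedly re-cluster whichever cluster is most spread out to expose finer-grained preferences. The features, cluster counts, and stopping rule are invented for illustration and are not the authors' actual procedure.

        import numpy as np
        from sklearn.cluster import KMeans

        def sequential_cluster(records, k=3, rounds=2):
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(records)
            for _ in range(rounds):
                # re-cluster the currently most spread-out (highest-variance) cluster
                present = np.unique(labels)
                worst = max(present, key=lambda c: records[labels == c].var())
                members = labels == worst
                if members.sum() <= k:
                    break
                sub = KMeans(n_clusters=k, n_init=10).fit_predict(records[members])
                labels[members] = labels.max() + 1 + sub   # assign fresh cluster ids
            return labels

        records = np.random.rand(200, 4)     # stand-in attack-preference features
        print(np.bincount(sequential_cluster(records)))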

  • NetCamo: camouflaging network traffic for QoS-guaranteed mission critical applications

    Publication Year: 2001, Page(s): 253 - 265
    Cited by: Papers (20)
    PDF (244 KB) | HTML

    This paper presents the general approach, design, implementation, and evaluation of NetCamo, a system to prevent traffic analysis in systems with real-time requirements. Integrated support for both security and real-time performance is becoming necessary for computer networks that support mission critical applications. This study focuses on how to integrate both the prevention of traffic analysis and guarantees for worst-case delays in an internetwork. We propose and analyze techniques that efficiently camouflage network traffic and correctly plan and schedule the transmission of payload traffic so that both security and real-time requirements are met. The performance evaluation shows that our NetCamo system is effective and efficient. Using the error between the target camouflaged traffic and the observed (camouflaged) traffic as a metric to measure the quality of the camouflaging, we show that NetCamo achieves very high levels of camouflaging without compromising real-time requirements.
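
    As a toy illustration of the camouflaging idea, the sketch below transmits on a fixed slot schedule, sending a queued payload packet when one is waiting and a dummy (padding) packet otherwise, so an observer sees the same pattern either way; the worst-case delay of a payload packet is then bounded by the slot spacing and queue depth. The constant-rate target and all names are assumptions, not NetCamo's actual planning and scheduling algorithms.

        from collections import deque

        def camouflage(payload_arrivals, slots, slot_interval=1.0):
            """payload_arrivals: arrival times of real packets; returns what each slot sends."""
            queue = deque(sorted(payload_arrivals))
            sent = []
            for s in range(slots):
                now = s * slot_interval
                if queue and queue[0] <= now:
                    sent.append(("payload", queue.popleft()))
                else:
                    sent.append(("dummy", None))   # padding keeps the observed rate constant
            return sent

        print(camouflage([0.2, 0.3, 3.7], slots=6))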

  • Detecting and displaying novel computer attacks with Macroscope

    Publication Year: 2001, Page(s): 275 - 281
    Cited by: Papers (1) | Patents (1)
    PDF (128 KB)

    Macroscope is a network-based intrusion detection system that uses bottleneck verification (BV) to detect user-to-superuser attacks. BV detects novel computer attacks by looking for users performing high-privilege operations without passing through legal “bottleneck” checkpoints that grant those privileges. Macroscope's BV implementation models many common Unix commands, and has extensions to detect intrusions that exploit trust relationships, as well as previously installed Trojan programs. BV achieves a false alarm rate more than two orders of magnitude lower than a reference signature verification system, while simultaneously increasing the detection rate from roughly 20% to 80% of user-to-superuser attacks.
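
    A toy sketch of the bottleneck-verification idea: scan an audit trail in time order and flag any session that performs a root-privilege operation without first passing a legal privilege-granting checkpoint. The event names and session model are hypothetical; the real Macroscope models many Unix commands, trust relationships, and Trojan programs.

        LEGAL_BOTTLENECKS = {"su_success", "login_root"}   # assumed checkpoints

        def bottleneck_violations(audit_trail):
            """audit_trail: list of (session_id, event) pairs in time order."""
            privileged = set()      # sessions that legally acquired root
            alerts = []
            for session, event in audit_trail:
                if event in LEGAL_BOTTLENECKS:
                    privileged.add(session)
                elif event.startswith("root_op") and session not in privileged:
                    alerts.append((session, event))   # high privilege, no bottleneck passed
            return alerts

        trail = [("s1", "login_root"), ("s1", "root_op_mount"),
                 ("s2", "login_user"), ("s2", "root_op_passwd_write")]
        print(bottleneck_violations(trail))   # flags only the s2 operation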

  • DARPA Information Assurance Program dynamic defense experiment summary

    Publication Year: 2001, Page(s): 331 - 336
    Cited by: Papers (11) | Patents (7)
    PDF (196 KB) | HTML

    Several types of experiments are being conducted by the Defense Advanced Research Projects Agency (DARPA) Information Assurance (IA) Program in DARPA's IA Lab. This research program is driven by concepts of strategic cyberdefense. Each experiment involves a carefully formulated hypothesis that is intended to be either supported or refuted by the experimental testing. In many cases, “red team” attackers participate in all phases of the experiment and contribute to generating the data required to test the hypothesis. The red team is usually structured to model a well-resourced adversary, such as a foreign national intelligence agency. The particular experiment described here explored one aspect of the IA program's grand hypothesis of dynamic defense: “Dynamic modification of defensive structure improves system assurance.” This experiment concentrated on the assertion that autonomic response mechanisms can improve overall system assurance by thwarting an attack while it is underway. In most cases, each attack in this experiment was run first with only “prevent and detect” mechanisms enabled, then repeated with “prevent, detect, and respond” mechanisms enabled. The key result of this experiment is that the hypothesis was supported.

  • On the defense of the distributed denial of service attacks: an on-off feedback control approach

    Publication Year: 2001, Page(s): 282 - 293
    Cited by: Papers (10)
    PDF (264 KB) | HTML

    This paper proposes a coordinated defense scheme against distributed denial of service (DDoS) network attacks, based on a backward-propagation, on-off control strategy. When a DDoS attack is in effect, a high concentration of malicious packet streams is routed to the victim in a short time, making it a hot spot. A similar problem has been observed in multiprocessor systems, where a hot spot is formed when a large number of processors simultaneously access shared variables in the same memory module. Despite the similar terminology used here, solutions for multiprocessor hot spot problems cannot be applied to the hot spot problem in the Internet, because the hot traffic in DDoS may represent only a small fraction of Internet traffic, and attack strategies on the Internet are far more sophisticated than those in multiprocessor systems. The performance impact on the hot spot is related to the total hot packet rate that can be tolerated by the victim. We present a backward pressure propagation, feedback control scheme to defend against DDoS attacks. We use a generic network model to analyze the dynamics of network traffic, and develop algorithms for rate-based and queue-length-based feedback control. We show a simple design to implement our control scheme on a practical switch queue architecture.
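
    The sketch below illustrates the queue-length-based, on-off flavour of the feedback control described here: when the victim-side queue exceeds a high-water mark, an “off” (throttle) signal is propagated backward toward upstream routers; when the queue drains below a low-water mark, forwarding resumes. The thresholds, rates, and single-hop model are illustrative assumptions, not the paper's algorithms or switch design.

        def simulate(arrival_rate, service_rate, high=80.0, low=20.0, steps=200):
            queue, throttled, trace = 0.0, False, []
            for _ in range(steps):
                admitted = 0.0 if throttled else arrival_rate   # on-off control upstream
                queue = max(0.0, queue + admitted - service_rate)
                if queue > high:
                    throttled = True    # backward "off" signal toward the sources
                elif queue < low:
                    throttled = False   # "on" signal: resume forwarding
                trace.append(queue)
            return trace

        # Even with a DDoS-like overload, the victim queue stays bounded near `high`.
        print(max(simulate(arrival_rate=12.0, service_rate=5.0)))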

  • Training a neural-network based intrusion detector to recognize novel attacks

    Publication Year: 2001, Page(s): 294 - 299
    Cited by: Papers (31)
    PDF (104 KB)

    While many commercial intrusion detection systems (IDS) are deployed, the protection they afford is modest. State-of-the-art IDS produce voluminous alerts, most of them false alarms, and function mainly by recognizing the signatures of known attacks, so that novel attacks slip past them. Attempts have been made to create systems that recognize the signature of “normal,” in the hope that they will then detect attacks, known or novel. These systems are often confounded by the extreme variability of nominal behavior. The paper describes an experiment with an IDS composed of a hierarchy of neural networks (NN) that functions as a true anomaly detector. This result is achieved by monitoring selected areas of network behavior, such as protocols, that are predictable in advance. While this does not cover the entire attack space, a considerable number of attacks are carried out by violating the expectations of the protocol/operating system designer. Within this focus, the NNs are trained using data that spans the entire normal space. These detectors are able to recognize attacks that were not specifically presented during training. We show that using small detectors in a hierarchy gives a better result than a single large detector. Some techniques can be used not only to detect anomalies, but to distinguish among them.
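
    As a rough stand-in for the idea of small detectors trained only on normal data (assuming NumPy and scikit-learn), the sketch below trains a tiny per-protocol autoencoder and flags records whose reconstruction error is far above what was seen in training. The features, the 3-sigma cutoff, and the autoencoder itself are illustrative substitutes, not the paper's hierarchy of neural networks.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        class ProtocolDetector:
            def __init__(self, hidden=4):
                self.net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000)

            def fit(self, normal_X):
                self.net.fit(normal_X, normal_X)               # learn to reproduce "normal"
                errs = ((self.net.predict(normal_X) - normal_X) ** 2).mean(axis=1)
                self.threshold = errs.mean() + 3 * errs.std()  # assumed 3-sigma cutoff
                return self

            def is_anomalous(self, X):
                errs = ((self.net.predict(X) - X) ** 2).mean(axis=1)
                return errs > self.threshold

        rng = np.random.default_rng(0)
        normal = rng.normal(0.0, 1.0, size=(500, 6))           # stand-in protocol features
        detector = ProtocolDetector().fit(normal)
        print(detector.is_anomalous(rng.normal(5.0, 1.0, size=(3, 6))))   # shifted -> flagged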

  • Collaboration requirements: a point of failure in protecting information

    Publication Year: 2001, Page(s): 336 - 342
    PDF (220 KB) | HTML

    It is sometimes necessary to collaborate with individuals and organizations which should not be fully trusted. Collaborators must be authorized to access information systems even though, typically, some of the data in those systems should be withheld from them. New collaborations require dynamic alterations to security provisions. Solutions based on extending access control to deal with collaborations are either awkward and costly, or unreliable. An alternative approach, complementing basic access control, is results filtering. Content filtering is also costly, but provides a number of benefits not obtainable with access control alone. The most important is that the complexity of setting up and maintaining isolating information cells for every combination of access rights is avoided. New classes of collaborators can be added without requiring a reorganization of the entire information structure. There is no overhead for internal use. Since the content of documents, not their labels, is checked, misfiling will not cause inappropriate release. The approach used in the TIHI/SAW projects at Stanford uses simple rules to drive filtering primitives. The filters run on a modest, but dedicated, computer managed by a security officer. The rules implement the security policy and balance manual effort and complexity. The functional allocation of responsibilities is good. Result filtering can also be used to implement pure intrusion detection, since it is invisible: the intruder can be given an impression of success, while becoming a target for monitoring or cover stories.
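
    A toy sketch of rule-driven result filtering: before query results leave the system, simple rules inspect the content of each document (not its labels) and withhold anything a given collaborator class may not see; withheld hits could also be logged as possible intrusion probes. The collaborator class, rules, and sensitive patterns are invented for illustration and are not the TIHI/SAW rule set.

        import re

        RULES = {
            "external_collaborator": [
                re.compile(r"\bproject\s+aurora\b", re.IGNORECASE),   # assumed sensitive term
                re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # SSN-like pattern
            ],
        }

        def filter_results(documents, collaborator_class):
            released, withheld = [], []
            for doc in documents:
                if any(rule.search(doc) for rule in RULES.get(collaborator_class, [])):
                    withheld.append(doc)    # candidate for monitoring or a cover story
                else:
                    released.append(doc)
            return released, withheld

        docs = ["Quarterly budget summary", "Notes on Project Aurora deployment"]
        print(filter_results(docs, "external_collaborator"))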

  • Evaluating damage from cyber attacks: a model and analysis

    Publication Year: 2001, Page(s): 300 - 310
    Cited by: Papers (6)
    PDF (176 KB) | HTML

    Accurate recovery from a cyber attack depends on fast and accurate damage assessment. For damage assessment, traditional recovery methods require that the log of an affected database be scanned from the attacking transaction to the end, which is a time-consuming task. Our objective in this research is to provide techniques that can be used to accelerate the damage appraisal process and produce a correct result. We present a damage assessment model and four data structures associated with the model. Each of these structures uses dependency relationships among the transactions that update the database. These relationships are later used to determine exactly which transactions and exactly which data items are affected by the attacker. A performance comparison, obtained using simulation, is provided to demonstrate the benefit of our model.
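
    A small sketch of dependency-based damage assessment: derive read-from dependencies between transactions from their read and write sets, then mark as affected every transaction that directly or transitively read data written by the attacking transaction, along with the data items those transactions wrote. The log format is an assumption for illustration; the paper presents four dedicated data structures rather than this single pass.

        def assess_damage(log, attacker):
            """log: ordered (txn_id, read_set, write_set) triples; returns affected txns/items."""
            dirty_items, affected = set(), {attacker}
            for txn, reads, writes in log:
                if txn == attacker or (reads & dirty_items):
                    affected.add(txn)       # read something already contaminated
                    dirty_items |= writes   # its writes spread the damage further
            return affected, dirty_items

        log = [
            ("T1", set(),       {"x"}),    # attacking transaction writes x
            ("T2", {"x"},       {"y"}),    # reads x  -> affected, contaminates y
            ("T3", {"z"},       {"w"}),    # untouched by the attack
            ("T4", {"y", "w"},  {"v"}),    # reads y  -> affected
        ]
        print(assess_damage(log, "T1"))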

  • Technology challenges for virtual overlay networks

    Publication Year: 2001, Page(s): 319 - 327
    Cited by: Papers (3)
    PDF (180 KB) | HTML

    An emerging generation of mission-critical networked applications is placing demands on the Internet protocol suite that go well beyond the properties it was designed to guarantee. Although the “next generation internet” (NGI) is intended to respond to this need, when we review such applications in light of the expected functionality of the NGI, it becomes apparent that the NGI will be faster but not more robust. We propose a new kind of virtual overlay network (VON) that overcomes this deficiency and can be constructed using only simple extensions of existing network technology. In this paper, we use the restructured electric power grid to illustrate the issues, and elaborate on the technical implications of our proposal.


Aims & Scope

The journal covers the fields of systems engineering and human-machine systems. Systems engineering includes efforts that involve issue formulation, issue analysis and modeling, and decision making and issue interpretation at any of the lifecycle phases associated with the definition, development, and implementation of large systems.

 

This Transactions ceased production in 2012. The current retitled publication is IEEE Transactions on Systems, Man, and Cybernetics: Systems.


Meet Our Editors

Editor-in-Chief
Dr. Witold Pedrycz
University of Alberta