
2010 IEEE International Conference on Intelligence and Security Informatics (ISI)

Date: 23-26 May 2010


Displaying Results 1 - 25 of 57
  • An executive decision support system for longitudinal statistical analysis of crime and law enforcement performance: Crime Analysis System Pacific Region (CASPR)

    Page(s): 1 - 6

    This paper describes the structure of an executive decision support system based on a data warehouse of offences and clearance rates in British Columbia. The system was developed at the Institute for Canadian Urban Research Studies at Simon Fraser University. The paper explains how the data mining and automated statistical analysis in this system can be used by criminologists for the analysis of crime trends both at the jurisdiction level and province-wide. Database technologies and statistical functions have been utilized in a set of programs that encapsulate the knowledge of experts. The paper explains how, by performing repeated regression analysis on all jurisdiction-crime combinations, the system can discover important and significant trends at the local level and how general province-wide trends can be detected. An example of using the system to evaluate the relationship between reported crime rates and clearance rates is presented.

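    The repeated regression step described in the abstract above can be illustrated with a short Python sketch. This is not the CASPR implementation: the column names (jurisdiction, crime_type, year, rate) and the significance threshold are assumptions made for the example.

      # Hypothetical sketch: repeated regression over jurisdiction-crime combinations.
      import pandas as pd
      from scipy.stats import linregress

      def significant_trends(df: pd.DataFrame, alpha: float = 0.05) -> pd.DataFrame:
          """Fit rate ~ year for each (jurisdiction, crime_type) pair and keep
          the pairs whose linear time trend is statistically significant."""
          rows = []
          for (jur, crime), grp in df.groupby(["jurisdiction", "crime_type"]):
              if len(grp) < 3:                  # need a few points to fit a trend
                  continue
              fit = linregress(grp["year"], grp["rate"])
              if fit.pvalue < alpha:
                  rows.append({"jurisdiction": jur, "crime_type": crime,
                               "slope": fit.slope, "p_value": fit.pvalue})
          return pd.DataFrame(rows)

      # Province-wide trend: aggregate over jurisdictions first, then regress.
      # province = df.groupby("year", as_index=False)["rate"].mean()
      # print(linregress(province["year"], province["rate"]))
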
  • Testing perception of crime in a virtual environment

    Page(s): 7 - 12

    Fear of crime is a central topic in the field of victimization. In particular, criminologists are interested in the environmental structures and cues that generate fear. Research has shown that fear of crime has a direct impact on pedestrian navigation through the urban setting. Most studies have used traditional methods such as surveys or interviews, and researchers have debated the methodological issues stemming from these methods. This article introduces two exploratory studies which use a virtual environment (VE) as a research tool for the study of fear of crime. The benefits associated with using VEs in this field of research are discussed. The development, implementation and results of these two studies are presented, and the limitations and future directions of VE experiments are discussed.

  • Identifying high risk crime areas using topology

    Page(s): 13 - 18

    Computational criminology is an area of research that joins advanced theories in criminology with theories and methods in mathematics, computing science, geography and behavioural psychology. It is a multidisciplinary approach that takes the strengths of several disciplines and, with semantic challenges, builds new methods for the analysis of crime and crime patterns. This paper presents a prototype topology algorithm, still under development, that links the geographic and cognitive-psychology sides of criminology research by joining local urban areas together using rules that define similarity between adjacent small units of analysis. The approach produces shapes that are irregular when mapped in a Euclidean space but follow expectations in a non-Euclidean, topological sense. There are high local concentrations, or hot spots, of crime, but frequently there is a sharp break on one side of the hot spot and a gradual diffusion on the other. These shapes follow the cognitive-psychological way of moving from one location to another without noticing gradual changes, or conversely being aware of sharp changes from one location to the next. The article presents a pattern-modeling approach that uses topology to spatially identify concentrations of crime and their crisp breaks and gradual blending into adjacent areas, using the basic topological components: interior, boundary and exterior. The algorithm is used to analyze crimes in a moderate-sized city in British Columbia.

  • Natural Language Processing based on semantic inferentialism for extracting crime information from text

    Page(s): 19 - 24

    This article describes an architecture for information extraction systems on the web, based on Natural Language Processing (NLP) and geared especially toward exploring information about crime. The main feature of the architecture is its NLP module, which is based on the Semantic Inferential Model. We demonstrate the feasibility of the architecture by implementing it to provide input for WikiCrimes, a collaborative web-based system for registering crimes.

  • Predicting social ties in mobile phone networks

    Page(s): 25 - 30

    A social network changes dynamically as its social relationships (social ties) change over time, and the evolution of a social network depends mainly on the evolution of those relationships. Person-to-person social-tie strengths differ from one another even within the same group. In this paper we investigate the evolution of person-to-person social relationships and quantify and predict social-tie strengths based on mobile phone call-detail records. We propose an affinity model for quantifying social-tie strengths in which a reciprocity index is integrated to measure the level of reciprocity between users and their communication partners. Since human social relationships change over time, we map the call-log data to time series of social-tie strengths using the affinity model and then use an ARIMA model to predict social-tie strengths. To validate our results, we used actual call logs of 81 users collected over a period of 8 months at MIT by the Reality Mining Project group, as well as call logs of 20 users collected over a period of 6 months by UNT's Network Security team; these users have around 5000 communication partners. The experimental results show that our model is effective: we achieve an average prediction accuracy of 95.2% for socially close and near members. Among other applications, this work is useful for homeland security, detection of unwanted calls (e.g., spam), and marketing.

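    The forecasting step described above can be sketched with statsmodels; the monthly tie-strength values and the (1, 1, 1) model order below are assumptions for the illustration and do not reproduce the paper's affinity or reciprocity computations.

      # Hypothetical sketch: forecasting a social-tie-strength series with ARIMA.
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      # Fabricated monthly tie strengths for one pair of users (8 months).
      tie_strength = np.array([0.42, 0.45, 0.50, 0.47, 0.55, 0.58, 0.61, 0.60])

      model = ARIMA(tie_strength, order=(1, 1, 1))   # (p, d, q) chosen for the demo
      fitted = model.fit()
      print(fitted.forecast(steps=2))                # predict the next two months
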
  • Network neighborhood analysis

    Page(s): 31 - 36

    We present a technique to represent the structure of large social networks through ego-centered network neighborhoods. This provides a local view of the network, focusing on individual vertices and their kth-order neighborhoods and allowing discovery of interesting patterns and features of the network that would be hidden in a global network analysis. We present several examples from a corporate phone-call network revealing the ability of our methods to discover interesting network behavior that is only visible at the local level. In addition, we present an approach that uses these concepts to identify abrupt or subtle anomalies in dynamic networks.

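    The ego-centered neighborhoods described above correspond to k-hop ego graphs. A minimal networkx sketch follows; the toy edge list stands in for a phone-call network and is fabricated for the example.

      # Hypothetical sketch: extracting a vertex's kth-order (k-hop) neighborhood.
      import networkx as nx

      G = nx.Graph()
      G.add_edges_from([("a", "b"), ("b", "c"), ("c", "d"), ("b", "e"), ("e", "f")])

      k = 2
      ego = nx.ego_graph(G, "a", radius=k)    # "a" plus everything within k hops
      print(sorted(ego.nodes()))              # ['a', 'b', 'c', 'e']
      print(ego.number_of_edges())            # edges among those vertices only
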
  • Early warning analysis for social diffusion events

    Page(s): 37 - 42

    There is considerable interest in developing predictive capabilities for social diffusion processes, for instance enabling early identification of contentious “triggering” incidents that are likely to grow into large, self-sustaining mobilization events. Recently we have shown, using theoretical analysis, that the dynamics of social diffusion may depend crucially upon the interactions of social network communities, that is, densely connected groupings of individuals which have only relatively few links to other groups. This paper presents an empirical investigation of two hypotheses which follow from this finding: 1.) the presence of even just a few inter-community links can make diffusion activity in one community a significant predictor of activity in otherwise disparate communities and 2.) very early dispersion of a diffusion process across network communities is a reliable early indicator that the diffusion will ultimately involve a substantial number of individuals. We explore these hypotheses with case studies involving emergence of the Swedish Social Democratic Party at the turn of the 20th century, the spread of SARS in 2002–2003, and blogging dynamics associated with potentially incendiary real world occurrences. These empirical studies demonstrate that network community-based diffusion metrics do indeed possess predictive power, and in fact can be significantly more predictive than standard measures.

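    Hypothesis 2 above suggests a simple early-warning indicator: the number of distinct network communities reached by the first few adopters. The sketch below is only illustrative; the community assignment and adoption order are fabricated, and the paper's actual metrics may differ.

      # Hypothetical sketch: early community dispersion of a diffusion process.
      def early_dispersion(adoption_order, community_of, k=10):
          """Number of distinct communities touched by the first k adopters."""
          return len({community_of[node] for node in adoption_order[:k]})

      community_of = {"u1": 0, "u2": 0, "u3": 1, "u4": 2, "u5": 0, "u6": 3}
      adoption_order = ["u1", "u2", "u3", "u4", "u5", "u6"]
      print(early_dispersion(adoption_order, community_of, k=4))   # -> 3
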
  • Identifying influential users in an online healthcare social network

    Page(s): 43 - 48

    As important information portals, online healthcare forums are playing an increasingly crucial role in disseminating information and offering support to people: they connect people with leading medical experts and with others who have similar experiences. During an epidemic outbreak, such as H1N1, it is critical for the health department to understand how the public is responding to the ongoing pandemic, which has a great impact on social stability. In this context, identifying influential users in an online healthcare forum and tracking how information spreads in such an online community can be an effective way to understand the public reaction toward the disease. In this paper, we propose a framework to monitor and identify influential users in online healthcare forums. We first develop a mechanism to identify and construct social networks from the discussion board of an online healthcare forum. We then propose the UserRank algorithm, which combines link analysis and content analysis techniques to identify influential users. We have also conducted an experiment to evaluate our approach on the Swine Flu forum, a sub-community of a popular online healthcare community, MedHelp (www.medhelp.org). Experimental results show that our technique outperforms PageRank, in-degree and out-degree centrality in identifying influential users in an online healthcare forum.

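    The UserRank algorithm itself is specific to the paper, but one generic way to combine link structure with content analysis, shown here purely as an illustration, is personalized PageRank, where content-derived weights bias the random-walk restart distribution. The reply graph and content scores below are fabricated.

      # Hypothetical sketch (not the paper's UserRank): content-biased PageRank.
      import networkx as nx

      # Toy reply network: an edge u -> v means user u replied to user v.
      G = nx.DiGraph([("u1", "u2"), ("u3", "u2"), ("u2", "u4"), ("u1", "u4")])

      # Fabricated content scores, e.g. topical relevance of each user's posts.
      content_score = {"u1": 0.1, "u2": 0.6, "u3": 0.1, "u4": 0.2}

      ranks = nx.pagerank(G, alpha=0.85, personalization=content_score)
      for user, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
          print(user, round(score, 3))
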
  • Generalizing terrorist social networks with K-nearest neighbor and edge betweenness for social network integration and privacy preservation

    Page(s): 49 - 54

    Social network analysis has been shown to be effective in supporting intelligence and law enforcement forces in identifying suspects, terrorist or criminal subgroups, and their communication patterns. However, social network data owned by an individual law enforcement unit contain private information that must be protected before being shared with other units. This privacy issue greatly reduces the utility of the data, since social networks from different law enforcement units cannot be fully integrated, and without such integration the effectiveness of terrorist or criminal social network analysis is diminished. In this paper, we introduce the KNN and EBB algorithms for constructing generalized subgraphs and a mechanism for integrating the generalized information to compute closeness centrality measures. The results show that the proposed technique substantially improves the accuracy of closeness centrality measures while protecting the sensitive data.

  • The U.S. and the EU differences in anti-terrorism efforts

    Page(s): 55 - 58

    While there is a consensus among developed countries over the need to combat terrorism, there are marked differences on how to accomplish that. Recently the European Union (EU) rejected a US-EU agreement on financial data exchange. Shifts in power in Europe contributed to this rejection, but more fundamentally there is a difference in views on the balance between managing anti-terrorism efforts and respecting civil liberties, especially in data sharing. The post-WWII alliances are still strong, but Europe has a different take on many issues. In our interdependent, globalized world, U.S. authorities are being required to adjust the tools and methods they can use. The sympathy and readiness to assist in the immediate aftermath of 9/11 have now faded. We should understand these differences and realize that we are entering a period in which being creative and factoring in the views of our allies is essential if we are to successfully pursue those who would do us harm.

  • Developing a Dark Web collection and infrastructure for computational and social sciences

    Page(s): 59 - 64

    In recent years, there have been numerous studies, from a variety of perspectives, analyzing the Internet presence of hate and extremist groups. Yet the websites and forums of extremist and terrorist groups have long remained an underutilized resource for terrorism researchers because of their ephemeral nature and problems of access and analysis. The purpose of the Dark Web archive is to provide a research infrastructure for use by social scientists, computer and information scientists, policy and security analysts, and others studying a wide range of social and organizational phenomena and computational problems. The Dark Web Forum Portal provides web-enabled access to critical international jihadist and other extremist web forums. The focus of this paper is on significant extensions to previous work, including increasing the scope of data collection; adding an incremental spidering component for regular data updates; enhancing the searching and browsing functions; enhancing multilingual machine translation for Arabic, French, German and Russian; and adding advanced social network analysis. A case study on identifying active participants is presented at the end.

  • Automatic construction of domain theory for attack planning

    Page(s): 65 - 70

    Terrorist organizations are devising increasingly sophisticated plans to conduct attacks. The ability to emulate or construct the attack plans of potential terrorists can help us understand the intent and motivation behind terrorist activities. A feasible computational method for constructing such plans is AI planning. Traditionally, AI planning methods rely on a predefined domain theory compiled manually by domain experts. To facilitate domain theory construction and plan generation, we propose a method to construct domain theories automatically from free-text data. The effectiveness of our proposed approach is evaluated empirically through experimental studies using real-world terrorist plans.

  • GENIUS: A computational modeling framework for counter-terrorism planning and response

    Page(s): 71 - 76

    Public safety has been a great concern in recent years, as terrorist attacks can occur anywhere. When a public event such as the Olympic Games or a soccer match is held in an urban environment, it is important to keep the public safe and, at the same time, to have a specific plan to control and rescue the public in the case of a terrorist attack. In order to better position communities against potential threats, it is of utmost importance to identify existing gaps, define priorities and focus on developing approaches to address them. In this paper, we present a system that aims at providing decision support, threat-response planning and risk assessment. Threats can take chemical, biological, radiological, nuclear or explosive (CBRNE) form. In order to assess and manage the possible risks of such attacks, we have developed a computational framework for simulating terrorist attacks, crowd behaviors, and the rescue missions of police or safety guards. The characteristics of crowd behavior are modeled on social science research findings and on our own virtual-environment experiments with real human participants; a person's behavioral characteristics differ by gender and age. Our framework is based on swarm intelligence and agent-based modeling, which allows us to create a large number of people with specific behavioral characteristics. Different test scenarios can be created by importing or creating 3D urban environments and placing particular terrorist attacks (such as bombs or toxic gas) at specific locations and points on a timeline.

  • Terrorist threat assessment with formal concept analysis

    Page(s): 77 - 82

    The National Police Service Agency of the Netherlands developed a model that classifies (potential) jihadists into four sequential phases of radicalism. The goal of the model is to flag a potential jihadist as early as possible in order to prevent him or her from entering the next phase. Until now, the model had never been used to actively find new subjects. In this paper, we use Formal Concept Analysis to extract and visualize potential jihadists in the different phases of radicalism from a large set of reports describing police observations, and we employ Temporal Concept Analysis to visualize how a possible jihadist radicalizes over time. The combination of these instruments allows for easy decision-making on where and when to act.

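    Formal Concept Analysis operates on a binary object-attribute context, and a formal concept is a pair (extent, intent) closed under the two derivation operators. The naive enumeration below, over a tiny fabricated context of subjects and observed indicators, only illustrates the idea; practical tools use more efficient algorithms such as NextClosure.

      # Hypothetical sketch: naive formal concept enumeration over a small context.
      from itertools import combinations

      objects = ["s1", "s2", "s3", "s4"]
      attributes = {"A", "B", "C"}
      attrs_of = {                      # fabricated indicators observed per subject
          "s1": {"A", "B"},
          "s2": {"A", "B", "C"},
          "s3": {"B", "C"},
          "s4": {"C"},
      }

      def common_attrs(objs):           # attributes shared by all given objects
          return set.intersection(*(attrs_of[o] for o in objs)) if objs else set(attributes)

      def common_objs(attrs):           # objects possessing all given attributes
          return {o for o in objects if attrs <= attrs_of[o]}

      concepts = set()
      for r in range(len(objects) + 1):
          for objs in combinations(objects, r):
              intent = common_attrs(set(objs))
              extent = common_objs(intent)          # closure of the object set
              concepts.add((frozenset(extent), frozenset(intent)))

      for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
          print(sorted(extent), sorted(intent))
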
  • Balancing security and information sharing in intelligence databases

    Page(s): 83 - 88

    We analyze the restrictions on information sharing imposed by multilevel secure databases. We offer several measures to quantify the information loss due to security levels, and propose the problem of minimizing such loss without allowing users to access information above their security clearance. We give a partial solution to the problem, and discuss some of its shortcomings.

  • Agent based correlation model for intrusion detection alerts

    Page(s): 89 - 94

    Alert correlation is a promising technique in intrusion detection. It analyzes the alerts from one or more intrusion detection systems and provides a compact, summarized report and a high-level view of attempted intrusions, which greatly improves security effectiveness. A correlation component is a procedure that aggregates alerts according to certain criteria; the aggregated alerts may share common features or represent the steps of a pre-defined attack scenario. Correlation approaches are composed of a single component or a comprehensive set of components, and the effectiveness of a component depends heavily on the nature of the dataset analyzed. The order in which components are applied affects the performance of the correlation process, and not all components should be used for every dataset. This paper presents an agent-based alert correlation model in which a learning agent learns the nature of the dataset within a network and then guides the whole correlation process, determining which components should be used and in what order. The model improves the performance of the correlation process by selecting the proper components to use, and it ensures that a minimal number of alerts is processed by each component, depending on the dataset, and that the correlation process takes minimal time.

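    The aggregation step discussed above can be sketched as a simple grouping of alerts that share features within a time window. The alert fields and the single grouping rule below are assumptions chosen for the example, not the paper's agent-based model.

      # Hypothetical sketch: aggregating IDS alerts that share (src, dst, signature)
      # and fall within a time window.
      from collections import defaultdict

      def aggregate(alerts, window=60.0):
          """alerts: dicts with 'src', 'dst', 'sig' and a timestamp 'ts' in seconds.
          Returns lists of alerts grouped into meta-alerts."""
          groups = defaultdict(list)
          for a in sorted(alerts, key=lambda a: a["ts"]):
              bucket = groups[(a["src"], a["dst"], a["sig"])]
              if bucket and a["ts"] - bucket[-1][-1]["ts"] <= window:
                  bucket[-1].append(a)          # extend the current meta-alert
              else:
                  bucket.append([a])            # start a new meta-alert
          return [meta for buckets in groups.values() for meta in buckets]

      alerts = [
          {"src": "10.0.0.5", "dst": "10.0.0.9", "sig": "scan", "ts": 0.0},
          {"src": "10.0.0.5", "dst": "10.0.0.9", "sig": "scan", "ts": 30.0},
          {"src": "10.0.0.5", "dst": "10.0.0.9", "sig": "scan", "ts": 500.0},
      ]
      print([len(m) for m in aggregate(alerts)])    # -> [2, 1]
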
  • Cross-level behavioral analysis for robust early intrusion detection

    Page(s): 95 - 100

    We anticipate that future attacks will evolve to become more sophisticated in order to outwit existing intrusion detection techniques; existing anomaly analysis techniques and signature-based detection practices are no longer effective. We believe intrusion detection systems (IDSs) of the future will need to be capable of detecting or inferring attacks from more valuable information about network-related properties and characteristics. We observed that even though the signatures or traffic patterns of future stealthy attacks can be modified to outwit current IDSs, certain behavioral aspects of an attack are invariant. We propose a novel approach that jointly monitors network activities at three different levels: transport-layer protocols, (vulnerable) network services, and invariant anomaly behaviors (called attack symptoms). Our system, SecMon, captures network behaviors by simultaneously performing cross-level state correlation for effective detection of anomalous behaviors. For the most part, invariant anomaly behaviors have not been fully exploited in the past. A probabilistic attack inference model is also proposed for attack assessment; it correlates the observed attack symptoms to achieve a low false-alarm rate. The evaluations demonstrate that our prototype system is efficient and effective against sophisticated attacks, including polymorphic, stealthy, and unknown attacks.

  • Intelligent decision support for marine safety and security operations

    Page(s): 101 - 107

    The architecture and core mechanisms of a decision support system for a Marine Security Operations Centre (MSOC) are presented. The goal of this system is to improve coordination in emergency response services during critical situations, including detection and prevention of illegal activities. The system design emphasizes robustness and scalability through its decentralized control structure, automated planning and replanning, dynamic resource configuration management and task execution management under uncertainty. An example scenario from the marine operations domain is described.

  • Automatically identifying the sources of large Internet events

    Page(s): 108 - 113

    The Internet occasionally experiences large disruptions, arising from both natural and manmade disturbances, and it is of significant interest to develop methods for locating within the network the source of a given disruption (i.e., the network element(s) whose perturbation initiated the event). This paper presents a near real-time approach to realizing this logical localization objective. The proposed methodology consists of three steps: 1.) data acquisition/preprocessing, in which publicly available measurements of Internet activity are acquired, “cleaned”, and assembled into a format suitable for computational analysis, 2.) event characterization via tensor factorization-based time series analysis, and 3.) localization of the source of the disruption through graph theoretic analysis. This procedure provides a principled, automated approach to identifying the root causes of network disruptions at “whole-Internet” scale. The considerable potential of the proposed analytic method is illustrated through a computer simulation study and empirical analysis of a recent, large-scale Internet disruption.

  • Cross Entropy approach for patrol route planning in dynamic environments

    Page(s): 114 - 119

    Proper patrol route planning increases the effectiveness of police patrolling and improves public security. In this paper we present a new approach to real-time patrol route planning in a dynamic environment. We first build a mathematical framework and then propose a fast algorithm, developed from the Cross Entropy method, to meet the real-time computation requirement of many applications. In addition, as randomness is an important factor in practice, the entropy concept is used to design a randomized patrol route scheduling strategy. Numerical studies demonstrate that the approach converges quickly and is efficient in dynamic patrol environments.

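    The Cross Entropy method alternates between sampling candidate solutions from a parameterized distribution and refitting the distribution to the best-scoring samples. The sketch below applies it to a toy patrol-selection problem (choose which locations to cover under a time budget); the value, cost and parameter settings are assumptions for the illustration, not the paper's formulation.

      # Hypothetical sketch: Cross Entropy method on a toy patrol-selection problem.
      import numpy as np

      rng = np.random.default_rng(0)
      value = np.array([5.0, 3.0, 8.0, 2.0, 7.0, 4.0])   # payoff of covering each location
      cost = np.array([2.0, 1.0, 3.0, 1.0, 2.0, 2.0])    # patrol time per location
      budget = 6.0

      def score(x):
          """Total value if within the time budget, heavily penalized otherwise."""
          return float(value @ x) - 100.0 * max(0.0, float(cost @ x) - budget)

      p = np.full(len(value), 0.5)                # inclusion probabilities
      n_samples, n_elite, smoothing = 200, 20, 0.7

      for _ in range(30):
          samples = (rng.random((n_samples, len(p))) < p).astype(float)
          scores = np.array([score(x) for x in samples])
          elite = samples[np.argsort(scores)[-n_elite:]]        # best-scoring samples
          p = smoothing * elite.mean(axis=0) + (1 - smoothing) * p

      best = (p > 0.5).astype(int)
      print(best, score(best))
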
  • Computational knowledge and information management in veterinary epidemiology

    Page(s): 120 - 125

    Monitoring of infectious animal diseases is an essential task for national biosecurity management and bioterrorism prevention. For this purpose, we present a system for animal disease outbreak analysis that automatically extracts relational information from online data. We aim to detect and map infectious disease outbreaks by extracting information from unstructured sources. The system crawls web sites and classifies pages by topical relevance. The information extraction component performs document analysis for recognition of events related to animal diseases. The visualization component plots extracted events on Google Maps using geospatial information and supports timeline representation of animal disease outbreaks in SIMILE.

  • Entity refinement using latent semantic indexing

    Page(s): 126 - 128

    Automated extraction of named entities is an important text analysis task. In addition to recognizing the occurrence of entity names, it is important to be able to label those names by type. Most entity extraction techniques categorize extracted entities into a few basic types, such as PERSON, ORGANIZATION, and LOCATION. This paper presents an approach for generating more fine-grained subdivisions of entity type. The technique of latent semantic indexing (LSI) is used to provide semantic context as an indicator of likely entity subtype. Tests were carried out on a collection of 5.5 million English-language news articles. At modest levels of recall, the accuracy of sub-type assignment was comparable to the accuracy with which the gross type was assigned by a state-of-the-art commercial entity extraction software package.

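    A reduced sketch of this kind of LSI-based refinement, using scikit-learn and a handful of fabricated context snippets and subtype labels, is shown below: entity contexts are projected into a latent semantic space and assigned the subtype of the nearest labeled centroid. This illustrates the general idea only, not the pipeline evaluated in the paper.

      # Hypothetical sketch: entity subtype assignment by nearest centroid in LSI space.
      from sklearn.decomposition import TruncatedSVD
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.neighbors import NearestCentroid
      from sklearn.pipeline import make_pipeline

      contexts = [
          "the senator introduced a bill in congress",
          "the governor vetoed the state budget",
          "the striker scored twice in the final",
          "the goalkeeper saved a penalty kick",
      ]
      subtypes = ["POLITICIAN", "POLITICIAN", "ATHLETE", "ATHLETE"]

      model = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2),
                            NearestCentroid())
      model.fit(contexts, subtypes)
      print(model.predict(["the midfielder was booked for a late tackle"]))
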
  • Latent semantic analysis and keyword extraction for phishing classification

    Page(s): 129 - 131

    Phishing email fraud has been considered one of the main cyber-threats in recent years. Its development has been closely related to social engineering techniques, in which different fraud strategies are used to deceive naïve email users. In this work, a latent semantic analysis and text mining methodology is proposed for the characterisation of such strategies and for their subsequent classification using supervised learning algorithms. The results show that the feature set obtained in this work is competitive with previous phishing feature extraction methodologies, achieving promising results across different benchmark machine learning classification techniques.

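    As a rough illustration of the keyword-extraction side described above, the sketch below ranks candidate phishing keywords by average TF-IDF weight over a few fabricated messages; the actual methodology combines this kind of feature extraction with latent semantic analysis and supervised classifiers.

      # Hypothetical sketch: ranking candidate phishing keywords by TF-IDF weight.
      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer

      phishing = [
          "verify your account immediately or it will be suspended",
          "click this link to confirm your banking password",
          "urgent: update your payment details to avoid account closure",
      ]

      vec = TfidfVectorizer(stop_words="english")
      X = vec.fit_transform(phishing)
      mean_weight = np.asarray(X.mean(axis=0)).ravel()    # average weight per term
      terms = np.array(vec.get_feature_names_out())
      print(terms[np.argsort(mean_weight)[::-1][:5]])     # top five candidate keywords
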
  • Unsupervised multilingual concept discovery from daily online news extracts

    Page(s): 132 - 134

    Web syndication technologies help us easily aggregate daily news from diverse sources. However, the huge amount of information makes it difficult to read, let alone digest, the news and to focus on the most important events; we therefore need an efficient way to extract and mine news. In this paper, we propose an unsupervised approach to multilingual concept discovery from daily online news extracts. First, key terms are extracted statistically from short news extracts. Second, similar term candidates are grouped into concrete concepts with unsupervised term clustering methods. Our goal is automatic news processing with minimal resources and no training in advance. The experimental results show the potential of the proposed approach in terms of efficiency and effectiveness. Further investigation is needed to study the cross-lingual relation between extracted concepts.

  • Estimating sentiment orientation in social media for intelligence monitoring and analysis

    Page(s): 135 - 137

    This paper presents a computational approach to inferring the sentiment orientation of “social media” content (e.g., blog posts) which focuses on the challenges associated with Web-based analysis. The proposed methodology formulates the task as one of text classification, models the data as a bipartite graph of documents and words, and uses this framework to develop a semi-supervised sentiment classifier that is well-suited for social media domains. In particular, the proposed algorithm is capable of combining prior knowledge regarding the sentiment orientation of a few documents and words with information present in unlabeled data, which is abundant online. We demonstrate the utility of the approach by showing it outperforms several standard methods for the task of inferring the sentiment of online movie reviews, and illustrate its potential for security informatics through a case study involving the estimation of Indonesian public sentiment regarding the July 2009 Jakarta hotel bombings.

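    One way to realize the bipartite document-word propagation described above, sketched here under assumed data and without the paper's specific formulation, is to push sentiment scores back and forth between documents and the words they contain while clamping the few labeled seeds.

      # Hypothetical sketch: semi-supervised sentiment propagation on a
      # document-word bipartite graph.
      import numpy as np

      # Rows = documents, columns = words; A[d, w] = count of word w in document d.
      A = np.array([[2, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 0, 1, 2],
                    [0, 1, 0, 1]], dtype=float)

      word_seed = np.array([+1.0, 0.0, 0.0, -1.0])   # prior: word 0 positive, word 3 negative
      doc_seed = np.array([0.0, 0.0, 0.0, 0.0])      # no labeled documents in this toy case

      doc_from_word = A / A.sum(axis=1, keepdims=True)       # document <- its words
      word_from_doc = (A / A.sum(axis=0, keepdims=True)).T   # word <- its documents

      doc_score, word_score = doc_seed.copy(), word_seed.copy()
      for _ in range(50):
          doc_score = doc_from_word @ word_score
          doc_score[doc_seed != 0] = doc_seed[doc_seed != 0]      # clamp labeled documents
          word_score = word_from_doc @ doc_score
          word_score[word_seed != 0] = word_seed[word_seed != 0]  # clamp labeled words

      print(np.round(doc_score, 2))   # positive for word-0-heavy docs, negative for word-3-heavy docs
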