
2012 IEEE International Conference on Service Operations and Logistics, and Informatics (SOLI)

Date: 8-10 July 2012


Displaying Results 1 - 25 of 92
  • Reaching the masses through a Rural Services Platform

    Publication Year: 2012, Page(s): 1 - 6
    PDF (1637 KB) | HTML

    The richness of interaction technologies used by a consumer, or moderated by a facilitator on the consumer's behalf, plays an important role in the adoption of financial, advisory and business services by a rural populace. Rural ICT initiatives undertaken so far have focused on device-level technologies that bring a single service to the consumer. In this paper, we present the architecture and implementation of a Rural Services Platform that is not only a low-cost shared model but also an innovative end-to-end solution for reaching the next billion users. We describe our work in the context of social transfers by the Government of India, such as NREGA payments, and present the architecture of a solution that can be used by 1) Business Correspondents who go into remote villages to disburse payments, and 2) consumers directly, who can conduct money-transfer transactions through a voice channel. Our solution can provide an array of services across industries such as banking, finance, agriculture, health care and education to the end user, thereby not only reducing the cost of delivery but also making the entire innovation scalable.

  • Preference-driven personalized recommendation by k-comparative annotation and reasoning

    Publication Year: 2012, Page(s): 7 - 12
    PDF (1194 KB) | HTML

    Good eating habits are important for maintaining a healthy life and preventing the epidemic of lifestyle-related diseases. Research on menu recommendation and diet planning has therefore attracted much attention recently. A key factor in successful diet planning is an individual's food preferences rather than a dogmatic nutrition pattern, since an individual is unlikely to accept a meal plan based merely on nutritional requirements. However, extracting personal preferences is not a trivial matter. In this paper, we present the k-comparative annotation and reasoning technique for semi-automatically extracting users' preferences in a more efficient and effective manner. Compared to conventional methods, the proposed system not only reveals users' opinions about foods more fairly but also saves considerable food-annotation effort during the training-data collection stage. The resulting system is thus expected to improve users' dietary habits and compliance with a healthier lifestyle.

  • Point pattern analysis utilizing controlled randomization for police tactical planning

    Publication Year: 2012, Page(s): 13 - 18
    PDF (1272 KB) | HTML

    Law enforcement agencies often rely on crime pattern identification techniques to support their tactical planning. K-function analysis has been one of the most popular crime pattern identification approaches; it has been integrated with point randomization procedures to identify the level of clustering in crimes. One limitation of this integration is that it can only differentiate between a completely random pattern and a clustered point pattern. It is well known that crimes occur only in populated areas and that the distribution of human population is spatially heterogeneous, so a completely random pattern of crimes rarely occurs. The current K-function offers little insight into the clustering level of crimes given prior knowledge of the processes that may have influenced their occurrence. This study integrates two controlled point randomization procedures with K-function analysis to analyze crime patterns. These two approaches are compared against the completely random pattern, and the results indicate that the controlled point randomization procedures reveal detailed information on the processes underlying the point patterns and can take those processes into account when analyzing crimes.
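
    As an illustration of the comparison the abstract describes, the sketch below (not the authors' code; the data, study area and edge-correction-free K estimate are toy assumptions) computes a plain Ripley's K for observed points and contrasts it with Monte Carlo envelopes from complete spatial randomness and from a randomization restricted to populated sites.

    ```python
    # Hypothetical sketch of K-function analysis with a controlled (population-restricted)
    # randomization envelope, as the abstract describes; not the authors' code.
    import numpy as np

    def k_function(points, dists, area):
        """Ripley's K (no edge correction) for a set of 2-D points."""
        n = len(points)
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)                      # exclude self-pairs
        return np.array([area * np.sum(d <= r) / (n * (n - 1)) for r in dists])

    def envelope(n, dists, area, sampler, sims=99):
        """Monte Carlo envelope of K under a given null-model point sampler."""
        ks = np.stack([k_function(sampler(n), dists, area) for _ in range(sims)])
        return ks.min(axis=0), ks.max(axis=0)

    rng = np.random.default_rng(0)
    crimes = rng.uniform(0, 10, size=(200, 2))        # observed crime locations (toy)
    populated = rng.uniform(0, 10, size=(2000, 2))    # e.g. address points / populated cells
    dists = np.linspace(0.1, 2.5, 25)
    area = 100.0

    csr = lambda n: rng.uniform(0, 10, size=(n, 2))                   # complete spatial randomness
    controlled = lambda n: populated[rng.choice(len(populated), n)]   # randomize only over populated sites

    k_obs = k_function(crimes, dists, area)
    lo_csr, hi_csr = envelope(len(crimes), dists, area, csr)
    lo_ctl, hi_ctl = envelope(len(crimes), dists, area, controlled)
    print("clustered vs CSR at some distance:", np.any(k_obs > hi_csr))
    print("clustered vs populated-only null :", np.any(k_obs > hi_ctl))
    ```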

  • Improving service coordination in municipal government with the Shared Data Manager

    Publication Year: 2012, Page(s): 19 - 24
    PDF (632 KB) | HTML

    In this paper, we describe our work in the design and evaluation of a tool, the Shared Data Manager, that enables automatic data sharing for municipal government applications. In an earlier study, we determined that municipal government employees rely heavily on manual methods for data sharing, which are time-consuming and error-prone. We describe in detail our findings from a two-week evaluation of the Shared Data Manager system with municipal employees. Overall, municipal employees found the tool useful for sharing data between departments and customizing data-sharing access controls.

  • Logistics orchestration modeling and evaluation for humanitarian relief

    Publication Year: 2012, Page(s): 25 - 30
    PDF (789 KB) | HTML

    This paper proposes an orchestration model for post-disaster response aimed at automating the coordination of scarce resources so as to minimize the loss of human lives. In our setting, different teams are treated as agents and their activities are "orchestrated" to optimize rescue performance. Simulation results are analysed to evaluate the performance of the optimization model.

  • Online incremental regression for electricity price prediction

    Publication Year: 2012, Page(s): 31 - 35
    PDF (1598 KB) | HTML

    Modeling methods that aim to predict electricity prices accurately should be capable of handling a continuous stream of data while remaining responsive to potential structural changes. To this end, traditional machine learning approaches are widely applied, such as multi-linear regression, Artificial Neural Networks (ANN), time series models like Auto Regressive Moving Average (ARMA) models, Gaussian Processes (GP), random forests and Genetic Algorithms (GA), all of which fall into two categories: parametric and non-parametric models. A practical challenge in forecasting streaming data is structural variation in the test samples, which means the training samples are not necessarily representative of newly arriving samples. In such an online forecasting context, an incremental supervised learning algorithm is better suited than a batch-mode one, because it can adapt to newly arriving streaming data by accommodating possible variations in new samples and allows old data to be removed if necessary. This paper presents an incremental learning algorithm, the online support vector regression model, which requires less memory and less computation than batch methods. Promising results are demonstrated by comparison with other typical regression methods on a publicly available benchmark dataset for the electricity price forecasting task.
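
    Since a kernel online SVR is not part of common libraries, the sketch below is only a hedged stand-in for the approach the abstract describes: it uses scikit-learn's SGDRegressor with an epsilon-insensitive loss (a linear SVR updated incrementally via partial_fit) on a synthetic price stream with a structural break; the features and hyper-parameters are assumptions.

    ```python
    # Illustrative stand-in: a linear SVR trained incrementally on a streaming price series,
    # showing the online-versus-batch contrast the abstract draws. Not the paper's model.
    import numpy as np
    from sklearn.linear_model import SGDRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    hours = np.arange(24 * 60)                       # hypothetical hourly price stream
    prices = 40 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)
    prices[hours > 900] += 15                        # structural change the model must track

    def features(t):
        """Lagged prices plus hour-of-day encodings as regression features."""
        return np.array([prices[t - 1], prices[t - 24],
                         np.sin(2 * np.pi * t / 24), np.cos(2 * np.pi * t / 24)])

    scaler = StandardScaler()
    model = SGDRegressor(loss="epsilon_insensitive", epsilon=0.5,
                         learning_rate="constant", eta0=0.01)

    errors = []
    for t in range(48, hours.size):
        x = features(t).reshape(1, -1)
        x = scaler.partial_fit(x).transform(x)       # incremental scaling of the stream
        if t > 72:                                   # predict once the model has seen some data
            errors.append(abs(model.predict(x)[0] - prices[t]))
        model.partial_fit(x, [prices[t]])            # then update on the newly arrived sample
    print("mean absolute error on the stream: %.2f" % np.mean(errors))
    ```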

  • Load forecasting using Twin Gaussian Process model

    Publication Year: 2012, Page(s): 36 - 41
    Cited by: Papers (3)
    PDF (2452 KB) | HTML

    Load forecasting is an attractive and complicated application of machine learning theory and algorithms. Continuous efforts have been made by both academia and industry, using various methods such as regression, Artificial Neural Networks (ANN), time series models like Auto Regressive Moving Average (ARMA) models, Gaussian Processes (GP) and Genetic Algorithms (GA). Non-parametric models are not widely used in the forecasting domain, yet promising results from recent applications of the Gaussian Process indicate the potential value of this kind of algorithm. In this paper, we describe a recently proposed machine learning algorithm, the Twin Gaussian Process (TGP), and apply it to the load forecasting task. Unlike the standard Gaussian Process model, the Twin Gaussian Process places GP priors on both covariates and responses, and obtains the output via Kullback-Leibler divergence minimization between two GPs modeled as normal distributions over finite index sets of training and testing examples. As a result, TGP is able to account for the correlations of both inputs and outputs. In our case study, TGP is evaluated and compared with other widely used algorithms, and the experimental results show that TGP can be a useful tool for load forecasting.
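
    The sketch below is a rough, hedged reading of the TGP prediction step for a scalar output, following the Kullback-Leibler minimization the abstract describes; the kernels, hyper-parameters and toy load curve are assumptions, not the paper's setup.

    ```python
    # Hedged sketch of a Twin-Gaussian-Process-style prediction for one scalar load value.
    # Data, kernels and hyper-parameters are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def rbf(a, b, ls):
        a, b = np.atleast_2d(a), np.atleast_2d(b)
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * ls ** 2))

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 24, size=(80, 1))                                        # hour of day (toy)
    y = 500 + 120 * np.sin(2 * np.pi * X[:, 0] / 24) + rng.normal(0, 10, 80)    # toy load curve

    lx, ly, jitter = 3.0, 80.0, 1e-6
    Kx = rbf(X, X, lx) + jitter * np.eye(len(X))
    Ky = rbf(y[:, None], y[:, None], ly) + jitter * np.eye(len(X))
    Kx_inv, Ky_inv = np.linalg.inv(Kx), np.linalg.inv(Ky)

    def tgp_predict(x_star):
        kx = rbf([[x_star]], X, lx).ravel()
        u = Kx_inv @ kx                                  # GP weights on the input side
        eta = 1.0 - kx @ Kx_inv @ kx                     # input-side predictive variance
        def objective(y_star):                           # divergence between the two GPs
            ky = rbf([[y_star]], y[:, None], ly).ravel()
            return 1.0 - 2 * ky @ u - eta * np.log(max(1.0 - ky @ Ky_inv @ ky, 1e-12))
        return minimize_scalar(objective, bounds=(y.min(), y.max()), method="bounded").x

    print("predicted load at hour 18: %.1f" % tgp_predict(18.0))
    ```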

  • An effective sequential pattern mining algorithm to support automatic process classification in contact center back office

    Publication Year: 2012, Page(s): 42 - 47
    PDF (922 KB) | HTML

    The contact center and its back office play a pivotal role in delivering excellent services to customers. However, back-office processes and operations are becoming more complex, variable and costly due to frequently changing environments and increasingly staff-intensive work. Automatic process classification and delimitation in the back office is an effective way to address these challenges, but it suffers from very high deployment costs due to complex and burdensome configuration work. In this paper, we propose an effective sequential pattern mining algorithm that generates process patterns automatically, instead of through manual configuration, achieving scalable, efficient and low-cost deployment of automatic process classification and delimitation in the contact center back office.
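
    The snippet below is a generic illustration, not the authors' algorithm: it mines frequent contiguous sub-sequences of activities per process type from hypothetical back-office event logs and classifies a new sequence by pattern overlap, to show the kind of configuration-free classification the abstract targets.

    ```python
    # Generic illustration of sequential-pattern-based process classification (toy data).
    from collections import Counter

    def frequent_patterns(sequences, min_support=0.5, max_len=3):
        """Contiguous sub-sequences (n-grams) occurring in >= min_support of the logs."""
        counts = Counter()
        for seq in sequences:
            grams = {tuple(seq[i:i + n]) for n in range(2, max_len + 1)
                     for i in range(len(seq) - n + 1)}
            counts.update(grams)                      # count each pattern once per log
        return {p for p, c in counts.items() if c / len(sequences) >= min_support}

    # Hypothetical labelled event logs collected from the back office.
    logs = {
        "claim_processing": [["open", "verify", "assess", "approve", "close"],
                             ["open", "verify", "assess", "reject", "close"]],
        "address_change":   [["open", "validate_id", "update_record", "close"],
                             ["open", "validate_id", "update_record", "notify", "close"]],
    }
    signatures = {name: frequent_patterns(seqs) for name, seqs in logs.items()}

    def classify(seq):
        grams = {tuple(seq[i:i + n]) for n in range(2, 4) for i in range(len(seq) - n + 1)}
        return max(signatures, key=lambda name: len(grams & signatures[name]))

    print(classify(["open", "verify", "assess", "approve", "close"]))   # -> claim_processing
    ```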

  • Recourse aware resource allocation for contingency planning in distributed service delivery

    Publication Year: 2012, Page(s): 48 - 53
    Cited by: Papers (2)
    PDF (656 KB) | HTML

    Remote delivery of services from geographically distributed service delivery locations has emerged as a popular and viable business model. Examples of services delivered in this manner are software services, business process outsourcing services, customer support centers, etc. The very nature of services and the fragile nature of the business environments in some delivery locations accentuate the need for business continuity. A key aspect of enabling business continuity is the ability, at the time of a disruptive event, to reroute the services delivered from affected locations to unaffected locations while meeting their resource requirements. Such rerouting is called recourse. We highlight the need for recourse-aware resource allocation, study this problem from a computational viewpoint, present a new recourse-aware resource allocation heuristic, and experimentally compare it to traditional resource allocation methods.

  • General Enterprise Framework (GEF)

    Publication Year: 2012, Page(s): 54 - 59
    PDF (1070 KB) | HTML

    We present a technique in the domain of enterprise business architecture. It is called GEF (General Enterprise Framework) because it is intended to be universal and to apply to the whole enterprise. GEF is a grid in which the activities of business processes are classified into five levels: Plan, Execute, Monitor, Control, and Manage information. In an ideal situation, all levels are defined and computerized. By surveying the current situation, management can check to what extent the organization's business processes (a) cover the levels (completeness) and (b) are IT-supported (computerization). This twofold assessment enables a fit-gap analysis and a maturity appraisal. The novelties of GEF are its agility and universality, as well as the completeness of the analysis. A test on real projects showed that GEF is easy to use and effective.

  • Discovery of generalized spatial association rules

    Publication Year: 2012, Page(s): 60 - 65
    PDF (886 KB) | HTML

    Spatial association rule mining is an important technique in spatial data mining and business intelligence. Nevertheless, traditional spatial association rule mining approaches have a significant limitation: they cannot effectively involve and exploit non-spatial information. As a result, many interesting rules mixing spatial and non-spatial information, which provide extra insights and reveal hidden patterns, cannot be found. In this paper, we propose a novel approach to discover Generalized Spatial Association Rules (GSAR), which are capable of expressing richer information including not only spatial information but also non-spatial and taxonomy information about spatial objects. Meanwhile, the additional computation introduced has only linear time complexity. A case study on a real crime dataset shows that the proposed approach discovers many interesting and meaningful crime patterns that traditional approaches cannot find at all.
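
    As a rough illustration of the kind of mixed rules the abstract targets (not the GSAR algorithm itself), the sketch below runs a generic Apriori (via mlxtend, assumed available) over transactions that combine spatial predicates, non-spatial attributes and taxonomy ancestors for toy crime incidents.

    ```python
    # Illustrative only: generic association rule mining over mixed spatial / non-spatial /
    # taxonomy items; it is not the authors' GSAR algorithm.
    import pandas as pd
    from mlxtend.preprocessing import TransactionEncoder
    from mlxtend.frequent_patterns import apriori, association_rules

    # Each crime incident becomes one transaction: spatial predicates + non-spatial items
    # + taxonomy generalizations (e.g. "theft" is also "property_crime").
    incidents = [
        ["near(bar)", "near(bus_stop)", "night", "assault", "violent_crime"],
        ["near(bar)", "night", "assault", "violent_crime"],
        ["near(park)", "day", "theft", "property_crime"],
        ["near(bar)", "near(bus_stop)", "night", "robbery", "violent_crime"],
        ["near(park)", "near(bus_stop)", "day", "theft", "property_crime"],
    ]

    te = TransactionEncoder()
    df = pd.DataFrame(te.fit_transform(incidents), columns=te.columns_)
    frequent = apriori(df, min_support=0.4, use_colnames=True)
    rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
    print(rules[["antecedents", "consequents", "support", "confidence"]])
    ```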

  • Relational rule learning in decoupled heterogeneous subspaces

    Publication Year: 2012, Page(s): 66 - 71
    PDF (753 KB) | HTML

    The service business now plays an increasingly important role in the real-world economy. This has stimulated the analytic requirement of generating insight from structural and interrelated service data, so as to improve service operation and management excellence. In this paper, we propose a novel multi-relational classification algorithm, RSCC (Relational Subspace Collaborative Classification). RSCC restructures the relational dataset into a set of decoupled semantic-level subspaces while keeping the heterogeneity of the relational data, and employs a heuristic rule learning strategy that effectively searches globally for the best predicates. Our experiments on multiple benchmark datasets demonstrate its performance and efficiency.

  • Data validation for business continuity planning

    Publication Year: 2012, Page(s): 72 - 77
    PDF (818 KB) | HTML

    In this paper we present a system and case study for business data validation in large organizations. Validated and consistent data provides the capability to handle outages and incidents in a more principled fashion and helps business continuity. Typically, different business units employ separate systems to produce and store their data, and the data owners choose their own database technology, so keeping data consistent across business units in the organization is a non-trivial task. The non-availability of consistent data can lead to suboptimal planning during outages, and organizations can incur huge financial costs. A traditional custom data validation system fetches data from various sources and flows it through a central validation system, resulting in large data transfer costs. Moreover, accommodating changes in business rules is a laborious process that can require re-design and re-development of the system, which is costly and time-consuming. In this paper, we employ a metadata-driven, rule-based data validation system that is domain independent, distributed, scalable and can easily accommodate changes in business requirements. We have deployed our system in real-life settings and present some of the results in this paper.
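
    A minimal sketch of the metadata-driven idea, with hypothetical field names and rules: validation rules are plain data that a generic engine interprets, so a new business rule is a metadata change rather than a code change.

    ```python
    # Minimal sketch of a metadata-driven rule engine; rules could be loaded from JSON or a DB.
    import re

    RULES = [  # metadata: rule type + parameters, editable without touching engine code
        {"field": "cost_center", "check": "regex",    "pattern": r"^CC-\d{4}$"},
        {"field": "headcount",   "check": "range",    "min": 0, "max": 10000},
        {"field": "site_id",     "check": "required"},
    ]

    CHECKS = {
        "required": lambda value, rule: value not in (None, ""),
        "regex":    lambda value, rule: value is not None
                    and re.match(rule["pattern"], str(value)) is not None,
        "range":    lambda value, rule: value is not None
                    and rule["min"] <= value <= rule["max"],
    }

    def validate(record, rules=RULES):
        """Return the list of rule violations for one business-unit record."""
        return [rule for rule in rules
                if not CHECKS[rule["check"]](record.get(rule["field"]), rule)]

    record = {"cost_center": "CC-12", "headcount": 150, "site_id": "BLR-7"}
    for violation in validate(record):
        print("failed:", violation)        # the cost_center value fails the regex rule
    ```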

  • Automated selection of blocking columns for record linkage

    Publication Year: 2012, Page(s): 78 - 83
    PDF (776 KB) | HTML

    Record linkage is an essential but expensive step in enterprise data management. In most deployments, blocking techniques are employed to reduce the number of record-pair comparisons and hence the computational complexity of the task. Blocking algorithms require a careful selection of the column(s) to be used for blocking. Selection of an appropriate blocking column is critical to the accuracy and speed-up offered by the blocking technique, and normally requires intervention by data quality practitioners, who exploit prior domain knowledge to analyse a small sample of the huge database and decide on the blocking column(s). However, the selection of optimal blocking column(s) can depend heavily on the quality of the data and requires extensive analysis by an experienced data quality practitioner. In this paper, we present a data-driven approach to automatically choose blocking column(s), motivated by the modus operandi of data quality practitioners. Our approach produces a ranked list of columns by evaluating their appropriateness for blocking on the basis of factors including data quality and distribution. We evaluate our choice of blocking columns through experiments on real-world and synthetic datasets, and extend our approach to scenarios where more than one column can be used for blocking.
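
    A hedged sketch of the idea only (the paper's actual scoring function is not reproduced here): candidate blocking columns are ranked by completeness and by how much they reduce the number of candidate pairs, two of the data-quality and distribution considerations the abstract mentions.

    ```python
    # Illustrative ranking of candidate blocking columns on a toy table.
    import numpy as np
    import pandas as pd

    def blocking_scores(df):
        rows = []
        n = len(df)
        for col in df.columns:
            values = df[col].dropna()
            sizes = values.value_counts()
            completeness = len(values) / n                     # fraction of non-missing values
            p = sizes / sizes.sum()
            entropy = float(-(p * np.log(p)).sum())            # higher = more, smaller blocks
            pair_reduction = 1 - (sizes * (sizes - 1)).sum() / max(n * (n - 1), 1)
            rows.append((col, completeness, entropy, pair_reduction))
        out = pd.DataFrame(rows, columns=["column", "completeness", "entropy", "pair_reduction"])
        return out.sort_values(["pair_reduction", "completeness"], ascending=False)

    df = pd.DataFrame({                                        # toy customer table
        "zip":        ["110001", "110001", "560034", "560034", None, "400001"],
        "first_name": ["amit", "amit", "priya", "priya", "raj", "amit"],
        "gender":     ["M", "M", "F", "F", "M", "M"],
    })
    print(blocking_scores(df))   # 'zip' should rank above 'gender'
    ```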

  • Data consolidation solution for internal security needs

    Publication Year: 2012, Page(s): 84 - 89
    Cited by: Papers (1)
    PDF (1182 KB) | HTML

    The threats of the 21st century are too complex, difficult and time-consuming to discern with traditional intelligence practices that shun advances in information technology and rely heavily on human experts. Good information is fundamental to understanding and responding to 21st-century national security threats. Without comprehensive information, decision-makers operate with a limited understanding of the threat horizon and of the best means to address it. The required information exists across a variety of proprietary and open sources, yet the volume of data that might potentially contain relevant facts is simply too large and the bandwidth of trained analysts is limited. Such information must be available in time-critical situations to quickly connect the dots across related pieces of information. It is imperative that decision-makers be provided with intelligent tools that can automatically extract new relevant information from data without being explicitly asked, leading to actionable intelligence. To overcome these challenges we propose an information collection, management and analysis framework to meet the ever-growing threats to national security. The proposed framework establishes a collaborative environment to semi-automatically generate actionable intelligence by ensuring that the right people have access to all-inclusive information at the right time. The core of this framework is to create a single view of an entity by correlating information from different sources stored in different formats. These sources can include passport, immigration, driving license and FIR records, as well as telecom and utility services. The correlation algorithm is able to handle varying amounts of noise in the data, such as syntactic and semantic variations, format changes, spelling errors, incomplete data, regional and linguistic variations, and the addition or removal of fields. The framework can further exploit the consolidated view to discover relationships between entities, thus expanding the reach for relevant information. The framework provides multiple avenues of interaction and the foresight needed to incorporate new sources of data as they arise in the future.
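
    A toy illustration of the single-view-of-an-entity idea, not the proposed framework: two hypothetical sources are correlated with fuzzy name similarity plus an exact date-of-birth match; the thresholds and fields are assumptions.

    ```python
    # Toy record correlation across two noisy sources using fuzzy name matching.
    from difflib import SequenceMatcher

    passport = [
        {"id": "P1", "name": "Rajesh Kumar", "dob": "1979-03-12"},
        {"id": "P2", "name": "Anita Sharma", "dob": "1985-11-02"},
    ]
    telecom = [
        {"id": "T7", "name": "Rajes Kumaar", "dob": "1979-03-12"},   # spelling variations
        {"id": "T9", "name": "A. Sharma",    "dob": "1985-11-02"},
    ]

    def name_sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def correlate(src_a, src_b, threshold=0.6):
        """Link record pairs whose DOB matches and whose names are similar enough."""
        links = []
        for ra in src_a:
            for rb in src_b:
                if ra["dob"] == rb["dob"] and name_sim(ra["name"], rb["name"]) >= threshold:
                    links.append((ra["id"], rb["id"], round(name_sim(ra["name"], rb["name"]), 2)))
        return links

    print(correlate(passport, telecom))   # [('P1', 'T7', ...), ('P2', 'T9', ...)]
    ```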

  • Managing data quality by identifying the noisiest data samples

    Publication Year: 2012, Page(s): 90 - 95
    PDF (607 KB) | HTML

    Enterprise datasets are often noisy: several columns can have non-standard, erroneous or missing information. Poor-quality data can lead to incorrect reporting and wrong conclusions being drawn. Data cleansing involves standardizing such data to improve its quality. Data cleansing tasks often involve writing rules manually, which requires understanding the data quality issues and then writing data transformation rules to correct them; this is a human-intensive task. In this study we propose a method to identify noisy subsets of huge unlabelled textual datasets. This is a two-step process in which the first step develops an estimation tool to predict the data quality of an unlabelled text dataset as produced by a segmentation model. The accuracy of the proposed method is shown on a real-life dataset.

  • A new B2B platform based on cloud computing

    Publication Year: 2012, Page(s): 96 - 101
    PDF (1481 KB) | HTML

    With the fast development of the Internet and the scale of its data, B2B (Business to Business) commerce, whose speed and high-availability advantages are built on the Internet, is eroding more and more of the market share of traditional business. In recent years, new data processing technologies such as cloud computing have greatly enhanced computing capacity, making it possible for researchers to process and analyze massive business data, including large-scale customer data and large-scale commodity information. A new B2B platform framework based on cloud computing, with shorter running time and better response efficiency, is proposed here to improve transaction handling efficiency. By adopting complex network theory and cloud computing technology, the new platform has been evaluated to decrease transaction handling time and deliver greater benefits.

  • Mining paths and transactions data to improve allocating commodity shelves in supermarket

    Publication Year: 2012, Page(s): 102 - 106
    PDF (854 KB) | HTML

    How to allocate commodities to different shelves in a supermarket so as to obtain better profit for merchants while considering convenience for customers is an important topic in the retail area. In this paper, we present a new method for allocating commodity shelves in a supermarket based on mining customers' shopping paths and transaction data. Shopping-path data can be obtained from shopping carts or baskets on which RFID (Radio Frequency Identification) tags are located, and shopping transaction data can be obtained from POS (Point of Sale) machines. By integrating and mining the frequent-path data and transaction data, a See-Buy Rate, the approximate probability that customers purchase a commodity when they see it, can be calculated for each commodity. Based on the See-Buy Rate, we build a benefit optimization model to obtain the optimal allocation, considering the profit, sales volume and purchase probability of each commodity. Finally, a computational example illustrates how to apply this method in practice.
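
    A small sketch of the See-Buy Rate computation under toy data assumptions: exposures come from RFID shopping-path data, purchases from POS transactions, and the rate is purchases per exposure. The final allocation step here is a naive greedy assignment rather than the paper's benefit optimization model.

    ```python
    # Toy See-Buy Rate computation from shopping paths and POS baskets.
    from collections import Counter

    # Hypothetical path data: shelves each cart passed, and POS baskets per customer.
    paths = {"c1": ["A", "B", "C"], "c2": ["A", "C"], "c3": ["B", "C"], "c4": ["A", "B", "C"]}
    baskets = {"c1": ["milk"], "c2": ["milk", "bread"], "c3": ["bread"], "c4": ["milk"]}
    shelf_of = {"milk": "A", "bread": "B"}                 # current commodity-to-shelf layout
    shelf_traffic = Counter(s for p in paths.values() for s in p)

    def see_buy_rate(item):
        sees = sum(1 for c, p in paths.items() if shelf_of[item] in p)   # customers who passed the shelf
        buys = sum(1 for c, b in baskets.items() if item in b and shelf_of[item] in paths[c])
        return buys / sees if sees else 0.0

    rates = {item: see_buy_rate(item) for item in shelf_of}
    print("See-Buy Rates:", rates)

    # Naive reallocation: give the commodity with the highest rate the busiest shelf.
    busiest_first = [s for s, _ in shelf_traffic.most_common()]
    for shelf, item in zip(busiest_first, sorted(rates, key=rates.get, reverse=True)):
        print(f"place {item} on shelf {shelf}")
    ```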

  • A process definition language for Internet of things

    Publication Year: 2012, Page(s): 107 - 110
    Cited by: Papers (1)
    PDF (872 KB) | HTML

    In the Internet of Things, many instruments and sensors connect to the Internet and can be controlled through it to achieve a Smart Planet. One of the key challenges is to integrate instruments and sensors into business processes. SOA is an ideal infrastructure for business process management, and many business process definition languages are now available for process orchestration. Instrument functions are encapsulated as web services and can be organized alongside other web services. However, these device-oriented web services differ from common web services because devices cannot be controlled by more than one client at the same time, a constraint that traditional process definition languages cannot express. In this paper, a new process definition language for the Internet of Things is presented. It is an XML-based language whose structure includes three sections: services, sequence and vars. The services section describes service information; both device-oriented web services and common web services are described there. The sequence section is the main body of the process and contains the specific business process of the IoT application; it enables executing several commands in order.
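
    The snippet below is an illustrative guess at such a document, based only on the three sections named in the abstract; the element and attribute names are assumptions, not the paper's schema. The tiny interpreter also shows the exclusive-control constraint for device-oriented services.

    ```python
    # Hypothetical services/sequence/vars document plus a minimal interpreter.
    import xml.etree.ElementTree as ET

    PROCESS = """
    <process>
      <vars><var name="threshold" value="30"/></vars>
      <services>
        <service name="readTemp" type="device" endpoint="http://sensor.example/read"/>
        <service name="logValue" type="common" endpoint="http://app.example/log"/>
      </services>
      <sequence>
        <invoke service="readTemp"/>
        <invoke service="logValue"/>
      </sequence>
    </process>
    """

    root = ET.fromstring(PROCESS)
    services = {s.get("name"): s.attrib for s in root.find("services")}
    locked_devices = set()                         # device services allow only one client at a time

    for step in root.find("sequence"):
        svc = services[step.get("service")]
        if svc["type"] == "device":
            if svc["name"] in locked_devices:
                raise RuntimeError(f"device service {svc['name']} is already in use")
            locked_devices.add(svc["name"])        # acquire exclusive control of the device
        print("invoking", svc["name"], "at", svc["endpoint"])
    ```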

  • Privacy protection for personal data integration and sharing in care coordination services: A case study on wellness cloud

    Publication Year: 2012, Page(s): 111 - 116
    PDF (727 KB) | HTML

    Care coordination services bring together a multitude of providers to deliver continuity of care outside clinical settings. The coordinated services improve wellness management and operational outcomes but pose privacy challenges when integrating multiple sources of personal health data and providing a data access and sharing mechanism to third-party providers. In this paper, we address the privacy challenges associated with data integration and sharing in a multi-tenant cloud environment for healthcare. We present three care coordination use cases and detail the functional requirements across different stages of a personal data service cycle. Additionally, reflecting on the technical challenges associated with privacy-preserving data integration and sharing, we introduce a set of common data services to handle these issues, which ultimately support the development of accountable coordinated care services.

  • RFID-Enabled Dynamic Value Stream Mapping

    Publication Year: 2012, Page(s): 117 - 122
    Cited by: Papers (1)
    PDF (1168 KB) | HTML

    Value Stream Mapping (VSM) is one of the most powerful lean manufacturing tools, used for quick analysis of product and information flow through a manufacturing system from door to door. This versatile, powerful method visualizes product flows as a snapshot; it describes production behavior only within a specific period of the total production time, which can sometimes be misleading during lean-tools implementation. In this paper, a real-time data collection technology, Radio Frequency Identification (RFID), is integrated into VSM to collect object information throughout a manufacturing system on the production floor. The integrated RFID-VSM system, called Dynamic Value Stream Mapping (DVSM), provides real-time data so that VSM can interact with the processes, people, material and any other constraints relevant to the production situation. As time progresses and operations are producing, workers and managers can interact live with the animated flow as queues build up, inventory depletes and people move. Based on the real situation, managers can make the right decision at the right time, and workers can make changes to processing capacity, labor requirements, flow and cell layout to optimize and design or develop the future state.

  • Optimal control of HVAC operations based on sensor data

    Publication Year: 2012, Page(s): 123 - 128
    PDF (1672 KB) | HTML

    We present a comprehensive approach for leveraging sensor networks to improve HVAC (Heating, Ventilation, and Air Conditioning) services in terms of occupants' preferences as well as sustainability. A two-step approach is presented, with a data-driven model estimation for each HVAC configuration and an optimization step taking into account dynamic trade-offs between the changing preferences of occupants and energy usage, with possibly time-varying penalty constants. We further enable consideration of potentially available forecasts of relevant variables such as outside temperatures. Application of the suggested approach is illustrated with a case study, and benefits as well as limitations are discussed.
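
    A hedged sketch of the two-step idea, with a hand-coded first-order thermal model standing in for the data-driven estimate: cooling setpoints over a short horizon are optimized against energy cost plus a time-varying comfort penalty, using an assumed outside-temperature forecast. All constants are illustrative.

    ```python
    # Toy horizon optimization trading energy against time-varying comfort penalties.
    import numpy as np
    from scipy.optimize import minimize

    horizon = 12                                              # hours ahead
    t_out = np.linspace(30, 24, horizon)                      # forecast outside temperature (deg C)
    comfort_w = np.where(np.arange(horizon) < 9, 2.0, 0.2)    # occupied hours penalised more
    t_pref, t0 = 23.0, 27.0

    def simulate(cooling):
        """First-order thermal model: indoor temperature under a given cooling schedule (kW)."""
        t_in, t = np.empty(horizon), t0
        for k in range(horizon):
            t = t + 0.2 * (t_out[k] - t) - 0.5 * cooling[k]
            t_in[k] = t
        return t_in

    def cost(cooling):
        t_in = simulate(cooling)
        return 0.15 * cooling.sum() + (comfort_w * (t_in - t_pref) ** 2).sum()

    res = minimize(cost, x0=np.ones(horizon), bounds=[(0.0, 5.0)] * horizon, method="L-BFGS-B")
    print("cooling schedule (kW):", np.round(res.x, 2))
    print("indoor temperatures:  ", np.round(simulate(res.x), 1))
    ```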

  • A traffic-network-model-based algorithm for short-term prediction of urban traffic flow

    Publication Year: 2012, Page(s): 129 - 132
    PDF (839 KB) | HTML

    In the research field of Intelligent Transportation Systems (ITS), traffic flow prediction is a key technology for traffic guidance and advanced control strategies; accuracy and immediacy are the main requirements for prediction methods. This paper presents a short-term prediction algorithm for the traffic flow rate based on a macroscopic urban road network model. By classifying the network into typical elements, a traffic road network can be expressed as a matrix. Taking intersections and their links as the basic research objects, the proposed prediction method needs only a few real traffic parameters obtained from loop detectors to realize accurate short-term prediction of the traffic flow rate, and it is adaptable to different kinds of road networks. In the case study, a real traffic system is simulated with the microscopic traffic simulation platform CORSIM. In the given road-network simulation environment, the experimental results illustrate that the proposed prediction algorithm can accurately predict the flow rate in the short term.

  • Artificial power system based on ACP approach

    Publication Year: 2012, Page(s): 133 - 137
    Cited by: Papers (1)
    PDF (1047 KB) | HTML

    The modern power system is a typical multi-level complex giant system consisting of physical infrastructure, human operators, social resources, etc. Conventional analytical methods and simulation systems cannot provide sufficient guidance for its operation and management, because they are mainly based on physical models, natural phenomena, or other existing control methods rooted in reductionism. The ACP approach, consisting mainly of artificial systems (A), computational experiments (C) and parallel execution (P), and based on holism and complex systems theory, has specific advantages for research on power systems. In this article, the ACP approach is applied to build an artificial power system (APS) using multi-agent complex networks. With the help of the APS, an actual power system's control, scheduling, optimization and management can be further improved by providing theoretical guidance and technical support for rolling optimization in normal situations and emergency management in abnormal situations. As a case study, an APS is constructed with actual data from the North China power grid, and its vulnerability is simulated and analyzed under random, dynamic and static attacks.
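
    The snippet below is illustrative only and uses a synthetic network in place of actual grid data: it compares how the giant connected component shrinks under random versus static targeted (highest-degree) node attacks, the kind of vulnerability analysis the abstract reports for the North China grid APS.

    ```python
    # Toy vulnerability analysis on a synthetic stand-in for a power grid topology.
    import random
    import networkx as nx

    def attack(g, order, steps=30):
        """Remove nodes in the given order; track the surviving giant-component fraction."""
        g, n0, sizes = g.copy(), g.number_of_nodes(), []
        for node in order[:steps]:
            g.remove_node(node)
            sizes.append(round(max(len(c) for c in nx.connected_components(g)) / n0, 3))
        return sizes

    grid = nx.barabasi_albert_graph(300, 2, seed=1)       # stand-in topology, not real grid data
    random_order = random.Random(1).sample(list(grid.nodes), grid.number_of_nodes())
    targeted_order = sorted(grid.nodes, key=grid.degree, reverse=True)

    print("random attack  :", attack(grid, random_order)[::10])
    print("targeted attack:", attack(grid, targeted_order)[::10])
    ```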

  • Research on intelligent scientific research collaboration platform and taking journal intelligence system as example

    Publication Year: 2012, Page(s): 138 - 143
    PDF (1472 KB) | HTML

    Currently, research issues are becoming increasingly global and complex. To build more professional and comprehensive problem-solving capability, this paper proposes that academic intelligence, journal intelligence, conference intelligence, paper intelligence and so on be integrated to establish an intelligent scientific research collaboration platform. Taking the system's application at Science and Technology Review as an example, the scientific research collaboration process is carried out to verify the effectiveness of the system. In conclusion, the scientific research collaboration platform can satisfy the comprehensive needs of effectively acquiring large amounts of information, launching scientific research collaboration and facilitating academic communication.
