
2011 Fifth UKSim European Symposium on Computer Modeling and Simulation (EMS)

Date: 16-18 Nov. 2011


Displaying Results 1 - 25 of 99
  • [Front cover]

    Page(s): C1
    PDF (569 KB)
    Freely Available from IEEE
  • [Title page i]

    Page(s): i
    PDF (76 KB)
    Freely Available from IEEE
  • [Title page iii]

    Page(s): iii
    PDF (142 KB)
    Freely Available from IEEE
  • [Copyright notice]

    Page(s): iv
    PDF (116 KB)
    Freely Available from IEEE
  • Table of contents

    Page(s): v - xii
    PDF (150 KB)
    Freely Available from IEEE
  • Welcome message from the Chairs

    Page(s): xiii
    PDF (158 KB)
    Freely Available from IEEE
  • Conference organization

    Page(s): xiv
    PDF (104 KB)
    Freely Available from IEEE
  • International Program Committee

    Page(s): xv
    PDF (77 KB)
    Freely Available from IEEE
  • International Reviewers

    Page(s): xvi
    PDF (133 KB)
    Freely Available from IEEE
  • Keynote Addresses

    Page(s): xvii - xxii
    PDF (390 KB)

    These keynote addresses discuss the following: energy, growth and simulation; signal processing challenges in brain-computer interfacing; and the classification and comparison of modeling approaches and simulation techniques based on benchmarks.

    Full text access may be available to subscribers.
  • Plenary Presentation

    Page(s): xxiii - xxiv
    PDF (241 KB)
    Freely Available from IEEE
  • Technical Sponsors

    Page(s): xxv
    PDF (104 KB)
    Freely Available from IEEE
  • An Improved Artificial Weed Colony for Continuous Optimization

    Page(s): 1 - 5
    PDF (463 KB) | HTML

    In this paper, after a literature review, we concentrate on the standard deviation of the normal distribution function that invasive weed optimization uses to distribute the seeds of each weed over the search space. Although invasive weed optimization is an effective algorithm for solving real-world optimization problems, it has a serious drawback in how it distributes seeds over the search space. A new concept for distributing the seeds of each weed over the search space is presented, which increases the robustness and effectiveness of the algorithm and therefore leads to an improved invasive weed optimization. Simulations on a set of unconstrained benchmark functions reveal that the proposed algorithm converges more quickly and finds better solutions than the original invasive weed optimization.

    Full text access may be available to subscribers.
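The seed-dispersal step this abstract refers to can be sketched with the textbook invasive weed optimization schedule, in which the dispersal standard deviation shrinks nonlinearly over iterations. The parameter values below are illustrative, not taken from the paper.

```python
import random

def iwo_sigma(t, t_max, sigma_init=3.0, sigma_final=0.01, n=3):
    """Dispersal standard deviation shrinks from sigma_init to sigma_final
    as the iteration count t approaches t_max (textbook IWO schedule)."""
    return ((t_max - t) / t_max) ** n * (sigma_init - sigma_final) + sigma_final

def disperse_seeds(weed, num_seeds, t, t_max):
    """Scatter seeds around a parent weed with normally distributed offsets
    whose spread follows the schedule above."""
    sigma = iwo_sigma(t, t_max)
    return [[x + random.gauss(0.0, sigma) for x in weed] for _ in range(num_seeds)]

parent = [0.5, -1.2]
seeds = disperse_seeds(parent, num_seeds=4, t=10, t_max=100)
```

The paper's contribution is a modified choice of this standard deviation; the schedule above is the original form it improves upon.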
  • Use of Clustering and Interpolation Techniques for the Time-Efficient Simulation of Complex Models within Optimization Tasks

    Page(s): 6 - 11
    PDF (706 KB) | HTML

    Several widely used model optimization techniques, such as genetic algorithms, rely on intelligently testing different configurations of input variables. These variables are fed to an arbitrary model and their effect is evaluated in terms of the output variables, in order to identify their optimal values according to predetermined criteria. Unfortunately, some models concern real-world phenomena that involve a large number of input and output variables with complex interactions. Consequently, the simulations can be so time-consuming that their use within an optimization procedure is unaffordable. To overcome this problem and reduce the simulation time required to run the model within the optimization task, a novel method based on the combination of clustering and interpolation techniques is proposed. The technique relies on a set of pre-run simulations of the original model, which are first used to cluster the input space and to assign a suitable output value to each cluster. Subsequently, in the simulation phase, an ad hoc interpolation is performed to provide the final simulation results. The proposed method has been tested on a complex model of a blast furnace within an optimization problem and has obtained good results in terms of accuracy and time-efficiency of the simulation.

    Full text access may be available to subscribers.
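The pre-run/cluster/interpolate pipeline described above can be sketched roughly as follows. Plain k-means and inverse-distance weighting stand in for the paper's unspecified clustering and "ad hoc" interpolation, so this is only a minimal illustration of the idea.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def build_surrogate(inputs, outputs, k=2, iters=20):
    """Cluster pre-run simulation inputs (plain k-means) and attach the
    mean simulated output of its members to each cluster centroid."""
    centroids = inputs[:k]
    vals = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        vals = [[] for _ in range(k)]
        for x, y in zip(inputs, outputs):
            j = min(range(k), key=lambda j: dist(x, centroids[j]))
            groups[j].append(x)
            vals[j].append(y)
        centroids = [mean(g) if g else centroids[j] for j, g in enumerate(groups)]
    cluster_out = [sum(v) / len(v) if v else 0.0 for v in vals]
    return centroids, cluster_out

def surrogate_predict(x, centroids, cluster_out, eps=1e-9):
    """Inverse-distance-weighted interpolation over the cluster outputs,
    replacing a full (slow) simulation run."""
    w = [1.0 / (dist(x, c) + eps) for c in centroids]
    return sum(wi * yi for wi, yi in zip(w, cluster_out)) / sum(w)
```

Once built from pre-run simulations, `surrogate_predict` answers each query in microseconds, which is what makes embedding it inside an optimization loop affordable.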
  • Genetic-Neuro-Fuzzy Controllers for Second Order Control Systems

    Page(s): 12 - 17
    PDF (227 KB) | HTML

    Overshoot, settling time and rise time define the timing parameters of a control system. The main challenge is to reduce these parameters in order to achieve good control performance; the target is to obtain the optimal timing values. In this paper, three approaches to improving the control performance of second-order control systems are presented. The first is the design of a PID controller based on the Ziegler-Nichols tuning formula. The second is a fuzzy controller optimized through genetic algorithms, in which the best membership functions are chosen with the help of the Darwinian theory of natural selection. The third approach uses neural networks to obtain adaptive neuro-fuzzy controllers, giving the fuzzy controller self-tuning capability. The results show that the designed PID controller has a very slow rise time. The genetic-fuzzy controller gives good values of overshoot and settling time. The best overall results are achieved by the neuro-fuzzy controller, which presents good values of overshoot, settling time and rise time. Moreover, our neuro-fuzzy controller improves on the results of some conventional PID and fuzzy controllers.

    Full text access may be available to subscribers.
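The first approach, Ziegler-Nichols PID tuning, can be sketched with the classic closed-loop rules. These are the textbook formulas (gains from the ultimate gain Ku and oscillation period Tu), not necessarily the exact variant used in the paper.

```python
def ziegler_nichols_pid(Ku, Tu):
    """Classic Ziegler-Nichols closed-loop tuning: Ku is the ultimate gain
    at which the plant oscillates, Tu the oscillation period."""
    Kp = 0.6 * Ku
    Ti = 0.5 * Tu        # integral time
    Td = 0.125 * Tu      # derivative time
    return Kp, Kp / Ti, Kp * Td   # (Kp, Ki, Kd)

class PID:
    """Discrete PID controller using the gains above."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

kp, ki, kd = ziegler_nichols_pid(Ku=2.0, Tu=1.0)
controller = PID(kp, ki, kd, dt=0.01)
```

Ziegler-Nichols gains are a starting point; the paper's point is precisely that such fixed tuning can leave the rise time slow, motivating the fuzzy and neuro-fuzzy alternatives.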
  • An Intelligent Collaborative E-learning Strategy

    Page(s): 18 - 23
    PDF (301 KB) | HTML

    Evaluating learners' collaboration capabilities in a collaborative e-learning session can be quantified using several parameters, including the learner's contribution to the session, the learner's collaboration history across sessions, and the collaboration capabilities of the group in which the learner participates. These parameters can be used as inputs to a model that generates a set of recommendations for the next step to take towards the e-learning objectives; the recommendations can then be used to enroll the learner in the e-learning session that best fits his or her collaboration capabilities. In this paper, a fuzzy inference system is introduced to estimate a learner's collaboration in terms of a proposed metric called the collaboration index. This index can guide the e-learning system in advising the learner on the best choice of the next collaborative e-learning session. A simulation study has been carried out to evaluate the performance of the proposed strategy, and the results show that it has a positive impact on collaborative e-learning environments.

    Full text access may be available to subscribers.
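A collaboration index of this kind can be illustrated with a toy fuzzy inference step. The membership functions, rule base, and output levels below are invented for illustration; the paper's actual fuzzy inference system is not specified here.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def collaboration_index(contribution, history):
    """Weighted-average (Sugeno-style) inference over four toy rules.
    Both inputs are normalized to [0, 1]."""
    low_c, high_c = tri(contribution, -1, 0, 1), tri(contribution, 0, 1, 2)
    low_h, high_h = tri(history, -1, 0, 1), tri(history, 0, 1, 2)
    rules = [                                # (firing strength, output level)
        (min(high_c, high_h), 1.0),          # both high -> high index
        (min(high_c, low_h), 0.6),
        (min(low_c, high_h), 0.5),
        (min(low_c, low_h), 0.1),            # both low -> low index
    ]
    num = sum(s * y for s, y in rules)
    den = sum(s for s, _ in rules)
    return num / den if den else 0.0
```

The inferred index, a single number in [0, 1], is the kind of quantity the e-learning system could use to match a learner to a session.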
  • Bimodal System for Emotion Recognition from Facial Expressions and Physiological Signals Using Feature-Level Fusion

    Page(s): 24 - 29
    PDF (586 KB) | HTML

    This paper presents an automatic approach to emotion recognition using a bimodal system based on facial expressions and physiological signals. Information fusion combines the information from both modalities. We tested two approaches: the first is based on mutual information, which allows the selection of relevant features; the second is based on principal component analysis, which transforms the data into another space. The results obtained using both modalities are better than those obtained using each modality separately.

    Full text access may be available to subscribers.
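Feature-level fusion with PCA, the second approach mentioned above, can be sketched generically: concatenate the two modalities' feature vectors and project onto the top principal components. The feature extractors and dimensions below are placeholders, not the paper's.

```python
import numpy as np

def pca_fuse(facial, physio, n_components=2):
    """Feature-level fusion sketch: stack the two modalities' features
    per sample, then project onto the top principal components
    (generic PCA via SVD, not the paper's exact pipeline)."""
    X = np.hstack([facial, physio])          # one row per sample
    Xc = X - X.mean(axis=0)                  # center each feature
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # fused low-dimensional features

# placeholder features: 10 samples, 5 facial and 3 physiological dimensions
rng = np.random.default_rng(0)
facial = rng.normal(size=(10, 5))
physio = rng.normal(size=(10, 3))
fused = pca_fuse(facial, physio)
```

The fused representation would then feed a classifier; the fusion happens before classification, which is what "feature-level" means in the title.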
  • A memetic algorithm for program verification

    Page(s): 30 - 35
    PDF (170 KB) | HTML

    We present a memetic algorithm for verifying program safety properties. The problem is expressed in terms of the reachability of some erroneous location L in the program. We use a new method for program modeling, A Separation Modeling Approach (ASMA), in which programs are represented by two components: a Data Model (DM) and a Control Model (CM). The erroneous location is represented by its Location Access Chain (LAC), a string in which each position represents the value of the CM element guards required to reach L. At each generation, the memetic algorithm produces a new population attempting to provide an execution that conforms to the location access chain. An individual of the population is a set of intervals, each representing an input variable. At each generation, two local search operators are used to improve some chosen solutions.

    Full text access may be available to subscribers.
  • Non-Statistical metrics for estimating redundancies in forensic investigations of network intrusions

    Page(s): 36 - 41
    PDF (394 KB) | HTML

    Most statistical methods do not perfectly conform to real cases of cyber crime. Consequently, when statistical methods are used to analyze intrusion logs in order to present evidentiary values in courts of law, the results are often refuted as baseless and inadmissible evidence, regardless of the effort spent to generate the reports and of whether the reports are well-grounded. Complainants are often bewildered and confused because it is almost certain that the prime suspects will be absolved in court. These are tragic developments for computer security experts and for corporate and private organizations that leverage Internet facilities to boost service delivery, business activities and profitability. Thus, this paper presents non-statistical metrics that adopt a Serialization Modelling Method (S2M) to improve the interpretation of intrusion logs. The approach instantiates tokens and serializes alerts triggered by Snort using well-defined values. Experiments illustrate that duplicate tokens, or patterns of alerts that exhibit increased propensity, are indicative of redundant alerts to a certain degree.

    Full text access may be available to subscribers.
  • A New Electrical Capacitance Tomography Method Using Online Pressure and Temperature Data Measurements

    Page(s): 42 - 46
    PDF (294 KB)

    In this paper, a new formulation of the Electrical Capacitance Tomography (ECT) problem for real-time image reconstruction of a multiphase fluid passing through a given section of a pipeline is presented. The suggested Regularized Constrained Gauss-Newton (RCGN) algorithm determines the dielectric distribution of the internal fluid by solving the inverse and forward problems on data captured both from the electrodes surrounding the pipeline and from pressure and temperature sensors distributed at different locations of the target process. By measuring the pressure and temperature at different locations of the target, its density distribution is estimated using its fluid-mechanical properties. Experimental results on a set of different images clearly show that the proposed method achieves more accurate results than traditional methods that use only boundary electrodes, while keeping the computation time almost the same.

    Full text access may be available to subscribers.
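The core of such an inverse-problem solver can be illustrated with a generic regularized Gauss-Newton iteration on a one-parameter toy model. The paper's RCGN additionally handles constraints and the pressure/temperature-derived density estimate, which are omitted in this sketch.

```python
import numpy as np

def regularized_gauss_newton(f, jac, x0, y, lam=1e-6, iters=20):
    """Generic regularized Gauss-Newton: minimizes
    ||f(x) - y||^2 + lam * ||x||^2 for a nonlinear forward model f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = f(x) - y                         # residual
        J = jac(x)                           # Jacobian of f at x
        A = J.T @ J + lam * np.eye(x.size)   # regularized normal matrix
        g = J.T @ r + lam * x                # gradient of penalized cost
        x = x - np.linalg.solve(A, g)        # Gauss-Newton step
    return x

# toy forward model: measurements y_i = exp(-a * t_i) with unknown a
t = np.linspace(0.0, 2.0, 20)
f = lambda x: np.exp(-x[0] * t)
jac = lambda x: (-t * np.exp(-x[0] * t)).reshape(-1, 1)
y = np.exp(-1.5 * t)                         # synthetic data, true a = 1.5
a_hat = regularized_gauss_newton(f, jac, [0.5], y)
```

In ECT the unknown `x` would be the discretized dielectric distribution and `f` the (expensive) forward capacitance model; the structure of the iteration is the same.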
  • Effect of Learning and Database in Robustness of Security Tools: Based on Immune System Modeling

    Page(s): 47 - 52
    PDF (376 KB) | HTML

    The increasing complexity and dynamics of systems have reduced the efficiency of security tools. To overcome this problem, security tools introduce ever newer patches, thereby increasing the size of the database and the computational overhead. By modeling and simulating the Biological Immune System (BIS), its behavior in different situations can be studied and the effect of database size and learning ability on the robustness of the BIS can be evaluated. For the BIS modeling and simulation, a biological agent is introduced and a multi-agent system has been designed in the NetLogo software. According to the simulation results, the effect of the learning rate on the system's robustness in different states has been evaluated. Furthermore, the robustness of systems with different initial database sizes has been considered. Finally, the effects of these two parameters on BIS robustness have been illustrated and compared.

    Full text access may be available to subscribers.
  • Intelligent Composition of Dynamic-Cost Services in Service-Oriented Architectures

    Page(s): 53 - 58
    PDF (386 KB) | HTML

    Service-oriented architecture has become a major computing practice in modern enterprise software systems. In several cases, the cost of the services may depend on the combination of services utilized. In addition, several constraints may be imposed on the service composition process. One important constraint is an upper bound on the number of clients' usages of services in a period of time. Another is a restriction on the number of service providers that should be utilized, normally needed to reduce the number of long-term relationships between clients and service providers. In this paper, a new algorithm is introduced that tries to find the optimal set of services needed by the software designer to fit his computing requirements. It tries to minimize the overall incurred cost, taking into consideration the dynamic cost of the services arising from clients' usage patterns. It also tries to minimize the number of service providers utilized and to maximize the overall QoS, while respecting the constraints imposed on using the services. Genetic algorithms, which are able to reach a near-optimal solution, are adopted to tackle this problem.

    Full text access may be available to subscribers.
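The flavor of the genetic-algorithm formulation can be sketched with a toy instance: pick one provider per required service so that total service cost plus a penalty on the number of distinct providers is minimized. The cost matrix, penalty weight, and GA parameters below are invented for illustration, not taken from the paper.

```python
import random

COSTS = [
    [4.0, 2.0, 5.0],   # service 0: cost at providers 0, 1, 2
    [3.0, 2.5, 1.0],   # service 1
    [6.0, 2.0, 2.5],   # service 2
]
PENALTY = 1.5          # cost per distinct provider (long-term relationship)

def fitness(assign):
    """Total cost of an assignment plus the provider-count penalty."""
    base = sum(COSTS[s][p] for s, p in enumerate(assign))
    return base + PENALTY * len(set(assign))

def evolve(pop_size=30, gens=60, mut=0.2):
    n, m = len(COSTS), len(COSTS[0])
    pop = [[random.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                    # elitist: keep best half
        pop = pop[: pop_size // 2]
        while len(pop) < pop_size:
            a, b = random.sample(pop[: pop_size // 2], 2)
            cut = random.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:            # point mutation
                child[random.randrange(n)] = random.randrange(m)
            pop.append(child)
    return min(pop, key=fitness)

random.seed(1)
best = evolve()
```

Even this tiny instance shows the trade-off the paper targets: the cheapest per-service providers may be spread across many vendors, while the penalty pulls the solution toward fewer long-term provider relationships.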
  • Estimation of Reference Evapotranspiration Using Limited Climatic Data and Bayesian Model Averaging

    Page(s): 59 - 63
    PDF (1143 KB) | HTML

    Motivated by the increased number of sensors and sensor networks for environmental and weather monitoring, we propose a method to estimate reference evapotranspiration (ETo) from limited climate data. There are several modifications to the standard FAO Penman-Monteith (FAO PM) equation that enable the use of limited climatic data for estimating ETo; however, these equations have to be adjusted locally depending on the climatic conditions. In this paper, we use Bayesian model averaging to determine the uncertainty of the different models that explain ETo. Using this approach, we tackle the multi-collinearity problem of climatic variables by combining multiple regression models. More specifically, we consider the estimation of ETo as a non-stationary regression problem in which the rules governing the mean and noise processes might change depending on the climatic conditions. To build the candidate models, we use a divide-and-conquer approach known as Treed Gaussian Processes (TGP), and we demonstrate the method using time series of ETo calculated by means of the FAO PM equation. The results are also compared with other regression techniques and with simplified equations for calculating ETo.

    Full text access may be available to subscribers.
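The model-averaging step can be sketched generically. The paper builds its candidate models with Treed Gaussian Processes; here the posterior model weights are approximated from BIC, a standard shortcut, with invented numbers standing in for real candidate models.

```python
import math

def bic_weights(log_liks, num_params, n):
    """Approximate posterior model weights from BIC = -2*logL + k*log(n):
    a common approximation to Bayesian model averaging weights."""
    bics = [-2.0 * ll + k * math.log(n) for ll, k in zip(log_liks, num_params)]
    b0 = min(bics)
    raw = [math.exp(-0.5 * (b - b0)) for b in bics]
    s = sum(raw)
    return [r / s for r in raw]

def bma_predict(predictions, weights):
    """Model-averaged prediction: weight each candidate model's output."""
    return sum(w * p for w, p in zip(weights, predictions))

# three hypothetical ETo models: two with equal fit but different
# complexity, and one with a clearly worse fit
w = bic_weights(log_liks=[-100.0, -100.0, -120.0], num_params=[2, 5, 2], n=50)
eto = bma_predict([1.0, 2.0, 3.0], w)   # illustrative ETo values (mm/day)
```

The averaged prediction is dominated by the simplest well-fitting model, while the weights themselves quantify model uncertainty, which is the point of using BMA here.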
  • Investigation on Different Kernel Functions for Weighted Kernel Regression in Solving Small Sample Problems

    Page(s): 64 - 69
    PDF (442 KB) | HTML

    Previously, weighted kernel regression (WKR) has been shown to solve small-sample problems; the existing WKR has successfully approximated rational functions from very few samples. Extending the technique to various kernel functions is important for its further design and development. In the existing WKR, a simple iteration technique is employed to estimate the weight parameters with a Gaussian kernel function before WKR is used to predict unseen test samples. In this paper, however, we investigate various kernel functions with Particle Swarm Optimization (PSO) as the weight estimator, since PSO offers flexibility in defining the objective function. PSO can thus solve problems with no closed-form solution, and we also introduce an L1-norm regularization term in the objective function to handle training samples corrupted by noise. Through a number of computational experiments, the investigation shows that the prediction quality of WKR is dominated primarily by the selection of the smoothing parameter rather than by the type of kernel function.

    Full text access may be available to subscribers.
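The role of the smoothing parameter can be seen already in plain Nadaraya-Watson kernel regression with a Gaussian kernel, a standard baseline related to (but simpler than) WKR, whose weights are instead estimated, e.g. by PSO. The sample data and bandwidths below are illustrative.

```python
import math

def gaussian_kernel(u, h):
    """Gaussian kernel with bandwidth (smoothing parameter) h."""
    return math.exp(-0.5 * (u / h) ** 2)

def kernel_regression(x, xs, ys, h):
    """Nadaraya-Watson estimate: predict at x as the kernel-weighted
    average of the training targets."""
    w = [gaussian_kernel(x - xi, h) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# small-sample setting: five points from y = x^2
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * x for x in xs]
narrow = kernel_regression(1.0, xs, ys, h=0.2)  # near the true value 1.0
wide = kernel_regression(1.0, xs, ys, h=5.0)    # oversmoothed toward the mean
```

Comparing `narrow` and `wide` mirrors the abstract's conclusion: the bandwidth, more than the kernel shape, drives prediction quality.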
  • Hybrid Ant Colony Optimization and Simulated Annealing for Rule Induction

    Page(s): 70 - 75
    PDF (332 KB) | HTML

    This paper proposes a hybrid of ant colony optimization and simulated annealing for rule induction. The hybrid algorithm is part of the sequential covering algorithm, which is commonly used to extract classification rules directly from data. The hybrid algorithm mitigates the problem of an ant in a colony discovering a low-quality rule, i.e., a rule that is not of the best quality. Simulated annealing is used to produce a rule for each ant; the best rule in a colony is then chosen, and later the best rule among the colonies is included in the rule set. The ordered rule set is arranged in decreasing order of generation. Thirteen data sets from the UCI repository, consisting of discrete and continuous data, were used to evaluate the performance of the proposed algorithm. Promising results were obtained compared to the Ant-Miner algorithm in terms of accuracy, number of rules and number of terms per rule.

    Full text access may be available to subscribers.
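The simulated-annealing component hybridized here follows the standard loop: worse candidates are accepted with probability exp(-delta / T), and the temperature decays geometrically. The toy integer objective below is illustrative, not a rule-quality measure.

```python
import math
import random

def simulated_annealing(neighbor, cost, x0, t0=1.0, cooling=0.95, steps=200):
    """Generic simulated annealing: always accept non-worsening moves,
    accept worsening moves with probability exp(-delta / T)."""
    x, t = x0, t0
    best = x0
    for _ in range(steps):
        cand = neighbor(x)
        delta = cost(cand) - cost(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = cand                     # accept the candidate
        if cost(x) < cost(best):
            best = x                     # remember the best state seen
        t *= cooling                     # cool down
    return best

# toy problem: minimize (x - 3)^2 over integers via +/-1 moves
random.seed(0)
best = simulated_annealing(
    neighbor=lambda x: x + random.choice([-1, 1]),
    cost=lambda x: (x - 3) ** 2,
    x0=10,
)
```

In the paper's setting, each ant's rule would play the role of the state, and a local rule modification the role of `neighbor`.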