
2005 IEEE International Conference on Granular Computing

Date: 25-27 July 2005

  • [Cover]

    Page(s): C1
  • [Title page]

    Page(s): nil1
  • Copyright page

    Page(s): nil2
  • Table of contents

    Page(s): i - xiii
  • Preface

    Page(s): xv
  • Organization charts

    Page(s): xvii - xviii
  • Approximate reduct computation by rough sets based attribute weighting

    Page(s): 383 - 386 Vol. 2

    Rough set theory provides the reduct and core concepts for knowledge reduction. The cost of reduct computation is strongly influenced by the size of the dataset's attribute set, and the problem of finding reducts has been proven NP-hard. This paper proposes an approximate approach to reduct computation. The approach uses the discernibility matrix and a weighting mechanism to determine the significance of an attribute for inclusion in the reduct; a second, supplementary weight breaks ties when several attributes have the same significance. The approach is evaluated extensively on a range of standard domains. A toy sketch of the weighting idea appears below.

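    A minimal sketch of the frequency-weighting idea, in Python (a generic greedy discernibility-matrix heuristic, not the authors' exact algorithm; their secondary tie-breaking weight is simplified here to lowest-index-wins):

        import numpy as np

        def discernibility_entries(X, y):
            """For each pair of objects with different decisions, yield the
            set of condition attributes on which the two objects differ."""
            n = len(X)
            for i in range(n):
                for j in range(i + 1, n):
                    if y[i] != y[j]:
                        diff = {a for a in range(X.shape[1]) if X[i, a] != X[j, a]}
                        if diff:
                            yield diff

        def greedy_reduct(X, y):
            entries = list(discernibility_entries(X, y))
            reduct = set()
            # Grow the reduct until every discernibility entry is covered.
            while any(not (e & reduct) for e in entries):
                uncovered = [e for e in entries if not (e & reduct)]
                # Primary weight: how often each attribute appears in the
                # still-uncovered entries (its significance).
                weights = np.zeros(X.shape[1])
                for e in uncovered:
                    for a in e:
                        weights[a] += 1
                reduct.add(int(np.argmax(weights)))  # ties: lowest index wins
            return reduct

        # Toy decision table: 4 objects, 3 condition attributes, binary decision.
        X = np.array([[0, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 1]])
        y = np.array([0, 1, 0, 1])
        print(greedy_reduct(X, y))  # {0}: attribute 0 alone discerns the classes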
  • Kolmogorov complexity based automata modeling for intrusion detection

    Page(s): 387 - 392 Vol. 2

    According to Kolmogorov complexity, a string is considered patternless if the shortest Turing machine that can encode it is at least as long as the string itself. Conversely, a non-random string with patterns can be described by some Turing machine that is shorter than the string. Hence, special forms of Turing machines - such as functions, N-grams, finite automata and stochastic automata - can all be regarded as representations of approximate patterns. Based on these observations, system profiles are defined for anomaly-based intrusion detection systems, with encouraging results. A rough illustration of the principle follows.

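    The paper builds automata; as an illustration of the underlying principle only, a computable stand-in for Kolmogorov complexity is compressed length, which already yields a workable anomaly score (a sketch, not the paper's method):

        import os
        import zlib

        def complexity(s: bytes) -> int:
            # Compressed length as a crude upper bound on Kolmogorov complexity.
            return len(zlib.compress(s, 9))

        def anomaly_score(profile: bytes, trace: bytes) -> float:
            """Conditional-complexity style score: how much new 'description'
            the trace adds on top of the normal-behaviour profile."""
            return (complexity(profile + trace) - complexity(profile)) / max(len(trace), 1)

        normal = b"open read read close " * 50  # regular, patterned behaviour
        probe = b"open read read close " * 5    # similar to the profile
        attack = os.urandom(105)                # patternless, near-incompressible

        print(anomaly_score(normal, probe))   # low: explained by the profile
        print(anomaly_score(normal, attack))  # high: not compressible given the profile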
  • Research of non-distinct Solow economic growth model

    Page(s): 393 - 396 Vol. 2

    A variety of fuzzy phenomena exist in real-world economic systems, so in research on the Solow economic growth model it is of great significance to build a model with fuzzy numbers. However, a function of fuzzy numbers is not differentiable, and the traditional operation rules cannot be used to solve the general fuzzy quadratic equation. The author first builds a non-distinct (that is, fuzzy) economic growth model by using fuzzy mapping theory to generalize the crisp growth model to the fuzzy case, then investigates a fuzzy solution to the model and its properties, and finally demonstrates the feasibility of the generalization with a numerical example. A numerical sketch follows.

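    A numerical sketch of one way such a model can be handled (alpha-cut interval propagation with an assumed triangular fuzzy savings rate; the paper's fuzzy-mapping construction is not reproduced):

        # Solow update: k_{t+1} = (1 - d) * k_t + s * k_t**a, with fuzzy s.
        def alpha_cut(tri, alpha):
            lo, mode, hi = tri
            return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

        def solow_interval(s_iv, k0=1.0, a=0.3, d=0.1, steps=50):
            k_lo = k_hi = k0
            # The update is monotone in s and k, so interval endpoints suffice.
            for _ in range(steps):
                k_lo = (1 - d) * k_lo + s_iv[0] * k_lo**a
                k_hi = (1 - d) * k_hi + s_iv[1] * k_hi**a
            return k_lo, k_hi

        s = (0.15, 0.20, 0.25)  # fuzzy savings rate "about 0.20"
        for alpha in (0.0, 0.5, 1.0):
            print(alpha, solow_interval(alpha_cut(s, alpha)))
        # alpha = 1 recovers the crisp model; smaller alpha widens the capital band.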
  • DCf: a double clustering framework for fuzzy information granulation

    Page(s): 397 - 400 Vol. 2

    In this paper, we present a framework for extracting well-defined and semantically sound information granules. The framework is centered on a double clustering process, hence its name DCf (double clustering framework). A first clustering process identifies cluster prototypes in the multidimensional data space; the projections of these prototypes are then further clustered along each dimension to provide a granulation of the data. Finally, the extracted granules are described in terms of fuzzy sets that meet interpretability constraints, so as to provide a qualitative description of the information granules. Different implementations of DCf are presented and compared on a medical diagnosis problem to show the utility of the proposed framework. A sketch of the two-stage process follows.

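    A rough sketch of the two-stage process (k-means assumed for both stages; the paper's interpretability constraints on the resulting fuzzy sets are omitted):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import load_iris

        X = load_iris().data

        # Stage 1: cluster in the full multidimensional space.
        protos = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X).cluster_centers_

        # Stage 2: cluster the prototype projections along each dimension; the
        # resulting centers can seed one fuzzy set per granule on that axis.
        for dim in range(X.shape[1]):
            proj = protos[:, dim].reshape(-1, 1)
            km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(proj)
            centers = sorted(c[0] for c in km.cluster_centers_)
            print(f"dim {dim}: fuzzy-set centers ~ {np.round(centers, 2)}")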
  • An e-intelligence approach to e-commerce intrusion detection

    Page(s): 401 - 404 Vol. 2

    As enterprise-level e-commerce applications are integrated over the Internet, security has become an increasingly important issue. In this integrated Web services environment, simple black-and-white logic is not sufficient to deal with complex intrusion detection problems. E-intelligence must be added at the application layer for better detection of malicious intruders who use legitimate channels to attack mission-critical applications. This paper proposes a new e-intelligence approach that uses a fuzzy trust model and e-intelligence to detect intrusions. It also uses honey tokens for deception and honey applications for learning attacker behavior, feeding the information back to the system at the application level.

  • A kind of support vector fuzzy classifiers

    Page(s): 405 - 408 Vol. 2

    The support vector machine (SVM) is a promising machine learning method with good generalization ability, which learns a decision surface from two distinct classes of input points. In many applications, however, the data are not obtained precisely, i.e. there is some fuzziness in the data. In this paper, we reformulate conventional support vector classifiers so that they can learn from fuzzy input points given in the form of triangular fuzzy numbers. A minimal stand-in for the idea follows.

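    A minimal stand-in for the idea (not the paper's reformulated optimization problem): train a standard soft-margin SVC on the peaks of the triangular fuzzy inputs and down-weight fuzzier points via sample weights:

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # modes: the peaks of the triangular inputs; spread: each point's
        # fuzziness (half-width of its triangular membership function).
        modes = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
        spread = rng.uniform(0.1, 1.0, 40)
        y = np.array([0] * 20 + [1] * 20)

        clf = SVC(kernel="linear", C=1.0)
        clf.fit(modes, y, sample_weight=1.0 / (1.0 + spread))  # fuzzier => less influence
        print(clf.score(modes, y))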
  • Application of a matrix-based binary granular computing algorithm in RST

    Page(s): 409 - 412 Vol. 2

    Upper and lower approximations are the basic definitions in rough set theory (RST). An equivalence partition generates a classification and induces granulation structures on a universe. As an extension of RST, a matrix-based binary granular computing algorithm is proposed in this paper. Furthermore, a standardized algorithm is designed to compute the positive region, the negative region, and the accuracy and quality of approximation, which are among the most essential concepts in RST and foundations for further research. The proposed algorithm is a logical approach that is easier to implement and faster than the traditional algebraic one, and it is a successful application of binary granular computing to RST. A minimal matrix sketch follows.

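    The matrix flavour of these computations is easy to make concrete; a minimal sketch (illustrative only, not the paper's algorithm) with a boolean indiscernibility matrix:

        import numpy as np

        X = np.array([[0, 1], [0, 1], [1, 0], [1, 1]])  # condition attributes
        target = np.array([True, True, False, False])   # the set to approximate

        # R[i, j] = 1 iff objects i and j agree on all condition attributes.
        R = (X[:, None, :] == X[None, :, :]).all(axis=2)

        lower = np.array([R[i][target].sum() == R[i].sum() for i in range(len(X))])
        upper = np.array([R[i][target].any() for i in range(len(X))])

        print("positive region (lower):", np.where(lower)[0])
        print("upper approximation:   ", np.where(upper)[0])
        print("negative region:       ", np.where(~upper)[0])
        print("accuracy of approximation:", lower.sum() / upper.sum())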
  • Indiscernibility criterion based on rough sets in feature selection and detection of landmines

    Page(s): 413 - 416 Vol. 2

    Metal detectors currently used by mine-clearance teams cannot differentiate a mine from metallic debris where the soil contains large quantities of metal scraps and cartridge cases. Landmines are a significant barrier to financial, economic and social development in various parts of the world, so a sensor is required that confirms, with near-perfect reliability, that the ground being tested does not contain an explosive device. Human experts are unable to assign belief and plausibility to the rules devised from such huge databases. Rough sets can be applied to classify the landmine data because no prior knowledge of the rules is required; the rules are discovered automatically from the database. The whole database is divided into mutually exclusive elementary sets, and the rough logic classifier uses lower and upper approximations to determine the class of objects. The paper aims to induce low-dimensionality rule sets from historical descriptions of domain features, which are often of high dimensionality. Moreover, algorithms based on rough set theory are particularly suited to parallel processing.

  • Temporal granular logic for temporal data mining

    Page(s): 417 - 422 Vol. 2

    In this article, a formalism is defined for a specific temporal data mining task: the discovery of rules inferred from databases of events having a temporal dimension. The formalism, based on first-order temporal logic, is extended to include the concept of temporal granularity. Within this theoretical framework, a detailed study investigates the formal relationships between interpretations of the same event in linear time structures with different granularities. A toy illustration of granularity follows.

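    A toy illustration of the granularity notion at the heart of the formalism (hypothetical event data; the logic itself is not reproduced): the same event stream re-interpreted in a coarser linear time structure:

        events = [(3, "login"), (61, "query"), (65, "query"), (130, "logout")]

        def regranulate(events, width):
            """Map timestamps in fine granules (seconds) to coarse ones."""
            granules = {}
            for t, e in events:
                granules.setdefault(t // width, []).append(e)
            return granules

        print(regranulate(events, 60))
        # {0: ['login'], 1: ['query', 'query'], 2: ['logout']} -- whether a rule
        # such as "query follows login" still holds depends on the granularity.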
  • Sequent calculus system for rough sets based on rough Stone algebras

    Page(s): 423 - 426 Vol. 2

    Many researchers study rough sets by describing rough set pairs (a rough set pair, i.e. the pair formed by a set's lower and upper approximations, is also called a rough set). An important result is that the collection of rough sets of an approximation space can be made into a Stone algebra. The collection of all subsets of a set forms a Boolean algebra under the usual set-theoretic operations, and Boolean algebras are a model for classical propositional logic; it is therefore reasonable to expect rough Stone algebras to form a class of algebras appropriate for a logic of rough sets. In this paper, a sequent calculus system corresponding to rough Stone algebras is proposed, its syntax and semantics are defined, and its soundness and completeness are proved.

  • Modeling and verification of train safety comprehensive monitoring system using temporal Petri nets

    Page(s): 427 - 430 Vol. 2

    Petri nets and temporal logic are efficient tools for studying concurrent systems, but each has its own shortcomings. We therefore introduce temporal Petri nets to model, analyze and verify a train safety comprehensive monitoring system. On one hand, we describe the framework of the system using Petri nets; on the other, we use temporal logic to describe the temporal relationships among system states. We then analyze and verify the properties of the model and conclude that the system is reliable and effective. A minimal Petri-net sketch follows.

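    A minimal Petri-net sketch (the temporal-logic annotations the paper adds on top are omitted; the monitoring scenario below is hypothetical):

        class PetriNet:
            def __init__(self, marking):
                self.marking = dict(marking)  # place -> token count
                self.transitions = {}         # name -> (input places, output places)

            def add_transition(self, name, inputs, outputs):
                self.transitions[name] = (inputs, outputs)

            def enabled(self, name):
                ins, _ = self.transitions[name]
                return all(self.marking.get(p, 0) > 0 for p in ins)

            def fire(self, name):
                ins, outs = self.transitions[name]
                assert self.enabled(name), f"{name} is not enabled"
                for p in ins:
                    self.marking[p] -= 1
                for p in outs:
                    self.marking[p] = self.marking.get(p, 0) + 1

        # A sensor reading must be processed before an alarm decision; the
        # firing order encodes the temporal relationship between system states.
        net = PetriNet({"sensor_ready": 1})
        net.add_transition("read", ["sensor_ready"], ["data_available"])
        net.add_transition("check", ["data_available"], ["decision_made"])
        net.fire("read")
        net.fire("check")
        print(net.marking)  # {'sensor_ready': 0, 'data_available': 0, 'decision_made': 1}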
  • Intelligent agent-based expert system architecture for generating work plan in marshalling station

    Page(s): 431 - 434 Vol. 2

    The present approach to generating work plans can hardly meet dynamic and intelligence requirements, and it offers no decision-support function. An agent that can simulate human reasoning has properties such as adaptability and intelligence that can be exploited in the planning process. This paper presents an initial architecture for an intelligent agent-based expert system for generating work plans in a marshalling station, known as IAES-OPMY. IAES-OPMY can support the station controller's decisions on plan generation and in turn improve the validity, encash ratio and automation of plans. Furthermore, it can speed the turnover of wagons and bring great profits.

  • A weighting scheme based on emerging patterns for weighted support vector machines

    Page(s): 435 - 440 Vol. 2

    Support vector machines (SVMs) are powerful tools for solving classification problems and have been applied over the past decade to many fields, such as pattern recognition and data mining. Weighted SVMs extend SVMs by acknowledging that different input vectors contribute differently to the learning of the decision surface. An important issue in training weighted SVMs is developing a reliable weighting model that reflects the true noise distribution in the training data: noise and outliers should receive low weights. In this paper, we propose emerging patterns (EPs) to construct such a model. EPs are itemsets whose support in one class is significantly higher than in the other. Since the EPs of a given class represent discriminating knowledge unique to that class, noise and outliers should contain no EPs, or EPs of both contradicting classes, while a representative instance should contain strong EPs of its own class. We calculate a numeric score for each instance based on EPs and assign weights to the training data using those scores. Extensive experiments on a large number of benchmark datasets show that our weighting scheme often improves the performance of weighted SVMs over SVMs. We argue that the improvement is due to the model's ability to approximate the true distribution of the data points. A heavily simplified sketch of the scoring follows.

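    A heavily simplified sketch of the scoring idea (length-1 "patterns" on discretized features only; real EP mining works on itemsets and uses proper support thresholds):

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(-1, 1, (30, 2)), rng.normal(1, 1, (30, 2))])
        y = np.array([0] * 30 + [1] * 30)
        items = np.digitize(X, np.quantile(X, [0.25, 0.5, 0.75]))  # discretize

        def support(col, val, cls):
            return (items[y == cls][:, col] == val).mean()

        # Score each instance by the growth rates of the items it contains:
        # items far more frequent in the home class than elsewhere score high.
        scores = np.zeros(len(X))
        for i in range(len(X)):
            for col in range(items.shape[1]):
                s_home = support(col, items[i, col], y[i])
                s_other = support(col, items[i, col], 1 - y[i])
                if s_home > s_other:
                    scores[i] += min(s_home / (s_other + 1e-9), 10.0)  # capped growth rate

        weights = scores / scores.max()  # outliers carry few home-class patterns
        clf = SVC(kernel="rbf").fit(X, y, sample_weight=0.1 + 0.9 * weights)
        print(clf.score(X, y))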
  • Qualitative Mapping, Criterion Transformation and Artificial Neuron

    Page(s): 441 - 446

    A mathematical model is presented for describing the law by which the truth value of a property p(x) varies with its qualitative criterion [α, β]; it is called the qualitative mapping τ_p(x, [α, β]). The inner-product transformation of the qualitative criterion [α, β], denoted w_[α,β], is also discussed. It is shown that an artificial neuron is a special qualitative mapping. A minimal sketch of this observation follows.

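    A minimal sketch of the observation (notation follows the abstract; illustrative only):

        import numpy as np

        def tau(x, alpha, beta):
            """Qualitative mapping: does x satisfy the criterion [alpha, beta]?"""
            return float(alpha <= x <= beta)

        def neuron(inputs, weights, theta):
            # Inner-product transformation of the criterion: a threshold neuron
            # is the qualitative mapping with criterion [theta, +inf).
            return tau(np.dot(weights, inputs), theta, np.inf)

        print(tau(0.7, 0.5, 1.0))                   # 1.0: the property holds
        print(neuron([1.0, 0.2], [0.6, 0.5], 0.5))  # fires like a McCulloch-Pitts unit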
  • Approach to linear programming with fuzzy coefficients based on fuzzy numbers distance

    Page(s): 447 - 450 Vol. 2

    In this paper, we first present a new distance between fuzzy numbers, then rank fuzzy numbers with the help of this distance, and then apply it to linear programming with fuzzy coefficients, transforming the fuzzy problem into a crisp linear program. Finally, we give a numerical example to illustrate feasibility. A sketch of the pipeline follows.

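    A minimal sketch of the pipeline (an ordinary L2-type distance on triangular numbers and centroid defuzzification are assumed; the paper's specific distance is not reproduced):

        import numpy as np
        from scipy.optimize import linprog

        def dist(a, b):
            """A simple L2-type distance between triangular numbers (l, m, u)."""
            return np.sqrt(np.mean((np.array(a) - np.array(b)) ** 2))

        def defuzz(t):
            return sum(t) / 3.0  # centroid of a triangular number

        # maximize ~3*x1 + ~5*x2  s.t.  x1 + 2*x2 <= 8,  3*x1 + x2 <= 9,  x >= 0
        c_fuzzy = [(2.5, 3.0, 3.5), (4.0, 5.0, 6.0)]
        print(sorted(c_fuzzy, key=lambda t: dist(t, (0, 0, 0))))  # rank by distance to 0
        c = [-defuzz(t) for t in c_fuzzy]  # linprog minimizes, hence the sign flip
        res = linprog(c, A_ub=[[1, 2], [3, 1]], b_ub=[8, 9], bounds=[(0, None)] * 2)
        print(res.x, -res.fun)  # optimum at x = (2, 3), objective 21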
  • v-support vector classification with uncertainty based on expert advices

    Page(s): 451 - 453 Vol. 2

    Support vector techniques have been applied successfully to many real-world problems, but the parameter C is difficult to select. v-support vector classification (v-SVC) has the advantage that the parameter v controls the number of support vectors. However, it requires that every input be assigned to one of the two classes without any uncertainty. A new v-SVM technique is proposed that can deal with training data carrying uncertainty expressed as expert advice. First, the meaning of the uncertainty is defined; the algorithm is then derived from it. This technique greatly extends the application horizon of v-SVM. As an application, the problem of early warning for grain production is solved with the algorithm. A demonstration of the v mechanism follows.

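    The v mechanism itself is easy to demonstrate with an off-the-shelf NuSVC (sklearn; the paper's uncertainty extension is not reproduced):

        import numpy as np
        from sklearn.svm import NuSVC

        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
        y = np.array([0] * 50 + [1] * 50)

        # nu lower-bounds the fraction of support vectors and upper-bounds the
        # fraction of margin errors, replacing the hard-to-choose parameter C.
        for nu in (0.1, 0.3, 0.6):
            clf = NuSVC(nu=nu).fit(X, y)
            print(nu, "support vectors:", clf.n_support_.sum())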
  • Study of BP neural network based on MECA

    Page(s): 454 - 457 Vol. 2

    This paper designs a BP neural network trained with a mind evolution clone algorithm (MECA), proposed here by combining the diversity of the mind-evolution population with the biological cloning mechanism. Not only can the algorithm converge to a globally optimal solution, it also efficiently alleviates the premature convergence problem. The algorithm has been applied to training on the XOR problem. Simulation results show that MECA outperforms a simple genetic algorithm and the BP algorithm, with a great improvement in the quality and efficiency of neural network training. An evolutionary stand-in for the training loop follows.

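    A minimal evolutionary stand-in for the training loop (clone a parent weight vector, mutate the clones, keep the best; MECA's population-diversity machinery is not reproduced), applied to XOR:

        import numpy as np

        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([0, 1, 1, 0], dtype=float)

        def forward(w, x):
            # A 2-2-1 network: weights are packed into a flat 9-vector.
            W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
            h = np.tanh(x @ W1 + b1)
            return 1 / (1 + np.exp(-(h @ W2 + b2)))

        def loss(w):
            return np.mean((forward(w, X) - y) ** 2)

        rng = np.random.default_rng(4)
        best = rng.normal(0, 1, 9)
        for _ in range(3000):
            clones = best + rng.normal(0, 0.3, (20, 9))  # clonal variation
            cand = min(clones, key=loss)
            if loss(cand) < loss(best):                  # selection
                best = cand
        print(np.round(forward(best, X)))  # ideally [0. 1. 1. 0.]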
  • Fuzzy approximation operators based on internal/external factors analysis

    Page(s): 458 - 461 Vol. 2

    Centered on the philosophical rule that internal and external factors drive the alteration and development of objects, the paper sets up a dynamic information system driven by external factors. Accordingly, it proposes two pairs of fuzzy approximation operators based on internal/external factor analysis in the dynamic information system, and applies them to study the fuzzy actions of external factors, and the actions of fuzzy external factors, on the alteration of the information system.

  • A new multiresolution classification model based on partitioning of feature space

    Page(s): 462 - 467 Vol. 2

    Multiresolution analysis has been a hot topic over the past decade. In this paper, we propose a new multiresolution classification method that adopts a coarse-to-fine strategy in both the training and the testing processes, based on partitioning of the feature space. The training algorithm locates the boundary between two classes from coarse to fine by dividing, step by step, the hypercubes that lie on the boundary. The testing algorithm first labels the test set with the classifier trained at the initial resolution; then only the points lying on the boundary are relabeled at the finer resolution. As an example, an approach named MRSVC is proposed, which uses support vector machines as the basic classifier. Finally, theoretical analysis and experimental results substantiate the effectiveness of the proposed method. A sketch of the coarse-to-fine bookkeeping follows.

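    A minimal sketch of the coarse-to-fine bookkeeping (MRSVC trains classifiers per resolution; here a single coarse SVM plus detection of the boundary hypercubes that would be refined):

        import numpy as np
        from collections import defaultdict
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        X = rng.uniform(0, 1, (400, 2))
        y = (X[:, 0] + X[:, 1] > 1).astype(int)  # true boundary: the diagonal

        labels = SVC(kernel="rbf").fit(X, y).predict(X)  # coarse classifier

        # Partition the feature space into 4x4 hypercubes; a cube lying on the
        # boundary contains both predicted classes and is divided further.
        cells = [tuple(c) for c in np.floor(X * 4).astype(int)]
        by_cell = defaultdict(set)
        for k, lab in zip(cells, labels):
            by_cell[k].add(lab)
        boundary = np.array([len(by_cell[k]) > 1 for k in cells])
        print("points sent to the finer resolution:", boundary.sum(), "of", len(X))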