2010 International Conference on Machine and Web Intelligence (ICMWI)

Date: 3-5 October 2010

Displaying Results 1 - 25 of 94
  • Author index

    Page(s): 1 - 3
  • [Front matter]

    Page(s): 1
  • [Front cover]

    Page(s): c1
  • [Front matter]

    Page(s): 1 - 8
  • Efficient extraction of news articles based on RSS crawling

    Page(s): 1 - 7

    The expansion of the World Wide Web has led to a state where a vast number of Internet users face the major problem of discovering desired information. Hundreds of web pages and weblogs are created or changed daily, so finding useful information has become difficult even for experienced Internet users. Many mechanisms have been proposed to address information discovery on the Internet; most are based on crawlers that browse the WWW, downloading pages and collecting information that might interest users. In this manuscript we describe a mechanism that fetches web pages containing news articles from major news portals and blogs. The mechanism is designed to support tools that acquire news articles from around the world, process them, and present them back to end users in a personalized manner.

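The paper's implementation is not given in the abstract; as a minimal sketch of the RSS-crawling idea it describes, the snippet below parses an RSS 2.0 feed and extracts article titles and links. The feed content here is a toy stand-in; a real crawler would fetch each portal's feed URL on a schedule and download only links it has not seen before.

```python
import xml.etree.ElementTree as ET

def extract_items(rss_xml):
    """Parse an RSS 2.0 feed and return (title, link) pairs for each article."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):  # every <item> is one article entry
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items

# Toy feed standing in for a news portal's RSS output.
FEED = """<rss version="2.0"><channel>
  <title>Example News</title>
  <item><title>Story A</title><link>http://example.com/a</link></item>
  <item><title>Story B</title><link>http://example.com/b</link></item>
</channel></rss>"""

print(extract_items(FEED))
```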
  • Exploring semantic roles of Web interface components

    Page(s): 8 - 14

    The adaptability of Web interfaces in response to changes in the interaction context, display environments (e.g., mobile screens), and users' personal preferences is becoming increasingly desirable given the pervasive use of Web information. One of the major challenges in Web interface adaptation is discovering the semantic structure underlying a Web interface. This paper presents a robust, formal approach to recovering interface semantics using a graph grammar. Owing to its distinct support for spatial specification in the abstract syntax, the Spatial Graph Grammar (SGG) is used to perform semantic grouping and interpretation of segmented screen objects. Well-established image processing techniques recognize atomic interface objects in an interface image; the output is a spatial graph, which records the significant spatial relations among the recognized objects. Based on the spatial graph, the SGG parser recovers the hierarchical relations among interface objects and thus provides a semantic interpretation suitable for adaptation.

  • Causal reasoning in graphical models

    Page(s): 15 - 19

    This paper addresses the identification of the causal relations that agents, when faced with a sequence of reported events, may attribute on the basis of their beliefs about the course of things and the available pieces of information. In particular, we focus on graphical models exploiting the idea of "intervention", initially proposed in the probabilistic framework by Pearl and developed in the more qualitative setting of possibility theory within the French national project MICRAC. We show that interventions, which are very useful for representing causal relations between events, can be naturally viewed as a belief change process. The paper also provides an overview of the main compact representation formats, and their associated inference tools, that exist in the possibility theory framework.

  • Click fraud prevention in pay-per-click model: Learning through multi-model evidence fusion

    Page(s): 20 - 27

    Multi-sensor data fusion has been an area of intense research and development activity, applied to numerous fields with new applications explored constantly. A multi-sensor-based Collaborative Click Fraud Detection and Prevention (CCFDP) system can be viewed as an evidence fusion problem. In this paper we detail the multi-level data fusion mechanism used in CCFDP for real-time click fraud detection and prevention. Prevention is based on blocking suspicious traffic by IP, referrer, city, country, ISP, etc.; the system maintains an online database of these suspicious parameters. We have tested the system with real-world data from an actual ad campaign, and the results show that multi-level data fusion improves the quality of click fraud analysis.

  • Activity regulation for building dictionaries on an on-line collaborative platform

    Page(s): 132 - 139

    This paper discusses automatic regulation in participative Web systems. We present a generic solution with an original trace-centered approach. We describe an experiment with a general trace-based system (TBS) called CARTE (Collection, activity Analysis and Regulation based on Traces Enriched) featuring a regulation mechanism, and we couple this system with Jibikipedia, an on-line generic platform for managing lexical resources.

  • Independent task scheduling in heterogeneous environment via makespan refinery approach

    Page(s): 211 - 217

    Task scheduling in heterogeneous computing environments is one of the most challenging problems in distributed computing. Optimally mapping independent tasks onto heterogeneous distributed computing systems is known to be NP-complete. This paper presents a two-stage methodology for solving independent task scheduling problems in heterogeneous distributed computing. The scheduler aims to minimize the total completion time using a task reassignment strategy; the latter uses a new Makespan Refinery Approach (MRA) to improve the initial task scheduling solution by reducing the maximum completion time. The effectiveness of the proposed scheduling method has been tested and evaluated in simulations, and the experimental results show that it achieves short completion times for a set of tasks.

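The abstract does not specify the MRA's internals, so the sketch below only illustrates the generic two-stage idea it describes: build an initial schedule with a greedy minimum-completion-time rule, then refine it by reassigning tasks away from the most loaded machine whenever that lowers the makespan. The ETC matrix (expected time to compute per task/machine pair) is illustrative.

```python
def greedy_schedule(etc):
    """Stage 1: assign each task to the machine that finishes it earliest.
    etc[i][j] = execution time of task i on machine j."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines          # machine ready times
    assign = [0] * n_tasks
    for i in range(n_tasks):
        j = min(range(n_machines), key=lambda m: ready[m] + etc[i][m])
        assign[i] = j
        ready[j] += etc[i][j]
    return assign

def makespan(etc, assign):
    loads = [0.0] * len(etc[0])
    for i, j in enumerate(assign):
        loads[j] += etc[i][j]
    return max(loads)

def refine(etc, assign):
    """Stage 2: move a task off the bottleneck machine while that helps."""
    improved = True
    while improved:
        improved = False
        loads = [0.0] * len(etc[0])
        for i, j in enumerate(assign):
            loads[j] += etc[i][j]
        bottleneck = max(range(len(loads)), key=loads.__getitem__)
        for i, j in enumerate(assign):
            if j != bottleneck:
                continue
            for m in range(len(loads)):
                if m == bottleneck:
                    continue
                trial = assign[:]
                trial[i] = m
                if makespan(etc, trial) < makespan(etc, assign):
                    assign, improved = trial, True
                    break
            if improved:
                break
    return assign

ETC = [[4.0, 2.0], [3.0, 6.0], [5.0, 5.0]]   # 3 tasks, 2 machines
s = refine(ETC, greedy_schedule(ETC))
print(makespan(ETC, s))
```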
  • WeSPaS — Web specification pattern system

    Page(s): 61 - 68

    We propose a library of web specification patterns to assist web developers and testers in formally specifying web-related properties. The current version of the library contains 119 functional and non-functional patterns, obtained by scrutinizing various resources in the field of quality assurance of Web applications, which characterize successful web applications using a set of standardized attributes. We evaluated 10% of our patterns on six Web applications; the results showed that the majority of the tested properties are violated by those applications.

  • An improved radial basis function neural network based on a cooperative coevolutionary algorithm for handwritten digits recognition

    Page(s): 464 - 468

    Co-evolutionary algorithms are a class of adaptive search meta-heuristics inspired by the mechanism of reciprocal benefit between species in nature. The present work proposes a cooperative co-evolutionary algorithm to improve the performance of a radial basis function neural network (RBFNN) applied to the recognition of handwritten Arabic digits. The system combines ten RBFNNs, each considered an expert classifier in distinguishing one digit from the others; each RBFNN classifier adapts its input features and its structure, including the number of centres and their positions, using a symbiotic approach. The set of characteristic features and the RBF centres are treated as dissimilar species that can each benefit from the other, imitating in a simplified way the symbiotic interaction of species in nature. Co-evolution is founded on saving the best weights and centres, namely those giving the maximum improvement in each RBFNN's sum of squared errors after a number of learning iterations. The quality of the results has been estimated and compared with other experiments: on handwritten digits extracted from the MNIST database, the co-evolutionary approach performs best.

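The coevolutionary tuning itself is beyond a short sketch, but the one-expert-per-digit idea can be illustrated: each expert is a Gaussian RBF network, and classification picks the digit whose expert responds most strongly. The centres, widths, and weights below are toy values, not the paper's learned parameters.

```python
import math

def rbf_output(x, centres, widths, weights, bias=0.0):
    """Gaussian RBF network: bias + sum_k w_k * exp(-||x - c_k||^2 / (2*s_k^2))."""
    out = bias
    for c, s, w in zip(centres, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += w * math.exp(-d2 / (2 * s * s))
    return out

def classify(x, experts):
    """Pick the class whose expert network responds most strongly."""
    return max(range(len(experts)), key=lambda d: rbf_output(x, *experts[d]))

# Two toy experts over 2-D features (stand-ins for digit classifiers 0 and 1),
# each given as (centres, widths, weights).
experts = [
    ([(0.0, 0.0)], [1.0], [1.0]),   # expert for class 0, centre at the origin
    ([(3.0, 3.0)], [1.0], [1.0]),   # expert for class 1
]
print(classify((0.2, -0.1), experts))
print(classify((2.9, 3.2), experts))
```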
  • Viewpoint-based annotations for Knowledge Discovery in Databases

    Page(s): 320 - 323

    In this paper, we propose a new approach that makes the notion of viewpoint explicit in a multiview Knowledge Discovery in Databases (KDD) process. We define a viewpoint in KDD as an analyst's perception of a KDD process, grounded in his or her own knowledge. Our purpose is to facilitate both the reusability and the adaptability of a KDD process, and to reduce its complexity while maintaining a trace of past analyses in terms of viewpoints. We also propose a viewpoint-based conceptual model for the KDD process that integrates both the analyzed domain knowledge and the analyst's domain knowledge.

  • On the integration of dispatching and covering for emergency vehicles management system

    Page(s): 198 - 204

    In emergency vehicle management systems, managers must consider two important issues: the dispatching problem, with the aim of minimizing the response time for current emergency calls, and the covering problem, with the objective of keeping proper coverage to satisfy future calls as quickly as possible. The purpose of this paper is to investigate the value, in terms of service quality, of integrating the dispatching and covering problems in the same model. A heuristic algorithm combining ant colony optimization and tabu search is used as the solution approach. Several numerical examples are used to compare the integrated approach with the non-integrated one.

  • Dynamic threshold for replicas placement strategy

    Page(s): 394 - 401

    Data replication is an important technique for ensuring data availability in grids, and one of its challenges is replica placement. In this paper, we present our contribution by proposing a replica placement strategy for a hierarchical grid. Unlike other replica placement strategies, which use a static threshold, our approach is based on a dynamic threshold: we show that the threshold depends on several factors, such as the size of the data to be replicated and the consumed bandwidth, which is determined by the level in the tree representing the grid.

  • Schema matching for integrating multimedia metadata

    Page(s): 234 - 239

    The recent growth of multimedia in our lives requires extensive use of metadata for multimedia management, and consequently many metadata standards have appeared. Using these standards has become very complicated, since they have been developed by independent communities: content and context are usually described using several metadata standards, so a multimedia user must be able to interpret all of them. Several metadata integration techniques have been proposed to deal with this challenge, but these integrations are performed by domain experts, which is costly and time-consuming. This paper presents a new system for semi-automatic integration of multimedia metadata. The system automatically maps between the metadata needed by the user and those encoded in different formats. The integration process makes use of several sources of information: XML Schema entity names, their corresponding comments, and the hierarchical features of XML Schema. Our experimental results demonstrate the integration benefits of the proposed system.

  • Hybrid Petri nets based approach for analyzing complex dynamic systems

    Page(s): 180 - 184

    This paper presents a mapping algorithm for a new class of hybrid Petri net (HPN) called the Discrete Continuous elementary HPN. The method enables us to analyze some of a system's properties using the linear hybrid automaton generated by the mapping process. The method is applied to a three-tank water system and analyzed with the PHAVer software tool; its effectiveness is illustrated by numerical simulation results.

  • Towards a new approach for real time face detection and normalization

    Page(s): 455 - 459

    Face recognition algorithms proposed in the literature now reach a good performance level when the acquisition conditions for the tested images are controlled (variability of the environment, illumination, poses, expressions, and the number of images in the database used to identify people). These performances drop when the conditions are degraded. Controlled acquisition conditions correspond to well-balanced illumination, high resolution, a good face pose, and maximum sharpness of the face image. Although several methods have been proposed to resolve the problem of illumination variation, rotation and occlusion remain obstacles. In this paper, we propose a new method for face detection and normalization that consists in choosing the best pose and point of view of the detected face. This normalization lets us select the normalized images to be used as input to the recognition process, in order to improve its performance. The approach was implemented and tested on a public database, and the preliminary results seem very promising.

  • A fully decentralized algorithm for timestamping transactions in peer-to-peer environments

    Page(s): 185 - 189

    In this paper, we investigate a decentralized approach to timestamping transactions in a replicated database under partial replication in peer-to-peer (P2P) environments. To address the problems of concurrent updates and node failures, we propose a quorum-based architecture that assigns a unique timestamp to each distributed transaction, selects the server replicas, and coordinates the distributed execution of the transaction.

  • Context-aware ubiquitous framework services using JADE-OSGI integration framework

    Page(s): 48 - 53

    Ubiquitous computing frameworks aim to build new applications that provide assistive services for people autonomously. Such frameworks must therefore integrate heterogeneous, dynamic devices and interoperate with other heterogeneous applications, and they must adapt their behavior to provide services that match the current user activity. This paper presents a context-aware ubiquitous approach based on a lightweight coupling between a multi-agent system and the OSGi framework. A prototype of the framework has been developed and simulated using the USARSim robot simulator.

  • A metacomputing approach for the winner determination problem in combinatorial auctions

    Page(s): 485 - 488

    Grid computing is an innovative approach permitting the use of computing resources that are far apart and connected by wide area networks. This technology has become extremely popular for optimizing computing resources and managing data and computing workloads. The aim of this paper is to propose a metacomputing approach for the winner determination problem (WDP) in combinatorial auctions. The proposed approach is a hybrid genetic algorithm adapted to the WDP and implemented on a grid computing platform.

  • Nodes coupling in a Bayesian network for the automatic classification of XML documents

    Page(s): 146 - 152

    Document classification is one of the classical tasks of information retrieval and has motivated numerous studies. In this paper, we present a learning model for XML document classification based on Bayesian networks, a probabilistic reasoning formalism that represents dependency relationships between random variables in order to describe a problem or phenomenon. We propose a model, named the coupled model, that simplifies the tree representation of the XML document, and we show that this approach improves the response time while preserving classification performance.

  • Motion similarity measure between video sequences using multivariate time series modeling

    Page(s): 292 - 296

    The analysis and interpretation of video content is an important component of modern vision applications such as surveillance, motion synthesis, and web-based user interfaces. A requirement shared by these very different applications is the ability to learn statistical models of appearance and motion from a collection of videos, and then use them to recognize actions or persons in a new video. Measuring the similarity and dissimilarity between video sequences is crucial in any video sequence analysis and decision-making process; furthermore, many data analysis processes deal with moving objects and need to compute the similarity between trajectories. In this paper, we propose a similarity measure for multivariate time series using the Euclidean distance between Vector Autoregressive (VAR) models. The proposed approach allows us to identify and recognize the actions of persons in video sequences. The performance of our methodology is tested on a real dataset.

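The abstract leaves the estimation details open; a plausible minimal sketch of the idea is to fit a VAR(1) model to each sequence by least squares and compare the fitted coefficient matrices with the Frobenius (Euclidean) norm, so that sequences generated by the same dynamics come out closer than sequences from different dynamics. The simulated data below are illustrative only.

```python
import numpy as np

def fit_var1(X):
    """Least-squares VAR(1) fit: X[t] ≈ A @ X[t-1], where X has shape (T, d)."""
    Z, Y = X[:-1], X[1:]
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)   # solves Z @ A ≈ Y
    return A.T                                   # so that X[t] ≈ A.T applied on the left

def var_distance(X1, X2):
    """Euclidean (Frobenius) distance between the fitted VAR coefficient matrices."""
    return float(np.linalg.norm(fit_var1(X1) - fit_var1(X2)))

rng = np.random.default_rng(0)

def simulate(A, T=200):
    """Generate a T-step trajectory of X[t] = A @ X[t-1] + noise."""
    d = A.shape[0]
    X = np.zeros((T, d))
    for t in range(1, T):
        X[t] = A @ X[t - 1] + 0.1 * rng.standard_normal(d)
    return X

A1 = np.array([[0.9, 0.0], [0.1, 0.5]])
A2 = np.array([[0.2, 0.3], [0.0, 0.8]])
a, b, c = simulate(A1), simulate(A1), simulate(A2)
# Sequences driven by the same dynamics should be closer in model space.
print(var_distance(a, b) < var_distance(a, c))
```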
  • Buffers sizing in assembly lines using a Lorenz multiobjective ant colony optimization algorithm

    Page(s): 283 - 287

    In this paper, a new multiobjective resolution approach is proposed for solving buffer sizing problems in assembly lines. The problem consists of sizing the buffers between the different stations in a line, taking into consideration that the size of each buffer is bounded by lower and upper values. Two objectives are considered: maximizing the throughput rate and minimizing the total size of the buffers. The resolution method is based on a multiobjective ant colony algorithm, but uses Lorenz dominance instead of the well-known Pareto dominance relationship. Lorenz dominance provides a better domination area by rejecting the solutions found at the extreme sides of the Pareto front. The obtained results are compared with those of a classical multiobjective ant colony optimization algorithm using three different measuring criteria, and the numerical results show the advantages and the efficiency of Lorenz dominance.

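The abstract does not spell out its dominance test; the sketch below follows the usual equitable (Lorenz) dominance formulation for minimization, which may differ in detail from the paper's variant: sort each objective vector in decreasing order, take cumulative sums, and compare those profiles component-wise. A balanced solution then dominates an unbalanced one with the same total, which is how extreme Pareto-front solutions get rejected.

```python
def lorenz_vector(obj):
    """Cumulative sums of the objectives sorted in decreasing order (worst first)."""
    ordered = sorted(obj, reverse=True)
    cum, total = [], 0.0
    for v in ordered:
        total += v
        cum.append(total)
    return cum

def lorenz_dominates(a, b):
    """a Lorenz-dominates b (minimization): cumulative profile of a is
    component-wise no worse than b's, and strictly better somewhere."""
    la, lb = lorenz_vector(a), lorenz_vector(b)
    return all(x <= y for x, y in zip(la, lb)) and any(x < y for x, y in zip(la, lb))

# (5, 5) and (9, 1) have the same total, but the balanced vector dominates.
print(lorenz_dominates([5, 5], [9, 1]))
print(lorenz_dominates([9, 1], [5, 5]))
```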
  • A priori replica placement strategy in data grid

    Page(s): 402 - 406

    The use of grid computing is becoming increasingly important in areas requiring large quantities of data and computation. Replication is one of the main techniques for providing better access times and fault tolerance in such systems, and the effectiveness of a replication model depends on several factors, including the replica placement strategy. In this paper, we propose an a priori replica placement strategy that optimizes the distances between the data hosted on the grid.
