Networked Computing and Advanced Information Management, 2008. NCM '08. Fourth International Conference on

Date: 2-4 Sept. 2008

Displaying Results 1 - 25 of 142
  • [Front cover - Vol 2]

    Publication Year: 2008, Page(s): C1
    PDF (6567 KB) | Freely Available from IEEE
  • [Title page i - Volume 2]

    Publication Year: 2008, Page(s): i
    PDF (54 KB) | Freely Available from IEEE
  • [Title page iii - Volume 2]

    Publication Year: 2008, Page(s): iii
    PDF (75 KB) | Freely Available from IEEE
  • [Copyright notice - Volume 2]

    Publication Year: 2008, Page(s): iv
    PDF (46 KB) | Freely Available from IEEE
  • Table of contents - Volume 2

    Publication Year: 2008
    PDF (134 KB) | Freely Available from IEEE
  • Welcome Message from the General Chair - Volume 2

    Publication Year: 2008, Page(s): xiv
    PDF (91 KB) | HTML | Freely Available from IEEE
  • Welcome Message from the Program Chair - Volume 2

    Publication Year: 2008, Page(s): xv
    PDF (89 KB) | HTML | Freely Available from IEEE
  • International Committee - Volume 2

    Publication Year: 2008, Page(s): xvi
    PDF (72 KB) | Freely Available from IEEE
  • Program Committee - Volume 2

    Publication Year: 2008, Page(s): xvii - xviii
    PDF (78 KB) | Freely Available from IEEE
  • 3D Morphing Techniques for Fish Growth Process System Implementation Based on Environmental Factors

    Publication Year: 2008, Page(s): 3 - 8
    PDF (350 KB) | HTML

    The digital content industry is growing rapidly and attracts public attention as a high value-added industry, with computer graphics techniques at its core. These techniques are applied in fields as varied as blockbuster films, games, educational content, and simulation, which motivates active research into the movement of the individual objects that make up content and into automating object change. Morphing is used in films and advertisements that require advanced visual effects, as well as in games and music videos. This paper grafts morphing onto a fish growth system that shows development from spawn to fully grown fish. We propose a morphing technique for transforming objects, such as fish swimming in water, across repeated frames, and design a system that simulates the grown appearance as it changes with the various environmental factors affecting the fish growth process.
  • A Comparative Study of Classification Methods in Financial Risk Detection

    Publication Year: 2008, Page(s): 9 - 12
    PDF (203 KB) | HTML

    Early detection of financial risks can help credit grantors and institutions establish appropriate policies for credit products, reduce losses, and increase revenue. In recent years, the application of data mining techniques such as classification and clustering to financial risk detection has drawn interest from academic researchers and industry practitioners. The performance of classification methods varies across datasets, and no single method has been found to be superior to the others on all of them. The goal of this paper is to provide a comparative analysis of the ability of a selection of popular classification methods to predict financial risk. The outcome of this study can help financial institutions select appropriate classifiers for their specific tasks.
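    The comparison protocol this abstract describes can be sketched in a few lines: train several classifiers on the same data and rank them by held-out accuracy. The toy data, the train/test split, and the two baseline classifiers below are illustrative assumptions, not the authors' actual methods or datasets.

```python
# Sketch of a classifier comparison: fit each model on a shared training set,
# score on a shared held-out set, and rank by accuracy. Toy data and baseline
# classifiers (majority-class and 1-nearest-neighbor) are invented for
# illustration.
import random

def majority_classifier(train):
    """Always predict the most common label in the training set."""
    labels = [y for _, y in train]
    mode = max(set(labels), key=labels.count)
    return lambda x: mode

def one_nn_classifier(train):
    """Predict the label of the nearest training point (squared Euclidean)."""
    def predict(x):
        point, label = min(
            train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
        return label
    return predict

def accuracy(model, test):
    return sum(model(x) == y for x, y in test) / len(test)

random.seed(0)
# Two well-separated clusters standing in for "safe" vs. "risky" accounts.
data = [((random.gauss(0, 1), random.gauss(0, 1)), 0) for _ in range(100)]
data += [((random.gauss(4, 1), random.gauss(4, 1)), 1) for _ in range(100)]
random.shuffle(data)
train, test = data[:150], data[150:]

scores = {name: accuracy(clf(train), test)
          for name, clf in [("majority", majority_classifier),
                            ("1-NN", one_nn_classifier)]}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

    On real credit data the ranking would of course depend on the dataset, which is exactly the paper's point about no single method dominating.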
  • A Competitive-Perspective Evaluation Framework of Software Firms

    Publication Year: 2008, Page(s): 13 - 20
    PDF (418 KB) | HTML

    As the software industry reinforces the competitiveness of other industries, it is important to evaluate software firms from a competitive perspective. This study therefore presents an improved framework for evaluating the competitiveness of software firms. The effectiveness of the framework is tested by conducting a case study on software firms.
  • A Data Dependency Relationship Directed Graph and Block Structures Based Abstract Geospatial Information Service Chain Model

    Publication Year: 2008, Page(s): 21 - 27
    Cited by: Papers (2)
    PDF (352 KB) | HTML

    With the development of Web services and service-oriented architecture (SOA), Web services have become the primary means of implementing geospatial information sharing and interoperability, and modeling geospatial information processing on top of Web service composition is an active research topic. The data-centered procedure of geospatial information processing describes the steps of data processing and information retrieval. Current Web service composition languages and workflow models, whether low-level and IT-oriented or high-level and abstract, all rely on complex models, cannot express data and data flow intuitively, and do not suit geospatial domain users. In this paper, a data-dependency directed graph and block-structure based abstract geospatial information service chain model (DDBASCM) is proposed, together with a method for translating it into BPEL, so that geospatial domain users who are not Web service experts can model service chains intuitively, translate them into executable BPEL processes, and run them on a BPEL engine.
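    The core idea of a data-dependency directed graph can be sketched briefly: services are nodes, data dependencies are edges, and a topological sort yields a valid execution order for the chain. The service names and dependencies below are invented for illustration; DDBASCM itself and its BPEL translation are not reproduced here.

```python
# A service chain as a dependency graph: each service maps to the set of
# services whose output data it consumes. A topological sort gives an order
# in which every service runs after its data producers. Service names are
# hypothetical.
from graphlib import TopologicalSorter

chain = {
    "load_dem": set(),
    "load_landuse": set(),
    "slope_analysis": {"load_dem"},
    "overlay": {"slope_analysis", "load_landuse"},
}
order = list(TopologicalSorter(chain).static_order())
print(order)  # producers always precede their consumers; "overlay" runs last
```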
  • A Data Sanitization Method for Privacy Preserving Data Re-publication

    Publication Year: 2008, Page(s): 28 - 31
    Cited by: Papers (1)
    PDF (275 KB) | HTML

    When a table containing personal information is published, sensitive information should not be revealed. Although the k-anonymity and l-diversity models are popular approaches to protecting privacy, they are limited to one-time data publishing: after a dataset is updated with insertions and deletions, a data holder cannot safely release up-to-date information. Recently, the m-invariance model has been proposed to support re-publication of dynamic datasets. However, m-invariance has two drawbacks. First, m-invariant generalization can cause high information loss. Second, if an adversary has already obtained the sensitive values of some individuals before accessing the released information, m-invariance leads to severe privacy breaches. In this paper, we propose a new data sanitization technique for safely releasing dynamic datasets. The proposed technique avoids both drawbacks of m-invariance and provides a simple and effective method for handling inserted and deleted records.
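    The k-anonymity model the abstract builds on can be sketched as a simple check: every combination of quasi-identifier values must occur in at least k records. The toy table and attribute names are illustrative assumptions; the paper's own sanitization technique is not reproduced here.

```python
# Check k-anonymity: group rows by their quasi-identifier values and verify
# every group has at least k members. Table contents are invented.
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return all(count >= k for count in groups.values())

table = [
    {"age": "30-39", "zip": "021**", "disease": "flu"},
    {"age": "30-39", "zip": "021**", "disease": "cold"},
    {"age": "40-49", "zip": "022**", "disease": "flu"},
]
print(is_k_anonymous(table, ["age", "zip"], 2))  # False: one group has 1 row
```

    The re-publication problem the paper targets arises because a check like this on each release in isolation says nothing about what an adversary learns by comparing releases.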
  • A Database for the Analysis of Program Change Patterns

    Publication Year: 2008, Page(s): 32 - 39
    Cited by: Papers (1)
    PDF (995 KB) | HTML

    Software repositories contain an enormous amount of information about the evolution of any large software system. For our experiments we chose the freely available Mozilla CVS repository as the dataset. We downloaded 9552 C++ program files, extracted the CVS log data, and extracted the Mozilla bug information from the Bugzilla database. From these sources we derived the program file change data and stored it in a database, which we then used to analyze program file changes in order to find change patterns. We apply an approach to the database that allows us to identify the different types of change transactions: bug-fixing, clean, bug-introducing, and bug fix-introducing transactions. We further use the database to find the program file change distribution, and we use the probability of bug-introducing and bug fix-introducing changes to identify whether a source file is risky with respect to further changes. Such information is useful not only for developers but also for software managers, for example when assigning testing resources.
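    A common first step toward the transaction categories the abstract names is to separate bug-fixing from clean transactions by a keyword heuristic on log messages. The messages and the keyword list below are illustrative assumptions, not the authors' actual classification rules.

```python
# Classify change transactions as bug-fixing vs. clean from their commit
# messages, using a hypothetical keyword list. Real studies typically also
# cross-link bug IDs in messages against the bug database.
BUG_KEYWORDS = ("fix", "bug", "defect", "patch")

def classify(message):
    text = message.lower()
    return "bug-fixing" if any(k in text for k in BUG_KEYWORDS) else "clean"

log = ["Fix crash in parser (bug 12345)", "Add new layout tests"]
print([classify(m) for m in log])  # ['bug-fixing', 'clean']
```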
  • A Distributed Model for Medical Treatment System

    Publication Year: 2008, Page(s): 40 - 45
    PDF (522 KB) | HTML

    Many new-style medical treatment systems have been developed, but because they each employ various incompatible methods, most existing medical systems cannot collaborate with one another. To break these technical barriers and build an open, collaborative medical treatment system, a distributed model named DM4MTS is presented in this paper. By combining multiple techniques such as Web services, agents, and ontologies, the DM4MTS model overcomes the difficulties of dynamic service joining, semantic service discovery and matching, and medical data transmission. A simulation experiment based on Jadex is carried out, and the results show the feasibility and flexibility of the DM4MTS model.
  • A Domain Knowledge-Driven Framework for Multi-Criteria Optimization-Based Data Mining Methods

    Publication Year: 2008, Page(s): 46 - 49
    PDF (483 KB) | HTML

    In recent years, the multi-criteria optimization (MCO) community has made noticeable progress in the area of data mining and knowledge discovery. While most research effort is devoted to developing models and algorithms to "mine" data, not enough attention has been paid to the "knowledge discovery" aspect. Real-world data mining problems are complex and require close collaboration between data miners and domain experts. This paper analyzes the characteristics of MCO methods and proposes a framework that supports the integration of domain knowledge, business constraints and expectations, and data mining expertise. The aim of the framework is to turn the results of MCO-based data mining methods into actionable knowledge that can be applied to real-world problems.
  • A Fuzzy Time Series Model to Forecast the BDI

    Publication Year: 2008, Page(s): 50 - 53
    Cited by: Papers (1)
    PDF (130 KB) | HTML

    Fuzzy time series models have been applied to forecast a variety of problems and have been shown to forecast better than other models. In this article, we apply Chou and Lee's fuzzy time series model to forecast the Baltic Dry Index (BDI) for the next month. The root mean square error is used as the criterion for evaluating forecasting performance. Empirical results show that the fuzzy time series model is suitable for predicting the BDI.
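    The general fuzzy time series idea behind models of this family can be sketched as: fuzzify values into intervals, learn interval-to-interval transition rules from the history, and forecast the midpoint of the predicted interval(s). This is a generic first-order sketch with an invented toy series, not Chou and Lee's specific model.

```python
# Generic first-order fuzzy time series forecast: partition the value range
# into intervals, record which interval follows which, and forecast the mean
# midpoint of the intervals the last observation's rules point to. The
# BDI-like series is invented.
from collections import defaultdict

def fuzzy_forecast(series, n_intervals=4):
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_intervals or 1.0
    def bucket(v):
        return min(int((v - lo) / width), n_intervals - 1)
    mid = [lo + (i + 0.5) * width for i in range(n_intervals)]
    rules = defaultdict(list)            # interval -> intervals it moved to
    for a, b in zip(series, series[1:]):
        rules[bucket(a)].append(bucket(b))
    last = bucket(series[-1])
    nexts = rules.get(last) or [last]    # no rule observed: stay in place
    return sum(mid[i] for i in nexts) / len(nexts)

bdi_like = [1200, 1250, 1300, 1280, 1350, 1400, 1380]
print(round(fuzzy_forecast(bdi_like)))  # 1375: midpoint of the top interval
```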
  • A Genetic Multi-Agent Rule Induction System for Stream Data

    Publication Year: 2008, Page(s): 54 - 58
    PDF (305 KB)

    Many data mining algorithms cannot work effectively with very large stream data sets. Today, organizations collect, process, and store massive amounts of Internet-related stream data and want to mine these large stream data sets effectively, but existing data mining algorithms have several critical problems: storage management, increased run time, and algorithm complexity are examples. This study constructs a new stream data mining algorithm that builds a knowledge base from very large stream data sets using a genetic algorithm and a rule induction system. Unlike existing methods that build knowledge from stream data sets, the genetic multi-agent rule induction system builds knowledge from large stream data sets and thereby significantly improves prediction and classification accuracy.
  • A Holistic Understanding of HCI Perspectives on Smart Home

    Publication Year: 2008, Page(s): 59 - 65
    Cited by: Papers (1)
    PDF (259 KB) | HTML

    Human-Computer Interaction (HCI) is a discipline that emphasizes users as well as technology in the design process. The future home, known as the Smart Home (SH), has become an interesting topic for HCI researchers. Although the HCI community has studied it, the studies have been specific and dispersed, making it difficult to grasp a general understanding of HCI perspectives on SH. In this paper, we therefore propose a framework reflecting a holistic view of the appropriateness of SH, considering its three dimensions: human, home, and technology. Based on the framework, we also discuss related issues from HCI perspectives.
  • A Knowledge Integration Model for Corporate Dividend Prediction

    Publication Year: 2008, Page(s): 66 - 74
    PDF (315 KB) | HTML

    Dividends are one of the essential factors determining the value of a firm. According to valuation theory in finance, discounted cash flow (DCF) is the most popular and widely used method for valuing any asset. Since dividends play a key role in pricing a firm's value by DCF, accurate prediction of future dividends is naturally the most important task in valuation. Although dividend forecasting matters in the real world for investment and financing decisions, it is hard to find good theoretical models that predict future dividends accurately, apart from the Marsh and Merton (1987) model. Thus, a method that predicts future dividends better than Marsh and Merton's can contribute significantly to the valuation of a firm. The most important goal of this study is therefore to develop such a method by applying artificial intelligence techniques. The effectiveness of our approach was verified by experiments comparing it with the Marsh and Merton model, neural networks, and CART.
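    The DCF idea the abstract leans on can be made concrete in one line: a firm's value is the sum of expected future dividends discounted back to the present, which is why dividend forecasts drive the valuation. The dividend stream and discount rate below are invented numbers for illustration.

```python
# Discounted cash flow over a finite dividend stream: each dividend d paid at
# year t is worth d / (1 + r)^t today. Inputs are hypothetical.
def dcf_value(dividends, rate):
    return sum(d / (1 + rate) ** t for t, d in enumerate(dividends, start=1))

# Three annual dividends of 5.0 discounted at 10%:
print(round(dcf_value([5.0, 5.0, 5.0], 0.10), 2))  # 12.43
```

    A better dividend forecast feeds directly into the `dividends` argument, which is the sense in which forecasting accuracy matters for valuation.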
  • A Medical Information Management System Using the Semantic Web Technology

    Publication Year: 2008, Page(s): 75 - 80
    PDF (1214 KB) | HTML

    A medical information management system (MIMS) is developed using semantic Web technologies for managing medical data in the development of a diagnosis method for dementia. The system manages metadata that are either extracted automatically from medical data files or created by researchers. One characteristic of the MIMS data set is that some data items change occasionally as the research progresses. To support such changes, MIMS provides three mechanisms: (1) a pluggable metadata extractor, a plug-in mechanism that automatically generates a set of metadata from medical data files; (2) RDFView, a semantic Web retrieval mechanism that dynamically provides Web application programming interfaces created from SPARQL templates; and (3) a representation mechanism, a method for displaying the results of the semantic Web retrieval service.
  • A Modeling Method for Web Service Composition on Business Layer

    Publication Year: 2008, Page(s): 81 - 86
    Cited by: Papers (1)
    PDF (548 KB) | HTML

    Existing modeling methods for Web service composition are not suitable for modeling composition on the business layer. A novel concept model for Web service composition on the business layer, called DWSCCM, is presented in this paper. Initial experiments based on the DWSCCM model show that it is feasible and efficient.
  • A Multi-Channel Combination Method of Image Perceptual Hashing

    Publication Year: 2008, Page(s): 87 - 90
    PDF (160 KB) | HTML

    Traditionally, proposed perceptual hashing algorithms have focused mainly on the feature extraction and coding stages. However, they consider only part of the perceptual information and derive codes from a single type of feature, which biases them toward resisting only some varieties of attack. On the other hand, the Human Visual System (HVS), one of the best models of how humans cognitively process images, normally takes more than one perceptual factor into account. Based on the nature of human image cognition and the conclusions of HVS research, we suggest that image perceptual hashing should take advantage of different perceptual feature channels and preserve the strengths of each. In this paper, a multi-channel combination method is proposed and its performance under many common attacks is examined.
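    The multi-channel idea can be sketched as hashing two different feature channels of the same image and concatenating the bit strings, so a distortion that defeats one channel may still be caught by the other. The tiny 4x4 "images" and the channel choice (raw intensity vs. horizontal gradient) are illustrative assumptions, not the paper's actual channels.

```python
# Toy two-channel perceptual hash: threshold each channel's values at its
# mean to get bits, concatenate the channels, and compare hashes by Hamming
# distance. Images are flat lists of pixel intensities, row-major, width w.
def bits(values):
    mean = sum(values) / len(values)
    return [1 if v > mean else 0 for v in values]

def gradient(img, w):
    """Horizontal differences, skipping row boundaries."""
    return [img[i + 1] - img[i] for i in range(len(img) - 1) if (i + 1) % w]

def multi_channel_hash(img, w=4):
    return bits(img) + bits(gradient(img, w))  # concatenate channel hashes

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

img = [10, 20, 30, 40, 12, 22, 32, 42, 14, 24, 34, 44, 16, 26, 36, 46]
brighter = [v + 5 for v in img]               # mild "attack": brightness shift
h1, h2 = multi_channel_hash(img), multi_channel_hash(brighter)
print(hamming(h1, h2))  # 0: both channels are invariant to a uniform shift
```

    Mean-thresholding makes each channel invariant to uniform brightness changes; a real system would pick channels whose robustness profiles complement each other.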
  • A New Approach to Improve the Vote-Based Classifier Selection

    Publication Year: 2008, Page(s): 91 - 95
    Cited by: Papers (1)
    PDF (268 KB) | HTML

    In the past decade, many new methods have been proposed for combining multiple classifiers. A neural network ensemble is a learning paradigm in which many neural networks are used jointly to solve a problem. We propose a GA-based method for constructing a neural network ensemble using a weighted vote-based classifier selection approach. The main presumption of this method is that the reliability of each classifier's predictions differs among classes. During testing, the classifiers whose votes are considered reliable are combined using weighted majority voting. This combination method outperforms the ensemble of all classifiers by almost 2.26% and 4.00% on the Hoda and Wine data sets, respectively.
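    The combination rule the abstract describes, weighted majority voting with per-class reliability, can be sketched directly: each classifier's vote is scaled by its reliability weight for the class it votes for. The votes and weights below are invented numbers; the GA-based selection of ensemble members is not reproduced here.

```python
# Weighted majority voting with per-class reliability: sum, for each label,
# the reliability weights of the classifiers that voted for it, and return
# the label with the highest total. Weights are hypothetical.
from collections import defaultdict

def weighted_vote(predictions, weights):
    """predictions: one class label per classifier.
    weights[i][label]: reliability of classifier i when it predicts label."""
    score = defaultdict(float)
    for i, label in enumerate(predictions):
        score[label] += weights[i].get(label, 0.0)
    return max(score, key=score.get)

weights = [{"A": 0.9, "B": 0.2},
           {"A": 0.4, "B": 0.3},
           {"A": 0.5, "B": 0.4}]
# Two classifiers vote "B", but the highly reliable "A"-voter outweighs them:
# score A = 0.9, score B = 0.3 + 0.4 = 0.7.
print(weighted_vote(["A", "B", "B"], weights))  # A
```

    This is what lets the weighted scheme overturn a raw majority when the minority voter is known to be reliable on its predicted class.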