
Fourth Annual ACIS International Conference on Computer and Information Science, 2005

Date: 14-16 July 2005


Displaying Results 1 - 25 of 128
  • Proceedings. Fourth Annual ACIS International Conference on Computer and Information Science

  • Fourth Annual ACIS International Conference on Computer and Information Science - Title Page

    Page(s): i - iii
  • Fourth Annual ACIS International Conference on Computer and Information Science - Copyright Page

    Page(s): iv
  • Fourth Annual ACIS International Conference on Computer and Information Science - Table of contents

    Page(s): v - xii
  • Message from the Conference Chairs

    Page(s): xiii
  • Message from the Program Chairs

    Page(s): xiv
  • Software engineering - components, interfaces, behaviours


    Summary form only given. Software engineering has matured from heuristic practice to an engineering discipline. Over the years, software technology developed into a key qualification for mastering complex technical systems. Nowadays, software engineers can benefit from a solid stock of basic research addressing the specification, modelling, design and implementation of sequential, concurrent, distributed and real time systems. This paper surveys the scientific foundations of modern software technology concentrating on components, interfaces and behaviours. We present a unifying approach relating different system views manifesting themselves as data model, communication model, state transition model, and process model.

  • Acquiring dominant compound terms to build Korean domain knowledge bases

    Page(s): 2 - 7

    Compound terms should be well ranked to reduce the laborious work of building domain knowledge bases such as term dictionaries and thesauri. Terms that have become dominant in recent years are especially valuable in terms of coverage and reference. We adopt linguistic filtering, using a part-of-speech filter and four combination rules, to extract Korean compound terms. Domain seed terms are then used to obtain related terms from the extracted term list. Term ranking, which considers the dominance trend of terms over several years of data, assigns dominance values to the related terms. Experimental results show that our ranking scheme distributes extracted terms more adequately than term-frequency ordering, reducing the effort of building domain knowledge bases by clustering terms into three groups: growing, declining, and steady.

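The dominance ranking described above weights recent years more heavily than older ones. A minimal sketch of that idea (the exponential recency weight and the sample terms below are illustrative assumptions, not the paper's formula):

```python
def dominance_score(yearly_freq, base_year, weight=1.5):
    """Score a term so that recent-year frequencies dominate older ones.

    yearly_freq maps year -> frequency; the exponential recency weight
    is an assumption for illustration, not the paper's ranking scheme.
    """
    return sum(freq * weight ** (year - base_year)
               for year, freq in yearly_freq.items())

terms = {
    "information retrieval": {2001: 40, 2002: 35, 2003: 30},  # declining
    "semantic web":          {2001: 5, 2002: 25, 2003: 60},   # growing
}
ranked = sorted(terms, key=lambda t: dominance_score(terms[t], 2001),
                reverse=True)
print(ranked[0])
```

Even though "information retrieval" has the larger raw frequency total, the growing term outranks it once recency is weighted in.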
  • Genetic algorithm implementation in Python

    Page(s): 8 - 11

    This paper deals with genetic algorithm implementation in Python. A genetic algorithm is a probabilistic search algorithm based on the mechanics of natural selection and natural genetics. In genetic algorithms, a solution is represented by a list or a string, and list and string processing in Python is more productive than in C/C++/Java, so implementing genetic algorithms in Python is quick and easy. We introduce genetic algorithm implementation methods in Python and discuss various tools for speeding up Python programs.

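As a rough illustration of how compactly a genetic algorithm maps onto Python's list processing (the operators, rates, and the one-max fitness below are illustrative choices, not the paper's code):

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100,
                      mutation_rate=0.01, seed=0):
    """Minimal generational GA over bit-list individuals."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Flip each bit with probability mutation_rate.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = genetic_algorithm(sum)   # "one-max": maximize the number of 1 bits
print(sum(best))
```

The individual is a plain list, and crossover is a single slice-and-concatenate expression, which is the kind of brevity the abstract contrasts with C/C++/Java.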
  • Kernel based intrusion detection system

    Page(s): 13 - 18

    Recently, the application of artificial intelligence, machine learning, and data mining techniques to intrusion detection systems has been increasing, but most research focuses on improving classifier performance. Selecting important features from the input data simplifies the problem and yields faster, more accurate detection, so feature selection is an important issue in intrusion detection. Another issue is that most intrusion detection systems run off-line, which is not suitable for real-time intrusion detection. In this paper, we develop a real-time intrusion detection system that combines an on-line feature extraction method with a least-squares support vector machine classifier. Experimental results on the KDD CUP 99 data show that it performs remarkably well compared to an off-line intrusion detection system.

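The classifier named above, the least-squares SVM, replaces the standard SVM's quadratic program with a single linear system, which is what makes it attractive for fast retraining. A minimal sketch (the RBF kernel and the γ and σ values are illustrative assumptions; the paper's on-line feature extraction is not shown):

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM classifier's linear system:
       [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]."""
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, multipliers alpha

def lssvm_predict(X, y, b, alpha, Xnew, sigma=1.0):
    """sign(sum_i alpha_i y_i K(x, x_i) + b)."""
    return np.sign(rbf(Xnew, X, sigma) @ (alpha * y) + b)

X = np.array([[0., 0], [0, 1], [1, 0], [1, 1],
              [3, 3], [3, 4], [4, 3], [4, 4]])
y = np.array([-1.] * 4 + [1.] * 4)
b, alpha = lssvm_train(X, y)
print(lssvm_predict(X, y, b, alpha, X))
```

Because training is one `solve` call rather than an iterative QP, refitting as new traffic arrives is cheap, which is the property a real-time system needs.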
  • An effective real time update rule for improving performances both the classification and regression problems in kernel methods

    Page(s): 19 - 24

    A general way to solve both classification and regression problems is to represent real-world information as matrices. This paper treats the primal space as the real world, and the dual space as the new matrices into which primal spaces are transferred using a kernel. In practice there are two kinds of problems: complete systems, which can be solved using the inverse matrix, and ill-posed or singular systems, which cannot be solved directly from the inverse of the given matrix. Problems are often of the latter kind, so it is necessary to find a regularization parameter that turns ill-posed or singular problems into complete systems. This paper compares performance on both classification and regression problems among GCV, the L-curve, and kernel methods. It also suggests a dynamic momentum, learned under a limited proportional condition between the learning epoch and the performance of the given problem, to increase the performance and precision of regularization. Finally, experiments on the Iris data (a standard classification benchmark), Gaussian data (typical of singular systems), and the Shaw data (a one-dimensional image restoration problem) show that the suggested solution obtains better or equivalent results compared with GCV and the L-curve.

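The core move in the abstract, adding a regularization parameter so a singular system becomes solvable, can be sketched with plain Tikhonov (ridge) regularization (the matrix and λ below are illustrative; the GCV and L-curve methods for choosing λ are not shown):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||Ax - b||^2 + lam * ||x||^2 via the normal equations.
    A.T @ A + lam * I is nonsingular for lam > 0 even when A is rank-deficient."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# A rank-deficient (singular) system: A has rank 1, so A.T @ A is not invertible.
A = np.array([[1.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
b = np.array([2.0, 2.0, 4.0])
x = tikhonov_solve(A, b, lam=1e-3)
print(np.allclose(A @ x, b, atol=1e-2))
```

With λ = 0 the normal equations here have no unique solution; any small positive λ picks out a well-defined answer, and the methods the paper compares are different rules for choosing that λ.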
  • Knowledge-based compliance management systems - methodology and implementation

    Page(s): 25 - 29

    In recent times, a challenging problem for organisations worldwide has been the management of the growing number of rules, procedures, policies, and reporting requirements governing their businesses, operations, and industry. This paper considers the task of building automated knowledge-based compliance management systems. In this paper, we aim to highlight the common weaknesses found in compliance methodologies in practice. The requirements of a good research methodology for compliance are discussed. Finally, the paper presents the development of a research methodology for compliance and its validation through a progressive case study in International Transfer Pricing.

  • Design of translator for stack-based codes from 3-address codes in CTOC

    Page(s): 32 - 36

    We present CTOC, a framework for optimizing Java bytecode. The framework supports two intermediate representations of Java bytecode: CTOC-B, a streamlined representation of bytecode that is simple to manipulate, and CTOC-T, a typed 3-address intermediate representation suitable for optimization. We translate CTOC-T back to CTOC-B while removing needless code, i.e., redundant store/load statements and partial redundancies. We study the techniques necessary to translate CTOC-T back to CTOC-B effectively without losing performance.

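The "needless code" the abstract mentions includes adjacent store/load pairs that leave the operand stack unchanged. A toy peephole pass over a hypothetical stack-code listing (the tuple instruction format is an invented stand-in, not CTOC's actual representation):

```python
def eliminate_redundant_store_load(code):
    """Drop an adjacent ('store', v) / ('load', v) pair when v is never
    loaded again afterwards: the pair is a stack-neutral no-op."""
    out, i = [], 0
    while i < len(code):
        op = code[i]
        if (op[0] == "store" and i + 1 < len(code)
                and code[i + 1] == ("load", op[1])
                and ("load", op[1]) not in code[i + 2:]):
            i += 2  # skip the needless pair
            continue
        out.append(op)
        i += 1
    return out

before = [("push", 1), ("store", "t"), ("load", "t"), ("push", 2), ("add",)]
print(eliminate_redundant_store_load(before))
```

If the variable is loaded again later, the store is still needed and the pair is kept; a real translator would also handle the `dup`-based rewrite for that case.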
  • A reflective practice of automated and manual code reviews for a studio project

    Page(s): 37 - 42

    In this paper, the target of code review is a project management system (PMS) developed by a studio project in a software engineering master's program, and the focus is on finding defects not only against development standards, i.e., design rules and naming rules, but also against quality attributes of the PMS, i.e., performance and security. From the review results, a few lessons were learned. First, defects that had not been found in the test stage of PMS development could be detected in this code review. These are hidden defects that affect system quality and are difficult to find in testing. If the defects found in this code review had been fixed before the test stage of PMS development, the productivity and quality of the project would have improved. Second, manual review takes much longer than automated review. In this code review, general check items were checked by an automation tool, while project-specific ones were checked manually. If project-specific check items could also be checked by an automation tool, code review and the verification work after fixing defects would be conducted very efficiently. Reflecting on this idea, an evolution model of code review is studied, which ultimately seeks fully automated review as the optimal form of code review.

  • Design and implementation of UML based mobile integrated management system

    Page(s): 43 - 48

    The existing research-expenses management task consists of budget planning, budget draw-up, and exact settlement of the budget, so integrated management is keenly needed for security, efficient operation, and clear execution of research expenses. To reflect these needs, a Research Expenses Integrated Management (REIM) system has been offered as a mobile application by reusing a business application module developed in the REIM development process. A mobile collaboration component, which supports a specialized collaboration process adapted to the peculiarities of mobile devices, has also been developed. As a result, the system can offer various supporting information for decision making in establishing research management policy by reflecting users' requirements in real time. It can also provide accuracy and error prevention for each operation, since it can track the progress of each operation through mobile collaboration without constraints of time and space.

  • Design and implementation of forced unmount

    Page(s): 49 - 53

    This paper describes a kernel function named FU (forced unmount) for file systems on Linux. FU forcibly unmounts a file system even while it is busy. The current implementation was developed on Linux 2.6.8 and tested with the PostMark and LTP tools. The paper discusses design considerations for FU and an algorithm that solves the problems encountered while developing it.

  • Notice of Violation of IEEE Publication Principles
    Development of embedded software with component integration based on ABCD architectures

    Page(s): 54 - 60

    Notice of Violation of IEEE Publication Principles

    "Development of Embedded Software with Component Integration Based on ABCD Architecture"
    by Haeng-Kon Kim, Roger Y. Lee, and Hae-Sool Yang
    in the Proceedings of the 4th Annual ACIS International Conference on Computing and Information Science (ICIS'05), 14-16 July 2005, pp. 54-60

    After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

    This paper was found to be a near verbatim copy of the paper cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission.

    Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following article:

    "An Architecture for Embedded Software Integration Using Reusable Components"
    by Shige Wang and Kang G. Shin,
    in the Proceedings of the 2000 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems, ACM Press, 17-19 November 2000, pp. 110-118

    The state-of-the-art approaches to embedded real-time software development are very costly. The high development cost can be reduced significantly by using model-based integration of reusable components. Building on the ABCD (architecture, basic, common, and domain) architecture, we propose an architecture that supports integration of software components and their behaviors, and reconfiguration of component behavior at the executable-code level. In the architecture, components are designed and used as building blocks for integration, each of which is modeled with event-based external interfaces, a control logic driver, and service protocols. The behavior of each component is specified as a finite state machine (FSM), and the integrated behavior is modeled as a nested finite state machine (NFSM). These behavior specifications can be packed into a control plan program and loaded into a runtime system for execution or into a verification tool for analysis. With this architecture, embedded software can be constructed by selecting and then connecting (as needed) components in an asset library, specifying their behaviors, and mapping them to an execution platform. Integration of heterogeneous implementations and vendor neutrality are also supported. Our evaluation based on machine tool control software development using this architecture has shown that it can reduce development and maintenance costs significantly and provide high degrees of reusability.

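The notice aside, the underlying technique, specifying each component's behavior as an event-driven finite state machine, can be sketched as follows (the states and events are invented for illustration):

```python
class FSMComponent:
    """Component whose behavior is a finite state machine driven by events."""
    def __init__(self, start, transitions):
        self.state = start
        self.transitions = transitions  # {(state, event): next_state}

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"event {event!r} not accepted in state {self.state!r}")
        self.state = self.transitions[key]
        return self.state

# A hypothetical motor-control component.
motor = FSMComponent("idle", {
    ("idle", "start"): "running",
    ("running", "stop"): "idle",
    ("running", "fault"): "error",
})
print(motor.fire("start"))
```

A nested FSM for the integrated behavior would follow by letting a transition's target itself be another `FSMComponent`; an explicit transition table like this is also what a verification tool would consume for analysis.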
  • A review approach to detecting structural consistency violations in programs

    Page(s): 61 - 66

    The application of specification-based program verification techniques (e.g., testing, review, and proof) usually faces strong challenges in practice when the gap between the structure of a specification and that of its program is large. In this paper we describe an approach to detecting the violations of the structural consistency in programs based on their specifications by review. The approach is aimed at supporting software development in which programs are constructed based on their formal specifications. We establish a set of criteria and a review process that can guide reviewers to uncover structural consistency violations in programs, and apply the approach in a case study to assess its effectiveness.

  • Mining frequent pattern using item-transformation method

    Page(s): 698 - 706

    Mining frequent patterns is a fundamental and crucial task in data mining. This paper proposes a novel and simple approach that belongs to neither the candidate generation-and-test approach (for example, the Apriori algorithm) nor the pattern-growth approach (such as the FP-growth algorithm). The approach treats the database as a stream of data and finds the frequent patterns by scanning the database only once. Two versions of the approach (mapping-table and transformation-function) are provided, and analyses and simulations of both are performed. The analyses show that the transformation-function version has much better storage complexity than Apriori and FP-growth, and simulation results show that the mapping-table version is comparable to the FP-growth algorithm in execution time.

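The single-scan idea can be illustrated with a naive one-pass counter: each transaction updates the count of every itemset it contains, so no second pass over the database is needed. This is exponential in transaction width and is only a sketch of the single-scan property, not the paper's mapping-table or transformation-function encoding:

```python
from collections import Counter
from itertools import combinations

def frequent_patterns(transactions, min_support):
    """One pass over the data: count every itemset each transaction contains,
    then keep those meeting the absolute min_support count."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for r in range(1, len(items) + 1):
            for pattern in combinations(items, r):
                counts[pattern] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

db = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
print(frequent_patterns(db, min_support=3))
```

The paper's contribution is precisely a compact item-transformation encoding that avoids this exponential blow-up while keeping the single scan.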
  • PRBAC: an extended role based access control for privacy preserving data mining

    Page(s): 68 - 73

    Issues around privacy-preserving data mining (PPDM) have emerged globally, but the main problem remains that, from non-sensitive information or unclassified data, one is able to infer sensitive information that is not supposed to be disclosed. This paper proposes an approach to PPDM based on an extended role-based access control, called privacy-preserving data mining using an extended role-based access control (PRBAC). A sensitive objects (SOBs) component is added to the model in order to protect privacy during data mining. Users are allowed to access, and thereby mine, different sets of data according to their roles. Our proposed model can be used on top of existing technologies. The paper's goal is to preserve individuals' privacy.

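As a rough sketch of the role-based gate the abstract describes, roles map to minable attributes, minus a globally protected sensitive-objects set (the role names, columns, and sensitive set below are invented for illustration, not the paper's model):

```python
class PRBACPolicy:
    """Toy role-based filter for mining: each role sees only the columns
    its role grants, minus the sensitive objects (SOBs)."""
    def __init__(self, role_views, sensitive):
        self.role_views = role_views  # role -> columns the role may mine
        self.sensitive = sensitive    # globally protected columns

    def minable_view(self, role, records):
        allowed = set(self.role_views.get(role, ())) - self.sensitive
        return [{k: v for k, v in rec.items() if k in allowed}
                for rec in records]

policy = PRBACPolicy(
    role_views={"analyst": {"age", "zip", "diagnosis"}, "intern": {"age"}},
    sensitive={"diagnosis"},
)
records = [{"age": 34, "zip": "02139", "diagnosis": "flu", "name": "alice"}]
print(policy.minable_view("analyst", records))
```

The point of the sensitive-objects set is that even a role explicitly granted a column cannot mine it once it is marked sensitive, so the miner only ever sees the filtered view.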
  • RGSN model for database integration

    Page(s): 74 - 79

    Considering information integration, we take a careful look at the many types of collisions that arise from heterogeneity. We propose a relational global semantic network (RGSN) to solve this problem. It is based on WordNet, which defines the relationships among words. The main idea of RGSN is to derive connections among entities in each schema and thereby provide a foundation for resolving the heterogeneity. We suggest an implementation of the SELECT query that can be used in an environment of integrated heterogeneous databases. It helps users retrieve data correctly without considering the various schema structures of the many local databases. That is a prerequisite for the utilization of integrated databases and is the main contribution of this paper.

  • An efficient workcase classification method and tool in workflow mining

    Page(s): 80 - 85

    This paper conceives a workcase classification method and implements it as a tool for use in workflow mining systems. The method resolves the workcase classification problem that arises when mining the activity firing (execution) sequence of a workcase from monitoring and audit logs. It generates a workcase classification decision tree consisting of a minimal set of critical activities that decide the corresponding reachable path of each workcase. The method is efficient because it uses a minimal decision tree to classify workcases' reachable paths. The tool is a graphical visualizer of the method and consists of three subsystems that automatically generate the information control net, the activity dependency net, and the minimal activity net through their corresponding algorithms. In particular, the method and tool may be well suited to the specific domain of massively parallel, large-scale workflow procedures.

  • Equivalence of transforming non-linear DACG to linear concept tree

    Page(s): 86 - 91

    This paper introduces a strategy, with a theoretical proof, for transforming a non-linear concept graph, the directed acyclic concept graph (DACG), into a linear concept tree. The transformation is divided into three steps: normalizing the DACG into a linear concept tree, establishing a function on the host attribute, and reorganizing the sequence of concept generalization. This study develops an alternative approach to discovering knowledge under a non-linear concept graph. It overcomes the problems of information loss in rule-based attribute-oriented induction and low efficiency in the path-id method. Because the DACG is a more general concept schema, the approach is able to extract rich knowledge implied in different directions of a non-linear concept scheme.

  • Palmprint identification algorithm using Hu invariant moments and Otsu binarization

    Page(s): 94 - 99

    Recently, biometrics-based personal identification has come to be regarded as an effective means of verifying a person's identity, with automated recognition and high performance. In this paper, a palmprint recognition method based on Hu invariant moments is proposed. A low-resolution (75 dpi) palmprint image (135×135 pixels) is used for the small-scale database of the palmprint recognition system. The proposed system consists of two parts: first, a palmprint fixing device for acquiring a correct palmprint image, and second, an efficient processing algorithm for palmprint recognition. The palmprint identification step is limited to three attempts. As a result, when the coefficient is 0.001, the FAR and GAR are 0.038% and 98.1%, respectively. The authors confirmed that, compared with an online palmprint identification algorithm, the FAR is improved by 0.002% and the GAR by 0.1%.

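Hu invariant moments are built from normalized central moments of the image, which makes them insensitive to where the palmprint lies in the frame. A minimal NumPy sketch of the first two of the seven moments (the synthetic test images are illustrative; the paper's binarization and matching steps are not shown):

```python
import numpy as np

def hu_first_two(img):
    """First two Hu invariant moments of a binary image."""
    ys, xs = np.nonzero(img)
    m00 = len(xs)                     # zeroth moment: foreground pixel count
    x, y = xs - xs.mean(), ys - ys.mean()   # center, for translation invariance

    def eta(p, q):
        # Normalized central moment: eta_pq = mu_pq / m00**((p+q)/2 + 1)
        return (x ** p * y ** q).sum() / m00 ** ((p + q) / 2 + 1)

    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    h1 = e20 + e02
    h2 = (e20 - e02) ** 2 + 4 * e11 ** 2
    return h1, h2

# The same rectangle at two positions yields identical moments.
a = np.zeros((20, 20)); a[2:8, 3:12] = 1
b = np.zeros((20, 20)); b[10:16, 5:14] = 1
print(np.allclose(hu_first_two(a), hu_first_two(b)))
```

Centering on the centroid removes translation, and dividing by powers of `m00` removes scale, so two acquisitions of the same palm region compare on shape alone.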
  • Natural image compression based on modified SPIHT

    Page(s): 100 - 104

    Due to bandwidth and storage limitations, images must be compressed before transmission. The set partitioning in hierarchical trees (SPIHT) algorithm is an efficient method for lossy and lossless coding of images. This paper presents some modifications to the SPIHT algorithm, based on the idea that wavelet coefficients are insignificantly correlated among the medium- and high-frequency subbands, respectively. In this scheme, insignificant wavelet coefficients that correspond to the same spatial location in the medium subbands are used to reduce redundancy via a combined function that the modified SPIHT proposes. In the high-frequency subbands, the modified SPIHT proposes a dictator to reduce the interband redundancy. Experimental results show that the proposed technique improves the quality of the reconstructed image in both PSNR and perceptual quality when compared to JPEG2000 at the same bit rate.
