Proceedings of the First International Conference on Web Information Systems Engineering (WISE 2000)

Date: 19-21 June 2000

Results 1 - 25 of 33
  • First international conference on web information systems engineering

    Publication Year: 2000, Page(s): iii - vii
    PDF (177 KB)
    Freely Available from IEEE
  • Authors index

    Publication Year: 2000, Page(s): 219
    PDF (62 KB)
    Freely Available from IEEE
  • Web documents categorization using fuzzy representation and HAC

    Publication Year: 2000, Page(s): 24 - 28 vol.2
    Cited by:  Papers (1)
    PDF (392 KB)

    Most of the existing techniques for the characterization of Web documents are based on term-frequency analysis. In such models, given a set of documents, the characterization of each document is represented by a feature vector in a vector space. However, as Web documents written in HTML are semi-structured by means of tags, the traditional techniques that assign term weights only by the frequency of occurrence may not be able to provide satisfactory results in representing the content of such documents. Some recent studies have shown that the fuzzy representation (FR) of WWW information based on the significance of HTML tags is an effective alternative for characterizing Web documents. In this paper, the FR is used to generate the feature vector for each Web document and the hierarchical agglomerative clustering (HAC) algorithm is applied to investigate its efficiency and effectiveness for the automatic categorization of Web documents with similar contents. Experiments that have been conducted suggest several benefits of using such an approach.

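The pipeline this abstract describes (tag-weighted feature vectors fed to hierarchical agglomerative clustering) can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the tag weights and the similarity threshold are invented for the example.

```python
from math import sqrt

# Hypothetical tag-significance weights (illustrative, not the paper's values).
TAG_WEIGHT = {"title": 3.0, "h1": 2.0, "b": 1.5, "body": 1.0}

def feature_vector(tagged_terms):
    """tagged_terms: list of (term, tag) pairs -> {term: weight}."""
    vec = {}
    for term, tag in tagged_terms:
        vec[term] = vec.get(term, 0.0) + TAG_WEIGHT.get(tag, 1.0)
    return vec

def cosine(a, b):
    dot = sum(a.get(t, 0.0) * w for t, w in b.items())
    na = sqrt(sum(w * w for w in a.values()))
    nb = sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hac(vectors, threshold=0.5):
    """Single-link agglomerative clustering: repeatedly merge the two
    closest clusters while their similarity exceeds `threshold`."""
    clusters = [[i] for i in range(len(vectors))]
    while len(clusters) > 1:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                sim = max(cosine(vectors[a], vectors[b])
                          for a in clusters[i] for b in clusters[j])
                if sim > best:
                    best, pair = sim, (i, j)
        if best < threshold:
            break
        i, j = pair
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters
```

Documents whose heavily tagged terms overlap end up in the same cluster, while unrelated documents stay apart.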
  • Data mining algorithms for web pre-fetching

    Publication Year: 2000, Page(s): 34 - 38 vol.2
    Cited by:  Papers (1)
    PDF (368 KB)

    To speed up the fetching of Web pages, this paper presents an intelligent Web pre-fetching technique. We use a simplified WWW data model to represent the data in the cache of the Web browser and mine association rules from it. We store these rules in a knowledge base so as to predict the user's actions. Intelligent agents are responsible for mining the users' interests and pre-fetching Web pages, based on the interest association repository. In this way, user browsing time is reduced transparently.

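A minimal sketch of the kind of association-rule mining the abstract describes, applied to ordered browsing sessions. The support and confidence thresholds are illustrative assumptions, not the paper's values.

```python
from collections import Counter

def mine_rules(sessions, min_support=2, min_conf=0.5):
    """Mine simple page -> page rules from ordered browsing sessions.
    A rule A -> B means: after visiting A, the user often visits B."""
    pair_count = Counter()
    page_count = Counter()
    for session in sessions:
        seen = set()  # count each ordered pair once per session
        for i, page in enumerate(session):
            page_count[page] += 1
            for later in session[i + 1:]:
                if (page, later) not in seen:
                    pair_count[(page, later)] += 1
                    seen.add((page, later))
    rules = {}
    for (a, b), n in pair_count.items():
        if n >= min_support and n / page_count[a] >= min_conf:
            rules.setdefault(a, []).append(b)
    return rules

def prefetch(rules, current_page):
    """Pages an agent would fetch in the background from `current_page`."""
    return rules.get(current_page, [])
```

Given past sessions, an agent consults the rules for the page the user is on and silently fetches the likely successors into the cache.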
  • Conceptual levels of SGML tags: a proposed taxonomy based on the tagging in the Orlando Project

    Publication Year: 2000, Page(s): 2 - 10 vol.2
    PDF (788 KB)

    Several projects in various disciplines are now using Standard Generalized Markup Language (SGML) tags at an interpretive level, i.e. these projects contain tags which have the potential to provide the reader with additional information that is not already explicit in the text itself. One such interpretive project is the Orlando Project, which is an integrated history of women's writing in the British Isles, currently under development in Canada. Orlando is unlike other projects in that the content is being written and tagged simultaneously. It also contains a wide and rich variety of both descriptive and interpretive tags, which provide the user with a wealth of information on women's writing in the British Isles, but the project does not currently provide an explicit indication of the level of description or interpretation to be expected in any given tag. Without such a taxonomy, projects like Orlando risk introducing potential ambiguities for the scholarly user. This paper therefore proposes a potential conceptual tag taxonomy for literary interpretive SGML projects such as Orlando.

  • Synchronous distance education: enhancing speaking skills via Internet-based real time technology

    Publication Year: 2000, Page(s): 168 - 172 vol.2
    PDF (300 KB)

    The paper reports on an investigation into one of the most urgent problems facing distance language education: the problem of lack of exposure to speaking practice in the target language. The Open Learning Chinese Program taught at Griffith University is used as a case study. Following a discussion on issues relating to distance education for languages, such as the indispensability of technology to learning languages in a distance mode and the importance of communicative competence, the paper moves on to an examination of the capabilities of Internet-based real-time technology. Two major indications can be generated from this research: real-time technology can help solve the problem of insufficient exposure to speaking practice, and a historical convergence of distance and traditional campus-based education toward a networked education can be expected.

  • Structured Web pages management for efficient data retrieval

    Publication Year: 2000, Page(s): 97 - 104 vol.2
    Cited by:  Papers (1)
    PDF (556 KB)

    The widespread use of the World Wide Web in recent years has opened up universal access to a vast number of information sources. An obstacle that affects the access to Web data is the lack of an information structure among and within Web pages. This raises a need for structured Web page management for efficient Web information searching. Our proposed structured Web page management is built in two stages: (i) HTML transformation to XML, and (ii) a navigation hierarchy. Also, we study how querying Web data can be accomplished in our structured Web page management, by which users may follow a navigation hierarchy to browse both inter-page and intra-page structures of the Web database and can specify queries for desired information.

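A toy illustration of the first stage (recovering structure from an HTML page and emitting it as XML), assuming a page whose structure is carried by h1/h2 headings. It sketches the general idea only, not the paper's transformation rules.

```python
from html.parser import HTMLParser

class OutlineBuilder(HTMLParser):
    """Collect a page's heading outline as a nested structure:
    each h1 becomes a section, following h2s become its subsections."""
    def __init__(self):
        super().__init__()
        self.outline = []
        self._current_tag = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2"):
            self._current_tag = tag

    def handle_data(self, data):
        text = data.strip()
        if not text or self._current_tag is None:
            return
        if self._current_tag == "h1":
            self.outline.append({"title": text, "subsections": []})
        elif self.outline:  # an h2 under the latest h1
            self.outline[-1]["subsections"].append(text)
        self._current_tag = None

    def handle_endtag(self, tag):
        if tag in ("h1", "h2"):
            self._current_tag = None

def to_xml(outline):
    """Emit the outline as simple XML, one <section> element per h1."""
    parts = ["<page>"]
    for sec in outline:
        parts.append('<section title="%s">' % sec["title"])
        parts += ["<sub>%s</sub>" % s for s in sec["subsections"]]
        parts.append("</section>")
    parts.append("</page>")
    return "".join(parts)
```

The resulting XML makes the intra-page hierarchy explicit, so a navigation hierarchy or query facility can be layered on top of it.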
  • Database support of Web course development with design patterns

    Publication Year: 2000, Page(s): 212 - 216 vol.2
    PDF (400 KB)

    Current distance learning is mostly based on Web technologies. However, course materials published as Web documents do not have a normalized structure. It is difficult for students to realize where they are in a Web navigation graph. A textbook, on the other hand, has a fixed structure, such as the hierarchy of chapters and the index. A textbook reader knows how to start searching for information with the common structure of books in his/her mind. If distance learning course materials were organized in one or two patterns, it would be easier for an individual student to follow them. We investigate this approach, and propose a system for Web course design with patterns. The system also serves as a front-end module of a Web learning environment which provides automatic assessment of student performance.

  • Reverse software engineering with UML for Web site maintenance

    Publication Year: 2000, Page(s): 157 - 161 vol.2
    Cited by:  Papers (1)
    PDF (476 KB)

    It is shown that reverse software engineering using the Unified Process (UP) and visual models with the Unified Modeling Language can be applied to Web site maintenance. By reverse engineering the current Web sites, the implementation models of the current Web sites are derived from the Web sites. For the navigation schemes, the Web elements and their dependencies of the current Web sites are shown in component diagrams. Also, the physical directory structures are shown in the component view of the implementation model. Our empirical results on official university Web site maintenance show that the reverse software engineering and visual models can help Web administrators to understand the navigation schemes and physical structures quickly and easily.

  • Developing an XML gateway for business-to-business commerce

    Publication Year: 2000, Page(s): 67 - 74 vol.2
    Cited by:  Patents (6)
    PDF (672 KB)

    Business-to-business (B2B) electronic commerce (EC) is rapidly gaining popularity, and companies are increasingly using this notion and technology for business transactions and data exchange with each other. Traditionally, using Electronic Data Interchange (EDI) on a Value Added Network (VAN) has been common for data exchange in B2B EC; however, its drawbacks are the trade-off between flexibility and efficiency among partners in the whole supply chain, and cost is often an issue due to the lack of a standard for information exchange. Now, the popularity of XML (Extensible Markup Language) helps companies to solve these problems because of its low cost and flexibility. In this paper a model for data exchange is investigated so that the cost of information flow could be reduced and the flexibility of the supply chain could be improved.

  • Data mining and XML: current and future issues

    Publication Year: 2000, Page(s): 131 - 135 vol.2
    Cited by:  Papers (1)  |  Patents (2)
    PDF (320 KB)

    This paper describes potential synergies between data mining and XML, which include the representation of discovered data mining knowledge, knowledge discovery from XML documents, XML-based data preparation and XML-based domain knowledge. Each category is viewed from a theoretical as well as a practical point of view.

  • WebReader: a mechanism for automating the search and collecting information from the World Wide Web

    Publication Year: 2000, Page(s): 47 - 54 vol.2
    Cited by:  Patents (1)
    PDF (700 KB)

    Current Web search engines are based on keyword search, and the relevance of a Web page depends on the number of keyword hits. As keyword matching is not at the same level as semantic matching, the searching scope is unnecessarily broad and the precision (and recall) can be rather low. These problems give rise to undesirable performance in Web information searching. In this paper, we describe a mechanism called WebReader, which is a middleware between the browser and the Web for automating the search for and collection of information from the Web. By facilitating meta-data specification in XML and manipulation in XSL, WebReader provides users with a centralized, structured, and categorized means to specify and retrieve Web information. An experimental prototype based on XML, XSL and Java has been developed to show the feasibility and practicality of our approach through a real-life application example.

  • Signature coding for meta objects

    Publication Year: 2000, Page(s): 89 - 96 vol.2
    PDF (592 KB)

    In this investigation, we discuss optimization issues of meta-queries for databases. Since traditional techniques cannot be applied to meta-queries, we propose a new technique of cost-based optimization without the evaluation of meta-objects. For this purpose, we introduce signature techniques and extend database schemas.

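Superimposed signature coding, the classical technique this abstract builds on, can be sketched as follows: each term sets a few bits in a fixed-width bit string, and a query term is ruled out whenever its bits are not all set. The signature width, the number of bits per term, and the MD5-derived bit positions are illustrative choices, not the paper's design.

```python
import hashlib

def _bit(term, i, bits):
    """Deterministic bit position for the i-th hash of `term`."""
    digest = hashlib.md5(f"{term}:{i}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % bits

def signature(terms, bits=64, k=3):
    """Superimpose k bits per term into one integer signature."""
    sig = 0
    for term in terms:
        for i in range(k):
            sig |= 1 << _bit(term, i, bits)
    return sig

def may_contain(sig, term, bits=64, k=3):
    """True if the signature does not rule out `term`.
    False positives are possible; false negatives are not."""
    q = signature([term], bits, k)
    return sig & q == q
```

Signatures let a query processor skip most objects cheaply and evaluate only the candidates that pass the bit test, which is the point of avoiding full meta-object evaluation.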
  • Shared XML documents in service centers of the future

    Publication Year: 2000, Page(s): 105 - 112 vol.2
    Cited by:  Patents (1)
    PDF (704 KB)

    Call centers are at the core of today's customer relations management. Increasingly, they are also utilized internally as competence and knowledge centers. Turning them into service centers of the future (SCotF) requires parallel communication over several channels, including Internet services, in a distributed synchronous fashion. In this paper, we show that the emerging XML standards provide a good basis for this type of interaction. In turning e-service into a groupware application, we propose to apply a spatial awareness model to assist in the collaboration. We demonstrate that it can be integrated into the XML/XSL framework. The results are compared with a previous solution, which applied proprietary tools. Questions of fidelity and a critique of the existing standards and tools complement the practical results.

  • Smart-Web

    Publication Year: 2000, Page(s): 162 - 166 vol.2
    PDF (248 KB)

    The problems associated with managing Internet sites that require dynamic content or constant changes have led to the production of this article. http://www.smart-city.com.au is a local government Internet site which has made use of Smart-Web to alleviate the bottleneck that is encountered at the Web master when highly dynamic Web content must be altered. It was found that such content could be much more easily handled if the owners of the data had access to change both the data and the structure of the site. The Smart-Web menuing system has been in use since September 1999 and can be viewed at http://www.smart-city.com.au.

  • A Web based system for managing university research center Web sites

    Publication Year: 2000, Page(s): 136 - 142 vol.2
    PDF (384 KB)

    Describes the challenges faced in coordinating the Web sites of numerous research groups within a large university. We argue that an XML framework is most appropriate, providing the greatest flexibility and forward compatibility. Software tools were developed to facilitate the creation of XML archives and to display them in common Web browsers, transparent to users. Attention is paid to meta-data and the development of research-sensitive, search-specific name-spaces. An XML-sensitive search engine was developed to facilitate searching across all sites and the extraction and assembly of cross-site data, such as a weekly list of activities.

  • Analysis of distributed database access histories for buffer allocation

    Publication Year: 2000, Page(s): 81 - 88 vol.2
    PDF (640 KB)

    Studies the buffer-size setting problem for distributed database systems. The main goal is to minimize physical I/O while achieving better buffer utilization at the same time. As opposed to traditional buffer management strategies, where a limited knowledge of user access patterns is analyzed and used, our buffer allocation mechanism extracts knowledge from historical reference streams and then determines the optimal buffer space based on the discovered knowledge. Simulation experiments show that the proposed method can achieve an optimal buffer allocation solution for distributed database systems.

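The core idea (derive per-site buffer quotas from an access history) can be sketched as follows. Proportional allocation with largest-remainder rounding is an illustrative stand-in for the paper's knowledge-discovery method, not its algorithm.

```python
from collections import Counter

def allocate_buffers(reference_stream, total_frames):
    """Split `total_frames` buffer pages among database sites in
    proportion to how often each site appears in the access history."""
    freq = Counter(reference_stream)
    total = sum(freq.values())
    quotas = {s: total_frames * n / total for s, n in freq.items()}
    alloc = {s: int(q) for s, q in quotas.items()}
    # Largest-remainder rounding so the frames sum exactly to total_frames.
    leftover = total_frames - sum(alloc.values())
    for s, _ in sorted(quotas.items(), key=lambda kv: kv[1] - int(kv[1]),
                       reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc
```

A hot site that dominates the reference stream gets most of the buffer pool, which is exactly the kind of skew a history-blind allocator misses.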
  • Web data cleansing and preparation for ontology extraction using WordNet

    Publication Year: 2000, Page(s): 11 - 18 vol.2
    Cited by:  Papers (1)
    PDF (536 KB)

    The explosive growth of data on the World Wide Web makes information management and knowledge discovery increasingly difficult. Applying database techniques to manage Web information can help in solving these problems. One difficulty encountered is that Web documents, unlike structured databases, contain unstructured and semi-structured data. Our hypothesis is that creating ontologies to describe the semantics of Web data is the key to bridging the gap between semi-structured data and structured databases, and hence to facilitating the application of database techniques. We extract an ontology (or conceptual schema) from a set of Web pages in a particular application domain automatically. The prototype we are constructing is called WebOntEx (Web Ontology Extraction). This paper describes the data preparation process and the semantic resolution process of the WebOntEx project to build a meta-database and a Web database.

  • Creating data exchange standards with XML: a waste?

    Publication Year: 2000, Page(s): 63 - 66 vol.2
    Cited by:  Papers (1)
    PDF (324 KB)

    This position paper presents a proposition concerning the relationship between XML and data interchange standards. The basic premise is that it is a waste of a good flexible technology to use it for the purposes of creating, or re-creating, fixed standards. The argument draws its inspiration from developments in the business world where the shift has been from the mass production of identical units to the production of customised and essentially unique artefacts. This general shift is highlighted by one particular industrial sector where this form of production has been the dominant paradigm throughout its history: the construction industry. This is the largest industrial sector in the world but suffers from poor IT uptake. The fragmented nature of the industry does not lend itself to the adoption of repetitive and fixed technologies. Rather, it requires flexible technologies. XML is a key example. The paper thus argues that XML is ideally suited to the new business paradigm of mass customisation. It does not deny the necessity for standards; they will always be required. But XML is concerned with the quick and easy creation of metadata rather than establishing fixed and standardised metadata structures.

  • Object-Oriented mediator queries to XML data

    Publication Year: 2000, Page(s): 39 - 46 vol.2
    PDF (536 KB)

    The mediator/wrapper approach is used to integrate data from different databases and other data sources by introducing a middleware virtual database that provides high-level abstractions of the integrated data. A framework is presented for querying XML data through such an Object-Oriented (OO) mediator system using an OO query language. The mediator architecture provides the possibility to specify OO queries and views over combinations of data from XML documents, relational databases, and other data sources. In this way interoperability of XML documents and other data sources is provided. The mediator provides OO views of the XML data by inferring the schema of imported XML data from the DTD of the XML documents, if available, using a set of translation rules. A strategy is used for minimizing the number of types (classes) generated in order to simplify the querying. If XML documents without DTDs are read, or if the DTD is incomplete, the system incrementally infers the OO schema from the XML structure while reading XML data. This requires that the mediator database is capable of dynamically extending and modifying the OO schema. The paper overviews the architecture of the system and describes incremental rules for translating XML documents to OO database structures.

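A minimal sketch of incremental schema inference from XML instances, in the spirit of the abstract's DTD-less case. The "schema" here is just the union of attributes and child elements observed per element name, far simpler than the paper's translation rules.

```python
import xml.etree.ElementTree as ET

def infer_schema(xml_docs):
    """Incrementally infer a minimal 'OO schema': for each element name,
    the union of attribute names and child-element names seen so far."""
    schema = {}
    for doc in xml_docs:
        root = ET.fromstring(doc)
        for elem in root.iter():
            info = schema.setdefault(elem.tag,
                                     {"attrs": set(), "children": set()})
            info["attrs"] |= set(elem.attrib)
            info["children"] |= {child.tag for child in elem}
    return schema
```

Because each document only widens the recorded sets, the schema can be extended as new XML data is read, mirroring the incremental behaviour the abstract requires of the mediator database.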
  • Using contextual semantics to automate the Web document search and analysis

    Publication Year: 2000, Page(s): 19 - 23 vol.2
    PDF (312 KB)

    Traditional information retrieval techniques require documents that share enough words to build semantic links between them. This kind of technique is greatly affected by two factors: synonymy (different words having the same meaning) and polysemy (a word with several meanings), also known as ambiguity. Synonymy may result in a loss of semantic difference, while polysemy may lead to wrong semantic links. S.J. Green (1999) proposed the concept of a synset (a set of words having the same or a close meaning) and used a synset method to solve the problems of synonymy and polysemy. Although the synonymy problem can be solved, the polysemy problem still remains, because it is not actually possible to use an entire document as a basis to identify the meaning of a word. In this paper, we propose the concept of a context-related semantic set in order to identify the meaning of a word by considering the relations between the word and its contexts. We believe that this approach can efficiently solve the ambiguity problem and hence support the automation of Web document searching and analysis.

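The idea of picking a word sense by its overlap with a context-related set can be sketched Lesk-style: score each candidate sense by how many of its associated context words appear around the ambiguous word. The sense context sets for "bank" below are hypothetical, and this is an illustration of the general principle rather than the paper's method.

```python
def best_sense(word_context, sense_contexts):
    """Pick the sense whose context-related set overlaps the observed
    context the most (a simple Lesk-style overlap heuristic)."""
    context = set(word_context)
    return max(sense_contexts,
               key=lambda sense: len(context & sense_contexts[sense]))

# Hypothetical context-related sets for the polysemous word 'bank'.
SENSES_BANK = {
    "finance": {"money", "loan", "account", "deposit"},
    "river": {"water", "shore", "fishing", "flood"},
}
```

Only the words near the ambiguous term are consulted, which is what distinguishes a context-set approach from using the entire document to fix a word's meaning.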
  • Development of Web customize system for sharing educational resources

    Publication Year: 2000, Page(s): 173 - 178 vol.2
    PDF (376 KB)

    Recently, the Internet has become very widely used and the number of Web pages has been increasing. The authors believe that it is possible for many Web pages to become educational materials. Therefore, they focus on a learning environment using Web pages. They propose a novel framework called WECE (Web based Educational Customize Environment). WECE adds knowledge (annotations, questions, answers, etc.) to any Web page. WECE is a new approach that introduces a customizing layer model into WWW systems. In addition, the authors develop a prototype WECE system which is called “Web-Retracer”. Introducing Web-Retracer into a classroom may reduce a teacher's burden in selecting teaching materials for the learners. The teacher can select the contents of the teaching materials without considering the learner's level.

  • Real-time quiz functions for dynamic group guidance in distance learning systems

    Publication Year: 2000, Page(s): 188 - 195 vol.2
    Cited by:  Papers (4)
    PDF (704 KB)

    One of the serious problems of distance learning systems is how to teach many students on an individual basis. The authors design an educational method, dynamic group guidance using quiz functions for network-based lecturing, and present an implementation method. Using the dynamic group guidance function, the system groups students automatically according to the results of quizzes, and a teacher can guide each group individually. Supported by the system, therefore, a teacher can guide every student in a lecture efficiently according to his/her own level of understanding. We have realized the system using two technologies: user profiling and software agents.

  • Modular development of multimedia courseware

    Publication Year: 2000, Page(s): 179 - 187 vol.2
    Cited by:  Papers (1)
    PDF (660 KB)

    The use of multimedia in courseware and Web-based learning is a current topic, influenced by the wide availability and the permanently improving technical possibilities. The employment of multimedia technology in education makes it possible to illustrate and to grasp complex processes and interrelationships. However, the development of multimedia content is still a very costly and tedious task. In order to reduce the costs and effort needed to build multimedia contents, it is desirable to organize the content in a modular way so that it can be (re)used and created cooperatively. The authors present a development process for multimedia contents that supports a modular cooperative design of multimedia courseware. They argue that applying modularity to courseware design not only reduces the costs but also allows the development of high-quality reusable and configurable courseware. Configurable courseware can be adapted to meet the needs and characteristics of different lecturers, students and contexts.

  • A cognitive model for adaptive hypermedia systems

    Publication Year: 2000, Page(s): 29 - 33 vol.2
    PDF (444 KB)

    Like other authors, we believe that hypermedia systems, and especially the World Wide Web, can increase and improve their functionality by means of making the semantics of the information system structure explicit. In this paper, we justify the need for a cognitive model in the conception of hypermedia systems. A semantic-dynamic model is presented that provides a complete, adaptive and evolving control of the development and maintenance of hyper-documents and an understandable navigation.
