
Proceedings of the 10th IEEE International Workshop on Future Trends of Distributed Computing Systems (FTDCS 2004)

Date: 26-28 May 2004


Displaying Results 1 - 25 of 62
  • Exploiting semantic proximity in peer-to-peer content searching

    Publication Year: 2004 , Page(s): 238 - 243
    Cited by:  Papers (19)

    Much recent work has dealt with improving the performance of content searching in peer-to-peer file sharing systems. In this paper we attack this problem by modifying the overlay topology describing the peer relations in the system. More precisely, we create a semantic overlay, linking nodes that are "semantically close", by which we mean that they are interested in similar documents. This semantic overlay provides the primary search mechanism, while the initial peer-to-peer system provides the fail-over search mechanism. We focus on implicit approaches for discovering semantic proximity. We evaluate and compare three candidate methods, and review open questions.
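
    As a rough illustration of the two-tier lookup described above, the Python sketch below queries semantic neighbours first and falls back to the base overlay on a miss. The Jaccard overlap of shared documents standing in for "semantic closeness" and all names are assumptions for the example; this is not the paper's implementation.

        # Illustrative sketch (not the paper's implementation): a peer queries its
        # semantic neighbours first and falls back to the base overlay on a miss.

        class Peer:
            def __init__(self, pid, documents):
                self.pid = pid
                self.documents = set(documents)   # documents this peer shares
                self.semantic_neighbours = []     # peers with similar interests
                self.overlay_neighbours = []      # neighbours in the base P2P overlay

            def interest_similarity(self, other):
                # Jaccard overlap of shared documents, one possible "semantically close" measure.
                union = self.documents | other.documents
                return len(self.documents & other.documents) / len(union) if union else 0.0

            def refresh_semantic_overlay(self, candidates, k=3):
                # Keep the k most similar peers as semantic neighbours (implicit discovery).
                ranked = sorted(candidates, key=self.interest_similarity, reverse=True)
                self.semantic_neighbours = ranked[:k]

            def search(self, doc):
                # Primary mechanism: ask semantic neighbours.
                for peer in self.semantic_neighbours:
                    if doc in peer.documents:
                        return peer.pid
                # Fail-over mechanism: fall back to the base overlay (here, one flooding hop).
                for peer in self.overlay_neighbours:
                    if doc in peer.documents:
                        return peer.pid
                return None

        a = Peer("a", ["rock.mp3", "jazz.mp3"])
        b = Peer("b", ["jazz.mp3", "blues.mp3"])
        c = Peer("c", ["report.pdf"])
        a.overlay_neighbours = [b, c]
        a.refresh_semantic_overlay([b, c])
        print(a.search("blues.mp3"))   # "b", found via the semantic overlay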

  • An efficient key-evolving signature scheme based on pairing

    Publication Year: 2004 , Page(s): 68 - 73
    Cited by:  Patents (2)

    This paper proposes an efficient key-evolving paradigm to deal with the key exposure problem of digital signature schemes. In this paradigm the secret key evolves with time, and it is computationally infeasible for an adversary to forge a signature for the periods before the key exposure. The scheme we propose is based on pairing (bilinear maps) and is efficiently constructed. For the first time in a signature scheme, we associate time periods with all nodes of a binary tree rather than with the leaves only. The complexity is logarithmic in the total number of time periods. Compared with previous key-evolving signature schemes, the signing and key-update algorithms are very efficient. Finally, we give a detailed security analysis for the scheme. The security proof is based on the computational Diffie-Hellman assumption.
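
    The logarithmic complexity comes from associating every time period with a node of a binary tree, so the material for period t sits on a root-to-node path of length O(log T). The toy sketch below only illustrates one natural period-to-node mapping (pre-order numbering of a complete binary tree) and omits the pairing-based signing and key-update algorithms entirely; it is not the paper's construction.

        # Toy illustration only: number the nodes of a complete binary tree of depth d
        # in pre-order, so that every node (not just a leaf) represents one time period.
        # The root-to-node path for period t has length at most d, i.e. O(log T), which
        # is what makes tree-based key-evolving schemes logarithmic in T.

        def preorder_path(t, depth):
            """Return the root-to-node path (as 'L'/'R' moves) of period t, 0-indexed."""
            path = []
            remaining = t
            d = depth
            while remaining > 0:
                remaining -= 1                 # step past the current node
                subtree = 2 ** d - 1           # size of each child subtree
                if remaining < subtree:
                    path.append("L")
                else:
                    path.append("R")
                    remaining -= subtree
                d -= 1
            return path

        total_periods = 2 ** 4 - 1             # a depth-3 tree has 15 nodes / periods
        assert all(len(preorder_path(t, 3)) <= 3 for t in range(total_periods))
        print(preorder_path(0, 3))             # [] (the root is period 0)
        print(preorder_path(10, 3))            # ['R', 'L', 'L']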

  • IPSec-based delegation protocol and its application

    Publication Year: 2004 , Page(s): 74 - 79
    Cited by:  Patents (2)

    In this paper, we present a key-management protocol for delegated trust between the user and a set of entities. The protocol is based on the IPSec architecture. We draw a mapping from the SPI in the IPSec architecture to users requesting a service spread across multiple hosts, potentially in different administrative domains. We also outline the application and implementation of the protocol.

  • Towards an integrated architecture for peer-to-peer and ad hoc overlay network applications

    Publication Year: 2004 , Page(s): 312 - 318
    Cited by:  Papers (3)

    Peer-to-peer (P2P) networks and mobile ad hoc networks (MANET) share some key characteristics, self-organization and decentralization, and both need to solve the same fundamental problem: connectivity. We motivate a study of the convergence of the two overlay network technologies and sketch an evolving architecture for integrating them in building overlay network applications.

  • Scalable, structured data placement over P2P storage utilities

    Publication Year: 2004 , Page(s): 244 - 251
    Cited by:  Papers (1)

    P2P overlays offer a convenient way to host an infrastructure that can scale to the size of the Internet and yet remain manageable. Current proposals, however, do not offer support for structuring data beyond a distributed hash table. In reality, both applications and users typically organize data in a structured form. One popular structure is the tree, as employed in file systems and databases. A naive approach such as hashing the pathname not only ignores locality in important operations such as file/directory lookup, but also results in uncontrollable, massive object relocations when a rename on a path component occurs. In this paper, we investigate policies and strategies that place a tree onto the flat storage space of P2P systems. We find that, in general, there exists a tradeoff between lookup performance and balanced storage utilization, and that balancing these two requirements calls for intelligent placement decisions.
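
    The rename problem can be seen in a few lines: the sketch below (a hypothetical 64-node DHT and SHA-1 over the full pathname, both assumptions for illustration) shows that hashing whole pathnames forces essentially every descendant of a renamed directory to relocate, which is the behaviour the paper's placement policies aim to avoid.

        # Illustrative sketch of the naive policy the abstract argues against:
        # placing each file by hashing its full pathname onto a flat DHT key space.

        import hashlib

        NODES = 64  # pretend DHT with 64 storage nodes

        def node_for(path):
            digest = hashlib.sha1(path.encode()).digest()
            return int.from_bytes(digest[:4], "big") % NODES

        tree = ["/home/ann/papers/p2p.pdf",
                "/home/ann/papers/dht.pdf",
                "/home/ann/notes.txt"]

        before = {p: node_for(p) for p in tree}

        # Renaming one path component ("ann" -> "anne") changes every descendant's key,
        # so those objects must be relocated: the "massive object relocation" problem.
        after = {p.replace("/ann/", "/anne/"): node_for(p.replace("/ann/", "/anne/"))
                 for p in tree}

        moved = sum(1 for old, new in zip(before.values(), after.values()) if old != new)
        print(f"{moved} of {len(tree)} objects would move")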

  • A distributed buffer management approach supporting IPv6 mobility

    Publication Year: 2004 , Page(s): 270 - 276

    In wireless local area networks (WLAN), mobility support is essential for providing seamless services. The current Mobile IP standard suffers from several problems, especially in accommodating real-time media applications. Most existing studies focus on improving the performance between the home agent (HA) and the foreign agent (FA). In this work, we propose a buffer management approach to enhance the performance between the mobility agent (MA, i.e., the HA or FA) and the mobile node (MN). The proposed scheme handles packet delivery between the MA and the MN at Layer 2. Through simulation, we show that our scheme improves TCP throughput and reduces the UDP packet loss rate. Moreover, its implementation is feasible in both IPv4 and IPv6, and it is suitable for supporting quality of service (QoS).

  • The vMatrix: server switching

    Publication Year: 2004 , Page(s): 110 - 118
    Cited by:  Papers (2)  |  Patents (1)

    Today most Internet services are pre-assigned to servers statically, which prevents real-time sharing of a pool of servers across a group of services with dynamic load. Fluidly copying services in and out of servers remains a challenge due to the many dependencies that such services have on software, hardware, and, most importantly, people. In this paper we present a novel solution that builds on the classic operating systems concept of a virtual machine monitor (VMM). A VMM allows us to encapsulate the state of a machine in a virtual machine file, which can then be activated on any real machine running the VMM software. This eliminates the software dependency problem by allowing us to move the whole machine around, including the operating system, libraries, and third-party modules that the service depends on. It eliminates the hardware dependency problem by allowing us to mimic the hardware that the service expects regardless of the real hardware of the hosting machine. It also solves the people dependency problem by presenting developers and system administrators with the same isolation model that they are used to with statically allocated servers. We describe our vMatrix framework in detail and address how to load-balance the virtual machine services across the real machines to maximize utilization efficiency (in terms of machine and people costs), such that the total cost of the system is reduced without degrading service performance and without requiring cost-prohibitive code and architectural changes to existing legacy services. Our solution also offers additional side benefits, such as on-demand replication for absorbing flash crowds (in case of a newsworthy event like a major catastrophe) and faster failure recovery times.

  • Random landmarking in mobile, topology-aware peer-to-peer networks

    Publication Year: 2004 , Page(s): 319 - 324
    Cited by:  Papers (15)

    DHTs can locate objects in a peer-to-peer network within an efficient number of overlay hops. Since an overlay hop is likely to consist of multiple physical hops, the ratio between the number of physical hops induced by the overlay routing process and the number of physical hops on a direct physical path is often significantly lopsided. Recently, some approaches have been suggested to optimize that ratio by building topology-aware peer-to-peer overlays. However, none of them were explicitly designed to handle node mobility. We present an approach that optimizes the overlay-versus-direct-physical-path ratio and maintains it even in the presence of node mobility. Thus, it is well suited for highly dynamic networks, such as ad hoc networks.
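
    The ratio in question is often called stretch: the physical hops traversed by the overlay route divided by the physical hops of the direct path. The tiny sketch below computes it for illustrative numbers only.

        # Illustrative computation of overlay "stretch": physical hops along the
        # overlay route divided by physical hops on the direct path.

        def stretch(overlay_route_physical_hops, direct_physical_hops):
            """overlay_route_physical_hops: physical hop count of each overlay hop."""
            return sum(overlay_route_physical_hops) / direct_physical_hops

        # A 3-hop overlay route whose hops cross 4, 6 and 5 physical links,
        # versus a direct physical path of 5 hops, gives a stretch of 3.0.
        print(stretch([4, 6, 5], 5))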

  • A new perspective in defending against DDoS

    Publication Year: 2004 , Page(s): 186 - 190
    Cited by:  Papers (5)

    Distributed denial of service (DDoS) is a major threat to the availability of Internet services. The anonymity allowed by IP networking, together with the distributed, large-scale nature of the Internet, makes DDoS attacks stealthy and difficult to counter. As various attack tools become widely available and require minimal knowledge to operate, automated anti-DDoS systems are increasingly important. This paper studies the problem of providing an anti-DoS service (called AID) for general-purpose TCP-based public servers. We design a random peer-to-peer (RP2P) network that connects the registered client networks with the registered servers. RP2P is easy to manage, and its longest path length is just three hops. The AID service ensures that the registered client networks can always access the registered servers, even when they are under DoS attacks. It creates a financial incentive for commercial companies to provide the service and meets the need of enterprises without in-house expertise to outsource their anti-DoS operations.
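
    The three-hop claim is a bound on path lengths in the RP2P topology. The paper's construction is not reproduced here; the generic sketch below merely shows how such a bound can be checked on any candidate overlay with a breadth-first search over an illustrative adjacency list.

        # Generic check (not the RP2P construction itself): verify that every pair of
        # registered nodes in a candidate overlay is within three hops, via BFS.

        from collections import deque

        def within_hops(adj, limit=3):
            for src in adj:
                dist = {src: 0}
                queue = deque([src])
                while queue:
                    u = queue.popleft()
                    for v in adj[u]:
                        if v not in dist:
                            dist[v] = dist[u] + 1
                            queue.append(v)
                if any(d > limit for d in dist.values()) or len(dist) < len(adj):
                    return False
            return True

        # Toy overlay: client networks c1..c3 and servers s1..s2, illustrative edges only.
        adj = {"c1": ["s1", "c2"], "c2": ["c1", "s2"], "c3": ["s1", "s2"],
               "s1": ["c1", "c3"], "s2": ["c2", "c3"]}
        print(within_hops(adj))   # True if the longest shortest path is <= 3 hops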

  • Enterprise computing in the on demand era

    Publication Year: 2004

    Web-based distributed computing is the vital technology enabler for today's most important e-business opportunities. However, a complex enterprise e-business system poses several technology challenges. Some of the key components that would enable the growth of an e-business are service-oriented architectures, understanding the use of data across businesses, application integration, and reduction of the complexity of management and human intervention. In this position paper we outline some of the major challenges and trends in enterprise systems.

  • Integrating X/Open DTP into Grid services for Grid transaction processing

    Publication Year: 2004 , Page(s): 128 - 134

    This paper proposes a new architecture for Grid transaction processing, called GridTP, based on the OGSA platform and the X/Open DTP model. It is easy to migrate legacy transaction processing systems to Grid services because GridTP has a programming model and interfaces (XA and TX) similar to those of traditional middleware. Moreover, GridTP is independent of the existing transaction protocols in Web services (e.g., BTP and WS-Transaction). As a case study of GridTP, a Grid application called the 3G Portal is presented to demonstrate the use of this architecture. GridTP thus provides a seamless mechanism for embedding the X/Open DTP model in Grid services, offering a promising reference implementation for future Grid transaction processing.

  • Analysis and experimentation of an open distributed platform for synthetic traffic generation

    Publication Year: 2004 , Page(s): 277 - 283
    Cited by:  Papers (4)

    This work presents an open distributed platform for traffic generation, which we call the distributed Internet traffic generator (D-ITG), capable of producing traffic at the network, transport, and application layers and of accurately replicating appropriate stochastic processes for both the IDT (inter-departure time) and PS (packet size) random variables. We implemented two different versions of our distributed generator. In the first, a log server records the information transmitted by senders and receivers, and these communications are based on either TCP or UDP. In the second, senders and receivers make use of the MPI library. A complete performance comparison of the centralized version and the two versions of D-ITG is presented. To our knowledge, no similar works are available.
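
    The core generation idea, drawing inter-departure times and packet sizes from configurable stochastic processes, can be sketched as below; the exponential IDT, Gaussian PS, and all parameter values are illustrative assumptions rather than D-ITG defaults.

        # Illustrative synthetic-flow sketch: inter-departure times (IDT) and packet
        # sizes (PS) are drawn from stochastic processes; the specific distributions
        # and parameters here are arbitrary examples.

        import random

        def generate_flow(n_packets, rate_pps=100.0, mean_size=512, size_stddev=64):
            t = 0.0
            flow = []
            for _ in range(n_packets):
                t += random.expovariate(rate_pps)                        # exponential IDT
                size = max(64, int(random.gauss(mean_size, size_stddev)))  # PS, clamped
                flow.append((t, size))
            return flow

        for timestamp, size in generate_flow(5):
            print(f"t={timestamp:.4f}s  size={size}B")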

  • Estimating device availability in pervasive peer-to-peer environment

    Publication Year: 2004 , Page(s): 254 - 260
    Cited by:  Papers (1)

    In pervasive computing environments, devices often communicate in a peer-to-peer manner, either to pass messages or to collaboratively run applications. For any particular device, information about the availability of other devices can lead to more efficient communication and better execution of distributed tasks. For example, a device can distribute computation tasks to reliable devices and temporarily avoid unreliable ones. Since there is usually no central server to coordinate or monitor communication in such an environment, a key technical challenge is for each individual device to predict the availability of other devices. In this paper, we describe methods for predicting the availability of other devices using historical availability data obtained during routine usage. In this scheme, each device separately maintains data about its past communication with other devices and predicts current and future availability from the statistics of those data. These methods do not require any centralized monitoring or extra probing, and thus have very low computation cost. These characteristics make them suitable for small devices in a peer-to-peer environment.
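
    A hedged sketch of the flavour of such a predictor is given below: each device keeps a local window of past contact outcomes per peer and estimates availability as the recent success fraction. The window size and the simple estimator are assumptions for illustration, not the paper's exact statistics.

        # Illustrative sketch: estimate a peer device's availability from the history
        # of past contact attempts kept locally (no central monitor, no extra probing).
        # The sliding window and the success-fraction estimator are assumptions.

        from collections import defaultdict, deque

        class AvailabilityEstimator:
            def __init__(self, window=50):
                self.history = defaultdict(lambda: deque(maxlen=window))

            def record_contact(self, device_id, succeeded):
                # Called after every routine communication attempt with that device.
                self.history[device_id].append(1 if succeeded else 0)

            def predicted_availability(self, device_id):
                h = self.history[device_id]
                return sum(h) / len(h) if h else 0.5   # no data -> neutral prior

        est = AvailabilityEstimator()
        for ok in [1, 1, 0, 1, 1, 1, 0, 1]:
            est.record_contact("pda-17", bool(ok))
        print(est.predicted_availability("pda-17"))    # 0.75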

  • Two stage optimization of job scheduling and assignment in heterogeneous compute farms

    Publication Year: 2004 , Page(s): 119 - 124
    Cited by:  Patents (1)

    Distributed networked computing in a compute farm environment has attracted great attention in recent years. A specialized management system for a compute farm enables heterogeneous distributed resources to be shared in a seamless way between various competing jobs. A key functionality of such a system is a scheduler that controls the assignment of jobs to resources. This paper outlines a range of scheduling constraints as well as a list of required scheduling features for a state-of-the-art management system in distributed farm computing. It also presents a novel two-stage static-dynamic scheduling algorithm to deal with the scheduling complexity.
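
    A toy sketch of the two-stage idea appears below: a static stage orders the queued jobs (here by priority, then submission time) and a dynamic stage assigns each job to the least-loaded host that can satisfy its resource requirement. The ordering rule, load metric, and data layout are illustrative assumptions, not the algorithm from the paper.

        # Toy two-stage scheduler sketch: stage 1 statically orders the job queue,
        # stage 2 dynamically assigns each job to a feasible, least-loaded host.

        def schedule(jobs, hosts):
            # Stage 1 (static): order by priority, then submission time.
            ordered = sorted(jobs, key=lambda j: (-j["priority"], j["submitted"]))
            assignment = {}
            for job in ordered:
                # Stage 2 (dynamic): pick the least-loaded host with enough free slots.
                feasible = [h for h in hosts if h["free_slots"] >= job["slots"]]
                if not feasible:
                    continue                   # job waits for the next cycle
                host = min(feasible, key=lambda h: h["load"])
                host["free_slots"] -= job["slots"]
                host["load"] += job["slots"]
                assignment[job["name"]] = host["name"]
            return assignment

        jobs = [{"name": "render", "priority": 2, "submitted": 1, "slots": 4},
                {"name": "lint",   "priority": 1, "submitted": 0, "slots": 1}]
        hosts = [{"name": "farm-a", "free_slots": 8, "load": 0},
                 {"name": "farm-b", "free_slots": 2, "load": 1}]
        print(schedule(jobs, hosts))   # {'render': 'farm-a', 'lint': 'farm-b'}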

  • Container based framework for self-healing software system

    Publication Year: 2004 , Page(s): 306 - 310

    Software "self-healing" is an approach to detect improper operations of software applications, transactions and business processes, and then to initiate corrective action without disrupting users. The software engineering literature contains many studies on software error detection and error correction. In this paper, we introduce a "container based self-healing" framework and provide an outline on how the framework can help in evolving a self-healing system for a complex distributed system. View full abstract»

  • Grid computing in Taiwan

    Publication Year: 2004 , Page(s): 201 - 204

    Grid computing focuses on aggregating resources (e.g., processor cycles, disk storage, and content) from a large-scale computing environment. It intends to deliver high-performance distributed platforms for computation- and/or data-intensive applications. In this paper, we study the enabling techniques of Grid computing for high-performance computing. Our goals are to: (1) understand the design of the key components; (2) set up a Grid computing platform; (3) learn how to create Grid-enabled high-performance applications; and (4) share experiences on constructing such platforms and applications from the perspective of Taiwan.

  • PKUAS: an architecture-based reflective component operating platform

    Publication Year: 2004 , Page(s): 163 - 169
    Cited by:  Papers (9)

    Reflective middleware is the major approach to improving the adaptability of middleware and its applications. Current research and practice pay little attention to the usability of reflective middleware, and there is no systematic way to adapt a runtime system via reflective middleware. This paper presents the design and implementation of PKUAS (Peking University Application Server), an architecture-based reflective component operating platform compliant with the Java 2 Platform, Enterprise Edition. PKUAS constructs and represents its platform and applications from the perspective of software architecture so as to provide an understandable, user-friendly, and systematic way to use reflective middleware.

  • Distributed systems design using function-class decomposition with aspects

    Publication Year: 2004 , Page(s): 148 - 153
    Cited by:  Papers (1)

    Object-oriented methods are known for their ability to encapsulate and manage the core concerns of complex software systems. However, they are inadequate for discerning and separating a variety of other cross-cutting concerns. In particular, for distributed systems, a number of important concerns such as synchronization, logging, and security should be sufficiently treated in the design phase of the software lifecycle in order to ensure high system quality. Often these concerns tend to be overlooked at the design level and are consequently scattered across multiple system modules during implementation. It then becomes difficult to connect the set of requirements with the system structure, reducing system traceability. This paper proposes an extension to the function-class decomposition (FCD) method, a hybrid of structured analysis and the object-oriented approach, by integrating the concept of an "aspect", an abstraction mechanism that emerged in recent years from the aspect-oriented programming (AOP) community. The extended method supports separation of functional and non-functional concerns by maintaining two primary views (a function-class view and an aspect view) at the design stage, and demonstrates the iterative process by applying it to the development of an example system called M-Net, an Internet-based real-time distributed conferencing system.

  • Towards a fully distributed P2P Web search engine

    Publication Year: 2004 , Page(s): 332 - 338
    Cited by:  Papers (2)

    Most centralized Web search engines find it increasingly hard to keep up with the growth in information needs. Here, we present a fully distributed, collaborative peer-to-peer Web search engine named Coopeer. The goal of this work is to complement centralized search engines by providing more humanized and personalized results through user collaboration. Towards this goal, three main ideas are introduced: (a) PeerRank, which uses cooperation among users for evaluation; (b) a query-based representation to obtain a more humanized description of documents; and (c) a semantic routing algorithm to obtain user-customized results.

  • Towards supporting fine-grained access control for Grid resources

    Publication Year: 2004 , Page(s): 59 - 65
    Cited by:  Papers (6)

    The heterogeneous nature and independent administration of geographically dispersed resources in a Grid demand access control based on fine-grained policies. In this paper, we investigate the problem of fine-grained access control in the context of resource allocation in the Grid, as we believe it is the first and key step in developing access control methods specifically tailored to Grid systems. To perform this access control, we design a security component (to be part of a meta-scheduler service) that finds the list of nodes on which a user is authorized to run his or her jobs. The security component is designed to reduce the number of rules that must be evaluated for each user request. We believe such fine-grained policy-based access control would help extend adoption of the Grid into new avenues such as desktop Grids, as resource owners are given greater flexibility in controlling access to their resources. Similarly, Grid users get greater flexibility in choosing the resources on which their jobs must execute.
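
    The component's role can be pictured with the hedged sketch below: given a user's attributes, return the subset of nodes whose allow rules the user satisfies, evaluating only the rules attached to each node. The attribute-based rule format and all names are invented for illustration, not the paper's policy language.

        # Illustrative sketch of a meta-scheduler security component: given a user's
        # attributes, return the nodes on which the user may run jobs.

        def authorized_nodes(user, policies):
            allowed = []
            for node, rules in policies.items():
                # Only the rules attached to this node are evaluated for the request.
                if any(all(user.get(k) == v for k, v in rule.items()) for rule in rules):
                    allowed.append(node)
            return allowed

        policies = {
            "cluster1.node07": [{"vo": "bio", "role": "researcher"}],
            "desktop.lab42":   [{"vo": "bio"}, {"vo": "physics", "role": "admin"}],
            "cluster2.node01": [{"vo": "physics"}],
        }
        user = {"vo": "bio", "role": "researcher"}
        print(authorized_nodes(user, policies))   # ['cluster1.node07', 'desktop.lab42']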

  • Platform-independent dynamic reconfiguration of distributed applications

    Publication Year: 2004 , Page(s): 286 - 291
    Cited by:  Papers (3)  |  Patents (1)

    The aim of dynamic reconfiguration is to allow a system to evolve incrementally from one configuration to another at run-time, without restarting it or taking it offline. In recent years, support for transparent dynamic reconfiguration has been added to middleware platforms, shifting the complexity required to enable dynamic reconfiguration to the supporting infrastructure. These approaches to dynamic reconfiguration are mostly platform-specific and depend on implementation approaches suitable for particular platforms. In this paper, we propose an approach to dynamic reconfiguration of distributed applications that is suitable for applications implemented on top of different platforms. This approach supports a platform-independent view of an application that profits from reconfiguration transparency. In this view, requirements on the ability to reconfigure components are expressed in an abstract manner. These requirements are then satisfied by platform-specific realizations.

  • A complexity measure for ontology based on UML

    Publication Year: 2004 , Page(s): 222 - 228
    Cited by:  Papers (3)

    UML is a good tool for representing ontologies. When using UML for ontology development, one of the principal goals is to assure the quality of the ontologies. UML class diagrams provide a static modeling capability that is well suited to representing ontologies, so the structural complexity of a UML class diagram is one of the most important measures for evaluating the quality of an ontology. This paper uses weighted class dependence graphs to represent given class diagrams and then presents a structural complexity measure for UML class diagrams based on entropy distance. It considers the complexity of both the classes and the relationships between them, and presents rules for transforming the complexity values of classes and of different kinds of relations into weighted class dependence graphs. This method can measure the structural complexity of class diagrams objectively.
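
    For intuition only, the sketch below computes a generic Shannon entropy over the normalized edge weights of a small weighted class dependence graph; it conveys the style of an entropy-based structural measure but is not the paper's entropy-distance metric, and the class names and weights are invented.

        # Hedged illustration only: a generic entropy over the normalized edge weights
        # of a weighted class dependence graph (edge weights = relation complexities).

        import math

        def weight_entropy(edges):
            """edges: {(from_class, to_class): weight}"""
            total = sum(edges.values())
            probs = [w / total for w in edges.values()]
            return -sum(p * math.log2(p) for p in probs if p > 0)

        # Toy class diagram: weights reflect relation kinds (e.g. association < inheritance).
        edges = {("Order", "Customer"): 1.0,
                 ("Order", "LineItem"): 2.0,
                 ("LineItem", "Product"): 1.0}
        print(round(weight_entropy(edges), 3))   # 1.5 for this toy graph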

  • The power of DHT as a logical space

    Publication Year: 2004 , Page(s): 325 - 331
    Cited by:  Papers (1)

    P2P DHT has quickly become a rather "classical" research field. The currently popular mindsets are mostly routing-centric or storage-centric. In this paper, we argue that it may be more interesting to simply view the DHT as a logical space that can dynamically size itself with a potentially unlimited amount of resources. This is in some way analogous to the virtual memory in a contemporary operating system. We believe that exploring such a perspective will bring about new insights as well as applications. We illustrate the power of this abstraction with a few examples, including a self-scaling, self-healing fat tree, a self-tuning storage system, and a lightweight quorum-based distributed lock protocol that does not assume a constant number of members to start with. While their individual uses differ radically, all of them are united under this viewpoint.
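
    The "logical space" viewpoint can be sketched as follows: the application reads and writes flat logical addresses, and a hash maps each address to whichever node currently owns it, much as virtual memory hides physical frames. Consistent hashing is used here only as one concrete way to realize the mapping; the node names and lock-like key are illustrative.

        # Illustrative sketch of "DHT as a logical space": the application addresses a
        # flat key space; a hash maps each address to the node currently responsible.

        import bisect, hashlib

        def h(key):
            return int.from_bytes(hashlib.sha1(key.encode()).digest()[:8], "big")

        class LogicalSpace:
            def __init__(self, nodes):
                self.ring = sorted((h(n), n) for n in nodes)   # nodes placed on a ring
                self.store = {n: {} for n in nodes}

            def owner(self, address):
                i = bisect.bisect_left(self.ring, (h(address), "")) % len(self.ring)
                return self.ring[i][1]

            def write(self, address, value):
                self.store[self.owner(address)][address] = value

            def read(self, address):
                return self.store[self.owner(address)].get(address)

        space = LogicalSpace(["node-a", "node-b", "node-c"])
        space.write("/locks/build-42", "held-by:node-a")
        print(space.owner("/locks/build-42"), space.read("/locks/build-42"))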

  • An architectural view of the entities required for execution of task in pervasive space

    Publication Year: 2004 , Page(s): 37 - 43

    Aiming to provide computation ubiquitously, pervasive computing is perceived as a means of providing a user with the transparency of anywhere, any place, anytime computing. Pervasive computing is characterized by the execution of tasks in heterogeneous environments that use invisible and ubiquitously distributed computational devices. It relies on service composition, which creates customized services from existing services through a process of dynamic discovery, integration, and execution of those services. In such an environment, seamlessly providing resources for the execution of tasks with limited networked capabilities is further complicated by continuously changing context due to the mobility of the user. To the best of our knowledge, no prior work providing such a pervasive space has been reported in the literature. In this paper we propose an architectural perspective for pervasive computing by defining the entities required for the execution of tasks in a pervasive space. In particular, we address the following issues: the entities required for execution of a task, an architecture for providing seamless access to resources in the face of changing context in wireless and wireline infrastructure, and dynamic aggregation of resources in a heterogeneous environment. We also evaluate the architectural requirements of a pervasive space through a case study.

  • An architecture for EventWeb

    Publication Year: 2004 , Page(s): 95 - 101

    While the volume and diversity of multimedia permeating the world around us increases, our chances of making sense of the available information do the opposite. This environment poses a number of challenges, which include achieving scalability while accessing all the available media, live and archived, inferring its context, and delivering the media to all interested parties with its context attached. We envision a solution to this set of challenges in a novel system architecture. As a starting point, we select a previously described framework, EventWeb, suitable for annotating raw multimedia data with context meaningful to end users. We then map it onto a distributed architecture capable of correlating, analyzing, and transporting the volumes of data characteristic of the problem space. This paper first presents the requirements for our architecture, then discusses the architecture in detail, and outlines our current implementation efforts.
