
PARBASE-90: International Conference on Databases, Parallel Architectures and Their Applications

Date: 7-9 March 1990


Displaying Results 1 - 25 of 122
  • PARBASE-90 International Conference on Databases, Parallel Architectures and Their Applications (Cat. No.90CH2728-4)

    Publication Year: 1990
    PDF (24 KB)
    Freely Available from IEEE
  • Inference techniques for fault tolerant distributed database systems

    Publication Year: 1990 , Page(s): 233 - 234
    Cited by:  Papers (1)  |  Patents (1)
    PDF (104 KB)

    A data inference approach to increase data availability in distributed database systems is proposed. When the requested data are not accessible owing to network and/or site failures, the database system can infer or approximate them from other accessible database fragments. Two different levels of correlated knowledge are used for inference. In the schema level, correlated knowledge between objects is represented as inference paths. Further, in the instance level, correlated rules are used to represent their detail correlations. In general, inference paths suggest proper objects and directions for data inference. By the selection of proper inference paths, correlated rules can be used to derive the inaccessible information. It is noted that a data inference system can be implemented as a front-end system to an existing distributed database system. It consists of a database fragment availability table which provides the data accessibility information for each site, the inference engine that selects inference paths and rules for inferring unavailable data, and the query modification system which transforms the given query to an alternate one such that all the required database fragments are accessible.
  • A comparison of scanning algorithms

    Publication Year: 1990
    Cited by:  Papers (1)
    PDF (84 KB)

    Three implementation techniques for main memory database systems are described and compared, namely, the compilation, vectorization, and adaptive methods. The adaptive method can be combined with the standard, compilation, and vectorization methods. Experiments show that, with standard optimization techniques like vectorization and compilation, a performance increase of a factor of five can be obtained. The adaptive method turned out to be favorable only in combination with the vectorization and compilation approach. Experience with applying these techniques indicates that the performance increase is well worth the added implementation effort for both the compilation and vectorization approaches.
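The contrast the abstract draws between tuple-at-a-time and vectorized scanning can be sketched structurally (a minimal Python illustration of the loop shapes only; the paper's methods, including compilation and the adaptive method, run inside a main-memory database engine, and the column data here is an illustrative assumption):

```python
def scan_tuple_at_a_time(column, threshold):
    """Classic interpreted scan: one predicate evaluation per loop trip."""
    out = []
    for v in column:
        if v > threshold:
            out.append(v)
    return out

def scan_vectorized(column, threshold, vec=4):
    """Vectorized scan: the predicate is applied to a whole slice of
    `vec` values per iteration, amortizing the per-trip loop overhead."""
    out = []
    for i in range(0, len(column), vec):
        chunk = column[i:i + vec]
        out.extend(v for v in chunk if v > threshold)
    return out
```

Both return the same result; on vector hardware the second form is the shape that enables the kind of speedups the abstract reports.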
  • Parallel function invocation in a dynamic argument-fetching dataflow architecture

    Publication Year: 1990 , Page(s): 112 - 116
    Cited by:  Papers (1)  |  Patents (1)
    PDF (428 KB)

    The basic structure of a dynamic data-flow architecture based on the argument-fetching data-flow principle is outlined. In particular, the authors present a scheme to exploit fine-grain parallelism in function invocation based on the argument-fetching principle. They extend the static architecture by associating a frame of consecutive memory space for each parallel function invocation, called a function overlay, and identify each invocation instance with the base address of its overlay. The scheme gains efficiency by making effective use of the power provided by the argument-fetching data-flow principle: the separation of the instruction scheduling mechanism and the instruction execution. To handle function applications and memory management, the proposed architecture will have a memory overlay manager that is separate from the pipelined execution unit. To verify the design, a set of standard benchmark programs was mapped onto the new architecture and executed on an experimental general-purpose data-flow architecture simulation testbed.
  • Exploiting coarse grained parallelism in database applications

    Publication Year: 1990 , Page(s): 510 - 512
    Cited by:  Papers (1)
    PDF (260 KB)

    An approach to exploiting coarse-grained parallelism in database applications is presented. This approach combines the database facilities of ADAMS with the dependency detection and parallel execution facilities of Mentat. The approach to providing mode-two parallelism is to make changes behind the ADAMS language interface, thus insulating users from the changes in operating environment. In addition, by merging with a mature parallel/distributed computing technology, it has been possible to capitalize on existing software to provide these facilities and avoid building them from scratch.
  • On the design, implementation, and evaluation of a portable parallel database system

    Publication Year: 1990 , Page(s): 516 - 518
    PDF (284 KB)

    The authors describe a complete implementation and evaluation of a parallel relational database system. The described implementation exploits parallel algorithms initially proposed for a hypercube to achieve its speedup. Specifically, attention is given to a portable parallel database system that exploits both parallel algorithms and data parallelism to expedite database processing. Two join algorithms are evaluated. It is shown that, for joins with a comparable number of tuples in each of the two joining relations, a bucket-based approach is preferable. However, if the two relations greatly differ in size, a broadcast-based approach is preferred.
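The bucket-based join the abstract favors for similarly sized relations can be sketched as a hash-partitioned equi-join (Python, run sequentially here; relation layout and key positions are illustrative assumptions, and on the parallel machine each bucket pair would go to a different node):

```python
def bucket_join(r, s, key_r=0, key_s=0, n_buckets=8):
    """Partition both relations into hash buckets on the join key,
    then join bucket by bucket: only tuples hashed to the same
    bucket can possibly match."""
    buckets_r = [[] for _ in range(n_buckets)]
    buckets_s = [[] for _ in range(n_buckets)]
    for t in r:
        buckets_r[hash(t[key_r]) % n_buckets].append(t)
    for t in s:
        buckets_s[hash(t[key_s]) % n_buckets].append(t)
    out = []
    for br, bs in zip(buckets_r, buckets_s):
        # Per-bucket hash join: build an index on the r side, probe with s.
        index = {}
        for t in br:
            index.setdefault(t[key_r], []).append(t)
        for t in bs:
            for m in index.get(t[key_s], []):
                out.append(m + t)
    return out
```

The broadcast-based alternative the abstract mentions would instead replicate the smaller relation to every node, which pays off only when the two relations differ greatly in size.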
  • Optimistic multi-level concurrency control for nested typed objects

    Publication Year: 1990
    PDF (84 KB)

    The authors propose an optimistic multilevel method that also exploits commutativity of typed operations in order to enhance concurrency. One of the main advantages of this method is that it makes it possible to take commutativity depending upon return values into account. A bank-transaction example illustrating the proposed approach is presented.
  • Will 2D meshes replace hypercubes?

    Publication Year: 1990 , Page(s): 117 - 119
    PDF (196 KB)

    The authors analytically investigate and derive the speedup provided by 2-D meshes over hypercubes for several common communication patterns. They take into consideration the difference in channel widths, the communication rates, and the effect of worm-hole routing. The 2-D meshes are shown to provide significant speedups if effects of contentions can be ignored. For large networks having hundreds of nodes, contentions become significant, reducing the speedups. Appropriate mapping and routing may be used to reduce the contentions while retaining the speedups.
  • Performance evaluation of a new optimistic concurrency control algorithm

    Publication Year: 1990 , Page(s): 522 - 525
    PDF (216 KB)

    A modification of the classic Kung-Robinson timestamp-based concurrency control algorithm is described. The algorithm is based on two innovative techniques: query killing notes and weak serializability of transactions. In particular, it prefers long transactions over short queries and thus reduces considerably the number of transaction rollbacks required. In order to test the validity and evaluate the performance of the proposed algorithm, a simulation program was written and run using a realistic set of transactions. The simulation was performed using Flat Concurrent Prolog (FCP). The advantages of FCP for specifying and implementing parallel algorithms include its refined granularity of parallelism, its declarativeness and conciseness, and its powerful communication and synchronization primitives. Results of algorithm performance are presented.
  • Graph modeling and analysis of linear recursive queries

    Publication Year: 1990 , Page(s): 44 - 53
    Cited by:  Papers (2)
    PDF (724 KB)

    The authors study a class of complex linear recursive rules, or blocks, via the V-graph model. A block is a linear recursive rule represented by a nontrivial 2-connected component (i.e., one containing at least a cycle). It is first shown that a simple form of block, namely, the cycle, has a simple, periodic variable connection in the expansions. Then it is shown that blocks also have a periodic variable connection in the expansions, and the period and connection can be algorithmically determined. This allows a query to be evaluated efficiently with an iterative algorithm. The effects of the static variable bindings in blocks on query evaluation are then discussed.
  • Semantic addressing

    Publication Year: 1990
    PDF (92 KB)

    The author explores the question of whether computer networks can be made to determine who should receive the messages they carry more efficiently than their human users. Some elements of a possible answer to this question are provided. A protocol for semantic addressing based on partitioning methods borrowed from the field of information retrieval is developed. Use of the protocol has been simulated, but complete analysis of the results has yet to be carried out.
  • A comparison of transaction restart techniques in a distributed database environment

    Publication Year: 1990 , Page(s): 513 - 515
    PDF (156 KB)

    The authors present the results of a simulation study carried out to compare the effects of different restart techniques on the overall throughput, number of restarts, average response time, and average communication delay in representative distributed database environments. The performances of the following transaction restart methods are compared: (1) restart with random increase of timestamp, (2) restart with random delay, (3) data-marking method, (4) data marking with random delay, and (5) restart with a substitute transaction. The substitute transaction method is shown to perform well under all loads, except for the case in which all transactions are update-only. In this case, restart with a random delay performs better. The data-marking method introduces a very high communication overhead owing to the fact that the transactions keep sending messages requesting operations when an item is not available. This results in high response times and in an erratic behavior of the system under higher loads.
  • An architecture for parallel search of large, full-text databases

    Publication Year: 1990 , Page(s): 342 - 349
    Cited by:  Patents (1)
    PDF (488 KB)

    A novel signature approach is introduced for retrieval of documents from large text databases. The structure of the signature files under this approach lends itself to a highly parallel hardware implementation. The signature files are generated by software and loaded into RAM. Upon a user query, the signature files are searched in parallel by a special-purpose search module for identification of relevant documents. The search module is implemented in hardware with a parallel architecture. Experimental results and the design of the search module are presented. Experiments have shown that this technique is quite efficient in terms of storage overhead and the number of false drops. Perhaps the most attractive feature of this system is the hardware cost. The signature files are designed so that they can be saved, accessed, and compared using inexpensive memory chips. Another feature of this system is the capability for simple manipulation of `don't care' characters, which is not provided by other signature methods.
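The signature-file idea behind this architecture can be sketched in software (a minimal superimposed-coding illustration; the signature width, bits per word, and hash function are illustrative assumptions, and the paper's search module performs the matching in parallel hardware):

```python
SIG_BITS = 64       # width of each signature (assumed)
BITS_PER_WORD = 3   # bits set per word (assumed)

def word_signature(word):
    """Superimposed coding: each word sets a few pseudo-random bits."""
    sig = 0
    for i in range(BITS_PER_WORD):
        sig |= 1 << (hash((word, i)) % SIG_BITS)
    return sig

def doc_signature(words):
    """A document's signature is the OR of its word signatures."""
    sig = 0
    for w in words:
        sig |= word_signature(w)
    return sig

def may_contain(doc_sig, query_word):
    """Candidate test: all of the query word's bits must be set."""
    q = word_signature(query_word)
    return doc_sig & q == q
```

A match is only a candidate: distinct words can set overlapping bits, producing the "false drops" the abstract measures, which a final check of the candidate documents then removes.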
  • Load balancing and multiprogramming in the Flagship Parallel Reduction Machine

    Publication Year: 1990
    PDF (96 KB)

    Investigations into load balancing and multiprogramming for a multiprocessor supporting declarative programming are reported. The Flagship Parallel Reduction Machine uses a packet-based graph reduction model of computation to exploit the parallelism in functional languages. The abstract architecture comprises a set of closely coupled processor-store pairs connected by a multistage delta communication network. In such a system, where program parallelism is not easily predictable at compile time, dynamic scheduling of work is necessary; the load-balancing scheme must therefore provide a dynamic mapping of program parallelism over the processor configuration. Investigations to enhance the load-balancing scheme and to determine multiprogramming efficiency in the Flagship Machine are described.
  • Reverse data engineering of E-R-designed relational schemas

    Publication Year: 1990 , Page(s): 438 - 440
    Cited by:  Papers (4)
    PDF (216 KB)

    A novel solution is presented for the data engineer's inverse mapping problem: to construct from a relational database schema (RDBS) a corresponding entity-relationship diagram (ERD). The inverse mapping is difficult because many ERDs may correspond to one RDBS (or none, if the schema is not well formed), so it is not clear how to define a mapping in this direction. Nonetheless, it would be desirable to choose the most representative ERD whenever possible, so that the benefits of E-R visualization and analysis can be applied to RDBSs, even after their relation schemes have been changed. The authors present a first approach to an experimental solution for this inversion problem, for the case of an RDBS that was originally designed by an ERD-based algorithm, which means that it was once a canonical relational schema (CRS) and subsequently altered. The demonstration system tracks each `atomic change' in the CRS and determines, by means of its PROLOG-implemented E-R knowledge base, the corresponding changes in the given ERD. The reasonable restrictions that the present system puts on the legal set of possible changes, for example, on deletions, allow it to trace their effects correctly, so that well-formedness of the ERD is preserved.
  • An optimal fault-tolerant broadcasting algorithm for a cube-connected cycles multiprocessor

    Publication Year: 1990 , Page(s): 206 - 215
    PDF (724 KB)

    The author develops a novel broadcasting algorithm for cube-connected-cycles (CCC) multiprocessors using a binomial tree. The initiating processor takes [(h-1)/2] + s(1 + [(h-1)/2]) steps to broadcast the message to all other processors. The proposed broadcasting algorithm is a procedure by which a processor can pass a message to all other processors in the network nonredundantly; this is extremely important for diagnosis of the network, distributed agreement, and clock synchronization. The author also describes an optimal fault-tolerant broadcasting algorithm in the CCC which tolerates s-1 processor failures or s-1 ring failures. The fault-tolerant algorithm takes 1 + 2[(h-1)/2] + s(1 + [(h-1)/2]) steps to broadcast the message to all other processors.
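The binomial-tree structure underlying such a broadcast can be sketched on a plain d-dimensional hypercube (Python; the CCC-specific ring steps and the fault-tolerant variant are omitted, so the step counts here are not those of the paper):

```python
def binomial_broadcast(d, source=0):
    """Return, per step, the (sender, receiver) pairs of a binomial-tree
    broadcast on a d-dimensional hypercube.

    In step i every node that already holds the message forwards it
    across dimension i, so every node receives it exactly once
    (nonredundant delivery) in d steps.
    """
    have = {source}
    steps = []
    for i in range(d):
        sends = [(u, u ^ (1 << i)) for u in sorted(have)]
        steps.append(sends)
        have |= {v for _, v in sends}
    return steps
```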
  • Modeling interconnection networks using a hardware description language

    Publication Year: 1990
    PDF (76 KB)

    Using a hardware description language, it is possible to develop simulation models for processor interconnection networks which take hardware considerations into account during simulation. Models were produced for nine interconnection networks, specialized first for a simple algorithm, then adapted for a more complex task. Simulations were conducted to ensure the correctness of the models, and the complexities involved in expanding the simple example into the more complex one were considered. In particular, simulation results obtained with the DABL (Daisy Behavioral Language) model are presented. The DABL models were capable of providing more insight into hardware-related issues than simulations conducted in conventional programming languages.
  • Structured data in structured logic programming environment

    Publication Year: 1990
    PDF (100 KB)

    EPSILON, a prototype developed in the context of the European ESPRIT project, is discussed. It is built on top of a commercial Prolog and DBMS (database management system) running in a standard UNIX environment. The EPSILON logic programming environment allows the structuring of large knowledge bases expressed in logic languages using so-called theories. The integration of different object-oriented concepts into the current EPSILON prototype provides an environment offering the user object-oriented concepts at different levels. First, the theory concept itself is an object-oriented mechanism to structure logic programs. Second, implementing logic languages, including object-oriented concepts (type hierarchies, nested data structures, etc.), by metaprogramming provided by the theory concept allows the user to employ different augmented logic languages within the programming environment.
  • Logic of knowledge and belief in the design of a distributed integrity kernel

    Publication Year: 1990 , Page(s): 418 - 420
    Cited by:  Papers (3)
    PDF (236 KB)

    Work on a language called ISL (Integrity Specification Language), which is intended to be used for the specification of integrity kernels in distributed databases, is reported. ISL is based on a form of interval temporal logic and provides a framework for a logic of knowledge and belief about data integrity. ISL is a design tool for integrity kernels in a distributed environment where the dynamic evaluation of data integrity based upon partial knowledge and informed judgment is required. The Clark and Wilson integrity model (1987), designed to prevent fraudulent and erroneous data modification, is subsumed. An integrity system which includes extensions to the concept and functionality of a transaction manager as defined in the SDD-1 is given. A partial syntax for the ISL language is given. A temporal interval interpretation in the semantics of the `unless' operator is introduced. This operator provides a basis for the logic of knowledge and belief applied to data integrity.
  • Extending object-oriented databases with rules

    Publication Year: 1990 , Page(s): 556 - 557
    Cited by:  Patents (1)
    PDF (244 KB)

    Rules have been adopted as a means of expressing knowledge which is uniform throughout the type lattice but needs to be customized for each object type with integrity constraints. More specifically, such knowledge involves associations between instances, between objects, and between objects in different databases; grouping of instances; creation and deletion of object instances; and any other activity that requires a uniform approach in order to ensure consistency and transparency. A logical query language based on the representation of associations between objects, which can be made by rules, has been used. The goal was not a full-fledged object-oriented programming language, but a logical query language with base predicate associations between objects. A logical query language has the obvious advantage of expressiveness and simplicity but poses certain difficulties in evaluation; evaluation is an issue in the case of recursive queries due to time-consuming join operations. This problem was overcome by transforming recursion into iterative navigation through associations between object instances.
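The recursion-to-iteration idea the abstract closes with can be sketched as a fixpoint traversal over object associations (Python; the association map and the "reachable" query are illustrative assumptions standing in for a recursive rule evaluated by repeated joins):

```python
def reachable(assoc, start):
    """Evaluate a recursive reachability query iteratively: instead of
    repeated join operations, navigate the association graph one frontier
    at a time until no new object instances appear."""
    seen = {start}
    frontier = {start}
    while frontier:
        nxt = set()
        for obj in frontier:
            nxt |= set(assoc.get(obj, ()))
        frontier = nxt - seen   # only newly reached instances continue
        seen |= frontier
    return seen - {start}
```

Because each object is expanded at most once, the iteration terminates even on cyclic association graphs.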
  • Design considerations of a fault tolerance distributed database system by inference technique

    Publication Year: 1990
    Cited by:  Patents (1)
    PDF (76 KB)

    A fault-tolerant distributed database system with inference capability, consisting of a query parser and analyzer, an information module, and an inference system, is described. The information module provides the allocation and availability information of all the attributes in the system. The inference system consists of a knowledge base and an inference engine. The correlated knowledge among the attributes is represented in rules and stored in the knowledge base. During normal operations, all the database fragments are accessible; thus the processor based on the query process plan accesses the required attributes and processes the query. When network partition occurs, if the required attribute is inaccessible, the information module and the inference system will be invoked. On the basis of the required query operations provided by the parser and the correlated knowledge among the attributes, the inference engine modifies the original query to a new one so that all the required data for the query are accessible from the requested site. Depending on the type of query, the physical allocation of the database fragments, and the database domain semantics, the modified query may provide the exact, approximate, or summarized information of the original query.
  • Use of mesh connected processors for realizing fault tolerant relational database operations

    Publication Year: 1990 , Page(s): 568 - 570
    Cited by:  Papers (1)
    PDF (300 KB)

    The authors present algorithms for relational database operations using mesh connected processors. An important feature of these algorithms is that they are fault tolerant; that is, errors in computation that may arise owing to either permanent or transient failure in a single processor are detected and corrected. These algorithms are immediately useful for main memory databases. Data elements for mesh connected processors can be fed from main memory.
  • A predicate-calculus based language for semantic databases

    Publication Year: 1990 , Page(s): 424 - 429
    PDF (332 KB)

    The author proposes a nonprocedural language for semantic databases in general and for the semantic binary model in particular. The foundation of the language is a database interpretation of a first-order predicate calculus. The calculus is enriched with second-order constructs for aggregation (statistical functions), specification of transactions, parameterized query forms, and other uses. The language is called SD-Calculus (Semantic Database Calculus). Of special interest is the use of this language for specification of bulk transactions, including generation of sets of new abstract objects. Implementation of the language is discussed.
  • A vectorization technique of hashing and its application to several sorting algorithms

    Publication Year: 1990 , Page(s): 147 - 151
    Cited by:  Papers (2)  |  Patents (2)
    PDF (456 KB)

    A vectorized algorithm for entering data into a hash table is presented. A program that enters multiple data could not be executed on vector processors by conventional vectorization techniques because of data dependences. The proposed method enables execution of multiple data entry by conventional vector processors and improves the performance by a factor of 12.7, compared with the normal sequential method, when 4099 pieces of data are entered on the Hitachi S-810. This method is applied to address calculation sorting and the distribution counting sort, whose main part was unvectorizable by previous techniques. It improves performance by a factor of 12.8 when n = 2^14 on the S-810.
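The data dependence the abstract refers to arises when two keys in the same batch map to the same slot. A minimal sketch of batched insertion with intra-batch conflict detection (Python, with scalar loops standing in for vector instructions; the table size, linear probing, and retry policy are illustrative assumptions, not the paper's method, and the table is assumed to have spare capacity):

```python
def batch_insert(table, keys):
    """Insert a batch of keys into an open-addressing table.

    Each round: hash the whole pending batch at once, let conflict-free
    keys claim slots, and defer keys whose slot was claimed by another
    key in the same batch to the next round.
    """
    n = len(table)
    pending = list(keys)
    while pending:
        slots = [hash(k) % n for k in pending]   # "vector" hash step
        claimed = {}
        deferred = []
        for k, s in zip(pending, slots):
            while table[s] is not None:          # probe past earlier rounds
                s = (s + 1) % n
            if s in claimed:                     # intra-batch conflict
                deferred.append(k)
            else:
                claimed[s] = k
        for s, k in claimed.items():
            table[s] = k                         # "vector" store step
        pending = deferred
    return table
```

Only the deferred keys loop again, so a mostly conflict-free batch completes in one pass, which is where the vector speedup comes from.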
  • Massively parallel implementation of two operations: unification and inheritance

    Publication Year: 1990 , Page(s): 505 - 509
    PDF (404 KB)

    The author develops two algorithms for a massively parallel system, an SIMD (single instruction, multiple data) computer with a general and fast communication network. Each of the two operations (unification and inheritance) is basic to one knowledge representation scheme. Both take data represented by directed graphs. For ease of integration in real systems and naturalness of specification, the operations are implemented incrementally, in the spirit of M.R. Quillian's `spreading activation', and not as atomic operations. The running time of both algorithms is almost linear in the number of vertices on the longest path in the graph representation. The association of the two operations is not accidental; the author intends to integrate them in a hybrid reasoning system.