
Ninth Annual International Phoenix Conference on Computers and Communications, 1990: Conference Proceedings

Date: 21-23 March 1990


Displaying results 1-25 of 155
  • Ninth Annual International Phoenix Conference on Computers and Communications (Cat. No.90CH2799-5)

    Publication Year: 1990
  • A file service mechanism for distributed systems

    Publication Year: 1990, Page(s): 486-492
    Cited by: Papers (1)

    Experimental file service mechanisms are introduced for distributed systems, and their implementation as a prototype in a computer network is presented. In this prototype there are multiple server processes, client processes, and one server manager process. According to the user request, a file server process is assigned to a client process and activated by the server manager process. A file server has various operational modes; the mode is determined by the server manager to meet various application requirements. This process can also perform some functions for security enhancement. The file service mechanism helps network users to select any file server in its most efficient mode, and it ensures data integrity and security. The mechanism is combined with a high-level programming language provided by a client process and therefore offers high serviceability.

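    As a rough illustration of the assignment step described above, the sketch below shows a manager process binding an idle file-server process, in a mode chosen per application requirement, to a client. The server names, mode table, and requirement keywords are all hypothetical; the paper's actual mechanism is not reproduced.

        from dataclasses import dataclass, field

        @dataclass
        class ServerManager:
            # Hypothetical pool of idle file-server processes.
            idle_servers: list = field(default_factory=lambda: ["fs1", "fs2", "fs3"])
            assignments: dict = field(default_factory=dict)

            def assign(self, client: str, requirement: str) -> tuple:
                # Pick an operational mode to match the application requirement
                # (mode names and the mapping are invented for illustration).
                mode = {"throughput": "buffered",
                        "integrity": "synchronous"}.get(requirement, "default")
                server = self.idle_servers.pop()   # activate an idle file server
                self.assignments[client] = (server, mode)
                return server, mode

        mgr = ServerManager()
        print(mgr.assign("client-a", "integrity"))   # ('fs3', 'synchronous')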
  • A vector based backward state justification search for test generation in sequential circuits

    Publication Year: 1990, Page(s): 630-637
    Cited by: Papers (1) | Patents (2)

    An innovative approach to the state justification portion of the sequential circuit automatic test pattern generation (ATPG) process is described. Given the absence of a stored fault, an ATPG controller invokes some combinational circuit test generation procedure, such as the D-algorithm, to identify a circuit state (the goal state) and input vectors that will sensitize a selected fault. The state justification phase then finds a transfer sequence from the present state to the goal. A forward fault propagation search can be successfully guided through state space from the present state, but a forward justification search is less efficient and its failure rate is high. The proposed backward, function-level search invokes inverse RTL-level primitives and exploits the easy movement of data vectors in structured VLSI circuits. The examples illustrated are in AHPL. The search is equally applicable to an RTL-level subset of VHDL. Combinational logic units are treated as functions, and circuit states are partitioned into control states and data states. Partial covers, conceptually similar to singular covers in the D-algorithm, model the inverse functions of the combinational logic units.

  • Fault analysis of a TMR system using multiple valued logic

    Publication Year: 1990, Page(s): 23-29

    Performance under stuck-at faults is analyzed for circuits using multiple-valued logic and triple modular redundancy (TMR). Three types of faults are specified, and the voter output of the TMR is determined on the basis of the type of fault. The average error per bit of the voter output is determined taking into consideration the type of fault and the signal transmitted. The errors due to Gaussian noise in multiple-valued logic are also analyzed.

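    To make the masking behavior concrete, here is a minimal voter sketch over a multiple-valued signal with one module pinned by a stuck-at fault. The four-valued alphabet and the median tie-break are assumptions for illustration; the paper's three fault types and error-per-bit analysis are not modeled.

        def vote(a: int, b: int, c: int) -> int:
            # Majority vote over three multiple-valued module outputs; when no
            # two modules agree, fall back to the median value.
            if a == b or a == c:
                return a
            if b == c:
                return b
            return sorted((a, b, c))[1]

        def stuck_at(level: int):
            # Model a stuck-at fault: the module output is pinned to `level`.
            return lambda correct_value: level

        correct = 1                                  # 4-valued logic, levels 0..3
        outputs = (correct, correct, stuck_at(3)(correct))
        assert vote(*outputs) == correct             # TMR masks the single fault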
  • A multilevel programming paradigm

    Publication Year: 1990, Page(s): 340-346

    A multilevel programming paradigm that resembles a hardware design process is developed. This paradigm provides software architectural support for software reuse and is based on the functional programming language FP, object-oriented programming, and conventional programming. There are three levels: the function level, the object level, and the procedure level. Ideas and concepts are adopted and modified to blend the three levels. The algebra of programs can be used to provide the underlying mathematical foundation for the function and object levels, and in-situ evaluation semantics is used to define the semantics of the procedure level. The concept of a signature is used as a tool to provide a window into an object and to specify interfaces. The user of a class needs to know only the signature. Adoption for reuse becomes easy in this paradigm because interface classes can be constructed. There is only one data type, the FP object, which provides a standard data format.

  • Massively parallel approach to pattern recognition

    Publication Year: 1990, Page(s): 61-67
    Cited by: Papers (1)

    Template matching is concerned with measuring the similarity between the patterns of two objects. A massively parallel approach to pattern recognition with a large template set is proposed. A class of image recognition problems inherently needs large template sets; for example, the recognition of Chinese characters needs thousands of templates. The proposed algorithm is based on the SIMD-SM-R machine, or SIMD machine with broadcasting abilities, which is the most popular parallel machine to date, and uses a multiresolution method to search for the matching template. The approach uses the pyramid data structure for the multiresolution representation of templates and the input image pattern. For a given image it scans the template pyramid searching for the match. Implementation of the proposed scheme is described.

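    The coarse-to-fine search can be sketched as follows, assuming 2x2 mean pyramids and SSD as the similarity measure, with the SIMD broadcast machine reduced to plain loops. Templates here are whole patterns of the same size as the input, as in character recognition.

        import numpy as np

        def pyramid(img, levels=3):
            # Mean pyramid: index 0 is full resolution, higher indices coarser.
            out = [img.astype(float)]
            for _ in range(levels - 1):
                h, w = out[-1].shape
                out.append(out[-1][:h // 2 * 2, :w // 2 * 2]
                           .reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
            return out

        def match(image, templates, keep=4):
            # Coarse pass: rank every template by SSD at the coarsest level and
            # keep only the best `keep` candidates for the fine pass.
            img_pyr = pyramid(image)
            coarse = sorted((np.sum((img_pyr[-1] - pyramid(t)[-1]) ** 2), i)
                            for i, t in enumerate(templates))
            survivors = [i for _, i in coarse[:keep]]
            # Fine pass: full-resolution SSD only for the survivors.
            return min(survivors, key=lambda i: np.sum((image - templates[i]) ** 2))

        rng = np.random.default_rng(0)
        templates = [rng.random((16, 16)) for _ in range(100)]
        assert match(templates[42] + 0.01 * rng.random((16, 16)), templates) == 42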
  • Distributed blackbox: a testbed for distributed problem solving

    Publication Year: 1990, Page(s): 741-748
    Cited by: Papers (2)

    Distributed blackbox (DBB) is an adapted version of the game of blackbox. It is shown why DBB is a suitable testbed for experimental research in distributed problem solving, wherein several experts cooperate with each other to solve a single problem. DBB is compared with another well-known testbed called DVMT (distributed vehicle monitoring testbed), and the advantages and disadvantages of using DBB are made apparent. Knowledge engineering, verification, and performance evaluation of the underlying expert systems for DBB require fewer man-years than for DVMT, while the richness of the problems to be solved is retained.

  • Issues in the design and implementation of views in object-oriented databases

    Publication Year: 1990
    Cited by: Papers (1)

    Summary form only given. Views have traditionally been a mechanism for coordinating access to shared data in a database. View definition schemes in relational databases have aimed at consistency with the conceptual model and the use of a limited view specification language that allows efficient and unambiguous translation of view updates. These goals are equally applicable to object-oriented databases (OODBs). Consequently, it makes sense to model views in OODBs as objects themselves. OODBs differ from relational databases in that they have notions of classes and instances, and allow nesting of both classes and instances. Thus the underlying theory of the object-oriented data model is graph based, in contrast to the relational theory, which is set based. Object-oriented views can be thought of as graph transformations applied to class or instance graphs in the underlying database. Depending on the nature of the underlying objects, views can be class-lattice views or complex object views. The exact set of graph transformations that can be applied to class or instance graphs needs to be limited to allow views to be virtualizable for querying purposes.

  • ROPCO: an environment for micro-incremental reuse

    Publication Year: 1990, Page(s): 347-354

    The authors report on the design of ROPCO (reuse of persistent code and object code), a novel approach to reuse utilizing code and object code stored in persistent structures. One of the main motives in the design of ROPCO is to eliminate recompilation of portions of a program which have not been altered, while the consistency of variables and intra-program flow graphs remains intact. Basic blocks are selected as the units of reuse, and the notion of a `use network' is devised to accomplish this goal. Another design motive is to store versions of blocks of a program efficiently, while maintaining direct and sequential access to those blocks. Hierarchical and flat persistency methods are utilized to achieve this goal.

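    A toy sketch of the reuse idea: a content hash decides whether a basic block's object code can be fetched from a persistent store instead of being recompiled. The `use network' and version management are omitted, and compile_block is a stand-in for real compilation.

        import hashlib
        import shelve

        def compile_block(src: str) -> str:
            return f"<object code for {src!r}>"     # stand-in for a real compiler

        def get_object_code(src: str, store) -> str:
            key = hashlib.sha256(src.encode()).hexdigest()
            if key not in store:                    # recompile only altered blocks
                store[key] = compile_block(src)
            return store[key]

        with shelve.open("/tmp/ropco_store") as store:   # persists across runs
            print(get_object_code("a = b + c", store))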
  • Requirements for token holding times in timed-token protocols

    Publication Year: 1990, Page(s): 598-604
    Cited by: Papers (3)

    Minimum requirements for the high-priority token-holding time in a network using a timed-token access protocol (such as IEEE 802.4 and FDDI) are derived in order to show that the throughput of synchronous messages is not lower than the amount of traffic generated for that class. This minimal value is essential in order to avoid unbounded queue length for the synchronous class and to achieve high network responsiveness. The results have been obtained for synchronous messages generated according to a generic periodic pattern; however, as no constraint is assumed for the period, the results obtained may be used to approximate nonperiodic generation patterns. It is shown how the theoretical results can be used to tune the network performance.

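    A back-of-the-envelope version of the requirement, not the paper's derivation: assuming the average token rotation time equals the target TTRT, station i can sustain at most H_i/TTRT of the link bandwidth for synchronous traffic, so H_i must be at least load_i * TTRT, subject to the protocol constraint that the allocations plus overhead fit within TTRT.

        def min_holding_times(loads, ttrt, overhead):
            # loads: per-station synchronous load, as fractions of link bandwidth.
            h = [load * ttrt for load in loads]     # H_i >= load_i * TTRT
            if sum(h) + overhead > ttrt:            # allocations must fit in TTRT
                raise ValueError("offered synchronous load is infeasible")
            return h

        # Three stations at 20% load each, TTRT = 8 ms, 1 ms rotation overhead:
        print(min_holding_times([0.2, 0.2, 0.2], ttrt=8e-3, overhead=1e-3))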
  • Performance analysis of DQDB

    Publication Year: 1990, Page(s): 548-555
    Cited by: Papers (5)

    The distributed queue dual bus (DQDB) is a draft standard proposed by the IEEE 802.6 Working Group on metropolitan area networks (MANs). This proposal has been criticized because the network control information is subject to propagation delays that may be much longer than the transmission time of a data segment. As a result, the protocol for dynamic bandwidth allocation suffers from asymmetric access properties. It is shown that both throughput and delay may be decisively affected by the geographic location of a station with respect to other users. On the other hand, DQDB yields fair medium access for many realistic scenarios.

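    The distributed-queue mechanism under analysis can be sketched with the request (RQ) and countdown (CD) counters commonly described for IEEE 802.6; the propagation delays that drive the paper's results are deliberately not modeled.

        class DqdbStation:
            # One station's counters for a single bus and priority level.
            def __init__(self):
                self.rq = 0          # outstanding downstream requests
                self.cd = 0          # empty slots owed downstream before sending
                self.queued = False

            def on_request_bit(self):
                # REQ seen on the reverse bus: a downstream station wants a slot.
                self.rq += 1

            def queue_segment(self):
                # Join the distributed queue behind all current requests.
                self.cd, self.rq = self.rq, 0
                self.queued = True

            def on_empty_slot(self):
                # Empty slot on the forward bus: serve downstream or transmit.
                if self.queued:
                    if self.cd == 0:
                        self.queued = False
                        return "transmit"
                    self.cd -= 1
                elif self.rq > 0:
                    self.rq -= 1
                return "pass"

        s = DqdbStation()
        s.on_request_bit()                  # one downstream request pending
        s.queue_segment()
        assert s.on_empty_slot() == "pass"  # first empty slot goes downstream
        assert s.on_empty_slot() == "transmit"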
  • The role of location of traffic control in internetwork gateways

    Publication Year: 1990

    Summary form only given. An evaluation of window-based gateway-to-gateway-level congestion control in an interconnected network environment is reported. Two dynamic control algorithms proposed by the author are intended to operate the system below the critical internetwork load that gives rise to congestion at gateways and connected networks. In one of the algorithms, source gateways regulate the traffic flow, whereas in the other, destination gateways provide the control. The algorithms add further control to the static window protocol by adjusting the window size in accordance with the availability of network resources at the destination. A comparison of the two algorithms has shown that controlling the internetwork messages at the destination gateway produces better performance results than control at the source gateway.

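    A toy sketch in the spirit of the destination-side algorithm: the destination gateway shrinks the sender's window as its buffers fill and grows it as they drain. The thresholds and step sizes are invented, not taken from the paper.

        def next_window(current, buffer_used, buffer_size, w_min=1, w_max=64):
            occupancy = buffer_used / buffer_size
            if occupancy > 0.8:              # congestion building: back off
                return max(w_min, current // 2)
            if occupancy < 0.3:              # resources available: probe upward
                return min(w_max, current + 1)
            return current                   # otherwise hold the window steady

        print(next_window(32, buffer_used=90, buffer_size=100))   # -> 16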
  • DIAES: a distributed image archiving expert system

    Publication Year: 1990, Page(s): 749-756
    Cited by: Papers (1) | Patents (1)

    A distributed image archiving expert system (DIAES) is introduced to improve the response time of a multilevel storage system and to support real-time picture archiving and communication systems (PACSs). DIAES provides efficient schemes for image retrieval, storage, and organization by means of techniques offered by distributed artificial intelligence (DAI). With DIAES, declarative (i.e., image description) and procedural (i.e., heuristics for selecting the right images, predicting the images that will be requested next, and calculating the `connection force' between images) knowledge are both integrated into a hierarchical, entity-based knowledge management scheme, termed the frames and rules associated system entity structure (FRASES). By interpreting the built-in knowledge, DIAES predicts the images that have a higher probability of being retrieved next and archives these images to the upper (or top) level storage devices. This, in turn, reduces the possibility of accessing images located on the lower-level storage devices and improves the response time.

  • Methodological and architectural keynotes for an integrated multimedia hospital information system

    Publication Year: 1990

    Summary form only given. The key points of the architecture of the distributed information systems of a completely new hospital are discussed. The major aspects relate to the system's capability of integrating natural language texts into the standard distributed database and providing the user with `intelligent' query capabilities for retrieving information and knowledge from them. The information system under development will be compatible with some European standards, concurrently being defined by a European consortium of hospitals, computer manufacturers, and software houses. These requirements have also determined the need for adapting and improving various methodological issues for the design activities. The most important and innovative aspects of the project are identified. The system relies on a distributed architecture, in which both data and processes are split on different computers and are able to support the local departmental needs autonomously, as well as communicate with each other for exchanging data in a modality fully transparent to the users.

  • Mutual exclusion in arbitrary networks when process identity numbers are not distinct

    Publication Year: 1990, Page(s): 885-886

    A study of the problem of mutual exclusion of critical sections of several distributed processes is presented. Each process is given a particular section of code called its critical section. The problem is to design a protocol ensuring that, at any given time, no more than one process is executing its own critical section. A distributed algorithm for mutual exclusion working in arbitrary networks is proposed. The algorithm does not use logical clocks, and the identities of nodes are not necessarily distinct. Initially, each process knows only its adjacent channels, and knows them by local identities. The network is organized as a logical rooted tree in which the root holds the token. When a process wants to enter the critical section, it sends a request message over an outgoing channel toward the root and waits for the token. The number of messages required for every request is bounded by 2d, where d is the diameter of the network.

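    The token-forwarding idea can be sketched as follows: a request travels hop by hop toward the current root, and the token then travels back along the same path, reversing the parent pointers so that the requester becomes the new root. This is a sequential simulation; real message passing, request queuing, and concurrency are omitted.

        class Node:
            def __init__(self, name):
                self.name = name
                self.parent = None      # local channel pointing toward the token
                self.has_token = False

            def request(self):
                # Forward the request toward the root: at most d hops up and
                # d hops back for the token, hence the 2d message bound.
                path = []
                node = self
                while not node.has_token:
                    path.append(node)
                    node = node.parent
                for hop in reversed(path):       # token travels back down
                    node.has_token, node.parent = False, hop
                    hop.has_token, hop.parent = True, None
                    node = hop
                # self now holds the token and may enter its critical section

        a, b, c = Node("a"), Node("b"), Node("c")
        b.parent, c.parent = a, b       # chain c -> b -> a; token starts at a
        a.has_token = True
        c.request()
        assert c.has_token and b.parent is c and a.parent is b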
  • Consensus problem on a generalized network connected by unreliable transmission media

    Publication Year: 1990

    Summary form only given. The authors consider a distributed system in which all processors are reliable during the consensus execution, while the message TMs (transmission media) may be disturbed by noise or an intruder and the exchanged messages altered maliciously. By definition, if consensus can be reached under the symptom of malicious faults, the other cases of fault assumption are solved. An efficient and reliable protocol is therefore proposed, and its efficiency and reliability are proved. The common term `round' is used to denote the interval of a message exchange. The proposed protocol, GLINK, can tolerate the maximal number of faulty TMs and requires only two rounds. Protocol GLINK achieves consensus in two phases: the message exchange phase, which requires only two rounds, and the decision-making phase, for which no rounds are required.

  • Transmission revaluation of ISDN transmission lines at basic access

    Publication Year: 1990
    Cited by: Papers (2)

    Owing to the changed transmission contents and quality requirements of the integrated services digital network's (ISDN's) basic access, the transmission performance of already existing metallic pairs has to be reevaluated to ensure their usability. The main elements affected are the transmission mode and line coding on twisted wires. The signal-to-noise ratio (S/N) of near- and far-end crosstalk is presented, and the error probability (Pe) of transmission lines is also reevaluated under various conditions.

  • Performance evaluation of an IEEE 802.4 LAN

    Publication Year: 1990, Page(s): 523-530

    An IEEE 802.4 local area network (LAN) can be analyzed by two different approaches. The first approach focuses on aspects of the network, or protocol, which affect performance. In the IEEE 802.4 token bus protocol, the key performance measures of token rotation time and network efficiency can be altered by defining several parameters. Experiments on a real token bus network showed that, by carefully selecting these parameters to match the characteristics of a given application, the tradeoff between token rotation time and efficiency can be optimized. The second approach emphasizes the performance level of the individual station provided by VLSI integrated circuits. The relevant measures of performance, which were analyzed for the MC68824 Token Bus Controller (TBC), include percentage of bus bandwidth, the contribution of the TBC to station delay, and the ability of the TBC to avoid the transmit underrun and receive overrun error conditions.

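    As a back-of-the-envelope illustration of the tradeoff (not the paper's model): raising the stations' token-holding times lets more data flow per rotation but lengthens the token rotation time.

        def rotation_time(n_stations, t_pass, hold_times):
            # One rotation: every station passes the token (t_pass each) and
            # may transmit for up to its holding time.
            return n_stations * t_pass + sum(hold_times)

        print(rotation_time(10, 0.5e-3, [2e-3] * 10))   # 0.025 s for this toy setup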
  • An approach to conceptual modelling of IR auxiliary data

    Publication Year: 1990, Page(s): 500-505
    Cited by: Papers (1)

    The diffusion of automated information retrieval (IR) applications, and of other applications which share many peculiarities with IR, is increasing. Demand for these types of applications exposes the present lack of methodologies for IR data design. The complexity of data modeling in IR is due more to the complex relationships which exist between the indexing terms than to the relationships between documents of the collection. The conceptual modeling paradigm necessary for the design of auxiliary data is investigated. An object-oriented approach is compared with the conceptual modeling paradigm and examined as a candidate tool for IR design.

  • Parallel system development: a reference model for CASE tools

    Publication Year: 1990, Page(s): 364-372

    An ongoing study and enhancement program is being conducted on Smith Industries' development process. The findings are being applied to real projects, and the results are being closely tracked. The author summarizes some of the results so far and shows how they provide a reference model for CASE (computer-aided-software-engineering) tools. From the study results it is concluded that no simple sequential model can describe the development process and that a parallel model, adaptable to the needs of the particular project, is more realistic. This parallel model encompasses all of the disciplines involved in development: systems, hardware, software, reliability, testing, manufacturing, and so on. A project database is needed to accommodate all the many products of development and their complex relationships across all these disciplines. Such a model and database provide a reference for evaluating existing CASE tools and developing new ones. Existing CASE tools fall far short of meeting the needs of this model.

  • A retargetable compiler for the generation of control routines of a microprocessor-based digital system

    Publication Year: 1990, Page(s): 469-476
    Cited by: Papers (2) | Patents (1)

    A retargetable compiler using a hybrid interpretative/table-driven code generation scheme is described. The rapid development of computer architectures and microprocessors has created an urgent need for retargetable compilers. A retargetable compiler has the following advantages in designing microprocessor-based digital systems: (a) designers need not learn any microprocessor assembly language; (b) it is easy to translate programs to a new microprocessor-based digital system. The UNCOL model of J. Strong et al. (1958) and a general framework for retargetable compilers are discussed, the three most popular retargetable code generation methods are compared, and a hybrid interpretative/table-driven scheme is proposed. An implementation for the EMSAIL language is described.

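    The table-driven half of such a compiler can be caricatured as follows: one intermediate representation, one instruction table per target, and retargeting by swapping tables. The IR and both instruction tables are invented, and the hybrid interpretative part is not shown.

        IR = [("load", "r1", "x"), ("add", "r1", "y"), ("store", "r1", "z")]

        # Per-target templates; these accumulator-style targets ignore the
        # register operand, so only operand {1} appears in the templates.
        TARGET_A = {"load": "LDA {1}", "add": "ADDA {1}", "store": "STA {1}"}
        TARGET_B = {"load": "MOV A,{1}", "add": "ADD A,{1}", "store": "MOV {1},A"}

        def generate(ir, table):
            # Retargeting means swapping the table, not rewriting the generator.
            return [table[op].format(*operands) for op, *operands in ir]

        print(generate(IR, TARGET_A))   # ['LDA x', 'ADDA y', 'STA z']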
  • A semantic data model for intellectual database access

    Publication Year: 1990, Page(s): 719-725

    A semantic data model is developed to make intellectual database access possible. The model design is based on the entity-relationship model, from the viewpoint of the management of semantic information. The concept of case grammar is introduced, not only to make the properties of entities distinct, but also to interpret natural-language-like queries effectively.

  • Software interfaces for integrated simulation applications

    Publication Year: 1990, Page(s): 832-839

    In integrated simulation applications, simulation tools interact with other components of computer-aided-design systems at the level of internal structures, sharing some internal data and communicating through procedural invocations. There are two basic types of interactions between interacting tools, the so-called reverse and indirect communication methods. Reverse communication returns to the invocation environment after each required task; it is a rather flexible technique, but it requires global control methods, which can be quite unreliable and difficult to modify in nontrivial applications. Indirect communication uses local control at different levels of a hierarchical organization, passing global information from one level of the hierarchy to another; it thus corresponds to the hierarchical structure of tools and to different levels of abstraction at different levels of the hierarchy. Several integrated applications built around a circuit simulation environment are described, and interfaces designed for the integration of different tools are discussed.

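    The reverse communication style named above can be sketched with a generator: the simulator hands control back to its invocation environment whenever it needs an external task performed, then resumes with the result. The model evaluated here is invented for illustration.

        def simulator(steps):
            state = 0.0
            for _ in range(steps):
                # Return to the caller with a task request; resume with result.
                external = yield ("evaluate_model", state)
                state += external

        sim = simulator(3)
        request = next(sim)
        try:
            while True:
                task, state = request
                result = 0.5 * (1.0 - state)    # the environment's answer
                request = sim.send(result)
        except StopIteration:
            pass                                # simulation finished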
  • Two learning methods for a tree-search combinatorial optimizer

    Publication Year: 1990, Page(s): 606-613
    Cited by: Papers (8)

    Several combinatorial problems of logic synthesis and other CAD problems have been solved in a uniform way using a general-purpose tree-searching program, MULT-II. Two learning methods that have been implemented to improve the program's efficiency are presented. A weighted heuristic function, used to evaluate operators, is applied during the solution tree search. The optimal vector of coefficients for this function is learned in a simplified perceptron scheme. In the second learning method, the similarity of shapes among the solution cost improvement curves is used to define the termination moment of the search process. The amplification effect of the concurrent action of both these methods has been observed.

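    A hedged sketch of the first method: the heuristic is f(op) = w . features(op), and a perceptron-style update corrects the weights whenever an operator known (from completed searches) to lead to better solutions is ranked below a worse one. The feature vectors and learning rate are invented.

        def perceptron_update(w, better, worse, lr=0.1):
            # Nudge the weights so that the better operator scores higher.
            score = lambda feats: sum(wi * fi for wi, fi in zip(w, feats))
            if score(better) <= score(worse):   # misranked pair: correct it
                w = [wi + lr * (b - c) for wi, b, c in zip(w, better, worse)]
            return w

        w = [0.0, 0.0]
        for _ in range(20):                     # two operators' feature vectors
            w = perceptron_update(w, better=[1.0, 0.2], worse=[0.3, 0.9])
        print(w)                                # now ranks `better` above `worse`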
  • Uncertainty in information retrieval: an approach based on fuzzy sets

    Publication Year: 1990, Page(s): 809-814
    Cited by: Papers (4) | Patents (1)

    Design and implementation issues are discussed for a fuzzy information retrieval system using a knowledge-based system approach. In this context, system performance is identified as a critical point, and heuristics are suggested for constrained spreading activation in the knowledge base and for optimization of the retrieval procedure. The design and implementation issues outlined are being tested in a prototype system named FIRST (Fuzzy Information Retrieval SysTem), which is being implemented in Prolog on a personal workstation.

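    The constrained spreading activation mentioned above can be sketched as follows, assuming activation decays per hop and propagation stops below a threshold; the actual constraints of FIRST and its Prolog implementation are not reproduced.

        def spread(graph, seeds, decay=0.5, threshold=0.1):
            # graph: term -> [(neighbor, link_strength)]; seeds: term -> activation.
            activation = dict(seeds)
            frontier = list(seeds.items())
            while frontier:
                term, act = frontier.pop()
                for neighbor, strength in graph.get(term, []):
                    new = act * strength * decay
                    if new > threshold and new > activation.get(neighbor, 0.0):
                        activation[neighbor] = new      # constrained: stop early
                        frontier.append((neighbor, new))
            return activation

        graph = {"fuzzy": [("vague", 0.9), ("logic", 0.6)],
                 "logic": [("prolog", 0.8)]}
        print(spread(graph, {"fuzzy": 1.0}))
        # {'fuzzy': 1.0, 'vague': 0.45, 'logic': 0.3, 'prolog': 0.12}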