Date 23-26 Feb. 1997

Displaying Results 1 - 25 of 58
  • Proceedings IEEE COMPCON 97. Digest of Papers [Front Matter and Table of Contents]

    Publication Year: 1997
    PDF (316 KB)
    Freely Available from IEEE
  • Universal data access with OLE DB

    Publication Year: 1997 , Page(s): 2 - 7
    Cited by:  Papers (1)  |  Patents (24)
    PDF (665 KB)

    OLE DB is Microsoft's new data access API designed to enable access to all kinds of data sources, both database and non-database, regardless of format or location. OLE DB builds on Microsoft's Component Object Model (COM), which is the foundation for OLE and ActiveX, and is core to Microsoft's Universal Access strategy. OLE DB aims to provide an environment in which business applications can access all kinds of data sources in an integrated way, including desktop data such as spreadsheets, text-processing documents, and electronic mail; server data stored in the file system, indexed sequential files, and relational, hierarchical, and network databases; and data computed by middle-tier business objects. Most database companies are pursuing a Universal Storage strategy, which provides access to all kinds of data types, such as text, spatial, video, and audio, and insists on placing all of an organization's data inside the database. The reality, however, is that a vast amount of mission-critical corporate data is stored in a combination of database and non-database sources for functionality and performance reasons. Microsoft is therefore pursuing a Universal Access strategy, which provides an infrastructure for integrating a wide variety of data sources so that applications can be written in an efficient, safe, and disciplined manner without losing the advantages of a centralized database system. The paper provides an overview of OLE DB and describes how it enables the Microsoft Universal Access approach to managing data.
  • Windows NT clusters for availability and scalability

    Publication Year: 1997 , Page(s): 8 - 13
    Cited by:  Patents (20)
    PDF (630 KB)

    We describe the architecture of the clustering extensions to the Windows NT operating system. Windows NT clusters provide three principal user-visible advantages: improved availability, by continuing to provide a service even during hardware or software failures; increased scalability, by allowing new components to be added as system load increases; and simpler management, by allowing the administrator to manage an entire group of systems and their applications as a single system. We first describe the high-level goals of the design team and some of the difficulties in making the appropriate changes to Windows NT. We then provide an overview of the structure of the cluster-specific components and discuss each component in more detail, before closing with a discussion of some possible future enhancements.
  • Microsoft Transaction Server

    Publication Year: 1997 , Page(s): 14 - 18
    Cited by:  Papers (1)  |  Patents (39)
    PDF (403 KB)

    The Microsoft Transaction Server represents a new category of product that makes it easier to develop and deploy high-performance, scalable, and reliable distributed applications. This is achieved by combining the technology of component-based development and deployment environments with the reliability and scalability of transaction-processing monitors.
  • The Alpha 21164PC microprocessor

    Publication Year: 1997 , Page(s): 20 - 27
    Cited by:  Papers (2)  |  Patents (1)
    PDF (726 KB)

    The internal architecture of a 2000 MIPS / 1000 MFLOPS (peak), high-performance, low-cost CMOS Alpha microprocessor chip is described. This implementation is derived from the Alpha 21164 microprocessor to reduce cost while maintaining high performance. It contains a quad-issue superscalar instruction unit, two 64-bit integer execution pipelines, and two 64-bit floating-point execution pipelines. The memory unit and bus interface unit have been redesigned to provide a high-performance memory system using industry-standard PC SRAM and DRAM components.
  • The Alpha 21264: a 500 MHz out-of-order execution microprocessor

    Publication Year: 1997 , Page(s): 28 - 36
    Cited by:  Papers (20)  |  Patents (18)
    PDF (782 KB)

    The paper describes the internal organization of the 21264, a 500 MHz, out-of-order, quad-fetch, six-way issue microprocessor. The aggressive cycle time of the 21264, in combination with many architectural innovations such as out-of-order and speculative execution, enables this microprocessor to deliver an estimated 30 SPECint95 and 50 SPECfp95 performance. In addition, the 21264 can sustain 5+ Gbytes/sec of bandwidth to an L2 cache and 3+ Gbytes/sec to memory for high performance on memory-intensive applications.
  • DIGITAL FX!32: running 32-bit x86 applications on Alpha NT

    Publication Year: 1997 , Page(s): 37 - 42
    Cited by:  Papers (1)  |  Patents (5)
    PDF (580 KB)

    DIGITAL FX!32 is a unique combination of emulation and binary translation which ensures that any 32-bit program that runs on an x86 system under Windows NT 4.0 will install and run on an Alpha Windows NT 4.0 system. After translation, x86 applications run as fast under DIGITAL FX!32 on a 500 MHz Alpha system as on a 200 MHz Pentium Pro. The emulator and its associated runtime provide transparent execution of x86 applications. The emulator uses translation results when they are available and produces profile data that is used by the translator. The translator provides native Alpha code for the portions of an x86 application that have been previously executed. A server manages the translation process for the user, making the overall process completely transparent.
  • Color image quality metric S-CIELAB and its application on halftone texture visibility

    Publication Year: 1997 , Page(s): 44 - 48
    Cited by:  Papers (19)
    PDF (399 KB)

    We describe experimental tests of a spatial extension to the CIELAB color metric for measuring color reproduction errors of digital images. The standard CIELAB ΔE metric is suitable for use on large uniform color targets, but not on images, because color sensitivity changes as a function of spatial pattern. The S-CIELAB extension adds a spatial processing step, prior to the CIELAB ΔE calculation, so that the results correspond better to color-difference perception by the human eye. The S-CIELAB metric was used to predict the texture visibility of printed halftone patterns. The results correlate with perceptual data better than standard CIELAB and point the way to various improvements.
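As a rough illustration of the idea in this abstract: the standard CIE76 ΔE is a per-pixel Euclidean distance in L*a*b* space, and S-CIELAB filters each channel spatially before that distance is taken. The sketch below uses a simple box blur as a stand-in for S-CIELAB's actual opponent-channel contrast-sensitivity filters; the function names and blur radius are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """CIE76 Delta-E: per-pixel Euclidean distance between two L*a*b* images."""
    return np.sqrt(np.sum((lab1 - lab2) ** 2, axis=-1))

def box_blur(channel, radius=1):
    """Separable box blur; a crude stand-in for S-CIELAB's spatial filters."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, channel)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def spatial_delta_e(lab_a, lab_b, radius=1):
    """Blur each channel before Delta-E, mimicking S-CIELAB's pre-filtering step."""
    fa = np.stack([box_blur(lab_a[..., i], radius) for i in range(3)], axis=-1)
    fb = np.stack([box_blur(lab_b[..., i], radius) for i in range(3)], axis=-1)
    return delta_e_76(fa, fb)
```

The pre-filter is what makes fine halftone texture, which the eye averages out, score a smaller error than the same pixel differences in a smooth region.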
  • An evaluation of video fidelity metrics

    Publication Year: 1997 , Page(s): 49 - 55
    Cited by:  Papers (1)
    PDF (646 KB)

    We assess the capability of four metrics (average MSE, average SNR, ANSI parameters, and the ITS metric) to determine the fidelity of video sequences. First, we discuss the ideal requirements for a video fidelity metric, in terms of monotonicity, degree of change, and consistent behavior. Then, we construct a series of highly reproducible degraded sequences containing artifacts common to DCT-based transform coders, such as H.263, and evaluate the performance of each metric on those sequences. From the resulting data, we determine the accuracy and reliability of each of those metrics. Our analysis can help guide the choice of an appropriate video fidelity metric for evaluating algorithm and architecture choices and tradeoffs.
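For the two simplest metrics named in this abstract, a per-frame computation averaged over the sequence can be sketched as below. The ANSI parameters and the ITS metric are considerably more involved; the function names here are illustrative, not the paper's.

```python
import numpy as np

def average_mse(ref_frames, test_frames):
    """Mean squared error per frame, averaged over the whole sequence."""
    per_frame = [np.mean((np.asarray(r, float) - np.asarray(t, float)) ** 2)
                 for r, t in zip(ref_frames, test_frames)]
    return float(np.mean(per_frame))

def average_snr_db(ref_frames, test_frames):
    """Per-frame SNR in dB (reference power over error power), averaged."""
    snrs = []
    for r, t in zip(ref_frames, test_frames):
        r = np.asarray(r, float)
        t = np.asarray(t, float)
        err = np.mean((r - t) ** 2)
        # Identical frames have zero error power; treat their SNR as infinite.
        snrs.append(float("inf") if err == 0 else 10.0 * np.log10(np.mean(r ** 2) / err))
    return float(np.mean(snrs))
```

Metrics like these are purely pixel-wise, which is exactly why the paper tests whether they behave monotonically and consistently on perceptually graded degradations.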
  • Image quality metrics based on single and multi-channel models of visual processing

    Publication Year: 1997 , Page(s): 56 - 60
    Cited by:  Papers (1)
    PDF (456 KB)

    We review two classes of image analysis tools based on single- and multiple-channel models of human visual processing. These tools were designed to predict the visibility of printed dots and halftone texture, respectively.
  • Efficient self-versioning documents

    Publication Year: 1997 , Page(s): 62 - 67
    Cited by:  Papers (2)  |  Patents (1)
    PDF (628 KB)

    We describe methods to produce software and multimedia documents that are self-versioning: they efficiently capture changes as the document is modified, providing access to every version with extremely fine granularity. The approach uses an object-based spatial indexing scheme that combines fast access with very low storage overhead. Multiple tools can extract change reports from these documents without requiring their queries to be synchronized. We describe and evaluate a working implementation of these ideas, suitable for use in software development environments, multimedia authoring systems, and non-traditional databases.
  • Presentation by tree transformation

    Publication Year: 1997 , Page(s): 68 - 73
    Cited by:  Patents (1)
    PDF (522 KB)

    Structured documents are represented as trees. The layout or presentation of a document is also often modeled as a computation over a tree. But these trees are not generally the same. For instance, LaTeX converts a structured document to the TeX formatting hierarchy of boxes and glue. In other words, presentation is a mapping between trees. Casting it as a formal tree transformation offers both expressive, compact style specifications and efficient implementation. In our structured document system Ensemble, we have implemented a general framework for presentation by tree transformation. It consists of a core transformation engine; several distinct output tree languages, or 'media'; and style files in a common language. To demonstrate its flexibility, we have built media for formatting programs, for presenting numerical data as graphs, and for displaying the tree structure of any document. We have also defined four efficiency requirements for interactive presentation, and tuned the implementation to meet each one.
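The mapping from a document tree to an output (layout) tree that this abstract describes can be sketched as a recursive transformation driven by per-tag style rules. The node representation, rule names, and layout constructs below are hypothetical illustrations, not Ensemble's actual style language.

```python
# Hypothetical node shape: a text node is a plain string; an element is
# a (tag, children) pair. A style rule maps rendered children to an output node.
def present(node, rules):
    """Recursively transform a document tree into an output (layout) tree."""
    if isinstance(node, str):
        return node                      # text passes through unchanged
    tag, children = node
    rendered = [present(child, rules) for child in children]
    # Fall back to a generic box when no style rule exists for this tag.
    return rules.get(tag, lambda kids: ("box", kids))(rendered)

# Illustrative style rules mapping document tags to layout constructs.
rules = {
    "section": lambda kids: ("vbox", kids),   # stack children vertically
    "para":    lambda kids: ("hbox", kids),   # lay children out in a line
    "em":      lambda kids: ("italic", kids),
}

doc = ("section", [("para", ["plain ", ("em", ["emphasized"]), " text"])])
layout = present(doc, rules)
```

Because the style specification is just a set of tag-to-fragment rules, swapping in a different rule table retargets the same document to a different output medium, which is the flexibility the paper's 'media' provide.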
  • Grendel: a Web browser with end user extensibility

    Publication Year: 1997 , Page(s): 74 - 79
    Cited by:  Papers (1)
    PDF (690 KB)

    Electronic documents, particularly those on the World Wide Web, have an inherent structure which can be utilized. However, the tools to do so have typically been oriented towards professional programmers. We present scripting-language features that can be incorporated into tools that manipulate structured network documents. This set of language features allows us to build visual tools to specify transformations on such documents. Transformation scripting is thereby opened up to a broad class of users, which allows the tools to be easily extended by end users. World Wide Web browsers are one class of tools that can take advantage of this technique. We discuss our experimental browser, Grendel, which has an embedded scripting language, CrossJam, based upon transformation scripting. Grendel has a number of novel applications and a simple visual tool, Spar, to assist in scripting the browser's behavior.
  • The evolution of the HP/Convex Exemplar

    Publication Year: 1997 , Page(s): 81 - 86
    Cited by:  Papers (13)  |  Patents (4)
    PDF (456 KB)

    The Exemplar X-Class is the second-generation SPP from HP/Convex. It is a ccNUMA (cache-coherent nonuniform memory access) architecture composed of multiple nodes. We describe the evolution from the first-generation systems to the current S- and X-Class systems. Each node may contain up to 16 PA-8000 processors, 16 GBytes of memory, and 8 PCI busses. The peak performance of each node is 11.5 Gflops. Memory access is UMA within each node and is accomplished via a nonblocking crossbar, so each node can correctly be considered a symmetric multiprocessor. The interconnect between nodes is a derivative of the IEEE SCI standard, which permits up to 32 nodes to be connected in a two-dimensional topology. The system includes features to aid high-performance engineering/scientific computation, among them a hardware bcopy engine, interconnect caches, and memory- and cache-based semaphores.
  • Compiler optimizations for the PA-8000

    Publication Year: 1997 , Page(s): 87 - 94
    PDF (856 KB)

    Compiler optimizations play a key role in unlocking the performance of the PA-8000 (L. Gwennap, 1994), an innovative dynamically scheduled machine which is the first implementation of the 64-bit PA 2.0 member of the HP PA-RISC architecture family. This wide superscalar, deeply out-of-order machine provides significant execution bandwidth and automatically hides latency at runtime. Despite its ample hardware resources, however, many of the optimizing transformations that proved effective for the PA-8000 served to augment its ability to exploit the available bandwidth and to hide latency. While legacy codes benefit from the PA-8000's sophisticated hardware, recompilation of old binaries can be vital to realizing the machine's full potential, given the impact of the new compilers in achieving peak performance.
  • New security architectural directions for Java

    Publication Year: 1997 , Page(s): 97 - 102
    Cited by:  Papers (7)  |  Patents (3)
    PDF (682 KB)

    The paper gives an overview of the technical direction of Java in terms of the security architecture and desirable features. It also highlights some of the feasibility constraints on the security solutions. The paper assumes that the reader has prior knowledge of Java basics and general security issues. The article is a purely technical discussion for the wider Java community, and does not necessarily commit JavaSoft to any particular features or implementations.
  • Privacy-enhancing technologies for the Internet

    Publication Year: 1997 , Page(s): 103 - 109
    Cited by:  Papers (11)  |  Patents (2)
    PDF (730 KB)

    The increased use of the Internet for everyday activities is bringing new threats to personal privacy. The paper gives an overview of existing and potential privacy-enhancing technologies for the Internet, as well as motivation and challenges for future work in this field.
  • A comprehensive diagnostics software strategy for IDT's microprocessors

    Publication Year: 1997 , Page(s): 111 - 114
    PDF (296 KB)

    Although it is routine to write tests that verify the basic functions of the cache or pipeline, how often are a write-after-write interlock or an interrupt leading to an aborted cache flush cycle verified? Diagnostic software verifies the architectural compliance and functionality of a microprocessor design, and it is one of the most important areas to consider during the hardware development phase: a design is only as good as the diagnostics that test and verify it. IDT is highly aware of the importance of diagnostics software, and a diagnostics team was formed to design a world-class functional test suite for its microprocessors. The goal of this team was to create structured, modular, and highly leverageable tests that conformed to a vertical diagnostics strategy, designed to address complex operation sequences, such as pipeline interlocks and hazards as well as asynchronous interruptions to the pipeline, in various combinations. To simplify the debugging efforts of logic designers, a generic exception handler was implemented, capable of handling nested interrupts and exceptions in the correct priority order to accommodate simultaneous exception occurrences. To cover all possible processor cycle combinations, including the areas that controlled diagnostics cannot reach, pseudo-random diagnostics code was generated. To further guarantee a comprehensive diagnostics strategy, several application programs written in C were developed.
  • Functional verification of the superscalar SH-4 microprocessor

    Publication Year: 1997 , Page(s): 115 - 120
    Cited by:  Patents (5)
    PDF (523 KB)

    Functional verification of modern complex processors is a formidable and time-consuming task. In spite of substantial manual effort, it is extremely difficult to systematically cover the corner cases of the control logic design within a short processor design cycle. The SH-4 processor is a dual-issue superscalar RISC architecture with extensive hardware support for 3D graphics. We present the development of a semi-automated methodology for functional verification. In particular, we elaborate a scheme to automatically generate test programs to verify the superscalar issue logic, bypass/multi-bypass logic, and stall logic, starting from the microarchitectural specification. Finally, we present the Random Test Generation methodology and the specific Random Test Generators.
  • Web Browser Intelligence: opening up the Web

    Publication Year: 1997 , Page(s): 122 - 123
    Cited by:  Patents (1)
    PDF (141 KB)

    The World-Wide Web has brought us two important resources: ubiquitous browsers and a global information repository. In the current model, these two resources are tightly coupled. By introducing the concept of a programmable intermediary between browser and server, Web Browser Intelligence (WBI) has relaxed this coupling. WBI allows the browser to execute arbitrarily complex commands using the network of Web servers as an information resource. This architecture enables automatic personalization of the Web for individual users, automatic restructuring of information on servers, and collaboration among Web users.
  • Acorn's technology for network computing

    Publication Year: 1997 , Page(s): 124 - 129
    PDF (625 KB)

    The concept of network computing (simple, easy-access client machines linked to service provision elsewhere on the network) has recently become a topic of serious interest. Acorn has for many years concentrated its efforts on the design of economical interactive computing systems; now, technologies Acorn has developed over the years are finding new outlets in this emerging market. We survey both the rationale behind some particular technical efforts (such as how to make the most of low-resolution displays, especially TV) and the ways in which they are now proving appropriate to current developments in the network computing arena.
  • Evolution of object-relational database technology in DB2

    Publication Year: 1997 , Page(s): 131 - 135
    Cited by:  Patents (10)
    PDF (510 KB)

    The paper defines the object-relational approach to database management and examines its advantages. It summarizes the features that are expected in an object-relational system, and discusses how these features are influencing the ANSI/ISO SQL Standard. Using DB2 as an example, it describes how object-relational features can be used to meet the needs of advanced database applications.
  • Bringing objects to the mainstream

    Publication Year: 1997 , Page(s): 136 - 142
    Cited by:  Patents (21)
    PDF (534 KB)

    Oracle provides an open type system that is consistent with ANSI SQL3 and provides interoperability of SQL with C/C++, Java and CORBA data models. By providing native support for objects in the database and navigational access to database objects from different host languages, we are reducing the impedance mismatch between the applications and the database. The Oracle server also provides a database extensibility framework that allows a tight integration of domain-specific data and logic with the database server. This database extensibility is achieved in an open, safe and manageable fashion in a network-centric computing architecture.
  • DataBlade extensions for INFORMIX-Universal Server

    Publication Year: 1997 , Page(s): 143 - 148
    Cited by:  Patents (12)
    PDF (582 KB)

    In September 1996, Informix Software released the first version of its new object-relational database management system to developers and partners. This system, called the INFORMIX-Universal Server, supported a new way of building and deploying database applications. Developers could write software modules, called DataBlade extensions, that extended the database server with knowledge of new types and operations. This paper describes the architecture of INFORMIX-Universal Server and how it supports DataBlade extensions. The paper describes the way that DataBlade developers and application developers interact with the server. Finally, it describes a set of DataBlade extensions that were available for the server at the end of 1996.
  • System overview of the SGI Origin 200/2000 product line

    Publication Year: 1997 , Page(s): 150 - 156
    Cited by:  Papers (4)  |  Patents (22)
    PDF (866 KB)

    The SGI Origin 200/2000 is a cache-coherent non-uniform memory access (ccNUMA) multiprocessor, designed and manufactured by Silicon Graphics Inc. (SGI). The Origin system was designed from the ground up as a multiprocessor capable of scaling to both small and large processor counts without any cost, bandwidth, or latency cliffs. The Origin system consists of up to 512 nodes interconnected by the highly scalable CrayLink network. Each node consists of one or two R10000 processors and up to 4 GBytes of coherent memory, and also connects to the scalable XIO I/O subsystem. This paper discusses the motivation for building the Origin 200/2000 and describes its architecture and implementation.