
Search Results

Refined by Topic: Computing & Processing (Hardware/Software)
384 Results returned

    Spaces of Interaction, Places for Experience

    Benyon, D.
    DOI: 10.2200/S00595ED1V01Y201409HCI022
    Copyright Year: 2014

    Morgan and Claypool eBooks

    Spaces of Interaction, Places for Experience is a book about Human-Computer Interaction (HCI), interaction design (ID) and user experience (UX) in the age of ubiquitous computing. The book explores interaction and experience through the different spaces that contribute to interaction until it arrives at an understanding of the rich and complex places for experience that will be the focus of the next period for interaction design. The book begins by looking at the multilayered nature of interaction and UX—not just with new technologies, but with technologies that are embedded in the world. People inhabit a medium, or rather many media, which allow them to extend themselves, physically, mentally, and emotionally in many directions. The medium that people inhabit includes physical and semiotic material that combine to create user experiences. People feel more or less present in these media and more or less engaged with the content of the media. From this understanding of people in media, the book explores some philosophical and practical issues about designing interactions. The book journeys through the design of physical space, digital space, information space, conceptual space and social space. It explores concepts of space and place, digital ecologies, information architecture, conceptual blending and technology spaces at work and in the home. It discusses navigation of spaces and how people explore and find their way through environments. Finally, the book arrives at the concept of a blended space where the physical and digital are tightly interwoven and people experience the blended space as a whole. The design of blended spaces needs to be driven by an understanding of the correspondences between the physical and the digital, by an understanding of conceptual blending and by the desire to design at a human scale. There is no doubt that HCI and ID are changing. The design of “microinteractions” remains important, but there is a bigger picture to consider.
UX is spread across devices, over time and across physical spaces. The commingling of the physical and the digital in blended spaces leads to new social spaces and new conceptual spaces. UX concerns the navigation of these spaces as much as it concerns the design of buttons and screens for apps. By taking a spatial perspective on interaction, the book provides new insights into the evolving nature of interaction design. View full abstract»


    Tendency Assessment and Cluster Validity

    Bezdek, James C.
    Publication Year: 2008

    IEEE eLearning Library Courses

    This course - the second in a series of three - discusses several approaches to the first and third problems of clustering identified in module I - viz., pre-clustering tendency assessment and post-clustering cluster validation. The target audience comprises advanced undergraduate and graduate students majoring in engineering and science, and practicing engineers and scientists interested in either research about or applications of clustering to real world problems such as data mining, image analysis and bioinformatics. Some of the subject matter in this course is available in textbooks (most notably some of the material about cluster validity functionals), and some of the subject matter is the object of (my) current research. The references contain pointers to some excellent papers on these topics, and on a number of related or competitive methods that have been proposed and studied by others. I begin with a simple numerical example that establishes the necessity for both assessment and validity. Then, I discuss the visual assessment of tendency family of algorithms (VAT, sVAT and coVAT). These algorithms produce images that enable a user to make useful guesses about the number of clusters to seek in relational data before proceeding with a partitioning method for finding the clusters. Since object data can always be converted to relational form by computing pairwise distances, these methods are well defined for all types of unlabeled numerical data. The coVAT algorithm provides a means for estimating the number of clusters in each of the four problems associated with rectangular relational data: row clusters, column clusters, joint (pure) clusters, and mixed co-clusters. The second half of this course presents some examples of cluster validation using scalar measures or indices of cluster validity. Several examples from each of the three major categories (crisp, fuzzy and probabilistic) of indices are presented.
This course concludes with a numerical example that compares 23 indices of all three types on clusters in 12 sets of data drawn from mixtures of Gaussian distributions having either 3 or 6 components. (SOME) indices of all three types do pretty well in this example, while others do very badly. I don't think this problem has a general "solution", but since we use clustering in many, many applications, we keep trying to find good indices to validate algorithmic outputs. View full abstract»
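The VAT idea described above can be sketched in a few lines of Python: reorder a pairwise dissimilarity matrix with a Prim-like nearest-neighbour sweep so that objects in the same cluster end up adjacent. This is a minimal illustration of the concept, not Bezdek's reference implementation, and the sample data are invented:

```python
def vat_order(D):
    """Return a VAT-style reordering of indices for dissimilarity matrix D.

    Start from one end of the largest dissimilarity, then repeatedly append
    the unvisited object closest to any visited one. Displaying D reordered
    this way shows dark diagonal blocks, one per apparent cluster.
    """
    n = len(D)
    i0, _ = max(((i, j) for i in range(n) for j in range(n)),
                key=lambda p: D[p[0]][p[1]])
    order, rest = [i0], set(range(n)) - {i0}
    while rest:
        _, j = min(((D[i][j], j) for i in order for j in rest))
        order.append(j)
        rest.remove(j)
    return order

# demo: 1-D object data converted to pairwise (relational) dissimilarities
data = [0.0, 0.1, 5.0, 0.2, 5.1]
D = [[abs(a - b) for b in data] for a in data]
order = vat_order(D)  # groups the two apparent clusters together
```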


    Replicated Data Management for Mobile Computing

    Terry, D.
    DOI: 10.2200/S00132ED1V01Y200807MPC005
    Copyright Year: 2008

    Morgan and Claypool eBooks

    Managing data in a mobile computing environment invariably involves caching or replication. In many cases, a mobile device has access only to data that is stored locally, and much of that data arrives via replication from other devices, PCs, and services. Given portable devices with limited resources, weak or intermittent connectivity, and security vulnerabilities, data replication serves to increase availability, reduce communication costs, foster sharing, and enhance survivability of critical information. Mobile systems have employed a variety of distributed architectures from client–server caching to peer-to-peer replication. Such systems generally provide weak consistency models in which read and update operations can be performed at any replica without coordination with other devices. The design of a replication protocol then centers on issues of how to record, propagate, order, and filter updates. Some protocols utilize operation logs, whereas others replicate state. Systems might provide best-effort delivery, using gossip protocols or multicast, or guarantee eventual consistency for arbitrary communication patterns, using recently developed pairwise, knowledge-driven protocols. Additionally, systems must detect and resolve the conflicts that arise from concurrent updates using techniques ranging from version vectors to read–write dependency checks. This lecture explores the choices faced in designing a replication protocol, with particular emphasis on meeting the needs of mobile applications. It presents the inherent trade-offs and implicit assumptions in alternative designs. The discussion is grounded by including case studies of research and commercial systems including Coda, Ficus, Bayou, Sybase’s iAnywhere, and Microsoft’s Sync Framework. Table of Contents: Introduction / System Models / Data Consistency / Replicated Data Protocols / Partial Replication / Conflict Management / Case Studies / Conclusions / Bibliography View full abstract»
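The version-vector technique the abstract mentions for conflict detection can be sketched simply: each replica counts its own updates, and two versions conflict exactly when neither vector dominates the other. A minimal illustration, not tied to any of the systems named above:

```python
def compare(vv_a, vv_b):
    """Compare two version vectors (dicts mapping replica id -> update count).

    Returns 'equal', 'a_dominates', 'b_dominates', or 'conflict' (concurrent
    updates that a replication protocol must reconcile).
    """
    keys = set(vv_a) | set(vv_b)
    a_ge = all(vv_a.get(k, 0) >= vv_b.get(k, 0) for k in keys)
    b_ge = all(vv_b.get(k, 0) >= vv_a.get(k, 0) for k in keys)
    if a_ge and b_ge:
        return "equal"
    if a_ge:
        return "a_dominates"
    if b_ge:
        return "b_dominates"
    return "conflict"
```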


    Adaptive Interaction: A Utility Maximization Approach to Understanding Human Interaction with Technology

    Payne, S. ; Howes, A.
    DOI: 10.2200/S00479ED1V01Y201302HCI016
    Copyright Year: 2013

    Morgan and Claypool eBooks

    This lecture describes a theoretical framework for the behavioural sciences that holds high promise for theory-driven research and design in Human-Computer Interaction. The framework is designed to tackle the adaptive, ecological, and bounded nature of human behaviour. It is designed to help scientists and practitioners reason about why people choose to behave as they do and to explain which strategies people choose in response to utility, ecology, and cognitive information processing mechanisms. A key idea is that people choose strategies so as to maximise utility given constraints. The framework is illustrated with a number of examples including pointing, multitasking, skim-reading, online purchasing, Signal Detection Theory and diagnosis, and the influence of reputation on purchasing decisions. Importantly, these examples span from perceptual/motor coordination, through cognition to social interaction. Finally, the lecture discusses the challenging idea that people seek to find optimal strategies and also discusses the implications for behavioral investigation in HCI. View full abstract»


    The Answer Machine

    Feldman, S.
    DOI: 10.2200/S00442ED1V01Y201208ICR023
    Copyright Year: 2012

    Morgan and Claypool eBooks

    The Answer Machine is a practical, non-technical guide to the technologies behind information seeking and analysis. It introduces search and content analytics to software buyers, knowledge managers, and searchers who want to understand and design effective online environments. The book describes how search evolved from an expert-only to an end user tool. It provides an overview of search engines, categorization and clustering, natural language processing, content analytics, and visualization technologies. Detailed profiles for Web search, eCommerce search, eDiscovery, and enterprise search contrast the types of users, uses, tasks, technologies, and interaction designs for each. These variables shape each application, although the underlying technologies are the same. Types of information tasks and the trade-offs between precision and recall, time, volume and precision, and privacy vs. personalization are discussed within this context. The book examines trends toward convenient, context-aware computing, big data and analytics technologies, conversational systems, and answer machines. The Answer Machine explores IBM Watson's DeepQA technology and describes how it is used to answer health care and Jeopardy questions. The book concludes by discussing the implications of these advances: how they will change the way we run our businesses, practice medicine, govern, or conduct our lives in the digital age. Table of Contents: Introduction / The Query Process and Barriers to Finding Information Online / Online Search: An Evolution / Search and Discovery Technologies: An Overview / Information Access: A Spectrum of Needs and Uses / Future Tense: The Next Era in Information Access and Discovery / Answer Machines View full abstract»
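The precision/recall trade-off mentioned in the abstract has a standard definition that is worth spelling out. A minimal sketch with invented document ids, not taken from the book:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query.

    retrieved: set of document ids the engine returned.
    relevant:  set of document ids that actually answer the query.
    Precision = fraction of retrieved docs that are relevant;
    recall    = fraction of relevant docs that were retrieved.
    """
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# returning more documents tends to raise recall but lower precision
p, r = precision_recall({1, 2, 3, 4}, {2, 4, 5})
```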


    Provenance: An Introduction to PROV

    Moreau, L. ; Groth, P.
    DOI: 10.2200/S00528ED1V01Y201308WBE007
    Copyright Year: 2013

    Morgan and Claypool eBooks

    The World Wide Web is now deeply intertwined with our lives, and has become a catalyst for a data deluge, making vast amounts of data available online, at a click of a button. With Web 2.0, users are no longer passive consumers, but active publishers and curators of data. Hence, from science to food manufacturing, from data journalism to personal well-being, from social media to art, there is a strong interest in provenance, a description of what influenced an artifact, a data set, a document, a blog, or any resource on the Web and beyond. Provenance is a crucial piece of information that can help a consumer make a judgment as to whether something can be trusted. Provenance is no longer seen as a curiosity in art circles, but it is regarded as pragmatically, ethically, and methodologically crucial for our day-to-day data manipulation and curation activities on the Web. Following the recent publication of the PROV standard for provenance on the Web, which the two authors actively helped shape in the Provenance Working Group at the World Wide Web Consortium, this Synthesis lecture is a hands-on introduction to PROV aimed at Web and linked data professionals. By means of recipes, illustrations, a website at www.provbook.org, and tools, it guides practitioners through a variety of issues related to provenance: how to generate provenance, publish it on the Web, make it discoverable, and how to utilize it. Equipped with this knowledge, practitioners will be in a position to develop novel applications that can bring openness, trust, and accountability. Table of Contents: Preface / Acknowledgments / Introduction / A Data Journalism Scenario / The PROV Ontology / Provenance Recipes / Validation, Compliance, Quality, Replay / Provenance Management / Conclusion / Bibliography / Authors' Biographies / Index View full abstract»
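The flavour of PROV's core relations can be sketched with a toy in-memory record. The artifact names below ("chart.png", "plotting-run-1", "dataset.csv") are invented, and this is plain Python, not the PROV serialization or any RDF library API; real applications would emit PROV-O/RDF as the book describes:

```python
# A toy provenance record using three core PROV-style relations
# (entity wasGeneratedBy activity; activity used entity; entity wasAttributedTo agent).
provenance = [
    ("chart.png",      "wasGeneratedBy",  "plotting-run-1"),
    ("plotting-run-1", "used",            "dataset.csv"),
    ("chart.png",      "wasAttributedTo", "analyst:alice"),
]

def influences(artifact, prov):
    """Follow wasGeneratedBy/used edges to list what influenced an artifact."""
    out = []
    for s, p, o in prov:
        if s == artifact and p in ("wasGeneratedBy", "used"):
            out.append(o)
            out.extend(influences(o, prov))
    return out
```

A consumer deciding whether to trust `chart.png` can trace it back to the activity that produced it and the data it consumed.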


    A Primer on Cluster Analysis

    Bezdek, James C.
    Publication Year: 2008

    IEEE eLearning Library Courses

    This course - the first in a series of three - provides a foundation for understanding the field of cluster analysis in unlabeled data. The target audience for this course comprises undergraduate and graduate students majoring in engineering and science, as well as practicing engineers and scientists interested in either research about or applications of clustering to real world problems such as data mining, image analysis and bioinformatics. The subject matter is widely available in a number of standard textbooks given in the references below. The course begins with a discussion of the general nature of clustering. Three problems are identified: tendency assessment, partitioning and validation. Two types of data are discussed: object vector data, and pairwise object relational data. Next, I develop the mathematical structure needed to carry out clustering algorithms, discussing the notions of similarity, label vectors, partition matrices (U) and point prototypes (V). The second part of the course contains a description (and pseudo code) for one algorithm each from the four major categories of clustering methods. Specifically, I discuss and illustrate with a numerical example: (i) the U only model for single linkage clustering; (ii) the V only model for clustering with Kohonen's self-organizing map; (iii) the (U,V) model for clustering with the hard and fuzzy c-means models; and (iv) the (U,V,+) model for clustering using the expectation-maximization algorithm for Gaussian mixture decomposition. View full abstract»
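The (U,V) alternation behind the hard c-means model mentioned in (iii) can be sketched compactly: assign each point to its nearest prototype (the partition U), then recompute each prototype as the mean of its members (V), and repeat. A minimal sketch with invented sample data, not the course's pseudo code:

```python
def hard_c_means(X, V, iters=10):
    """Alternating optimisation for the hard c-means (k-means) model.

    X: list of points as tuples; V: initial prototype list (mutated in place).
    Each pass builds the partition U (index of nearest prototype per point),
    then moves each prototype to the mean of its assigned points.
    """
    U = []
    for _ in range(iters):
        U = [min(range(len(V)),
                 key=lambda k: sum((x - v) ** 2 for x, v in zip(p, V[k])))
             for p in X]
        for k in range(len(V)):
            members = [p for p, u in zip(X, U) if u == k]
            if members:
                V[k] = tuple(sum(c) / len(members) for c in zip(*members))
    return U, V

# two well-separated 2-D clusters
U, V = hard_c_means([(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)],
                    [(0.0, 0.0), (9.0, 9.0)])
```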


    Visual Information Retrieval using Java and LIRE

    Lux, M. ; Marques, O.
    DOI: 10.2200/S00468ED1V01Y201301ICR025
    Copyright Year: 2013

    Morgan and Claypool eBooks

    Visual information retrieval (VIR) is an active and vibrant research area, which attempts to provide means for organizing, indexing, annotating, and retrieving visual information (images and videos) from large, unstructured repositories. The goal of VIR is to retrieve matches ranked by their relevance to a given query, which is often expressed as an example image and/or a series of keywords. During its early years (1995-2000), the research efforts were dominated by content-based approaches contributed primarily by the image and video processing community. During the past decade, it was widely recognized that the challenges imposed by the lack of coincidence between an image's visual contents and its semantic interpretation, also known as semantic gap, required a clever use of textual metadata (in addition to information extracted from the image's pixel contents) to make image and video retrieval solutions efficient and effective. The need to bridge (or at least narrow) the semantic gap has been one of the driving forces behind current VIR research. Additionally, other related research problems and market opportunities have started to emerge, offering a broad range of exciting problems for computer scientists and engineers to work on. In this introductory book, we focus on a subset of VIR problems where the media consists of images, and the indexing and retrieval methods are based on the pixel contents of those images -- an approach known as content-based image retrieval (CBIR). We present an implementation-oriented overview of CBIR concepts, techniques, algorithms, and figures of merit. Most chapters are supported by examples written in Java, using Lucene (an open-source Java-based indexing and search implementation) and LIRE (Lucene Image REtrieval), an open-source Java-based library for CBIR.
Table of Contents: Introduction / Information Retrieval: Selected Concepts and Techniques / Visual Features / Indexing Visual Features / LIRE: An Extensible Java CBIR Library / Concluding Remarks View full abstract»
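The core CBIR loop, extract a global feature per image and rank by distance, can be illustrated without LIRE itself. Below is a Python analogue (the book's examples are in Java) using a grey-level histogram as the feature and an L1 dissimilarity for ranking; pixel values are invented:

```python
def histogram(pixels, bins=4):
    """Quantise 8-bit grey values into a normalised histogram (a global image feature)."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in h]

def l1_distance(h1, h2):
    """L1 (city-block) dissimilarity between two feature histograms, used for ranking matches."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

dark = histogram([10, 20, 30, 40])        # all pixels fall in the darkest bin
bright = histogram([250, 240, 230, 220])  # all pixels fall in the brightest bin
```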


    Semantic Breakthrough in Drug Discovery

    Chen, B. ; Wang, H. ; Ding, Y. ; Wild, D.
    DOI: 10.2200/S00600ED1V01Y201409WEB009
    Copyright Year: 2014

    Morgan and Claypool eBooks

    The current drug development paradigm---sometimes expressed as, ``One disease, one target, one drug''---is under question, as relatively few drugs have reached the market in the last two decades. Meanwhile, the research focus of drug discovery is being placed on the study of drug action on biological systems as a whole, rather than on individual components of such systems. The vast amount of biological information about genes and proteins and their modulation by small molecules is pushing drug discovery to its next critical steps, involving the integration of chemical knowledge with these biological databases. Systematic integration of these heterogeneous datasets and the provision of algorithms to mine the integrated datasets would enable investigation of the complex mechanisms of drug action; however, traditional approaches face challenges in the representation and integration of multi-scale datasets, and in the discovery of underlying knowledge in the integrated datasets. The Semantic Web, envisioned to enable machines to understand and respond to complex human requests and to retrieve relevant, yet distributed, data, has the potential to trigger system-level chemical-biological innovations. Chem2Bio2RDF is presented as an example of utilizing Semantic Web technologies to enable intelligent analyses for drug discovery. Table of Contents: Introduction / Data Representation and Integration Using RDF / Data Representation and Integration Using OWL / Finding Complex Biological Relationships in PubMed Articles using Bio-LDA / Integrated Semantic Approach for Systems Chemical Biology Knowledge Discovery / Semantic Link Association Prediction / Conclusions / References / Authors' Biographies View full abstract»


    Information Theory Tools for Image Processing

    Feixas, M. ; Bardera, A. ; Rigau, J. ; Xu, Q.
    DOI: 10.2200/S00560ED1V01Y201312CGR015
    Copyright Year: 2014

    Morgan and Claypool eBooks

    Information Theory (IT) tools, widely used in many scientific fields such as engineering, physics, genetics, neuroscience, and many others, are also useful transversal tools in image processing. In this book, we present the basic concepts of IT and how they have been used in the image processing areas of registration, segmentation, video processing, and computational aesthetics. Some of the approaches presented, such as the application of mutual information to registration, are the state of the art in the field. All techniques presented in this book have been previously published in peer-reviewed conference proceedings or international journals. We have stressed here their common aspects, and presented them in a unified way, so as to make clear to the reader which problems IT tools can help to solve, which specific tools to use, and how to apply them. The IT basics are presented so as to be self-contained in the book. The intended audiences are students and practitioners of image processing and related areas such as computer graphics and visualization. In addition, students and practitioners of IT will be interested in knowing about these applications. View full abstract»
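The mutual information measure that drives MI-based registration can be computed directly from a joint intensity histogram. A minimal sketch (flat lists of quantised intensities stand in for images; real registration maximises this over candidate transforms):

```python
from collections import Counter
from math import log2

def mutual_information(img_a, img_b):
    """Mutual information between two equally sized images given as flat
    lists of quantised intensities: I(A;B) = sum p(a,b) log2(p(a,b)/(p(a)p(b))).
    High MI means knowing one image's intensity predicts the other's,
    i.e. the images are well aligned."""
    n = len(img_a)
    pa, pb = Counter(img_a), Counter(img_b)
    pab = Counter(zip(img_a, img_b))
    return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())
```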


    Spoken Dialogue Systems

    Jokinen, K. ; McTear, M.
    DOI: 10.2200/S00204ED1V01Y200910HLT005
    Copyright Year: 2009

    Morgan and Claypool eBooks

    Considerable progress has been made in recent years in the development of dialogue systems that support robust and efficient human-machine interaction using spoken language. Spoken dialogue technology allows various interactive applications to be built and used for practical purposes, and research focuses on issues that aim to increase the system's communicative competence by including aspects of error correction, cooperation, multimodality, and adaptation in context. This book gives a comprehensive view of state-of-the-art techniques that are used to build spoken dialogue systems. It provides an overview of the basic issues such as system architectures, various dialogue management methods, system evaluation, and also surveys advanced topics concerning extensions of the basic model to more conversational setups. The goal of the book is to provide an introduction to the methods, problems, and solutions that are used in dialogue system development and evaluation. It presents dialogue modelling and system development issues relevant in both academic and industrial environments and also discusses requirements and challenges for advanced interaction management and future research. Table of Contents: Preface / Introduction to Spoken Dialogue Systems / Dialogue Management / Error Handling / Case Studies: Advanced Approaches to Dialogue Management / Advanced Issues / Methodologies and Practices of Evaluation / Future Directions / References / Author Biographies View full abstract»
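The simplest dialogue management method such books cover is a finite-state manager: dialogue states, expected user acts, and transitions between them. A toy sketch (the travel-booking states and act names are invented; real systems layer error handling and adaptivity on top, as the abstract notes):

```python
# A minimal finite-state dialogue manager: (state, user act) -> next state.
TRANSITIONS = {
    ("ask_origin", "inform_origin"): "ask_destination",
    ("ask_destination", "inform_destination"): "confirm",
    ("confirm", "affirm"): "done",
    ("confirm", "deny"): "ask_origin",  # simple error-recovery loop
}

def advance(state, user_act):
    """Return the next dialogue state; an unexpected act re-prompts in place."""
    return TRANSITIONS.get((state, user_act), state)
```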


    Bad to the Bone: Crafting Electronic Systems with BeagleBone and BeagleBone Black

    Barrett, S. ; Kridner, J.
    DOI: 10.2200/S00500ED1V01Y201304DCS041
    Copyright Year: 2013

    Morgan and Claypool eBooks

    This comprehensive book provides detailed materials for both novice and experienced programmers using all BeagleBone variants which host a powerful 32-bit, super-scalar TI Sitara ARM Cortex A8 processor. Authored by Steven F. Barrett and Jason Kridner, a seasoned ECE educator along with the founder of Beagleboard.org, respectively, the work may be used in a wide variety of projects from science fair projects to university courses and senior design projects to first prototypes of very complex systems. Beginners may access the power of the "Bone" through the user-friendly Bonescript examples. Seasoned users may take full advantage of the Bone's power using the underlying Linux-based operating system, a host of feature extension boards (Capes) and a wide variety of Linux community open source libraries. The book contains background theory on system operation coupled with many well-documented, illustrative examples. Examples for novice users are centered on motivational, fun robot projects, while advanced projects follow the theme of assistive technology and image processing applications. View full abstract»


    Search-Based Applications: At the Confluence of Search and Database Technologies

    Grefenstette, G. ; Wilber, L.
    DOI: 10.2200/S00320ED1V01Y201012ICR017
    Copyright Year: 2010

    Morgan and Claypool eBooks

    We are poised at a major turning point in the history of information management via computers. Recent evolutions in computing, communications, and commerce are fundamentally reshaping the ways in which we humans interact with information, and generating enormous volumes of electronic data along the way. As a result of these forces, what will data management technologies, and their supporting software and system architectures, look like in ten years? It is difficult to say, but we can see the future taking shape now in a new generation of information access platforms that combine strategies and structures of two familiar -- and previously quite distinct -- technologies, search engines and databases, and in a new model for software applications, the Search-Based Application (SBA), which offers a pragmatic way to solve both well-known and emerging information management challenges as of now. Search engines are the world's most familiar and widely deployed information access tool, used by hundreds of millions of people every day to locate information on the Web, but few are aware they can now also be used to provide precise, multidimensional information access and analysis that is hard to distinguish from current database applications, yet endowed with the usability and massive scalability of Web search. In this book, we hope to introduce Search Based Applications to a wider audience, using real case studies to show how this flexible technology can be used to intelligently aggregate large volumes of unstructured data (like Web pages) and structured data (like database content), and to make that data available in a highly contextual, quasi real-time manner to a wide base of users for a varied range of purposes. We also hope to shed light on the general convergences underway in search and database disciplines, convergences that make SBAs possible, and which serve as harbingers of information management paradigms and technologies to come.
Table of Contents: Search Based Applications / Evolving Business Information Access Needs / Origins and Histories / Data Models and Storage / Data Collection/Population / Data Processing / Data Retrieval / Data Security, Usability, Performance, Cost / Summary Evolutions and Convergences / SBA Platforms / SBA Uses and Preconditions / Anatomy of a Search Based Application / Case Study: GEFCO / Case Study: Urbanizer / Case Study: National Postal Agency / Future Directions View full abstract»


    The Shortest-Path Problem: Analysis and Comparison of Methods

    Ortega-Arranz, H. ; R. Llanos, D. ; Gonzalez-Escribano, A.

    DOI: 10.2200/S00618ED1V01Y201412TCS001
    Copyright Year: 2014

    Morgan and Claypool eBooks

    Many applications in different domains need to calculate the shortest path between two points in a graph. In this book we describe this shortest-path problem in detail, starting with the classic Dijkstra's algorithm and moving to more advanced solutions that are currently applied to road network routing, including the use of heuristics and precomputation techniques. Since several of these improvements involve subtle changes to the search space, it may be difficult to appreciate their benefits in terms of time or space requirements. To make methods more comprehensible and to facilitate their comparison, this book presents a single case study that serves as a common benchmark. The book also compares the search spaces explored by the methods described, both from a quantitative and qualitative point of view, and including an analysis of the number of reached and settled nodes by different methods for a particular topology. View full abstract»
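The classic Dijkstra baseline the abstract starts from fits in a few lines; the "reached" nodes are those pushed onto the priority queue and the "settled" nodes are those popped with a final distance. A minimal sketch with an invented toy graph:

```python
import heapq

def dijkstra(graph, source):
    """Classic Dijkstra with a binary heap.

    graph maps node -> list of (neighbour, edge weight).
    Returns the dict of shortest distances from source; nodes become
    'settled' when popped, 'reached' when first pushed.
    """
    dist = {source: 0}
    heap = [(0, source)]
    settled = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:        # stale heap entry (lazy deletion)
            continue
        settled.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

toy = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
```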


    Mathematical Tools for Shape Analysis and Description

    Biasotti, S. ; Falcidieno, B. ; Giorgi, D. ; Spagnuolo, M.
    DOI: 10.2200/S00588ED1V01Y201407CGR016
    Copyright Year: 2014

    Morgan and Claypool eBooks

    This book is a guide for researchers and practitioners to the new frontiers of 3D shape analysis and the complex mathematical tools most methods rely on. The target reader includes students, researchers and professionals with an undergraduate mathematics background, who wish to understand the mathematics behind shape analysis. The authors begin with a quick review of basic concepts in geometry, topology, differential geometry, and proceed to advanced notions of algebraic topology, always keeping an eye on the application of the theory, through examples of shape analysis methods such as 3D segmentation, correspondence, and retrieval. A number of research solutions in the field come from advances in pure and applied mathematics, as well as from the re-reading of classical theories and their adaptation to the discrete setting. In a world where disciplines (fortunately) have blurred boundaries, the authors believe that this guide will help to bridge the distance between theory and practice. Table of Contents: Acknowledgments / Figure Credits / About this Book / 3D Shape Analysis in a Nutshell / Geometry, Topology, and Shape Representation / Differential Geometry and Shape Analysis / Spectral Methods for Shape Analysis / Maps and Distances between Spaces / Algebraic Topology and Topology Invariants / Differential Topology and Shape Analysis / Reeb Graphs / Morse and Morse-Smale Complexes / Topological Persistence / Beyond Geometry and Topology / Resources / Bibliography / Authors' Biographies View full abstract»
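One of the simplest topological invariants treated in such books, the Euler characteristic V - E + F, is easy to compute for a triangle mesh in the discrete setting the abstract mentions. A small sketch (the tetrahedron test mesh is our own example):

```python
def euler_characteristic(faces):
    """Euler characteristic V - E + F of a triangle mesh.

    faces: list of triangles as vertex-index triples. The value is a
    topological invariant: 2 for a sphere-like closed mesh, 1 for a disk,
    0 for a torus.
    """
    vertices = {v for f in faces for v in f}
    edges = {frozenset(e) for f in faces
             for e in ((f[0], f[1]), (f[1], f[2]), (f[0], f[2]))}
    return len(vertices) - len(edges) + len(faces)
```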


    Mathematical Basics of Motion and Deformation in Computer Graphics

    Anjyo, K. ; Ochiai, H.
    DOI: 10.2200/S00599ED1V01Y201409CGR017
    Copyright Year: 2014

    Morgan and Claypool eBooks

    This synthesis lecture presents an intuitive introduction to the mathematics of motion and deformation in computer graphics. Starting with familiar concepts in graphics, such as Euler angles, quaternions, and affine transformations, we illustrate that a mathematical theory behind these concepts enables us to develop the techniques for efficient/effective creation of computer animation. This book, therefore, serves as a good guidepost to mathematics (differential geometry and Lie theory) for students of geometric modeling and animation in computer graphics. Experienced developers and researchers will also benefit from this book, since it gives a comprehensive overview of mathematical approaches that are particularly useful in character modeling, deformation, and animation. Table of Contents: Preface / Symbols and Notations / Introduction / Rigid Transformation / Affine Transformation / Exponential and Logarithm of Matrices / 2D Affine Transformation between Two Triangles / Global 2D Shape Interpolation / Parametrizing 3D Positive Affine Transformations / Further Readings / Bibliography / Authors' Biographies View full abstract»
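The quaternion rotation the lecture builds on is concrete enough to sketch: a unit quaternion q encodes a rotation, and a vector v is rotated by the sandwich product q v q*. A minimal illustration, not the book's code:

```python
from math import sin, cos, pi

def q_mul(q, r):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, axis, angle):
    """Rotate 3-D vector v about a unit axis by angle (radians) via q v q*."""
    s, c = sin(angle / 2), cos(angle / 2)
    q = (c, axis[0]*s, axis[1]*s, axis[2]*s)      # unit rotation quaternion
    q_conj = (q[0], -q[1], -q[2], -q[3])
    _, x, y, z = q_mul(q_mul(q, (0.0, *v)), q_conj)
    return (x, y, z)
```

Unlike Euler angles, this representation composes rotations by quaternion multiplication and interpolates smoothly, which is why it dominates character animation.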


    A Short Introduction to Preferences: Between AI and Social Choice

    Rossi, F. ; Venable, K. ; Walsh, T.
    DOI: 10.2200/S00372ED1V01Y201107AIM014
    Copyright Year: 2011

    Morgan and Claypool eBooks

    Computational social choice is an expanding field that merges classical topics like economics and voting theory with more modern topics like artificial intelligence, multiagent systems, and computational complexity. This book provides a concise introduction to the main research lines in this field, covering aspects such as preference modelling, uncertainty reasoning, social choice, stable matching, and computational aspects of preference aggregation and manipulation. The book is centered around the notion of preference reasoning, both in the single-agent and the multi-agent setting. It presents the main approaches to modeling and reasoning with preferences, with particular attention to two popular and powerful formalisms, soft constraints and CP-nets. The authors consider preference elicitation and various forms of uncertainty in soft constraints. They review the most relevant results in voting, with special attention to computational social choice. Finally, the book considers preferences in matching problems. The book is intended for students and researchers who may be interested in an introduction to preference reasoning and multi-agent preference aggregation, and who want to know the basic notions and results in computational social choice. Table of Contents: Introduction / Preference Modeling and Reasoning / Uncertainty in Preference Reasoning / Aggregating Preferences / Stable Marriage Problems View full abstract»
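    To make the voting side concrete, here is a minimal sketch of one classical rule studied in computational social choice, the Borda count (an illustration of the rule, not code from the book):

```python
from collections import Counter

def borda(ballots):
    # Borda count: a candidate ranked i-th (0-based) on a ballot over m
    # candidates earns m - 1 - i points; the highest total wins.
    scores = Counter()
    for ballot in ballots:
        m = len(ballot)
        for i, candidate in enumerate(ballot):
            scores[candidate] += m - 1 - i
    return scores

# Three voters rank candidates a, b, c from most to least preferred.
ballots = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
scores = borda(ballots)
print(scores.most_common())  # a: 4, b: 3, c: 2 -> a wins
```

    Much of the computational interest lies beyond evaluation: for many rules, deciding whether a voter can benefit by misreporting a ballot (manipulation) is computationally hard, which is one of the field's central themes.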

    Mobile Platforms and Development Environments

    Helal, S. ; Bose, R. ; Li, W.
    DOI: 10.2200/S00404ED1V01Y201202MPC009
    Copyright Year: 2012

    Morgan and Claypool eBooks

    Mobile platform development has lately become a technological war zone with extremely dynamic and fluid movement, especially in the smart phone and tablet market space. This Synthesis lecture is a guide to the latest developments of the key mobile platforms that are shaping the mobile platform industry. The book covers the three currently dominant native platforms -- iOS, Android and Windows Phone -- along with the device-agnostic HTML5 mobile web platform. The lecture also covers location-based services (LBS) which can be considered as a platform in its own right. The lecture utilizes a sample application (TwitterSearch) that the authors show programmed on each of the platforms. Audiences who may benefit from this lecture include: (1) undergraduate and graduate students taking mobile computing classes or self-learning the mobile platform programmability road map; (2) academic and industrial researchers working on mobile computing R&D projects; (3) mobile app developers for a specific platform who may be curious about other platforms; (4) system integrator consultants and firms concerned with mobilizing businesses and enterprise apps; and (5) industries including health care, logistics, mobile workforce management, mobile commerce and payment systems and mobile search and advertisement. Table of Contents: From the Newton to the iPhone / iOS / Android / Windows Phone / Mobile Web / Platform-in-Platform: Location-Based Services (LBS) / The Future of Mobile Platforms / TwitterSearch Sample Application View full abstract»

    Analysis Techniques for Information Security

    Datta, A. ; Jha, S. ; Li, N. ; Melski, D.
    DOI: 10.2200/S00260ED1V01Y201003SPT002
    Copyright Year: 2010

    Morgan and Claypool eBooks

    Increasingly our critical infrastructures are reliant on computers. We see examples of such infrastructures in several domains, including medical, power, telecommunications, and finance. Although automation has advantages, increased reliance on computers exposes our critical infrastructures to a wider variety and higher likelihood of accidental failures and malicious attacks. Disruption of services caused by such undesired events can have catastrophic effects, such as disruption of essential services and huge financial losses. The increased reliance of critical services on our cyberinfrastructure and the dire consequences of security breaches have highlighted the importance of information security. Authorization, security protocols, and software security are three central areas in security in which there have been significant advances in developing systematic foundations and analysis methods that work for practical systems. This book provides an introduction to this work, covering representative approaches, illustrated by examples, and providing pointers to additional work in the area. Table of Contents: Introduction / Foundations / Detecting Buffer Overruns Using Static Analysis / Analyzing Security Policies / Analyzing Security Protocols View full abstract»

    Controlling Energy Demand in Mobile Computing Systems

    Ellis, C.
    DOI: 10.2200/S00089ED1V01Y200704MPC002
    Copyright Year: 2007

    Morgan and Claypool eBooks

    This lecture provides an introduction to the problem of managing the energy demand of mobile devices. Reducing energy consumption, primarily with the goal of extending the lifetime of battery-powered devices, has emerged as a fundamental challenge in mobile computing and wireless communication. The focus of this lecture is on a systems approach where software techniques exploit state-of-the-art architectural features rather than relying only upon advances in lower-power circuitry or the slow improvements in battery technology to solve the problem. Fortunately, there are many opportunities to innovate on managing energy demand at the higher levels of a mobile system. Increasingly, device components offer low power modes that enable software to directly affect the energy consumption of the system. The challenge is to design resource management policies to effectively use these capabilities. The lecture begins by providing the necessary foundations, including basic energy terminology and widely accepted metrics, system models of how power is consumed by a device, and measurement methods and tools available for experimental evaluation. For components that offer low power modes, management policies are considered that address the questions of when to power down to a lower power state and when to power back up to a higher power state. These policies rely on detecting periods when the device is idle as well as techniques for modifying the access patterns of a workload to increase opportunities for power state transitions. For processors with frequency and voltage scaling capabilities, dynamic scheduling policies are developed that determine points during execution when those settings can be changed without harming quality of service constraints. The interactions and tradeoffs among the power management policies of multiple devices are discussed. 
We explore how the effective power management on one component of a system may have either a positive or negative impact on overall energy consumption or on the design of policies for another component. The important role that application-level involvement may play in energy management is described, with several examples of cross-layer cooperation. Application program interfaces (APIs) that provide information flow across the application-OS boundary are valuable tools in encouraging development of energy-aware applications. Finally, we summarize the key lessons of this lecture and discuss future directions in managing energy demand. View full abstract»
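    As a book-independent illustration of the policy questions above, the following sketch models a fixed-timeout power-down policy: the device drops to a sleep state after `timeout` seconds of idleness and pays a wake-up energy cost on the next access. All power levels and costs here are made-up numbers for illustration:

```python
def energy(idle_gaps, timeout, p_idle=1.0, p_sleep=0.1, wake_cost=2.0):
    # Energy (in joules, with powers in watts and gaps in seconds) consumed
    # across a trace of idle gaps under a fixed-timeout power-down policy.
    total = 0.0
    for gap in idle_gaps:
        if gap <= timeout:
            total += gap * p_idle               # never slept during this gap
        else:
            total += timeout * p_idle           # idle until the timeout fired
            total += (gap - timeout) * p_sleep  # slept for the remainder
            total += wake_cost                  # transition cost on next access
    return total

gaps = [0.5, 3.0, 10.0]  # seconds of idleness between accesses
print(energy(gaps, timeout=1.0))   # timeout policy
print(sum(gaps) * 1.0)             # baseline: never sleep, always pay p_idle
```

    The tension the lecture analyzes is visible even here: a short timeout saves energy on long gaps but wastes the wake-up cost on gaps barely longer than the timeout, which is why adaptive and prediction-based policies exist.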

    Surface Computing and Collaborative Analysis Work

    Brown, J. ; Wilson, J. ; Gossage, S. ; Hack, C.
    DOI: 10.2200/S00492ED1V01Y201303HCI019
    Copyright Year: 2013

    Morgan and Claypool eBooks

    Large surface computing devices (wall-mounted or tabletop) with touch interfaces and their application to collaborative data analysis, an increasingly important and prevalent activity, is the primary topic of this book. Our goals are to outline the fundamentals of surface computing (a still maturing technology), review relevant work on collaborative data analysis, describe frameworks for understanding collaborative processes, and provide a better understanding of the opportunities for research and development. We describe surfaces as display technologies with which people can interact directly, and emphasize how interaction design changes when designing for large surfaces. We review efforts to use large displays, surfaces or mixed display environments to enable collaborative analytic activity. Collaborative analysis is important in many domains, but to provide concrete examples and a specific focus, we frequently consider analysis work in the security domain, and in particular the challenges security personnel face in securing networks from attackers, and intelligence analysts encounter when analyzing intelligence data. Both of these activities are becoming increasingly collaborative endeavors, and there are huge opportunities for improving collaboration by leveraging surface computing. This work highlights for interaction designers and software developers the particular challenges and opportunities presented by interaction with surfaces. We have reviewed hundreds of recent research papers, and report on advancements in the fields of surface-enabled collaborative analytic work, interactive techniques for surface technologies, and useful theory that can provide direction to interaction design work. We also offer insight into issues that arise when developing applications for multi-touch surfaces derived from our own experiences creating collaborative applications. 
We present these insights at a level appropriate for all members of the software design and development team. Table of Contents: List of Figures / Acknowledgments / Figure Credits / Purpose and Direction / Surface Technologies and Collaborative Analysis Systems / Interacting with Surface Technologies / Collaborative Work Enabled by Surfaces / The Theory and the Design of Surface Applications / The Development of Surface Applications / Concluding Comments / Bibliography / Authors' Biographies View full abstract»

    Computer Architecture Techniques for Power-Efficiency

    Kaxiras, S. ; Martonosi, M.
    DOI: 10.2200/S00119ED1V01Y200805CAC004
    Copyright Year: 2008

    Morgan and Claypool eBooks

    In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time architects were successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these costs is the inexorable increase in power dissipation and power density in processors. Power dissipation issues have catalyzed new topic areas in computer architecture, resulting in a substantial body of work on more power-efficient architectures. Power dissipation, coupled with diminishing performance gains, was also the main cause for the switch from single-core to multi-core architectures and for the slowdown in frequency increases. This book aims to document some of the most important architectural techniques that were invented, proposed, and applied to reduce both dynamic power and static power dissipation in processors and memory hierarchies. A significant number of techniques have been proposed for a wide range of situations and this book synthesizes those techniques by focusing on their common characteristics. Table of Contents: Introduction / Modeling, Simulation, and Measurement / Using Voltage and Frequency Adjustments to Manage Dynamic Power / Optimizing Capacitance and Switching Activity to Reduce Dynamic Power / Managing Static (Leakage) Power / Conclusions View full abstract»
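    The leverage behind the voltage and frequency adjustments the book covers comes from the standard dynamic-power relation P ≈ αCV²f. A small sketch with hypothetical device parameters shows why scaling voltage along with frequency is so effective:

```python
def dynamic_power(a, C, V, f):
    # Dynamic CMOS power: activity factor a, switched capacitance C (farads),
    # supply voltage V (volts), clock frequency f (hertz).
    return a * C * V ** 2 * f

base = dynamic_power(a=0.5, C=1e-9, V=1.2, f=2e9)

# Halve the frequency and scale the voltage down proportionally with it.
scaled = dynamic_power(a=0.5, C=1e-9, V=0.6, f=1e9)

print(base / scaled)  # power drops by ~8x (cubic in the scaling factor);
                      # energy per task drops ~4x, since the task now
                      # takes twice as long to run
```

    The quadratic dependence on V is the key: frequency scaling alone saves power but not energy per task, while joint voltage/frequency scaling saves both, which is what makes DVFS a cornerstone technique.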

    Reversible Digital Watermarking: Theory and Practices

    Naskar, R. ; Chakraborty, R.
    DOI: 10.2200/S00567ED1V01Y201401SPT010
    Copyright Year: 2014

    Morgan and Claypool eBooks

    Digital Watermarking is the art and science of embedding information in existing digital content for Digital Rights Management (DRM) and authentication. Reversible watermarking is a class of (fragile) digital watermarking that not only authenticates multimedia data content, but also helps to maintain perfect integrity of the original multimedia "cover data." In non-reversible watermarking schemes, after embedding and extraction of the watermark, the cover data undergoes some distortions, although perceptually negligible in most cases. In contrast, in reversible watermarking, zero-distortion of the cover data is achieved, that is the cover data is guaranteed to be restored bit-by-bit. Such a feature is desirable when highly sensitive data is watermarked, e.g., in military, medical, and legal imaging applications. This work deals with development, analysis, and evaluation of state-of-the-art reversible watermarking techniques for digital images. In this work we establish the motivation for research on reversible watermarking using a couple of case studies with medical and military images. We present a detailed review of the state-of-the-art research in this field. We investigate the various subclasses of reversible watermarking algorithms, their operating principles, and computational complexities. Along with this, to give the readers an idea about the detailed working of a reversible watermarking scheme, we present a prediction-based reversible watermarking technique, recently published by us. We discuss the major issues and challenges behind implementation of reversible watermarking techniques, and recently proposed solutions for them. Finally, we provide an overview of some open problems and scope of work for future researchers in this area. View full abstract»
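    To give a concrete feel for reversibility, here is a minimal sketch of difference expansion (Tian's classic scheme, not the authors' prediction-based technique): one bit is hidden in a pixel pair, and both the bit and the original pixels are recovered exactly. Real schemes must additionally handle pixel overflow and carry a location map:

```python
def embed(x, y, bit):
    # Hide one bit in pixel pair (x, y) by expanding their difference.
    avg, diff = (x + y) // 2, x - y
    diff = 2 * diff + bit                 # expand difference, append the bit
    return avg + (diff + 1) // 2, avg - diff // 2

def extract(x2, y2):
    # Recover the bit and restore the original pair exactly.
    diff = x2 - y2
    bit = diff & 1
    diff //= 2                            # undo the expansion
    avg = (x2 + y2) // 2
    return avg + (diff + 1) // 2, avg - diff // 2, bit

x2, y2 = embed(100, 96, 1)
print((x2, y2))          # watermarked pixel pair
print(extract(x2, y2))   # -> (100, 96, 1): cover restored bit-by-bit
```

    The average of the pair is invariant under embedding, so the decoder can reconstruct both the payload bit and the untouched cover values, which is exactly the zero-distortion property described above.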

    User-Centered Data Management

    Catarci, T. ; Dix, A. ; Kimani, S. ; Santucci, G.
    DOI: 10.2200/S00285ED1V01Y201006DTM006
    Copyright Year: 2010

    Morgan and Claypool eBooks

    This lecture covers several core issues in user-centered data management, including how to design usable interfaces that suitably support database tasks, and relevant approaches to visual querying, information visualization, and visual data mining. Novel interaction paradigms, e.g., mobile and interfaces that go beyond the visual dimension, are also discussed. Table of Contents: Why User-Centered / The Early Days: Visual Query Systems / Beyond Querying / More Advanced Applications / Non-Visual Interfaces / Conclusions View full abstract»

    Near Field Communication: Recent Developments and Library Implications

    McHugh, S. ; Yarmey, K.
    DOI: 10.2200/S00570ED1V01Y201403ETL002
    Copyright Year: 2014

    Morgan and Claypool eBooks

    Near Field Communication is a radio frequency technology that allows objects, such as mobile phones, computers, tags, or posters, to exchange information wirelessly across a small distance. This report on the progress of Near Field Communication reviews the features and functionality of the technology and summarizes the broad spectrum of its current and anticipated applications. We explore the development of NFC technology in recent years, introduce the major stakeholders in the NFC ecosystem, and project its movement toward mainstream adoption. Several examples of early implementation of NFC in libraries are highlighted, primarily involving the use of NFC to enhance discovery by linking books or other physical objects with digital information about library resources, but also including applications of NFC to collection management and self-checkout. Future uses of NFC in libraries, such as smart posters or other enhanced outreach, are envisioned as well as the potential for the "touch paradigm" and "Internet of things" to transform the ways in which library users interact with the information environment. Conscious of the privacy and security of our patrons, we also address continuing concerns related to NFC technology and its expected applications, recommending caution, awareness, and education as immediate next steps for librarians. View full abstract»
