David Abramson - IEEE Xplore Author Profile

Showing 1-25 of 98 results


The scientific protocols, experiments and instruments that generate data are an integral part of the research lifecycle. Consequently, almost every scientific research institution requires a Research Data Storage System (RDSS). However, RDSS implementations vary significantly due to factors that include cost, geography, workloads, policy, risk tolerance and available technical skills. An RDSS may b...
There is little doubt that we have entered an era where data underpins modern science and research in general. In support of this, numerous infrastructures have been designed and built, ranging from proprietary on-premises systems through to distributed commercial clouds. Such implementations provide a range of functions during the research lifecycle, from provisioning and cataloguing data assets throu...
Much of computer science research can benefit from the Translational Computing Science paradigm, which bridges foundational, use-inspired, and applied research with the delivery and deployment of its outcomes, and supports essential bidirectional interplays. However, its wide adoption continues to face multiple challenges and roadblocks. This paper uses the perspectives and experiences of two accid...
Modern scientific instruments are becoming essential for discoveries because they provide unprecedented insight into physical or biological events, often in real time. However, these instruments may generate large amounts of data, and increasingly they require sophisticated e-infrastructure for analysis, storage and archiving. The increasing complexity and scale of the data, processing steps and sy...
This paper presents a framework to characterize and identify local sequences of proteins that are statistically redundant under the measure of Shannon information content while accounting for variations in their occurrences over evolutionary insertions, deletions, and substitutions of amino acids. The identification of such local sequences provides insights for downstream studies on proteins. Here...
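To make the information-theoretic measure concrete, here is a minimal sketch of the core idea only: scoring a protein k-mer's Shannon information content, -log2 p, against a background model of independent residue frequencies. The function name, the fixed k, and the background model are illustrative assumptions; the paper's actual framework additionally accounts for insertions, deletions, and substitutions, which this sketch omits.

    import math
    from collections import Counter

    def kmer_information(sequences, k=5):
        """Score each k-mer by its Shannon information content, -log2 p,
        under a null model of independent background residue frequencies."""
        # Background amino-acid frequencies estimated from the corpus itself.
        counts = Counter(residue for seq in sequences for residue in seq)
        total = sum(counts.values())
        background = {r: n / total for r, n in counts.items()}

        scores = {}
        for seq in sequences:
            for i in range(len(seq) - k + 1):
                kmer = seq[i:i + k]
                if kmer not in scores:
                    # Probability of the k-mer under the null model.
                    p = math.prod(background[r] for r in kmer)
                    scores[kmer] = -math.log2(p)
        return scores

A k-mer that recurs far more often than its information content predicts under the null model is a candidate for the statistically redundant local sequences the paper identifies.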
The increase in compute power and complexity of supercomputing systems requires decreases in the feature size and supply voltage of internal components. This development makes unintended errors, such as soft errors potentially caused by random bit flips, inevitable because of the huge number of resources involved (such as CPU cores and memory). In this paper, we discuss a non-parametric statistica...
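The abstract is cut off before naming the statistical approach, so the following is only a generic illustration of the kind of non-parametric test such work relies on, not the paper's method: flagging anomalous readings with the median and median absolute deviation, both of which avoid distributional assumptions.

    import statistics

    def flag_suspect_values(samples, threshold=3.5):
        """Return indices whose modified z-score exceeds the threshold.
        The median and MAD are non-parametric, so a handful of values
        corrupted by bit flips cannot skew the baseline the way a mean
        and standard deviation would."""
        med = statistics.median(samples)
        mad = statistics.median(abs(x - med) for x in samples)
        if mad == 0:
            return []  # no spread; nothing can be flagged
        # 0.6745 makes the score comparable to a standard z-score.
        return [i for i, x in enumerate(samples)
                if abs(0.6745 * (x - med) / mad) > threshold]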

Translational Research in Computer Science

David Abramson;Manish Parashar

Computer
Year: 2019 | Volume: 52, Issue: 9 | Magazine Article
Cited by: Papers (23)
There are benefits to formalizing translational computer science (TCS) to complement traditional modes of computer science research, as has been done for translational medicine. TCS has the potential to accelerate the impact of computer science research overall.

Energy efficiency has become increasingly important in high performance computing (HPC), as power constraints and costs escalate. Workload and system characteristics form a complex optimization search space in which optimal settings for energy efficiency and performance often diverge. Thus, we must identify trade-off options for performance and energy efficiency to find the desired balance between...
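Since the abstract frames tuning as a search over a space where performance and energy diverge, here is a small sketch of one standard formulation (my choice, not necessarily the paper's): extracting the Pareto frontier from measured (runtime, energy) pairs, i.e. the configurations for which no alternative is both faster and more frugal.

    def pareto_frontier(configs):
        """Return the configs not dominated in both runtime and energy.
        Each config is a (name, runtime_seconds, energy_joules) tuple."""
        frontier = []
        for name, runtime, energy in sorted(configs, key=lambda c: (c[1], c[2])):
            # Sorted by runtime, so a config belongs on the frontier only
            # if it uses less energy than every faster config already kept.
            if not frontier or energy < frontier[-1][2]:
                frontier.append((name, runtime, energy))
        return frontier

    # e.g. pareto_frontier([("2.4GHz", 100, 900), ("1.8GHz", 130, 700),
    #                       ("2.1GHz", 115, 950)]) drops the dominated 2.1GHz point.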
The increasing amount of data being collected from simulations, instruments and sensors creates challenges for existing e-Science infrastructure. In particular, it requires new ways of storing, distributing and processing data in order to cope with both the volume and velocity of the data. The University of Queensland has recently designed and deployed MeDiCI, a data fabric that spans the metropol...
Coral reefs are of global economic and biological significance but are subject to increasing threats. As a result, it is essential to understand the risk of coral reef ecosystem collapse and to develop assessment processes for those ecosystems. The International Union for Conservation of Nature (IUCN) Red List of Ecosystems (RLE) is a framework for assessing the vulnerability of an ecosystem. Importantly...
Computational analyses of the growing corpus of three-dimensional (3D) structures of proteins have revealed a limited set of recurrent substructural themes, termed super-secondary structures. Knowledge of super-secondary structures is important for the study of protein evolution and for the modeling of proteins with unknown structures. Characterizing a comprehensive dictionary of these super-secon...
Relative debugging helps trace software errors by comparing two concurrent executions of a program, one a reference version and the other faulty. By locating data divergence between the runs, relative debugging is effective at finding coding errors when a program is scaled up to solve larger problem sizes or migrated from one platform to another. In this work, we envision potential cha...
Provides an abstract of the keynote presentation and a brief professional biography of the presenter. The complete presentation was not made available for publication as part of the conference proceedings.
Relative debugging traces software errors by comparing two executions of a program concurrently, one a reference version and the other faulty. Relative debugging is particularly effective when code is migrated from one platform to another, and this is of significant interest for hybrid computer architectures containing CPUs, accelerators or coprocessors. In this paper we extend relative...
Data is predicted to transform the 21st century, fuelled by an exponential growth in the amount of data captured, generated and archived. Traditional high performance machines are optimized for numerical computing rather than for IO performance or large-memory applications. This paper discusses a new machine, called FlashLite, which addresses these challenges. The paper describes the mo...
This paper presents WorkWays, a workflow-based science gateway that supports human-in-the-loop workflows. The computational steering capability of WorkWays has been used to solve a number of problems in which it is useful for users to study intermediate results and steer the computation. Two of those use cases are discussed in this paper.
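WorkWays itself exposes steering through a web gateway; the sketch below is only a toy illustration of the human-in-the-loop pattern in general (the queue-based interface is my assumption, not WorkWays' API): a long-running computation polls for user updates between iterations, so intermediate results can redirect it without a restart.

    import queue
    import threading
    import time

    def simulate(steer):
        """Toy steering loop: between iterations, apply any parameter
        updates the user pushed while inspecting intermediate results."""
        params = {"step_size": 0.1}
        for _ in range(100):
            try:
                params.update(steer.get_nowait())  # human-in-the-loop update
            except queue.Empty:
                pass  # no new guidance; keep computing
            time.sleep(0.01)  # stands in for one unit of real computation

    steering = queue.Queue()
    threading.Thread(target=simulate, args=(steering,), daemon=True).start()
    steering.put({"step_size": 0.05})  # steer the running computation
    time.sleep(2)  # let the loop finish before the script exits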
Summary form only given. CCDB implements a strategy called "comparative debugging", which helps trace software errors by comparing two executions of a program at the same time, one a reference version and the other faulty. Specifically, users write "assertions" that detect when data structure contents in the two executions diverge, and using the dataflow of the code it is possible to ...
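CCDB's assertions operate on live program state under a debugger, so the following is only an offline approximation of the idea (the file names, the NumPy dump format, and the tolerance are illustrative): compare the same data structure captured from the reference and suspect runs, and fail at the first point of divergence.

    import numpy as np

    def assert_runs_agree(ref_file, suspect_file, tolerance=1e-9):
        """Relative-debugging style check: fail loudly where the suspect
        run's data structure diverges from the reference run's."""
        ref = np.load(ref_file)
        suspect = np.load(suspect_file)
        diverged = np.argwhere(np.abs(ref - suspect) > tolerance)
        if diverged.size:
            first = tuple(diverged[0])
            raise AssertionError(
                f"{len(diverged)} elements diverge; first at {first}: "
                f"reference={ref[first]}, suspect={suspect[first]}")

    # e.g. assert_runs_agree("cpu_run/temperature.npy", "gpu_run/temperature.npy")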
Detecting and isolating bugs that arise only at high processor counts is a challenging task. Over a number of years, we have implemented a special debugging method, called "relative debugging," that supports debugging applications as they evolve or are ported to larger machines. It allows a user to compare the state of a suspect program against another reference version even as the number of proce...
Proteins are biomolecules of life. They fold into a great variety of three-dimensional (3D) shapes. Underlying these folding patterns are many recurrent structural fragments or building blocks (analogous to 'LEGO® bricks'). This paper reports an innovative statistical inference approach to discover a comprehensive dictionary of protein structural building blocks from a large corpus of experimental...
Modern science increasingly involves managing and processing large amounts of distributed data accessed by global teams of researchers. To do this, we need systems that combine data, metadata and workflows into a single system. This paper discusses such a system, built from a number of existing technologies. We demonstrate its effectiveness on a case study that analyses MRI data.
Workflow-based science gateways that bring the power of scientific workflows to the Web are becoming increasingly popular. Different IO models enabling interactions between a running workflow and a web portal have been explored. However, these are typically not dynamic enough to allow users to insert data into, or export data out of, a continuously running workflow. In this paper, we present a novel...
Parallel debugging faces challenges in both scalability and efficiency. A number of advanced methods have been invented to improve the efficiency of parallel debugging. As system scale increases, these methods rely heavily on a scalable communication protocol in order to be usable in large-scale distributed environments. This paper describes a debugging middleware that provides fundamental...
Universities are constantly searching for ways to prepare students as effective global professionals. At the same time, cyberinfrastructure leverages computing, information, and communication technology to perform research, often in an international context. In this paper we discuss a novel model, called the Cyberinfrastructure Internship Program (CIP), which serves both of these goals. Specif...
Modern in-silico science (or e-Science) is a complex process, often involving multiple steps conducted across different computing environments. Scientific workflow tools help scientists automate, manage and execute these steps, providing a robust and repeatable research environment. Increasingly, workflows generate data sets that require scientific visualization, using a range of display devices su...