
Ninth IEEE European Conference on Web Services (ECOWS 2011)

Date: 14-16 September 2011

  • [Front cover]

    Page(s): C1
  • [Title page i]

    Page(s): i
  • [Title page iii]

    Page(s): iii
  • [Copyright notice]

    Page(s): iv
  • Table of contents

    Page(s): v - vi
  • Preface

    Page(s): vii
  • ECOWS 2011 Conference organization

    Page(s): viii - ix
  • Business Process Configuration in the Cloud: How to Support and Analyze Multi-tenant Processes?

    Page(s): 3 - 10

    The lion's share of cloud research has focused on performance-related problems. However, cloud computing will also change the way in which business processes are managed and supported; for example, more and more organizations will be sharing common processes. In the classical setting, where product software is used, different organizations can make ad-hoc customizations to let the system fit their needs. This is undesirable, especially when multiple organizations share a cloud infrastructure. Configurable process models enable the sharing of common processes among different organizations in a controlled manner. This paper discusses challenges and opportunities related to business process configuration. Causal nets (C-nets) are proposed as a new formalism to deal with these challenges; for example, merging variants into a configurable model is supported by a simple union operator. C-nets also provide a good representational bias for process mining, i.e., process discovery and conformance checking based on event logs. In the context of cloud computing, we focus on the application of C-nets to cross-organizational process mining.

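    As a rough illustration of the union operator mentioned in the abstract, the sketch below models a causal net as a set of activities with per-activity input/output binding sets and merges two process variants by plain set union. The CNet class, its fields, and the example activities are invented for illustration; this is not the paper's formal definition.

        # Toy causal-net (C-net) model: activities plus input/output binding
        # sets per activity. Merging two process variants is a plain union of
        # these sets, the property the abstract highlights.
        from dataclasses import dataclass, field

        @dataclass
        class CNet:                      # hypothetical, simplified C-net
            activities: set = field(default_factory=set)
            inputs: dict = field(default_factory=dict)   # activity -> binding sets
            outputs: dict = field(default_factory=dict)  # activity -> binding sets

        def union(n1: CNet, n2: CNet) -> CNet:
            merged = CNet(n1.activities | n2.activities)
            for a in merged.activities:
                merged.inputs[a] = n1.inputs.get(a, set()) | n2.inputs.get(a, set())
                merged.outputs[a] = n1.outputs.get(a, set()) | n2.outputs.get(a, set())
            return merged

        # Two variants of an ordering process (bindings are sets of activities).
        v1 = CNet({"order", "ship"},
                  {"ship": {frozenset({"order"})}}, {"order": {frozenset({"ship"})}})
        v2 = CNet({"order", "invoice"},
                  {"invoice": {frozenset({"order"})}}, {"order": {frozenset({"invoice"})}})
        print(union(v1, v2).activities)  # {'order', 'ship', 'invoice'} (order may vary)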
  • The Fading Boundary between Development Time and Run Time

    Page(s): 11

    Summary form only given. Modern software applications are often embedded in highly dynamic contexts. Changes may occur in the requirements, in the behavior of the environment in which the application is embedded, and in the usage profiles that characterize interactive aspects. Changes are difficult to predict and anticipate, and are out of the control of the application. Their occurrence, however, may be disruptive, and therefore the software must also change accordingly. In many cases, changes to the software cannot be handled off-line, but require the software to react by adapting its behavior dynamically, in order to continue to ensure the required quality of service. The big challenge in front of us is how to achieve the necessary degrees of flexibility and dynamism required in this setting without compromising the dependability of the applications. To achieve dependability, a software engineering paradigm shift is needed. The traditional focus on quality, verification, models, and model transformations must extend from development time to run time. Not only are software development environments (SDEs) important for the software engineer to develop better software; feature-full software run-time environments (SREs) are also key. SREs must be populated by a wealth of functionalities that support on-line monitoring of the environment, inferring significant changes through machine learning methods, keeping models alive and updating them accordingly, reasoning on models about requirements satisfaction after changes occur, and triggering model-driven self-adaptive reactions, if necessary. In essence, self-adaptation must be grounded on the firm foundations provided by formal methods and tools in a seamless SDE/SRE setting. The talk discusses these concepts by focusing on non-functional requirements (reliability and performance) that can be expressed as quantitative probabilistic requirements. In particular, it shows how probabilistic model checking can help in reasoning about requirements satisfaction and how it can be made run-time efficient. The talk reports on some results of research developed within the SMScom project, funded by the European Commission, Programme IDEAS-ERC, Project 227977 (http://www.erc-smscom.org/).

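    As a flavor of the run-time quantitative reasoning the talk refers to, here is a minimal sketch (not the speaker's tooling) that checks a reliability requirement on a small discrete-time Markov chain by fixed-point iteration of the reachability equations. The chain, its probabilities, and the 0.95 threshold are all invented.

        # P(eventually reach "success") in a toy DTMC, via iterating
        # p(s) = sum over successors s' of P(s, s') * p(s').
        P = {  # state -> {successor: probability}; invented model
            "init":    {"work": 1.0},
            "work":    {"success": 0.97, "retry": 0.02, "fail": 0.01},
            "retry":   {"work": 0.9, "fail": 0.1},
            "success": {"success": 1.0},
            "fail":    {"fail": 1.0},
        }
        p = {s: float(s == "success") for s in P}
        for _ in range(1000):  # more than enough iterations to converge here
            p = {s: 1.0 if s == "success" else
                    sum(pr * p[t] for t, pr in P[s].items()) for s in P}
        print(f"P(reach success from init) = {p['init']:.4f}")
        assert p["init"] >= 0.95, "reliability requirement violated"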
  • 13 Years of SOA at Credit Suisse: Lessons Learned -- Remaining Challenges

    Page(s): 12

    Credit Suisse has been active in the field of service-oriented architecture for many years. I chose the birth of the "Credit Suisse Information Bus" 13 years ago as the starting point of a long journey towards an enterprise SOA at Credit Suisse. I have chosen a number of case studies marking major steps in the SOA's progress. Each case study starts with a strategic business need, continues with the chosen solution, and concludes with a discussion of the achievements and the remaining gaps. Putting these case studies into a historical perspective shows a continuous evolution, where each step expands the business value, closes gaps of previous solutions, and, last but not least, leads to new challenges. I will illustrate each case study with examples and data.

  • Aligning Web Services with the Semantic Web to Create a Global Read-Write Graph of Data

    Page(s): 15 - 22

    Despite significant research and development efforts, the vision of the Semantic Web yielding a Web of Data has not yet become reality. Even though initiatives such as Linking Open Data have gained traction recently, the Web of Data is still clearly outpaced by the growth of the traditional, document-based Web. Instead of releasing data in the form of RDF, many publishers choose to publish their data in the form of Web services. The reasons for this are manifold. Given that RESTful Web services closely resemble the document-based Web, they are not only perceived as less complex and disruptive, but also provide read-write interfaces to the underlying data. In contrast, the current Semantic Web is essentially read-only, which clearly inhibits networking effects and engagement of the crowd. On the other hand, the prevalent use of proprietary schemas to represent the data published by Web services prevents generic browsers or crawlers from accessing and understanding this data; the consequence is islands of data instead of the global graph of data forming the envisioned Semantic Web. We thus propose a novel approach to integrate Web services into the Web of Data by introducing an algorithm to translate SPARQL queries to HTTP requests. The aim is to create a global read-write graph of data and to standardize the mashup development process. We try to keep the approach as familiar and simple as possible to lower the entry barrier and foster adoption. Thus, we based our proposal on SEREDASj, a semantic description language for RESTful data services, for making proprietary JSON service schemas accessible.

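    A minimal sketch of the translation idea, assuming a hypothetical SEREDASj-style description that maps an RDF predicate to a URI template and a JSON property; the mapping table, URLs, and helper names below are invented.

        # Translate one SPARQL triple pattern into an HTTP GET against a JSON
        # service, guided by a (hypothetical) predicate -> (URI template,
        # JSON property) mapping in the spirit of SEREDASj.
        import json, urllib.request

        MAPPING = {
            "http://xmlns.com/foaf/0.1/name":
                ("https://api.example.org/people/{id}", "name"),
        }

        def triple_to_request(subject_id, predicate):
            template, prop = MAPPING[predicate]
            req = urllib.request.Request(template.format(id=subject_id),
                                         headers={"Accept": "application/json"})
            return req, prop

        def fetch_object(subject_id, predicate):
            req, prop = triple_to_request(subject_id, predicate)
            with urllib.request.urlopen(req) as resp:  # read query -> GET
                return json.load(resp).get(prop)       # the triple's object

        # SELECT ?name WHERE { ex:alice foaf:name ?name }
        req, prop = triple_to_request("alice", "http://xmlns.com/foaf/0.1/name")
        print(req.full_url, "->", prop)

    A write pattern (updating the same triple) would translate to PUT or POST on the same URI in this style, which is what makes the resulting graph read-write.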
  • Service Offer Discovery Using Genetic Algorithms

    Page(s): 23 - 30

    Available service descriptions are often specified using abstract definitions of service attributes. However, service consumers are mainly interested in concrete, consumable service offers, which are specified using concrete values of service attributes. Service offers, due to their request dependence and dynamicity, have to be generated on the fly, which may require interaction with a service. We propose a service description model that facilitates the creation of consumable service offers. A large number of service offers can be generated for flexible search requests. To address this, we propose a novel approach to the dynamic generation of service offers. Our approach is based on genetic algorithms and reduces the number of service offers to the relevant ones. For evaluation purposes we apply our approach to the shipping domain, where real shipping services on the Web are used to prove the effectiveness and usability of our approach in a real-world domain.

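    A minimal sketch of the genetic-algorithm idea, assuming offers are (price, delivery days) tuples scored against a consumer request; the fitness function and all numbers are invented.

        # Evolve concrete shipping offers toward a request that penalizes
        # delivery later than 3 days; lower fitness is better.
        import random

        def fitness(offer):
            price, days = offer
            return price + 10.0 * max(0, days - 3)

        def mutate(offer):
            price, days = offer
            return (max(1.0, price + random.uniform(-5, 5)),
                    max(1, days + random.choice((-1, 0, 1))))

        def crossover(a, b):
            return (a[0], b[1])  # price from one parent, days from the other

        random.seed(42)
        pop = [(random.uniform(10, 100), random.randint(1, 10)) for _ in range(30)]
        for _ in range(50):                      # generations
            pop.sort(key=fitness)
            parents = pop[:10]                   # elitist selection
            pop = parents + [mutate(crossover(random.choice(parents),
                                              random.choice(parents)))
                             for _ in range(20)]
        print("best offer (price, days):", min(pop, key=fitness))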
  • Formal Verification of Web Services Composition Using Linear Logic and the pi-calculus

    Page(s): 31 - 38

    We give an overview of a rigorous approach to Web Services composition based on theorem proving in the proof assistant HOL Light. In this, we exploit the proofs-as-processes paradigm to compose multiple Web Services specified in Classical Linear Logic, while using the expressive nature of our theorem-proving framework to provide a systematic and rigorous treatment of properties such as exceptions. The end result is not only a formally verified proof of the composition, with an associated guarantee of correctness, but also an 'executable' π-calculus statement describing the composition in process-algebraic terms. We illustrate our approach by analyzing a non-trivial example involving numerous Web Services in a real-estate domain.

  • A Two-Stage RESTful Web Service Composition Method Based on Linear Logic

    Page(s): 39 - 46

    RESTful web services, which are declarative, light-weight and easy to access, have attracted increasing interest and are already widely used by industry to expose services on the Internet. However, the formal treatment of RESTful web services, especially in terms of automatic composition, is still underexplored compared to the extensive research on RPC-style web services. This paper introduces a formal definition of RESTful web services and proposes a method for RESTful web service composition based on Linear Logic. This is a two-stage proof-searching method that finds compositions at both the resource level and the service invocation method level. It greatly improves the searching efficiency and guarantees the correctness and completeness of the service composition process.

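    Very roughly, Linear Logic proof search can be pictured as planning over consumable resources: each invocation consumes its inputs and produces its outputs exactly once. The sketch below is a breadth-first stand-in for that idea, not the paper's two-stage method; the service set and resource names are invented.

        # Find a chain of service invocations turning the available resources
        # into the goal, consuming inputs linearly (each resource used once).
        from collections import Counter, deque

        SERVICES = {  # name -> (inputs, outputs); invented RESTful services
            "GET /orders/{id}":   (Counter(orderId=1), Counter(order=1)),
            "POST /shipments":    (Counter(order=1), Counter(shipment=1)),
            "GET /tracking/{id}": (Counter(shipment=1), Counter(trackingNo=1)),
        }

        def compose(available, goal):
            queue, seen = deque([(available, [])]), set()
            while queue:
                resources, plan = queue.popleft()
                if resources[goal] > 0:
                    return plan
                key = frozenset(resources.items())
                if key in seen:
                    continue
                seen.add(key)
                for name, (ins, outs) in SERVICES.items():
                    if all(resources[r] >= n for r, n in ins.items()):
                        queue.append((resources - ins + outs, plan + [name]))
            return None

        print(compose(Counter(orderId=1), "trackingNo"))
        # ['GET /orders/{id}', 'POST /shipments', 'GET /tracking/{id}']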
  • The Architecture of a Secure Business-Process-Management System in Service-Oriented Environments

    Page(s): 49 - 56

    Business-process-management (BPM) systems are increasingly used in service-oriented architectures (SOA), coordinating the activities of web services and of human actors. The openness and flexibility of SOA cause new challenges regarding the security of business processes, because such processes have to deal with unexpected situations at run time. Respective solutions must be inherent to a BPM system (BPMS): process implementers must explicitly model different security aspects of business processes, and the BPMS must combine them with run-time context to enforce security. However, existing BPMS do not address security issues in a sufficiently broad way. Thus, extensions are necessary, and they should integrate existing technology. The core contribution of this paper is to propose how to extend the WfMC reference architecture for BPMS to accomplish this, having identified different design alternatives. The resulting architecture is generic and integrates with existing SOA technologies like federated identity management. It is modular, making adaptations relatively easy; this has been hard to achieve because security aspects concern the entire system. Finally, our architecture can be used to implement the various business-process security modelling approaches that have been presented in the literature.

  • A Secure Proxy-Based Cross-Domain Communication for Web Mashups

    Page(s): 57 - 64

    A web mashup is a web application that integrates content from heterogeneous sources to provide users with a more integrated and seamless browsing experience. Client-side mashups differ from server-side mashups in that the content is integrated in the browser using client-side scripts. However, the legacy same-origin policy (SOP) implemented by browsers cannot provide a flexible client-side communication mechanism to exchange information between different sources. To address this problem, we propose a secure client-side cross-domain communication model facilitated by a trusted proxy and the HTML5 postMessage method. The proxy-based model supports fine-grained access control for elements that belong to different sources in web mashups, and the design guarantees confidentiality, integrity, and authenticity during cross-domain communications. The proxy-based design also allows users to browse mashups without installing browser plug-ins. For mashup developers, the provided API minimizes the amount of code modification. The results of experiments demonstrate that the overhead incurred by our proxy model is low and reasonable.

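    The proxy's checks can be sketched on the server side as a fine-grained policy over (origin, element, operation) plus a message authentication code; in the real system the browser half uses HTML5 postMessage, and every name and value below is invented.

        # Trusted-proxy sketch: authorize a cross-domain message only if its
        # HMAC verifies (integrity/authenticity) and the policy allows the
        # source origin to perform the operation on the target element.
        import hmac, hashlib

        SECRET = b"shared-proxy-secret"  # provisioned per mashup session
        POLICY = {                       # (origin, element) -> allowed ops
            ("https://maps.example.com", "#map-widget"): {"read"},
            ("https://feed.example.com", "#news-pane"):  {"read", "write"},
        }

        def sign(payload: bytes) -> str:
            return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

        def authorize(origin, element, op, payload, signature):
            if not hmac.compare_digest(sign(payload), signature):
                return False             # forged or tampered message
            return op in POLICY.get((origin, element), set())

        msg = b'{"op": "read", "target": "#map-widget"}'
        print(authorize("https://maps.example.com", "#map-widget", "read",
                        msg, sign(msg)))   # True
        print(authorize("https://evil.example.com", "#map-widget", "read",
                        msg, sign(msg)))   # False: origin not in the policy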
  • Whom to Trust? Generating WS-Security Policies Based on Assurance Information

    Page(s): 65 - 72

    As input for authorization decisions, as well as to offer personalized services, service providers often require information about their users' identity attributes. In open identity management systems, these identity attributes are not necessarily managed by the service providers themselves, but by independent identity providers. Users might be required to aggregate identity attributes from multiple identity providers in order to meet a service's needs. On the other hand, service providers might also have certain requirements concerning their confidence in these attributes, and face the problem of choosing among multiple identity providers that can possibly assert the same attributes, but with different trust qualities. In this paper, we present an architecture to generate service policies using assurance information about available identity providers. Our logic-based attribute assurance library, called Identity Trust, allows the configuration of a knowledge base reflecting a service provider's knowledge about remote identity providers. Service providers can state their trust requirements concerning technical and organizational details of identity providers and their ability to assert identity attributes. A reasoning engine finds suitable (combinations of) identity providers, which serve as input for our policy framework, which generates corresponding policies in the WS-SecurityPolicy format.

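    The selection step can be sketched as filtering identity providers whose assurance meets per-attribute requirements and emitting a schematic policy fragment. Assurance levels, provider names, and the XML shape below are invented; the actual framework emits WS-SecurityPolicy.

        # Pick identity providers with sufficient assurance per attribute and
        # generate a schematic policy listing the acceptable issuers.
        REQUIREMENTS = {"email": 2, "date_of_birth": 3}   # attr -> min level
        PROVIDERS = {
            "idp.bank.example":   {"email": 3, "date_of_birth": 3},
            "idp.social.example": {"email": 2},
        }

        def eligible(attribute, min_level):
            return [idp for idp, attrs in PROVIDERS.items()
                    if attrs.get(attribute, 0) >= min_level]

        def generate_policy():
            clauses = []
            for attr, level in sorted(REQUIREMENTS.items()):
                idps = eligible(attr, level)
                if not idps:
                    raise ValueError(f"no trusted issuer for {attr!r}")
                issuers = "".join(f"<Issuer>{i}</Issuer>" for i in idps)
                clauses.append(f'<ClaimRequirement claim="{attr}">{issuers}'
                               f"</ClaimRequirement>")
            return "<Policy>" + "".join(clauses) + "</Policy>"

        print(generate_policy())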
  • Composing Non-functional Concerns in Web Services

    Page(s): 73 - 80

    Support for non-functional concerns (NFCs) is essential for the success and adoption of web services. This support encompasses two aspects: the specification of NFCs and their realization. However, state-of-the-art approaches offer only limited support for these aspects. This is especially true for the composition of multiple non-functional concerns with one web service, which is a highly complex task: specific knowledge from different domains is required, as well as an understanding of the interdependencies between non-orthogonal NFCs. In this paper, we present an approach and a toolset for specifying and realizing the composition of multiple NFCs in web services. We also present a well-defined process involving different roles, and we introduce graphical modeling notations for specifying non-functional requirements, actions realizing the requirements, action compositions, and the mapping of actions to web services. These specification models are used to generate code that realizes the NFCs.

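    The effect of composing multiple non-functional actions around one service, where the order matters for non-orthogonal concerns, can be sketched with plain function wrappers. All names are invented, and encrypt-then-log versus log-then-encrypt illustrates why an explicit composition ordering is needed.

        # Two non-functional actions composed around one service invocation.
        import functools

        def logging_action(call):
            @functools.wraps(call)
            def wrapper(payload):
                print(f"log: outgoing {payload!r}")
                return call(payload)
            return wrapper

        def encryption_action(call):
            @functools.wraps(call)
            def wrapper(payload):
                return call(payload[::-1])  # stand-in for real encryption
            return wrapper

        def service(payload):               # the plain web service call
            return f"sent<{payload}>"

        # One explicit action composition, as a modeled ordering prescribes:
        # here the payload is logged in the clear, then encrypted.
        composed = logging_action(encryption_action(service))
        print(composed("hello"))            # sent<olleh>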
  • A Learning Architecture for Scheduling Workflow Applications in the Cloud

    Page(s): 83 - 90

    The scheduling of workflow applications involves mapping individual workflow tasks to computational resources, based on a range of functional and non-functional quality-of-service requirements. Workflow applications such as scientific workflows often require extensive computational processing and generate significant amounts of experimental data. The emergence of cloud computing has introduced a utility-type market model, where computational resources of varying capacities can be procured on demand, in a pay-per-use fashion. In workflow-based applications, dependencies exist amongst tasks, which requires the generation of schedules in accordance with defined precedence constraints. These constraints pose a difficult planning problem, where tasks must be scheduled for execution only once all their parent tasks have completed. In general, the two most important objectives of workflow schedulers are the minimisation of both cost and makespan. The cost of workflow execution consists of both computational costs incurred from processing individual tasks and data transmission costs; with scientific workflows, potentially large amounts of data must be transferred between compute and storage sites. This paper proposes a novel cloud workflow scheduling approach which employs a Markov Decision Process to optimally guide the workflow execution process depending on the environmental state. In addition, the system employs a genetic algorithm to evolve workflow schedules. The overall architecture is presented, and initial results indicate the potential of this approach for developing viable workflow schedules on the Cloud.

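    A minimal sketch of the genetic-algorithm half, evolving task-to-VM assignments for a tiny DAG while trading execution cost against makespan. The DAG, VM prices, and the 0.5 weight are invented, and the paper's MDP-based execution guidance is not modeled here.

        # Evolve task -> VM assignments for a 4-task workflow DAG.
        import random

        TASKS = {"a": 4, "b": 3, "c": 2, "d": 5}            # task -> duration
        DEPS  = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
        VMS   = {"small": (1.0, 1.0), "large": (2.0, 3.5)}  # speed, price

        def evaluate(assign):                                # cost + w * makespan
            finish, busy, cost = {}, {vm: 0.0 for vm in VMS}, 0.0
            for t in ("a", "b", "c", "d"):                   # topological order
                speed, price = VMS[assign[t]]
                start = max([busy[assign[t]]] + [finish[d] for d in DEPS[t]])
                finish[t] = busy[assign[t]] = start + TASKS[t] / speed
                cost += (TASKS[t] / speed) * price
            return cost + 0.5 * max(finish.values())

        random.seed(1)
        pop = [{t: random.choice(list(VMS)) for t in TASKS} for _ in range(20)]
        for _ in range(40):
            pop.sort(key=evaluate)
            keep = pop[:6]                                   # elitist selection
            pop = keep + [{t: random.choice((p1[t], p2[t]))  # crossover
                              if random.random() > 0.1
                              else random.choice(list(VMS))  # mutation
                           for t in TASKS}
                          for p1, p2 in [random.sample(keep, 2)
                                         for _ in range(14)]]
        best = min(pop, key=evaluate)
        print("best assignment:", best, "objective:", round(evaluate(best), 2))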
  • An Online Provenance Service for Distributed Metabolic Flux Analysis Workflows

    Page(s): 91 - 98

    Scientific applications in the area of Systems Biology are becoming more and more demanding in terms of workflow organization and required computational resources. Metabolic flux analysis with carbon tracer experiments (13C-MFA) is a particularly challenging technology consisting of several tightly interwoven building blocks, some of which are quite compute-intensive while others may require human expert interaction. Within these building blocks and at their interfaces, large amounts of provenance data (i.e., process messages, logs and metadata) are created. An online provenance approach for a scientific workflow framework is presented to efficiently monitor and analyze long-running parallel 13C-MFA applications. The design of our approach consists of a cross-language programming interface for generating messages, a provenance store for collecting log messages from distinct workflows, and a versatile user client for querying and filtering data of interest. Seamless integration into the BPEL-based scientific workflow framework is enabled by web service interfaces. Salient features of our provenance solution are interactivity and scalability, as demonstrated by two examples. The framework provides the capability to capture, filter, query, and trace provenance data on demand. Our solution supports the scientific analysis of complex 13C-MFA workflows.

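    A minimal sketch of the store's collect-and-query role; field names are invented, and the real service is exposed via web service interfaces and feeds a richer client.

        # Append provenance messages and query them by workflow, level, time.
        import time

        class ProvenanceStore:
            def __init__(self):
                self._events = []

            def record(self, workflow_id, step, level, message):
                self._events.append({"ts": time.time(), "workflow": workflow_id,
                                     "step": step, "level": level,
                                     "msg": message})

            def query(self, workflow_id=None, level=None, since=0.0):
                return [e for e in self._events
                        if (workflow_id is None or e["workflow"] == workflow_id)
                        and (level is None or e["level"] == level)
                        and e["ts"] >= since]

        store = ProvenanceStore()
        store.record("mfa-42", "simulate", "INFO",  "flux simulation started")
        store.record("mfa-42", "simulate", "ERROR", "solver did not converge")
        print(store.query(workflow_id="mfa-42", level="ERROR"))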
  • Workflow Skeletons: Increasing Scalability of Scientific Workflows by Combining Orchestration and Choreography

    Page(s): 99 - 106

    Dataflow modeling is the natural way of composing scientific workflows, because they often comprise numerous data transformation steps applying massive parallelism. However, modeling control flow within dataflow is often achieved at the expense of clarity and comprehensibility. This paper describes scientific workflows that maintain the robustness of centralized control by modeling control flow (using orchestration), while at the same time integrating sub-workflows that are modeled by workflow skeletons describing dataflow (using choreography). Following the concept of algorithmic skeletons, we define workflow skeletons as re-usable parallel constructs describing dataflow connections between proxies representing services. Proxies are able to communicate with each other, allowing for efficient coupling between parallel tasks and avoiding unnecessary data transfers. Skeletons increase scalability on demand by accepting the number of parallel tasks as a parameter. The primary contributions are a formal model describing workflow skeletons and a script language, the "Workflow Skeleton Language" (WorkSKEL). Furthermore, this paper demonstrates the definition of selected workflow patterns, such as pipeline and farm, in WorkSKEL.

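    Two classic algorithmic skeletons, pipeline and farm, can be sketched as combinators with the degree of parallelism passed in explicitly, mirroring "scalability on demand". Plain functions stand in for the proxied services here, and WorkSKEL itself looks nothing like this Python.

        # Pipeline and farm skeletons as re-usable parallel constructs.
        from concurrent.futures import ThreadPoolExecutor

        def pipeline(*stages):
            """Send each item through the stages in order."""
            def run(item):
                for stage in stages:
                    item = stage(item)
                return item
            return run

        def farm(worker, n_parallel):
            """Apply `worker` to many items with n_parallel concurrent tasks."""
            def run(items):
                with ThreadPoolExecutor(max_workers=n_parallel) as pool:
                    return list(pool.map(worker, items))
            return run

        normalize = pipeline(str.strip, str.lower)
        process = farm(normalize, n_parallel=4)
        print(process(["  Alpha ", "BETA", " Gamma"]))  # ['alpha', 'beta', 'gamma']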
  • SLA Monitor: A System for Dynamic Monitoring of Adaptive Web Services

    Page(s): 109 - 116

    Service Level Agreements (SLAs) need to be monitored at runtime to assure that the Business Level Agreements (BLAs) / Business Level Objectives (BLOs) are indeed satisfied in the realized business workflow, and to allow the organization to adjust its business processes to the environment. In this paper, we show how multiple SLAs specified on various observable attributes can be formally specified, automatically synthesized, and plugged into the underlying workflow service engine to assure such conformance. Such conformance validation allows the workflow engine to ensure the satisfaction of BLAs/BLOs and adapt as required. Note that most SLAs can be characterized formally either as safety properties based on a bounded history of the business events/attributes, or as some standard quantification of the performance attributes. In our work, the former is specified using a temporal logic called SL, which has been shown to have the expressive power of regular safety properties; we confine ourselves to a fragment of SL called DSL, for which the accepting automaton is deterministic. The latter is specified using standard system- or user-provided macros based on the observable QoS attributes. In the paper, we first describe the automatic synthesis of monitors from DSL formulae, realized through a model-checking algorithm, and then provide an overview of the integrated environment, called SLA Monitor, for specifying and monitoring conformance. The effectiveness of specifying SLAs in DSL is demonstrated through examples, and SLA management is illustrated.

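    The deterministic monitors described above can be pictured as small state machines over a bounded event history. A minimal sketch for one invented property, "at most 2 failures among the last 10 invocations"; the real system synthesizes such monitors from DSL formulae, and nothing below is the paper's syntax.

        # Bounded-history safety monitor over a stream of service events.
        from collections import deque

        class SafetyMonitor:
            def __init__(self, window=10, max_failures=2):
                self.history = deque(maxlen=window)  # bounded history
                self.max_failures = max_failures

            def observe(self, event):
                """Feed one event; return False once the property is violated."""
                self.history.append(event)
                return list(self.history).count("failure") <= self.max_failures

        monitor = SafetyMonitor()
        for i, ev in enumerate(["ok", "failure", "ok", "failure", "ok", "failure"]):
            if not monitor.observe(ev):
                print(f"SLA violated at event {i}: too many failures in window")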
  • Cost Reduction through SLA-driven Self-Management

    Page(s): 117 - 124

    A main challenge for service providers is managing service-level agreements (SLAs) with their customers while satisfying their business objectives, such as maximizing profits. Most current systems fail to consider business objectives and thus fail to provide a complete SLA management solution. This work proposes an SLA-driven management solution that aims to maximize the provider's profit by reducing resource costs as well as fines owing to SLA violations. Specifically, this work proposes a framework that comprises multiple, configurable control loops and supports automatically adjusting service configurations and resource usage in order to maintain SLAs in the most cost-effective way. The framework targets services implemented on top of large-scale distributed infrastructures, such as clouds. Experimental results demonstrate its effectiveness in maintaining SLAs while reducing provider costs.

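    One control loop of the kind described can be sketched as greedy reconfiguration against a cost model that adds resource cost and SLA fines; the latency model, prices, and fine rate are invented.

        # Scale replicas so resource cost plus expected SLA fines is minimal.
        def latency(replicas):                  # crude performance model (ms)
            return 1200 / replicas

        def total_cost(replicas, sla_ms=300, price=1.0, fine_per_ms=0.05):
            over = max(0.0, latency(replicas) - sla_ms)   # SLA violation size
            return replicas * price + fine_per_ms * over

        def control_step(replicas):
            """Move to the cheapest neighboring configuration."""
            return min([max(1, replicas - 1), replicas, replicas + 1],
                       key=total_cost)

        replicas = 1
        for step in range(6):                   # converges to 4 replicas here
            replicas = control_step(replicas)
            print(f"step {step}: replicas={replicas}, "
                  f"latency={latency(replicas):.0f}ms, "
                  f"cost={total_cost(replicas):.2f}")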
  • Service Level Achievements -- Distributed Knowledge for Optimal Service Selection

    Page(s): 125 - 132

    In a service-oriented setting, where services are composed to provide end-user functionality, it is a challenge to find the service components with the best-fit functionality and quality. A decision based on information provided mainly by service providers is inadequate, as it cannot be trusted in general. In this paper, we discuss service compositions in an open market scenario where automated best-fit service selection and composition are instead based on Service Level Achievements. Continuous monitoring updates the actual Service Level Achievements, which can lead to dynamically changing compositions. Measurements of real-life services exemplify the approach.

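    The selection idea can be sketched as ranking candidates by smoothed measurements rather than advertised figures; the service names, smoothing factor, and availability numbers are invented.

        # Rank services by monitored achievements (exponentially smoothed).
        ALPHA = 0.3                      # weight of the newest measurement

        achievements = {"shipFast": 0.90, "shipCheap": 0.97}  # availability

        def update(service, measured):
            achievements[service] = ((1 - ALPHA) * achievements[service]
                                     + ALPHA * measured)

        def best(candidates):
            return max(candidates, key=achievements.get)

        update("shipFast", 0.50)         # monitoring observed an outage
        print(best(["shipFast", "shipCheap"]))  # shipCheap now ranks higher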
  • An Automatic Requirements Negotiation Approach for Business Services

    Page(s): 133 - 140

    Organizations now resort to service-orientation as it enables them to quickly create and offer new business services (BSs) or optimize existing ones. In many cases, organizations must cooperate to offer such services so as to concentrate only on their core business. An initial phase of the design of a novel BS concerns the determination of the BS's functional and non-functional requirements. The respective research approaches exploit goal models to specify and elicit such requirements. However, while it is easy to reach an agreement on the functional requirements, this is not true for the non-functional ones. First, the involved stakeholders may have different requirements and levels of expertise for particular non-functional aspects. Second, a BS's non-functional performance is critical for distinguishing among the functionally-equivalent BSs of competing organizations. Thus, the stakeholders must negotiate over the BS's non-functional requirements. However, such a negotiation may take considerable time and requires active stakeholder involvement in the form of alternative offers for the conflicting requirements. To this end, this paper proposes a broker-based BS negotiation framework that can automatically determine the non-functional requirements of the required BS. This framework takes as input a functional goal model as well as the stakeholder requirements, in the form of utility functions over the non-functional performance of the required BS's functional goal and its sub-goals, and can propose an overall solution that is balanced and consistent across the goal-model levels and satisfies all the stakeholders as much as possible.

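    The broker's balancing step can be sketched as choosing the value of one non-functional attribute that maximizes the worst-off stakeholder's utility, one simple reading of "satisfies all the stakeholders as much as possible"; the utility functions and candidate targets are invented.

        # Pick the response-time target with the best worst-case utility.
        def u_customer(rt_ms):               # prefers fast responses
            return max(0.0, 1 - rt_ms / 1000)

        def u_provider(rt_ms):               # slower targets are cheaper
            return min(1.0, rt_ms / 800)

        UTILITIES = [u_customer, u_provider]

        def negotiate(candidates):
            return max(candidates, key=lambda rt: min(u(rt) for u in UTILITIES))

        print("agreed response-time target:",
              negotiate(range(100, 1001, 50)), "ms")  # ~450 ms here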