Position Paper on Web Transactions
David Ingham {[email protected]},
Mark Little {[email protected]},
Savas Parastatidis {[email protected]},
James Webber {[email protected]},
Stuart Wheater {[email protected]}
Hewlett-Packard Arjuna Lab
Abstract
It is our contention that supporting transactional applications in the loosely-coupled world of Web services cannot be addressed solely by reusing traditional intranet-based transaction models. We intend to examine architectural characteristics of transactioning within a Web Services environment and to ultimately propose transactioning models which suit that problem domain.
Web Services are the building blocks of a new generation of distributed applications. Combining distinct Web Services to build an application is not fundamentally different from combining components, except for the fact that these “components” are extremely loosely coupled and often deployed by unrelated providers. Though the ability to build applications which span enterprises has existed for some time (e.g., using DCOM, CORBA, etc.), such applications have rarely crossed enterprise boundaries for a number of reasons, including the perceived cost of maintaining such infrastructure and security considerations.
The Web Services architecture is being promoted as being an enabling technology for distributed applications that are easy to build and maintain. The Simple Object Access Protocol (SOAP) provides a basic communication infrastructure that simplifies data exchange. However, the availability of a transport mechanism only addresses part of the problem; higher level protocols that build on top of SOAP need to be defined for interactions that are specific to particular application domains (e.g., ebXML). Of special interest to us is the ability to accurately orchestrate such interactions within the context of a Web Transaction.
Traditionally, distributed computing systems have used transactions to ensure the ACID properties (atomicity, consistency, isolation, and durability) of multiple units of work which are perceived as a single activity [1][2]. Unlike traditional deployments of component‑based distributed applications, Internet applications built around Web Services do not have the luxury of ownership of their building blocks. A description of a Web Service must therefore be sufficiently expressive to capture the transactional properties of the service, in a similar manner to EJB deployment descriptors. It follows that participants in a Web Transaction must support the necessary infrastructure to be driven by the protocol, as defined by a particular transactional model. We do not prescribe which transactional models will be used, since we are not of the opinion that one size fits all.
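By way of illustration only, such a description might expose the transactional capabilities of a service as declarative metadata. The following sketch models this as a plain Python structure; all field, operation, and service names here are our own invention, not any existing descriptor format:

```python
# Hypothetical sketch: a service description that advertises the
# transactional properties of a Web Service, in the spirit of an
# EJB deployment descriptor. All names are invented for illustration.
taxi_service_description = {
    "service": "TaxiBooking",
    "operations": {
        "bookTaxi": {
            # Does the operation participate in two-phase commit?
            "supports_two_phase_commit": False,
            # Can a completed booking be undone by a compensating action?
            "compensatable": True,
            "compensating_operation": "cancelTaxi",
        },
    },
}

def is_compensatable(description, operation):
    """Return True if the named operation advertises a compensator."""
    op = description["operations"][operation]
    return op.get("compensatable", False)
```

A coordinator driving the protocol could consult such metadata to decide which completion mechanism (two-phase commit, compensation, or neither) a given participant can support.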
It is our intention to architect a solution to the problem of enabling transactions in a Web Services environment. Furthermore, whilst we have extensive experience [3][4][5] to draw upon, we are not carrying any assumptions over into the Web Services transactions arena. We shall develop the architecture based on the characteristics of this new domain, rather than trying to force existing protocols into new problem domains. However, there will certainly be a significant proportion of Web Services that will be backed by database technology. Some of these services may choose to export their transactional properties (e.g., via an XA-like interface) through a Web Service interface. It is therefore important that the architecture developed is able to coordinate such services in addition to any extended transactional models (even less powerful models).
However, we also believe that there are benefits (both commercial and intellectual) that can be drawn from our experience: there are many resources, such as databases, that exist and which work within an ACID transaction model. We believe it is important to be able to support these resources in a Web Transaction. In addition, we believe that presenting a model which is easy to understand and reason about is also important; one of the reasons that ACID transactions have proven successful in the implementation of fault‑tolerant applications is that the model is simple to understand and use.
Web Services can only become of real commercial value if some degree of dependability can be achieved. The Web is rapidly being populated by service providers who wish to sell their products to the large potential customer base. However, there are still important security and fault‑tolerance considerations which must be addressed. One of these is the fact that the Web frequently suffers from failures which can affect both the performance and consistency of applications run over it [5]. For straightforward use cases such as a single client communicating with a single server for a zero-value interaction (such as obtaining timetable information etc.), transactions may not even be necessary. However, as soon as multiple Web Services are composed into applications, coordination becomes an important consideration. We see coordination as the means to reliably support interactions of any nature (e.g., B2C, B2B). There may not even be coordination in the traditional sense.
Before proceeding any further, we shall first define some terminology that we shall use in the rest of this paper:
· Atomic Transaction (transaction): a traditional ACID transaction.
· Atomic Action: an activity, or group of activities, that does not necessarily possess the guaranteed ACID properties. An atomic action still has the ‘all or nothing’ effect (i.e., failure does not result in partial work).
· Web Transaction: an activity, or group of activities, that is responsible for performing some application‑specific work. A Web Transaction may be an atomic transaction or an atomic action; we make no assumptions either way.
At the individual service level, providers have new concerns in addition to supporting legitimate users of their services. Unlike tightly coupled applications where components are used within a controlled run‑time environment that, amongst other things, may manage transactions, Web Service providers may not trust clients to use their services correctly. Whether this misuse happens through malice, stupidity, or oversight is unimportant. What is important is that clients can affect the state of the service with which they interact and can thus cause problems.
An example of such a problem involves the booking of taxis and restaurants. An application which provides for clients to be collected from home, taken to a restaurant and then returned safely home may interact with two Web Services, one for a taxi company and one which represents the restaurant. If the application books a taxi first, and then does not get a reservation with the restaurant, it should ensure that the corresponding taxi booking is also revoked. However, a poorly written or malicious application would leave the taxi company believing that it had secured a booking whilst the client no longer has any intention of using that booking, since the restaurant cannot offer an appropriate reservation (for the purposes of this example we have chosen to ignore payment issues).
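The scenario can be sketched as follows. The service interfaces and helper names here are hypothetical stand-ins; the point is only that, absent any coordination, it is the application alone that carries the burden of revoking the taxi booking when the reservation fails:

```python
class TaxiService:
    """Stand-in for the taxi company's Web Service."""
    def __init__(self):
        self.bookings = set()

    def book(self, client):
        self.bookings.add(client)
        return client            # booking reference

    def cancel(self, ref):
        self.bookings.discard(ref)

class RestaurantService:
    """Stand-in for the restaurant's Web Service."""
    def __init__(self, tables_free):
        self.tables_free = tables_free

    def reserve(self, client):
        if self.tables_free > 0:
            self.tables_free -= 1
            return True
        return False

def plan_evening(taxi, restaurant, client):
    """Book a taxi, then a table; revoke the taxi if no table is free."""
    ref = taxi.book(client)
    if restaurant.reserve(client):
        return True
    taxi.cancel(ref)             # the well-behaved path: compensate
    return False
```

A badly written application would simply omit the `taxi.cancel(ref)` step, leaving the taxi company with a booking its client has abandoned.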
A standard approach to solving this problem would be to use a transaction service with a two‑phase commit protocol. Under this scheme a transaction coordinator would first obtain an agreement in principle from each of the services before proceeding to commit the whole transaction. However, this model requires a certain degree of trust from the point of view of the services in that a transaction coordinator exists and that it will not leave the services hanging. Similarly, compensating actions could be used to ensure consistency, but the fact remains that some level of coordination (and thus a coordinator) is necessary and must be trusted to some degree. In the Web Services architecture, there is no such trust since we compose applications from services that we find in the UDDI directory and we may change service providers dynamically at run‑time. Thus services may not trust any third party (what would traditionally be called a coordinator) to ensure consistency. In short, services need mechanisms for protecting themselves, whilst application developers need mechanisms to ensure the properly synchronised completion or failure of a business transaction.
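A minimal sketch of the two-phase commit interaction described above might look like the following; the participant and coordinator interfaces are our own simplification, not any particular transaction service API:

```python
class Participant:
    """A service enlisted in the transaction."""
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.outcome = None

    def prepare(self):
        # Phase 1: vote on whether the work can be made permanent.
        return self.can_commit

    def commit(self):
        self.outcome = "committed"

    def rollback(self):
        self.outcome = "rolled back"

def two_phase_commit(participants):
    """Phase 1 gathers votes; phase 2 imposes a uniform outcome."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "rolled back"
```

Note that even this toy coordinator illustrates the trust problem: between `prepare` and the phase-2 message, each participant is blocked holding resources, relying on the coordinator not to leave it hanging.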
Of course, it is possible to pass transaction contexts (perhaps encoded as an XML document) between parties involved in a Web Transaction. With encryption for security, digital signatures for non-repudiation, and mechanisms to allow services to undo committed work, we could implement a Web Services transaction system that reflects best practice in current systems. It would be a significant challenge to do this in a generic fashion, and a still greater engineering challenge to encapsulate this detail away from application developers. Yet this kind of relatively heavyweight infrastructure is not in keeping with the lightweight feel of the Web Services architecture. It is our contention that a protocol (or protocol suite) for Web Transactions should be as minimalist and un‑intrusive as possible, yet be rich enough to support a suitably wide variety of transactional models. The challenge is to define an architecture that can accommodate different transactional models as the problem domain dictates.
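To make the idea of a propagated, tamper-evident transaction context concrete, the following sketch (the context fields and serialisation are invented for illustration) signs a context with a shared-secret HMAC; a real deployment would use public-key signatures to obtain genuine non-repudiation rather than mere integrity:

```python
import hashlib
import hmac
import json

def make_context(txid, coordinator_url, key):
    """Serialise a transaction context and attach an integrity check."""
    body = json.dumps({"txid": txid, "coordinator": coordinator_url},
                      sort_keys=True)
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "signature": sig}

def verify_context(context, key):
    """Recompute the HMAC and compare in constant time."""
    expected = hmac.new(key, context["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, context["signature"])
```

Even this small sketch hints at the engineering weight involved: key distribution, replay protection, and undo of committed work all sit behind the two functions shown here.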
Given the Web Services architecture, transactioning is not a trivial problem to solve. Since we are interacting with parties whom we do not know and probably do not trust (either from the point of view of the application, or of individual Web Services), current transactioning practices are inappropriate. In order to build an appropriate infrastructure, we need to consider the following:
· Transactionality, where transaction-like properties are observed for a Web Transaction, though without making any assumptions about the ACID properties of constituent Web Services;
· Performance and Denial-of-Service, whereby a service will not wait indefinitely for the actions of other parties to complete and may choose to discard any operations that it has performed in the context of the current atomic action (and possibly compensate or roll back, depending upon the extended transaction model);
· Identity, where a service can accurately identify its users and vice versa, and correspondingly attach permissions to those identities;
· Anonymity and privacy, whereby a user of a service will not be prone to invasion of privacy by that service;
· Non-repudiation, where the actions of a user or service cannot be denied in the event of arbitration;
· Versioning, whereby there is a requirement for formally deprecating services and retiring them from use gracefully, lest transactions fail on account of accessing long-dead Web Services.
This list is almost certainly incomplete. We have only begun to understand the implications of the move to a loosely‑coupled environment and the requirements Web Transactions place on “transactionality.” As our work progresses, we will doubtlessly encounter more situations that need remedying. For the moment however, the list gives us an idea of the kinds of problems that we are facing and is a useful starting point.
What our initial investigation shows is that one size does not fit all and different applications will require different qualities of “transactional” service. However, it is our belief that these different models share core functionality based upon activity signalling and distribution, and that it is possible to use the framework presented in [6][7] that allows middleware to manage complex Web Transactions. The framework provides an infrastructure to support the coordination and control of abstract, application‑specific entities. These entities (activities) may be transactional, they may use weaker forms of serialisability, or they may not be transactional at all. The different extended transaction models are mapped onto specific implementations that use this framework. It is in specifying these extended transaction models that most of our work on Web Transactions will take place.
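In outline, the framework's separation of generic signalling from model-specific behaviour can be sketched like this; the class names and signal sets below are illustrative simplifications of ours, not the interfaces defined in [6][7]:

```python
class Activity:
    """An abstract activity; the completion protocol is pluggable."""
    def __init__(self, name, protocol):
        self.name = name
        self.protocol = protocol   # maps a signal to a model-specific action
        self.log = []

    def signal(self, event):
        action = self.protocol.get(event, "ignore")
        self.log.append((event, action))
        return action

# Two different "transaction models" expressed purely as signal mappings.
two_phase = {"prepare": "vote", "commit": "make-durable",
             "rollback": "undo"}
compensation = {"complete": "record-compensator", "fault": "compensate"}

def broadcast(activities, event):
    """The coordinator merely distributes signals; it imposes no model."""
    return [a.signal(event) for a in activities]
```

The design choice this illustrates is that the signalling infrastructure stays constant while the interpretation of each signal (strict ACID, compensation-based, or something weaker) is supplied per activity.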
It is clear to us that Web Services are the first truly viable architecture which supports communication across enterprise boundaries. Whilst previous technologies theoretically could have achieved much the same effect, the simple fact of the matter is that they never gained widespread acceptance.
However, the fact that we now have the potential for inter-enterprise interactions means that we will, in practice, meet problems which we never encountered in closed distributed systems. It is likely that existing transaction models will not work well in the new architecture. If we are to attribute any reasonable level of dependability to transactions involving Web Services, we need to develop and standardise on protocols which address the issues described above, and which are extensible enough to overcome future issues as they arise.
[1] Bernstein, P. A. and E. Newcomer. 1997. “Principles of Transaction Processing”. San Francisco, CA, Morgan Kaufmann.
[2] Gray, J. and A. Reuter. 1993. “Transaction Processing: Concepts and Techniques”. San Francisco, CA, Morgan Kaufmann.
[3] J. J. Halliday, S. K. Shrivastava, and S. M. Wheater, “Implementing Support for Work Activity Coordination within a Distributed Workflow System”, Proceedings of the Third International Conference on Enterprise Distributed Object Computing (EDOC ’99), September 1999, pp. 116-123.
[4] M. C. Little and S. K. Shrivastava, “Java Transactions for the Internet”, the 4th Conference on Object-Oriented Technologies and Systems (COOTS'98), Santa Fe, New Mexico, USA, April 1998.
[5] M. C. Little, S. K. Shrivastava, S. J. Caughey and D. B. Ingham, “Constructing Reliable Web Applications Using Atomic Actions”, Proceedings of the Sixth International World Wide Web Conference, Santa Clara, USA, 7-11 April 1997.
[6] OMG, Additional Structuring Mechanisms for the OTS Specification, September 2000, document orbos/2000-04-02.
[7] “J2EE Activity Service for Extended Transactions”, Java Community Process JSR 095, http://java.sun.com/aboutJava/communityprocess/jsr/jsr_095_activity.html