Consider the everyday operation of withdrawing cash from an automated teller machine (ATM), an operation you perform frequently. You access your account, specify the amount to withdraw, and then receive cash from the machine. Yet even an operation this mundane involves multiple machines (the ATM, the bank mainframe, and probably a few other machines) and multiple databases (an accounts database, a money transfer database, an audit database, and so on), each of which may also reside on a machine of its own. At the ATM itself, the withdrawal involves both a software user interface and mechanical devices such as the card reader, keypad, bill delivery mechanism, and receipt printer.
The difficulty in developing an ATM application lies in the fact that all of these steps can succeed or fail independently of the others. For example, suppose the ATM can’t connect to the mainframe at the bank or for some reason cannot execute your request. Or, suppose there is a security problem (the wrong PIN code was entered) or the hardware fails (the ATM runs out of bills).
In addition, multiple users may access the bank’s system simultaneously. Their access and the changes they make to the system must be isolated from one another. For example, while you are withdrawing money at the ATM, your spouse could be accessing the account online and a teller could be doing a balance check for a loan approval.
Nevertheless, both you and the bank expect either all the operations involved in accomplishing the request to succeed, or all the operations to fail. Partial success or partial failure of a banking transaction is simply not acceptable; you don’t want the bank to deduct the money from your account but not dispense the bills, or to dispense the bills but not deduct the money from your account.
The expectation of an all-or-nothing series of operations characterizes many business scenarios. Enterprise-level services such as funds management, inventory management, reservation systems, and retail systems all share this requirement. A logical operation (such as a cash withdrawal) that complies with it is called a transaction.
The fundamental problem in implementing a transactional system is that executing all the operations necessary to complete the transaction requires transitioning between intermediate inconsistent system states—states that cannot themselves be tolerated as valid outcomes of the transaction. For example, an inconsistent state would result if you were to deduct money from one account but not credit it to another in a simple transfer of funds between the two accounts. In essence, an inconsistent state is any system-state that is the result of partial success or failure of the elements of one logical operation.
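The funds-transfer example can be sketched in a few lines of code. This is a minimal illustration of atomicity, not COM+ code; the account dictionary, the snapshot-based rollback, and the simulated failure are all assumptions made for the sake of the example.

```python
# Minimal illustration of atomicity (not COM+ code): a transfer either
# commits both steps or rolls back to the consistent starting state.

class OutOfBills(Exception):
    """Simulated hardware failure, e.g., the ATM running out of bills."""

def transfer(accounts, src, dst, amount, fail_after_debit=False):
    snapshot = dict(accounts)          # remember the last consistent state
    try:
        accounts[src] -= amount        # step 1: debit one account
        if fail_after_debit:           # simulated mid-transaction failure
            raise OutOfBills("ATM ran out of bills")
        accounts[dst] += amount        # step 2: credit the other account
    except Exception:
        accounts.clear()
        accounts.update(snapshot)      # roll back: no partial success survives
        raise

accounts = {"checking": 100, "savings": 0}
try:
    transfer(accounts, "checking", "savings", 40, fail_after_debit=True)
except OutOfBills:
    pass
assert accounts == {"checking": 100, "savings": 0}   # rolled back, consistent
transfer(accounts, "checking", "savings", 40)
assert accounts == {"checking": 60, "savings": 40}   # committed, consistent
```

Between the debit and the credit the system is in exactly the inconsistent state described above; the rollback guarantees that this state is never visible as an outcome.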
One approach to addressing the complex failure scenarios of a transaction is to add error-handling code to the business logic of your application. However, such an approach is impractical. A transaction can fail in numerous ways. In fact, the number of failure permutations grows exponentially with the number of objects and resources participating in the transaction. You are almost certain to miss some of the rare and hard-to-reproduce failure situations. Even if you manage to cover them all, what will you do when the system evolves—when the behavior of existing components changes and more components and resources are added, thereby multiplying the number of errors you have to deal with? The resulting code will be a fragile solution. Instead of adding business value to the components, you will spend most of your time writing error-handling code, performing testing and debugging, and trying to reproduce bizarre failure conditions. Additionally, the sheer volume of error-handling code will introduce a serious performance penalty.
The proper solution is not to have transaction error-handling logic in your code at all. Suppose the transaction could be abstracted enough that your components could focus on executing their business logic and let some other party monitor the transaction's success or failure. That third party would also ensure that the system was kept in a consistent state and that, in the case of a failed transaction, the changes made to the system were rolled back.
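The separation of concerns described here can be sketched with a hypothetical `transaction` helper: the business logic contains no error-handling or rollback code of its own, and a surrounding "third party" monitors the outcome. This is a conceptual sketch, not the COM+ API; the snapshot-and-restore rollback is an assumption chosen for brevity.

```python
from contextlib import contextmanager

@contextmanager
def transaction(state):
    """Hypothetical third party that monitors success or failure.

    The component inside the `with` block runs pure business logic;
    this coordinator, not the component, performs the rollback.
    """
    snapshot = dict(state)
    try:
        yield state                  # run the business logic
    except Exception:
        state.clear()
        state.update(snapshot)       # failed transaction: roll back changes
        raise

balances = {"checking": 100}
try:
    with transaction(balances) as s:
        s["checking"] -= 50          # business logic: no error handling here
        raise RuntimeError("mainframe unreachable")   # simulated failure
except RuntimeError:
    pass
assert balances["checking"] == 100   # coordinator restored consistency
```

The component's code stays focused on its task; all the failure permutations are handled once, in the coordinator.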
That solution is exactly the idea behind the COM+ transaction management service. COM+ simplifies the use of transactions in the enterprise environment: it provides administrative configuration of transactional support for your components, auto-enlists the resources participating in a transaction, and manages and executes transactions across machine boundaries. The COM+ transaction management service is based on the MTS transaction management model, with a few improvements and innovations.
Before we discuss COM+ transaction support, you need to understand the basics of transaction processing, the fundamental properties that every transaction must have, and some common transaction scenarios. If you are already familiar with the basic transaction concepts, feel free to skip directly to Section 4.4 later in this chapter.
Formally, a transaction is a set of potentially complex operations that will all succeed or fail as one atomic operation.
Transactions were first introduced in the early 1960s by database vendors. Today, other resource products, such as messaging systems, support transactions as well. Traditionally, the application developer programmed against a complex Transaction Processing Monitor (TPM)—a third party that coordinated the execution of transactions across multiple databases and applications. The idea behind a TPM is simple: because any object participating in a transaction can fail and because the transaction cannot proceed without having all of them succeed, each object should be able to help determine success or failure of the entire transaction. This is called voting on the transaction’s outcome. While a transaction is in progress, the system can be in an inconsistent state. When the transaction completes, however, it must leave the system in a consistent state—either the state it was in before the transaction executed or a new one.
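The voting protocol can be sketched as a toy coordinator in the spirit of a TPM: each participant first votes on whether it can commit, and the transaction commits only if every vote is "yes"; otherwise every participant aborts. The class and method names (`Participant`, `prepare`, `commit`, `abort`) are illustrative, not a real TPM API.

```python
# Toy TPM-style coordinator: the transaction's outcome is decided by
# unanimous vote, so all participants commit or all abort together.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.outcome = None

    def prepare(self):               # phase 1: cast a vote
        return self.can_commit

    def commit(self):                # phase 2a: make changes permanent
        self.outcome = "committed"

    def abort(self):                 # phase 2b: undo any changes
        self.outcome = "aborted"

def run_transaction(participants):
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:           # any "no" vote dooms the transaction
        p.abort()
    return "aborted"

ps = [Participant("accounts db"), Participant("audit db", can_commit=False)]
assert run_transaction(ps) == "aborted"
assert all(p.outcome == "aborted" for p in ps)   # unanimous outcome
```

A single dissenting vote aborts the whole transaction, which is exactly why the system can pass through inconsistent intermediate states yet always end in a consistent one.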
Transactions are so crucial to the consistency of an information system that, in general, whenever you update persistent storage (usually a database), you should do so under the protection of a transaction. Another important transaction quality is its duration. Well-designed transactions are of short duration, because the speed with which your application can process transactions has a major impact on its scalability and throughput. For example, imagine an online retail store. The store application should process customer orders as quickly as possible and manage every client’s order in a separate transaction. The faster the transaction executes, the more customers per second the application can service (throughput) and the better prepared the application is to scale up to a higher number of customers.
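The relationship between transaction duration and throughput is simple arithmetic. As a back-of-the-envelope sketch (assuming transactions are processed one at a time, with no overlap or overhead):

```python
# Upper bound on throughput when transactions run one at a time:
# throughput (tx/s) = 1 / duration (s). Halving the duration
# doubles the number of customers the application can service.

def max_throughput(tx_duration_s):
    return 1.0 / tx_duration_s

assert max_throughput(0.05) == 20.0   # 50 ms per transaction -> 20 tx/s
assert max_throughput(0.025) == 40.0  # 25 ms per transaction -> 40 tx/s
```

Real systems process transactions concurrently, but the principle stands: the shorter the transaction, the higher the achievable throughput.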