
Refresh

Refreshing a workspace means bringing the data in a workspace into agreement with the same data in the overall database. You could use this capability if you chose to create a workspace with a snapshot of the data at the end of a day. At midnight, you would refresh the workspace to make it reflect the data from the previous day.
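As a minimal sketch of how a refresh might be invoked, assuming a version-enabled workspace named DAILY_SNAPSHOT already exists (the workspace name is purely illustrative), the DBMS_WM package's RefreshWorkspace procedure pulls the parent workspace's changes into it:

    -- Bring changes committed in the parent workspace into DAILY_SNAPSHOT
    BEGIN
      DBMS_WM.RefreshWorkspace ('DAILY_SNAPSHOT');
    END;
    /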

Merge

A merge operation rolls changes made in a workspace into its parent workspace.
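A corresponding sketch of a merge, again using the hypothetical DAILY_SNAPSHOT workspace, rolls the workspace's changes up into its parent:

    -- Apply changes made in DAILY_SNAPSHOT to its parent workspace
    BEGIN
      DBMS_WM.MergeWorkspace ('DAILY_SNAPSHOT');
    END;
    /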

As you can imagine, both the refresh and the merge operations could end up with conflicts between data values in the workspace and its parent. Workspace management keeps track of conflicts on a per-table basis; you can resolve the conflicts manually.
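As a sketch of manual resolution, assuming a version-enabled table named ORDERS and the hypothetical DAILY_SNAPSHOT workspace (conflicting rows can be examined in the ORDERS_CONF view that Workspace Manager maintains), a session might choose to keep the parent's values:

    BEGIN
      DBMS_WM.BeginResolve ('DAILY_SNAPSHOT');
      -- Keep the parent workspace's values for the conflicting ORDERS rows
      DBMS_WM.ResolveConflicts ('DAILY_SNAPSHOT', 'ORDERS',
                                'order_id IS NOT NULL', 'PARENT');
      COMMIT;
      DBMS_WM.CommitResolve ('DAILY_SNAPSHOT');
    END;
    /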

Workspace Enhancements

Workspace Manager is tightly integrated with the Oracle database. Oracle Database 10g Workspace Manager enhancements included the ability to export and import version-enabled tables, to use SQL*Loader to bulk-load data into version-enabled tables, to trigger events based on workspace operations, and to define workspaces that are continually refreshed.
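For example, a continually refreshed workspace can be requested when the workspace is created; the call below is only a sketch, assuming the DBMS_WM.CreateWorkspace overload that accepts an isrefreshed flag, and the workspace name is illustrative:

    BEGIN
      -- TRUE for the isrefreshed argument requests a continually refreshed workspace
      DBMS_WM.CreateWorkspace ('REPORTING_WS', TRUE,
                               'Continually refreshed child of the LIVE workspace');
    END;
    /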

Oracle Database 11g continues the stream of enhancements to workspaces, providing support for optimizer hints and more data maintenance operations on workspace-enabled tables.


CHAPTER 9

Oracle and Transaction Processing

The value of information systems is clear from the ever-increasing number of transactions processed by the world’s databases. Transactions form the foundation of business computing systems. In fact, transaction processing (TP) was the impetus for business computing as we know it today. The batch-oriented automation of core business processes like accounting and payroll drove the progress in mainframe computing through the 1970s and 1980s. Along the way, TP began the shift from batch to users interacting directly with systems, and online transaction processing (OLTP) was born. In the 1980s the computing infrastructure shifted from large centralized mainframes with dumb terminals to decentralized client/server computing with graphical user interfaces (GUIs) running on PCs and accessing databases on other machines over a network.

The client/server revolution provided a much better user interface and reduced the cost of hardware and software, but it also introduced additional complexity in systems development, management, and deployment. After a decade of use, system administrators were being slowly overwhelmed by the task of managing thousands of client machines and dozens of servers, so the latter half of the 1990s saw a return to centralization, including the grid (introduced in Chapter 1). Throughout all of these shifts, Oracle databases have continued to leverage their core architecture, along with constant enhancements, to service the ever-growing load of transactions.

This chapter looks at all of the features of the Oracle database that contribute to its ability to handle large transaction loads. Although many of the specific features covered in this chapter are touched upon in other chapters of this book, this chapter examines all of these features in light of their use in large OLTP systems.

OLTP Basics

Before we discuss how Oracle specifically handles OLTP, we’ll start by presenting a common definition of online transaction processing.


What Is a Transaction?

The concept of a transaction and the relevant Oracle mechanics for dealing with transactions were discussed in Chapter 8. To recap that discussion, a transaction is a logical unit of work that must succeed or fail in its entirety. Each transaction typically involves one or more Data Manipulation Language (DML) statements such as INSERT, UPDATE, or DELETE, and ends with either a COMMIT to make the changes permanent or a ROLLBACK to undo the changes.
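For example, a simple funds transfer (the table and column names here are purely illustrative) forms one transaction in which both updates must succeed together or not at all:

    UPDATE accounts SET balance = balance - 500 WHERE account_id = 101;
    UPDATE accounts SET balance = balance + 500 WHERE account_id = 202;
    COMMIT;       -- make both changes permanent
    -- ROLLBACK;  -- or undo both changes if anything went wrong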

The industry bible for OLTP, Transaction Processing: Concepts and Techniques, by Jim Gray and Andreas Reuter (Morgan Kaufmann; see Appendix B), introduced the notion of the ACID properties of a transaction. A transaction must be the following:
Atomic

The entire transaction succeeds or fails as a complete unit.

Consistent

A completed transaction leaves the affected data in a consistent or correct state.

Isolated

Each transaction executes in isolation and doesn’t affect the states of others.

Durable

The changes resulting from committed transactions are persistent.

If transactions execute serially, one after the other, their ACID properties can be guaranteed relatively easily. Each transaction starts with the consistent state left by the previous transaction and, in turn, leaves a consistent state for the next transaction. Concurrent usage introduces the need for sophisticated locking and other coordination mechanisms to preserve the ACID properties of concurrent transactions while delivering throughput and performance. Chapter 8 covered Oracle's handling of locking and concurrency in depth.
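As one small illustration of such coordination (the table name is illustrative), a transaction can explicitly lock the rows it intends to change, so that concurrent writers of those rows wait until it commits or rolls back:

    -- Lock the row for account 101; other sessions updating it will wait
    SELECT balance
      FROM accounts
     WHERE account_id = 101
       FOR UPDATE;

    UPDATE accounts SET balance = balance - 500 WHERE account_id = 101;
    COMMIT;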

What Does OLTP Mean?

Online transaction processing can be defined in different ways: by the general characteristics shared by OLTP systems, or in contrast to more traditional batch processing.

General characteristics

Most OLTP systems share some of the following general characteristics:
High transaction volumes and large user populations

OLTP systems are the key operational systems for many companies, so these systems typically support the highest volume and largest communities of any systems in the organization.


Well-defined performance requirements

OLTP systems are central to core business operations, so users must be able to depend on a consistent response time. OLTP systems often involve Service Level Agreements that state the expected response times.

High availability

These systems are typically deemed mission-critical with significant costs resulting from downtime.

Scalability

The ability to increase transaction volumes without significant degradation in performance allows OLTP systems to handle fluctuations in business activity.

In short, OLTP systems must be able to deliver consistent performance at any time, regardless of system load. Anything that affects these core systems can produce a ripple effect throughout your entire organization, affecting both revenue and profitability.

Online versus batch

Online transaction processing implies direct and conversational interaction between the transaction processing system and its users. Users enter and query data using forms that interact with the backend database. Editing and validation of data occur at the time the transactions are submitted by users.

Batch processing occurs without user interaction. Batches of transactions are fed from source files to the operational system. Errors are typically reported in exception files or logs and are reviewed by users or operators later on. Virtually all OLTP systems have a batch component: jobs that can execute in off-peak hours for reporting, payroll runs, posting of accounting entries, and so on.
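As a sketch of how such a batch component can be scheduled inside the database, assuming a stored procedure named POST_ACCOUNTING_ENTRIES already exists (both the procedure name and the job name are illustrative), the DBMS_SCHEDULER package can run it during off-peak hours:

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB (
        job_name        => 'NIGHTLY_POSTING',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'POST_ACCOUNTING_ENTRIES',
        repeat_interval => 'FREQ=DAILY; BYHOUR=2',   -- run at 2:00 a.m. each day
        enabled         => TRUE);
    END;
    /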

Many large companies have batch-oriented mainframe systems that are so thoroughly embedded in the corporate infrastructure that they cannot be replaced or removed. A common practice is to “frontend” these legacy systems with OLTP systems that provide more modern interfaces. Users interact with the OLTP system to enter transactions. Batch files are extracted from the OLTP system and fed into the downstream legacy applications.

Once the batch processing is done, extracts are produced from the batch systems and are used to refresh the OLTP systems. This approach gives users a more sophisticated interface, with online validation and editing, while preserving the flow of data through the entrenched batch systems. While this process seems costly, it's typically more attractive than the major surgery that would replace older systems. To compound the difficulty, in some cases the documentation of these older systems is incomplete and the employees who understood the inner workings have retired or moved on.


The financial services industry is a leader in information technology for transaction processing, so this notion of feeding legacy downstream applications is very common in banks and insurance companies. For example, users often enter insurance claims into frontend online systems. Once all the data has been entered, if the claim has been approved, it’s extracted and fed into legacy systems for further processing and payment.

Oracle features such as transportable tablespaces and Streams, discussed in Chapter 13 of this book, are aimed in part at providing the functionality required by distributed OLTP systems in a more timely fashion than traditional batch jobs.

OLTP Versus Business Intelligence

Mixed workloads—OLTP and reporting—are the source of many performance challenges and the topic of intense debate. The data warehousing industry had its genesis in the realization that OLTP systems could not realistically provide the needed transaction throughput while supporting the enormous amount of historical data and ad hoc query workload that business analysts needed for things like multiyear trend analysis.

The issue isn’t simply one of adequate machine horsepower; rather, it’s the way data is modeled, stored, and accessed, which is typically quite different. In OLTP, the design centers on analyzing and automating business processes to provide consistent performance for a well-known set of transactions and users. The workload revolves around large numbers of short and well-defined transactions—with a fairly significant percentage of write transactions.

Business intelligence typically operates on larger data stores that frequently are assembled from multiple data sources and contain long histories. The schema design for data warehouses is usually very different from the fully normalized design best suited for OLTP data stores. And data warehouses can support ad hoc queries that, because of their complexity and the amount of data accessed, can place significant loads on a system with only a handful of requests.

Reporting and query functions are part of an OLTP system, but the scope and frequency are typically more controlled than in a data warehouse environment. For example, a banking OLTP system will include queries for customer status and account balances, but not multiyear transaction patterns.

The OLTP system typically provides forms that allow well-targeted queries that are executed efficiently and don't consume undue resources. However, hard and fast rules (for example, that OLTP systems don't include extensive query facilities) don't necessarily hold true. The I/O performed by most OLTP systems tends to be approximately 70–80 percent read and 20–30 percent write. Most transactions involve the querying of data, such as product codes, customer names, account balances, inventory levels, and so on. Users submitting tuned queries for specific business functions are a key part of OLTP. Ad hoc queries across broad data sets are not.


Business intelligence data warehousing systems and OLTP systems could access much of the same data, but these types of systems also typically have different requirements in terms of CPU, memory, and data layout, which makes supporting a mixed workload less than optimal for both types of processing. Real Application Clusters, with dynamic service provisioning since Oracle Database 10g, makes it possible to allocate individual nodes for individual workloads. It also makes it more feasible to deploy these mixed workloads to a single database (albeit with multiple database instances).

Oracle’s OLTP Heritage

Oracle has enjoyed tremendous growth as the database of choice for OLTP in the midrange-computing environment. Oracle6 introduced nonescalating row-level locking and read consistency (two of the most important of Oracle's core OLTP features), but Oracle7 was really the enabler for Oracle's growth in OLTP. Oracle7 introduced many key features, including the following:

• Multi-Threaded Server (MTS), now known as shared server

• Shared SQL

• Stored procedures and triggers

• XA support

• Distributed transactions and two-phase commits

• Data replication

• Oracle Parallel Server (OPS)*

Oracle8 enhanced existing functionality and introduced additional OLTP-related features including the following:

• Connection pooling

• Connection multiplexing

• Data partitioning

• Advanced Queuing (AQ)

• Index-organized tables

• Internalization of the Distributed Lock Manager (DLM) for Oracle Parallel Server

• Internalization of the triggers for replicated tables and parallel propagation of replicated transactions

* OPS was actually available for DEC VMS in 1989 and for NCR Unix with the last production release of Oracle6 (version 6.0.36), but it became widely available, more stable, and more popular in Oracle7.


Oracle8i provided the following additional enhancements and technologies for OLTP:

• Support for Java internally in the database kernel

• Support for distributed component technologies: CORBA V2.0 and Enterprise JavaBeans (EJB) v1.0

• Publish/subscribe messaging based on Advanced Queuing

• Online index rebuild and reorganization

• Database Resource Manager (DRM)

• Use of a standby database for queries

• Internalization of the replication packages used to apply transactions at the remote sites

Oracle9i continued this trend with the introduction of Real Application Clusters, which extended the benefits of Oracle Parallel Server to OLTP applications. Since Oracle Database 10g, the capabilities of Real Application Clusters support deployment to a new computing model, grid computing. But many of the capabilities that enable OLTP with Oracle have been core to the database product for many years.
