Chapter 11. Partitioning for OPS

Partitioning is the process of designing database applications so that OPS instances running on different nodes access mutually exclusive sets of data. This reduces contention for the same data blocks by multiple instances. The end result is that pinging is reduced and the OPS system runs more efficiently.

Partitioning must be done when a database application, or set of database applications, is designed. You have to analyze the data access requirements of each application you are designing and assign applications to OPS nodes so that contention for the same data from multiple nodes is minimized. This chapter describes three common approaches to partitioning in an OPS environment.

When Is Partitioning Needed?

In an OPS environment, several nodes access a shared database. If multiple database instances access the same set of data objects, those objects end up in the buffer cache of each of those instances. The system must then synchronize the buffer caches of the different instances using Parallel Cache Management (PCM) locks. The use of these locks results in pinging, a process (introduced in Chapter 8) that carries substantial overhead. The result is a decrease in overall database performance.
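Before partitioning, it is useful to confirm which objects are actually being pinged between instances. The following query is a minimal sketch, assuming an OPS release that exposes the V$PING dynamic performance view with FORCED_READS and FORCED_WRITES columns; column names vary somewhat between releases, so check your release's documentation before relying on it:

    -- List the segments whose blocks are most frequently forced out of
    -- one instance's cache on behalf of another instance (pinged).
    SELECT name, kind, file#, block#,
           forced_reads, forced_writes
      FROM v$ping
     WHERE forced_reads + forced_writes > 0
     ORDER BY forced_reads + forced_writes DESC;

Objects that appear near the top of such a listing are the first candidates to consider when deciding which applications to assign to which nodes.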

Chapter 8 described two types of pings: false pings and true pings. False pings result when multiple OPS instances are contending for the same PCM locks but not the same database blocks. You can reduce false pings ...
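One common way to adjust how PCM locks map to data blocks is the GC_FILES_TO_LOCKS initialization parameter. The fragment below is a minimal sketch; the file numbers and lock counts are illustrative assumptions, not recommendations, and the exact syntax should be verified against your release's documentation:

    # init.ora fragment (hypothetical file numbers and lock counts)
    # Give datafile 1 a pool of 500 hashed PCM locks, datafile 2 a pool of 400,
    # and datafiles 3 through 5 a pool of 1000 locks each, so that blocks from
    # different files no longer share PCM locks and false pings become less likely.
    GC_FILES_TO_LOCKS = "1=500:2=400:3-5=1000EACH"

True pings, by contrast, involve genuine contention for the same blocks by multiple instances, and it is these that the partitioning approaches in this chapter are designed to reduce.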
