The Blog from the DBA Classroom

By: Joel Goodman


Don’t be a Crash Test Dummy

Posted by Joel Goodman on 28/02/2012

During a recent Oracle 11g Tuning Course in London, a discussion of performance problems related to locking and resource serialisation arose during the lessons dealing with contention. I was then asked by a student to summarise the types of serialisation that occur and to describe the worst types that I find at customers. I replied that “in-doubt” locks on tables involved in distributed transactions have created the worst cases that I have seen. I will focus on this type of lock later in this post. But first, the answer to the first question:

There are different types of serialisation mechanisms for different purposes:

1. Latches – used to protect low level memory structures or access to certain sections of code. Latches are held for very short durations and have been described very nicely in numerous blogs. A good in-depth discussion may be found here in a white paper by David Gornshtein and Boris Tamarkin. Latches are typically held for microseconds and no longer have any documented parameters to control or administer them. Monitoring may be done using GUI tools or by using dynamic performance views such as:

  1. V$LATCH

2. Mutexes – used in more recent releases for library cache management, as they are faster and use less memory than latches. Mutexes, like latches, are typically held for short durations and have no documented controlling parameters. They may be monitored using dynamic performance views such as:

  1. V$MUTEX_SLEEP

3. ENQUEUES – used to control access to resources for longer periods of time. Locks on these resources are obtained using enqueues. Unlike mutexes or latches, they may be held for long periods by the owning session, possibly until the session commits. A good presentation on locks, latches and mutexes by my colleague Tanel Poder may be found here. DBAs can control this potential contention problem by having sessions that hold locks sniped after a period of time, either by setting IDLE_TIME in a profile (the older, kernel-based implementation) or by using the Database Resource Manager, which allows timeouts to be set differentially for sessions that currently block other sessions and those that do not. Locks may be monitored using dynamic performance views such as:

  1. V$LOCK
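
As a sketch of the kind of monitoring possible with this view, the following query (assuming you have access to V$LOCK) lists blocking and waiting sessions by joining V$LOCK to itself on the resource identifiers:

```sql
-- Sessions holding a lock (BLOCK = 1) on a resource that another
-- session is currently waiting for (REQUEST > 0 on the same resource)
SELECT h.sid   AS holder_sid,
       w.sid   AS waiter_sid,
       h.type  AS lock_type,
       h.id1,
       h.id2
FROM   v$lock h
JOIN   v$lock w
  ON   h.id1  = w.id1
 AND   h.id2  = w.id2
 AND   h.type = w.type
WHERE  h.block   = 1
AND    w.request > 0;
```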

4. User Defined Locks – also known as UL lock types, these locks are created and managed using the DBMS_LOCK package. While it is beyond the scope of this post to discuss the details, one unusual aspect of user managed locks is that they may persist across commits. This is done by setting the RELEASE_ON_COMMIT boolean parameter of the DBMS_LOCK.REQUEST function to FALSE, which may lead to contention problems given the level and duration of locking involved. DBAs should ideally know the purpose of these locks, and their likely duration, in order to monitor them effectively. Monitoring of these locks is done using the same view as for ENQUEUES listed above.
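
As an illustration (the lock name here is arbitrary, not from any real application), a minimal PL/SQL sketch of taking a user lock that survives commits might look like this:

```sql
DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  -- Map an arbitrary lock name to a lock handle
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname   => 'MY_APP_LOCK',
                            lockhandle => l_handle);

  -- Request the lock in exclusive mode; release_on_commit => FALSE
  -- means the UL lock persists across COMMITs until explicitly
  -- released or the session ends
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 10,
                                release_on_commit => FALSE);
  -- l_status = 0 means success, 1 means timeout

  COMMIT;  -- the UL lock is still held at this point

  -- Explicit release when the work is done
  l_status := DBMS_LOCK.RELEASE(lockhandle => l_handle);
END;
/
```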

5. “In Doubt” locks on tables – these occur during the two-phase commit process that takes place when distributed transactions over database links are being committed. A good discussion of two-phase commit processing may be found in the Oracle documentation. What is unusual here is that an “in doubt” transaction lock remains on the table, returning an error on reads AND writes, until the “in doubt” state is resolved.

When a network, node or instance failure prevents a distributed transaction from completing a “two phase commit” process that is already underway, one or more of the sites involved in the coordination of the commit may not know the disposition at other sites. If a site is in the “prepare” state, then it must wait to be told whether to commit or to roll back updates to any local objects involved in the distributed transaction. Until this decision is made, the objects remain locked. But since it is not known whether or not rows involved in the transaction will be committed, the lock blocks both updates AND reads!! This is quite a serious lock, as tables in this state may not even be read. Attempting to query a table in such a state results in an

ORA-01591: lock held by in-doubt distributed transaction x.yy.zzzz

error, where x.yy.zzzz represents the Global Transaction Identifier of the distributed transaction.

Normally if an instance or node failure occurs, it is resolved quickly, when connectivity is re-established amongst the database instances involved in the distributed transaction. This process is managed using the following components:

  1. The RECO background process, also known as the recoverer process
  2. The DBA_2PC_PENDING Data Dictionary view – based on sys.pending_trans$ base table
  3. The DBA_2PC_NEIGHBORS  Data Dictionary view – based on sys.pss1$ base table

RECO on each instance will handshake with the other RECO processes, and a decision will be made on any “in doubt” resources based on whether the transaction did indeed commit or roll back. Once the affected databases perform these actions, the affected tables are unlocked.
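
To see any transactions awaiting resolution, the two dictionary views above can be queried directly, for example:

```sql
-- Pending distributed transactions and their current 2PC state
SELECT local_tran_id, global_tran_id, state, mixed
FROM   dba_2pc_pending;

-- The incoming and outgoing connections used for each transaction
SELECT local_tran_id, in_out, database, interface
FROM   dba_2pc_neighbors;
```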

But if there is a delay in re-establishing connectivity, tables may remain locked for long periods of time: minutes, hours or more, because RECO cannot make decisions if one or more of the instances is not available. If one of the sites has gone down due to a hardware problem, for example, and the resolution time required is longer than is acceptable, then DBAs may need to resolve this manually by performing either a force commit or a force rollback of the global transaction on the surviving instances and databases.
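
The manual resolution itself uses the local transaction ID reported by DBA_2PC_PENDING (the ID shown here is a made-up example):

```sql
-- Decide the fate of the in-doubt transaction manually
COMMIT FORCE '1.21.17';
-- or, if the other sites rolled back:
-- ROLLBACK FORCE '1.21.17';

-- If the pending entry lingers after forcing, it can be purged
EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('1.21.17');
```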

Oracle provides a way to practice these manual “in doubt” resolutions on test systems by using two separate but related features:

  • Disabling the RECO process
  • Performing two-phase commit “crash tests”

Disabling the RECO process

This is done by issuing the following statement when logged in as SYS, SYSTEM or as a user with an appropriate privilege.

alter system disable distributed recovery;

The RECO process remains up, but does not participate in any automated “in doubt” resolution. By doing this, a DBA may simulate numerous failures using the crash tests, without having to shut down any instances.
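
When the testing is finished, automatic resolution is switched back on with the matching statement:

```sql
alter system enable distributed recovery;
```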


The Crash Tests

These tests are performed by using specially worded comments in the COMMIT statement, which supports comments for the purpose of general documentation in the code. There are many possible crash tests, and DBAs are better off if all the scenarios that might apply are tested.

The commit statement for crash tests has the following form:

commit comment 'ORA-2PC-CRASH-TEST-<nn>';

where nn is the number of the crash test.

To understand the tests, note that the participants in Oracle two-phase commit processing may play one of a number of roles in the proceedings:

1. Global Transaction Coordinator – This is the database instance that issues the commit statement and to which the client or middle tier is connected.

2. Local Transaction Coordinator – This is a database instance that issues prepare, commit or rollback statements to database instances not connected directly to the Global Transaction Coordinator. This occurs if one or more of the databases directly connected to by the Global Transaction Coordinator connect onwards in a “daisy chain” fashion to other remote databases.

3. Commit Point Site – This is the instance that has the highest value for the COMMIT_POINT_STRENGTH parameter. The database that is the commit point site is never asked to prepare. All the other sites are asked to prepare, and if they all do so successfully, then a commit is sent to the commit point site. Usually this should be the site where most of the updates are done, since this saves preparing at the busiest site while there is still a possibility of rolling back; preparing forces a flush of the redo log buffer. If there are only two databases, and the commit point strengths are equal, then the remote site is asked to prepare and the local site acts as both the global transaction coordinator and the commit point site.
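
Which database wins the commit point role can be checked on each site by querying the parameter:

```sql
-- COMMIT_POINT_STRENGTH defaults to a platform-specific value;
-- the site with the highest value becomes the commit point site
SELECT name, value
FROM   v$parameter
WHERE  name = 'commit_point_strength';
```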

Note also that the global transaction coordinator only has visibility of remote sites that are directly connected via database links. If there is a daisy-chain, then a local transaction coordinator is also used. For example if  database instance A connects to database instances B and C, and database instance B in turn connects to database instance D, then database instance D can not be the commit point site.

Assume that database A is the global transaction coordinator, database B is the commit point site, and database D is “daisy chained” from database B. When database instance A sends a commit to database instance B, database instance B must do the following:

  • Send a prepare request to database instance D
  • Get a positive reply to the prepare
  • Issue a commit to database instance D
  • Get a positive reply for the commit
  • Commit its own resources
  • Return a positive result for the commit to database A

If database B is not the commit point site, then it gets a prepare and later on a commit from the global transaction coordinator.

Here is what it must do for the prepare:

  • Send a prepare request to database instance D
  • Get a positive reply to the prepare
  • Prepare its own resources
  • Return a positive result for the prepare to database A

And here is what it must do for the commit:

  • Send a commit request to database instance D
  • Get a positive reply to the commit
  • Commit its own resources
  • Return a positive result for the commit to database A

Note the following too:

  • If any site fails to prepare then all sites will be instructed to roll back by the global or the local coordinator.
  • Once a transaction is either committed or rolled back, then a forget message is propagated to all the sites so that they may remove the appropriate rows from sys.pending_trans$ and sys.pss1$

Now we can look at the crash tests with their associated number to be placed in the comment:

  1. Crash commit point site after collect
  2. Crash non-commit point site after collect
  3. Crash non-commit point site before prepare
  4. Crash non-commit point site after prepare
  5. Crash commit point site before commit
  6. Crash commit point site after commit
  7. Crash non-commit point site before commit
  8. Crash non-commit point site after commit
  9. Crash commit point site before forget
  10. Crash non-commit point site before forget
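
As a rough sketch of how one of these tests might be driven (the table name, database link and transaction ID are hypothetical, and distributed recovery is assumed to have been disabled as described above), crash test 5 kills the commit point site before it commits:

```sql
-- Update a remote table over a database link, then trigger
-- crash test 5: "crash commit point site before commit"
UPDATE emp@remote_db SET sal = sal * 1.1 WHERE deptno = 10;

COMMIT COMMENT 'ORA-2PC-CRASH-TEST-5';
-- The commit fails, leaving the transaction in doubt at the remote site

-- Inspect the in-doubt transaction, then resolve it manually
SELECT local_tran_id, state FROM dba_2pc_pending;
ROLLBACK FORCE '1.21.17';   -- hypothetical local transaction ID
```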

To make this a bit simpler I have a demo that I use in the classroom. The script and the spooled output may be downloaded.

Have a go and don’t be a crash test dummy!

