The Blog from the DBA Classroom

By: Joel Goodman

Talk the Talk

Posted by Joel Goodman on 30/08/2013


I’m pleased to say that I am now on Twitter as JoelJGoodman.

I am also pleased to add that I have been confirmed as a speaker at the DOAG conference, where I will speak on Global Resource Management in RAC.

This is on 19/11/2013 at 15:00 CET.

I have been working on various certification exams and am pleased that the Oracle Linux 6 Certified Associate exam is now available for Beta testing.

You can find details here: https://blogs.oracle.com/certification/entry/0925_02?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+orana+(OraNA)

The Oracle Linux Certified Professional exam will eventually follow.

I will also be presenting at the Oracle Database Server SIG in Thames Valley Park on October 8th, on an Oracle Database 12c topic yet to be decided. I’ll post an update on Twitter when the topic is confirmed.

Looking forward to meeting up at the conferences or in Oracle courses.

Joel

London

30/08/2013

Posted in Oracle | Leave a Comment »

User Groups can be Abstract

Posted by Joel Goodman on 03/08/2013


I have had confirmation from two user group conferences for this autumn:

1. DOAG – where I shall present a talk on Global Resource Coordination in a RAC Environment

2. UKOUG – where I shall present a Parallel Execution Master Class and host two round table discussions:

2.1 Grid Infrastructure and RAC Round Table

2.2 Engineered Systems Round Table

I am also pleased that some of my Oracle colleagues will be at these conferences.

Uwe Hesse from Oracle University is presenting at both conferences on DBA topics.

Iloon Ellen-Wolff from Oracle Support is presenting at the UKOUG on the APEX Listener.

I look forward to seeing some of my friends and colleagues from past conferences and courses.

Note: I am now on Twitter as JoelJGoodman

Posted in Oracle, User Group Presentations | Leave a Comment »

Speaking on Ships can be Testing

Posted by Joel Goodman on 10/05/2013


1. The Ship

I took some time off working on the development of several Oracle Certification Exams to participate in the Norwegian Oracle User Group conference. Many other speakers and attendees have described the wonderful environment of the conference, comprising a land day in Oslo, followed by two days aboard the Colour Magic Cruise Ferry from Oslo to Kiel in northern Germany and back.

I had one talk on each day:

  1. Parallel Execution Master Class – three hours on the first morning of the land day. Many people attended to “fill in the cracks” of parallel execution and were also quite keen to understand the 11gR2 changes that permit automatic parallel execution. This talk covered:
    1. Parallel Execution Administration
    2. Parallel Execution Plans
    3. Parallel Query
    4. Parallel DML
    5. Parallel DDL
    6. Monitoring and Tuning Parallel Execution
  2. Global Resource Management in RAC – this was a conventional one-hour talk on the first day of the cruise about the management and coordination of global resources such as:
    1. Global Enqueues
    2. Global Locks (Instance Locks)
    3. Global Buffer Cache
    4. Global Result cache
  3. Partitioning Master Class – this was a two-hour Master Class on day two of the cruise for people with or without experience of partitioning and covered:
    1. All Table Partitioning Methods
    2. All Index Partitioning Methods
    3. Partition Maintenance Operations
    4. Partition Pruning (or Elimination)
    5. Partitioned Execution Plans

All the talks in the English-language stream were well attended, and I plan to do these at future conferences.

While at the conference I made some new friends and enjoyed the company of colleagues from past conferences too.

  1. Philippe Fierens – who attended my talks, discussed technology and philosophy at dinner and joined our group for drinks each night
  2. Jan Karremans – who exemplified the dedicated IT professional. Jan overcame motion sickness at sea to be at the conference and to share time with us at dinner and drinks
  3. Iloon Ellen-Wolff – from Oracle Support. I attended her overview talk on the Oracle Public Cloud and learned a bit about how this offering works. I was pleased to see Oracle Support providing speakers to a User Group conference, as this is not that common in my experience of speaking at such conferences.

2. The Exams

I have been working on various certification exam projects for Oracle over the past 9 months, either as an exam reviewer or, for some topics, as one of the exam developers. Some of the projects are:

  1. Certification projects for the next release of the Oracle Database Server.
  2. Development of the next version of Oracle Linux certification exams based on Linux 6
  3. Development of a certification exam for Oracle Database Cloud Admin
  4. Migrating the Oracle Certified Master Exam environment for Oracle 11g to version 11.2.0.3
  5. Training the MySQL Developer Exam writing team in Oracle Item writing standards
  6. Training the MySQL Administration Exam writing team in Oracle Item writing standards
  7. Training the Solaris Exam writing team in Oracle Item writing

Exam development involves many steps, from design of the blueprint for the exam right through to the exam going live, but the Subject Matter Experts who create exams must all be trained in the Oracle Item writing standards. This is a soft skill separate from the technical area of expertise. I have worked for many years with Julia Johnson, Global Director of Oracle Certification Exam Development, to create training guides, instructional videos and mentoring sessions, all with the aim of improving the standards of exam item writing.

One of the crucial stages in the process is the Beta test for any exam. So if you are interested in participating in the Beta programme for any Oracle certification that is in Beta, please let me know and we can contact you closer to the time.

Joel

May 2013

Posted in Oracle | Leave a Comment »

Can Supporting an Exadata Database Machine Make You Certifiable?

Posted by Joel Goodman on 28/02/2013


One of my roles in Oracle is to help develop Oracle Certification exams, and I have been extremely busy in recent months. This work is a global activity, and is in addition to teaching courses and seminars for Oracle University in EMEA and to speaking at user groups and special interest group meetings.

The role entails:

  • Collaborating with Oracle Database or Linux Subject Matter Experts to help write exam items for Oracle Certified Expert (OCE) certifications
  • Reviewing entire exams written by the core technology exam development team for Oracle Certified Associate (OCA) or Oracle Certified Professional (OCP) certifications
  • Teaching exam item writing workshops to other exam development teams within Oracle, including the Solaris and MySQL item writing teams.
  • Reviewing existing exams to assess where improvements can be made by exam developers for future projects.

The exam development projects in which I have participated in recent months are:

  1. Oracle Database OCP upgrade exam for the next release of the Oracle Database server for existing OCPs
  2. Oracle Database OCA exam for the next release of the Oracle Database server for new OCA candidates
  3. Bronze exam for certain markets, based on the 2 day DBA overview course
  4. Oracle 11g SQL Tuning OCE exam
  5. Oracle Enterprise Manager Cloud Control Cloud Administration exam
    • For Oracle Infrastructure as a Service (IaaS) Cloud Administrators
    • For Oracle Platform as a Service (PaaS) Cloud Administrators. This includes Database as a Service (DBaaS) and Middleware as a Service (MWaaS)
  6. Oracle Exadata Database Machine X3 Administration Oracle Certified Expert exam

Whilst all these exams have their interesting aspects, the focus of this post is the Database Machine OCE exam.

The Database Machine Support Skillset Challenge

Much has been written by myself and others about the different skills involved in provisioning a Database Machine and then maintaining and monitoring the various components. This skill list can be briefly summarised as follows:

  1. Hardware provisioning, installation and maintenance – A Database Machine requires floorspace, adequate ventilation and power supply
  2. Network Administration – A Database Machine must have its Ethernet switch and some other components connected to the corporate LAN. There may also be a requirement to connect Media servers to the Database Machine InfiniBand network.
  3. Operating System Administration – The Database nodes, also referred to as “Compute” nodes, require O/S administration including backups and patching
  4. Exadata Storage Administration – The Storage nodes or “Cells” need administering and monitoring of Celldisks, Griddisks, Flashcache and other resources (a CellCLI sketch follows this list)
  5. Grid Infrastructure Administration – The Clusterware and Oracle ASM must be administered and monitored. ASM in particular is crucial for monitoring the mirrored disk groups, because Exadata uses ASM’s host-based mirroring; mirroring is not done in the Cells.
  6. Oracle Database Administration – this includes Single Instance Oracle, RAC and RAC One Node as all three are possible on a Database Machine
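As a flavour of the storage skill set, the Cells are administered with the CellCLI utility. A minimal sketch of day-to-day monitoring commands (the attribute list shown is illustrative):

CellCLI> LIST CELLDISK
CellCLI> LIST GRIDDISK ATTRIBUTES name, size, status
CellCLI> LIST FLASHCACHE DETAIL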

The challenge for an organisation is figuring out how to support a Database Machine (or any other “Engineered System”). There are three models that I have encountered amongst the attendees to the Oracle Database Machine Admin course over the past three to four years:

  1. The Silo Model – This involves having DBAs look after all Oracle products. So DBAs must acquire all the skills listed above. The advantage is that the support organisation has all the skills in the one team and each DBA can handle support requirements in several areas, but this requires training to expand the traditional DBA skill set.
  2. The Team Model – This involves creating a new team which amongst all the members has the required skills, but where each member is responsible for their own area of expertise. This model requires less training as the members have the required skills already, but requires a change to the organisation.
  3. The Virtual Team Model – This involves creating a virtual team which amongst all the members has the required skills, but where each member is responsible for their own area of expertise. This model requires less training as the members have the required skills already, and requires no real change to existing organisational structures, apart from agreeing on the terms of reference for the virtual team.  This is the most common model I have encountered.

The Database Machine Certification Blueprint Challenge

The challenge for developing the Oracle Exadata Database Machine X3 Administration exam was to address the skill set profile of the typical administrator in light of the preponderance of the Virtual Team Model. Since few people would be likely to have all the skills to support the Database Machine by themselves, we focused the exam content blueprint on the main skills that one uses in day-to-day administration. This is predominantly a DBA-centric skill set.

Here is a general outline of the skills and how we treated them for the purposes of the exam:

  • In-depth knowledge of Exadata Storage Server administration. This is the key component of the Database Machine. The storage is very DBA-centric, as it is designed specifically for Oracle Databases and ASM, and is not a general-purpose storage solution
  • In-depth knowledge of ASM features relating specifically to Exadata-based disk groups. This includes but is not restricted to ASM mirroring, Exadata-specific disk group parameters, and Exadata-specific monitoring.
  • In-depth knowledge of Clusterware features relating specifically to Exadata resource management and availability. This includes but is not restricted to I/O fencing for Exadata-based ASM disk groups, partitioning the Exadata storage grid to support multiple database clusters, and maintaining storage availability during patching.
  • In-depth knowledge of Oracle Database administration relating to the use of Exadata-based ASM disk groups. This includes but is not limited to monitoring SQL execution plans when objects reside in Exadata-based ASM disk groups, Exadata-specific wait events, and Exadata-specific statistics.
  • Overview knowledge of network administration. This includes a network overview, general IP address and Ethernet connectivity requirements, and very general knowledge of the InfiniBand switch topology and port requirements.
  • Overview knowledge of Operating System maintenance requirements. This includes but is not limited to differentiating between the storage nodes and compute nodes, knowing what is and what is not permitted regarding O/S patching and administration, and maintaining high availability during patching.
  • Overview of the Database Machine hardware components. This includes but is not limited to rack sizes, disk options, site planning, installation and support.
  • Concept-only overview of monitoring the Database Machine with Enterprise Manager. Since both Grid Control and Cloud Control are available for monitoring, the exam focuses only on concepts, including but not restricted to SNMP traps, plug-ins and target discovery.

For a complete list of possible exam topics visit the Certification page for the Database Machine Exam and click on the “Exam Topics” tab.

The Exam Development and Quality Assurance Challenge

The exam was written by a team of Oracle University Subject Matter Experts who were all experienced in the product. The review and Beta test involved experts from Oracle ACS, with field experience, and other SMEs who were not involved in exam item writing.

If you work with the Exadata Database Machine, preparing for this certification will help you gain a mastery of the skills most often used on a day to day basis as well as a good working knowledge of all the other skill areas listed above.

Joel

London February 2013

Posted in Oracle | 4 Comments »

UKOUG Conference 2012 and Oracle Certification – Are they on Your Agenda?

Posted by Joel Goodman on 14/09/2012


The UKOUG launched the 2012 Server Technology conference agenda yesterday evening, at an informal gathering at the Rack and Tenter pub near the Oracle London City Office. I had a chance to see the conference agenda for the first time since the conference planning meeting that I attended in July in Reading.

I met with other presenters, some Oracle colleagues and various others, some of whom came as guests of the invitees, but who also had interesting things to add to the evening’s proceedings. There was good food, some drinks and even time to debate the finer points of comedy and comedians, and the complexities of Rugby laws for people who don’t know the game. As I am an amateur, county-level rugby referee, this was not a bad topic for an Oracle meeting! This is especially so, given that the guest speaker at this year’s conference will be an ex-England rugby captain.

I also had some time to catch up with friends and colleagues from the user group, and as usual the organisers have done a fine job in creating a top-quality agenda for this year.

New in the agenda is the inclusion of “Engineered Systems” as a separate category for talks and round table sessions to cover the various Oracle engineered solutions:

  • Oracle Exadata Database Machine
  • Oracle Database Appliance
  • Oracle Exalogic Elastic Cloud
  • Oracle Sparc Super Cluster
  • Oracle Exalytics In-Memory Machine
  • Oracle Big Data Appliance
  • Sun ZFS Storage Appliance

At this year’s conference I will be involved in the following activities:

  1. Presenting a talk on Oracle Partitioning New Features
  2. Presenting a talk on RAC Global Resource Management
  3. Chairing the Data Guard Round Table Discussion Group
  4. Chairing the Living with Engineered Systems Round Table Discussion Group
  5. Chairing the Grid Infrastructure and RAC Round Table Discussion Group

I have spent the last four months working on various certification development projects and hence had no time to post new articles. Part of my role in Oracle is that of Global team leader for DBA Certification Exam Development, and in that capacity I:

  • Write large parts of the Oracle Certified Expert (OCE) DBA exams
  • Review all DBA Oracle Certified Associate (OCA) exams
  • Review all DBA Oracle Certified Professional (OCP) exams
  • Define the content of the DBA Oracle Certified Master (OCM) exams
  • Design the OCM Delivery system used in Oracle University Classrooms

Part of this involves New Features exams for upgrading OCPs when a new release of Oracle becomes available, and I’ve been looking at that this summer.

Earlier this summer I recorded three Oracle Certification Exam Prep Seminars, which help exam candidates review the exam objectives and assess their own skills gap as an aid to revising for the exam. You can preview them using the following links.

There will also be a segment on the Oracle Certified Master Exam covering:

  • Preparing for the exam
  • How the exam is conducted
  • How to handle the pressure of time
  • What happens after the exam

This part is still in post-production editing at the time of writing this post.

I hope to see some friends and colleagues at this year’s conference, and also hope that people planning on maintaining their certification or attaining new ones find the prep seminars useful.

Joel

September 2012

Posted in Oracle | 1 Comment »

Great Speakers – and in Stereo Too!

Posted by Joel Goodman on 14/05/2012


It has been my pleasure to be involved for the past 14 years with the UKOUG as an Oracle Liaison to the DBA oriented technical SIGs, as a speaker at the SIGs and at the Annual Conference.

This is one of the best, if not the best, Oracle conferences in the world, organised by a dedicated team of conference specialists with many years of expertise in both technology and conference organisation.

It features well-known experts in the latest database, O/S and storage technology, and a culture of sharing information and ideas. Many of the bloggers and authors that you may have read, or of whom you have heard, will be there.

If you have expertise, then there will be others willing to listen and share. There is no better way to cut to the key issues than by meeting other experts who can also share with you.

There is also a great atmosphere, with social evening programmes and entertainment.

So consider submitting one or more abstracts, while time remains to do so.

To register go to the UKOUG Conference 2012 Call for Papers page.

I hope you can make it!!

Joel

London

May 2012

Posted in Oracle | Leave a Comment »

Smooth Sailing at the OUGN

Posted by Joel Goodman on 22/03/2012


I have spent the past two days at the Norwegian Oracle User Group (OUGN) conference along with many colleagues and friends from the Oracle world, in both EMEA and further afield. I’ve been honoured to have been invited for the past three years, and am joined this year by two distinguished Oracle University trainers with whom I regularly collaborate on the EMEA OU Subject Matter Experts team: Harald van Breederode and Uwe Hesse.

Several other people have blogged about the various events but the unique aspect of this event is the venue:

Held on the Colour Magic Cruise Ferry, sailing from Oslo to Kiel and back over two days, this conference is unlike any other in which I have participated.

This year we had a land day on Wednesday in Oslo first, followed by the cruise portion of the conference.

On Friday, Uwe will speak on “Data Guard Real Time Query”, a feature that is part of Active Data Guard.

Harald has already presented two talks:

  • Server Side Smart Flash cache on Wednesday in Oslo
  • Adaptive Cursor Sharing today on board the Colour Magic

Harald will finish tomorrow with a talk on Client Connectivity in a Data Guard environment.

I have already presented two talks:

  • Database Machine Overview and Features on Wednesday in Oslo
  • Database Machine Monitoring using Enterprise Manager and Command line on board the Colour Magic

I will finish tomorrow with a talk on the Automatic Parallelism feature introduced in 11gR2.

It is a well-run and friendly conference and the organisers are to be congratulated for the way they do this year after year.

Joel

Somewhere at Sea

22/03/2012

Posted in Oracle | 1 Comment »

Weblogic JTA

Posted by Joel Goodman on 20/03/2012


Are you an Oracle WebLogic Server Administrator?

Would you like to help define the depth and scope of future Oracle WebLogic Server training and certification?

Oracle is conducting this online Job Task survey prior to the next major Oracle WebLogic Server product release to ensure that the tasks Oracle WebLogic Server Administrators consider important are prominent in the design of future WebLogic Server Administrator curriculum and certification offerings.

Don’t miss this chance to do what DBAs did last year: help define the curriculum and certification for the Oracle Database. Join with other Oracle experts to take this online survey and tell us what tasks are important to you.

Survey is here: http://www.surveymonkey.com/s/7XKX983

Joel

Posted in Oracle | Leave a Comment »

Don’t be a Crash Test Dummy

Posted by Joel Goodman on 28/02/2012


During a recent Oracle 11g Tuning Course in London, a discussion of performance problems related to locking and resource serialisation arose during the lessons dealing with contention. I was then asked by a student to summarise the types of serialisation that occur and to describe the worst types that I find at customer sites. I replied that “in-doubt” locks on tables involved in distributed transactions have created the worst cases that I have seen. I will focus on this type of lock later in this post. But first, the answer to the first question:

There are different types of serialisation mechanisms for different purposes:

1. Latches – used to protect low-level memory structures or access to certain sections of code. Latches are held for very short durations and have been described very nicely in numerous blogs; a good in-depth discussion may be found in a white paper by David Gornshtein and Boris Tamarkin. Latches are typically held for microseconds and no longer have any documented parameters to control or administer them. Monitoring may be done using GUI tools or by using dynamic performance views such as the following (a sample query appears after the list):

  1. V$LATCH
  2. V$LATCHNAME
  3. V$LATCH_PARENT
  4. V$LATCH_CHILDREN
  5. V$LATCHHOLDER
  6. V$LATCH_MISSES
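As a quick sketch of latch monitoring, using standard V$LATCH columns, the following query ranks latches by sleeps, a rough indicator of contention:

select *
from   ( select name, gets, misses, sleeps
         from   v$latch
         order by sleeps desc )
where  rownum <= 10;   -- top 10 latches by sleeps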

2. Mutexes – used from 10.2.0.2 onwards for library cache management, as they are faster and use less memory than latches. Mutexes, like latches, are typically held for short durations and have no documented controlling parameters. They may be monitored using dynamic performance views such as the following (a sample query appears after the list):

  1. V$MUTEX_SLEEP
  2. V$MUTEX_SLEEP_HISTORY
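A similar sketch for mutexes, using V$MUTEX_SLEEP, shows where sessions have slept waiting on a mutex and for how long:

select mutex_type, location, sleeps, wait_time
from   v$mutex_sleep
order by sleeps desc;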

3. Enqueues – used to control access to resources for a longer period of time. Locks are obtained using enqueues and, unlike mutexes or latches, may be held for long periods by the owning session, possibly until the session commits. A good presentation on locks, latches and mutexes by my colleague Tanel Poder may be found online. DBAs can control this potential contention problem by having sessions that hold locks sniped after a period of time, either by setting IDLE_TIME in the older implementation with Oracle kernel profiles, or by using the Database Resource Manager and setting timeouts differentially for sessions that don’t currently block other sessions and those that do. Locks may be monitored using dynamic performance views such as the following (a query for spotting blockers appears after the list):

  1. V$LOCK
  2. V$LOCK_TYPE
  3. V$LOCK_ELEMENT
  4. V$LOCKED_OBJECT
  5. V$LOCK_ACTIVITY
  6. V$LOCKS_WITH_COLLISIONS
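As a sketch of lock monitoring, the classic self-join of V$LOCK pairs each blocking session (BLOCK = 1) with the sessions waiting on the same resource (REQUEST > 0):

select holder.sid  as holding_sid,
       waiter.sid  as waiting_sid,
       holder.type as lock_type,
       holder.id1,
       holder.id2
from   v$lock holder, v$lock waiter
where  holder.block   = 1        -- this session blocks others
and    waiter.request > 0        -- this session is waiting
and    holder.id1     = waiter.id1
and    holder.id2     = waiter.id2;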

4. User Defined Locks – also known as UL lock types, these locks are created and managed using the DBMS_LOCK package. While it is beyond the scope of this post to discuss the details, one unusual aspect of user-managed locks is that they may persist across commits. This may be done by setting the RELEASE_ON_COMMIT boolean parameter of the DBMS_LOCK.REQUEST function to FALSE, and this level and duration of locking may introduce contention problems. DBAs ideally should know the purpose of these locks, and their likely duration, in order to monitor them effectively. Monitoring of these locks is done using the same views as for enqueues listed above. A sketch of acquiring such a lock follows.
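A minimal sketch, where the lock name 'MY_APP_LOCK' is made up for illustration, requesting an exclusive user lock that survives commits:

declare
  l_handle varchar2(128);
  l_status integer;
begin
  -- map our own lock name to a lock handle
  dbms_lock.allocate_unique('MY_APP_LOCK', l_handle);
  -- exclusive mode; release_on_commit => false keeps it across commits
  l_status := dbms_lock.request(
                lockhandle        => l_handle,
                lockmode          => dbms_lock.x_mode,
                timeout           => 10,
                release_on_commit => false);
  if l_status <> 0 then   -- 0 = granted, 1 = timeout, 2 = deadlock
    raise_application_error(-20001, 'lock not granted: ' || l_status);
  end if;
end;
/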

5. “In Doubt” locks on tables – these occur during the two-phase commit process when distributed transactions over database links are being committed. A good discussion of two-phase commit processing may be found in the Oracle documentation. What is unusual here is that an “in doubt” transaction lock remains on the table, returning an error on reads AND writes, until the “in doubt” state is resolved.

When a network, node or instance failure prevents a distributed transaction from completing a two-phase commit process that is already underway, one or more of the sites involved in the coordination of the commit may not know the disposition at other sites. If a site is in the “prepare” state, then it must wait to be told whether to commit or to roll back updates to any local objects involved in the distributed transaction. Until this decision is made, the objects remain locked. But since it is not known whether or not rows involved in the transaction will be committed, the lock blocks both updates AND reads!! This is quite a serious lock, as tables in this state may not even be read. Attempting to query a table in such a state results in an

ORA-01591: lock held by in-doubt distributed transaction x.yy.zzzz

error, where x.yy.zzzz represents the Global Transaction Identifier of the distributed transaction.

Normally, if an instance or node failure occurs, it is resolved quickly once connectivity is re-established amongst the database instances involved in the distributed transaction. This process is managed using the following components:

  1. The RECO background process, also known as the recoverer process
  2. The DBA_2PC_PENDING Data Dictionary view – based on sys.pending_trans$ base table
  3. The DBA_2PC_NEIGHBORS  Data Dictionary view – based on sys.pss1$ base table
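A quick sketch of checking for stranded transactions (these are standard columns of the view):

select local_tran_id, global_tran_id, state, mixed, advice
from   dba_2pc_pending;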

RECO on each instance will handshake with the other RECO processes, and a decision will be made on any “in doubt” resources based on whether the transaction did indeed commit or roll back. Once the affected databases perform these actions, the affected tables are unlocked.

But if there is a delay in re-establishing connectivity, tables may remain locked for long periods of time (minutes, hours or more), because RECO cannot make decisions if one or more of the instances is not available. If one of the sites has gone down due to a hardware problem, for example, and requires a resolution time longer than is acceptable, then DBAs may need to resolve this manually by performing either a force commit or a force rollback on the global transaction for the surviving instances and databases.
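A sketch of the manual resolution, reusing the x.yy.zzzz placeholder from the error message above as the local transaction id:

commit force 'x.yy.zzzz';      -- if the commit point site committed
rollback force 'x.yy.zzzz';    -- or this, if it rolled back

-- once resolved, the pending entry may be purged:
execute dbms_transaction.purge_lost_db_entry('x.yy.zzzz');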

Oracle provides a way to practice these manual “in doubt” resolutions on test systems by using two separate but related features:

  • Disabling the RECO process
  • Performing two-phase commit “crash tests”

Disabling the RECO process

This is done by using the following statement, logged in as SYS, SYSTEM or a user with an appropriate role.

alter system disable distributed recovery;

The RECO process remains up, but does not participate in any automated “in doubt” resolution. By doing this, a DBA may simulate numerous failures using the crash tests without having to shut down any instances.
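When testing is complete, the counterpart statement restores automatic “in doubt” resolution:

alter system enable distributed recovery;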


The Crash Tests

These tests are performed by using specially worded comments in the COMMIT statement, which supports comments intended for general documentation in the code. There are many possible crash tests, and DBAs are better off if all the scenarios that might apply are tested.

The commit statement for crash tests has the following form:

commit comment 'ORA-2PC-CRASH-TEST-<nn>';

where nn is the number of the crash test.

To understand the tests, note that the participants in Oracle two-phase commit processing may play one of a number of roles in the proceedings:

1. Global Transaction Coordinator – This is the database instance that issues the commit statement and to which the client or middle tier is connected.

2. Local Transaction Coordinator – This is a database instance that issues prepare, commit or rollback statements to database instances not connected directly to the Global Transaction Coordinator. This occurs if one or more of the databases directly connected to the Global Transaction Coordinator connect onwards in a “daisy chain” fashion to other remote databases.

3. Commit Point Site – This is the instance that has the highest value for the COMMIT_POINT_STRENGTH parameter. The database that is the commit point site is never asked to prepare. All the other sites are asked to prepare, and if they all do so successfully, then a commit is sent to the commit point site. Usually this should be the site where most of the updates are done, as it avoids preparing when there is a possibility of rolling back; preparing causes a redo log buffer flush to be done. If there are only two databases and the commit point strengths are equal, then the remote site is asked to prepare and the local site acts as both the global transaction coordinator and the commit point site.

Note also that the global transaction coordinator only has visibility of remote sites that are directly connected via database links. If there is a daisy chain, then a local transaction coordinator is also used. For example, if database instance A connects to database instances B and C, and database instance B in turn connects to database instance D, then database instance D cannot be the commit point site.

Assume that database A is the global transaction coordinator, database B is the commit point site, and database D is “daisy chained” from database B. When database instance A sends a commit to database instance B (recall that the commit point site is never asked to prepare), database instance B must do the following:

  • Send a prepare request to database instance D
  • Get a positive reply to the prepare
  • Issue a commit to database instance D
  • Get a positive reply for the commit
  • Commit its own resources
  • Return a positive result for the commit to database A

If database B is not the commit point site, then it gets a prepare and later on a commit from the global transaction coordinator.

Here is what it must do for the prepare:

  • Send a prepare request to database instance D
  • Get a positive reply to the prepare
  • Prepare its own resources
  • Return a positive result for the prepare to database A

And here is what it must do for the commit:

  • Send a commit request to database instance D
  • Get a positive reply to the commit
  • Commit its own resources
  • Return a positive result for the commit to database A

Note the following too:

  • If any site fails to prepare then all sites will be instructed to roll back by the global or the local coordinator.
  • Once a transaction is either committed or rolled back, then a forget message is propagated to all the sites so that they may remove the appropriate rows from sys.pending_trans$ and sys.pss1$.

Now we can look at the crash tests with their associated number to be placed in the comment:

  1. Crash commit point site after collect
  2. Crash non-commit point site after collect
  3. Crash non-commit point site before prepare
  4. Crash non-commit point site after prepare
  5. Crash commit point site before commit
  6. Crash commit point site after commit
  7. Crash non-commit point site before commit
  8. Crash non-commit point site after commit
  9. Crash commit point site before forget
  10. Crash non-commit point site before forget
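Putting it together, a practice session might look like this sketch, where remote_db is a hypothetical database link and both tables are made up for illustration:

alter system disable distributed recovery;

update local_tab set val = val + 1 where id = 1;
update remote_tab@remote_db set val = val + 1 where id = 1;

-- simulate a crash of the non-commit point site after prepare (test 4)
commit comment 'ORA-2PC-CRASH-TEST-4';

-- the commit fails, the transaction is left "in doubt", and
-- DBA_2PC_PENDING on each database now has an entry to practice on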

To make this a bit simpler I have a demo that I use in the classroom. The script and the spooled output may be downloaded.

Have a go and don’t be a crash test dummy!

Joel

February 2012

Posted in Oracle | Tagged: , , , , | 3 Comments »

Mounting Failures With Your Failgroups?

Posted by Joel Goodman on 16/01/2012


Last year I wrote a post about mirroring in ASM in the context of Exadata which discussed aspects of ASM mirroring, including failgroups, and the ability of ASM to cope with different failure scenarios. The post dealt with cases such as:

  • Single disk failure in a single cell
  • Multiple disk failure in a single cell
  • Single cell failure
  • Overlapping disk failures in multiple cells

It also covered the management of free space in ASM.

But the assumption was that the diskgroup(s) were already mounted when the failures occurred. If an ASM instance attempts to mount a diskgroup, however, things are very different.

I have been working closely with members of the Oracle Advanced Customer Support Services team over the past 18 months, to train EMEA staff in Grid Infrastructure, RAC, Exadata and Enterprise Linux, to help skill up for supporting the Oracle Database Machine.  Recently, this collaboration resulted in a joint project between Oracle University and ACS to triage some problems at an EMEA customer.

The customer’s configuration used Grid Infrastructure for a standalone server on each of two Solaris nodes in a stretched configuration. This was done to utilise ASM on each node, to support a live environment on one node and a test environment on the other node in a different location. But the test node was also part of the HA strategy, in case of problems on the live node. Oracle Restart was not used, so essentially it was two separate nodes, each with Oracle and ASM.

The ASM disks were provided by SAN LUNs, with arrays in both the live and test locations. Each array was connected to both the live and test servers.

The live system diskgroups each had two failgroups to support ASM normal redundancy:

  • First failgroup on LUNs provided by the storage array at the live location
  • Second failgroup on LUNs provided by the storage array at the test location
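Sketched as DDL (disk paths and failgroup names are illustrative only), such a diskgroup might have been created like this:

create diskgroup data normal redundancy
  failgroup live_array disk '/dev/rdsk/live_lun1', '/dev/rdsk/live_lun2'
  failgroup test_array disk '/dev/rdsk/test_lun1', '/dev/rdsk/test_lun2';

With normal redundancy, ASM keeps the two copies of each allocation unit in different failgroups, so the loss of one array still leaves a complete set of copies on the other.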

Normally, the ASM instance on the live system would have two diskgroups mounted, each with two failgroups. But to handle live location failures, scripts were written that would:

  • Shut down the test database instances on the test server
  • Unmount the test database diskgroups in the ASM instance on the test server
  • Mount the live database diskgroups in the ASM instance on the test server
  • Start the live database instance on the test server

In effect, this was a cold failover of the database. But without Grid Infrastructure for a cluster and clustered ASM, the diskgroups could only be mounted by a single ASM instance at one time. Thus the live diskgroups could be mounted on the test server only if not currently mounted on the live one.

Tests were done by the customer, demonstrating that the scripts worked properly when the live database node failed. Eventually, after all testing was completed, the system for which this database was created became operational.

I was called in partly to triage a failure of the same recovery scripts that had worked during testing, and also to do a health check on the database and ASM environments. What I discovered from the ASM and database logs was that one of the failgroups, on the storage array at the live data centre, had gone offline. ASM continued to read mirror copies from the other failgroup, as it is designed to do, whenever I/O for primary copies of ASM allocation units (AUs) on the offline failgroup was done. Eventually the OS on the live server failed, possibly related to access to swap space or other system files on the array.

When the failover to the test system was invoked and the scripts executed:

  • The ASM instance failed to mount the live database diskgroups
  • Startup of the live database instance on the test server failed as it could not access the database files stored in the ASM diskgroups

An attempt was eventually made to fail back to the live server once the live system OS was back up, but the same problem arose. When I examined the scripts and the logs, it was clear that the MOUNT command for the ASM diskgroups was missing the FORCE option. This is what caused the mount failures in both cases.

Mirrored ASM diskgroups may survive the loss of ASM disks, or even a complete failgroup, when already mounted. But mounting a mirrored ASM diskgroup requires that each AU have at least two copies discovered, for normal redundancy diskgroups. If one or more ASM disks is not accessible, then the diskgroup will not mount, because the stated level of redundancy cannot be met. The FORCE option overcomes this problem by requesting that the diskgroup be mounted even though the stated level of redundancy could not be provided should a subsequent ASM disk failure occur (a sketch follows).
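The missing keyword, sketched here with an illustrative diskgroup name:

alter diskgroup data mount force;   -- mount despite a missing failgroup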

Contributing to the problem was the lack of testing for storage array failure. The tests had only been done on database node failure. No tests were done for storage array failure, storage network failure or failure of any of the components upon which the storage depends. This is why the failure to mount the diskgroups in this situation was not discovered during testing.

An alternative approach to the software architecture would have been to use Grid Infrastructure for a cluster, rather than Grid Infrastructure for a standalone server. This would have meant the following:

  • The live database diskgroups could be mounted simultaneously from the ASM instances on both the live and test servers, because clustered ASM can do this.
  • There would be no need to write custom scripts to mount the diskgroups on the test server.
  • The Grid Infrastructure could be used to control the “cold failover” of the non-RAC database instance from the live server to the test server
  • Since the diskgroups would already be mounted, they would be accessible and the live database could start up normally on the test server

It was an interesting bit of troubleshooting, and served to reinforce my classroom message about testing all failure scenarios when planning for High Availability. Instead of suffering mounting failures, aim for success.

Joel

1/2012

Posted in Oracle | Tagged: , , , , | 1 Comment »