Are you “Chained” to your Tables?
Posted by Joel Goodman on 09/09/2009
David Kurtz gave a very interesting and useful talk at yesterday's UKOUG UNIX SIG about performance problems that he diagnosed at one of his customers. Whenever the number of logins exceeded 90 there was a correlation with the number of waits on the “cache buffers chains” latch, which he demonstrated visually by building Excel graphs from AWR report data extracted via ODBC.
The diagnosis pointed to tables that had lob columns with storage enabled in row. Most of the lob cells for most of the rows were initially short enough to be stored “in line”. Subsequent updates to the lob cells, however, sometimes caused the total length of the column, including the metadata, to exceed 4096 bytes, causing the lob data to be moved to the lob segment. This also freed space in the data block, allowing more rows with short lob cells to be inserted, which could later have their data moved to the lob segment in the same way.
All of these updates caused rows to be partly contained in the data block and partly elsewhere and a discussion ensued about “migrated” rows.
But Oracle is a bit ambiguous about “chained” rows and “migrated” rows, so we discussed this both during the Q&A and afterwards, and I thought the topic needed revisiting because of that ambiguity.
First, let's define the terminology:
Chained Rows – Rows where the row headers indicate that the row is contained in two or more “pieces” due to:
- A row having been inserted or updated such that it is larger than the maximum free space in an Oracle block after accounting for the block headers. In this case the tablespace block size and the row length determine whether the row is chained; if it is, the “row pieces” are held in different blocks. Such a row uses the first byte of the row header, called the flag byte, to indicate which piece of the chain is in the current block; if it is not the last piece, a rowid pointing to the next piece in another block follows the three-byte row header. It also uses the third byte of the row header, called the column count, to indicate how many columns are in each row piece.
- A row having been inserted or updated such that the number of columns exceeds 255. In this case the block size and row length only determine whether different blocks are used for the pieces, but there will always be two or more “row pieces” depending on the number of columns in the table. This is known as “intra-block chaining” and has been a feature of Oracle since version 8.0, when support for more than 255 columns was added. Such a row uses the flag byte in the row header to indicate each piece of the chain and uses the column count to indicate how many columns are in each row piece. If the row is chained only due to column count then all the row pieces will be in the same block, with a maximum of four pieces, and this is not considered a “chained row” for performance purposes as only one block access is needed to fetch the row. If, however, the row is also too large, then it will be chained as in case 1 and may have as many pieces as required.
- A row having been updated making it larger, such that the updated version of the existing row no longer fits into the current block due to insufficient free space. This type of chaining is known as “row migration” and is repairable, as described later. In this case the original block will contain the original row header and a rowid pointing to where the row has moved, but no column data. If the migrated row fits into one block then the row header in the new block will indicate in the flag byte that there is only one row piece, held entirely in that block. If the row is migrated but is now also too large, or has more than 255 columns, or both, then the row will be both migrated and chained as described above.
- A row in an Index Organised Table (IOT) having an overflow segment, where the row has overflowed some or all of the non-key columns. This may occur because the row is longer than the “PCTTHRESHOLD” value, which defines a limit, as a percentage of the block space, for storing row data in the IOT btree blocks. It may also occur due to the “INCLUDING” clause, which specifies a cutoff column in the table after which all columns of each row are overflowed. If both clauses are used then whichever method results in a smaller row piece in the btree block will be used for that specific row.
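As a sketch of the IOT case, an overflow segment can be declared with both clauses at table creation (table and column names here are hypothetical):

```sql
-- Hypothetical IOT: row data beyond 10% of the block, or any columns
-- after DESCRIPTION, are moved to the overflow segment.
CREATE TABLE orders_iot (
  order_id     NUMBER PRIMARY KEY,
  order_date   DATE,
  description  VARCHAR2(200),
  notes        VARCHAR2(4000)
)
ORGANIZATION INDEX
PCTTHRESHOLD 10
INCLUDING description
OVERFLOW;
```

For any given row, Oracle applies whichever of PCTTHRESHOLD or INCLUDING leaves the smaller piece in the btree block, as noted above.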
Multi-Block rows – Rows where the row headers do NOT indicate that the row is contained in two or more “pieces” due to:
- A row containing a lob column where either the “disable storage in row” clause was specified for the lob, or the “enable storage in row” clause was specified but the lob cell length for this row, including lob metadata, exceeds 4096 bytes. In this case the lob metadata, in the form of a “lob locator”, points to the location of the lob data within a block of the lob segment, either directly or indirectly via the lob index segment. Note that these rows are NOT considered to be chained in the usual sense, so the “Analyze table xxx list chained rows” command will not report such a row as chained even though multiple blocks may be required for row access. This is because the lob data is in a different segment, not in another block of the same segment, and because the row header will indicate in the flag byte that the entire row is in the one block. Furthermore, all the columns are represented in the block, but the out of line lob has only its metadata in line.
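The two lob storage options discussed above are chosen per column at table creation. A hypothetical sketch:

```sql
-- Lob stays in line while small (under ~4000 bytes of data plus metadata),
-- and moves to the lob segment when it grows past the limit.
CREATE TABLE docs_inline (
  doc_id NUMBER PRIMARY KEY,
  body   CLOB
) LOB (body) STORE AS (ENABLE STORAGE IN ROW);

-- Lob data is always out of line; only the lob locator is in the row.
CREATE TABLE docs_outofline (
  doc_id NUMBER PRIMARY KEY,
  body   CLOB
) LOB (body) STORE AS (DISABLE STORAGE IN ROW);
```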
How can we tell that chained or migrated rows are affecting performance?
In the AWR or statspack reports or in the v$sysstat view look for the line containing statistic “table fetch continued row”. Here is an example:
select name, value from v$sysstat where name like '%table fetch%';
table fetch continued row 9293
If the value of this statistic rises over time then more and more rows that are “chained” in some way are being fetched, causing extra latching and block pinning as well as occasional extra I/O.
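On databases licensed for AWR, the trend can be seen across snapshots with the DBA_HIST views. A sketch (note that the value is cumulative since instance startup, so the deltas between snapshots are what matter):

```sql
-- Trend of "table fetch continued row" across AWR snapshots.
SELECT s.snap_id,
       st.value
FROM   dba_hist_sysstat  st
       JOIN dba_hist_snapshot s
         ON  s.snap_id         = st.snap_id
         AND s.dbid            = st.dbid
         AND s.instance_number = st.instance_number
WHERE  st.stat_name = 'table fetch continued row'
ORDER  BY s.snap_id;
```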
To determine if certain heap tables have chained rows due to any of the first three cases listed above one may look at the DBA_TABLES.CHAIN_CNT column but only if you use the ANALYZE command for statistics. Since best practice is to use DBMS_STATS, the solution is as follows:
- Run the $ORACLE_HOME/rdbms/admin/utlchain.sql script
- From the same user do “Analyze table xxx list chained rows” where xxx is the name of the table in question.
- Query the chained_rows table to see details of any row in the table that has row pieces in multiple blocks.
- Determine whether repairing migrated rows would be a useful activity.
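The steps above can be sketched as follows (EMP_HISTORY is a hypothetical table name):

```sql
-- 1. Create the CHAINED_ROWS table in the schema of the table owner.
@?/rdbms/admin/utlchain.sql

-- 2. List the chained rows for the table in question.
ANALYZE TABLE emp_history LIST CHAINED ROWS;

-- 3. Inspect the result.
SELECT owner_name, table_name, head_rowid
FROM   chained_rows
WHERE  table_name = 'EMP_HISTORY';
```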
The HEAD_ROWID column contains the rowid for the start of the chain, i.e. it points to the block where the first three-byte row header resides. If the row is migrated as in case 3 above then the entire row may be in one or more other blocks as mentioned.
If the row is chained due to either case 1 or 2 above, then the HEAD_ROWID points to the location of the first row piece. Subsequent row pieces may or may not be in the same block depending on the circumstances outlined above.
Only migrated rows, as in case 3, may be fixed. Since a migrated row remains migrated in order to preserve the rowid for index use, the only way to repair it is to delete and then reinsert all the “chained” rows. This is worthwhile if the average row length of the table is much smaller than the block size, or if by describing the table one can determine that the maximum row length is smaller than the block size. In those cases many, most, or even all of the “chained” rows reported by the ANALYZE .. LIST CHAINED ROWS command will be of the migrated type, which can be repaired.
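One quick check is to compare the average row length against the block size; a sketch (assuming statistics are current, with a hypothetical table name):

```sql
-- If AVG_ROW_LEN is well below the block size, most rows reported by
-- ANALYZE .. LIST CHAINED ROWS are likely migrated, hence repairable.
SELECT t.table_name,
       t.avg_row_len,
       ts.block_size
FROM   dba_tables t
       JOIN dba_tablespaces ts
         ON ts.tablespace_name = t.tablespace_name
WHERE  t.table_name = 'EMP_HISTORY';   -- hypothetical table
```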
To effect the repair of a table called TABA do the following:
- Disable any triggers on the table.
- Disable any constraints on the table.
- Do “CREATE TABLE TABA_TEMP AS SELECT * FROM TABA WHERE ROWID IN (SELECT HEAD_ROWID FROM CHAINED_ROWS);” to copy the chained rows to a holding table.
- Do “DELETE FROM TABA WHERE ROWID IN (SELECT HEAD_ROWID FROM CHAINED_ROWS);” to delete the chained rows.
- Then do “INSERT INTO TABA SELECT * FROM TABA_TEMP” to reinsert the rows as new rows in the table. As they are new they will be handled as any other rows are upon insert and the chained rows caused by the migration process listed above will now no longer be chained.
- Drop the holding table.
- Re-enable the constraints if any exist.
- Re-enable the triggers if any exist.
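Put together, the core of the repair (with triggers and constraints disabled beforehand and re-enabled afterwards, as listed above) looks like this:

```sql
-- Copy the migrated rows to a holding table.
CREATE TABLE taba_temp AS
  SELECT * FROM taba
  WHERE  ROWID IN (SELECT head_rowid FROM chained_rows
                   WHERE  table_name = 'TABA');

-- Delete the originals.
DELETE FROM taba
WHERE  ROWID IN (SELECT head_rowid FROM chained_rows
                 WHERE  table_name = 'TABA');

-- Reinsert as brand-new rows, which will no longer be migrated.
INSERT INTO taba SELECT * FROM taba_temp;
COMMIT;

DROP TABLE taba_temp;
```

Note the reinserted rows get new rowids, which is exactly why any indexes, triggers, and constraints need handling around this operation.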
To determine if IOTs have chained rows due to row pieces that are in an IOT overflow segment one may look at the DBA_TABLES.CHAIN_CNT column but only if you use the ANALYZE command for statistics. Since best practice is to use DBMS_STATS, the solution is as follows:
- Run the $ORACLE_HOME/rdbms/admin/utlchn1.sql script. This script differs from the utlchain.sql script by creating a CHAINED_ROWS table with a UROWID data type instead of a ROWID data type in the HEAD_ROWID column, to support the universal rowid format used for IOT chaining and IOT secondary indexes.
- Then do an “ANALYZE TABLE xxx LIST CHAINED ROWS” as described earlier.
- Then do a “SELECT * FROM CHAINED_ROWS”.
An alternative is to:
- Use the “DBMS_IOT.BUILD_CHAIN_ROWS_TABLE” procedure
- Do the “ANALYZE TABLE xxx LIST CHAINED ROWS INTO IOT_CHAINED_ROWS;”
- Then do a “SELECT * FROM IOT_CHAINED_ROWS”.
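A sketch of the alternative route, using a hypothetical IOT named ORDERS_IOT owned by SCOTT (the procedure's third parameter defaults the table name to IOT_CHAINED_ROWS):

```sql
-- Build the IOT-specific chained-rows table with a UROWID column.
EXEC DBMS_IOT.BUILD_CHAIN_ROWS_TABLE('SCOTT', 'ORDERS_IOT');

ANALYZE TABLE scott.orders_iot LIST CHAINED ROWS INTO iot_chained_rows;

SELECT * FROM iot_chained_rows;
```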
Regarding tables with LOB columns, one may examine the LOB properties from DBA_LOBS or DBA_PART_LOBS using the IN_ROW column. Out of line lobs always require visits to extra blocks, which may cause more I/O and more latching. For lobs that enable storage in row, more I/O and latching may still occur. For example, if the lob cells are large but stay in the block, then fewer rows fit in the block, and a random set of row accesses may require more block visits than would occur if the lobs were out of line. This is especially so if the application does not always access the lob column(s), which is why one may disable storage in row even if the lobs are not that big. But rows in such tables are not flagged as “chained”, even though they may have data in more than one block, and in fact in more than one segment.
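Checking which lob columns are stored in line is a simple dictionary query; a sketch against a hypothetical schema:

```sql
-- IN_ROW is 'YES' when "enable storage in row" is in effect for the column.
SELECT table_name, column_name, in_row, segment_name
FROM   dba_lobs
WHERE  owner = 'SCOTT';   -- hypothetical schema
```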
To summarise what DBAs may do regarding performance: “migrated” rows may be repaired, and one may control LOB overheads to some degree by enabling or disabling storage in row to suit access patterns, but doing all this requires that one know the data and be familiar with the applications to some extent.