Troubleshooting and Tuning LOB Segment Performance in Oracle


The large object (LOB) datatype allows us to hold and manipulate unstructured and semistructured data such as documents, graphic images, video clips, sound files, and XML files. The DBMS_LOB package was designed to manipulate LOB datatypes. Starting in Oracle Database 12c, LOBs can store large amounts of data with a maximum size of 128 TB depending on the database block size; a single table can have one or more columns of LOB datatypes, such as binary large object (BLOB), character large object (CLOB), national character large object (NCLOB), and BFILE. This article describes some of the problems that can happen when you have LOB segments in your environment and how to mitigate these problems.



Introduction to the LOB Datatype

When creating and designing your database's LOB objects, remember to carefully review the Oracle Database SecureFiles and Large Objects Developer's Guide (Oracle, 2016), especially Chapter 14, "Performance Guidelines." This guide provides the latest recommendations for creating LOBs based on the types of data that will reside in these table columns.

It's important to remember that whenever a LOB column is created in a table, two different segments are actually created: one LOBSEGMENT and one LOBINDEX. The LOBINDEX points to the LOB "chunks" that are stored in the corresponding LOBSEGMENT. In some cases, LOB values may be stored "inline," that is, directly inside the table segment; inline storage is used only for LOB data that is fairly small (less than about 4000 bytes) or NULL.
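For example, the ENABLE or DISABLE STORAGE IN ROW clause controls whether small LOB values are kept inline with the row. The following minimal sketch illustrates both settings (the table names here are illustrative and do not appear elsewhere in this article):

CREATE TABLE docs_inline (
   id   NUMBER PRIMARY KEY,
   body CLOB
)
LOB (body) STORE AS (ENABLE STORAGE IN ROW);   -- LOBs up to about 4000 bytes stay inside the table segment

CREATE TABLE docs_out_of_line (
   id   NUMBER PRIMARY KEY,
   body CLOB
)
LOB (body) STORE AS (DISABLE STORAGE IN ROW);  -- all LOB data goes to the LOBSEGMENT, regardless of size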

Now let's discuss what causes the database to generate a high-watermark (HW) enqueue event. When a session wants to access a database resource, the lock that coordinates access to this resource is an enqueue. When an enqueue wait event occurs, it means that a session is waiting for another session to free up this resource. The event always appears with the name of the enqueue in the format enq: <enqueue_type> - <details>; each enqueue type has different details. The P1, P2, and P3 columns of the dynamic views V$SESSION and V$SESSION_WAIT are also helpful in showing where the waiting session is currently held up and what is causing the lock. Those values can take on different meanings depending on the enqueue, as shown in the corresponding P1TEXT, P2TEXT, and P3TEXT columns. Listing 1 illustrates just a few of Oracle Database 12c's more than 600 enqueue events.

Listing 1 Enqueue Events

SELECT DISTINCT name
FROM v$event_name
WHERE name LIKE '%enq%'
ORDER BY 1;

enq: AB - ABMR process initialized
enq: AB - ABMR process start/stop
enq: AC - acquiring partition id
enq: AD - allocate AU
enq: AD - deallocate AU
enq: AD - relocate AU
enq: AE - lock
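To see those P1, P2, and P3 details for sessions currently waiting on an enqueue, a query such as the following can be used; this is a simple sketch against V$SESSION, and you can tighten the filter to the specific enqueue you are investigating:

SELECT sid, event,
       p1text, p1,
       p2text, p2,
       p3text, p3
FROM v$session
WHERE event LIKE 'enq:%'
  AND state = 'WAITING';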

The HW enqueue is responsible for serializing the allocation of space beyond the high-watermark of a segment. The recommended action to handle an unexpected HW enqueue is to manually allocate more extents for the LOB’s segments; however, in later sections we also discuss proactive methods to help avoid unexpected HW enqueues by following recommended best practices for LOB configuration.

 

Fixing a LOB Problem: A Real-World Example

In this real-world example, the database of an e-commerce company is completely hung. The following steps will quickly identify and fix this issue:

  1. Create an automatic workload repository (AWR) report to determine if the enq: HW event is one of the top five wait events. Log in to your database as SYSDBA and run the following command. When prompted, choose the two most recent AWR snapshot IDs to analyze just the last two snapshots:
SQL> @?/rdbms/admin/awrrpt.sql
  2. Review the top five wait events listed in the report:
                                                     Avg
                                                    Wait   % Total
Event                           Waits    Time (s)   (ms)  Call Time  Wait Class
----------------------------- ---------- --------- ------ --------- ----------
enq: HW - contention             249,725     3,289     13      90.0  User I/O
direct path write                168,486       103      1       2.8  User I/O
DB CPU
PX qref latch                  6,392,581        40      0       1.1  Other
PX Deq: Slave Session Stats           18         1     51        .0  Other
  3. If the event is constantly occurring in the database, run the following query to identify the sessions encountering the HW contention (a variant that also reports the blocking session follows the output below):
SELECT sid, event
FROM gv$session_wait
WHERE event LIKE '%contention%';

SID EVENT
------- ----------
9426 enq: HW - contention 
13050 enq: HW - contention
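The following variant (a sketch using standard GV$SESSION columns) also reports which session is blocking and how long each waiter has been waiting:

SELECT inst_id, sid, event,
       blocking_session,    -- session currently holding the HW enqueue
       seconds_in_wait
FROM gv$session
WHERE event = 'enq: HW - contention';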

 

  4. Run the following query to isolate the object that is experiencing HW enqueue contention:
SELECT
   DBMS_UTILITY.DATA_BLOCK_ADDRESS_FILE(id2) file#,
   DBMS_UTILITY.DATA_BLOCK_ADDRESS_BLOCK(id2) block#
FROM gv$lock WHERE type = 'HW';

FILE# BLOCK# 
---- ----
19 195

 

  5. Using the datafile number (19) and block ID (195) obtained from the prior query, discover the object being locked by this event via the following query:
SELECT owner, segment_type, segment_name
FROM dba_extents
WHERE
   file_id = 19 AND
   195 between block_id AND
   block_id + blocks - 1;

OWNER SEGMENT_TYPE SEGMENT_NAME
--------------------------------------------
ORABPEL LOBSEGMENT SYS_LOB0000181226C00029$$

 

  6. Next, isolate the table name that references this LOB segment using the following query:
SELECT
   owner, table_name, segment_name
FROM dba_lobs
WHERE
   segment_name='SYS_LOB0000181226C00029$$';

OWNER TABLE_NAME SEGMENT_NAME
---------------------------------------------------------------
ACOM_BPEL_AQ INSERT_SITE_ORDER_BI_TBL SYS_LOB0000181226C00029$$

 

  7. Now that you have isolated exactly which LOB segment is causing the HW enqueue, it's time to alleviate the root cause of this problem by manually adding a new extent to the segment. It's important to know whether the object resides in a tablespace whose extents are AUTOALLOCATEd or of UNIFORM size. If its extents are defined as UNIFORM, you simply need to add a new extent of the same size; otherwise, add a new extent sized the same as the biggest extent for this segment (a query to confirm the tablespace's allocation policy follows this step):
  • Determine the biggest extent size of this object:
SELECT
DISTINCT bytes
FROM dba_extents
WHERE segment_name = 'SYS_LOB0000181226C00029$$'
         AND owner = 'ORABPEL';

BYTES
----------
1048576
8388608
65536
  • Add some extents to this LOB segment:
ALTER TABLE orabpel.insert_site_order_bi_tbl
   MODIFY LOB ('SYS_LOB0000181226C00029$$')
              (ALLOCATE EXTENT (SIZE 8M));
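As a supplementary check for this step, the tablespace's extent allocation policy can be confirmed with a query such as the following sketch against the standard DBA_SEGMENTS and DBA_TABLESPACES views (ALLOCATION_TYPE shows UNIFORM or SYSTEM, the latter meaning AUTOALLOCATE):

SELECT s.segment_name, s.tablespace_name, t.allocation_type, t.extent_management
FROM dba_segments s
     JOIN dba_tablespaces t ON t.tablespace_name = s.tablespace_name
WHERE s.segment_name = 'SYS_LOB0000181226C00029$$'
  AND s.owner = 'ORABPEL';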

Congratulations! With this simple action, you have just saved your company millions of dollars in orders that would be seriously delayed or even lost from customer activity on its e-commerce website.

 

Another Real-World Example: HW Resolution

If your database is a repository for applications such as Oracle Transportation Management (OTM), Oracle Business Process Execution Language (BPEL), and Oracle E-Business Suite (EBS), it is likely that you will eventually encounter problems regarding LOB segments. For example, we recently ran into an issue with OTM related to the I_TRANSMISSION table when we applied 150,000,000 INSERTs and DELETEs plus 500,000,000 UPDATEs against this table’s XML_BLOB column in a single month. Following are the steps used to verify and repair this problem quickly:


Note

You can also use the following approach with the previous enq: HW example. This example simply shows an alternative for detecting and solving the issue.


  1. Run an AWR report and check the Top 5 Timed Events section. If you are indeed encountering problems with LOB segments, you will see something like the following report:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                           Avg
                                                          Wait   % Total
Event                             Waits     Time (s)      (ms)  Call Time  Wait Class
------------------------------ ------------ ---------- ------- ---------- -------------
db file sequential read           2,580,474     35,544      14       77.4  User I/O
SQL*Net more data from client       659,140      5,513    8 **       12.0  Network
CPU time                                         4,540                9.9
enq: HW - contention                 88,910      2,890    33 *        6.3  Configuration
log file sync                       777,146      1,688       2        3.7  Commit
  2. In the same AWR report, check the Enqueue Activity section for output that's similar to the following:
Enqueue Activity Snaps: 1234-1235
-> only enqueues with waits are shown
-> Enqueue stats gathered prior to 10g should not be compared with 10g data
-> ordered by Wait Time desc, Waits desc
                                  Gets     Gets      Gets                  Wait    Avg Wait
Enq Type                          Req      Succ      Failed     Waits   Time (s)  Time (ms)
------------------------------- -------- --------- --------- --------- --------- -----------
HW-Segment High Water Mark         93,860    93,862         0    88,226     2,961     33.56 *
TX-Transaction
  (row lock contention)               272       272         0       209       570  2,729.44 **
TX-Transaction
  (index contention)                4,564     4,564         0     4,144        34
  3. Verify the Wait Events section of the AWR report; you will likely see something like this:
-> s  - second
-> ms - millisecond - 1000th of a second
-> ordered by wait time desc, waits desc (idle events last)
                                                  % Time  Total Wait      Avg   Waits
Event                                  Waits      -outs   Time (s)   Wait (ms)   /txn
---------------------------------- ------------ ------- ----------- --------- ------
db file sequential read                2,580,474      .0      35,544        14    3.3
SQL*Net more data from client **         659,140      .0       5,513         8    0.9
enq: HW - contention                      88,910      .0       2,890       33*    0.1
log file sync                            777,146      .0       1,688         2    1.0
read by other session                    103,140      .0         929         9    0.1
SQL*Net break/reset to client            114,782      .0         813         7    0.1
enq: TX - row lock contention ***            380  43.4***        557     1466*    0.0
log file parallel write                  552,663      .0         394         1    0.7
latch: cache buffers chains               55,203      .0         382         7    0.1
  4. Now that you have gathered all this information, find the object that is experiencing the enq: HW wait event. Run the following query to isolate the objects that encountered enq: HW contention between the AWR snapshots chosen for your report:
SELECT
   sql_id,
   event,
   event_id,
   time_waited,
   current_obj#,
   current_file#,
   current_block#
FROM dba_hist_active_sess_history
WHERE snap_id BETWEEN 1072 AND 1076
   AND event LIKE '%HW%CONTENTION%'
   AND time_waited > 0
   AND current_obj# <> -1
ORDER BY time_waited, event, sql_id;
  5. Capture the current_obj# values from the previous query and supply each value to the following query:
SELECT
   owner,
   object_name,
   object_type
   FROM dba_objects
WHERE object_id = [current_obj# of query above];

The following alternative query shows crucial information when you need to isolate the object name, object type, and (most important) the SQL identifier corresponding to the statements that are suffering from enq: HW contention:

SQL> col object_name format a30
SQL> col program format a30
SQL> col event format a30
SQL> SELECT DISTINCT
        a.current_obj#, o.object_name, o.owner, o.object_type,
        a.current_block#, a.session_state, a.sql_id, a.event
     FROM v$active_session_history a, dba_objects o
     WHERE a.current_obj# = o.object_id
       AND a.event LIKE 'enq%HW%';
CURRENT_OBJ#  OBJECT_NAME               OWNER  OBJECT_TYPE  CURRENT_BLOCK#  SESSION_STATE  SQL_ID         EVENT
------------  ------------------------  -----  -----------  --------------  -------------  -------------  --------------------
      235738  MLOG$_ENI_OLTP_ITEM_STAR  ENI    TABLE                100747  WAITING        0ghshjjvf86bg  enq: HW - contention
      612464  HIST_PEDIDOS              B2W    TABLE                394731  WAITING        9phv0npccjqa2  enq: HW - contention
  6. Before adding extents to this object, it is advisable to check the object's extent sizes:
SELECT COUNT(*), bytes
FROM dba_extents
WHERE segment_name = 'HIST_PEDIDOS' AND owner = 'B2W'
GROUP BY bytes;
  COUNT(*)      BYTES
---------- ----------
   1969801     131072
  7. Now that you know the extent size for the object, simply add several new extents. Use SQL to construct the necessary MODIFY PARTITION commands based on the partitions that are already known:
SELECT
'alter table '||table_owner||'.'||table_name||' modify partition '||partition_name||
' lob ('||column_name||') (allocate extent (size 131072));'
FROM dba_lob_partitions
WHERE table_name = 'HIST_PEDIDOS'
AND partition_name like '%2014%';

In our experience, after enough new extents are added—we recommend adding at least 20!—the enq: HW contention will simply disappear from the database’s wait events and applications will start running faster again.

While adding extents manually is a good way to relieve this contention temporarily, it should be noted that it's not a permanent solution. The enq: HW enqueue occurs when sessions want to extend the high-watermark of the LOB faster than the foreground process can acquire and format new LOB chunks, so by allocating the new extents manually we are simply doing that work ahead of the foreground processes. Depending on how much space the incoming workload needs, 30 or even 50 extents will give the application some time to consume that space before it begins to encounter the HW enqueue again. So, depending on the situation and the expected size and duration of the application workload, allocating extents manually only temporarily fixes the issue.

Also, as is the case with almost all data definition language (DDL) statements, manually allocating an extent requires the database to obtain an exclusive lock on the segment; therefore, depending on the level of concurrency, you may not be able to run it without a sufficient maintenance window. Thus, it's a good idea to capture these query results in an alert script that monitors these events in your database and raises an alert should this situation arise; a minimal sketch of such a check follows.
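The check below simply counts the sessions currently waiting on enq: HW - contention so that a scheduled job can raise an alert when the count exceeds whatever threshold you choose (the threshold and the scheduling mechanism are assumptions left to you):

SELECT COUNT(*) AS hw_waiters
FROM gv$session
WHERE event = 'enq: HW - contention'
  AND state = 'WAITING';
-- If hw_waiters exceeds your chosen threshold, raise an alert and capture the
-- object details with the gv$lock and dba_extents queries shown earlier.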

 

BASICFILE LOB Issues: Toward a More Perfect Fix

Here are some other possible solutions to permanently increase the throughput of LOB chunks allocated to the LOB segment:

  • Move the LOB segment to a tablespace with a larger UNIFORM extent size. This has proved to be the most effective method because large UNIFORM extents provide more chunks per HW operation.
  • Increase the LOB chunk size. First, determine the average size of the LOB data via the DBMS_LOB.GETLENGTH function, and then set the LOB's chunk size to between 120 and 150 percent of that average (see the sketch after this list).
  • Increase the amount of space reclaimed per operation by setting event 44951 to TRACE NAME CONTEXT FOREVER, LEVEL 1024. Once a LOB hits its high-watermark, it first tries to reclaim space inside the segment by purging old images of LOB data based on the PCTVERSION or RETENTION setting. Setting this event increases how many chunks Oracle will try to reclaim in a single operation, making reclamation more effective when enq: HW waits are encountered. This strategy complements the proper sizing of the LOB's chunk size, as previously described.
  • Make sure that Automatic Segment Space Management (ASSM) has been activated for the tablespace in which the LOB resides. This is a prerequisite for SECUREFILE LOBs as well.
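The following sketch shows how the chunk-sizing and event recommendations might be applied in practice. The table and column come from the OTM example above (run the query as the owning schema or prefix the table with it), and you should confirm the event setting against the MOS guidance for your exact database version before using it in production:

-- Average LOB length, so the chunk size can be set to roughly 120 to 150 percent of it
SELECT AVG(DBMS_LOB.GETLENGTH(xml_blob)) AS avg_lob_bytes
FROM i_transmission;

-- Increase the number of LOB chunks reclaimed per high-watermark operation
ALTER SYSTEM SET EVENTS '44951 TRACE NAME CONTEXT FOREVER, LEVEL 1024';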

Finally, it's important to note that even in Oracle Database release 11.2.0.1 there was a nasty bug that could corrupt the segment header (thus making the whole segment corrupt) when a manual ALLOCATE EXTENT operation was interrupted, especially for LOB segments. See the My Oracle Support (MOS) note "Segment header corruption if extent allocation operation is interrupted" for complete information.

 

BASICFILE versus SECUREFILE LOBs

Oracle Database 11g's SECUREFILE option for storing LOB datatypes offers better capabilities for managing LOBs, including deduplication, encryption, and compression. It's therefore strongly recommended that you migrate your BASICFILE LOBs to SECUREFILE LOBs if your database is being upgraded from Oracle 9i or Oracle 10g to Oracle 11g; by doing so, you are likely to improve LOB performance and manageability while simultaneously overcoming several of the problems discussed in the previous section. However, your database's transition to SECUREFILE LOBs will not necessarily be seamless, as the following scenarios demonstrate.

One issue you may encounter will be readily obvious when an application first inserts data into a LOB that has been migrated to the SECUREFILE format. If you have an 11.2.0.1 or 11.2.0.2 database, you could run into serious performance problems when inserting data into this LOB column, with SECUREFILE inserts even appearing slower than their BASICFILE equivalents. (Note that this particular bug has been repaired in 11.2.0.3; see the "Overcoming Poor INSERT Performance" section later in this article.)

It’s also important to recognize how a LOB’s storage parameters affect its performance; here is a brief summary of the most crucial ones:

  • CHUNK specifies the smallest unit of allocation within a LOBSEGMENT and is always a multiple of the DB_BLOCK_SIZE parameter. So if your database's DB_BLOCK_SIZE is 16 KB, the chunk size defaults to one 16 KB block, and inserting 2 KB worth of data into a LOB column wastes the remaining 14 KB. Because the maximum chunk size is 32 KB, it's important to specify CHUNK so that space isn't being wasted unnecessarily.
  • The CACHE directive tells Oracle to retain LOB data in the database buffer cache, whereas the NOCACHE setting never brings LOB data into the buffer cache. The end result is that Oracle will use direct read/writes for NOCACHEd data (indicated by the direct path read/write wait event) and perform reads/writes from the database buffer cache for CACHEd data (indicated by the db file sequential read wait event). Also note that for LOB data that will only be read and never written, a third option — CACHE READS —brings LOB data into the buffer cache only during reads, not during writes.
  • The LOGGING option enables the logging of changed LOB data to the online redo logs. If you want to improve performance for data manipulation language (DML) statements executed against your LOB data and know that your application's recovery requirements don't require logging of changed LOB data, consider setting the LOB to NOLOGGING. Note that if you specify NOLOGGING for an in-line LOB, any changes to its data will still be logged in the online redo logs. (A CREATE TABLE example that combines these storage parameters follows this list.)
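Here is a minimal sketch that combines these storage parameters in a single CREATE TABLE statement; the table, column, and tablespace names are illustrative, and the CHUNK, caching, and logging settings should be chosen to match your own data and recovery requirements:

CREATE TABLE order_messages (
   id      NUMBER PRIMARY KEY,
   payload CLOB
)
LOB (payload) STORE AS SECUREFILE (
   CHUNK 8192        -- a multiple of DB_BLOCK_SIZE, sized close to the average LOB length
   NOCACHE           -- LOB data is read and written with direct path I/O, bypassing the buffer cache
   NOLOGGING         -- skip redo for LOB changes only if your recovery requirements allow it
)
TABLESPACE users;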


LOB New and Old Type Differences

To illustrate the difference between these two LOB formats, let’s create two tables, one with a BASICFILE LOB and the other with a SECUREFILE LOB:

  1. Create table test1 so that it contains a SECUREFILE LOB column:
CREATE TABLE test1 (col1 CLOB,col2 number)
             LOB(col1) STORE AS SECUREFILE(CACHE)
tablespace TS_GG_DATA;

When a table containing a LOB column is created, Oracle actually creates two additional segment types: a LOBSEGMENT and a LOBINDEX. As its name suggests, the LOBINDEX is an index used to access the pages, or "chunks," of its corresponding LOBSEGMENT. The following query shows the table segment that was just created (a query that lists the LOB segments themselves follows the output below):

SYS@ORCL AS SYSDBA> SELECT COUNT(*), segment_name, segment_type
FROM dba_extents
WHERE segment_name = 'TEST1'
GROUP BY segment_name, segment_type;
COUNT(*) SEGMENT_NAME SEGMENT_TYPE
---------- -------------- ------------
24 TEST1 TABLE
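To see the LOBSEGMENT and LOBINDEX that were created along with the table, you can join DBA_LOBS to DBA_SEGMENTS, as in this sketch:

SELECT l.table_name,
       s.segment_name,
       s.segment_type,
       s.bytes
FROM dba_lobs l
     JOIN dba_segments s
       ON s.owner = l.owner
      AND s.segment_name IN (l.segment_name, l.index_name)
WHERE l.table_name = 'TEST1';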
  2. Create the test2 table using the BASICFILE LOB option. Remember that if you are performing these tests in Oracle Database 12c, you must specify the BASICFILE parameter to use this type of storage because, starting in that release, the default option during LOB creation is SECUREFILE:
CREATE TABLE test2 (col1 CLOB,col2 number)
LOB(col1) STORE AS BASICFILE
tablespace TS_GG_DATA;
  3. Run the following query to validate the tables and their corresponding LOB datatypes using the DBA_LOBS view:
set lines 200
col column_name for a30
SELECT
   table_name,
   column_name,
   segment_name,
   securefile
FROM dba_lobs
WHERE table_name like 'TEST%';
TABLE_NAME   COLUMN_NAME   SEGMENT_NAME                SECUREFILE
------------ ------------- --------------------------- ----------
TEST1        COL1          SYS_LOB0000088862C00001$$   YES
TEST2        COL1          SYS_LOB0000088867C00001$$   NO
  4. Using a simple loop in an anonymous Procedural Language/Structured Query Language (PL/SQL) block, run the following test, which concatenates a counter to a base string to generate distinct values and inserts 1 million rows into each type of LOB segment. Of course, be sure that there is sufficient space in the LOB segments' tablespaces and corresponding Automatic Storage Management (ASM) disk group before attempting this test:
SET TIME ON
SET TIMING ON
TRUNCATE TABLE test1;
TRUNCATE TABLE test2;
-- Load TEST1
TT@ORCL > BEGIN
FOR v_Count_1 IN 1..1000000 LOOP
INSERT INTO TEST1(col1) VALUES ('testInsertColLOBTest2'||v_count_1);
commit;
END LOOP;
END;
/
PL/SQL procedure successfully completed.
Elapsed: 00:03:00.40
-- Load TEST2
TT@ORCL > BEGIN
FOR v_Count_1 IN 1..1000000 LOOP
INSERT INTO TEST2(col1) VALUES ('testInsertColLOBTest2'||v_count_1);
commit;
END LOOP;
END;
/
PL/SQL procedure successfully completed.

Elapsed: 00:09:11.61

As these tests show, inserting into a SECUREFILE LOB is over three times faster (180 s vs. 551 s) than inserting the same data into a BASICFILE LOB on an Oracle 11g Release 2 database.

  5. Run the following, and you'll see that there is no significant difference in performance for update statements between BASICFILE and SECUREFILE LOBs:
TT@ORCL > 
BEGIN
   FOR v_Count_2 IN 1..1000 
   LOOP
      update TEST2 set col1 = 'testInsertColLOBTest2'||v_count_2 where rownum <100;
      commit;
   END LOOP;
END;
/
PL/SQL procedure successfully completed.
Elapsed: 00:00:03.93

TT@ORCL > 
BEGIN
   FOR v_Count_2 IN 1..1000 LOOP
      update TEST1 set col1 = 'testInsertColLOBTest2'||v_count_2 where rownum <100;
      commit;
   END LOOP;
END;
/
PL/SQL procedure successfully completed.
Elapsed: 00:00:03.62
  6. However, it is important to remember that after data in a LOB segment has been updated and/or deleted, it's possible that the LOB could become seriously fragmented. It's therefore advisable to shrink the LOB using the following command:
-- For Oracle Database 10.2 and above:
ALTER TABLE <table name>
MODIFY LOB (<lob column name>) (SHRINK SPACE [CASCADE]);
-- For Oracle Database 10.1 and below:
ALTER TABLE <table name>
MOVE LOB (<lob column name>) STORE AS (TABLESPACE <tablespace name>);

 

Migrating BASICFILE LOBs to SECUREFILE LOBs

Migrating a BASICFILE LOB to a SECUREFILE LOB can be done using one simple command, as follows:

TT@ORCL > ALTER TABLE test2 MOVE LOB (col1) STORE AS SECUREFILE (TABLESPACE users);

Table altered.
Elapsed: 00:00:23.54

However, one disadvantage of this simple migration strategy is that the LOB will be inaccessible for any DML activity while the migration completes. Another way to migrate a BASICFILE to a SECUREFILE LOB in ONLINE mode is to use the DBMS_REDEFINITION package, as our next example shows:

  1. If the user account that’s going to be used for the LOB migration doesn’t have SYSDBA privileges, run the following to grant specific privileges to that account so that it can perform the migration:
-- Create the migrating user (if it doesn't yet exist)...
grant dba to tt identified by tt123;
-- ... or grant the existing user account the necessary specific privileges
-- so it can use DBMS_REDEFINITION
grant execute on dbms_redefinition to tt;
grant alter any table to tt;
grant drop any table to tt;
grant lock any table to tt;
grant create any table to tt;
grant select any table to tt;
grant create session to tt;
  2. Create the table that will be converted (test3). The interim table used during the migration (test4) will be created in step 4:
CREATE TABLE test3 (
   col1 NUMBER PRIMARY KEY,
   col2 CLOB
   ); 

Table created.

Elapsed: 00:00:00.11
  3. Insert some example data into test3 for demonstration purposes:
TT@ORCL > 
BEGIN
   FOR v_Count_2 IN 1..10000 LOOP
      INSERT INTO TEST3(col1,col2) VALUES (v_count_2,'testInsertColLOBTest3'||v_count_2);
      commit;
   END LOOP;
END;

/
PL/SQL procedure successfully completed.

Elapsed: 00:00:01.82
 
  4. Create the interim table (test4), which will act as a staging table that will store data being inserted, deleted, or updated while the online migration proceeds:
TT@ORCL > 
CREATE TABLE test4 (
   col1 NUMBER NOT NULL,
   col2 CLOB
   ) LOB(col2) STORE AS SECUREFILE (NOCACHE);

Table created.

Elapsed: 00:00:00.05
  5. Start the redefinition of table test3 using the START_REDEF_TABLE procedure of package DBMS_REDEFINITION:
23:07:09 TT@ORCL > 
DECLARE
   col_mapping VARCHAR2(1000);
BEGIN
   col_mapping := 'col1 col1, ' || 'col2 col2';
   DBMS_REDEFINITION.START_REDEF_TABLE('tt', 'test3', 'test4', col_mapping);
END;
/

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.97
  6. To ensure that all of the constraints are copied from the original table to the interim table, invoke the COPY_TABLE_DEPENDENTS procedure of DBMS_REDEFINITION as follows:
23:07:42 TT@ORCL > 
DECLARE
   error_count pls_integer := 0;
BEGIN
   DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
      uname=> 'tt',
      orig_table=> 'test3',
      int_table=> 'test4',
      copy_indexes=> DBMS_REDEFINITION.CONS_ORIG_PARAMS,
      copy_triggers=> TRUE,
      copy_constraints=> TRUE,
      copy_privileges=> TRUE,
      copy_statistics=> FALSE,
      num_errors=> error_count);
   DBMS_OUTPUT.PUT_LINE('errors := ' || TO_CHAR(error_count));
END;
/

PL/SQL procedure successfully completed.

Elapsed: 00:00:05.89
  7. Finish the redefinition of table test3 to table test4 using the FINISH_REDEF_TABLE procedure of DBMS_REDEFINITION:
23:08:10 TT@ORCL > EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('tt', 'test3', 'test4');
PL/SQL procedure successfully completed.
Elapsed: 00:00:02.52
  8. Verify that the column in table test3 that was originally a BASICFILE LOB is now a SECUREFILE LOB via the following query against DBA_LOBS:
23:18:33 TT@ORCL > select owner,table_name,column_name,securefile
from dba_lobs where table_name='TEST3';
OWNER TABLE_NAME COLUMN_NAME SEC
---------- --------------------- --------------------- --------------
TT TEST3 COL2 YES
Elapsed: 00:00:00.05
23:18:53 TT@ORCL >

Another way to perform an online, high-availability migration between BASICFILE and SECUREFILE LOBs is to use CTAS (create table as select) in conjunction with Oracle GoldenGate on the same source and target databases to replicate data. This makes it possible to sync up the original and new tables’ contents after the CTAS operation that transforms the tables is completed. Once the migration is complete, there will be only a brief window of application downtime while original and new tables are renamed.
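A rough sketch of the CTAS portion of that approach follows; the new table name is illustrative, and the GoldenGate configuration that keeps the two tables synchronized after the initial copy is not shown:

-- Build the new SECUREFILE copy while the original table stays online
CREATE TABLE test2_secure
   LOB (col1) STORE AS SECUREFILE (TABLESPACE ts_gg_data CACHE)
AS SELECT * FROM test2;

-- Once replication has caught up, swap the tables during a brief outage
ALTER TABLE test2 RENAME TO test2_old;
ALTER TABLE test2_secure RENAME TO test2;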

 

The Impact of PCTFREE on LOBs

Setting an appropriate value for the PCTFREE parameter is critical to avoiding wasted space while creating tables with in-line LOB columns. PCTFREE specifies the minimum percentage of free space in a block that the Oracle database reserves when updating existing rows in a table, so it’s mainly designed to inhibit unnecessary row migration when a row piece no longer fits within a block after the row piece is updated and grows in length.

The default value for PCTFREE is 10 percent, but it’s not unusual to see it set to a higher value to prevent row migration. For example, if a table’s PCTFREE value has been set to 30, Oracle will ensure that 30 percent of the block is reserved for row-length growth after the rows have been updated. When a new row piece is inserted into this block and 70 percent of the space has been used, then the block is marked as full and the next row will be inserted into another block with sufficient free space.

However, a larger-than-normal value for PCTFREE can also mean that huge amounts of free space can be inadvertently wasted when a table contains in-line LOB columns. Our next example demonstrates just how much space can be saved in this situation:

  1. Create a simple table with a LOB column and PCTFREE equal to 40 percent, insert 100,000 rows into this new table, and then verify the row count and gather optimizer statistics:
SYS@ORCL AS SYSDBA> create table tt.test_pctfree (col1 clob) PCTFREE 40;
Table created.

SYS@ORCL AS SYSDBA> 
BEGIN
   FOR v_Count_2 IN 1..100000 LOOP
      INSERT INTO tt.test_pctfree
      VALUES ('testInsertColLOBTest2'||v_count_2);
      COMMIT;
   END LOOP;
END;
/

SYS@ORCL AS SYSDBA> select count(*) from tt.test_pctfree;

COUNT(*)
----------
100000

SYS@ORCL AS SYSDBA> EXEC DBMS_STATS.GATHER_TABLE_STATS ('TT', 'TEST_PCTFREE');
PL/SQL procedure successfully completed.
  2. Check the number of blocks and the average free space per block for this table:
SYS@ORCL AS SYSDBA>
select
   blocks,
   avg_space,
   pct_free
from
   dba_tables
where
   table_name='TEST_PCTFREE';

BLOCKS     AVG_SPACE   PCT_FREE
---------- ---------- ----------
1504         3362        40
  3. Truncate the table and change its PCTFREE to 0 (zero):
SYS@ORCL AS SYSDBA> truncate table tt.test_pctfree;

Table truncated.
Elapsed: 00:00:00.07

SYS@ORCL AS SYSDBA> alter table tt.test_pctfree pctfree 0;

Table altered.
Elapsed: 00:00:00.01
  4. Insert the same quantity of rows again and regather statistics:
SYS@ORCL AS SYSDBA> 
BEGIN
   FOR v_Count_2 IN 1..100000 LOOP
      INSERT INTO tt.test_pctfree VALUES ('testInsertColLOBTest2'||v_count_2);
      COMMIT;
   END LOOP;
END;
/

PL/SQL procedure successfully completed.
Elapsed: 00:00:14.13

SYS@ORCL AS SYSDBA> EXEC DBMS_STATS.GATHER_TABLE_STATS ('TT', 'TEST_PCTFREE');

PL/SQL procedure successfully completed.
Elapsed: 00:00:00.22

SYS@ORCL AS SYSDBA>
  5. Verify the new number of blocks and the average free space per block:
SYS@ORCL AS SYSDBA> 
select
   blocks,
   avg_space,
   pct_free
from
   dba_tables
where
   table_name='TEST_PCTFREE';

BLOCKS       AVG_SPACE PCT_FREE
---------- ---------- ----------
874             303       0

Elapsed: 00:00:00.00

As this demonstration shows, the number of blocks consumed is much lower: 874 for a PCTFREE of 0 percent versus 1,504 when PCTFREE is 40 percent, an improvement of about 42 percent. Using a high value for PCTFREE in tables with LOB columns can represent a huge waste of space, and this is why some LOB tablespaces become incredibly large, which tends to make their reorganization difficult. If a table that contains LOBs has not yet been partitioned, there's an excellent chance that an Oracle DBA could be held hostage to this table's space demands because it may become nearly impossible to find a sufficient window of time to recreate the table, even when using DBMS_REDEFINITION.

It is important to emphasize that reducing wasted space within a table with LOBs applies just to inline LOBs. As this example showed, the data inserted was quite small, and most of it was stored within the table itself; for this reason, it was significantly affected by the PCTFREE value of the table. For out-of-line LOBs, however, the table's PCTFREE setting is not a factor because LOB space is allocated and maintained in units of chunks (which are several blocks long) and not on a block-by-block basis as with a table segment.

While LOB datatypes (BLOB, CLOB, NCLOB, and BFILE) do not use the PCTFREE storage parameter or free lists to manage free space, the same strategy to manage LOB space growth can be applied to chunk size when dealing with out-of-line LOBs. Chunk size should therefore be set to a larger value than the average size of LOB data if updates are expected to increase their size; otherwise, LOB data migration might occur, and application performance may be significantly degraded, because each LOB retrieval will be forced to retrieve twice as many LOB chunks.

One final but crucial caveat to remember is that the chunk size also impacts the size of the redo generated during DML against LOB data. A redo record must be generated for the entire chunk, so the size of the chunk has a direct impact on how much redo will be generated, and there will be a definite corresponding impact on DML performance. It’s therefore important to be sure to set chunk size no larger than necessary.
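One way to sanity-check chunk sizing is to compare the LOB's configured chunk size with the average length of the data actually stored in it. The following sketch does this for the test2 table used earlier; substitute your own table and column names:

-- Configured chunk size for the LOB column
SELECT table_name, column_name, chunk
FROM dba_lobs
WHERE table_name = 'TEST2';

-- Average stored LOB length, for comparison against the chunk size
SELECT AVG(DBMS_LOB.GETLENGTH(col1)) AS avg_lob_bytes
FROM test2;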

 

Overcoming Poor INSERT Performance

If your application code should encounter unexpectedly poor performance during INSERTs after migrating from BASICFILE to SECUREFILE LOBs in Oracle Database 11g Release 2, be aware that there is a rather pernicious bug in release 11.2.0.1 that is documented in MOS Note 1323933.1, “Securefiles performance appears slower than basicfile LOB.”

To verify that this bug is really what is causing poor INSERT performance (as well as provide a quick workaround for this problem), use the following command to change the _kdli_sio_fileopen parameter to an appropriate value at the session level before continuing to perform INSERTs against the table that contains one or more SECUREFILE LOB columns:

SQL> alter session set "_kdli_sio_fileopen"='nodsync';

The MOS note strongly suggests applying an appropriate patch for your database version to fix the problem permanently. Finally, note that as of release 11.2.0.3, this bug has been repaired.

 

Summary

From this blog article, you have learned that the more tables with LOB columns you have, the more likely you are to encounter problems with LOB performance and maintenance. The key recommendation is that whenever you create a new table with LOB columns, choose storage parameters that match the expected behavior of that table; a combined example follows the list below:

  • For tables that will never be updated, be sure to set PCTFREE to a value of zero (0).
  • If you do not really need LOB storage, consider using VARCHAR2 or RAW instead.
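As a closing illustration, here is a sketch of a table definition that combines several of this article's recommendations: SECUREFILE storage, caching suited to frequently read LOB data, and PCTFREE 0 for an insert-only table. The table and tablespace names are illustrative:

CREATE TABLE audit_payloads (
   id         NUMBER PRIMARY KEY,
   created_at DATE DEFAULT SYSDATE,
   payload    CLOB
)
PCTFREE 0                       -- rows are never updated, so reserve no free space for row growth
LOB (payload) STORE AS SECUREFILE (
   TABLESPACE ts_lob_data       -- an ASSM-managed tablespace dedicated to LOB data
   CACHE                        -- LOB data is read frequently, so keep it in the buffer cache
);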
