Overcoming Undo Tablespace Corruption in an Oracle Database

Stas Belkov


Undo data is a vital component of an Oracle database environment, and when something goes wrong with the undo tablespaces where this data is stored, the database administrator (DBA) must take quick action to correct the problem. Otherwise, the database could become damaged, and valuable information might be permanently lost. This chapter explains how to manage some undo problems that may occur in real-world scenarios.



 

Overview of Undo Management

Undo tablespaces were introduced in Oracle Database 9i, but not until Oracle Database 11g was the undo management mode set to AUTO by default. Automatic undo management means that Oracle DBAs no longer need to manage rollback segments manually because the Oracle database itself manages the undo segments in a single, dedicated undo tablespace.

As soon as an Oracle database instance starts, it looks for the UNDO_TABLESPACE parameter to determine which undo tablespace should be used; if it does not find the specified undo tablespace, the database will not start and an error will be generated in the database’s alert log. If the UNDO_TABLESPACE parameter is not being used, the database will try to find any undo tablespace in the database, and if it cannot find one, it will start to use rollback segments stored in the SYSTEM tablespace. This is not an optimal situation for database performance and is definitely not recommended. Again, in this situation, an alert message will be written to the database instance’s alert log.
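
A quick way to confirm how a given instance is configured is to display the undo-related initialization parameters from SQL*Plus; the values shown here are only illustrative:

SQL> SHOW PARAMETER undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1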

 

 

The Importance of UNDO_RETENTION

When a transaction begins, it is assigned a specific undo segment in the undo tablespace. That undo segment, like all segments in a tablespace, consists of several undo extents. Until the transaction is either committed or rolled back, the undo extents for a transaction have a status of ACTIVE. However, once the transaction is completed, if other statements need those extents to maintain a read-consistent view of the data as of the system change number (SCN) at which the statement began, then the related undo extents are marked as UNEXPIRED and are retained until they are no longer needed to maintain read consistency. If there are no related statements that need to maintain a read-consistent view once the transaction has completed, then the undo extents will be marked as EXPIRED and other new transactions can reuse them.

The UNDO_RETENTION initialization parameter specifies the time in seconds that Oracle will attempt to retain undo extents before they can be overwritten by transactions that need undo space. When a transaction has already been committed and its execution time is longer than the value of UNDO_RETENTION, the status of the transaction’s undo extent changes to EXPIRED so that other transactions can reuse them. Otherwise, if the transaction itself was of short duration, then the undo extents will remain in UNEXPIRED status until the amount of time since the transaction was committed exceeds the time specified by UNDO_RETENTION.
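
The parameter can be adjusted dynamically. As a simple sketch, the following command requests a 15-minute retention period; 900 seconds is just an example value, not a recommendation for every workload:

SQL> ALTER SYSTEM SET UNDO_RETENTION = 900 SCOPE=BOTH;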

The UNDO_RETENTION parameter also interacts with the undo tablespace differently depending on whether the tablespace’s underlying datafiles are autoextensible. If the undo tablespace’s datafiles are set to a fixed size, then Oracle will ignore the UNDO_RETENTION parameter because it cannot guarantee the value specified for the retention period without having to increase the size of the undo tablespace. On the other hand, if the undo tablespace’s datafiles are set to AUTOEXTEND instead, then Oracle will try to gracefully honor the UNDO_RETENTION parameter value by increasing the size of the undo tablespace’s datafiles instead of overwriting unexpired undo transactions.
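
To see which behavior applies to your database, check whether the undo tablespace's datafiles are autoextensible; UNDOTBS1 is assumed here, so substitute your own undo tablespace name:

SQL> SELECT file_name, autoextensible, bytes/1024/1024 AS size_mb
FROM dba_data_files
WHERE tablespace_name = 'UNDOTBS1';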

The ultimate impact of the UNDO_RETENTION parameter and its interaction with its tablespace can be verified by checking the value of the TUNED_UNDORETENTION column in the V$UNDOSTAT dynamic view. This value is calculated and adjusted on the fly. The way it’s calculated depends on the MAXQUERYLEN of the queries captured within each time interval, as well as on the total space usage of the undo tablespace. If your database’s application workload consists of long-running, report-like queries, and its undo tablespace’s datafiles are relatively large and of fixed size, don’t be surprised to encounter a very high value for TUNED_UNDORETENTION.
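
For example, the following query shows how the auto-tuned retention has tracked the longest query observed during each ten-minute statistics interval:

SQL> SELECT begin_time, maxquerylen, tuned_undoretention
FROM v$undostat
ORDER BY begin_time;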

This situation can sometimes cause problems for database application performance. For example, if TUNED_UNDORETENTION is set relatively high, then depending on the database’s application workload, it is possible that a lot of UNEXPIRED undo extents will result, and the database will most likely reuse them during peak data manipulation language (DML) execution periods. Reusing UNEXPIRED undo extents is potentially more expensive in terms of latches and internal work than reusing EXPIRED segments. In this situation, it may be advantageous to disable autotuning of retention values by setting the hidden initialization parameter _UNDO_AUTOTUNE to a value of FALSE; alternatively, the _HIGHTHRESHOLD_UNDORETENTION parameter can be used to specify an upper bound value for TUNED_UNDORETENTION.

Note

Be sure to review MOS Note 742035.1, “Contention under auto-tuned undo retention,” and its notes about Bug 7291739 before implementing this solution because it identifies a particularly pernicious bug for Oracle databases prior to release 11.2.0.1.
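
If, after reviewing that note, you still decide to constrain auto-tuning, the general approach is sketched below. The 43,200-second (12-hour) cap is purely an example, both changes take effect at the next instance restart, and hidden parameters should only be modified with guidance from Oracle Support:

SQL> ALTER SYSTEM SET "_undo_autotune" = FALSE SCOPE=SPFILE;

-- alternatively, cap the auto-tuned value instead of disabling auto-tuning entirely
SQL> ALTER SYSTEM SET "_highthreshold_undoretention" = 43200 SCOPE=SPFILE;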

 

 

Tuning UNDO_RETENTION

The DBMS_UNDO_ADV package provides some excellent feedback to help determine the best value for the UNDO_RETENTION parameter:

 

SQL> SELECT DBMS_UNDO_ADV.LONGEST_QUERY(SYSDATE-1/24, SYSDATE)

AS best_undo_time FROM dual;

BEST_UNDO_TIME

845

This output indicates that the longest-running query on this instance during the past hour took 845 seconds. Based on this output, then, 845 seconds is a good starting point for the UNDO_RETENTION parameter. Another query that can help identify an appropriate value for UNDO_RETENTION follows:

 

SQL> SELECT DBMS_UNDO_ADV.REQUIRED_RETENTION(SYSDATE-30, SYSDATE)

AS reqd_retn FROM dual;

REQD_RETN

1699

This query leverages the REQUIRED_RETENTION function of the DBMS_UNDO_ADV package, which here reflects the execution time of the longest-running query in the last 30 days. If you use the same input parameters with the LONGEST_QUERY function from the first example, you will see that the same value is returned.

 

Note

DBMS_UNDO_ADV is an undocumented package. For more information about using it, see MOS Note 1580225.1, “What is the Undo Advisor and how to use it through the DBMS_UNDO_ADV package.” Much of this information is also available via the Undo Advisor in Enterprise Manager Database Control, Grid Control, and Cloud Control.

 

 

DTP, XA, and Rollback Segments

Oracle Database applications that connect to multiple databases via database links or that use XA (eXtended Architecture) for distributed transaction processing (DTP) may occasionally experience issues with pending transactions. When such issues arise, it’s not unusual to encounter extreme contention, often to the point that the database instance “hangs,” which may cause all database DML operations to halt until the problem is resolved. The following checklist and corresponding scripts will help to ascertain whether an Oracle database instance is in danger of hanging due to issues with DTP via XA:

  1. Run the following query to determine if there are any pending distributed transactions and, if so, build a set of commands to force any pending distributed transactions to commit:

 

SQL> SET HEADING OFF
SQL> SELECT 'commit force '''||local_tran_id||''';' FROM dba_2pc_pending;

commit force '151.23.987365';
commit force '155.29.1615583'; 
commit force '231.10.1069716'; 
commit force '237.18.648972'; 
commit force '238.15.811599'; 
commit force '36.5.1329177'; 
commit force '393.41.746115'; 
commit force '4733.28.915649'; 
commit force '613.17.686683';
9 rows selected.

This output identifies nine pending distributed transactions that must be resolved by forcing them to explicitly commit. While it’s also acceptable to roll back these transactions, it usually makes better sense to commit them because committing diminishes the chance of losing any transaction data.

  2. Next, execute the commands generated in the previous step:

SQL> commit force '151.23.987365';

 commit force '155.29.1615583'; 

commit force '231.10.1069716'; 

commit force '237.18.648972'; 

commit force '238.15.811599'; 

commit force '36.5.1329177'; 

commit force '393.41.746115';

 commit force '4733.28.915649'; 

commit force '613.17.686683';

Commit complete.

Commit complete.

Commit complete.



 

  3. Run this next query to build a set of commands to execute on this instance to purge any lost transactions from the database:

 

SQL> SELECT 'Execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('''||local_tran_id||''');commit;'
FROM dba_2pc_pending;

Execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('151.23.987365');commit;
Execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('155.29.1615583');commit;
Execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('231.10.1069716');commit;
Execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('237.18.648972');commit;
Execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('238.15.811599');commit;
Execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('36.5.1329177');commit;
Execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('393.41.746115');commit;
Execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('4733.28.915649');commit;
Execute DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('613.17.686683');commit;

9 rows selected.

Simply copy and paste the output from the previous command into a SQL*Plus session to clean up any additional pending distributed transactions. However, if the distributed transactions persist, it may be necessary to investigate and resolve one last possibility: the status of the rollback segments associated with these purged transactions may have changed from ONLINE to PARTLY AVAILABLE.

  4. The PARTLY AVAILABLE status means that the rollback segment contains data for an in-doubt transaction or a recovered distributed transaction. The database instance’s background recoverer process (RECO) should automatically resolve in-doubt or pending transactions; however, it may still be necessary for the Oracle DBA to intervene and reset the affected rollback segments to their proper status of ONLINE, because the database may hang if transactions other than the purged distributed transactions were using those rollback segments.

To correct this situation, run the following query to generate the commands needed to bring any rollback segments still marked as PARTLY AVAILABLE back online:

 SQL> SELECT

'ALTER ROLLBACK SEGMENT "'||segment_name||'" ONLINE;'

FROM dba_rollback_segs

WHERE status LIKE 'PARTLY_AVAILABLE';

 

  5. If executing this command fails to bring the rollback segments back online, some additional remedial action is required. Take the rollback segment offline and then verify that its status is now OFFLINE instead of PARTLY AVAILABLE using the following commands:

 

SQL> ALTER ROLLBACK SEGMENT "_SYSSMU168$" OFFLINE;

SQL> SELECT status

FROM dba_rollback_segs

WHERE SEGMENT_NAME = '_SYSSMU168$';



  6. Once the rollback segment is taken offline successfully, drop it and then immediately re-create it, as follows:
SQL> DROP ROLLBACK SEGMENT "_SYSSMU168$";

SQL> CREATE ROLLBACK SEGMENT "_SYSSMU168$" TABLESPACE UNDOTBS1;

This step should resolve any rollback segment-related issues, and database application user sessions should now be able to proceed without any danger of halting or causing the database instance to hang.

 

Other Unusual Rollback and Undo Segment Issues

Sometimes a session can hang when the DBA tries to force a transaction to commit or roll back (COMMIT FORCE or ROLLBACK FORCE). Within the database, the hung session can be seen waiting on the event “free global transaction table entry.” Other rare cases occur when the underlying metadata about the transaction is corrupted; resolving them will most likely require manually deleting and reinserting the transaction information within the appropriate internal tables. See MOS Note 401302.1, “How to resolve stranded DBA_2PC_PENDING entries,” for more information.
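
A quick way to confirm that sessions are actually stuck on this event is to query V$SESSION while the apparent hang is occurring; this is only a minimal sketch of that check:

SQL> SELECT sid, serial#, event, seconds_in_wait
FROM v$session
WHERE event = 'free global transaction table entry';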

 

Recovering from Undo Tablespace Corruption

If you have been an Oracle DBA for any length of time, it’s quite likely that you’ve encountered an ORA-1578, ORA-7445, or ORA-600 error during your career. These are indications of a symptom that all DBAs love to hate: corruption. Fortunately, corruption within Oracle database structures doesn’t occur very often, but when it does, it must be dealt with carefully and immediately. It is also crucial to determine the true root cause (or causes) of the corruption and employ proactive measures to ensure it will not reoccur.

The simplest technical definition of corruption in an Oracle database is that it is an inconsistency in either a data block or buffer cache memory structure. Fortunately, there are well-known methods to prevent, detect, and repair corruption depending on the type of corruption encountered. The remainder of this chapter discusses the appropriate approaches to detecting, repairing, and then proactively preventing corruption in rollback segments in the SYSTEM tablespace and undo segments in an undo tablespace.

 

Preventing, Detecting, and Repairing Corruption

First, it’s important to distinguish two broad types of corruption: memory corruption and block corruption.

An ORA-7445 error is generally indicative of memory corruption, and the root causes of this error are legion: It could be caused by a malfunction in the database server’s dynamic random-access memory (DRAM), storage network, I/O controller memory, or even the internal disk drive’s memory cache. This type of error requires careful testing and retesting to establish the root cause, and it may even require intervention by the appropriate hardware technicians to diagnose, isolate, and repair the ultimate source of the problem.

An ORA-600 or ORA-1578 error typically indicates either logical or physical corruption of a database block or memory buffer. Physical corruption means that the database cannot recognize the block or buffer correctly; perhaps the block header has been corrupted, or its actual size no longer matches its expected size. Logical corruption implies that even though the block or buffer structure is still intact, its contents are essentially gibberish and are no longer legible to the database. As these issues are mostly within an Oracle DBA’s jurisdiction, the remainder of this section focuses on how to prevent, detect, and repair this type of corruption.

Prevention. A smart Oracle DBA spends her time anticipating a problem before it happens. In the case of corruption, she follows recommended best practices, including avoiding single points of failure in the database’s hardware and network, activating appropriate monitoring of the hardware and the database using tools such as Oracle Enterprise Manager 12c Cloud Control, and activating appropriate values for specific initialization parameters to limit or eliminate database corruption.

Detection. Unfortunately, database corruption is often first detected when a user reports a strange error message on his screen, usually followed by a frantic search of the corresponding database’s alert log to find the ORA-1578 error that matches the database block that’s been corrupted. However, it’s also possible to detect block corruption outside of application errors in the following ways:

  • During regularly scheduled Recovery Manager (RMAN) backup processing
  • By running the RMAN BACKUP VALIDATE command against a database, tablespace, or datafile, either with or without the optional CHECK LOGICAL clause to identify logical corruption
  • By running the DBMS_REPAIR package against a specific database segment
  • By executing the database verification utility, DBVerify, against a specific datafile

In these cases, it’s likely that the V$DATABASE_BLOCK_CORRUPTION dynamic view will contain a list of the corrupted blocks and that information can be leveraged effectively for repair of the corrupted blocks.
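
For example, a full logical validation followed by a check of the corruption view might look like the following sketch; no rows returned from the view means no corruption was recorded:

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;

SQL> SELECT file#, block#, blocks, corruption_type
FROM v$database_block_corruption;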

Repair. Once corruption has been detected, it’s the job of an Oracle DBA to repair the damage as quickly as possible. The appropriate method to repair the damage depends on the type of corruption detected, the criticality of the object to the database application, the amount of downtime that the corruption is causing or may cause, the availability of RMAN backups on either disk or tape, and even the specific disaster recovery environment that has been implemented. For example, a corrupted database block on disk in the oldest historical partition of a partitioned table containing data from 20 years ago is obviously of less immediate concern than a block in the hottest partition of the same table.
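
When valid RMAN backups are available, block media recovery is usually the least disruptive repair. As a sketch, either repair everything listed in V$DATABASE_BLOCK_CORRUPTION or name a specific block; the file and block numbers below are examples only:

RMAN> RECOVER CORRUPTION LIST;

RMAN> RECOVER DATAFILE 7 BLOCK 42;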

 

Handling Memory Corruption

When blocks are being read into an Oracle database’s buffer cache, Oracle background processes will verify the blocks being read from disk. They compare the incarnation number (INC) and sequence number (SEQ) in the header of the data block with the corresponding values stored in the block’s tail to make sure that the block versions are exactly the same. This verification should prevent an Oracle database from reading a block from disk that contains a corrupted block header.

Even so, it is still possible for memory corruption to occur. Here are several possible vectors:

  • Oracle software bugs
  • Operating system software bugs
  • Non-Oracle programs writing to the same memory address being used by Oracle
  • Hardware problems, including failing DRAM components

Prevention. One of the best ways to prevent unnecessary corruption of Oracle database blocks is to set the DB_BLOCK_CHECKING initialization parameter to FULL. The Oracle database will then verify that a block is logically consistent before it’s written back to disk. Be aware, however, that there is a potential performance penalty of up to 10 percent if the application workload is skewed heavily toward DML because of the additional verification processing this block checking entails.
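
Both DB_BLOCK_CHECKING and the related DB_BLOCK_CHECKSUM parameter can be changed dynamically. The values below are reasonable starting points rather than universal recommendations, and the DML overhead should be measured in a test environment first:

SQL> ALTER SYSTEM SET DB_BLOCK_CHECKING = FULL SCOPE=BOTH;
SQL> ALTER SYSTEM SET DB_BLOCK_CHECKSUM = TYPICAL SCOPE=BOTH;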

Starting with Oracle Database 11g, it’s even simpler to set the appropriate values to guard against corruption via the DB_ULTRA_SAFE initialization parameter. This parameter accepts just three possible values:

  • As might be expected, OFF sets the lowest acceptable values: DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM, and DB_LOST_WRITE_PROTECT are deactivated (but can be set separately to appropriate values if desired).
  • DATA_ONLY sets DB_BLOCK_CHECKING to MEDIUM, DB_LOST_WRITE_PROTECT to TYPICAL, and DB_BLOCK_CHECKSUM to FULL.
  • DATA_AND_INDEX sets DB_BLOCK_CHECKING and DB_BLOCK_CHECKSUM to FULL and DB_LOST_WRITE_PROTECT to TYPICAL.

Detection. Memory corruption is typically detected by monitoring the database instance alert log for ORA-600 errors; however, it’s just as likely that database application users will unexpectedly receive an ORA-600 error and report it, or that other error messages related to memory corruption will be reported directly to the Oracle DBA.

Repair. Restarting a database instance is one potential method to clear at least some memory corruption errors. However, the Oracle hanganalyze utility is an excellent tool for collecting more information, and it’s likely that Oracle Support will require more detailed information when opening a service request to solve the memory corruption issues. Collecting hanganalyze information is relatively straightforward, and there are two methods to choose from: SQL*Plus and Oradebug.

 

Invoking Hanganalyze via SQL*Plus

One of the best methods to gather information about database performance problems when the database is hanging is to use the HANGANALYZE diagnostic event as follows:

  1. Start a SQL*Plus session as SYSDBA and run the following command:
ALTER SESSION SET EVENTS 'immediate trace name HANGANALYZE level 3';
  2. Wait 1 minute and run this same command again:
ALTER SESSION SET EVENTS 'immediate trace name HANGANALYZE level 3';
  3. Wait 1 more minute and again run the same command:
ALTER SESSION SET EVENTS 'immediate trace name HANGANALYZE level 3';
  4. Wait 1 more minute and then run this command to set another event:
ALTER SESSION SET EVENTS 'immediate trace name SYSTEMSTATE level 266';
  5. Wait 1 more minute, and run the same event command again:
ALTER SESSION SET EVENTS 'immediate trace name SYSTEMSTATE level 266';
  6. Finally, wait 1 more minute and run the same command once more:
ALTER SESSION SET EVENTS 'immediate trace name SYSTEMSTATE level 266';
  7. Exit your SQL*Plus session, gather all trace files generated in your database’s USER_DUMP_DEST directory, and send them to Oracle Support for expert analysis.

 

Using Hanganalyze without a Persistent SQL*Plus Connection

If you are unable to log in to the affected database with SQL*Plus at the exact moment that the hang occurs, this method will work instead:

 

1. Log in to SQL*Plus with a preliminary connection (which does not require a full session) using the following command:

sqlplus -prelim / as sysdba

2. Attach Oradebug to your own server process using the following command:

oradebug setmypid

3. Remove the trace file size limit for this session:

oradebug unlimit

4. Run the HANGANALYZE command:

oradebug hanganalyze 3

5. Wait 3 minutes and run it again:

oradebug hanganalyze 3

6. Wait 3 more minutes and run it once more:

oradebug hanganalyze 3

7. Wait 3 more minutes, then run this command:

oradebug dump systemstate 10

8. Now wait 7 minutes and run the same command again:

oradebug dump systemstate 10

9. Wait 7 minutes and run the same command once more:

oradebug dump systemstate 10

10. Cancel the tracing, stop the session, and send the appropriate files from USER_DUMP_DEST to Oracle Support.
 

Note

If you are analyzing an issue with an Oracle Real Application Clusters (RAC) database, be sure to invoke oradebug setinst all to capture information from all instances properly!

 

Handling Logical Corruption

Poorly designed applications can cause logical corruption because they do not check the integrity of their data. Some bugs related to the Oracle optimizer can also lead to logical corruption.

  • Prevention. Exhaustively test application code before deploying it to a production environment. Also, be sure to keep your Oracle database patched to the most recent patch set release and strive to upgrade databases to the most stable version for your current release.
  • Detection. Logical corruptions caused by application errors are difficult to detect; the best professionals to find errors of this kind are the developers of the application in question.
  • Repair. The same professional who finds the error—usually, the application developer—will be the one to fix it.

 

Overcoming Media Corruption

Media corruption has several potential vectors, including (but not limited to) the following:

  • Hardware problems in the database server, storage area network, or physical storage components
  • Misconfigured parameters in the operating system or storage area network
  • I/O controller issues
  • Logical volume issues
  • User errors (e.g., a user inadvertently replacing or deleting a datafile)
  • Unexpected bugs in Oracle database or Automatic Storage Management (ASM) software
  • Bugs in the database server’s operating system software

Prevention. Depending on the cause, here are some recommended best practices to prevent media corruption:

  • Practice the principle of least privilege to ensure that no one can inadvertently overwrite or delete a datafile.
  • Have a good backup policy, and always test a full restore at regular intervals to ensure full database recoverability can be performed when it is least expected.
  • Install a monitoring tool such as Oracle Grid Control 11g or Cloud Control 12c so alerts can be raised whenever an ORA error occurs, or deploy shell scripts to monitor the alert log.
  • Always make changes to your database in a test environment before deploying any application changes that might cause corruption.

Detection. ORA errors will be found in alert log files, and sometimes users accessing the database will report these errors. In addition, DBVerify can be used to find physically corrupted blocks in datafiles.
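
A typical DBVerify invocation from the operating system prompt looks like the following; the datafile path, block size, and log file name are examples and should be replaced with your own values:

$ dbv FILE=/u01/app/oracle/oradata/ORCL/users01.dbf BLOCKSIZE=8192 LOGFILE=dbv_users01.log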

Repair. Datafile block corruption can be repaired in many different ways, and the appropriate method depends on the type of block that needs recovery.

 

File Header Block Corruption

As the name implies, the header block keeps track of information about the datafile as well as the status of the file and the last time that a RESETLOG operation was performed. Also, every time a checkpoint occurs, the file header block is updated with checkpoint information.

Detection. Fortunately, there are several ways to detect file header block corruption:

  • Review the database’s alert logs for any ORA errors.
  • Run DBVerify against datafiles.
  • Issue the ALTER SYSTEM CHECK DATAFILES command, and then check the database’s alert log afterwards to see if any errors were generated.
  • Issue SELECT FILE#, ONLINE_STATUS from V$RECOVER_FILE to check for any datafile that needs recovery, and then review the output to find any information about corrupted blocks.
  • An ORA-1578 error is typically generated when a file header becomes corrupted, and the error message displays the block number and datafile number of the affected file.

Repair. The only repair option in this situation is to restore the block from a valid datafile backup.
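
Because a corrupted file header generally cannot be repaired with block-level recovery, a common approach is to take the affected datafile offline, restore it, and recover it with RMAN. In this sketch, datafile 7 stands in for the damaged file:

RMAN> SQL 'ALTER DATABASE DATAFILE 7 OFFLINE';
RMAN> RESTORE DATAFILE 7;
RMAN> RECOVER DATAFILE 7;
RMAN> SQL 'ALTER DATABASE DATAFILE 7 ONLINE';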

 

Data Dictionary Object Block Corruption

Data dictionary object blocks are crucial because they belong to objects that reside in the SYSTEM tablespace and contain information that the database cannot live without—the objects that comprise the database itself.

Detection. Again, a couple of methods are available to detect block corruption:

  • Running DBVerify against the SYSTEM tablespace’s datafiles will identify which blocks are corrupted.
  • The database’s alert log file will show ORA-1578 errors related to the SYSTEM tablespace’s datafiles.

Repair. As the SYSTEM tablespace is absolutely crucial to the database’s operation, this situation must be dealt with immediately upon detection of corruption.

The best method is to restore the SYSTEM tablespace from a valid backup and recover the block/datafile. However, remember that this recovery will require shutting down the database instance and restarting it in MOUNT mode before recovery can continue.
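
A minimal RMAN sequence for this scenario looks like the following sketch; it assumes a usable backup and a complete set of archived redo logs:

RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> RESTORE TABLESPACE SYSTEM;
RMAN> RECOVER TABLESPACE SYSTEM;
RMAN> SQL 'ALTER DATABASE OPEN';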

If you have a disaster recovery site provided by either Data Guard physical replication or GoldenGate logical replication, you may consider switching over to that system until the data dictionary corruption is repaired.

 

Undo Header and Undo Block Corruption

Transactions utilize undo segments in two cases:

  • If the database detects that a transaction has failed, it will automatically roll back that transaction to maintain data integrity.
  • When an application user explicitly cancels the transaction through a web browser or other interface, the application issues the ROLLBACK command to undo all changes made within the current (uncommitted) transaction.

Undo segments are therefore crucial for transaction consistency, so any undo tablespace corruption issues must be handled with extreme urgency.

Detection. Again, several methods are available to detect block corruption:

  • Run DBVerify on the undo tablespace’s datafile to discover any corruption.
  • The database’s alert log will register ORA errors when undo corruption is detected. If the corrupted undo segment is not offline, an ORA-1545 is typically raised, while a query that is unable to locate the appropriate version of a block from an undo segment for read consistency will typically raise an ORA-1578 error.
  • Query the DBA_ROLLBACK_SEGS view (or V$ROLLSTAT); if any undo segment shows a status of NEEDS RECOVERY, then undo corruption is likely (a sample query follows this list).
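
A simple form of that check, using the DBA_ROLLBACK_SEGS view that appears elsewhere in this chapter, is shown below:

SQL> SELECT segment_name, tablespace_name, status
FROM dba_rollback_segs
WHERE status = 'NEEDS RECOVERY';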

Repair. Recovering from undo segment corruption is a bit trickier than other corruption recovery scenarios.

If an undo segment is found to contain a corrupted block, a good first attempt to solve the problem is to simply take the undo segment offline and then drop the segment.
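
In SQL*Plus, that first attempt looks like the following; the segment name _SYSSMU9$ is only a placeholder for the segment actually identified as corrupt:

SQL> ALTER ROLLBACK SEGMENT "_SYSSMU9$" OFFLINE;
SQL> DROP ROLLBACK SEGMENT "_SYSSMU9$";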

If that method doesn’t work, or if it is impossible to drop the undo segment, then it will be necessary to restore and recover the undo tablespace from a valid backup. However, as with SYSTEM tablespace recovery, doing so will require shutting down the database instance and restarting it in mount mode before recovery of the undo tablespace can continue.

One unsupported repair method involves the following procedure:

  1. If the corrupted undo block is part of a table, index, or cluster, then activate the following event. It will write information about the object number into a trace file in the USER_DUMP_DEST directory for the current session:
ALTER SYSTEM SET EVENTS '10015 trace name context forever, level 10';
  2. Review the trace file to find the object number, and then run the following query:
SQL> SELECT owner, object_name, object_type, status
FROM dba_objects
WHERE object_id = <object# from trace file>;
  3. Drop the object and restore only this object from a valid backup.

If the block belongs to the undo header, then the repair is much more tedious:

  1. List the corrupted undo segments in the unsupported _CORRUPTED_ROLLBACK_SEGMENTS initialization parameter, setting its value to the names of the affected rollback segments (a sketch of these steps follows the list).
  2. Run a full database export.
  3. Create an empty database.
  4. Run a full database import.

 

Summary

As this chapter demonstrates, the undo tablespace is a vital part of any Oracle database after Oracle Database 10g Release 1. It’s therefore crucial for an Oracle DBA to be able to detect, diagnose, and repair any problem related to the undo tablespace and its corresponding undo segments and extents. If the DBA ignores any of the potential issues this chapter has described, database applications may encounter unacceptable response times, the database instance may unexpectedly hang, and the database may even stop operating completely or be impossible to open or reopen. Following are the best practices mentioned in this chapter:

  • Be sure that you understand the intricacies of how an Oracle database uses its undo tablespace, including how the UNDO_RETENTION parameter setting determines the potential size of an undo tablespace as well as how long an unexpired undo extent will be retained for purposes of read consistency.
  • Use the Undo Advisor tool. Whether accessed via Oracle Enterprise Manager or the undocumented DBMS_UNDO_ADV package, it can be invaluable for tuning undo tablespace usage and related initialization parameter settings.
  • Leverage the hanganalyze utility to accurately ascertain the cause of an apparent database hang and decide which approach to overcome the hang is most warranted.
  • Learn how to distinguish the types and potential causes of block media corruption, and be prepared to detect, diagnose, and correct corruption in any circumstance because this type of corruption can cause an Oracle database to hang and may even require database downtime to repair the damage.

 
