Data Pump: Transferring Data Between Oracle Databases

One of the main uses of Data Pump is copying data from one database to another. Often, the source and destination databases are located in data centers thousands of miles apart. Data Pump offers several powerful features for copying data efficiently:

  • Network link
  • Copying data files (transportable tablespaces)
  • External tables

Using a network link allows you to take an export and import it into the destination database without having to create a dump file. This is a very efficient way of moving data.

Oracle also provides the transportable tablespace feature, which lets you copy the data files from a source database to the destination and then use Data Pump to transfer the associated metadata. The first two of these techniques are described in the following sections.

Suppose you have two database environments—a production database running on a Solaris box and a test database running on a Linux server. Your boss comes to you with these requirements:

  • Make a copy of the production database on the Solaris box.
  • Import the copy into the testing database on the Linux server.
  • Rename the schemas during the import so that they conform to the testing database's naming standards.

First, consider the steps required to transfer data from one database to another, using the old exp/imp utilities. The steps would look something like this:

  1. Export the production database (which creates a dump file on the database server).
  2. Copy the dump file to the testing database server.
  3. Import the dump file into the testing database.

You can perform those same steps using Data Pump. However, Data Pump provides a much more efficient and transparent method for executing those steps. If you have direct network connectivity between the production and testing database servers, you can take an export and directly import it into your target database without having to create or copy any dump files. Furthermore, you can rename schemas on the fly as you perform the import. Additionally, it doesn’t matter if the source database is running on an OS different from that of the target database.

An example will help illustrate how this works. For this example, the production database users are STAR2, CIA_APP, and CIA_SEL. You want to move these users into a testing database and rename them STAR_JUL, CIA_APP_JUL, and CIA_SEL_JUL. This task requires the following steps:

  1. Create the users in the testing database that will receive the imported data. Here is a sample script that creates the users:
    define star_user=star_jul
    define star_user_pwd=star_jul_pwd
    define cia_app_user=cia_app_jul
    define cia_app_user_pwd=cia_app_jul_pwd
    define cia_sel_user=cia_sel_jul
    define cia_sel_user_pwd=cia_sel_jul_pwd
    --
    create user &&star_user identified by &&star_user_pwd;
    grant connect,resource to &&star_user;
    alter user &&star_user default tablespace dim_data;
    --
    create user &&cia_app_user identified by &&cia_app_user_pwd;
    grant connect,resource to &&cia_app_user;
    alter user &&cia_app_user default tablespace cia_data;
    --
    create user &&cia_sel_user identified by &&cia_sel_user_pwd;
    grant connect,resource to &&cia_sel_user;
    alter user &&cia_sel_user default tablespace cia_data;
  2. In your testing database, create a database link that points to your production database. The remote user referenced in the CREATE DATABASE LINK statement must have the DBA role granted to it in the production database. Here is a sample CREATE DATABASE LINK script:
    create database link dk
    connect to darl identified by foobar
    using 'dwdb1:1522/dwrep1';
  3. In your testing database, create a directory object that points to the location where you want your log file to go:
    SQL> create or replace directory engdev as '/orahome/oracle/ddl/engdev';
  4. Run the import command on the testing box. This command references the remote database via the NETWORK_LINK parameter. The command also instructs Data Pump to map the production database user names to the newly created users in the testing database.
    $ impdp darl/engdev directory=engdev network_link=dk \
    schemas='STAR2,CIA_APP,CIA_SEL' \
    remap_schema=STAR2:STAR_JUL,CIA_APP:CIA_APP_JUL,CIA_SEL:CIA_SEL_JUL

This technique allows you to move large amounts of data between disparate databases without having to create or copy any dump files or data files. You can also rename schemas on the fly via the REMAP_SCHEMA parameter. This is a very powerful Data Pump feature that lets you transfer data quickly and efficiently.
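
Once the import completes, you can confirm that the objects landed in the renamed schemas. Here is a minimal sanity check (the schema names below assume the users created in step 1; adjust them to match your environment):

SQL> select owner, count(*) from dba_objects
     where owner in ('STAR_JUL','CIA_APP_JUL','CIA_SEL_JUL') group by owner;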

Tip  When replicating entire databases, also consider using the RMAN duplicate database functionality.

DATABASE LINK VS. NETWORK_LINK

Don’t confuse exporting while connected to a remote database over a database link with exporting using the NETWORK_LINK parameter. When exporting while connected to a remote database via a database link, the objects being exported exist in the remote database, and the dump file and log file are created on the remote server in the directory specified by the DIRECTORY parameter. For instance, the following command exports objects in the remote database and creates files on the remote server:

$ expdp mv_maint/foo@shrek2 directory=dp_dir dumpfile=sales.dmp

In contrast, when you export using the NETWORK_LINK parameter, you are creating dump files and log files locally, and the database objects being exported exist in a remote database; for example,

$ expdp mv_maint/foo network_link=shrek2 directory=dp_dir dumpfile=sales.dmp

Oracle provides a mechanism for copying data files from one database to another, in conjunction with using Data Pump to transport the associated metadata. This is known as the transportable tablespace feature. The amount of time this task requires depends largely on how long it takes to copy the data files to the destination server. This technique is well suited to moving data in DSS and data warehouse environments.

Tip  Transporting tablespaces can also be used (in conjunction with the RMAN CONVERT TABLESPACE command) to move tablespaces to a destination server that has a platform different from that of the host.

Follow these steps to transport tablespaces:

  1. Ensure that the tablespace is self-contained. These are some common violations of the self-contained rule:
    • An index in a tablespace being transported points to a table in a tablespace that isn't in the set of tablespaces being transported.
    • A foreign key constraint is defined on a table in a tablespace being transported that references a primary key constraint on a table in a tablespace that isn't in the set of tablespaces being transported.

    Run the following check to see if the set of tablespaces being transported violates any of the self-contained rules:

    SQL> exec dbms_tts.transport_set_check('INV_DATA,INV_INDEX', TRUE);

    Now, see if Oracle detected any violations:

    SQL> select * from transport_set_violations;

    If you don’t have any violations, you should see this:

    no rows selected

    If you do have violations, such as an index that is built on a table that exists in a tablespace not being transported, then you’ll have to rebuild the index in a tablespace that is being transported.

  2. Make the tablespaces being transported read-only:
    SQL> alter tablespace inv_data read only;
    SQL> alter tablespace inv_index read only;
  3. Use Data Pump to export the metadata for the tablespaces being transported:
    $ expdp mv_maint/foo directory=dp_dir dumpfile=trans.dmp \
    transport_tablespaces=INV_DATA,INV_INDEX
  4. Copy the Data Pump export dump file to the destination server.
  5. Copy the data file(s) to the destination database server, placing them in the directory where you want them to reside. The file names and directory paths must match those specified in the import command in the next step.
  6. Import the metadata into the destination database. Use the following parameter file to import the metadata for the data files being transported:
    userid=mv_maint/foo
    directory=dp_dir
    dumpfile=trans.dmp
    transport_datafiles=/ora01/dbfile/rcat/inv_data01.dbf,/ora01/dbfile/rcat/inv_index01.dbf

If everything goes well, you should see some output indicating success:

Job "MV_MAINT"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed...

If the data files being transported have a block size different from that of the destination database, then you must modify your initialization file (or use an ALTER SYSTEM command) to add a buffer cache that matches the block size of the source database. For example, to add a 16KB buffer cache, place this in the initialization file:

db_16k_cache_size=200M
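
If your instance uses an spfile, you can instead set the parameter dynamically; this is the ALTER SYSTEM equivalent of the initialization file entry above:

SQL> alter system set db_16k_cache_size=200M scope=both;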

You can check a tablespace’s block size via this query:

SQL> select tablespace_name, block_size from dba_tablespaces;

The transportable tablespace mechanism allows you to quickly move data files between databases, even if the databases use different block sizes or have different endian formats. This section doesn’t discuss all the details involved with transportable tablespaces; the focus of this blog is to show how to use Data Pump to transport data. See the Oracle Database Administrator’s Guide, which can be freely downloaded from the Technology Network area of the Oracle Web site (http://otn.oracle.com), for complete details on transportable tablespaces.

Note  To generate transportable tablespaces, you must use Oracle Enterprise Edition. You can use other editions of Oracle to import transportable tablespaces.
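
A quick way to verify which edition you are running is to query V$VERSION; the banner includes the edition name:

SQL> select banner from v$version;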
