Oracle Data Pump is a newer, faster and more flexible alternative to the "exp" and "imp" utilities used in previous Oracle versions. In addition to basic import and export functionality, Data Pump provides a PL/SQL API and support for external tables.
For the examples to work, we must first unlock the SCOTT account and create a directory object it can access. The directory object is only a pointer to a physical directory; creating it does not actually create the physical directory on the file system of the database server.
CONN / AS SYSDBA

ALTER USER scott IDENTIFIED BY tiger ACCOUNT UNLOCK;

CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/oradata/';
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;
Existing directories can be queried using the ALL_DIRECTORIES view.
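For example, a query like the following lists the directory objects visible to the current user, along with the physical paths they point to.

```sql
-- List directory objects visible to the current user.
SELECT directory_name, directory_path
FROM   all_directories
ORDER BY directory_name;
```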
Note: Data Pump is a server-based technology, so it typically deals with directory objects pointing to physical directories on the database server. It does not write to the local file system on your client PC.
Table Exports/Imports

The TABLES parameter is used to specify the tables that are to be exported. The following is an example of the table export and import syntax.

expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log
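If the tables already exist in the target schema, the TABLE_EXISTS_ACTION parameter controls what the import does with them. A sketch of appending the imported rows to the existing tables might look like this.

```shell
# Append imported rows to tables that already exist in the target schema.
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp \
  logfile=impdpEMP_DEPT.log table_exists_action=APPEND
```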
The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.

Schema Exports/Imports
The OWNER parameter of exp has been replaced by the SCHEMAS parameter, which is used to specify the schemas to be exported. The following is an example of the schema export and import syntax.

expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log
Database Exports/Imports

The FULL parameter indicates that a complete database export is required. The following is an example of the full database export and import syntax.

expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log
impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log
INCLUDE and EXCLUDE

The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only the objects it specifies are included in the export/import. When the EXCLUDE parameter is used, all objects except those it specifies are included. The two parameters are mutually exclusive, so use the parameter that requires the fewest entries to give you the result you require. The basic syntax for both parameters is the same.

INCLUDE=object_type[:name_clause] [, ...]
EXCLUDE=object_type[:name_clause] [, ...]
The following code shows how they can be used as command line parameters.
expdp scott/tiger@db10g schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
expdp scott/tiger@db10g schemas=SCOTT exclude=TABLE:"= 'BONUS'" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
If the parameter is used from the command line, depending on your OS, the special characters in the clause may need to be escaped, as follows. Because of this, it is easier to use a parameter file.
include=TABLE:\"IN (\'EMP\', \'DEPT\')\"
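As a sketch of the parameter file approach (the file name expdp_scott.par is arbitrary), the unescaped parameters go into the file and only the credentials and PARFILE parameter are given on the command line.

```shell
# Contents of expdp_scott.par -- no escaping is needed inside a parameter file:
#   schemas=SCOTT
#   include=TABLE:"IN ('EMP', 'DEPT')"
#   directory=TEST_DIR
#   dumpfile=SCOTT.dmp
#   logfile=expdpSCOTT.log
expdp scott/tiger@db10g parfile=expdp_scott.par
```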
A single import/export can include multiple references to the parameters, so to export tables, views and some packages we could use either of the following approaches.
INCLUDE=TABLE,VIEW,PACKAGE:"LIKE '%API'"
or
INCLUDE=TABLE
INCLUDE=VIEW
INCLUDE=PACKAGE:"LIKE '%API'"
Multiple objects can be targeted in one statement using the LIKE and IN operators.

EXCLUDE=SCHEMA:"LIKE 'SYS%'"
EXCLUDE=SCHEMA:"IN ('OUTLN','SYSTEM','SYSMAN','FLOWS_FILES','APEX_030200','APEX_PUBLIC_USER','ANONYMOUS')"
The valid object type paths that can be included or excluded can be displayed using the DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS and TABLE_EXPORT_OBJECTS views.
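For example, the paths relating to procedures in a schema-level export can be listed with a query like the following (the exact paths returned vary by database version).

```sql
-- Object paths containing PROCEDURE, usable with INCLUDE/EXCLUDE in schema exports.
SELECT object_path, comments
FROM   schema_export_objects
WHERE  object_path LIKE '%PROCEDURE%';
```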
Network Exports/Imports

The NETWORK_LINK parameter identifies a database link to be used as the source for a network export/import. The following database link will be used to demonstrate its use.

CONN / AS SYSDBA
GRANT CREATE DATABASE LINK TO test;

CONN test/test
CREATE DATABASE LINK remote_scott CONNECT TO scott IDENTIFIED BY tiger USING 'DEV';
In the case of exports, the NETWORK_LINK parameter identifies the database link pointing to the source server. The objects are exported from the source server in the normal manner, but written to a directory object on the local server, rather than one on the source server. Both the local and remote users require the EXP_FULL_DATABASE role granted to them.

expdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT directory=TEST_DIR dumpfile=EMP.dmp logfile=expdpEMP.log
For imports, the NETWORK_LINK parameter also identifies the database link pointing to the source server. The difference here is the objects are imported directly from the source into the local server without being written to a dump file. Although there is no need for a DUMPFILE parameter, a directory object is still required for the logs associated with the operation. Both the local and remote users require the IMP_FULL_DATABASE role granted to them.

impdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT directory=TEST_DIR logfile=impdpSCOTT.log remap_schema=SCOTT:TEST
Flashback Exports

The exp utility used the CONSISTENT=Y parameter to indicate the export should be consistent to a point in time. By default the expdp utility exports are only consistent on a per table basis. If you want all tables in the export to be consistent to the same point in time, you need to use the FLASHBACK_SCN or FLASHBACK_TIME parameter.
The FLASHBACK_TIME parameter value is converted to the approximate SCN for the specified time.

expdp ..... flashback_time=systimestamp
# In parameter file.
flashback_time="to_timestamp('09-05-2011 09:00:00', 'DD-MM-YYYY HH24:MI:SS')"
# Escaped on command line.
expdp ..... flashback_time=\"to_timestamp\(\'09-05-2011 09:00:00\', \'DD-MM-YYYY HH24:MI:SS\'\)\"
Not surprisingly, you can make exports consistent to an earlier point in time by specifying an earlier time or SCN, provided you have enough UNDO space to keep a read consistent view of the data during the export operation.
If you prefer to use the SCN, you can retrieve the current SCN using one of the following queries.
SELECT current_scn FROM v$database;
SELECT DBMS_FLASHBACK.get_system_change_number FROM dual;
SELECT TIMESTAMP_TO_SCN(SYSTIMESTAMP) FROM dual;
That SCN is then used with the FLASHBACK_SCN parameter.

expdp ..... flashback_scn=5474280
The following queries may prove useful for converting between timestamps and SCNs.
SELECT TIMESTAMP_TO_SCN(SYSTIMESTAMP) FROM dual;
SELECT SCN_TO_TIMESTAMP(5474751) FROM dual;
In 11.2, the introduction of legacy mode means that you can use the CONSISTENT=Y parameter with the expdp utility if you wish.
Miscellaneous Information

Unlike the original exp and imp utilities, all Data Pump ".dmp" and ".log" files are created on the Oracle server, not the client machine.
All Data Pump actions are performed by multiple jobs (server processes, not DBMS_JOB jobs). These jobs are controlled by a master control process, which uses Advanced Queuing. At runtime, an advanced queue table, named after the job name, is created and used by the master control process. The table is dropped on completion of the Data Pump job. The job and the advanced queue can be named using the JOB_NAME parameter.
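A sketch of naming the job explicitly, so both the job and its queue table are easy to identify:

```shell
# The job (and its queue table) will be called SCOTT_EXPORT_JOB
# instead of a system-generated name like SYS_EXPORT_SCHEMA_01.
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp \
  logfile=expdpSCOTT.log job_name=SCOTT_EXPORT_JOB
```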
Cancelling the client process does not stop the associated Data Pump job. Issuing "ctrl+c" on the client during a job stops the client output and presents a command prompt. Typing "status" at this prompt allows you to monitor the current job.

Export> status
Job: SYS_EXPORT_FULL_01
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: D:\TEMP\DB10G.DMP
bytes written: 4,096
Worker 1 Status:
State: EXECUTING
Object Schema: SYSMAN
Object Name: MGMT_CONTAINER_CRED_ARRAY
Object Type: DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
Completed Objects: 261
Total Objects: 261
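Because the job keeps running server-side, you can also reconnect to it later from any client using the ATTACH parameter, then control it from the same interactive prompt. A sketch, assuming the system-generated job name shown above:

```shell
# Reattach to a running or stopped job by name, then manage it interactively.
expdp system/password@db10g attach=SYS_EXPORT_FULL_01
# Export> status      -- show progress
# Export> stop_job    -- suspend the job (it can be resumed with start_job)
# Export> kill_job    -- abort the job and remove its dump files
```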
Data Pump performance can be improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dump files to be created or read. The same wildcard can be used during the import to allow you to reference multiple files.

expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log
impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=impdpSCOTT.log
The DBA_DATAPUMP_JOBS view can be used to monitor the current jobs.

system@db10g> select * from dba_datapump_jobs;
OWNER_NAME JOB_NAME OPERATION
------------------------------ ------------------------------ ------------------------------
JOB_MODE STATE DEGREE ATTACHED_SESSIONS
------------------------------ ------------------------------ ---------- -----------------
SYSTEM SYS_EXPORT_FULL_01 EXPORT
FULL EXECUTING 1 1
Data Pump API

Along with the Data Pump utilities, Oracle provides a PL/SQL API. The following is an example of how this API can be used to perform a schema export.
SET SERVEROUTPUT ON SIZE 1000000
DECLARE
l_dp_handle NUMBER;
l_last_job_state VARCHAR2(30) := 'UNDEFINED';
l_job_state VARCHAR2(30) := 'UNDEFINED';
l_sts KU$_STATUS;
BEGIN
l_dp_handle := DBMS_DATAPUMP.open(
operation => 'EXPORT',
job_mode => 'SCHEMA',
remote_link => NULL,
job_name => 'EMP_EXPORT',
version => 'LATEST');
DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'SCOTT.dmp',
directory => 'TEST_DIR');
DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'SCOTT.log',
directory => 'TEST_DIR',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
DBMS_DATAPUMP.metadata_filter(
handle => l_dp_handle,
name => 'SCHEMA_EXPR',
value => '= ''SCOTT''');
DBMS_DATAPUMP.start_job(l_dp_handle);
DBMS_DATAPUMP.detach(l_dp_handle);
END;
/
Once the job has started, the status can be checked using the following.
system@db10g> select * from dba_datapump_jobs;
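If you would rather have the PL/SQL block wait for the job to finish instead of detaching immediately, DBMS_DATAPUMP.WAIT_FOR_JOB can take the place of the DETACH call. A minimal sketch of the substitution inside the block above:

```sql
  -- Instead of DBMS_DATAPUMP.detach(l_dp_handle), block until completion.
  -- l_job_state is the VARCHAR2 variable already declared in the block.
  DBMS_DATAPUMP.wait_for_job(
    handle    => l_dp_handle,
    job_state => l_job_state);
  DBMS_OUTPUT.put_line('Job finished with state: ' || l_job_state);
```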
External Tables

Oracle has incorporated support for Data Pump technology into external tables. The ORACLE_DATAPUMP access driver can be used to unload data to Data Pump export files and subsequently reload it. The unload of data occurs when the external table is created using the "AS" clause.
CREATE TABLE emp_xt
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY test_dir
LOCATION ('emp_xt.dmp')
)
AS SELECT * FROM emp;
The data can then be queried using the following.
SELECT * FROM emp_xt;
The syntax to create the external table pointing to an existing file is similar, but without the "AS" clause.
DROP TABLE emp_xt;
CREATE TABLE emp_xt (
EMPNO NUMBER(4),
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2))
ORGANIZATION EXTERNAL (
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY test_dir
LOCATION ('emp_xt.dmp')
);
SELECT * FROM emp_xt;
Help

The HELP=Y option displays the available parameters.

expdp help=y
impdp help=y