Wednesday, 8 November 2017

Oracle Parallel IMPDP and EXPDP


(Please note my other blog: svgulhane wordpress IT updates)

Prerequisites

Oracle Directory

When you use the Data Pump utilities, you need a pre-existing Oracle directory object in the database being exported or imported. This directory object tells Data Pump where to read dump files from and where to write them. By default, every database is created with a directory already set up for Data Pump to use: it is called DATA_PUMP_DIR and defaults to the location $ORACLE_HOME/rdbms/log.

How to check whether an Oracle directory exists

select owner, directory_name, directory_path from dba_directories where directory_name = 'DATA_PUMP_DIR';
OWNER   DIRECTORY_NAME  DIRECTORY_PATH
------  --------------  ----------------------------------------
SYS     DATA_PUMP_DIR   /srv/oracle/product/11gR1/db//rdbms/log/

How to create an Oracle directory (you need DBA privileges)

create directory my_datapump_dir as '/scratch/DEV/datapump';

Directory created.
Action: Raise a DBA ticket if you need a new Oracle directory created, or reuse an existing directory (found with the previous query) for the import or export activity.
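
Once the directory exists, the schema that runs the export or import also needs read/write access on it. A minimal sketch (run as a DBA), assuming the my_datapump_dir directory created above and a hypothetical user APP_USER:

grant read, write on directory my_datapump_dir to APP_USER;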

Physical OS Path

mkdir /scratch/DEV/datapump
chown oracle:dba /scratch/DEV/datapump
ls -l /scratch/DEV/
drwxr-xr-x  2 oracle dba   4096 Aug 29 15:03 datapump
...

Available Space in OS Server

[db12c@server Downloads]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              50G   43G  4.4G  91% /
tmpfs                 142G  653M  142G   1% /dev/shm
/dev/sda1            1008M  244M  714M  26% /boot
/dev/sda4             5.0G  911M  3.8G  20% /oem
/dev/mapper/DomUVol-scratch
                      3.7T  2.4T  1.2T  69% /scratch
Action: If OS space is insufficient, remove unwanted dump files from your end or identify schemas that can be dropped.

Check the size of the dump

Action: Based on the size of the dump, decide how much USER and SYSTEM tablespace is required and increase the space accordingly. Compilation of the objects requires additional SYSTEM tablespace.
IMPORTANT: SYSTEM tablespace cannot be reclaimed once consumed; it is released only when the instance is dropped. So aim to get the import done successfully the first time, and do not drop the schema unnecessarily.
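
To check the size of an existing dump, list it at the OS level; to estimate how large an export will be before running it, Data Pump supports ESTIMATE_ONLY. A sketch, assuming the dump directory created earlier and example scott/tiger credentials:

ls -lh /scratch/DEV/datapump/*.dmp
expdp scott/tiger schemas=scott directory=my_datapump_dir estimate_only=yes estimate=blocks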

Sample Estimation:

Example 1

Dump size: 1.85 GB
Space used by schema: at most 2 GB
SYSTEM tablespace used: 15 GB
Total time taken: 4.20 min

Example 2

210 GB consumed for a 73 GB dump, with 1 GB of SYSTEM tablespace consumed separately.

Available free space in USER and SYSTEM tablespace

Enter the target tablespace name when prompted to check free space:

select a.tablespace_name,
       sum(a.bytes/(1024*1024)) used_MB,
       sum(a.maxbytes/(1024*1024)) maxsize_MB,
       abs(sum((a.maxbytes/(1024*1024)) - (a.bytes/(1024*1024)))) free_MB
from dba_data_files a
where a.tablespace_name like '%&TS_Name%'
group by a.tablespace_name
order by a.tablespace_name;
Action: If tablespace is insufficient, raise a DBA ticket in advance, before starting the import/export activity, to avoid task failure. Even if the import fails on space, there is no need to stop the import process: investigate the issue and raise a DBA ticket, sharing the log files and the commands used. Once the issue is resolved, the import/export resumes automatically.
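
For reference, the DBA typically resolves a space shortage by extending or adding a datafile. A minimal sketch, with a hypothetical tablespace name and datafile path:

alter tablespace FC12P01 add datafile '/scratch/DEV/oradata/fc12p01_02.dbf' size 4g autoextend on next 512m maxsize unlimited;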

Check the version of the source and target database

Just connect to the source and target databases; the logon banner shows the version.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Action: Make sure the source and target versions are the same for a successful import. If the versions differ, contact the DBA by raising a ticket; whether they can help depends on the versions involved.
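
You can also query the version directly instead of reading the logon banner:

select banner from v$version;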

Check the location of the dump and privileges granted on that path

If the oracle OS user does not have the required privileges on that folder, the import/export will run into problems, and no log will be generated if the log folder is not writable.
Grant the permission yourself or contact the DBA.
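
A quick OS-level check and fix, assuming the datapump path created earlier:

ls -ld /scratch/DEV/datapump
chown oracle:dba /scratch/DEV/datapump
chmod 775 /scratch/DEV/datapump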

Setting the Oracle Home and Oracle SID

Before proceeding with an import or export, we need to set the Oracle home and SID.

Method 1

First initialize the Oracle home and Oracle SID:
. oraenv
Enter the instance name when prompted, then try to connect to the schema. If you connect successfully, proceed with the import/export.

Method 2

bash
export ORACLE_BASE=/scratch/db/db1123/
export ORACLE_HOME=/scratch/db/db1124/product/11.2.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/bin
export ORACLE_SID=FPCITI
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
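
Before running Data Pump, a quick sanity check that the environment took effect:

echo $ORACLE_HOME $ORACLE_SID
which expdp impdp
sqlplus / as sysdba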

Parallel Export/Import

As a general rule for parallel processes, look up the cpu_count parameter (v$parameter) and use cpu_count - 1. However, the Oracle Utilities documentation makes a slightly different recommendation for Data Pump:
select name,value from v$parameter where name = 'cpu_count';
1)     Set the degree of parallelism to two times the number of CPUs, then tune from there.
2)     For Data Pump Export, the PARALLEL parameter value should be less than or equal to the number of output dump files.
3)     For Data Pump Import, the PARALLEL parameter value should not be much larger than the number of files in the dump file set.
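As a concrete illustration with hypothetical numbers: if cpu_count is 8, rule 1 suggests starting with PARALLEL=16, and rule 2 means the DUMPFILE template should be able to produce at least 16 files, e.g.:

expdp scott/tiger schemas=scott directory=my_datapump_dir parallel=16 dumpfile=exp_%U.dmp logfile=exp.log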
Examples are given in the following topics.

EXPDP


Example 1

expdp FC1132_SOAINFRA/oracle123@KERDEV12 DIRECTORY=SOA DUMPFILE=FC1132_SOAINFRA.DMP LOGFILE=FC1132_SOAINFRA.LOG SCHEMAS=FC1132_SOAINFRA

Example 2 (Parallel export)

Parallel export is faster because multiple worker processes run concurrently, and it also avoids the large-file problem when transferring files between servers: multiple smaller files copy faster with multiprocessing than one huge file.
To export the full database, give the following command:
$ expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob
The above command exports the full database and creates the dump file full.dmp in the directory on the server that the data_pump_dir directory object points to.
In some cases, where the database is in terabytes, the above command is not feasible, since the dump file size will be larger than the operating-system file-size limit and the export will fail. In this situation you can create multiple dump files, as follows.
Create export.par with the following contents:
FILESIZE=10G
parallel=32
logfile=expdp_t7.log
dumpfile=expd_t7_%u.dmp
schemas=fcubs1201
directory=expdir  
COMPRESSION=ALL
JOB_NAME=fcubs1201_job
Execute the following command:
expdp parfile=export.par
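
Long-running jobs can be monitored from the database, or by attaching to the job by its JOB_NAME from another session; for example, with the job defined in the parfile above (credentials are placeholders):

select owner_name, job_name, state, degree from dba_datapump_jobs;
expdp scott/tiger attach=fcubs1201_job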

IMPDP

Example 1

impdp FCCPU114T@CPU11G2 DIRECTORY=DATA_PUMP_DIR REMAP_SCHEMA=FC114MR2:FCCPU114T dumpfile=FC114MR2_20.01.2012_203047_FULL_DUMP.DMP logfile=IMPORT_FCCPU114T_.log REMAP_TABLESPACE=FC114MR2:FCCPU114T EXCLUDE=GRANT transform=OID:n

Example 2

Parameter File: (FC12P01_KERNLS.par)
USERID=FC12P01/FC12P01@KERNLS
DIRECTORY=EXPIMP
DUMPFILE=FCUBS_12.0.0_ITR2_FINAL.DMP
LOGFILE=FCUBS_12.0.0_ITR2_FINAL_2.log
REMAP_SCHEMA=FCUBS12R2:FC12P01
REMAP_TABLESPACE=FCUBS12R2:FC12P01
TABLE_EXISTS_ACTION=REPLACE
TRANSFORM=SEGMENT_ATTRIBUTES:n
TRANSFORM=OID:n
JOB_NAME=impjobfc12p01
Command:
impdp FC12P01/********@KERNLS parfile=FC12P01_KERNLS.par

Example 3

Create the following impdp.par and imp.sh files using the vi editor.
impdp.par
USERID=user_name/password@Instance_name
directory=fcubsdumpdir
dumpfile=fcubs12.dmp
logfile=fcubs12.log
REMAP_SCHEMA=old_schema_name:new_schema_name
REMAP_TABLESPACE=old_tablespace_name:new_tablespace_name
TRANSFORM=SEGMENT_ATTRIBUTES:n
TRANSFORM=OID:n
JOB_NAME=fcubs12_job

imp.sh
#!/bin/sh
export ORACLE_HOME=/scratch/db/db1123/product/11.2.0/dbhome_1
export ORACLE_SID=jpub2wek
export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
impdp parfile=/scratch/work_area/DEV/dump/impdp.par

Go to the directory where imp.sh exists, make it executable, and run it:
chmod +x imp.sh
./imp.sh
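
For long imports, one common approach is to run the script in the background with nohup so it survives a terminal disconnect (imp_run.log is just an illustrative name):

nohup ./imp.sh > imp_run.log 2>&1 &
tail -f imp_run.log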

Example 4 (Faster Parallel Import taking advantage of multiple processors)

(Allowed only when the dump was exported in parallel to multiple dump files; the %U wildcard matches the numbered files.)
impdp  system/sysdec2012@TRAININGDB  schemas=test parallel=2 DIRECTORY=DATA_DUMP DUMPFILE=test_%U.dmp LOGFILE=test.log JOB_NAME=testJob
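
You can attach to the running import to check progress or adjust parallelism from Data Pump's interactive console, using the JOB_NAME given above:

impdp system/sysdec2012@TRAININGDB attach=testJob
Import> STATUS
Import> PARALLEL=4
Import> CONTINUE_CLIENT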


Thanks and regards,
Santoshkumar Gulhane



