Monday, 8 October 2018

Different Examples of expdp and impdp

1) Using Tables clause in expdp

If the table name is case sensitive (i.e. not stored in upper case), quote it and escape the quotes for the shell:
expdp otm/Password123@V6DB2 DIRECTORY=DATA_PUMP_DIR DUMPFILE=table1.dmp logfile=OTM_12_27NOV2013_2_imp.log INCLUDE=TABLE:\"IN \(\'TO_TestCase\'\)\"
expdp tccindife/tccindife@barc11r2 directory=DATA_DIR dumpfile=abcd.dmp logfile=tccindife_tables.log content=data_only tables='TCCINDIFE.ACTB_ACCBAL_MISMATCH','TCCINDIFE.ACTB_CONSOL_STMT_FILE'

2) Example Using Parameter Files

expdp system/manager parfile=exp.par

Parameter file:exp.par

DIRECTORY = my_dir

DUMPFILE = exp_tab.dmp

LOGFILE = exp_tab.log

SCHEMAS = scott

INCLUDE = TABLE:"IN ('EMP', 'DEPT')"

3) Examples using Parameter files for all expdp and impdp

exp.sh (Contents Following):
##Initialization of the Oracle Path
PATH=$PATH:$HOME/bin
export ORACLE_HOME=/oraint2/ora1122/product/11.2.0/dbhome_1
export ORACLE_SID=POC1
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
export PATH
LD_LIBRARY_PATH=/oraint2/ora1122/product/11.2.0/dbhome_1/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
#Execution of the expdp command with parfile
expdp parfile=/oraint1/ora10gR2/tcdump/export.par

export.par (Contents Following as a parameter file):
USERID=user_name/password@Instance_name
schemas=tcbnk12
directory=tcbnkdumpdir
compression=all
dumpfile=tcbnk12.dmp
logfile=tcbnk113_21feb2012_FULL.log

imp.sh (Contents Following):
##Initialization of the Oracle Path
PATH=$PATH:$HOME/bin
export ORACLE_HOME=/scratch/db/db1123/product/11.2.0/dbhome_1
export ORACLE_SID=jpub2wek
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
#Execution of the impdp command with parfile
impdp parfile=/scratch/work_area/DEV/dump/impdp.par

impdp.par (Contents Following as a parameter file):

USERID=user_name/password@Instance_name
directory=tcbnkdumpdir
dumpfile=tcbnk12.dmp
logfile=tcbnk12.log
REMAP_SCHEMA=old_schema_name:new_schema_name
REMAP_TABLESPACE=old_tablespace_name:new_tablespace_name
TRANSFORM=SEGMENT_ATTRIBUTES:n
TRANSFORM=OID:n
JOB_NAME=tcbnk12_job

4) Exclude/Include Clause with parameter file.

Impdp.par
userid=TCBNKAT121/TCBNKAT121@TCDEMO
DIRECTORY=DUMP_DIR
DUMPFILE=TCBNK_12.0.0_ITR2_FINAL.DMP
LOGFILE=TCBNK_12.0.0_ITR2_FINAL1.log
REMAP_SCHEMA=TCBNK12R2:TCBNKAT121
REMAP_TABLESPACE=TCBNK12R2:TCBNKAT121
TRANSFORM=SEGMENT_ATTRIBUTES:n
TRANSFORM=OID:n
EXCLUDE=SEQUENCE,PROCEDURE,PACKAGE,TRIGGER,INDEX,SYNONYM,VIEW,FUNCTION,TYPE,LIBRARY
TABLE_EXISTS_ACTION=SKIP
Note: COMPILE, IGNORE, GRANTS, INDEXES, ROWS and CONSTRAINTS are legacy imp/exp parameters and are not valid in an impdp parameter file; TABLE_EXISTS_ACTION replaces IGNORE, while grants, indexes, rows and constraints are controlled through EXCLUDE and CONTENT.
An INCLUDE filter can also be given directly on the expdp command line, with the quotes escaped for the shell:
% expdp system/manager DIRECTORY=my_dir DUMPFILE=exp_tab.dmp LOGFILE=exp_tab.log SCHEMAS=scott INCLUDE=TABLE:\"IN \(\'EMP\', \'DEP\'\)\"

5) Example with Remap_Schema

impdp warbadev21/warbadev21@me11g22 directory=datadump dumpfile=WARBA.dmp remap_tablespace=WBTCSIM1:WARBADEV21 remap_schema=WBTCSIM1:WARBADEV21 logfile=warba_dev3.log

impdp USERID=TCS1131DEV/TCS1131DEV@server1 SCHEMAS=tcs1131dev DIRECTORY=DUMP DUMPFILE=expdp_tcs1131dev_full.dmp INCLUDE = PACKAGE_BODY LOGFILE=impdp_tcs1131dev_package_110120121.LOG
IMPDP using db link
impdp system/welcome1@server1 DIRECTORY=RESTORE remap_schema=PAND2BCOLL:PAND2BCOLL remap_tablespace=PAND2BCOLL:PAND2BCOLL NETWORK_LINK=DBLINK_pandgp56 logfile=imp_iut01_pandgp57_1.log

6) Example of Exporting a Full Database

To export a full database, a very large database, or to run the export and import in parallel, use the following command:
$expdp scott/tiger DIRECTORY=data_pump_dir FULL=y DUMPFILE=partsize%U.dmp
FILESIZE=10G LOGFILE=mypartexp.log JOB_NAME=mypartJob
This produces multiple dump files named partsize01.dmp, partsize02.dmp and so on. The FILESIZE parameter specifies the maximum size of each piece of the dump file set.
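A dump file set split with %U and FILESIZE can be imported back with the same wildcard template. A minimal sketch, reusing the directory and file names from the example above (the PARALLEL value is illustrative):

```shell
# Import the multi-piece dump set produced above; impdp resolves
# partsize%U.dmp to partsize01.dmp, partsize02.dmp, ...
impdp scott/tiger DIRECTORY=data_pump_dir FULL=y \
      DUMPFILE=partsize%U.dmp PARALLEL=4 LOGFILE=mypartimp.log
```

Keeping PARALLEL at or below the number of dump file pieces follows the parallel guidelines in point 12 below.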

7) Table level export

$expdp emp/emp DIRECTORY=dpump_dir_tab DUMPFILE=selectedtables.dmp TABLES=employees,jobs,departments

8) Exporting selective tablespaces

$expdp emp/emp DIRECTORY=dpump_dir_ts DUMPFILE=selected_tbs.dmp TABLESPACES=TBS_emp,TBS_dpt,TBS_Mgr
This will export all the objects in the tablespaces TBS_emp, TBS_dpt and TBS_Mgr.

9) Excluding and Including Objects in expdp

$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=exclude.dmp
SCHEMAS=SCOTT EXCLUDE=TABLE:"like 'C%'"
Here all tables in SCOTT's schema whose names start with "C" will not be exported.
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp
SCHEMAS=SCOTT INCLUDE=TABLE:"like 'Z%'"
Here all tables in SCOTT's schema whose names start with "Z" will be exported.
Similarly, you can also exclude or include INDEX, CONSTRAINT, GRANT, USER and SCHEMA object types.
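Quoting EXCLUDE filters on the command line requires shell escaping; placing them in a parameter file avoids that. A minimal sketch with illustrative file and directory names:

```
# exclude.par -- hypothetical parameter file
DIRECTORY=data_pump_dir
DUMPFILE=scott_no_idx.dmp
LOGFILE=scott_no_idx.log
SCHEMAS=SCOTT
EXCLUDE=INDEX
EXCLUDE=STATISTICS
```

Run it with: expdp scott/tiger parfile=exclude.par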

10) Adding Query to Filter out Rows during Export

expdp emp/passw QUERY=emp:'"WHERE emp_id > 10 AND sal > 90000"' NOLOGFILE=y DIRECTORY=dpump_test DUMPFILE=query.dmp

11) Expdp/Impdp in different directories/different mountpoints in same server

expdp dumpfile=DATA_PUMP_DIR:TESTONE_datapumpdir_%U.dmp,EXPBKP2:TESTONE_expbkp2_%U.dmp logfile=DATA_PUMP_DIR:TESTONE_expdp.log full=y compression=all parallel=2
impdp dumpfile=DATA_PUMP_DIR:TESTONE_expdp_%U.dmp,EXPBKP2:TESTONE_expdp_%U.dmp logfile=TESTONE_impdp.log directory=DATA_PUMP_DIR parallel=2 schemas=SCOTT

12) Points for parallel EXPDP/IMPDP

As a general rule for parallel processes, you find the cpu_count parameter (v$parameter) and use cpu_count - 1. However, the Oracle Utilities documentation makes a slightly different recommendation for Data Pump:
select name, value from v$parameter where name = 'cpu_count';
1) Set the degree of parallelism to two times the number of CPUs, then tune from there.
2) For Data Pump Export, the PARALLEL parameter value should be less than or equal to the number of output dump files.
For Data Pump Import, the PARALLEL parameter value should not be much larger than the number of files in the dump file set.
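Following point 2), the PARALLEL value is usually paired with a %U dump file template so that each worker writes its own file. A sketch with illustrative names:

```shell
# Four workers, each writing its own piece (scott_01.dmp .. scott_04.dmp)
expdp scott/tiger DIRECTORY=data_pump_dir SCHEMAS=SCOTT \
      DUMPFILE=scott_%U.dmp PARALLEL=4 LOGFILE=scott_par.log
```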

EXAMPLE 1)

export.par
FILESIZE=10G
compression=all
parallel=32
logfile=expdp_t7.log
dumpfile=expd_t7_%u.dmp
schemas=fcubs1201
directory=expdir
JOB_NAME=fcubs1201_job

EXECUTE

expdp parfile=export.par
Username: sys as sysdba
Password:welcome1

EXAMPLE 2)

expdp vallep/password directory=datapump full=y dumpfile=data.dmp filesize=2G parallel=2 logfile=full.log

EXAMPLE 3):

impdp system/sysdec2012@TRAININGDB schemas=test parallel=2 DIRECTORY=DATA_DUMP DUMPFILE=test_%U.dmp LOGFILE=test.log JOB_NAME=testJob content=metadata_only

13) REUSE_DUMPFILES : Overwrite an existing dump file

This option is available with the Data Pump expdp utility. Normally, when you perform an export with expdp and the dump file is already present in the export directory, it throws the error "ORA-27038: created file already exists". This happens when you want to perform repetitive exports using the same dump file name. Oracle provides the option REUSE_DUMPFILES=[Y/N] to avoid this error: set it to Y to overwrite the existing dump file. By default the option is N.
$ expdp scott/tiger directory=exp_dir dumpfile=x.dmp tables=example reuse_dumpfiles=y

14) Verification of status of IMPDP job
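The status of a running Data Pump job can be checked from the dictionary and then attached to interactively. A sketch (the job name shown is illustrative):

```sql
-- List Data Pump jobs and their current state
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;
```

You can then attach to a listed job, e.g. impdp system/manager attach=SYS_IMPORT_FULL_01, and issue STATUS at the Import> prompt.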

Upgrade Oracle Database to Latest Oracle Database 18c

Upgrade Types

Database Upgrade Assistant (DBUA) – Interactive GUI method.
Command-Line Method for Upgrading Oracle Database – Manual Method
Oracle Data Pump – Export/Import Method

IMPORTANT POINTS TO REMEMBER

DBUA runs the Pre-Upgrade Information Tool as part of the prerequisite checks it performs before starting the upgrade. However, to reduce downtime, Oracle strongly recommends that you run the Pre-Upgrade Information Tool manually.
If the database is too large and must be upgraded on the same server, DBUA is the best option.
Make sure there is sufficient space for the tablespaces to grow during the upgrade, otherwise the upgrade will fail midway; likewise, in a cluster database the diskgroup needs sufficient free space.
If you stop the upgrade, you must either restore the database to start again with DBUA, or, without restoring, complete the upgrade using the manual (command-line) procedure.
Use the same OS user as for the previous database version.

Method 1 – DBUA

Run DBUA from the already-installed 18c Oracle Home:
$ cd $ORACLE_HOME/bin
$ ./dbua
Under Step 1 of 10
Select the database to be upgraded to the higher version
Under Step 2 of 10
Select the PDBs to upgrade
Under Step 3 of 10
Review the prerequisite check results; each finding is either manually or automatically fixable, with a severity of Warning or Error.
Findings may include patches to apply on the source server, OS or DB parameter changes, PGA/SGA memory suggestions, etc.
Under Step 4 of 10
Select Upgrade Options such as
  • Enable Parallel Upgrade
  • Recompile Invalid Objects During Post Upgrade
  • Upgrade Timezone Data
Select User tablespace to Read Only during Upgrade
You can specify the SQL script to be executed before or after upgrade
Under Step 5 of 10
Select Recovery Option
1)Flashback and Guaranteed Restore Point
2)Use RMAN backup
Under Step 6 of 10
Select the existing listener, or create a new listener.
Under Step 7 of 10
Configure Enterprise Manager, or register the database in an existing OMS.

Method 2 – Manual Upgrade

(Local or Remote Cloning / Plug-out, Plug-In)
  1. Run the Pre-Upgrade Information Tool on the PDB.
[db1221@whf00ccs ~]$ $ORACLE_HOME/jdk/bin/java -jar /scratch/db/db1800/product/18.0.0/dbhome_1/rdbms/admin/preupgrade.jar dir /tmp -c PDBTESTCLONE

==================

PREUPGRADE SUMMARY

==================

  /tmp/preupgrade.log

  /tmp/preupgrade_fixups.sql

  /tmp/postupgrade_fixups.sql
Here in this script:
  • $ORACLE_HOME/jdk/bin/java is the Java binary of the source Oracle Home
  • /scratch/db/db1800/product/18.0.0/dbhome_1/rdbms/admin/preupgrade.jar is the preupgrade.jar file of the target Oracle Home; it determines which version the database will be upgraded to
Run preupgrade_fixups.sql on your source database.
For example:
CONNECT / AS SYSDBA
SQL> ALTER SESSION SET CONTAINER= PDBTESTCLONE ;
SQL> @/tmp/preupgrade_fixups.sql
Executing Oracle PRE-Upgrade Fixup Script
Auto-Generated by:       Oracle Preupgrade Script
Version: 18.0.0.0.0 Build: 1
Generated on:            2018-09-06 08:01:37
For Source Database:     OBCL12R2
Source Database Version: 12.2.0.1.0
For Upgrade to Version:  18.0.0.0.0
Executing in container: PDBTESTCLONE
Preupgrade Check Name    Issue Remedied   Further DBA Action
---------------------    --------------   -----------------------------------------------
parameter_obsolete       NO               Manual fixup recommended.
invalid_objects_exist    NO               Manual fixup recommended.
hidden_params            NO               Informational only. Further action is optional.
dictionary_stats         YES              None.
mv_refresh               NO               Informational only. Further action is optional.
tablespaces_info         NO               Informational only. Further action is optional.
parameter_deprecated     NO               Informational only. Further action is optional.
cycle_number             NO               Informational only. Further action is optional.
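For findings such as invalid_objects_exist, the manual fixup usually amounts to recompiling and re-checking before the upgrade. A sketch of the check, run inside the PDB:

```sql
-- Count invalid objects per owner and type; recompile with
-- @?/rdbms/admin/utlrp.sql and re-run until the count stops shrinking
SELECT owner, object_type, COUNT(*) AS invalid_count
FROM   dba_objects
WHERE  status = 'INVALID'
GROUP  BY owner, object_type;
```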

YOU HAVE CHOICE TO EITHER CLONE THE PDB OR UNPLUG AND PLUG THE PDB.

With the local clone method, if the database is too large and there is not enough free space to duplicate it, you can instead use DBUA or the unplug/plug method on the same server, since those work on the existing datafiles.
A local clone is suitable for smaller databases, where you can keep a copy of the source in case of any issue.

UPGRADE USING CLONE PROCESS

  • The source PDB must be open in read-only mode.
  • The target database must have a database link to the source database.
  • The user in the source database that the database link connects to must have the create pluggable database privilege.
Steps to create the Remote Clone PDB
  • Create a common user in the source and give required permission.
(In Source)
create user c##dblinkuser identified by oracle123;
grant create session,create pluggable database to c##dblinkuser container=ALL;
  • Create a db link from target container database to source container database.
(In Target)
create database link pdblink connect to c##dblinkuser identified by oracle123 using 'ofss2311694.in.oracle.com:1521/pdbtest1';
  • Source PDB must be in read-only mode
alter pluggable database PDBTEST1 close immediate;
alter pluggable database PDBTEST1 open read only;
  • In the target, create the directory for the PDB's datafiles and create the pluggable database from the source PDB over the database link:
create pluggable database PDBTESTCLONE from PDBTEST1@pdblink file_name_convert=('/scratch/db1212dat/CDBTEST/PDBTEST1','/scratch/db1212dat/CDBTEST/PDBTESTCLONE');
Upgrading the Earlier Release PDB to the Later Release
In the target, open the cloned PDB, which is currently mounted, in upgrade mode:
SQL> show pdbs
CON_ID CON_NAME                   OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                      READ ONLY   NO
3 DLY124WLYCLN                  READ WRITE  NO
4 PDBTESTCLONE                  MOUNTED     YES
SQL> alter pluggable database PDBTESTCLONE open upgrade;
SQL> show pdbs
CON_ID CON_NAME                    OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                      READ ONLY   NO
3 DLY124WLYCLN                  READ WRITE  NO
4 PDBTESTCLONE                  MIGRATE     YES

Upgrade the PDB Using Upgrade Utility Command
  • Method 1 : Parallel Upgrade Utility command (catctl.pl)
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl -d \
$ORACLE_HOME/rdbms/admin -c 'PDBTESTCLONE' -l $ORACLE_BASE catupgrd.sql
OR
  • Method 2 : Shell utility (dbupgrade)
/scratch/db/db1800/product/18.0.0/dbhome_1/bin/dbupgrade
Post-upgrade Fixup and Invalid object compilation
  • Log in to SQL*Plus, and open the PDB to execute post-upgrade fixups, and to
recompile the INVALID objects in the database:
SQL> ALTER SESSION SET CONTAINER=PDBTESTCLONE;
SQL> @/tmp/postupgrade_fixups.sql
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
SQL>  SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
COUNT(*)
———-
0
SQL>  SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
COUNT(*)
———-
59
SQL> conn / as sysdba
SQL> alter pluggable database PDBTESTCLONE close immediate;
SQL> alter pluggable database PDBTESTCLONE open;
SQL> show pdbs
CON_ID CON_NAME                    OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                      READ ONLY    NO
3 DLY124WLYCLN                 READ WRITE   NO
4 PDBTESTCLONE                 READ WRITE   YES
SQL> select con_id, type, message, status
from PDB_PLUG_IN_VIOLATIONS
where status <> 'RESOLVED'
order by time;
CON_ID  TYPE   MESSAGE                                                                          STATUS
------  -----  -------------------------------------------------------------------------------  -------
10      ERROR  SQL patch ID/UID 24923080/21874550 (): Installed in the PDB but not in the CDB.  PENDING
[db18c@blrgxr6116 OPatch]$ ./datapatch --verbose
………
……….
Current state of SQL patches:
Patch 24923080 ():
Installed in COMPR_PDB only
………
……….
Patch 24923080 rollback (pdb COMPR_PDB): SUCCESS
logfile: /scratch/db18c/app/db12c/cfgtoollogs/sqlpatch/24923080/21874550/24923080_rollback_CDB122R_COMPR_PDB_2018Sep26_15_22_46.log (no errors)
NOTE: before running datapatch, make sure you copy the patch folder from the source Oracle Home to the same sub-directory of the target Oracle Home.
For example, from the source folder /u01/app/oracle/product/12.2.0.1/dbhome_1/sqlpatch to the target Oracle Home.
[oracle@whfoda3011 sqlpatch]$ ls -lrt
total 228
-rwxr-x--- 1 oracle oinstall    999 Apr  1  2016 sqlpatch.bat
-rwxr-x--- 1 oracle oinstall   2732 Apr  1  2016 sqlpatch
-rw-r--r-- 1 oracle oinstall   4208 Jul  2  2016 sqlpatch.pl
-rw-r--r-- 1 oracle oinstall   3516 Aug 16  2016 sqlpatch_bootstrap_driver.sql
-rw-r--r-- 1 oracle oinstall  14532 Aug 26  2016 sqlpatch_bootstrap.sql
drwxr-xr-x 7 oracle oinstall   4096 Jan 26  2017 lib
-rw-r----- 1 oracle oinstall 170922 Apr 10 10:40 sqlpatch.pm
drwxr-xr-x 3 oracle oinstall   4096 Apr 18 14:48 24923080
SQL> conn / as sysdba
SQL> alter pluggable database PDBTESTCLONE close immediate;
SQL> alter pluggable database PDBTESTCLONE open;
SQL> show pdbs
CON_ID CON_NAME                    OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                      READ ONLY    NO
3 DLY124WLYCLN                 READ WRITE   NO
4 PDBTESTCLONE                 READ WRITE   NO

ORA-00600: internal error code, arguments: [4194], [#], [#]


You may see this error in the alert log when starting the PDB or running an ALTER TABLESPACE command, and it can crash the database at any time.
Oracle Database Edition Release 12.2.0.1.0 – 64bit Production.
This error indicates that a mismatch has been detected between redo records and rollback (undo) records.
Here ARGUMENTS: [4194], [#], [#]
Arg [a] - Maximum undo record number in the undo block
Arg [b] - Undo record number from the redo block
Resolution steps are as follows:
alter session set container=PDB01;

create undo tablespace UNDOTBS3 datafile '/user1/containerdb/undotbs03.dbf' size 2000M;

SQL> alter system set undo_tablespace=UNDOTBS3 scope=both;

SQL> drop tablespace UNDOTBS1 including contents and datafiles;

select tablespace_name, status, segment_name from dba_rollback_segs where status != 'OFFLINE';

TABLESPACE_NAME           STATUS               SEGMENT_NAME
--------------------      -------            ---------------- 
SYSTEM                    ONLINE             SYSTEM
UNDOTBS3                  ONLINE             _SYSSMU1_2906661113$


SQL> shutdown immediate;
Pluggable Database closed.
SQL> startup;
Pluggable Database opened.
SQL> show pdbs

CON_ID        CON_NAME      OPEN MODE           RESTRICTED
----------    -----------   -------------       ----------   
10            OSDC141       READ WRITE          NO