Monday, 8 October 2018

Different Examples of expdp and impdp

1) Using Tables clause in expdp

If the table name is mixed case (i.e. case sensitive in the data dictionary), the quotes around it must be escaped on the command line:
expdp otm/Password123@V6DB2 DIRECTORY=DATA_PUMP_DIR DUMPFILE=table1.dmp logfile=OTM_12_27NOV2013_2_imp.log INCLUDE=TABLE:\"IN \(\'TO_TestCase\'\)\"
expdp tccindife/tccindife@barc11r2 directory=DATA_DIR dumpfile=abcd.dmp logfile=tccindife_tables.log content=data_only tables='TCCINDIFE.ACTB_ACCBAL_MISMATCH','TCCINDIFE.ACTB_CONSOL_STMT_FILE'
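The shell escaping in the first command can be avoided by putting the case-sensitive INCLUDE clause in a parameter file instead; a minimal sketch (the file name exp_case.par is illustrative):
# exp_case.par -- quotes need no shell escaping inside a parfile
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=table1.dmp
LOGFILE=table1.log
INCLUDE=TABLE:"IN ('TO_TestCase')"
Run it with: expdp otm/Password123@V6DB2 parfile=exp_case.par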

2) Example Using Parameter Files

expdp system/manager parfile=exp.par

Parameter file:exp.par

DIRECTORY = my_dir

DUMPFILE = exp_tab.dmp

LOGFILE = exp_tab.log

SCHEMAS = scott

INCLUDE = TABLE:"IN ('EMP', 'DEPT')"

3) Examples using Parameter files for all expdp and impdp

exp.sh (Contents Following):
##Initialization of the Oracle Path
PATH=$PATH:$HOME/bin
export ORACLE_HOME=/oraint2/ora1122/product/11.2.0/dbhome_1
export ORACLE_SID=POC1
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
export PATH
LD_LIBRARY_PATH=/oraint2/ora1122/product/11.2.0/dbhome_1/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
#Execution of the expdp command with parfile
expdp parfile=/oraint1/ora10gR2/tcdump/export.par

export.par (Contents Following as a parameter file):
USERID=user_name/password@Instance_name
schemas=tcbnk12
directory=tcbnkdumpdir
compression=all
dumpfile=tcbnk12.dmp
logfile=tcbnk113_21feb2012_FULL.log

imp.sh (Contents Following):
##Initialization of the Oracle Path
PATH=$PATH:$HOME/bin
export ORACLE_HOME=/scratch/db/db1123/product/11.2.0/dbhome_1
export ORACLE_SID=jpub2wek
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
#Execution of the impdp command with parfile
impdp parfile=/scratch/work_area/DEV/dump/impdp.par

impdp.par (Contents Following as a parameter file):

USERID=user_name/password@Instance_name
directory=tcbnkdumpdir
dumpfile=tcbnk12.dmp
logfile=tcbnk12.log
REMAP_SCHEMA=old_schema_name:new_schema_name
REMAP_TABLESPACE=old_tablespace_name:new_tablespace_name
TRANSFORM=SEGMENT_ATTRIBUTES:n
TRANSFORM=OID:n
JOB_NAME=tcbnk12_job

4) Exclude/Include Clause with parameter file.

Impdp.par
userid=TCBNKAT121/TCBNKAT121@TCDEMO
DIRECTORY=DUMP_DIR
DUMPFILE=TCBNK_12.0.0_ITR2_FINAL.DMP
LOGFILE=TCBNK_12.0.0_ITR2_FINAL1.log
REMAP_SCHEMA=TCBNK12R2:TCBNKAT121
REMAP_TABLESPACE=TCBNK12R2:TCBNKAT121
TRANSFORM=SEGMENT_ATTRIBUTES:n
TRANSFORM=OID:n
EXCLUDE=SEQUENCE,PROCEDURE,PACKAGE,TRIGGER,INDEX,SYNONYM,VIEW,FUNCTION,TYPE,LIBRARY
TABLE_EXISTS_ACTION=APPEND
(Note: COMPILE, IGNORE, GRANTS, INDEXES, ROWS and CONSTRAINTS are legacy exp/imp parameters and are not valid in impdp; TABLE_EXISTS_ACTION=APPEND is the Data Pump equivalent of IGNORE=Y.)
The INCLUDE clause works the same way on the expdp command line (note the shell escaping):
% expdp system/manager DIRECTORY=my_dir DUMPFILE=exp_tab.dmp LOGFILE=exp_tab.log SCHEMAS=scott INCLUDE=TABLE:\"IN \(\'EMP\', \'DEPT\'\)\"

5) Example with Remap_Schema

impdp warbadev21/warbadev21@me11g22 directory=datadump dumpfile=WARBA.dmp remap_tablespace=WBTCSIM1:WARBADEV21 remap_schema=WBTCSIM1:WARBADEV21 logfile=warba_dev3.log

impdp USERID=TCS1131DEV/TCS1131DEV@server1 SCHEMAS=tcs1131dev DIRECTORY=DUMP DUMPFILE=expdp_tcs1131dev_full.dmp INCLUDE = PACKAGE_BODY LOGFILE=impdp_tcs1131dev_package_110120121.LOG
IMPDP using a DB link:
impdp system/welcome1@server1 DIRECTORY=RESTORE remap_schema=PAND2BCOLL:PAND2BCOLL remap_tablespace=PAND2BCOLL:PAND2BCOLL NETWORK_LINK=DBLINK_pandgp56 logfile=imp_iut01_pandgp57_1.log
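Note that NETWORK_LINK expects the database link to already exist in the target database. A minimal sketch of creating it (host, port, service and credentials are illustrative, not from the original command):
-- run in the target database; adjust the connect string and credentials
create database link DBLINK_pandgp56
  connect to system identified by welcome1
  using '//source-host:1521/SOURCEDB';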

6) Example of Exporting a Full Database

To export a full database, a large database, or to run the export and import in parallel, use the following command:
$expdp scott/tiger DIRECTORY=data_pump_dir FULL=y DUMPFILE=partsize%U.dmp
FILESIZE=10G LOGFILE=mypartexp.log JOB_NAME=mypartJob
This produces multiple dump files named partsize01.dmp, partsize02.dmp and so on. The FILESIZE parameter sets the maximum size of each piece of the dump file set.
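The multi-piece dump set produced above can later be imported in parallel by reusing the same %U template; a hedged sketch (the PARALLEL value is illustrative):
$impdp scott/tiger DIRECTORY=data_pump_dir FULL=y DUMPFILE=partsize%U.dmp PARALLEL=4 LOGFILE=mypartimp.log
Keeping PARALLEL at or below the number of dump files follows the guideline in section 12.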

7) Table level export

$expdp emp/emp DIRECTORY=dpump_dir_tab DUMPFILE=selectedtables.dmp TABLES=employees,jobs,departments

8) Exporting selective tablespaces

$expdp emp/emp DIRECTORY=dpump_dir_ts DUMPFILE=selected_tbs.dmp TABLESPACES=TBS_emp,TBS_dpt,TBS_Mgr
This will export all the objects stored in the tablespaces TBS_emp, TBS_dpt and TBS_Mgr.

9) Excluding and Including Objects in expdp

$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=exclude.dmp
SCHEMAS=SCOTT EXCLUDE=TABLE:"like 'C%'"
Here all tables in Scott's schema whose names start with "C" will not be exported.
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp
SCHEMAS=SCOTT INCLUDE=TABLE:"like 'Z%'"
Here all tables in Scott's schema whose names start with "Z" will be exported.
Similarly, you can exclude or include other object types such as INDEX, CONSTRAINT, GRANT, USER and SCHEMA; a combined example is sketched below.
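A hedged one-line sketch of excluding several object types at once (the object list and dump file name are illustrative):
$expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=no_idx.dmp SCHEMAS=SCOTT EXCLUDE=INDEX,CONSTRAINT,GRANT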

10) Adding Query to Filter out Rows during Export

expdp emp/passw QUERY=emp:'"WHERE emp_id > 10 AND sal > 90000"' NOLOGFILE=y DIRECTORY=dpump_test DUMPFILE=query.dmp
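Because the QUERY quoting is shell-dependent, it is often easier to put the clause in a parameter file; a minimal sketch (the file name query.par is illustrative):
# query.par -- no shell escaping needed here
DIRECTORY=dpump_test
DUMPFILE=query.dmp
NOLOGFILE=YES
QUERY=emp:"WHERE emp_id > 10 AND sal > 90000"
Run it with: expdp emp/passw parfile=query.par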

11) Expdp/Impdp in different directories/different mountpoints in same server

expdp dumpfile=DATA_PUMP_DIR:TESTONE_datapumpdir_%U.dmp,EXPBKP2:TESTONE_expbkp2_%U.dmp logfile=DATA_PUMP_DIR:TESTONE_expdp.log full=y compression=all parallel=2
impdp dumpfile=DATA_PUMP_DIR:TESTONE_datapumpdir_%U.dmp,EXPBKP2:TESTONE_expbkp2_%U.dmp logfile=TESTONE_impdp.log directory=DATA_PUMP_DIR parallel=2 schemas=SCOTT
(The dump file names on import must match the names written by the export.)

12) Points for parallel EXPDP/IMPDP

As a general rule for parallel processes, look up the cpu_count parameter (v$parameter) and use cpu_count - 1. However, the Oracle Utilities documentation makes a slightly different recommendation for Data Pump:
select name, value from v$parameter where name = 'cpu_count';
1) Set the degree of parallelism to two times the number of CPUs, then tune from there.
2) For Data Pump Export, the PARALLEL parameter value should be less than or equal to the number of output dump files.
For Data Pump Import, the PARALLEL parameter value should not be much larger than the number of files in the dump file set.
For example, with cpu_count = 8 you might start with PARALLEL=16 and a dumpfile template (%U) that can produce at least 16 files.

EXAMPLE 1)

export.par
FILESIZE=10G
compression=all
parallel=32
logfile=expdp_t7.log
dumpfile=expd_t7_%u.dmp
schemas=fcubs1201
directory=expdir
JOB_NAME=fcubs1201_job

EXECUTE

expdp parfile=export.par
Username: sys as sysdba
Password:welcome1

EXAMPLE 2)

expdp vallep/password directory=datapump full=y dumpfile=data.dmp filesize=2G parallel=2 logfile=full.log

EXAMPLE 3):

impdp system/sysdec2012@TRAININGDB schemas=test parallel=2 DIRECTORY=DATA_DUMP DUMPFILE=test_%U.dmp LOGFILE=test.log JOB_NAME=testJob content=metadata_only

13) REUSE_DUMPFILES: Overwrite an existing dump file

This option is available with the Data Pump expdp utility. Normally, if the dump file is already present in the export directory, expdp throws the error "ORA-27038: created file already exists". This happens when you perform repeated exports using the same dump file name. Oracle provides the option reuse_dumpfiles=[Y/N] to avoid this error: set it to Y to overwrite the existing dump file. By default it is N.
$ expdp scott/tiger directory=exp_dir dumpfile=x.dmp tables=example reuse_dumpfiles=y

Upgrade Oracle Database to Latest Oracle Database 18c

Upgrade Types

Database Upgrade Assistant (DBUA) – Interactive GUI method.
Command-Line Method for Upgrading Oracle Database – Manual Method
Oracle Data Pump – Export/Import Method

IMPORTANT POINTS TO REMEMBER

DBUA runs the Pre-Upgrade Information Tool as part of the prerequisite checks it performs before starting the upgrade. However, to reduce downtime, Oracle strongly recommends that you run the Pre-Upgrade Information Tool manually (see the preupgrade.jar example under Method 2).
If the DB is too large and needs to be upgraded on the same server, then DBUA is the best option to select.
Make sure there is sufficient space for the tablespaces to grow during the upgrade, otherwise the upgrade will fail partway through; likewise, in a cluster DB the diskgroup needs sufficient free space.
If you stop the upgrade, you must either restore the DB to start again with DBUA, or continue without restoring by upgrading the DB with the manual (command-line) procedure.
Use the same OS user as for the previous database version.

Method 1 – DBUA

Run DBUA from the already installed 18c binaries:
cd $ORACLE_HOME/bin
./dbua
Under Step 1 of 10
Select the database to be upgraded to the higher version.
Under Step 2 of 10
Select the PDBs to be upgraded.
Under Step 3 of 10
Review the prerequisite check results; each finding is either manually or automatically fixable, with a severity of Warning or Error.
Findings may include patches to apply on the source server, OS or DB parameter changes, PGA/SGA memory suggestions, etc.
Under Step 4 of 10
Select Upgrade Options such as
  • Enable Parallel Upgrade
  • Recompile Invalid Objects During Post Upgrade
  • Upgrade Timezone Data
Select whether user tablespaces are set to read-only during the upgrade.
You can specify SQL scripts to be executed before or after the upgrade.
Under Step 5 of 10
Select Recovery Option
1) Flashback and Guaranteed Restore Point
2) Use RMAN backup
Under Step 6 of 10
Select an existing listener or create a new one.
Under Step 7 of 10
Configure Enterprise Manager, or register the database with an existing OMS.

Method 2 – Manual Upgrade

(Local or Remote Cloning / Plug-out, Plug-In)
  1. Run the Pre-Upgrade Information Tool on the PDB.
[db1221@whf00ccs ~]$ $ORACLE_HOME/jdk/bin/java -jar /scratch/db/db1800/product/18.0.0/dbhome_1/rdbms/admin/preupgrade.jar dir /tmp -c PDBTESTCLONE

==================

PREUPGRADE SUMMARY

==================

  /tmp/preupgrade.log

  /tmp/preupgrade_fixups.sql

  /tmp/postupgrade_fixups.sql
Here in this command:
  • $ORACLE_HOME/jdk/bin/java is the Java binary of the source Oracle Home.
  • /scratch/db/db1800/product/18.0.0/dbhome_1/rdbms/admin/preupgrade.jar is the preupgrade.jar of the target Oracle Home; it determines the version the DB will be upgraded to.
Run preupgrade_fixups.sql on your source database.
For example:
CONNECT / AS SYSDBA
SQL> ALTER SESSION SET CONTAINER = PDBTESTCLONE;
SQL> @/tmp/preupgrade_fixups.sql
Executing Oracle PRE-Upgrade Fixup Script
Auto-Generated by:       Oracle Preupgrade Script
Version: 18.0.0.0.0 Build: 1
Generated on:            2018-09-06 08:01:37
For Source Database:     OBCL12R2
Source Database Version: 12.2.0.1.0
For Upgrade to Version:  18.0.0.0.0
Executing in container: PDBTESTCLONE
Preup                                Preupgrade
Action                               Issue Is
Number  Preupgrade Check Name        Remedied   Further DBA Action
------  ---------------------------  ---------  ------------------------------------------------
        parameter_obsolete           NO         Manual fixup recommended.
        invalid_objects_exist        NO         Manual fixup recommended.
        hidden_params                NO         Informational only. Further action is optional.
        dictionary_stats             YES        None.
        mv_refresh                   NO         Informational only. Further action is optional.
        tablespaces_info             NO         Informational only. Further action is optional.
        parameter_deprecated         NO         Informational only. Further action is optional.
        cycle_number                 NO         Informational only. Further action is optional.

YOU HAVE THE CHOICE TO EITHER CLONE THE PDB OR UNPLUG AND PLUG THE PDB.

With the local clone method, if the DB is too large and there is not enough free space to duplicate it, you can instead use DBUA or the unplug/plug method on the same server; these work on the existing DB data files.
A local clone is suitable for a smaller database, where you can keep a copy of the source in case of any issue.

UPGRADE USING CLONE PROCESS

  • The source PDB must be open in read-only mode.
  • The target database must have a database link to the source database.
  • The user in the source database that the database link connects to must have the create pluggable database privilege.
Steps to create the Remote Clone PDB
  • Create a common user in the source and give required permission.
(In Source)
create user c##dblinkuser identified by oracle123;
grant create session,create pluggable database to c##dblinkuser container=ALL;
  • Create a db link from target container database to source container database.
(In Target)
create database link pdblink connect to c##dblinkuser identified by oracle123 using 'ofss2311694.in.oracle.com:1521/pdbtest1';
  • Source PDB must be in read-only mode
alter pluggable database PDBTEST1 close immediate;
alter pluggable database PDBTEST1 open read only;
  • In the target, create the directory for the PDB's datafiles and create the pluggable database from the source PDB using the DB link. Execute the following command to create the PDB:
create pluggable database PDBTESTCLONE from PDBTEST1@pdblink file_name_convert=('/scratch/db1212dat/CDBTEST/PDBTEST1','/scratch/db1212dat/CDBTEST/PDBTESTCLONE');
Upgrading the Earlier Release PDB to the Later Release
In the target, the cloned PDB is in MOUNTED mode; open it in UPGRADE mode:
SQL> show pdbs
CON_ID CON_NAME                   OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                      READ ONLY   NO
3 DLY124WLYCLN                  READ WRITE  NO
4 PDBTESTCLONE                  MOUNTED     YES
SQL> alter pluggable database PDBTESTCLONE open upgrade;
SQL> show pdbs
CON_ID CON_NAME                    OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                      READ ONLY   NO
3 DLY124WLYCLN                  READ WRITE  NO
4 PDBTESTCLONE                  MIGRATE     YES

Upgrade the PDB Using Upgrade Utility Command
  • Method 1 : Parallel Upgrade Utility command (catctl.pl)
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catctl.pl -d \
$ORACLE_HOME/rdbms/admin -c 'PDBTESTCLONE' -l $ORACLE_BASE catupgrd.sql
OR
  • Method 2 : Shell utility (dbupgrade)
/scratch/db/db1800/product/18.0.0/dbhome_1/bin/dbupgrade
Post-upgrade Fixup and Invalid object compilation
  • Log in to SQL*Plus and open the PDB to execute the post-upgrade fixups and to recompile the INVALID objects in the database:
SQL> ALTER SESSION SET CONTAINER=PDBTESTCLONE;
SQL> @/tmp/postupgrade_fixups.sql
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql
SQL>  SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
COUNT(*)
----------
0
SQL>  SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
COUNT(*)
----------
59
SQL> conn / as sysdba
SQL> alter pluggable database PDBTESTCLONE close immediate;
SQL> alter pluggable database PDBTESTCLONE open;
SQL> show pdbs
CON_ID CON_NAME                    OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                      READ ONLY    NO
3 DLY124WLYCLN                 READ WRITE   NO
4 PDBTESTCLONE                 READ WRITE   YES
SQL> select con_id, type, message, status
from PDB_PLUG_IN_VIOLATIONS
where status <> 'RESOLVED'
order by time;
CON_ID  TYPE   MESSAGE                                                                          STATUS
------  -----  -------------------------------------------------------------------------------  -------
10      ERROR  SQL patch ID/UID 24923080/21874550 (): Installed in the PDB but not in the CDB.  PENDING
[db18c@blrgxr6116 OPatch]$ ./datapatch -verbose
………
……….
Current state of SQL patches:
Patch 24923080 ():
Installed in COMPR_PDB only
………
……….
Patch 24923080 rollback (pdb COMPR_PDB): SUCCESS
logfile: /scratch/db18c/app/db12c/cfgtoollogs/sqlpatch/24923080/21874550/24923080_rollback_CDB122R_COMPR_PDB_2018Sep26_15_22_46.log (no errors)
NOTE: before running datapatch, make sure you copy the patch folder from the source Oracle Home (OH) to the same sub-directory of the target Oracle Home (OH).
For example, from the source folder /u01/app/oracle/product/12.2.0.1/dbhome_1/sqlpatch to the target OH.
[oracle@whfoda3011 sqlpatch]$ ls -lrt
total 228
-rwxr-x--- 1 oracle oinstall    999 Apr  1  2016 sqlpatch.bat
-rwxr-x--- 1 oracle oinstall   2732 Apr  1  2016 sqlpatch
-rw-r--r-- 1 oracle oinstall   4208 Jul  2  2016 sqlpatch.pl
-rw-r--r-- 1 oracle oinstall   3516 Aug 16  2016 sqlpatch_bootstrap_driver.sql
-rw-r--r-- 1 oracle oinstall  14532 Aug 26  2016 sqlpatch_bootstrap.sql
drwxr-xr-x 7 oracle oinstall   4096 Jan 26  2017 lib
-rw-r----- 1 oracle oinstall 170922 Apr 10 10:40 sqlpatch.pm
drwxr-xr-x 3 oracle oinstall   4096 Apr 18 14:48 24923080
SQL> conn / as sysdba
SQL> alter pluggable database PDBTESTCLONE close immediate;
SQL> alter pluggable database PDBTESTCLONE open;
SQL> show pdbs
CON_ID CON_NAME                    OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                      READ ONLY    NO
3 DLY124WLYCLN                 READ WRITE   NO
4 PDBTESTCLONE                 READ WRITE   NO

ORA-00600: internal error code, arguments: [4194], [#], [#]


Error: ORA-00600: internal error code, arguments: [4194], [#], [#]
You may see this error in the alert log when starting the PDB or running an ALTER TABLESPACE command, and it can crash the DB at any time.
Seen on Oracle Database Release 12.2.0.1.0 - 64bit Production.
This error indicates that a mismatch has been detected between redo records and rollback (undo) records.
Here ARGUMENTS: [4194], [#], [#]
Arg [a] - Maximum undo record number in the undo block
Arg [b] - Undo record number from the redo block
Resolution steps are as follows:
alter session set container=PDB01;

create undo tablespace UNDOTBS3 datafile '/user1/containerdb/undotbs03.db' size 2000M;

SQL> alter system set undo_tablespace=UNDOTBS3 scope=both;

SQL> drop tablespace UNDOTBS1 including contents and datafiles;

select tablespace_name, status, segment_name from dba_rollback_segs where status != 'OFFLINE';

TABLESPACE_NAME           STATUS               SEGMENT_NAME
--------------------      -------            ---------------- 
SYSTEM                    ONLINE             SYSTEM
UNDOTBS3                  ONLINE             _SYSSMU1_2906661113$


SQL> shutdown immediate;
Pluggable Database closed.
SQL> startup;
Pluggable Database opened.
SQL> show pdbs

CON_ID        CON_NAME      OPEN MODE           RESTRICTED
----------    -----------   -------------       ----------   
10            OSDC141       READ WRITE          NO

Wednesday, 22 November 2017

Check points before and after IMPDP/EXPDP

(Please Note my other blog address: svgulhane wordpress IT updates)

  Check points before IMPDP/EXPDP

  • Check that the Oracle directory object exists and that the OS directory where you want to store the dump files has been created. The directory object points to the OS directory where Oracle will create the .dmp files; later you point your IMPDP job at the same directory object.
Oracle Directory
create or replace directory IN_WIP as '/oraint2/oradata/FCIS921/dpdump/';
grant execute, read, write on directory IN_WIP to UBS105SUP;
OR
grant all on directory IN_WIP to UBS105SUP;
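You can confirm that the directory object exists and where it points before starting the job (standard dictionary view):
select directory_name, directory_path from dba_directories where directory_name = 'IN_WIP';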
  • Check the free space available in the server mount point. The space to be consumed during expdp needs to be estimated in advance, otherwise the job will stop. In that situation you can free space by deleting unwanted files from the storage, and the job restarts automatically. You can check the status by attaching to the job with the ATTACH command.
df -h
[user@server1~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda2            184G   32G  143G  18% /
tmpfs                 7.5G     0  7.5G   0% /dev/shm
/dev/xvda1            976M  295M  631M  32% /boot
/dev/xvda5            4.8G  777M  3.8G  17% /oem
blrnas404.in.oracle.com:/export/1170
  • Check the System and user tablespace sizes; they should be sufficient. Note that the SYSTEM tablespace is used by Oracle for its scratch work during IMPDP and is mostly consumed during compilation of objects. Space used in the SYSTEM tablespace cannot be recovered, even if you drop the schema. In the case of the schema tablespace, the space is used to store the actual data of the tables and other objects, and it can be returned by dropping that schema. You can check the space before firing the IMPDP command with the following query.
SET LINESIZE 300
SET PAGESIZE 225
column File_path format a60
column AUTOEXTENSIBLE format a3
column Allocated format 999,999,9999
column Free format 999,999,9999
column Maxsize format 9999999,9999
column Yet_Not_alloc_MB format 9999999,9999
select
a.tablespace_name,
sum(a.BYTES/(1024*1024))Allocated,
(b.free_space) Free,
sum(a.maxbytes/(1024*1024))Maxsize ,
abs(sum((a.maxbytes/(1024*1024))-(a.BYTES/(1024*1024)))) Yet_Not_alloc_MB
from dba_data_files a,
(select tablespace_name, round(sum(bytes)/1024/1024 ,2) as free_space
from dba_free_space group by tablespace_name) b
where a.tablespace_name like('%&TS_Name%') and a.tablespace_name=b.tablespace_name
group by a.tablespace_name,free_space
order by a.tablespace_name ;
  • Check the version of the source and target database, to avoid the following types of errors when importing the dump file into a different version of the database schema.
SELECT * FROM V$VERSION
    IMP-00010: not a valid export file, header failed verification
    IMP-00000: Import terminated unsuccessfully
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    Import: Release 10.2.0.3.0 - 64bit Production on Tuesday, 07 June, 2017 09:43:10
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-39142: incompatible version number 3.1 in dump file
    "/u04/oracledb/dpdump/test_version_failed.dmp"
    To avoid this issue, take care while exporting the dump to mention the target version of the DB. E.g.:
    If you want to import an 11g expdp dump file into Oracle 10g, you need to add the VERSION=10.2 parameter during the export.
    From 11g, the expdp command:
    expdp xyz/xxxxx parfile=exp_version.par version=10.2
    Now import the dump into the 10.2.0.1 database.
    From 10g, the impdp command:
    impdp xyz/xxxx parfile=imp_version.par
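The referenced parameter files are not shown above; a minimal sketch of what exp_version.par might contain (contents are an assumption, not the original file):
# exp_version.par -- illustrative only
DIRECTORY=exp_dir
DUMPFILE=exp_version.dmp
LOGFILE=exp_version.log
SCHEMAS=xyz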
  • Check the size of the dump to be imported
Dump file size:
ls -lrt
-rw-rw-rw- 1 db1123 oinstall 12538 Nov 20 00:00 expdp-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 2210488320 Nov 20 00:00 expdp-20112014_000001.dmp
-rw-rw-rw- 1 db1123 oinstall 4497 Nov 20 00:00 expdp-tinfra-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 16900096 Nov 20 00:00 expdp-tinfra-20112014_000001.dmp
  • Check all the privileges related to expdp or impdp are granted.
set linesize 300
set pagesize 100
column Object format a30
column PRIVS format a30
column Type format a20
select P1 OBJECT, nvl(o.object_type, '(ROLE)')Type
, P2 PRIVS from (select Grantee G, Table_name p1, privilege p2
from dba_tab_privs
union
select Grantee G, null p1, granted_role p2
from dba_role_privs
union
select Grantee G, null p1, privilege p2 from dba_sys_privs),
dba_objects o
where G = '&USER'
and p1 = o.object_name(+);
  • Grant all the basic privileges required.
grant create session to &&USR;
create public synonym dbms_shared_pool for sys.dbms_shared_pool;
grant select on dba_jobs_running to &USR;
grant select on v_$database to &USR;
grant select on v_$nls_parameters to &USR;
grant select on v_$parameter to &USR;
grant select on v_$process to &USR;
grant select on v_$session to &USR;
grant select on v_$timer to &USR;
grant select on v_$instance to &USR;
grant create database link to &USR;
grant create library to &USR;
grant create materialized view to &USR;
grant create procedure to &USR;
grant create sequence to &USR;
grant create session to &USR;
grant create synonym to &USR;
grant create table to &USR;
grant create trigger to &USR;
grant create type to &USR;
grant create view to &USR;
grant execute on dbms_alert to &USR;
grant execute on dbms_application_info to &USR;
grant execute on dbms_aq to &USR;
grant execute on dbms_aqadm to &USR;
grant execute on dbms_defer to &USR;
grant execute on dbms_defer_query to &USR;
grant execute on dbms_defer_sys to &USR;
grant execute on dbms_job to &USR;
grant execute on dbms_lock to &USR;
grant execute on dbms_pipe to &USR;
grant execute on dbms_refresh to &USR;
grant execute on dbms_rls to &USR;
grant execute on dbms_shared_pool to &USR;
grant execute on dbms_sql to &USR;
grant execute on utl_file to &USR;
--grant exp_full_database to &USR; (Check with DBA before granting this privilege)
--grant imp_full_database to &USR; (Check with DBA before granting this privilege)
grant execute on SYS.UTL_MAIL to &USR;
--grant execute on SYS.CTX_DDL to &USR;
grant execute on CTXSYS.CTX_DDL to &USR;
grant execute on SYS.DBMS_CRYPTO to &USR;
grant execute on SYS.UTL_RAW to &USR;
grant select on dba_data_files to &Usr;
grant select on dba_free_space to &Usr;
  • Check the location of the dump and privileges granted on that path
ls -lrt
-rw-rw-rw- 1 db1123 oinstall 12538 Nov 20 00:00 expdp-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 2210488320 Nov 20 00:00 expdp-20112014_000001.dmp  ----- 2 GB
-rw-rw-rw- 1 db1123 oinstall 4497 Nov 20 00:00 expdp-tinfra-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 16900096 Nov 20 00:00 expdp-tinfra-20112014_000001.dmp  ----- 16 MB
  • Parallel Import/Export needs multiple .dmp files

    If there are 6 CPUs, then around 12 dump files can be optimal for parallel processing.
    EXPDP parallel example: export.par
    FILESIZE=10G
    compression=all
    parallel=32
    logfile=expdp_t7.log
    dumpfile=expd_t7_%u.dmp
    schemas=fcubs1201
    directory=expdir
    JOB_NAME=fcubs1201_job
    Execute
    expdp parfile=export.par
    Username: sys as sysdba
    Password: welcome1
    IMPDP parallel example:
    impdp system/sysdec2012@TRAININGDB schemas=test parallel=2 DIRECTORY=DATA_DUMP DUMPFILE=test_%U.dmp LOGFILE=test.log JOB_NAME=testJob content=metadata_only
  • Do not invoke Import as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.

  • Increase undo retention

  • Increase redo log file size

  • If the DB is in archivelog mode, check the archive log destination; it may fill up and raise the following error:

ORA-19815: WARNING: db_recovery_file_dest_size of 3211788288000 bytes is 100.00% used, and has 0 remaining bytes available.
In RMAN, run: list archivelog all; then delete archivelog all;

Checkpoints after IMPDP/EXPDP

  • Check the count of the objects imported so far and find out how many there were in the source schema.
  • Check the log file generated during expdp or impdp to track the progress of the job.
  • Generally an import fails because the system or user tablespace runs out of free space, or sometimes because the disk itself is full.
  • Check if any user has locked the objects; if locks exist, clear them:
select 'alter system kill session '''|| sess.sid||','||sess.serial#||''' immediate ;'  from v$locked_object lo, dba_objects ao, v$session sess where ao.object_id=lo.object_id and lo.session_id=sess.sid;
  • If your session gets closed, you can attach to the running job. Attach to that job_name and find the issues with the STATUS command.
Attach to EXPDP and IMPDP:
impdp username_to_whom_job_executed/pwd attach= 'SYS_IMPORT_FULL_02'
expdp ddl/slrmclaren@TCTOOLS attach= 'DDLEXP'

impdp>status
impdp> STOP_JOB=IMMEDIATE
impdp>kill_job
status (shows the running status of the job)
stop_job=immediate (the job can be started again later)

Query to verify the expdp/impdp

select
   round(sofar/totalwork*100,2) percent_completed,
   v$session_longops.*
from
   v$session_longops
where
   sofar <> totalwork
order by
   target,
   sid;
select substr(sql_text, instr(sql_text,'"')+1,
instr(sql_text,'"', 1, 2)-instr(sql_text,'"')-1)
table_name,
rows_processed,
round((sysdate
- to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))
*24*60, 1) minutes,
trunc(rows_processed /
((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))
*24*60)) rows_per_min
from
v$sqlarea
where
upper(sql_text) like 'INSERT % INTO "%'
and
command_type = 2
and
open_versions > 0;
select
sid,
serial#
from
v$session s,
dba_datapump_sessions d
where
s.saddr = d.saddr;
select
sid,
serial#,
sofar,
totalwork
from
v$session_longops;
select * from USER_DATAPUMP_JOBS;
select * from DBA_DATAPUMP_SESSIONS;
select owner_name, job_name, state from dba_datapump_jobs where state not in ('NOT RUNNING');
SET LINESIZE 300
SET PAGESIZE 200
column OWNER_NAME format a10
column JOB_NAME format a30
column OPERATION format a10
column JOB_MODE format a6
column STATE format a20
column DATAPUMP_SESSIONS format 99999
column ATTACHED_SESSIONS format 99999
select OWNER_NAME,JOB_NAME,substr(operation,1,10) operation,state, ATTACHED_SESSIONS, DATAPUMP_SESSIONS from dba_datapump_jobs
where state not in ('NOT RUNNING');

(Please Note my other blog address: svgulhane wordpress IT updates)

Wednesday, 15 November 2017

All about IMPDP and EXPDP

(Please Note my other blog address: svgulhane wordpress IT updates)

Table of Contents

Before starting EXPDP and IMPDP

Verification of status of IMPDP job

Estimation of Space Utilization while import

Finding file Location, Space Utilized, Max Space Allocated, Autoextend Y/N

Various Query used to find the issues while expdp and impdp

DBA_DATAPUMP_JOBS Table Structure

Finding the Job_name from below views

Keywords in EXPDP and IMPDP

The Commands are used while in interactive mode.

Problem and Solution:

Before starting EXPDP and IMPDP

  • Check that the Oracle directory object exists and that the OS directory where you want to store the dump files has been created.

Oracle Directory

create or replace directory IN_WIP as '/oraint2/oradata/FCIS921/dpdump/';
grant execute, read, write on directory IN_WIP to UBS105SUP;
OR
grant all on directory IN_WIP to UBS105SUP;
  • Check the free space available in the server mount point

df -h
[user@server1~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda2            184G   32G  143G  18% /
tmpfs                 7.5G     0  7.5G   0% /dev/shm
/dev/xvda1            976M  295M  631M  32% /boot
/dev/xvda5            4.8G  777M  3.8G  17% /oem
blrnas404.in.oracle.com:/export/1170

  • Check the System and user tablespace sizes; they should be sufficient


SET LINESIZE 300
SET PAGESIZE 225
column File_path format a60
column AUTOEXTENSIBLE format a3
column Allocated format 999,999,9999
column Free format 999,999,9999
column Maxsize format 9999999,9999
column Yet_Not_alloc_MB format 9999999,9999
select
a.tablespace_name,
sum(a.BYTES/(1024*1024))Allocated,
(b.free_space) Free,
sum(a.maxbytes/(1024*1024))Maxsize ,
abs(sum((a.maxbytes/(1024*1024))-(a.BYTES/(1024*1024)))) Yet_Not_alloc_MB
from dba_data_files a,
(select tablespace_name, round(sum(bytes)/1024/1024 ,2) as free_space
from dba_free_space group by tablespace_name) b
where a.tablespace_name like('%&TS_Name%') and a.tablespace_name=b.tablespace_name
group by a.tablespace_name,free_space
order by a.tablespace_name ;

  • Check the version of the source and target database


SELECT * FROM V$VERSION

  • Check the size of the dump to be imported


Dump file size:
ls -lrt
-rw-rw-rw- 1 db1123 oinstall 12538 Nov 20 00:00 expdp-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 2210488320 Nov 20 00:00 expdp-20112014_000001.dmp  ----- 2 GB
-rw-rw-rw- 1 db1123 oinstall 4497 Nov 20 00:00 expdp-tinfra-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 16900096 Nov 20 00:00 expdp-tinfra-20112014_000001.dmp  ----- 16 MB





  • Check all the privileges related to expdp or impdp are granted.


set linesize 300
set pagesize 100
column Object format a30
column PRIVS format a30
column Type format a20
select P1 OBJECT, nvl(o.object_type, '(ROLE)')Type
, P2 PRIVS from (select Grantee G, Table_name p1, privilege p2
from dba_tab_privs
union
select Grantee G, null p1, granted_role p2
from dba_role_privs
union
select Grantee G, null p1, privilege p2 from dba_sys_privs),
dba_objects o
where G = '&USER'
and p1 = o.object_name(+);
  • Grant all the basic privileges required.

grant create session to &&USR;
create public synonym dbms_shared_pool for sys.dbms_shared_pool;
grant select on dba_jobs_running to &USR;
grant select on v_$database to &USR;
grant select on v_$nls_parameters to &USR;
grant select on v_$parameter to &USR;
grant select on v_$process to &USR;
grant select on v_$session to &USR;
grant select on v_$timer to &USR;
grant select on v_$instance to &USR;
grant create database link to &USR;
grant create library to &USR;
grant create materialized view to &USR;
grant create procedure to &USR;
grant create sequence to &USR;
grant create session to &USR;
grant create synonym to &USR;
grant create table to &USR;
grant create trigger to &USR;
grant create type to &USR;
grant create view to &USR;
grant execute on dbms_alert to &USR;
grant execute on dbms_application_info to &USR;
grant execute on dbms_aq to &USR;
grant execute on dbms_aqadm to &USR;
grant execute on dbms_defer to &USR;
grant execute on dbms_defer_query to &USR;
grant execute on dbms_defer_sys to &USR;
grant execute on dbms_job to &USR;
grant execute on dbms_lock to &USR;
grant execute on dbms_pipe to &USR;
grant execute on dbms_refresh to &USR;
grant execute on dbms_rls to &USR;
grant execute on dbms_shared_pool to &USR;
grant execute on dbms_sql to &USR;
grant execute on utl_file to &USR;
--grant exp_full_database to &USR; (Check with DBA before granting this privilege)
--grant imp_full_database to &USR; (Check with DBA before granting this privilege)
grant execute on SYS.UTL_MAIL to &USR;
--grant execute on SYS.CTX_DDL to &USR;
grant execute on CTXSYS.CTX_DDL to &USR;
grant execute on SYS.DBMS_CRYPTO to &USR;
grant execute on SYS.UTL_RAW to &USR;
grant select on dba_data_files to &Usr;
grant select on dba_free_space to &Usr;
  • Check the location of the dump and privileges granted on that path

ls -lrt
-rw-rw-rw- 1 db1123 oinstall 12538 Nov 20 00:00 expdp-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 2210488320 Nov 20 00:00 expdp-20112014_000001.dmp  ----- 2 GB
-rw-rw-rw- 1 db1123 oinstall 4497 Nov 20 00:00 expdp-tinfra-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 16900096 Nov 20 00:00 expdp-tinfra-20112014_000001.dmp  ----- 16 MB



  • Increase undo retention
  • Increase redo log file size

  • If the DB is in archivelog mode, check the archive log destination; it may fill up:
ORA-19815: WARNING: db_recovery_file_dest_size of 3211788288000 bytes is 100.00% used, and has 0 remaining bytes available.
    In RMAN, run: list archivelog all;
    then: delete archivelog all;
  • Do not invoke Import as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.
  • Check the syntax and keywords used in the command

  • Check the count of the objects imported so far and find where the job stopped.

  • Check the log file generated during expdp or impdp to track the progress of the job.

  • Generally an import fails because the system or user tablespace runs out of free space, or sometimes because the disk itself is full.

  • Check if any user has locked the objects; if locks exist, clear them

  • Parallel import needs multiple .dmp files

  • If there are 6 CPUs, then around 12 dump files can be optimal for parallel processing.

  • Importing data from a different version:
  • Include the VERSION parameter (e.g. VERSION=10.1.0.1) in the impdp parameters as well, matching the version used when exporting the dump.
  • If your session gets closed, you can attach to the running job. Attach to that job_name and find the issues with the STATUS command.

Example:

impdp username_to_whom_job_executed/pwd attach= 'SYS_IMPORT_FULL_02'
expdp ddl/slrmclaren@TCTOOLS attach= 'DDLEXP'

status (shows the running status of the job)
stop_job=immediate (the job can be started again later)
impdp>status
impdp> STOP_JOB=IMMEDIATE
impdp>kill_job


Verification of status of IMPDP job

Basic check at UNIX level to see whether the import process is active:
ps -ef |grep impdp
[ora11g22@ofss220004 impdp]$ ps -ef |grep impdp
ora11g22 3603 25472 0 12:00 pts/0 00:00:00 /bin/sh ./impdp.sh
ora11g22 3604 3603 0 12:00 pts/0 00:00:01 impdp parfile=/scratch/software/impdp/impdp.par
ora11g22 6808 6663 0 12:28 pts/9 00:00:02 impdp parfile=impdp.par
ora11g22 8929 25472 0 12:47 pts/0 00:00:00 grep impdp
Attach to the job and check its progress:
Step 1) Find the job name to attach to.
Open the import logfile; within its first few lines it will say:
Master table "TCBNK12R2"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Here SYS_IMPORT_SCHEMA_01 is the job_name of the running import process.
Step 2) At the Unix prompt, type:
impdp username/password attach=job_name
impdp TCBNK12R2/TCBNK12R2 attach=SYS_IMPORT_SCHEMA_01
The output will be something like below :-
Import: Release 11.2.0.2.0 - Production on Thu Aug 2 12:54:50 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Job: SYS_IMPORT_SCHEMA_01
Owner: TCBNK12R2
Operation: IMPORT
Creator Privs: TRUE
GUID: C6438DC2127F0AB5E040B80A9C870E15
Start Time: Thursday, 02 August, 2012 12:00:57
Mode: SCHEMA
Instance: DONTFAIL
Max Parallelism: 1
EXPORT Job Parameters:
CLIENT_COMMAND TCBNK12R2/********@TC120R2 parfile=parfile_TCBNK12R2_FB.par
IMPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND TCBNK12R2/******** parfile=/scratch/software/impdp/impdp.par
TABLE_EXISTS_ACTION APPEND
State: EXECUTING
Bytes Processed: 3,259,068,472
Percent Done: 99
Current Parallelism: 1
Job Error Count: 0
Dump File: /scratch/software/impdp/TCBNK_12.0.0_ITR2_FINAL.DMP
Worker 1 Status:
Process Name: DW00
State: EXECUTING
Object Schema: TCBNK12R2
Object Name: ORPKS_ORDMURKE_KERNEL
Object Type: SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Completed Objects: 800
Worker Parallelism: 1
Import>
Here State means the JOB state: if it is EXECUTING, the job is running; otherwise it will be UNKNOWN, STOPPED, etc.
The job status can also be found via the command below (if you have privileged database access):
SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
OWNER_NAME JOB_NAME             OPERATION  JOB_MODE   STATE
---------- -------------------- ---------- ---------- ------------
SYSTEM     SYS_EXPORT_FULL_01   EXPORT     FULL       EXECUTING
To start/resume the job interactively:
Import> start_job
To check the status of the job:
Import> status
To exit the Import> console:
Import> exit_client

Estimation of Space Utilization while import

Dump size: 1.85 GB
Space used by schema: at most 2 GB
System tablespace used: 15 GB
Total time taken: 4.20 min
Another estimation of import:

Dump size: 73 GB
Space used after import: 210 GB
System tablespace consumed: 1 GB
Space utilization in RMAN:
For a 120 GB DB, about 14 GB of disk space was required, around 12% of the total DB size.
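Before a large export you can also ask Data Pump itself for a size estimate without writing a dump file (standard expdp parameters; the credentials are illustrative):
expdp system/manager SCHEMAS=scott ESTIMATE_ONLY=YES ESTIMATE=BLOCKS NOLOGFILE=YES
The estimate is printed in the client output; ESTIMATE=STATISTICS can be used instead when table statistics are current.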

Finding file Location, Space Utilized, Max Space Allocated, Autoextend Y/N

SET LINESIZE 300
SET PAGESIZE 225
column File_path format a60
column AUTOEXTENSIBLE  format a3
column used_G format 999,999
column maxsize_G format 999,999
column user_bytes format 9999999
select
a.file_name File_path,
a.BYTES/(1024*1024*1024)used_G,
a.maxbytes/(1024*1024*1024)maxsize_G ,
a.AUTOEXTENSIBLE
, b.creation_time
, a.user_bytes/(1024*1024) user_bytes
-- , a.INCREMENT_BY
from dba_data_files a,v$datafile b
where a.file_id =b.file#
and a.tablespace_name like('%&TS_Name%')
and a.file_name like('%&File_Name%')
order by b.creation_time;

Various Queries used to find issues while expdp and impdp

Query 1:

You can observe v$session_longops to monitor some parts of an Oracle export job.
select
   round(sofar/totalwork*100,2)  percent_completed,
   v$session_longops.*
from
   v$session_longops
where
   sofar <> totalwork
order by
   target,
   sid;

Query 2:

col table_name format a30
select substr(sql_text, instr(sql_text,'"')+1,
               instr(sql_text,'"', 1, 2)-instr(sql_text,'"')-1)
          table_name,
       rows_processed,
       round((sysdate
              - to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))
             *24*60, 1) minutes,
       trunc(rows_processed /
                ((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))
             *24*60)) rows_per_min
from
   v$sqlarea
where
  upper(sql_text) like 'INSERT % INTO "%'
  and
  command_type = 2
  and
  open_versions > 0;

Query 3:

select
   sid,
   serial#
from
   v$session s,
   dba_datapump_sessions d
where
   s.saddr = d.saddr;

select
   sid,
   serial#,
   sofar,
   totalwork
from
   v$session_longops;

Query 4: (Job name related)

DBA_DATAPUMP_JOBS Table Structure
OWNER_NAME         VARCHAR2(30)  Logged-in schema name from which the job was executed
JOB_NAME           VARCHAR2(30)  Job name, system defined (e.g. SYS_EXPORT_SCHEMA_15) or specified in the expdp command, e.g. job_name=new_job
OPERATION          VARCHAR2(30)  IMPORT or EXPORT
JOB_MODE           VARCHAR2(30)  FULL
STATE              VARCHAR2(30)  EXECUTING or NOT RUNNING
DEGREE             NUMBER
ATTACHED_SESSIONS  NUMBER
DATAPUMP_SESSIONS  NUMBER
Finding the Job_name from below views
select * from USER_DATAPUMP_JOBS;
select * from DBA_DATAPUMP_SESSIONS;

select owner_name, job_name, state from dba_datapump_jobs where state not in ('NOT RUNNING');
SET LINESIZE 300
SET PAGESIZE 200
column OWNER_NAME format a10
column JOB_NAME format a30
column OPERATION format a10
column JOB_MODE format a6
column STATE format a20
column DATAPUMP_SESSIONS format 99999
column ATTACHED_SESSIONS format 99999
select OWNER_NAME,JOB_NAME,substr(operation,1,10) operation,state, ATTACHED_SESSIONS, DATAPUMP_SESSIONS from dba_datapump_jobs
where state not in ('NOT RUNNING');
select * from dba_datapump_jobs;
select job_name,state from dba_datapump_jobs;
You can drop the master tables returned by the queries below if the job state is NOT RUNNING.
column status format a10
SELECT o.status, o.object_id, o.object_type,o.owner||'.'||object_name "OWNER.OBJECT" FROM dba_objects o, dba_datapump_jobs j WHERE o.owner=j.owner_name AND o.object_name=j.job_name AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;
select STATE, JOB_NAME from dba_datapump_jobs where owner_name = 'APACK2114';
SELECT o.status, o.object_id, o.object_type,o.owner||'.'||object_name "OWNER.OBJECT" FROM dba_objects o, dba_datapump_jobs j WHERE o.owner=j.owner_name AND o.object_name=j.job_name AND j.owner_name like 'APACK2114';
Dropping a table whose state is NOT RUNNING:
drop table APACK2114.SYS_IMPORT_FULL_01;
Query 5: Verification of object counts in both the target and source schemas.
select count(*) from dba_objects where owner='ALBBSUP';

Keywords in EXPDP and IMPDP

The available keywords and their descriptions follow. Default values are listed within square brackets.
ATTACH
Attach to an existing job.
For example, ATTACH=job_name.
CLUSTER
Utilize cluster resources and distribute workers across the Oracle RAC instances.
Valid keyword values are: [Y] and N.
CONTENT
Specifies data to load.
Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.
DATA_OPTIONS
Data layer option flags.
Valid keywords are: SKIP_CONSTRAINT_ERRORS.
DIRECTORY
Directory object to be used for dump, log and SQL files.
DUMPFILE
List of dump files to import from [expdat.dmp].
For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD
Password key for accessing encrypted data within a dump file.
Not valid for network import jobs.
ESTIMATE
Calculate job estimates.
Valid keywords are: [BLOCKS] and STATISTICS.
EXCLUDE
Exclude specific object types.
For example, EXCLUDE=SCHEMA:"='HR'".
FLASHBACK_SCN
SCN used to reset session snapshot.
FLASHBACK_TIME
Time used to find the closest corresponding SCN value.
FULL
Import everything from source [Y].
HELP
Display help messages [N].
INCLUDE
Include specific object types.
For example, INCLUDE=TABLE_DATA.
INCLUDE=PACKAGE,FUNCTION
INCLUDE=PROCEDURE:"LIKE 'XYZ%'"
INCLUDE=TABLE:"LIKE 'XYZ%'"
INCLUDE=VIEW:"LIKE 'XYZ%'"
Example include=package option:
userid=TC1203DEMO/TC1203DEMO@KERELCM
DIRECTORY=IN_WIP
DUMPFILE=TCBNK_12.0.3.0.0_BNK1203OSDC_FULL_DUMP.DMP
LOGFILE=TCBNK_12.0.3.0.0_BNK1203OSDC_FULL_DUMP2.LOG
REMAP_SCHEMA=BNK1203OSDC:TC1203DEMO
REMAP_TABLESPACE=BNK1203OSDC:TC1203DEMO
TRANSFORM=OID:n
INCLUDE=PACKAGE
TABLE_EXISTS_ACTION=APPEND
(Note: IGNORE, GRANTS, INDEXES, ROWS and CONSTRAINTS are legacy imp parameters and are not valid in impdp; TABLE_EXISTS_ACTION=APPEND is the Data Pump equivalent of IGNORE=Y.)

The following commands are used while in interactive mode.

Note: abbreviations are allowed.
CONTINUE_CLIENT
Return to logging mode. Job will be restarted if idle.
EXIT_CLIENT
Quit client session and leave job running.
HELP
Summarize interactive commands.
KILL_JOB
Detach and delete job.
PARALLEL
Change the number of active workers for current job.
START_JOB
Start or resume current job.
Valid keywords are: SKIP_CURRENT.
STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.
STOP_JOB
Orderly shutdown of job execution and exits the client.
Valid keywords are: IMMEDIATE.
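A short sketch of a typical interactive exchange after attaching to a job (output omitted; the parallel value is illustrative):
Import> status
Import> parallel=4
Import> continue_client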


Problem and Solution:

Problem:

"segmentation error" on the user site

When executing from the server, it gives the error: bdbcopy.CSTB_ITEM_DESC not found
(missing schema_name.object_name in TABLES=)
User command:
impdp bdbcopy/bdbcopy TABLES=CSTB_ITEM_DESC DIRECTORY=EXPDPDIR CONTENT=DATA_ONLY DUMPFILE=bdbmadrid.dmp LOGFILE=bdbmadrid_imp_labels.log;
Solution: (need to write schema_name.object_name)
Changed command:
impdp bdbcopy/********@WEUTF11G TABLES=BDBMADRID.CSTB_ITEM_DESC DIRECTORY=EXPDPDIR CONTENT=DATA_ONLY DUMPFILE=bdbmadrid.dmp LOGFILE=New.log REMAP_SCHEMA=BDBMADRID:BDBCOPY REMAP_TABLESPACE=BDBMADRID:BDBCOPY

Problem:

Unable to create master table. Insufficient Privileges

Solution:

The required basic privileges were granted.

Problem:

Processing object type SCHEMA_EXPORT/VIEW/VIEW Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY Job "BDBFFT"."SYS_IMPORT_FULL_01" stopped due to fatal error . Instance Name-WEUTF11G at 10.184.74.181 server.

Solution:

From the log file the underlying error was found: unable to create INITIAL extent for segment in tablespace FBLDEV1.
The tablespace was increased and the issue was resolved, e.g. as sketched below.
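One hedged way to give the affected tablespace room to grow (the datafile path and sizes are illustrative):
alter tablespace FBLDEV1 add datafile '/u01/oradata/fbldev1_02.dbf' size 2g autoextend on maxsize 8g;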

Problem:

Export file created by EXPORT:V09.02.00 via direct path IMP-00013: only a DBA can import a file exported by another DBA IMP-00000: Import terminated unsuccessfully

Solution:

Full grants were given to the user, and after that the import was restarted.

Problem:

I am trying to run EXPDP from a FLEXCUBE front end screen. While doing so I am getting errors like ORA-06512: at "SYS.KUPC$QUE_INT", line 588 ORA-01013: user requested cancel of current operation ORA-06512: at "SYS.KUPC$QUE_INT", line 588. The procedure for taking the EXPORT dump works fine through a backend stub.

Solution:

A timeout parameter was set at the application level.

Problem:

EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully

Solution:

Resolve the Space Issue
Problem:
Processing object type SCHEMA_EXPORT/VIEW/VIEW ORA-39082: Object type VIEW:"TC120NLSDEV"."CLVW_PARKED_LOANS" created with compilation warnings Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Job "TC120NLSDEV"."SYS_IMPORT_FULL_01" stopped due to fatal error at 02:34:05

Solution:
When I checked the temporary tablespace, it was 100% full, so I increased it as follows:
alter database tempfile '/tempfilelocateion/temp01.dbf' autoextend on maxsize 4g;
alter tablespace temp add tempfile '/tempfilelocateion/temp02.dbf' size 1g autoextend on maxsize 4g;
The issue got resolved.
Then I attached to the existing failed job as follows:
impdp tcdb/tcdb attach='SYS_IMPORT_FULL_01'
Problem:
ORA-31626: job does not exist ORA-31633: unable to create master table "FPPERF.SYS_IMPORT_SCHEMA_05" ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95 ORA-06512: at "SYS.KUPV$FT", line 1048 ORA-60019: Creating initial extent of size 14 in tablespace of extent size 8
Cause:
The extent size of the tablespace was different in the source and target.
Solution:
Both source and target must have the same extent size. Checking showed the source extent size was 128K and the target was 64K; after making them match, the import completed successfully. The extent sizes can be checked as sketched below.
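The extent sizes on both sides can be compared with a quick dictionary query (standard view; run on both source and target):
select tablespace_name, initial_extent, next_extent from dba_tablespaces where tablespace_name = '&TS_NAME';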
Problem:
ORA-00955: name is already used by an existing object
Solution:
In the following case the TEST schema does not exist and the command uses content=data_only.
Example:
impdp system/sysdec2012@TRAININGDB schemas=test parallel=2 DIRECTORY=DATA_DUMP DUMPFILE=test_%U.dmp LOGFILE=test.log JOB_NAME=testJob content=data_only
Error:
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UPATE_TD_ROW_IMP [62]
TABLE_DATA:"TEST"."TAB"
ORA-31603: object "TAB" of type TABLE not found in schema "TEST"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 8364
As the above job is still registered, we need to kill_job.
If we re-execute the statement with the same job name but content=metadata_only, we get the following error:
impdp system/sysdec2012@TRAININGDB schemas=test parallel=2 DIRECTORY=DATA_DUMP DUMPFILE=test_%U.dmp LOGFILE=test.log JOB_NAME=testJob content=metadata_only
Error:
Import: Release 11.2.0.2.0 - Production on Fri May 3 19:15:10 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-31633: unable to create master table "SYSTEM.TESTJOB"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1020
ORA-00955: name is already used by an existing object
To avoid this error kill the job by attaching to the existing job as follows:
impdp system/sysdec2012@TRAININGDB attach=testJob
Import> kill_job
Are you sure you wish to stop this job ([yes]/no): y
Problem:
impdp TC120NLSDEV/TC120NLSDEV@KERNLS DIRECTORY=EXPIMP DUMPFILE=TC12.DMP LOGFILE=TC12.log REMAP_SCHEMA=TCBNK12R2:TC120NLSDEV REMAP_TABLESPACE=TCBNK12R2:TC120NLSDEV TRANSFORM=SEGMENT_ATTRIBUTES:n TRANSFORM=OID:n EXCLUDE=GRANT IGNORE=Y
Error occurred:
Schema SYSTEM is needed to import this object, but is unaccessible,

Solution:
This happened because the export was done as SYSTEM. So either use the SYSTEM schema for the import, or grant sufficient privileges to the schema used in impdp.
Problem:
Job "TC12P01"."SYS_IMPORT_FULL_01" stopped due to fatal error at 18:43:10 in schema TC12P01 [DB:KERNLS] server: 10.184.74.172
Solution: the import failed due to a bug in Oracle Database 11g Enterprise Edition Release 11.2.0.1.0. We will have to take downtime and apply a patch.

(Please Note my other blog address: svgulhane wordpress IT updates)