Wednesday, 22 November 2017

Check points before and after IMPDP/EXPDP

(Please Note my other blog address: svgulhane wordpress IT updates)

  Check points before IMPDP/EXPDP

  • Check that the Oracle directory object exists and that the underlying OS directory where you want to store the dump files has been created. The directory object points to the OS path where Oracle writes the .dmp files; your later IMPDP job can then reference the same directory object.
Oracle Directory
create or replace directory IN_WIP as '/oraint2/oradata/FCIS921/dpdump/';
grant execute, read, write on directory IN_WIP to UBS105SUP;
OR
grant all on directory IN_WIP to UBS105SUP;
  • Check the free space available on the server mount point. Estimate the space the export will consume; if the filesystem fills up, the expdp job stops. In that situation you can free space by deleting unwanted files from the storage, after which the job resumes automatically. You can check the job's status by attaching to it with the ATTACH parameter.
df -h
[user@server1~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda2            184G   32G  143G  18% /
tmpfs                 7.5G     0  7.5G   0% /dev/shm
/dev/xvda1            976M  295M  631M  32% /boot
/dev/xvda5            4.8G  777M  3.8G  17% /oem
blrnas404.in.oracle.com:/export/1170
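As a quick sanity check, the mount-point free space can be compared against the estimated dump size in a script. A minimal sketch, assuming the estimate is already known (e.g. from expdp ... ESTIMATE_ONLY=YES); DUMP_DIR and NEEDED_MB are placeholder inputs, not values from this post:

```shell
#!/bin/sh
# Sketch: warn if the dump directory's filesystem lacks the estimated space.
# DUMP_DIR and NEEDED_MB are placeholders; set them to your Oracle directory
# path and the estimated dump size in MB.
DUMP_DIR=${DUMP_DIR:-/tmp}
NEEDED_MB=${NEEDED_MB:-100}

# df -Pk gives POSIX one-line-per-filesystem output in 1 KB blocks
avail_kb=$(df -Pk "$DUMP_DIR" | awk 'NR==2 {print $4}')
avail_mb=$((avail_kb / 1024))

if [ "$avail_mb" -ge "$NEEDED_MB" ]; then
    echo "OK: ${avail_mb} MB free in ${DUMP_DIR}"
else
    echo "WARNING: only ${avail_mb} MB free in ${DUMP_DIR}, need ${NEEDED_MB} MB"
fi
```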
  • Check the SYSTEM and user tablespace sizes; both should be sufficient. Note that the SYSTEM tablespace is used by Oracle for its scratch work during IMPDP, mostly while compiling objects. Space consumed in SYSTEM cannot be recovered, even if you later drop the schema. Space in the schema's own tablespace, by contrast, stores the actual data of the tables and other objects and is returned when the schema is dropped. You can check free space before firing the IMPDP command with the following query.
SET LINESIZE 300
SET PAGESIZE 225
column File_path format a60
column AUTOEXTENSIBLE format a3
column Allocated format 999,999,9999
column Free format 999,999,9999
column Maxsize format 9999999,9999
column Yet_Not_alloc_MB format 9999999,9999
select
a.tablespace_name,
sum(a.BYTES/(1024*1024))Allocated,
(b.free_space) Free,
sum(a.maxbytes/(1024*1024))Maxsize ,
abs(sum((a.maxbytes/(1024*1024))-(a.BYTES/(1024*1024)))) Yet_Not_alloc_MB
from dba_data_files a,
(select tablespace_name, round(sum(bytes)/1024/1024 ,2) as free_space
from dba_free_space group by tablespace_name) b
where a.tablespace_name like('%&TS_Name%') and a.tablespace_name=b.tablespace_name
group by a.tablespace_name,free_space
order by a.tablespace_name ;
  • Check the versions of the source and target databases (SELECT * FROM V$VERSION) to avoid the following types of error when importing a dump file into a schema on a different database version:
    IMP-00010: not a valid export file, header failed verification
    IMP-00000: Import terminated unsuccessfully
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification

    Import: Release 10.2.0.3.0 - 64bit Production on Tuesday, 07 June, 2017 09:43:10
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    ORA-39001: invalid argument value
    ORA-39000: bad dump file specification
    ORA-39142: incompatible version number 3.1 in dump file
    "/u04/oracledb/dpdump/test_version_failed.dmp"
    To avoid this issue, specify the target database version while exporting the dump. For example, to import an 11g expdp dump file into Oracle 10g, add the VERSION=10.2 parameter during export.
    From 11g expdp command:
    expdp xyz/xxxxx parfile=exp_version.par version=10.2
    Now import the dump into the 10.2.0.1 database.
    From 10g impdp command:
    impdp xyz/xxxx parfile=imp_version.par
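The export-side parfile for the example above can be generated from a script. A sketch, assuming placeholder names (XYZ, IN_WIP, and the file names are illustrative, not the exact parfile used in this post):

```shell
#!/bin/sh
# Sketch: write an export parfile that targets a lower database version.
# XYZ / IN_WIP / the file names below are placeholders.
cat > exp_version.par <<'EOF'
schemas=XYZ
directory=IN_WIP
dumpfile=xyz_v102_%U.dmp
logfile=xyz_v102.log
version=10.2
EOF
# The 11g export is then run as: expdp xyz/***** parfile=exp_version.par
echo "wrote exp_version.par"
```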
  • Check the size of the dump to be imported
Dump file size:
ls -lrt
-rw-rw-rw- 1 db1123 oinstall 12538 Nov 20 00:00 expdp-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 2210488320 Nov 20 00:00 expdp-20112014_000001.dmp
-rw-rw-rw- 1 db1123 oinstall 4497 Nov 20 00:00 expdp-tinfra-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 16900096 Nov 20 00:00 expdp-tinfra-20112014_000001.dmp
  • Check all the privileges related to expdp or impdp are granted.
set linesize 300
set pagesize 100
column Object format a30
column PRIVS format a30
column Type format a20
select P1 OBJECT, nvl(o.object_type, '(ROLE)')Type
, P2 PRIVS from (select Grantee G, Table_name p1, privilege p2
from dba_tab_privs
union
select Grantee G, null p1, granted_role p2
from dba_role_privs
union
select Grantee G, null p1, privilege p2 from dba_sys_privs),
dba_objects o
where G = '&USER'
and p1 = o.object_name(+);
  • Grant all the basic privileges required.
grant create session to &&USR;
create public synonym dbms_shared_pool for sys.dbms_shared_pool;
grant select on dba_jobs_running to &USR;
grant select on v_$database to &USR;
grant select on v_$nls_parameters to &USR;
grant select on v_$parameter to &USR;
grant select on v_$process to &USR;
grant select on v_$session to &USR;
grant select on v_$timer to &USR;
grant select on v_$instance to &USR;
grant create database link to &USR;
grant create library to &USR;
grant create materialized view to &USR;
grant create procedure to &USR;
grant create sequence to &USR;
grant create session to &USR;
grant create synonym to &USR;
grant create table to &USR;
grant create trigger to &USR;
grant create type to &USR;
grant create view to &USR;
grant execute on dbms_alert to &USR;
grant execute on dbms_application_info to &USR;
grant execute on dbms_aq to &USR;
grant execute on dbms_aqadm to &USR;
grant execute on dbms_defer to &USR;
grant execute on dbms_defer_query to &USR;
grant execute on dbms_defer_sys to &USR;
grant execute on dbms_job to &USR;
grant execute on dbms_lock to &USR;
grant execute on dbms_pipe to &USR;
grant execute on dbms_refresh to &USR;
grant execute on dbms_rls to &USR;
grant execute on dbms_shared_pool to &USR;
grant execute on dbms_sql to &USR;
grant execute on utl_file to &USR;
--grant exp_full_database to &USR; (Check with DBA before granting this privilege)
--grant imp_full_database to &USR; (Check with DBA before granting this privilege)
grant execute on SYS.UTL_MAIL to &USR;
--grant execute on SYS.CTX_DDL to &USR;
grant execute on CTXSYS.CTX_DDL to &USR;
grant execute on SYS.DBMS_CRYPTO to &USR;
grant execute on SYS.UTL_RAW to &USR;
grant select on dba_data_files to &USR;
grant select on dba_free_space to &USR;
  • Check the location of the dump and privileges granted on that path
ls -lrt
-rw-rw-rw- 1 db1123 oinstall      12538 Nov 20 00:00 expdp-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 2210488320 Nov 20 00:00 expdp-20112014_000001.dmp          ----- ~2 GB
-rw-rw-rw- 1 db1123 oinstall       4497 Nov 20 00:00 expdp-tinfra-20112014_000001.log
-rw-rw---- 1 db1123 oinstall   16900096 Nov 20 00:00 expdp-tinfra-20112014_000001.dmp   ----- ~16 MB
  • Parallel Import/Export needs to have multiple .dmp files

    If there are 6 CPUs, then 12 dump files (twice the CPU count) can be optimal for parallel processing.
    EXPDP parallel example (export.par):
    FILESIZE=10G
    compression=all
    parallel=32
    logfile=expdp_t7.log
    dumpfile=expd_t7_%u.dmp
    schemas=fcubs1201
    directory=expdir
    JOB_NAME=fcubs1201_job
    Execute
    expdp parfile=export.par
    Username: sys as sysdba
    Password:welcome1
    IMPDP Parallel Example:
    impdp system/sysdec2012@TRAININGDB schemas=test parallel=2 DIRECTORY=DATA_DUMP DUMPFILE=test_%U.dmp LOGFILE=test.log JOB_NAME=testJob content=metadata_only
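The "two dump files per CPU" rule of thumb above can be scripted when building a parfile. A sketch; the 2x multiplier is the heuristic quoted in this post, and getconf/nproc availability is assumed:

```shell
#!/bin/sh
# Sketch: derive a PARALLEL degree of twice the CPU count, with %U dump
# files so each Data Pump worker can write to its own file.
cpus=$(getconf _NPROCESSORS_ONLN 2>/dev/null || nproc)
parallel=$((cpus * 2))
echo "parallel=${parallel}"
echo "dumpfile=expdp_%U.dmp"   # %U expands to 01, 02, ... one file per worker
```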
  • Do not invoke Import as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.

  • Increase undo retention

  • Increase redo log file size

  • If the DB is in archivelog mode, check the archive log destination; it may fill up and produce the following error:

ORA-19815: WARNING: db_recovery_file_dest_size of 3211788288000 bytes is 100.00% used, and has 0 remaining bytes available.
In RMAN, run "list archivelog all" and then "delete archivelog all" to free the space.

Checkpoints after IMPDP/EXPDP

  • Check the count of the objects imported so far and compare it with the count in the source schema.
  • Check the log file generated during expdp or impdp for checking the progress of the dump.
  • Imports generally fail because the SYSTEM or user tablespace has no free space, or sometimes because the disk itself is full.
  • Check whether any user has locked the objects; if locks exist, clear them:

select 'alter system kill session '''|| sess.sid||','||sess.serial#||''' immediate;' from v$locked_object lo, dba_objects ao, v$session sess where ao.object_id = lo.object_id and lo.session_id = sess.sid;
  • If your session gets closed, you can attach to the running job: attach using its job_name and diagnose the issue with the STATUS command.
Attach to EXPDP and IMPDP :
impdp username_to_whom_job_executed/pwd attach=SYS_IMPORT_FULL_02
expdp ddl/slrmclaren@TCTOOLS attach=DDLEXP

impdp>status
impdp> STOP_JOB=IMMEDIATE
impdp>kill_job
STATUS shows the running status of the job.
STOP_JOB=IMMEDIATE stops the job; it can be started again later with START_JOB (unlike KILL_JOB).

Query to verify the expdp/impdp

select
   round(sofar/totalwork*100,2) percent_completed,
   v$session_longops.*
from
   v$session_longops
where
   sofar <> totalwork
order by
   target,
   sid;
select substr(sql_text, instr(sql_text,'"')+1,
instr(sql_text,'"', 1, 2)-instr(sql_text,'"')-1)
table_name,
rows_processed,
round((sysdate
- to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))
*24*60, 1) minutes,
trunc(rows_processed /
((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))
*24*60)) rows_per_min
from
v$sqlarea
where
upper(sql_text) like 'INSERT % INTO "%'
and
command_type = 2
and
open_versions > 0;
select
sid,
serial#
from
v$session s,
dba_datapump_sessions d
where
s.saddr = d.saddr;
select
sid,
serial#,
sofar,
totalwork
from
v$session_longops;
select * from USER_DATAPUMP_JOBS;
select * from DBA_DATAPUMP_SESSIONS;
select owner_name, job_name, state from dba_datapump_jobs where state <> 'NOT RUNNING';
SET LINESIZE 300
SET PAGESIZE 200
column OWNER_NAME format a10
column JOB_NAME format a30
column OPERATION format a10
column JOB_MODE format a6
column STATE format a20
column DATAPUMP_SESSIONS format 99999
column ATTACHED_SESSIONS format 99999
select OWNER_NAME,JOB_NAME,substr(operation,1,10) operation,state, ATTACHED_SESSIONS, DATAPUMP_SESSIONS from dba_datapump_jobs
where state <> 'NOT RUNNING';


Wednesday, 15 November 2017

All about IMPDP and EXPDP


Table of Contents

Before starting EXPDP and IMPDP

Verification of status of IMPDP job

Estimation of Space Utilization while import

Finding file Location, Space Utilized, Max Space Allocated, Autoextend Y/N

Various Queries used to find issues during expdp and impdp

DBA_DATAPUMP_JOBS Table Structure

Finding the Job_name from below views

Keywords in EXPDP and IMPDP

The Commands are used while in interactive mode.

Problem and Solution:

Before starting EXPDP and IMPDP

  • Check that the Oracle directory object exists and that the underlying OS directory where you want to store the dump files has been created.

Oracle Directory

create or replace directory IN_WIP as '/oraint2/oradata/FCIS921/dpdump/';
grant execute, read, write on directory IN_WIP to UBS105SUP;
OR
grant all on directory IN_WIP to UBS105SUP;
  • Check the free space available on the server mount point

df -h
[user@server1~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda2            184G   32G  143G  18% /
tmpfs                 7.5G     0  7.5G   0% /dev/shm
/dev/xvda1            976M  295M  631M  32% /boot
/dev/xvda5            4.8G  777M  3.8G  17% /oem
blrnas404.in.oracle.com:/export/1170

  • Check the SYSTEM and user tablespace sizes; they should be sufficient


SET LINESIZE 300
SET PAGESIZE 225
column File_path format a60
column AUTOEXTENSIBLE format a3
column Allocated format 999,999,9999
column Free format 999,999,9999
column Maxsize format 9999999,9999
column Yet_Not_alloc_MB format 9999999,9999
select
a.tablespace_name,
sum(a.BYTES/(1024*1024))Allocated,
(b.free_space) Free,
sum(a.maxbytes/(1024*1024))Maxsize ,
abs(sum((a.maxbytes/(1024*1024))-(a.BYTES/(1024*1024)))) Yet_Not_alloc_MB
from dba_data_files a,
(select tablespace_name, round(sum(bytes)/1024/1024 ,2) as free_space
from dba_free_space group by tablespace_name) b
where a.tablespace_name like('%&TS_Name%') and a.tablespace_name=b.tablespace_name
group by a.tablespace_name,free_space
order by a.tablespace_name ;

  • Check the version of the source and target database


SELECT * FROM V$VERSION

  • Check the size of the dump to be imported


Dump file size:
ls -lrt
-rw-rw-rw- 1 db1123 oinstall 12538 Nov 20 00:00 expdp-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 2210488320 Nov 20 00:00 expdp-20112014_000001.dmp  ----- 2gb
-rw-rw-rw- 1 db1123 oinstall 4497 Nov 20 00:00 expdp-tinfra-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 16900096 Nov 20 00:00 expdp-tinfra-20112014_000001.dmp  -----16 mb





  • Check all the privileges related to expdp or impdp are granted.


set linesize 300
set pagesize 100
column Object format a30
column PRIVS format a30
column Type format a20
select P1 OBJECT, nvl(o.object_type, '(ROLE)')Type
, P2 PRIVS from (select Grantee G, Table_name p1, privilege p2
from dba_tab_privs
union
select Grantee G, null p1, granted_role p2
from dba_role_privs
union
select Grantee G, null p1, privilege p2 from dba_sys_privs),
dba_objects o
where G = '&USER'
and p1 = o.object_name(+);
  • Grant all the basic privileges required.

grant create session to &&USR;
create public synonym dbms_shared_pool for sys.dbms_shared_pool;
grant select on dba_jobs_running to &USR;
grant select on v_$database to &USR;
grant select on v_$nls_parameters to &USR;
grant select on v_$parameter to &USR;
grant select on v_$process to &USR;
grant select on v_$session to &USR;
grant select on v_$timer to &USR;
grant select on v_$instance to &USR;
grant create database link to &USR;
grant create library to &USR;
grant create materialized view to &USR;
grant create procedure to &USR;
grant create sequence to &USR;
grant create session to &USR;
grant create synonym to &USR;
grant create table to &USR;
grant create trigger to &USR;
grant create type to &USR;
grant create view to &USR;
grant execute on dbms_alert to &USR;
grant execute on dbms_application_info to &USR;
grant execute on dbms_aq to &USR;
grant execute on dbms_aqadm to &USR;
grant execute on dbms_defer to &USR;
grant execute on dbms_defer_query to &USR;
grant execute on dbms_defer_sys to &USR;
grant execute on dbms_job to &USR;
grant execute on dbms_lock to &USR;
grant execute on dbms_pipe to &USR;
grant execute on dbms_refresh to &USR;
grant execute on dbms_rls to &USR;
grant execute on dbms_shared_pool to &USR;
grant execute on dbms_sql to &USR;
grant execute on utl_file to &USR;
--grant exp_full_database to &USR; (Check with DBA before granting this privilege)
--grant imp_full_database to &USR; (Check with DBA before granting this privilege)
grant execute on SYS.UTL_MAIL to &USR;
--grant execute on SYS.CTX_DDL to &USR;
grant execute on CTXSYS.CTX_DDL to &USR;
grant execute on SYS.DBMS_CRYPTO to &USR;
grant execute on SYS.UTL_RAW to &USR;
grant select on dba_data_files to &Usr;
grant select on dba_free_space to &Usr;
  • Check the location of the dump and privileges granted on that path

ls -lrt
-rw-rw-rw- 1 db1123 oinstall 12538 Nov 20 00:00 expdp-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 2210488320 Nov 20 00:00 expdp-20112014_000001.dmp  ----- 2gb
-rw-rw-rw- 1 db1123 oinstall 4497 Nov 20 00:00 expdp-tinfra-20112014_000001.log
-rw-rw---- 1 db1123 oinstall 16900096 Nov 20 00:00 expdp-tinfra-20112014_000001.dmp  -----16 mb



  • Increase undo retention
  • Increase redo log file size

  • If the DB is in archivelog mode, check the archive log destination; it may fill up:
ORA-19815: WARNING: db_recovery_file_dest_size of 3211788288000 bytes is 100.00% used, and has 0 remaining bytes available.
    In RMAN, run "list archivelog all" and then "delete archivelog all" to free the space.
  • Do not invoke Import as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.
  • Check the syntax and keywords used in the command

  • Check the count of the objects imported so far and find where it stopped.

  • Check the log file generated during expdp or impdp for checking the progress of the dump.

  • Imports generally fail because the SYSTEM or user tablespace has no free space, or sometimes because the disk itself is full.

  • Check whether any user has locked the objects; if locks exist, clear them

  • Parallel import needs to have multiple .dmp files

  • If there are 6 CPUs, then 12 dump files (twice the CPU count) can be optimal for parallel processing.

  • Importing data from a different version: include the VERSION parameter in the impdp parameters as well, matching the version specified while exporting the dump (e.g. VERSION=10.2).
  • If your session gets closed, you can attach to the running job: attach using its job_name and diagnose the issue with the STATUS command.

Example:

impdp username_to_whom_job_executed/pwd attach=SYS_IMPORT_FULL_02
expdp ddl/slrmclaren@TCTOOLS attach=DDLEXP

STATUS shows the running status of the job.
STOP_JOB=IMMEDIATE stops the job; it can be started again later with START_JOB (unlike KILL_JOB).
impdp>status
impdp> STOP_JOB=IMMEDIATE
impdp>kill_job


Verification of status of IMPDP job

Basic check at UNIX level to see whether the import process is active:
ps -ef |grep impdp
[ora11g22@ofss220004 impdp]$ ps -ef |grep impdp
ora11g22 3603 25472 0 12:00 pts/0 00:00:00 /bin/sh ./impdp.sh
ora11g22 3604 3603 0 12:00 pts/0 00:00:01 impdp parfile=/scratch/software/impdp/impdp.par
ora11g22 6808 6663 0 12:28 pts/9 00:00:02 impdp parfile=impdp.par
ora11g22 8929 25472 0 12:47 pts/0 00:00:00 grep impdp
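The ps | grep check above can be wrapped in a small helper before deciding whether to attach. A sketch; the pattern "impdp" is simply the client binary name, and pgrep availability is assumed:

```shell
#!/bin/sh
# Sketch: check whether a Data Pump client process is still alive before
# deciding to attach to its job.
is_datapump_running() {
    pgrep -f "$1" > /dev/null   # exit 0 if a matching process exists
}

if is_datapump_running "impdp"; then
    echo "impdp client running; attach with: impdp user/pwd attach=JOB_NAME"
else
    echo "no impdp client process found"
fi
```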
Attach to the job and check its progress :-
Step 1) Find the job name to be attach
Open the import log file; within the first few lines it will show:
Master table "TCBNK12R2"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Here SYS_IMPORT_SCHEMA_01 is the job_name of the running import process.
Step 2) At Unix prompt type
impdp username/password attach=job_name
impdp TCBNK12R2/TCBNK12R2 attach=SYS_IMPORT_SCHEMA_01
The output will be something like below :-
Import: Release 11.2.0.2.0 - Production on Thu Aug 2 12:54:50 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Job: SYS_IMPORT_SCHEMA_01
Owner: TCBNK12R2
Operation: IMPORT
Creator Privs: TRUE
GUID: C6438DC2127F0AB5E040B80A9C870E15
Start Time: Thursday, 02 August, 2012 12:00:57
Mode: SCHEMA
Instance: DONTFAIL
Max Parallelism: 1
EXPORT Job Parameters:
CLIENT_COMMAND TCBNK12R2/********@TC120R2 parfile=parfile_TCBNK12R2_FB.par
IMPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND TCBNK12R2/******** parfile=/scratch/software/impdp/impdp.par
TABLE_EXISTS_ACTION APPEND
State: EXECUTING
Bytes Processed: 3,259,068,472
Percent Done: 99
Current Parallelism: 1
Job Error Count: 0
Dump File: /scratch/software/impdp/TCBNK_12.0.0_ITR2_FINAL.DMP
Worker 1 Status:
Process Name: DW00
State: EXECUTING
Object Schema: TCBNK12R2
Object Name: ORPKS_ORDMURKE_KERNEL
Object Type: SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Completed Objects: 800
Worker Parallelism: 1
Import>
Here State is the job state: EXECUTING means the job is running; otherwise it will be UNKNOWN, STOPPED, etc.
The job status can also be found via the query below (if you have privileged database access):
SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
OWNER_NAME JOB_NAME             OPERATION  JOB_MODE   STATE
---------- -------------------- ---------- ---------- ------------
SYSTEM     SYS_EXPORT_FULL_01   EXPORT     FULL       EXECUTING
To start/resume the job interactively:-
Import> start_job
To check status of the job :-
Import> status
To exit out of Import> console
Import> exit_client

Estimation of Space Utilization while import

Dump size: 1.85 GB
Space used by schema: at most 2 GB
SYSTEM tablespace used: 15 GB
Total time taken: 4.20 min

Another import estimation:

Dump size: 73 GB
Space used after import: 210 GB
SYSTEM tablespace consumed: 1 GB

Space utilization in RMAN:
A 120 GB database required around 14 GB of disk space, about 12% of the total DB size.
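The rough ratios above can be turned into a quick planning calculation. A sketch; the ~3x (space after import vs. dump size) and ~12% (RMAN area vs. DB size) multipliers are observations from this post, not Oracle-documented figures:

```shell
#!/bin/sh
# Sketch: rough space planning. DUMP_GB and DB_GB are example inputs;
# the 3x and 12% multipliers come from the observations above.
DUMP_GB=${DUMP_GB:-73}
DB_GB=${DB_GB:-120}

schema_est=$((DUMP_GB * 3))       # ~3x dump size after import
rman_est=$((DB_GB * 12 / 100))    # ~12% of DB size for the RMAN backup area

echo "estimated schema space after import: ${schema_est} GB"
echo "estimated RMAN backup space: ${rman_est} GB"
```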

Finding file Location, Space Utilized, Max Space Allocated, Autoextend Y/N

SET LINESIZE 300
SET PAGESIZE 225
column File_path format a60
column AUTOEXTENSIBLE  format a3
column used_G format 999,999
column maxsize_G format 999,999
column user_bytes format 9999999
select
a.file_name File_path,
a.BYTES/(1024*1024*1024)used_G,
a.maxbytes/(1024*1024*1024)maxsize_G ,
a.AUTOEXTENSIBLE
, b.creation_time
, a.user_bytes/(1024*1024) user_bytes
-- , a.INCREMENT_BY
from dba_data_files a,v$datafile b
where a.file_id =b.file#
and a.tablespace_name like('%&TS_Name%')
and a.file_name like('%&File_Name%')
order by b.creation_time;

Various Queries used to find issues during expdp and impdp

Query 1:

You can observe v$session_longops to monitor some parts of an Oracle export job.
select
   round(sofar/totalwork*100,2)  percent_completed,
   v$session_longops.*
from
   v$session_longops
where
   sofar <> totalwork
order by
   target,
   sid;

Query 2:

col table_name format a30
select substr(sql_text, instr(sql_text,'"')+1,
               instr(sql_text,'"', 1, 2)-instr(sql_text,'"')-1)
          table_name,
       rows_processed,
       round((sysdate
              - to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))
             *24*60, 1) minutes,
       trunc(rows_processed /
                ((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))
             *24*60)) rows_per_min
from
   v$sqlarea
where
  upper(sql_text) like 'INSERT % INTO "%'
  and
  command_type = 2
  and
  open_versions > 0;

Query 3:

select
   sid,
   serial#
from
   v$session s,
   dba_datapump_sessions d
where
   s.saddr = d.saddr;

select
   sid,
   serial#,
   sofar,
   totalwork
from
   v$session_longops;

Query 4: (Job Name related)
DBA_DATAPUMP_JOBS Table Structure
OWNER_NAME         VARCHAR2(30)   Logged-in schema name from which the job was executed
JOB_NAME           VARCHAR2(30)   System-defined name, e.g. SYS_EXPORT_SCHEMA_15, or specified in the expdp command with job_name=new_job
OPERATION          VARCHAR2(30)   IMPORT or EXPORT
JOB_MODE           VARCHAR2(30)   FULL
STATE              VARCHAR2(30)   EXECUTING or NOT RUNNING
DEGREE             NUMBER
ATTACHED_SESSIONS  NUMBER
DATAPUMP_SESSIONS  NUMBER
Finding the Job_name from below views
select * from USER_DATAPUMP_JOBS;
select * from DBA_DATAPUMP_SESSIONS;

select owner_name, job_name, state from dba_datapump_jobs where state <> 'NOT RUNNING';
SET LINESIZE 300
SET PAGESIZE 200
column OWNER_NAME format a10
column JOB_NAME format a30
column OPERATION format a10
column JOB_MODE format a6
column STATE format a20
column DATAPUMP_SESSIONS format 99999
column ATTACHED_SESSIONS format 99999
select OWNER_NAME,JOB_NAME,substr(operation,1,10) operation,state, ATTACHED_SESSIONS, DATAPUMP_SESSIONS from dba_datapump_jobs
where state <> 'NOT RUNNING';
select * from dba_datapump_jobs;
select job_name,state from dba_datapump_jobs;
You can drop the master tables returned by the query below if the corresponding job state is NOT RUNNING:
column status format a10
SELECT o.status, o.object_id, o.object_type,o.owner||'.'||object_name "OWNER.OBJECT" FROM dba_objects o, dba_datapump_jobs j WHERE o.owner=j.owner_name AND o.object_name=j.job_name AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;
select state, job_name from dba_datapump_jobs where owner_name = 'APACK2114';
SELECT o.status, o.object_id, o.object_type,o.owner||'.'||object_name "OWNER.OBJECT" FROM dba_objects o, dba_datapump_jobs j WHERE o.owner=j.owner_name AND o.object_name=j.job_name AND j.owner_name like 'APACK2114';
Dropping table which is having state as Not Running.
drop table APACK2114.SYS_IMPORT_FULL_01;
Query 5: Verification of object counts in both the target and source schemas.
select count(*) from dba_objects where owner='ALBBSUP';

Keywords in EXPDP and IMPDP

The available keywords and their descriptions follow. Default values are listed within square brackets.
ATTACH
Attach to an existing job.
For example, ATTACH=job_name.
CLUSTER
Utilize cluster resources and distribute workers across the Oracle RAC.
Valid keyword values are: [Y] and N.
CONTENT
Specifies data to load.
Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.
DATA_OPTIONS
Data layer option flags.
Valid keywords are: SKIP_CONSTRAINT_ERRORS.
DIRECTORY
Directory object to be used for dump, log and SQL files.
DUMPFILE
List of dump files to import from [expdat.dmp].
For example, DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD
Password key for accessing encrypted data within a dump file.
Not valid for network import jobs.
ESTIMATE
Calculate job estimates.
Valid keywords are: [BLOCKS] and STATISTICS.
EXCLUDE
Exclude specific object types.
For example, EXCLUDE=SCHEMA:"='HR'".
FLASHBACK_SCN
SCN used to reset session snapshot.
FLASHBACK_TIME
Time used to find the closest corresponding SCN value.
FULL
Import everything from source [Y].
HELP
Display help messages [N].
INCLUDE
Include specific object types.
For example, INCLUDE=TABLE_DATA.
INCLUDE=PACKAGE,FUNCTION
INCLUDE=PROCEDURE:"LIKE'XYZ%'"
INCLUDE=TABLE:"LIKE'XYZ%'"
INCLUDE=VIEW:"LIKE'XYZ%'"
Example include=package option:
userid=TC1203DEMO/TC1203DEMO@KERELCM
DIRECTORY=IN_WIP
DUMPFILE=TCBNK_12.0.3.0.0_BNK1203OSDC_FULL_DUMP.DMP
LOGFILE=TCBNK_12.0.3.0.0_BNK1203OSDC_FULL_DUMP2.LOG
REMAP_SCHEMA=BNK1203OSDC:TC1203DEMO
REMAP_TABLESPACE=BNK1203OSDC:TC1203DEMO
TRANSFORM=OID:n
INCLUDE=PACKAGE
IGNORE=Y
GRANTS=Y
INDEXES=Y
ROWS=Y
CONSTRAINTS=Y

The following commands are used while in interactive mode.

Note: abbreviations are allowed.
CONTINUE_CLIENT
Return to logging mode. Job will be restarted if idle.
EXIT_CLIENT
Quit client session and leave job running.
HELP
Summarize interactive commands.
KILL_JOB
Detach and delete job.
PARALLEL
Change the number of active workers for current job.
START_JOB
Start or resume current job.
Valid keywords are: SKIP_CURRENT.
STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.
STOP_JOB
Orderly shutdown of job execution and exits the client.
Valid keywords are: IMMEDIATE.


Problem and Solution:

Problem:

“segmentation error” on user site

When executing from the server as bdbcopy, a "CSTB_ITEM_DESC not found" error occurs
(schema_name.object_name missing in TABLES=)
User command:
impdp bdbcopy/bdbcopy TABLES=CSTB_ITEM_DESC DIRECTORY=EXPDPDIR CONTENT=DATA_ONLY DUMPFILE=bdbmadrid.dmp LOGFILE=bdbmadrid_imp_labels.log;
Solution: (need to specify schema_name.object_name)
Changed Command:
impdp bdbcopy/********@WEUTF11G TABLES=BDBMADRID.CSTB_ITEM_DESC DIRECTORY=EXPDPDIR CONTENT=DATA_ONLY DUMPFILE=bdbmadrid.dmp LOGFILE=New.log REMAP_SCHEMA=BDBMADRID:BDBCOPY REMAP_TABLESPACE=BDBMADRID:BDBCOPY

Problem:

Unable to create master table. Insufficient Privileges

Solution:

Required basic privilege granted.

Problem:

Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Job "BDBFFT"."SYS_IMPORT_FULL_01" stopped due to fatal error. Instance name WEUTF11G at server 10.184.74.181.

Solution:

From the log file it was also found: unable to create INITIAL extent for segment in tablespace FBLDEV1.
The tablespace was increased and the issue was resolved.

Problem:

Export file created by EXPORT:V09.02.00 via direct path
IMP-00013: only a DBA can import a file exported by another DBA
IMP-00000: Import terminated unsuccessfully

Solution:

A full grant was given to the user, after which the import was restarted.

Problem:

I am trying to run EXPDP from a FLEXCUBE front-end screen. While doing so I get errors like ORA-06512: at "SYS.KUPC$QUE_INT", line 588 ORA-01013: user requested cancel of current operation ORA-06512: at "SYS.KUPC$QUE_INT", line 588. The procedure for taking the export dump works fine through a backend stub.

Solution:

A timeout parameter was set at the application level.

Problem:

EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully

Solution:

Resolve the Space Issue
Problem:
Processing object type SCHEMA_EXPORT/VIEW/VIEW
ORA-39082: Object type VIEW:"TC120NLSDEV"."CLVW_PARKED_LOANS" created with compilation warnings
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Job "TC120NLSDEV"."SYS_IMPORT_FULL_01" stopped due to fatal error at 02:34:05

Solution:
When I checked the temporary tablespace, it was 100% full, so I increased it as follows:
alter database tempfile '/tempfilelocation/temp01.dbf' autoextend on maxsize 4g;
alter tablespace temp add tempfile '/tempfilelocation/temp02.dbf' size 1g autoextend on maxsize 4g;
The issue got resolved.
Then I attached to the existing failed job as follows:
impdp tcdb/tcdb attach=SYS_IMPORT_FULL_01
Problem:
ORA-31626: job does not exist ORA-31633: unable to create master table "FPPERF.SYS_IMPORT_SCHEMA_05" ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95 ORA-06512: at "SYS.KUPV$FT", line 1048 ORA-60019: Creating initial extent of size 14 in tablespace of extent size 8
Cause:
The extent size of the tablespace differed between source and target.
Solution:
Source and target must have the same extent size. On checking, the source extent size was 128K and the target was 64K; after making them match, the import completed successfully.
Problem:
ORA-00955: name is already used by an existing object
Solution:
In the following case the TEST schema does not exist and the command uses content=data_only:
Example:
impdp system/sysdec2012@TRAININGDB schemas=test parallel=2 DIRECTORY=DATA_DUMP DUMPFILE=test_%U.dmp LOGFILE=test.log JOB_NAME=testJob content=data_only
Error:
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.UPATE_TD_ROW_IMP [62]
TABLE_DATA:"TEST"."TAB"
ORA-31603: object "TAB" of type TABLE not found in schema "TEST"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 8364
As the above job is still executing, we need to KILL_JOB first.
If we re-execute the following statement with the same job name but content=metadata_only, we get this error:
impdp system/sysdec2012@TRAININGDB schemas=test parallel=2 DIRECTORY=DATA_DUMP DUMPFILE=test_%U.dmp LOGFILE=test.log JOB_NAME=testJob content=metadata_only
Error:
Import: Release 11.2.0.2.0 - Production on Fri May 3 19:15:10 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-31633: unable to create master table "SYSTEM.TESTJOB"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1020
ORA-00955: name is already used by an existing object
To avoid this error kill the job by attaching to the existing job as follows:
impdp system/sysdec2012@TRAININGDB attach=testJob
Import> kill_job
Are you sure you wish to stop this job ([yes]/no): y
Problem:
impdp TC120NLSDEV/TC120NLSDEV@KERNLS DIRECTORY=EXPIMP DUMPFILE=TC12.DMP LOGFILE=TC12.log REMAP_SCHEMA=TCBNK12R2:TC120NLSDEV REMAP_TABLESPACE=TCBNK12R2:TC120NLSDEV TRANSFORM=SEGMENT_ATTRIBUTES:n TRANSFORM=OID:n EXCLUDE=GRANT IGNORE=Y
Error occurred:
Schema SYSTEM is needed to import this object, but is unaccessible,

Solution:
This happened because the export was done as SYSTEM. Either perform the import as SYSTEM or grant sufficient privileges to the schema used in impdp.
Problem:
Job "TC12P01"."SYS_IMPORT_FULL_01" stopped due to fatal error at 18:43:10 in schema TC12P01 [DB:KERNLS] server: 10.184.74.172
Solution: the import failed due to a bug in Oracle Database 11g Enterprise Edition Release 11.2.0.1.0. We will have to get downtime and apply a patch.
