

Export in Oracle

Oracle Utilities and Tools
Export

The Export utility enables you to take a logical snapshot of the database, with the scope chosen by the user, and writes it to a single dump file or to multiple files of a size the user specifies.
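For example, a full database export split into 2 GB pieces might be invoked as follows (the file names and sizes here are illustrative, not prescriptive):

```shell
# Full export written as multiple dump files of at most 2 GB each;
# Export rolls over to the next file when FILESIZE is reached.
exp system/manager FULL=y FILE=(full1.dmp,full2.dmp,full3.dmp) FILESIZE=2048M LOG=full_exp.log
```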

Because an export is only a logical snapshot of the database, user schema, tablespace, or table, neither archived log files nor online redo log files can be applied to roll it forward. Roll-forward recovery is possible only when an online or offline (physical) backup of the database is available. With an export you can only restore the database, tablespace, user schema, or table to the point in time at which the export was taken; it cannot be recovered beyond that point, and the resulting data loss must be tolerated.

Exporting Data for Added Protection and Flexibility: Export as an Additional Backup Tool

Because the Oracle Export utility can selectively export specific objects, consider exporting portions or all of a database for supplemental protection and flexibility in a database's backup strategy. This strategy is especially useful for logical backups of the RMAN recovery catalog, because you can quickly reimport this data into any database and rebuild the catalog if the recovery catalog database is lost.
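As a sketch, a logical backup of an RMAN recovery catalog might look like this, assuming the catalog owner is a user named rman (your catalog schema name may differ):

```shell
# Export only the recovery catalog owner's schema (user mode export).
exp system/manager OWNER=rman FILE=rcat.dmp LOG=rcat_exp.log

# If the catalog database is lost, re-import the schema into any
# database and rebuild the catalog from it.
imp system/manager FROMUSER=rman TOUSER=rman FILE=rcat.dmp IGNORE=y LOG=rcat_imp.log
```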

Database exports are not a substitute for whole database backups and cannot provide the same complete recovery advantages that the built-in functionality of Oracle offers. For example, you cannot apply archived logs to logical backups in order to update lost changes. An export provides a snapshot of the logical data (tables, stored procedures, and so forth) in a database when the export was made.

Using Different Releases of Export and Import (Source: Oracle Documentation)

Export from -> Import to    Use Export Release    Use Import Release
7.3.3  -> 8.1.6             7.3.3                 8.1.6
8.1.6  -> 8.1.6             8.1.6                 8.1.6
8.1.5  -> 8.0.6             8.0.6                 8.0.6
8.1.7  -> 8.1.6             8.1.6                 8.1.6
8.1.7  -> 7.3.4             7.3.4                 7.3.4
9.0.1  -> 8.1.6             8.1.6                 8.1.6
9.0.1  -> 9.0.2             9.0.1                 9.0.2

Restrictions When Using Different Releases and Versions of Export and Import


The following restrictions apply when you are using different releases of Export and Import:

Export dump files can be read only by the Import utility because they are stored in a special binary format.
Any export dump file can be imported into a higher release of the Oracle database server.
The Import utility can read export dump files created by Export release 5.1.22 and higher.
The Import utility cannot read export dump files created by the Export utility of a higher maintenance release or version. For example, a release 8.1 export dump file cannot be imported by a release 8.0 Import utility, and a version 8 export dump file cannot be imported by a version 7 Import utility.
The Oracle version 6 (or earlier) Export utility cannot be used against an Oracle8 or higher database.
Whenever a lower version of the Export utility runs with a higher version of the Oracle database server, categories of database objects that did not exist in the lower version are excluded from the export. For example, partitioned tables did not exist in the Oracle database server version 7. So, if you need to move a version 8 partitioned table to a version 7 database, you must first reorganize the table into a nonpartitioned table.
Export files generated by Oracle9i Export, either direct path or conventional path, are incompatible with earlier releases of Import and can be imported only with Oracle9i Import. When backward compatibility is an issue, use the earlier release or version of the Export utility against the Oracle9i database.


Considerations When Exporting Database Objects
The following sections describe points you should consider when you export particular database objects.
Exporting Sequences
If transactions continue to access sequence numbers during an export, sequence numbers can be skipped. The best way to ensure that sequence numbers are not skipped is to ensure that the sequences are not accessed during the export.

Sequence numbers can be skipped only when cached sequence numbers are in use. When a cache of sequence numbers has been allocated, they are available for use in the current database. The exported value is the next sequence number (after the cached values). Sequence numbers that are cached, but unused, are lost when the sequence is imported.

Exporting LONG and LOB Datatypes
On export, LONG datatypes are fetched in sections. However, enough memory must be available to hold all of the contents of each row, including the LONG data.

LONG columns can be up to 2 gigabytes in length.

All data in a LOB column does not need to be held in memory at the same time. LOB data is loaded and unloaded in sections.

Exporting Foreign Function Libraries
The contents of foreign function libraries are not included in the export file. Instead, only the library specification (name, location) is included in full database and user mode export. You must move the library's executable files and update the library specification if the database is moved to a new location.

Exporting Offline Bitmapped Tablespaces
If the data you are exporting contains offline bitmapped tablespaces, Export will not be able to export the complete tablespace definition and will display an error message. You can still import the data; however, you must first create the offline bitmapped tablespaces before importing to prevent DDL commands that may reference the missing tablespaces from failing.
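For instance, before importing you might pre-create a missing locally managed tablespace; the tablespace name and file path below are examples only:

```sql
-- Pre-create the locally managed tablespace that was offline at export
-- time, so DDL in the dump file that references it does not fail on import.
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL;
```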

Exporting Directory Aliases
Directory alias definitions are included only in a full database mode Export. To move a database to a new location, the database administrator must update the directory aliases to point to the new location.

Directory aliases are not included in user or table mode Export. Therefore, you must ensure that the directory alias has been created on the target system before the directory alias is used.

Exporting BFILE Columns and Attributes
The export file does not hold the contents of external files referenced by BFILE columns or attributes. Instead, only the names and directory aliases for files are copied on Export and restored on Import. If you move the database to a location where the old directories cannot be used to access the included files, the database administrator (DBA) must move the directories containing the specified files to a new location where they can be accessed.

External Tables
The contents of external tables are not included in the export file. Instead, only the table specification (name, location) is included in full database and user mode export. You must manually move the external data and update the table specification if the database is moved to a new location.

Exporting Object Type Definitions
In all Export modes, the Export utility includes information about object type definitions used by the tables being exported. The information, including object name, object identifier, and object geometry, is needed to verify that the object type on the target system is consistent with the object instances contained in the export file. This ensures that the object types needed by a table are created with the same object identifier at import time.

Note, however, that in table, user, and tablespace mode, the export file does not include a full object type definition needed by a table if the user running Export does not have execute access to the object type. In this case, only enough information is written to verify that the type exists, with the same object identifier and the same geometry, on the import target system.

The user must ensure that the proper type definitions exist on the target system, either by working with the DBA to create them, or by importing them from full database or user mode exports performed by the DBA.

It is important to perform a full database mode export regularly to preserve all object type definitions. Alternatively, if object type definitions from different schemas are used, the DBA should perform a user mode export of the appropriate set of users. For example, if table1 belonging to user scott contains a column on blake's type type1, the DBA should perform a user mode export of both blake and scott to preserve the type definitions needed by the table.
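The scott/blake example above could be captured with a single user mode export performed from a DBA account:

```shell
# User mode export of both schemas, so blake's type definition
# travels with scott's table that depends on it.
exp system/manager OWNER=(scott,blake) FILE=scott_blake.dmp LOG=scott_blake.log
```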

Exporting Nested Tables
Inner nested table data is exported whenever the outer containing table is exported. Although inner nested tables can be named, they cannot be exported individually.

Exporting Advanced Queue (AQ) Tables
Queues are implemented on tables. The export and import of queues constitutes the export and import of the underlying queue tables and related dictionary tables. You can export and import queues only at queue table granularity.

When you export a queue table, both the table definition information and queue data are exported. Because the queue table data is exported as well as the table definition, the user is responsible for maintaining application-level data integrity when queue table data is imported.

Exporting Synonyms
You should be cautious when exporting compiled objects that reference a name used as a synonym and as another object. Exporting and importing these objects will force a recompilation that could result in changes to the object definitions.

The following example helps to illustrate this problem:

CREATE PUBLIC SYNONYM emp FOR scott.emp;

CONNECT blake/paper;
CREATE TRIGGER t_emp BEFORE INSERT ON emp BEGIN NULL; END;
CREATE VIEW emp AS SELECT * FROM dual;


If the database in the preceding example were exported, the reference to emp in the trigger would refer to blake's view rather than to scott's table. This would cause an error when Import tried to reestablish the t_emp trigger.

Possible Export Errors Related to Java Synonyms
If an export operation attempts to export a synonym named DBMS_JAVA when there is no corresponding DBMS_JAVA package or when Java is either not loaded or loaded incorrectly, the export will terminate unsuccessfully. The error messages that are generated include, but are not limited to, the following: EXP-00008, ORA-00904, and ORA-29516.

If Java is enabled, make sure that both the DBMS_JAVA synonym and DBMS_JAVA package are created and valid before rerunning the export.

If Java is not enabled, remove Java-related objects before rerunning the export.
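A quick way to check the state of the Java objects before rerunning the export is a dictionary query such as the following (run as a privileged user):

```sql
-- The DBMS_JAVA synonym and package should both show STATUS = 'VALID'
-- when Java is correctly installed.
SELECT owner, object_name, object_type, status
  FROM dba_objects
 WHERE object_name = 'DBMS_JAVA';
```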


Support for Fine-Grained Access Control
You can export tables with fine-grained access control policies enabled. When doing so, consider the following:
The user who imports from an export file containing such tables must have the appropriate privileges (specifically, the EXECUTE privilege on the DBMS_RLS package so that the tables' security policies can be reinstated). If a user without the correct privileges attempts to export a table with fine-grained access policies enabled, only those rows that the exporter is privileged to read will be exported.
If fine-grained access control is enabled on a SELECT statement, then conventional path Export may not export the entire table because fine-grained access may rewrite the query.
Only user SYS, or a user with the EXP_FULL_DATABASE role enabled or who has been granted EXEMPT ACCESS POLICY, can perform direct path Exports on tables having fine-grained access control.


Transportable Tablespaces
The transportable tablespace feature enables you to move a set of tablespaces from one Oracle database to another.
To move or copy a set of tablespaces, you must make the tablespaces read-only, copy the datafiles of these tablespaces, and use Export and Import to move the database information (metadata) stored in the data dictionary. Both the datafiles and the metadata export file must be copied to the target database. The transport of these files can be done using any facility for copying flat binary files, such as the operating system copying facility, binary-mode FTP, or publishing on CD-ROMs.

After copying the datafiles and exporting the metadata, you can optionally put the tablespaces in read/write mode.

Export provides the following parameters to enable export of transportable tablespace metadata.

TABLESPACES
TRANSPORT_TABLESPACE
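A sketch of the whole procedure, assuming a tablespace named ts1 (names and paths are illustrative):

```shell
# 1. Make the tablespace read-only (from SQL*Plus):
#      ALTER TABLESPACE ts1 READ ONLY;
# 2. Export only the metadata of the tablespace set:
exp "sys/password AS SYSDBA" TRANSPORT_TABLESPACE=y TABLESPACES=ts1 FILE=ts1.dmp
# 3. Copy ts1's datafiles and ts1.dmp to the target (binary copy), then:
imp "sys/password AS SYSDBA" TRANSPORT_TABLESPACE=y DATAFILES='/u01/oradata/ts1_01.dbf' FILE=ts1.dmp
# 4. Optionally make the tablespace read/write again on both databases.
```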


Exporting from a Read-Only Database
To extract metadata from a source database, Export uses queries that contain ordering clauses (sort operations). For these queries to succeed, the user performing the export must be able to allocate on-disk sort segments. For these sort segments to be allocated in a read-only database, the user's temporary tablespace should be set to point at a temporary, locally managed tablespace.
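For example, before opening the source database read-only, the DBA might point the exporting user at a locally managed temporary tablespace (names and paths here are illustrative):

```sql
-- Run in the source database before it is opened read-only.
CREATE TEMPORARY TABLESPACE temp_lm
  TEMPFILE '/u01/oradata/ORCL/temp_lm01.dbf' SIZE 200M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

ALTER USER exp_user TEMPORARY TABLESPACE temp_lm;
```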


Using Export and Import to Partition a Database Migration
When you use the Export and Import utilities to migrate a large database, it may be more efficient to partition the migration into multiple export and import jobs. If you decide to partition the migration, be aware of the following advantages and disadvantages.
Advantages of Partitioning a Migration
Partitioning a migration has the following advantages:

Time required for the migration may be reduced because many of the subjobs can be run in parallel.
The import can start as soon as the first export subjob completes, rather than waiting for the entire export to complete.
Disadvantages of Partitioning a Migration
Partitioning a migration has the following disadvantages:

The export and import processes become more complex.
Support of cross-schema references for certain types of objects may be compromised. For example, if a schema contains a table with a foreign key constraint against a table in a different schema, you may not have the required parent records when you import the table into the dependent schema.
How to Use Export and Import to Partition a Database Migration
To perform a database migration in a partitioned manner, take the following steps:

For all top-level metadata in the database, issue the following commands:
exp dba/password FILE=full FULL=y CONSTRAINTS=n TRIGGERS=n ROWS=n INDEXES=n
imp dba/password FILE=full FULL=y
For each schema n in the database (written scheman below), issue the following commands:
exp dba/password OWNER=scheman FILE=scheman
imp dba/password FILE=scheman FROMUSER=scheman TOUSER=scheman IGNORE=y
All exports can be done in parallel. When the import of full.dmp completes, all remaining imports can also be done in parallel.
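On a Unix-like system, the schema-level export subjobs could be run concurrently as background jobs, for example (schema names are placeholders):

```shell
# Run the per-schema exports concurrently; wait for all to finish.
exp dba/password OWNER=hr    FILE=hr.dmp    LOG=hr.log    &
exp dba/password OWNER=sales FILE=sales.dmp LOG=sales.log &
wait
# After full.dmp has been imported, the per-schema imports can
# likewise be started in parallel.
```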

When a direct path export is done, you may encounter errors such as:

EXP-00041: Export done in server's WE8ISO8859P1, different from user's character set US7ASCII
EXP-00000: Export terminated unsuccessfully

or

EXP-00041: Export done in server's US7ASCII, different from user's character set WE8ISO8859P1
EXP-00000: Export terminated unsuccessfully

or a similar message whenever the client session character set differs from the database character set.

In that case, set the NLS_LANG environment variable at the command prompt to match the database character set, for example:

Set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

and then invoke exp again, with a parameter file or command-line parameters, to complete the export successfully.

Further reading:

Metalink Doc ID: Note 155477.1
Subject: Parameter DIRECT: Conventional Path Export Versus Direct Path Export

Import

The Import utility enables you to import data that was previously exported. The exported definitions are executed against the target database and the various logical objects are re-created.

Parameters

Import: Release 9.2.0.6.0 - Production on Sun Nov 28 16:35:07 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

You can let Import prompt you for parameters by entering the IMP
command followed by your username/password:

Example: IMP SCOTT/TIGER

Or, you can control how Import runs by entering the IMP command followed
by various arguments. To specify parameters, you use keywords:

Format: IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

-------------------------------------------------------------------------------------

USERID username/password
FULL import entire file (N)
BUFFER size of data buffer
FROMUSER list of owner usernames
FILE input files (EXPDAT.DMP)
TOUSER list of usernames
SHOW just list file contents (N)
TABLES list of table names
IGNORE ignore create errors (N)
RECORDLENGTH length of IO record
GRANTS import grants (Y)
INCTYPE incremental import type
INDEXES import indexes (Y)
COMMIT commit array insert (N)
ROWS import data rows (Y)
PARFILE parameter filename
LOG log file of screen output
CONSTRAINTS import constraints (Y)
DESTROY overwrite tablespace data file (N)
INDEXFILE write table/index info to specified file
SKIP_UNUSABLE_INDEXES skip maintenance of unusable indexes (N)
FEEDBACK display progress every x rows(0)
TOID_NOVALIDATE skip validation of specified type ids
FILESIZE maximum size of each dump file
STATISTICS import precomputed statistics (always)
RESUMABLE suspend when a space related error is encountered(N)
RESUMABLE_NAME text string used to identify resumable statement
RESUMABLE_TIMEOUT wait time for RESUMABLE
COMPILE compile procedures, packages, and functions (Y)
STREAMS_CONFIGURATION import streams general metadata (Y)
STREAMS_INSTANTIATION import streams instantiation metadata (N)

The following keywords only apply to transportable tablespaces
TRANSPORT_TABLESPACE import transportable tablespace metadata (N)
TABLESPACES tablespaces to be transported into database
DATAFILES datafiles to be transported into database
TTS_OWNERS users that own data in the transportable tablespace set

Order of Import

Table objects are imported as they are read from the export file. The export file contains objects in the following order:

Type definitions
Table definitions
Table data
Table indexes
Integrity constraints, views, procedures, and triggers
Bitmap, functional, and domain indexes


First, new tables are created. Then, data is imported and indexes are built. Then triggers are imported, integrity constraints are enabled on the new tables, and any bitmap, functional, and/or domain indexes are built. This sequence prevents data from being rejected due to the order in which tables are imported. This sequence also prevents redundant triggers from firing twice on the same data (once when it is originally inserted and again during the import).

For example, if the emp table has a referential integrity constraint on the dept table and the emp table is imported first, all emp rows that reference departments that have not yet been imported into dept would be rejected if the constraints were enabled.

When data is imported into existing tables, however, the order of import can still produce referential integrity failures. In the situation just given, if the emp table already existed and referential integrity constraints were in force, many rows could be rejected.

A similar situation occurs when a referential integrity constraint on a table references itself. For example, if scott's manager in the emp table is drake, and drake's row has not yet been loaded, scott's row will fail, even though it would be valid at the end of the import.

Table 2-1 Privileges Required to Import Objects into Your Own Schema
Object Required Privilege (Privilege Type, If Applicable)

Clusters CREATE CLUSTER (System) and Tablespace Quota,
or UNLIMITED TABLESPACE (System)

Database links CREATE DATABASE LINK (System) and
CREATE SESSION (System) on
remote database

Triggers on tables CREATE TRIGGER (System)

Triggers on schemas CREATE ANY TRIGGER (System)

Indexes CREATE INDEX (System) and Tablespace Quota,
or UNLIMITED TABLESPACE (System)

Integrity constraints ALTER TABLE (Object)

Libraries CREATE ANY LIBRARY (System)

Packages CREATE PROCEDURE (System)

Private synonyms CREATE SYNONYM (System)

Sequences CREATE SEQUENCE (System)

Snapshots CREATE SNAPSHOT (System)

Stored functions CREATE PROCEDURE (System)

Stored procedures CREATE PROCEDURE (System)

Table data INSERT TABLE (Object)

Table definitions
(including comments
and audit options) CREATE TABLE (System) and Tablespace Quota, or
UNLIMITED TABLESPACE (System)

Views CREATE VIEW (System) and
SELECT (Object) on the base table,
or SELECT ANY TABLE (System)

Object types CREATE TYPE (System)

Foreign function libraries CREATE LIBRARY (System)

Dimension CREATE DIMENSION (System)

Operators CREATE OPERATOR (System)

Indextypes CREATE INDEXTYPE (System)

Importing Grants

To import the privileges that a user has granted to others, the user initiating the import must either own the objects or have object privileges with the WITH GRANT OPTION. The following conditions must be met for the authorizations to be valid on the target system.


Grant Conditions

Object privileges

The object must exist in the user's schema, or

the user must have the object privileges with the WITH GRANT OPTION or,

the user must have the IMP_FULL_DATABASE role enabled.

System privileges

The user must have the system privilege as well as the WITH ADMIN OPTION.

Importing Objects into Other Schemas

To import objects into another user's schema, you must have the IMP_FULL_DATABASE role enabled.

Importing System Objects
To import system objects from a full database export file, the role IMP_FULL_DATABASE must be enabled.
The parameter FULL specifies that these system objects are included in the import when the export file is a full export:

Profiles
Public database links
Public synonyms
Roles
Rollback segment definitions
Resource costs
Foreign function libraries
Context objects
System procedural objects
System audit options
System privileges
Tablespace definitions
Tablespace quotas
User definitions
Directory aliases
System event triggers

Export Issues

ORA-01555 and the Export parameter CONSISTENT=Y

If this parameter is left at its default, consistency is guaranteed only within each table, not across tables. Setting it to Y guarantees read consistency across all exported tables.
CONSISTENT=Y causes SET TRANSACTION READ ONLY to be executed for your export session. This means all subsequent queries are consistent to the point in time at which the SET was issued, so your exported data remains consistent despite committed changes made by other sessions.

CONSISTENT=N is the default setting.

When CONSISTENT=Y is used, you should know that you need sufficiently large UNDO or rollback segments; otherwise you are likely to get the ORA-01555 "snapshot too old" error.

If you need consistent exports, take table-level exports and distribute the schema's tables across multiple export jobs, so that the data set that must be kept consistent in each job is smaller and less likely to cause ORA-01555.

To minimize the time and space required for such exports, you should export tables that need to remain consistent separately from those that do not.

A "snapshot too old" error occurs when rollback space is used up, and space taken up by committed transactions is reused for new transactions. Reusing space in the rollback segment allows database integrity to be preserved with minimum space requirements, but it imposes a limit on the amount of time that a read-consistent image can be preserved.

If a committed transaction has been overwritten and the information is needed for a read-consistent view of the database, a "snapshot too old" error results.

To avoid this error, you should minimize the time taken by a read-consistent export. (Do this by restricting the number of objects exported and, if possible, by reducing the database transaction rate.) Also, make the rollback segment as large as possible.
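For example, two tables linked by a foreign key could be exported together in one read-consistent job, while unrelated tables go into separate, nonconsistent jobs:

```shell
# emp and dept must be mutually consistent, so export them in one job.
exp scott/tiger TABLES=(emp,dept) CONSISTENT=y FILE=emp_dept.dmp

# Tables with no cross-references can be exported without CONSISTENT=y,
# keeping the read-consistent window (and rollback usage) small.
exp scott/tiger TABLES=(bonus) FILE=bonus.dmp
```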

DIRECT=y and RECORDLENGTH=65536

These two parameters together can give performance gains. Specifying DIRECT=y causes Export to extract data by reading the data directly, bypassing the SQL command-processing layer (evaluating buffer). This method can be much faster than a conventional path Export.
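A direct path export using a larger I/O record might be invoked as:

```shell
# Direct path export with a 64 KB record length; multiples of
# DB_BLOCK_SIZE and of the file system block size work best.
exp system/manager FULL=y DIRECT=y RECORDLENGTH=65536 FILE=full.dmp LOG=full.log
```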

The parameter QUERY cannot be specified in a direct path export (DIRECT=y).
Java classes, resources, and procedures that are created using Enterprise Java Beans (EJBs) are not placed in the export file.

Constraints that have been altered using the RELY keyword lose the RELY attribute when they are exported.

When a type definition has evolved and then data referencing that evolved type is exported, the type definition on the import system must have evolved in the same manner.

The BUFFER parameter applies only to conventional path Export. It has no effect on a direct path Export.

Virtual Private Database (VPD) and Oracle Label Security are not enforced during direct path Exports.

The following users are exempt from Virtual Private Database and Oracle Label Security enforcement regardless of the export mode, application, or utility used to extract data from the database:

The database user SYS

Database users granted the Oracle9i EXEMPT ACCESS POLICY privilege, either directly or through a database role.

This means that any user who is granted the EXEMPT ACCESS POLICY privilege is completely exempt from enforcement of VPD and Oracle Label Security. This is a powerful privilege and should be carefully managed. This privilege does not affect the enforcement of traditional object privileges such as SELECT, INSERT, UPDATE, and DELETE. These privileges are enforced even if a user has been granted the EXEMPT ACCESS POLICY privilege.

Performance Issues for Direct Path Exports
You may be able to improve performance by increasing the value of the RECORDLENGTH parameter when you invoke a direct path Export. Your exact performance gain depends upon the following factors:

DB_BLOCK_SIZE

The types of columns in your table
Your I/O layout (The drive receiving the export file should be separate from the disk drive where the database files reside.)
The following values are generally recommended for RECORDLENGTH:

Multiples of the file system I/O block size
Multiples of DB_BLOCK_SIZE
Network Considerations
This section describes factors to take into account when you use Export and Import across a network.

Transporting Export Files Across a Network
Because the export file is in binary format, use a protocol that supports binary transfers to prevent corruption of the file when you transfer it across a network. For example, use FTP or a similar file transfer protocol to transmit the file in binary mode. Transmitting export files in character mode causes errors when the file is imported.

Exporting and Importing with Oracle Net
With Oracle Net, you can perform exports and imports over a network. For example, if you run Export locally, you can write data from a remote Oracle database into a local export file. If you run Import locally, you can read data into a remote Oracle database.

To use Import with Oracle Net, include the connection qualifier string @connect_string when entering the username/password in the exp or imp command. For the exact syntax of this clause, see the user's guide for your Oracle Net protocol.
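For instance, to export a table from a remote database identified by the Oracle Net alias remotedb (an example alias) into a local dump file:

```shell
# The @connect_string after the password sends exp over Oracle Net.
exp scott/tiger@remotedb TABLES=(emp) FILE=emp_remote.dmp LOG=emp_remote.log
```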

Export and Sequences

The caveats about skipped sequence numbers described above under Exporting Sequences apply here as well. As an additional safeguard, extract the DDL of the sequences by querying the DBA_SEQUENCES view and save it for use after the data is imported. If any sequence is out of step after the import, drop and re-create it from the generated DDL.
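Such DDL can be generated with a query like the following (the schema name is an example, and the generated START WITH values reflect the moment the query runs):

```sql
-- Sketch: build CREATE SEQUENCE statements for one schema's sequences
-- so they can be re-created after import if their values are out of step.
SELECT 'CREATE SEQUENCE ' || sequence_owner || '.' || sequence_name ||
       ' START WITH '     || last_number    ||
       ' INCREMENT BY '   || increment_by   ||
       ' CACHE '          || cache_size     || ';'
  FROM dba_sequences
 WHERE sequence_owner = 'SCOTT';
```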


Export and LONG and LOB Datatypes

Even with DIRECT=y, tables containing LOB columns (CLOB, BLOB, and so on) are exported through the conventional path only; LONG data, by contrast, is exported through the direct path. The memory and sectioning considerations described above under Exporting LONG and LOB Datatypes apply here as well.

Export fails

EXP-00003: no storage definition found for segment (xxx, xxxx)

If you get EXP-00003 when using an Export utility of a release lower than that of the instance you are exporting from, but not when using the matching release, the problem is most likely related to bug 3291851.

Several server-side conditions must be met for this issue to occur, the main one being that the segment must have multiple extents.

For example, a 9.2.0.1.0 export utility exporting from a 9.2.0.5.0 server database raises this error.





Copyright © www.techfaq360.com 2016

