
ORACLE Tutorials

Examples of PERFORMANCE TUNING exercises

The PL/SQL engine executes procedural statements and sends all SQL statements present in the code to the SQL engine. The SQL engine parses and executes the query or DML statement and returns the expected output to the PL/SQL engine. This switch between the two engines is called context switching.
Oracle thus uses two engines to process PL/SQL code: all procedural code is handled by the PL/SQL engine, while all SQL is handled by the SQL engine. There is an overhead associated with each context switch between the two engines. If PL/SQL code loops through a collection performing the same DML operation for each item in the collection, it is possible to reduce context switches by bulk binding the whole collection to the DML statement in one operation.
When tuning we mostly concentrate on the SQL statements themselves, but it is worth noting that excessive context switching can also affect performance. This becomes significant when SQL statements are executed inside a loop. The features discussed below were introduced to reduce this overhead of SQL processing. Introduced in Oracle 8i, they have been improved with every new release.
Two PL/SQL features, bulk bind and bulk collect, help improve performance and use resources effectively from within PL/SQL code. They reduce context switching (multiple SQL statements being executed from the code, each resulting in a call to the SQL engine) by carrying out the operation on the object in question in one go. Since the SQL statements are sent as a batch, frequent access to the SQL engine is reduced.
Where the PL/SQL code is executed on a different terminal than the server itself, this also results in optimal network usage rather than frequent network calls to reach the SQL engine.
The bulk collect option instructs the SQL engine to bulk bind the output collections before returning them to the PL/SQL engine. This allows us to load data dynamically into collections at one shot for further processing. Bulk collect can be used with SELECT INTO, FETCH INTO and RETURNING INTO statements.
In Oracle8i a collection must be defined for every column bound to the DML, which can make the code rather long-winded. Oracle9i allows us to use record structures during bulk operations, so long as we don't reference individual columns of the collection. This restriction means that updates and deletes which have to reference individual columns of the collection in the WHERE clause are still restricted to the collection-per-column approach used in Oracle8i.
Bulk binds improve performance of DML statements by minimizing the number of switches between the PL/SQL and SQL engines.
You may have a piece of code with multiple UPDATE, DELETE or INSERT statements on the same table, resulting in multiple calls to the SQL engine to carry out the transaction. By using bulk binds you can carry out the whole data manipulation in one shot, with the altered data held in a PL/SQL collection.
The FORALL statement does this bulk-processing job in one go. It is similar to the FOR-LOOP statement except that the LOOP/END LOOP keywords are not used. FORALL needs a range to work on, along with the DML statement to be carried out.
To test bulk binds using records we first create a test table:

CREATE TABLE test1 (
  id          NUMBER(10),
  description VARCHAR2(50));



DECLARE
  TYPE test1_tab IS TABLE OF test1%ROWTYPE;

  t_tab test1_tab := test1_tab();
BEGIN
  -- Populate the collection.
  FOR i IN 1 .. 10000 LOOP
    t_tab.extend;
    t_tab(t_tab.last).id := i;
    t_tab(t_tab.last).description := 'Description: ' || To_Char(i);
  END LOOP;

  -- Insert row by row: one context switch per iteration.
  FOR i IN t_tab.first .. t_tab.last LOOP
    INSERT INTO test1 (id, description)
    VALUES (t_tab(i).id, t_tab(i).description);
  END LOOP;

  COMMIT;
END;
/

Note the time taken for the above block. Using the FORALL construct to bulk bind the inserts, this time is reduced to less than 1/10 of a second:
DECLARE
  TYPE test1_tab IS TABLE OF test1%ROWTYPE;

  t_tab test1_tab := test1_tab();
BEGIN
  FOR i IN 1 .. 10000 LOOP
    t_tab.extend;
    t_tab(t_tab.last).id := i;
    t_tab(t_tab.last).description := 'Description: ' || To_Char(i);
  END LOOP;

  -- A single context switch binds the whole collection to the insert.
  FORALL i IN t_tab.first .. t_tab.last
    INSERT INTO test1 VALUES t_tab(i);

  COMMIT;
END;
/

Since no columns are specified in the INSERT statement, the record structure of the collection must match the table exactly. Bulk binds can also improve performance when loading collections from a query. The BULK COLLECT INTO construct binds the output of the query to the collection.
DECLARE
  TYPE test1_tab IS TABLE OF test1%ROWTYPE;

  t_tab test1_tab := test1_tab();

  CURSOR c_data IS SELECT * FROM test1;
BEGIN
  FOR cur_rec IN c_data LOOP
    t_tab.extend;
    t_tab(t_tab.last).id := cur_rec.id;
    t_tab(t_tab.last).description := cur_rec.description;
  END LOOP;
END;
/

Note the time taken. Using the BULK COLLECT INTO construct reduces this time to less than 0.01 seconds:
DECLARE
  TYPE test1_tab IS TABLE OF test1%ROWTYPE;

  t_tab test1_tab := test1_tab();
BEGIN
  SELECT id, description
  BULK COLLECT INTO t_tab
  FROM test1;
END;
/

The select list must match the collection's record definition exactly for this to be successful. Oracle9i Release 2 also allows updates using record definitions via the ROW keyword:
DECLARE
  TYPE test1_tab IS TABLE OF test1%ROWTYPE;

  t_tab test1_tab := test1_tab();
BEGIN
  FOR i IN 1 .. 10000 LOOP
    t_tab.extend;
    t_tab(t_tab.last).id := i;
    t_tab(t_tab.last).description := 'Description: ' || To_Char(i);
  END LOOP;

  FOR i IN t_tab.first .. t_tab.last LOOP
    UPDATE test1
    SET ROW = t_tab(i)
    WHERE id = t_tab(i).id;
  END LOOP;
END;
/
PL/SQL procedure successfully completed.

Elapsed: 00:00:06.08
The reference to the ID column within the WHERE clause means that this statement cannot use a bulk bind directly. In order to use a bulk bind, a separate collection must be defined for the id column:
TYPE id_tab IS TABLE OF test1.id%TYPE;
TYPE test1_tab IS TABLE OF test1%ROWTYPE;
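A sketch of the complete pattern, assuming the test1 table created above (variable names are illustrative):

```sql
DECLARE
  TYPE id_tab    IS TABLE OF test1.id%TYPE;
  TYPE test1_tab IS TABLE OF test1%ROWTYPE;

  t_id  id_tab    := id_tab();
  t_tab test1_tab := test1_tab();
BEGIN
  FOR i IN 1 .. 10000 LOOP
    t_id.extend;
    t_tab.extend;
    t_id(t_id.last) := i;   -- scalar collection for the WHERE clause
    t_tab(t_tab.last).id := i;
    t_tab(t_tab.last).description := 'Updated: ' || To_Char(i);
  END LOOP;

  -- The record collection supplies SET ROW; the scalar collection supplies WHERE.
  FORALL i IN t_tab.first .. t_tab.last
    UPDATE test1
    SET ROW = t_tab(i)
    WHERE id = t_id(i);
END;
/
```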

Handling Exceptions with the FORALL statement
One of the features introduced in Oracle9i to handle errors encountered during a bulk bind operation is SAVE EXCEPTIONS. Previously, if any one row failed during the execution of the FORALL loop, the entire operation was terminated; with this clause the loop continues to process.
All the exceptions raised during the execution are stored in a cursor attribute called SQL%BULK_EXCEPTIONS, which is available only from the exception handler block. Each record stored there has two fields: ERROR_INDEX holds the iteration number for which the exception was raised, and ERROR_CODE holds the Oracle error code.
DECLARE
  TYPE NumList IS TABLE OF NUMBER;
  num_tab NumList := NumList(10,0,11,12,30,0,20,199,2,0,9,1);
  errors NUMBER;
  dml_errors EXCEPTION;
  PRAGMA exception_init(dml_errors, -24381);
BEGIN
  FORALL i IN num_tab.first .. num_tab.last SAVE EXCEPTIONS
    DELETE FROM emp WHERE sal > 500000/num_tab(i);
EXCEPTION
  WHEN dml_errors THEN
    errors := SQL%BULK_EXCEPTIONS.COUNT;
    dbms_output.put_line('Number of errors is ' || errors);
    FOR i IN 1..errors LOOP
      dbms_output.put_line('Error ' || i || ' occurred during ' ||
          'iteration ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
      dbms_output.put_line('Oracle error is ' ||
          SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
    END LOOP;
END;

Oracle Database 10g extends the enhancements that were introduced in Oracle9i in the area of bulk binding. The SAVE EXCEPTIONS syntax was introduced in Oracle9i to capture exceptions while bulk inserting (deleting or updating) rows. Although this feature enables the data manipulation language (DML) to continue (saving any exceptions in the SQL%BULK_EXCEPTIONS collection), the performance of the operation is greatly affected. In a scenario in which the collection being processed is sparse due to the application logic involved, this would be an unnecessary overhead. In Oracle Database 10g, you can overcome these issues with the new INDICES OF and VALUES OF features.
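A minimal sketch of the INDICES OF feature on a sparse collection (the table and data are illustrative, reusing the emp example above):

```sql
DECLARE
  TYPE NumList IS TABLE OF NUMBER INDEX BY PLS_INTEGER;

  num_tab NumList;
BEGIN
  -- A sparse collection: only some indices are populated.
  num_tab(1)   := 10;
  num_tab(100) := 20;
  num_tab(500) := 30;

  -- INDICES OF visits only the populated indices, so the gaps
  -- in the collection raise no "element does not exist" errors.
  FORALL i IN INDICES OF num_tab
    DELETE FROM emp WHERE sal > 500000/num_tab(i);
END;
/
```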
In case the application needs the values of the rows that have been affected, you can use the RETURNING clause to get them. This eliminates the need to select the row after any DML statement; as a result, fewer network round trips, less server CPU time, fewer cursors and less server memory are required.

CREATE OR REPLACE PROCEDURE change_salary
  (empid IN employees.employee_id%TYPE, changed_record OUT NUMBER)
IS
BEGIN
  UPDATE employees
  SET salary = salary * 1.10
  WHERE employee_id = empid
  RETURNING salary INTO changed_record;
END change_salary;
/
After you create the procedure, run the following commands to obtain the results.

Variable c_sal number

Execute change_salary(201,:c_sal);

Print c_sal

Some more Performance Guidelines:
1.    Conditional Statements
One method of tuning PL/SQL code is to phrase conditional constructs carefully. Remember that PL/SQL stops evaluating an expression as soon as the result is known. For example, in an expression with the OR operator, PL/SQL stops evaluating the moment the first condition evaluates to true; only if the first condition evaluates to false is the second evaluated. Evaluation can stop after the first true condition because OR needs only one of its conditions to be true.
Similarly while using the AND operator as in the following case

If my_func (empid) AND Deptno = 30 Then

the function my_func() is always called, which can be time consuming. Instead you can reverse the sequence as follows:

If Deptno = 30 AND my_func (empid) Then

Now the moment the first expression evaluates to false, execution stops and the function need not be called, since the AND operator needs both conditions to be true. Checking a variable is both easier and less time consuming than calling a function.
2.    Implicit Conversion
Implicit data conversion should be avoided as it carries a significant performance penalty. For example, if you initialize a NUMBER variable with an integer literal, the value must be converted to a 22-byte Oracle NUMBER before any numeric operation is performed. Instead you can initialize it with a floating-point literal, which is stored directly as a 22-byte NUMBER, so no conversion occurs.
Similarly, you can avoid character-to-number conversions.
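A short sketch of the point above (hypothetical variables; the explicit TO_NUMBER call replaces the implicit conversion Oracle would otherwise perform):

```sql
DECLARE
  n     NUMBER := 5.0;        -- floating-point literal: stored directly, no conversion
  v     VARCHAR2(10) := '42';
  total NUMBER := 0;
BEGIN
  total := total + n;             -- NUMBER + NUMBER: no conversion needed
  total := total + TO_NUMBER(v);  -- explicit, instead of the implicit total + v
END;
/
```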
3.    PLS_INTEGER datatype
It is always advisable to use the PLS_INTEGER datatype when working with integers, as its values require less storage than the other numeric datatypes. It also uses machine arithmetic for calculations, which is a lot faster; the other variants use library arithmetic, which slows down the process.
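A minimal illustration (the accumulator is declared PLS_INTEGER rather than NUMBER so the additions use machine arithmetic):

```sql
DECLARE
  total PLS_INTEGER := 0;  -- machine arithmetic, less storage than NUMBER
BEGIN
  FOR i IN 1 .. 1000000 LOOP
    total := total + 1;
  END LOOP;
END;
/
```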
4.    Working with collections and index-by tables
Another way of improving performance is to use index-by tables with a record datatype. Developers previously had to declare multiple parallel index-by tables to simulate a record, because an index-by table could hold only scalar values, which used a lot of memory.
Working with a single table of records is a big performance benefit, as lookups, inserts, copies and other operations are done once per row rather than once per column.
Performance can also be enhanced by passing user-defined record types as formal parameters to procedures and functions. You can likewise pass whole collections as formal parameters.
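A sketch of a single index-by table of records replacing parallel scalar tables (assuming the standard emp demo table):

```sql
DECLARE
  TYPE emp_tab IS TABLE OF emp%ROWTYPE INDEX BY PLS_INTEGER;

  t_emp emp_tab;
BEGIN
  -- One copy per row instead of one per column across parallel tables.
  SELECT * BULK COLLECT INTO t_emp FROM emp;

  FOR i IN 1 .. t_emp.COUNT LOOP
    dbms_output.put_line(t_emp(i).ename || ': ' || t_emp(i).sal);
  END LOOP;
END;
/
```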
5.    Using NOT NULL constraints
When working with a variable, you can apply a NOT NULL constraint to it where necessary instead of writing code to check the value after use. With the constraint, processing is a little slower, but no extra code is needed and, when the error is raised implicitly, the value of the variable is kept intact. Without it, processing is faster, but the extra checking code can be error prone, and if the error is raised explicitly the value of the variable is lost.
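A minimal illustration of the constrained declaration (names are illustrative):

```sql
DECLARE
  counter NUMBER NOT NULL := 0;  -- must be initialized at declaration
BEGIN
  counter := counter + 1;
  -- counter := NULL;  -- would raise VALUE_ERROR implicitly,
  --                      leaving the current value of counter intact
END;
/
```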
Native Compilation of PL/SQL
In the majority of cases the performance of PL/SQL code depends on the execution speed of the SQL statements rather than the PL/SQL code itself. However, there is an increasing trend of using PL/SQL for non-database, computation-oriented operations, where the execution speed of the PL/SQL program itself plays an important role.
When PL/SQL is loaded into the server it is compiled to byte code before execution. In Oracle 9i the process of native compilation converts PL/SQL stored procedures to native code shared libraries which are linked into the kernel resulting in performance increases for the procedural code. The extent of the performance increase depends on the content of the PL/SQL. The compilation process does not affect the speed of database calls, only the procedural logic around them such as loops and calculations.
You can switch between the two compilation modes in the following manner:

ALTER SESSION SET plsql_compiler_flags = 'INTERPRETED';
ALTER SESSION SET plsql_compiler_flags = 'NATIVE';

Native Dynamic SQL
Dynamic SQL allows an application to run SQL statements whose contents are not known until runtime. The statement is built up as a string by the application and is then passed to the server. Generally dynamic SQL is slower than static SQL so it should not be used unless absolutely necessary. Also, since syntax checking and object validation cannot be done until runtime, code containing large amounts of dynamic SQL may be littered with mistakes but still compile.
The main advantage of dynamic SQL is that it allows you to perform DDL commands that are not supported directly within PL/SQL, such as creating tables. It also allows you to access objects that will not exist until runtime.
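A small sketch of native dynamic SQL using EXECUTE IMMEDIATE (the table name and bind value are illustrative):

```sql
DECLARE
  l_sql   VARCHAR2(200);
  l_count NUMBER;
BEGIN
  -- DDL is not allowed as static SQL inside PL/SQL, but works dynamically.
  EXECUTE IMMEDIATE 'CREATE TABLE tmp_demo (id NUMBER)';

  -- The statement text is only known at runtime; :b1 is a bind variable.
  l_sql := 'SELECT COUNT(*) FROM tmp_demo WHERE id = :b1';
  EXECUTE IMMEDIATE l_sql INTO l_count USING 10;

  EXECUTE IMMEDIATE 'DROP TABLE tmp_demo';
END;
/
```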

Native Dynamic SQL                                        DBMS_SQL
--------------------------------------------------------  --------------------------------------------------------
Easy to use and concise.                                  Often long-winded and awkward.
Built into the PL/SQL interpreter, so it is more          Uses a procedural API, so it is generally slower
efficient than DBMS_SQL.                                  than native dynamic SQL.
Supports user-defined types.                              Does not support user-defined types.
Supports FETCH INTO record types.                         Does not support FETCH INTO record types.
Not supported in client-side code.                        Supported in client-side code.
Does not support bulk dynamic SQL, but it can be          Supports bulk dynamic SQL.
faked by placing all statements in a PL/SQL block.
Only supports single-row updates/deletes with the         Supports single- and multiple-row updates/deletes
RETURNING clause.                                         with the RETURNING clause.
Does not support SQL statements bigger than 32K.          Supports SQL statements bigger than 32K.
Parse required for every execution.                       Parse once, execute many possible.
Tracing PL/SQL Execution
While working with large PL/SQL applications it is often difficult to keep track of process execution. Oracle provides a tracing facility through an API called dbms_trace, which is implemented as follows.
Before you start, be aware that tracing cannot be done on an MTS (shared) server. Tracing can be enabled from a normal user login, but the trace information can be viewed only from a DBA login.
To start with tracing, execute the following scripts as the SYS user. The scripts are available in the Oraclehome\rdbms\admin folder:
Execute Oraclehome\rdbms\admin\TRACETAB.SQL
Execute Oraclehome\rdbms\admin\DBMSPBT.SQL
Execute Oraclehome\rdbms\admin\PRVTPBT.PLB

The above scripts create the API for dbms_trace. After that, connect as a normal user and execute the following script to create the demo packages. The script is available in the folder Oraclehome\plsql\demo:

Execute Oraclehome\plsql\demo\Tracedemo.sql

Now you can start tracing. You have the option of tracing all lines of code, only exceptions, or only the units you specifically enable.

execute dbms_trace.set_plsql_trace(dbms_trace.trace_all_lines);

execute foo;

execute dbms_trace.clear_plsql_trace();

Once this process is over you can connect back as the SYS user and view the details:

SELECT event_seq AS seq, stack_depth, event_kind AS kind,
       SUBSTR(event_unit,1,10) AS unit, event_line AS line,
       event_comment
FROM plsql_trace_events;

The above statement gives you the details of the execution of all statements executed by the procedure foo().
An application can always be fine-tuned for better performance with the use of better alternatives or with the new features introduced with every release of Oracle.
Simply inspecting the code can reveal the bottlenecks eating up your processing time, and using EXPLAIN PLAN to fine-tune the SQL statements resolves issues most of the time. Sometimes, however, it is not that simple: it is baffling when all the SQL statements are well tuned but the routine still takes noticeable time to execute.
For such cases Oracle8i introduced the PL/SQL Profiler, implemented through the DBMS_PROFILER package. It is a powerful tool for analysing a program unit's execution and determining its runtime behaviour. The results generated can then be evaluated to find the hot areas in the code, identifying performance bottlenecks and where excess execution time is being spent, including the time spent executing each SQL statement.
The possible profiler statistics that are generated:
1.    Total number of times each line was executed.
2.    Time spent executing each line. This includes SQL statements.
3.    Minimum and maximum duration spent on a specific line of code.
4.    Code that is actually being executed for a given scenario.
The first step is to install the DBMS_PROFILER package and create a schema to hold the profiler tables:

CONNECT sys/password@service AS SYSDBA
@Oraclehome\rdbms\admin\PROFLOAD.SQL

CREATE USER profiler IDENTIFIED BY profiler DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT connect TO profiler;

CREATE PUBLIC SYNONYM plsql_profiler_runs FOR profiler.plsql_profiler_runs;
CREATE PUBLIC SYNONYM plsql_profiler_units FOR profiler.plsql_profiler_units;
CREATE PUBLIC SYNONYM plsql_profiler_data FOR profiler.plsql_profiler_data;
CREATE PUBLIC SYNONYM plsql_profiler_runnumber FOR profiler.plsql_profiler_runnumber;

CONNECT profiler/profiler@service
@Oraclehome\rdbms\admin\PROFTAB.SQL
GRANT SELECT ON plsql_profiler_runnumber TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_data TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_units TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_runs TO PUBLIC;

Once the package is ready you can use its procedures to create profiles of the selected subprogram. The procedures that are part of this package are as follows:
1.    DBMS_PROFILER.START_PROFILER tells Oracle to start the monitoring process. An identifier (run comment) is provided with each run and is used later to retrieve the statistics.
2.    DBMS_PROFILER.STOP_PROFILER tells Oracle to stop the monitoring.
3.    The data collected for an execution is held in memory. Calling the DBMS_PROFILER.FLUSH_DATA routine tells Oracle to save this data in the profiler tables and clear the memory.
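A short usage sketch (the procedure being profiled, do_stuff, is hypothetical):

```sql
BEGIN
  DBMS_PROFILER.start_profiler('demo run 1');  -- run comment identifies this run
  do_stuff;                                    -- hypothetical unit under test
  DBMS_PROFILER.stop_profiler;
END;
/

-- Retrieve the per-line statistics for the run.
SELECT u.unit_name, d.line#, d.total_occur, d.total_time
FROM   plsql_profiler_units u
JOIN   plsql_profiler_data d
ON     u.runid = d.runid AND u.unit_number = d.unit_number
ORDER  BY d.total_time DESC;
```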
DBMS_PROFILER is a very powerful tool and the first of its kind to identify performance issues on the PL/SQL front. This utility can be best used in the development stages to fine tune code based on various applicable scenarios. It can also be used to fine tune routines that are already in production and are taking noticeable time to execute. Overall, this utility gives statistics for each line of code that will help us in evaluating and tuning at a finer level. Just as SQL statements are checked for performance, PL/SQL code should not be ignored but should be tuned for optimal results as well.
Pinning object
Oracle Applications requires space in the ORACLE System Global Area (SGA) for stored packages and functions. If SGA space is fragmented, there may not be enough space to load a package or function. You should pre-allocate space in the SGA shared pool for packages, functions, and sequences by "pinning" them.
Pinning objects in the shared pool can provide a tremendous increase in database performance, if it is done correctly. Since pinned objects reside in the SQL and PL/SQL memory areas, they do not need to be loaded and parsed from the database, which saves considerable time.
The initial call to a package causes the whole package to be loaded into the shared pool. For large packages this may represent an unacceptable delay for two reasons. First the size of the package causes a delay in load time. Second if the shared pool is already full, several smaller objects may need to be aged out to make room for it. In these circumstances, performance can be improved by pinning large packages in the shared pool.
Under normal circumstances, objects in the shared pool are placed on a least recently used (LRU) list. If the shared pool is full and a new object must be loaded, items on the LRU list are aged out. Subsequent calls to objects that have been aged out result in them being reloaded into the shared pool.
The process of pinning objects in the shared pool removes them from the LRU list so they are no longer available to be aged out, regardless of usage. Pinning is achieved using the dbms_shared_pool package.
The dbms_shared_pool package is not loaded by default, so it must be loaded manually by running the dbmspool.sql script as the SYS user.
The dbms_shared_pool package provides three procedures: KEEP, UNKEEP and SIZES.
You can use the package in the following manner. Before you start working with the memory, you need to know what actually exists there, which you can find with the following statement:

SELECT address, hash_value, sql_text, sharable_mem
FROM v$sql
WHERE sql_text LIKE '%SCOTT%';

This will list the statements and objects currently in memory. You can pin the procedure p3 as follows:

EXECUTE dbms_shared_pool.keep('SCOTT.P3', 'P');

You can check the status of the object as follows:

SELECT owner, name, type, kept FROM v$db_object_cache WHERE name = 'P3';

Similarly, you can unkeep the object:

EXECUTE dbms_shared_pool.unkeep('SCOTT.P3', 'P');


Copyright © www.techfaq360.com 2016
