DOYENSYS Knowledge Portal








Sunday, August 31, 2014

Instance Caging to Manage CPU Usage in 11g R2


Managing CPU usage can be an issue when trying to consolidate multiple databases onto a single server, because if one database starts to hog all the CPU on the server, all the other databases are affected.

Instance caging is a new feature in the Enterprise Edition of Oracle Database 11g Release 2 (11.2) that simplifies the management of CPU usage in consolidation environments. By enabling Resource Manager and setting the CPU_COUNT parameter in each instance, you can limit the maximum amount of CPUs/Cores the instance can use.

Enabling Resource Manager:

Resource Manager is not enabled by default, so it must be enabled by specifying a resource plan before instance caging can take effect. Oracle provides a number of resource plans out of the box.

    SELECT plan FROM dba_rsrc_plans;

    PLAN
    ------------------------------
    DEFAULT_PLAN
    INTERNAL_QUIESCE
    INTERNAL_PLAN
    APPQOS_PLAN
    DEFAULT_MAINTENANCE_PLAN
    ETL_CRITICAL_PLAN
    MIXED_WORKLOAD_PLAN
    ORA$AUTOTASK_SUB_PLAN
    ORA$AUTOTASK_HIGH_SUB_PLAN
    DSS_PLAN

    10 rows selected.

    SQL>

If you have no specific resource management needs within the instance, the simplest solution is to use the default plan.

    ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = default_plan;

Remember, the maximum utilization of each consumer group within a resource plan is a proportion of the total CPU for the instance, specified by the
CPU_COUNT parameter. If a consumer group is limited to 50% of the total CPU and the CPU_COUNT is set to 2, then the consumer group is limited to 1 CPU.
Keep this in mind when using resource plans that control CPU utilization within the instance.
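To see how CPU is apportioned between consumer groups within a plan, the plan directives can be queried (a minimal sketch; MGMT_P1 holds the level-1 management/CPU percentage and may be NULL for directives that do not limit CPU):

    SELECT group_or_subplan,
           mgmt_p1
    FROM   dba_rsrc_plan_directives
    WHERE  plan = 'DEFAULT_PLAN'
    ORDER BY group_or_subplan;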

Setting CPU_COUNT:

With the Resource Manager active, you must set the CPU_COUNT for each instance running on the server. The CPU_COUNT parameter is dynamic, but it is not recommended to change it too often, or by too large an amount. The following command shows how to set the parameter.

    ALTER SYSTEM SET cpu_count = 2;

Setting the parameter is easy, but knowing what to set it to can take a little thought. There are two basic approaches:

Over-Provisioning: Each instance has its CPU_COUNT set to a value less than the total number of CPUs on the server, but the values add up to more than the CPU count of the server. This prevents any one instance from hogging all the server CPU, but if all instances are busy at the same time, the server can still get swamped.

Partitioning: The sum of the CPU_COUNT values of all the instances on the server is less than or equal to the total number of CPUs on the server.
This way it is unlikely that any one instance will affect the performance of any other instance from a CPU perspective.
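
For example, on an 8-CPU server hosting two instances, a partitioning approach might look like the following (a hedged sketch; the 4/4 split is purely illustrative):

    -- Run in the first instance.
    ALTER SYSTEM SET cpu_count = 4;

    -- Run in the second instance.
    ALTER SYSTEM SET cpu_count = 4;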

Monitoring Instance Caging:

The throttling effect of Resource Manager can be displayed using the CONSUMED_CPU_TIME and CPU_WAIT_TIME columns of the following views.

    V$RSRC_CONSUMER_GROUP
    V$RSRCMGRMETRIC_HISTORY

The CONSUMED_CPU_TIME is the number of milliseconds of CPU time consumed by the consumer group, while the CPU_WAIT_TIME is the time waiting for CPU due to Resource Manager throttling.

    SELECT name, consumed_cpu_time, cpu_wait_time
    FROM v$rsrc_consumer_group;

    NAME                              CONSUMED_CPU_TIME CPU_WAIT_TIME
    --------------------------------  ----------------- -------------
    SYS_GROUP                                     45155           226
    OTHER_GROUPS                                    310             4
    ORA$AUTOTASK_MEDIUM_GROUP                     23805           864
    ORA$AUTOTASK_URGENT_GROUP                      3777            17
    ORA$AUTOTASK_STATS_GROUP                          0             0
    ORA$AUTOTASK_SPACE_GROUP                          0             0
    ORA$AUTOTASK_SQL_GROUP                            0             0
    ORA$AUTOTASK_HEALTH_GROUP                         0             0
    ORA$DIAGNOSTICS                                 375            13
    _ORACLE_BACKGROUND_GROUP_                         0             0

    10 rows selected.

    SQL>

    ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';
    SET LINESIZE 100

    SELECT begin_time,
           consumer_group_name,
           cpu_consumed_time,
           cpu_wait_time
    FROM   v$rsrcmgrmetric_history
    ORDER BY begin_time;

    BEGIN_TIME           CONSUMER_GROUP_NAME           CPU_CONSUMED_TIME CPU_WAIT_TIME
    -------------------- ----------------------------- ----------------- -------------
    .
    .
    .
    03-NOV-2012 14:31:20 ORA$AUTOTASK_HEALTH_GROUP                     0             0
    03-NOV-2012 14:31:20 ORA$DIAGNOSTICS                               0             0
    03-NOV-2012 14:31:20 ORA$AUTOTASK_STATS_GROUP                      0             0
    03-NOV-2012 14:32:20 SYS_GROUP                                   515             0
    03-NOV-2012 14:32:20 OTHER_GROUPS                                  0             0
    03-NOV-2012 14:32:20 ORA$AUTOTASK_MEDIUM_GROUP                     0             0
    03-NOV-2012 14:32:20 ORA$AUTOTASK_URGENT_GROUP                     0             0
    03-NOV-2012 14:32:20 _ORACLE_BACKGROUND_GROUP_                     0             0
    03-NOV-2012 14:32:20 ORA$AUTOTASK_SPACE_GROUP                      0             0
    03-NOV-2012 14:32:20 ORA$AUTOTASK_SQL_GROUP                        0             0
    03-NOV-2012 14:32:20 ORA$AUTOTASK_HEALTH_GROUP                     0             0
    03-NOV-2012 14:32:20 ORA$DIAGNOSTICS                               0             0
    03-NOV-2012 14:32:20 ORA$AUTOTASK_STATS_GROUP                      0             0

    420 rows selected.

    SQL>

Automatic SQL Tuning in Oracle Database 11g Release 2 (DBMS_AUTO_SQLTUNE)

Oracle Database 11g Release 2 (11.2.0.2 onward) includes some minor changes to the Automatic SQL Tuning feature introduced in Oracle 11g Release 1.
In the previous release, reports and amendments to the automatic tuning task parameters were performed using the DBMS_SQLTUNE package.
From 11.2.0.2 onward, this should be done using the DBMS_AUTO_SQLTUNE package, which requires the DBA role.

Note: The top-level enabling and disabling of the automated task is still done using the DBMS_AUTO_TASK_ADMIN package (see the "HUNG IN AUTO SQL TUNING TASK" post later on this page for an example).

    EXECUTE_AUTO_TUNING_TASK
    SET_AUTO_TUNING_TASK_PARAMETER
    REPORT_AUTO_TUNING_TASK

Related articles.

    SQL Tuning Advisor (10g)
    Automatic SQL Tuning in Oracle Database 11g Release 1

EXECUTE_AUTO_TUNING_TASK
The EXECUTE_AUTO_TUNING_TASK procedure and function manually initiate the SYS_AUTO_SQL_TUNING_TASK task.
The only difference between the two is that the function returns the task name, which is useful when you don't specify one, as it returns the system-generated name.

    CONN / AS SYSDBA

    SET SERVEROUTPUT ON
    DECLARE
      l_return VARCHAR2(50);
    BEGIN
      l_return := DBMS_AUTO_SQLTUNE.execute_auto_tuning_task;
      DBMS_OUTPUT.put_line(l_return);
    END;
    /
    EXEC_63842

    PL/SQL procedure successfully completed.

    SQL>

The EXECUTE_AUTO_TUNING_TASK procedure and function can only be called when logged on as SYS.

SET_AUTO_TUNING_TASK_PARAMETER

The SET_AUTO_TUNING_TASK_PARAMETER procedures are overloads that accept both string and numeric parameter values.

These replace DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER as the preferred way to amend the parameters for the SYS_AUTO_SQL_TUNING_TASK task.
The parameters that can be amended can be seen in the DBA_ADVISOR_PARAMETERS view, as queried in the examples below.

The following example shows both the original (DBMS_SQLTUNE) and preferred method (DBMS_AUTO_SQLTUNE) of changing the parameters for the SYS_AUTO_SQL_TUNING_TASK task.

    COLUMN parameter_value FORMAT A30

    SELECT parameter_name, parameter_value
    FROM   dba_advisor_parameters
    WHERE  task_name = 'SYS_AUTO_SQL_TUNING_TASK'
    AND    parameter_name = 'ACCEPT_SQL_PROFILES';

    PARAMETER_NAME                 PARAMETER_VALUE
    ------------------------------ ------------------------------
    ACCEPT_SQL_PROFILES            FALSE

    SQL>

    -- Pre 11.2.0.2
   
    BEGIN
      DBMS_SQLTUNE.set_tuning_task_parameter(
        task_name => 'SYS_AUTO_SQL_TUNING_TASK',
        parameter => 'ACCEPT_SQL_PROFILES',
        value     => 'TRUE');
    END;
    /

    SELECT parameter_name, parameter_value
    FROM   dba_advisor_parameters
    WHERE  task_name = 'SYS_AUTO_SQL_TUNING_TASK'
    AND    parameter_name = 'ACCEPT_SQL_PROFILES';

    PARAMETER_NAME                 PARAMETER_VALUE
    ------------------------------ ------------------------------
    ACCEPT_SQL_PROFILES            TRUE

    SQL>

    -- 11.2.0.2 Onward
   
    BEGIN
      DBMS_AUTO_SQLTUNE.set_auto_tuning_task_parameter(
        parameter => 'ACCEPT_SQL_PROFILES',
        value     => 'FALSE');
    END;
    /

    SELECT parameter_name, parameter_value
    FROM   dba_advisor_parameters
    WHERE  task_name = 'SYS_AUTO_SQL_TUNING_TASK'
    AND    parameter_name = 'ACCEPT_SQL_PROFILES';

    PARAMETER_NAME                 PARAMETER_VALUE
    ------------------------------ ------------------------------
    ACCEPT_SQL_PROFILES            FALSE

    SQL>

REPORT_AUTO_TUNING_TASK

The REPORT_AUTO_TUNING_TASK function replaces DBMS_SQLTUNE.REPORT_AUTO_TUNING_TASK as the preferred way to get the results of the SYS_AUTO_SQL_TUNING_TASK task.
Calling it with no parameters shows the results for the latest execution of the task.

    SET LONG 1000000

    SELECT DBMS_AUTO_SQLTUNE.report_auto_tuning_task
    FROM   dual;

    REPORT_AUTO_TUNING_TASK
    --------------------------------------------------------------------------------
    GENERAL INFORMATION SECTION
    -------------------------------------------------------------------------------
    Tuning Task Name                        : SYS_AUTO_SQL_TUNING_TASK
    Tuning Task Owner                       : SYS
    Workload Type                           : Automatic High-Load SQL Workload
    Execution Count                         : 32
    Current Execution                       : EXEC_63842
    Execution Type                          : TUNE SQL
    Scope                                   : COMPREHENSIVE
    Global Time Limit(seconds)              : 3600
    Per-SQL Time Limit(seconds)             : 1200

    REPORT_AUTO_TUNING_TASK
    --------------------------------------------------------------------------------
    Completion Status                       : COMPLETED
    Started at                              : 06/09/2014 09:45:57
    Completed at                            : 06/09/2014 09:46:06
    Number of Candidate SQLs                : 77
    Cumulative Elapsed Time of SQL (s)      : 5351

    -------------------------------------------------------------------------------
    There are no recommendations to improve the statements in the workload.

    -------------------------------------------------------------------------------

    SQL>

Flashback features of Oracle 11g

The following are the Flashback features in Oracle 11g:

1. Flashback Query
2. Flashback Database
3. Flashback Drop
4. Flashback Table


1. Flashback Query -

Suppose that you discover at 12:30 PM that the row for employee Chung was deleted from the employees table, and you know that at 9:30 AM the data for Chung was correctly stored in the database. You can use Oracle Flashback Query to examine the contents of the table as of 9:30 AM to find out what data was lost. If appropriate, you can restore the lost data.

The query below retrieves the state of the record for Chung at 9:30 AM on April 4, 2004.

Retrieving a Lost Row with Oracle Flashback Query

SELECT * FROM employees
AS OF TIMESTAMP
TO_TIMESTAMP('2004-04-04 09:30:00', 'YYYY-MM-DD HH:MI:SS')
WHERE last_name = 'Chung';

If you need to restore the lost record, you can combine the query above with an INSERT, as shown below.

INSERT INTO employees (
  SELECT * FROM employees
  AS OF TIMESTAMP
  TO_TIMESTAMP('2004-04-04 09:30:00', 'YYYY-MM-DD HH:MI:SS')
  WHERE last_name = 'Chung'
);

 
2. Flashback Database -

Flashback Database allows you to restore your database to a prior point in time by undoing all the changes that have taken place since that time. This is different from a point-in-time recovery in that you don't restore a backup and then roll forward; instead you "rewind" the database by applying flashback logs. This operation can be very fast, because you do not need to restore the backups, which in turn results in much less downtime following data corruption or human error.

Keep in mind that this is NOT a replacement for regular backups (preferably RMAN hot backups), as it DOES NOT protect you in the case of media failure, filesystem corruption, accidental deletion or anything else that affects the physical data files. It is used to undo logical data corruption, such as from a failed application upgrade. It can also be very useful in test databases where you want to test a process repeatedly, starting with an identical data set each time. There is overhead associated with generating and managing the flashback logs, but for many databases this is well worth it.

In order to enable Flashback Database, put the database in mount mode and run "alter database flashback on". The flashback logs are written to your FRA, as defined by the db_recovery_file_dest parameter. The db_flashback_retention_target parameter defines how long you wish to keep your flashback logs.
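
If the FRA and retention target are not already configured, they can be set along these lines (a minimal sketch; the size, location and retention values are illustrative, and the retention target is in minutes, so 1440 = 24 hours):

SQL> alter system set db_recovery_file_dest_size = 10G;
SQL> alter system set db_recovery_file_dest = '/u01/app/oracle/fra';
SQL> alter system set db_flashback_retention_target = 1440;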

Here is an example of enabling Flashback Database, creating a restore point and then flashing back to that restore point:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Database mounted.

SQL> alter database flashback on;
Database altered.
SQL> alter database open;
Database altered.
SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
YES

SQL> create restore point flashback_test;
Restore point created.
SQL> select name,time from v$restore_point;
NAME                 TIME
-------------------- ---------------------------------------------------------------------------
FLASHBACK_TEST       17-APR-12 09.17.46.000000000 AM

SQL> create table flashback_test as select * from all_objects;
Table created.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Database mounted.
SQL> flashback database to restore point flashback_test;
Flashback complete.

NOTE: You can also flash back to a given time or SCN, as well as to a restore point.

SQL> alter database open resetlogs;
Database altered.
SQL> select * from flashback_test;
select * from flashback_test
              *
ERROR at line 1:
ORA-00942: table or view does not exist

The table we created after the restore point is no longer there; the database was "rewound" to a prior point in time. 
Related to this is the guaranteed restore point, which can be created even if Flashback Database has not been enabled.

SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
NO

SQL> create restore point restore_point_test guarantee flashback database;
Restore point created.
SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
RESTORE POINT ONLY
SQL> shutdown immediate

Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Database mounted.
SQL> flashback database to restore point restore_point_test;
Flashback complete.
SQL> alter database open resetlogs;
Database altered.
SQL> drop restore point restore_point_test;

Once you are confident that you won’t be rolling back, make sure to drop the restore point or you will continue to generate flashback logs.
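
To keep an eye on flashback log space usage and how far back the database can currently be flashed back, V$FLASHBACK_DATABASE_LOG can be queried (a minimal sketch):

SQL> select oldest_flashback_time, retention_target, flashback_size from v$flashback_database_log;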


3. Flashback Drop - 

It allows you to recover a non-SYS table that was dropped by making use of the Recycle Bin. 
This is a virtual container where all dropped objects reside. Underneath the covers, the objects are occupying the same space as when they were created.
You can view your dropped tables by querying the new RECYCLEBIN view.
Objects in the Recycle Bin will remain in the database until the owner of the dropped objects decides to permanently remove them using the new PURGE command.


SQL> drop table drop_test;
Table dropped.
SQL> show recyclebin;
ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
---------------- ------------------------------ ------------ -------------------
DROP_TEST        BIN$veGmdD1Yy9/gQKjAYIkPIg==$0 TABLE        2012-04-17:10:21:10
SQL> flashback table drop_test to before drop;
Flashback complete.

SQL> drop table drop_test purge;
Table dropped.
SQL> show recyclebin;
SQL> flashback table drop_test to before drop;
flashback table drop_test to before drop
*
ERROR at line 1:
ORA-38305: object not in RECYCLE BIN
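
The Recycle Bin can also be inspected and emptied directly with SQL (a minimal sketch, assuming a dropped table is still in the bin; PURGE permanently removes the objects, so a flashback to before drop is no longer possible afterwards):

SQL> select original_name, object_name, type, droptime from recyclebin;
SQL> purge table drop_test;
SQL> purge recyclebin;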


4. Flashback Table -

This can be used to restore data in a table to a past point in time, based on a timestamp or SCN.
This is similar to Flashback Query in that it relies on the data being present in the UNDO tablespace.

SQL> create table flashback_test (col1 number);
Table created.
SQL> alter table flashback_test enable row movement;
Table altered.

--insert a bunch of rows

SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    1048161
SQL> select count(*) from flashback_test;
  COUNT(*)
----------
       192

SQL> delete from flashback_test;
192 rows deleted.

SQL> commit;
Commit complete.

SQL> flashback table flashback_test to scn 1048161;
Flashback complete.

SQL> select count(*) from flashback_test;
  COUNT(*)
----------
 192
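
The same table can also be flashed back by time rather than SCN (a minimal sketch; the 15-minute window is illustrative and the required undo data must still be available):

SQL> flashback table flashback_test to timestamp systimestamp - interval '15' minute;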

Saturday, August 30, 2014

Datapump 11g COMPRESSION Parameter

Prior to 11g, only METADATA_ONLY and NONE could be specified. With COMPRESSION=ALL, the dump file size can be greatly reduced, so there is no need to use a separate compression utility on the dump file afterwards.

METADATA_ONLY - Only the metadata is compressed.
DATA_ONLY     - Only the data is compressed; the metadata is left alone.
ALL           - Both the metadata and data are compressed.
NONE          - This is the default; no compression is performed.



Without compression parameter

expdp system/oracle schemas=scott directory=TEST_DIR dumpfile=scott.dmp logfile=scott.log



Using compression parameter

expdp system/oracle schemas=scott directory=TEST_DIR  dumpfile=scott1.dmp logfile=scott1.log compression=all
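
As a quick check of the space saving, the resulting dump files can be compared at the OS level (a hedged sketch; the path is hypothetical and should be replaced with whatever the TEST_DIR directory object points to):

ls -lh /u01/app/oracle/dump/scott.dmp /u01/app/oracle/dump/scott1.dmp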

Recovery Manager (Data Recovery Advisor)

One of the great new features of RMAN in Oracle 11g is the new tool named Data Recovery Advisor. This tool automates the diagnosis of data failures and corruption, advises on the recovery steps, and performs automatic recovery of database failures. It can be used from the GUI as well as from the CLI.


The Data Recovery Advisor automatically diagnoses corruption or loss of persistent data on disk, determines the appropriate repair options, and executes repairs at the user's request. This reduces the complexity of the recovery process, thereby reducing the Mean Time To Recover (MTTR). It is driven by four RMAN commands (a combined sketch follows this list):

            List Failure   # Lists the results of previously executed failure assessments. Revalidates existing failures and closes them, if possible.

            Advise Failure # Presents manual and automatic repair options.

            Repair Failure # Automatically fixes failures by running the optimal repair option suggested by ADVISE FAILURE. Revalidates existing failures when completed.

            Change Failure # Enables you to change the status of failures.
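
Put together at the RMAN prompt, a typical Data Recovery Advisor session follows this pattern (a minimal sketch; the failure number in the CHANGE FAILURE example is illustrative):

RMAN> list failure;
RMAN> advise failure;
RMAN> repair failure preview;
RMAN> repair failure;
RMAN> change failure 5 priority low;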

Scenario based explanation

Step:1 Tablespace creation

create tablespace tbs1 datafile '/u01/app/oracle/product/11.2.0.3/oradata/prod/tbs1.dbf' size 5m;
create tablespace tbs2 datafile '/u01/app/oracle/product/11.2.0.3/oradata/prod/tbs2.dbf' size 5m;

 Step:2 creating table in the tablespace

create table tab1 tablespace tbs1 as select * from scott.emp;
create table tab2 tablespace tbs2 as select * from scott.emp;
commit;
alter system switch logfile;

 Step:3 checking block in the tablespace

select segment_name, header_file, header_block
from   dba_segments
where  segment_name = 'TAB1';

Step:4 taking backup using rman

backup database plus archivelog;

Step:5 checking through DBV  tool

cd $ORACLE_BASE/oradata/prod
dbv file=tbs1.dbf  blocksize=8192

Step:6 corrupt the datafile

dd if=/dev/urandom of=tbs1.dbf bs=8192 seek=130 count=1 conv=notrunc

Step: 7 once again check using dbv Tool

dbv file=tbs1.dbf  blocksize=8192 

Step:8 remove the datafile in os level

cd $ORACLE_BASE/oradata/prod/

rm tbs2.dbf

Step:9 flush the buffer

alter system flush buffer_cache;

Step:10 We are facing the issue here

select count(*) from tab1;

Step:11 shutdown database

Step:12 startup mount

Step:13 alter database datafile  7 offline;

Step:14 alter database open;

Step:15 checking the corruption

 select * from v$database_block_corruption;
 select * from v$recover_file;

Step:16 list failure;

Step:17 advise failure all;

Step:18 repair failure preview;

Step:19 repair failure;

Step:20 select * from v$database_block_corruption;

Step:21 select * from tab1; select * from tab2;


      

Wednesday, August 20, 2014

ORA-03113 and ORA-03114 Error when trying to insert on Global Temporary table.

Issue :


I was getting ORA-03113 and ORA-03114  Error when trying to insert on Global Temporary table.


Reproduce the Issue:


Create Global Temporary table:


CREATE GLOBAL TEMPORARY TABLE my_temp_table (
  column1  NUMBER,
  column2  NUMBER
) ON COMMIT DELETE ROWS;


Insert Record on Global Temporary table:


SQL> insert into my_temp_table values (12,12);
insert into my_temp_table values (12,12)
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 3283
Session ID: 1530 Serial number: 4921


Cause:


There may be many reasons for an ORA-03113 error. In my case, the default temporary tablespace for the SYS user was not set. Because of this, we were not able to insert any records into the global temporary table.


SQL> select USERNAME,TEMPORARY_TABLESPACE from dba_users where username = 'SYS';
USERNAME                       TEMPORARY_TABLESPACE
------------------------------ ------------------------------
SYS                            TEMP_TS_GROUP

In our case, no temporary tablespace was associated with this group:



SQL> select * from dba_tablespace_groups;
no rows selected


Solution:


Assign a separate temporary tablespace to the SYS user, or add a temporary tablespace to the TEMP_TS_GROUP group (as sketched after the example below).


SQL> alter user SYS  temporary tablespace TEMP;
User altered.
SQL> insert into my_temp_table values (12,12);
1 row created.
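
Alternatively, instead of re-pointing SYS to a different temporary tablespace, an existing temporary tablespace can be added to the group (a minimal sketch, assuming TEMP is an existing temporary tablespace):

SQL> alter tablespace TEMP tablespace group TEMP_TS_GROUP;
SQL> select * from dba_tablespace_groups;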





...WISH YOU GOOD LUCK...


Saturday, August 16, 2014

HUNG IN AUTO SQL TUNING TASK


Scenario:

In an 11.2.0.3 database, we get an alert in EM related to the metric "Generic Operational Error" or "Generic Operational Error Status". Occasionally, when running Automatic SQL Tuning, the following messages may appear in the alert log:


Error :

Sat Aug 16 07:10:54 2014
Process 0x0x85865e800 appears to be hung in Auto SQL Tuning task
Current time = 1408187453, process death time = 1408187402
Attempting to kill process 0x0x85865e800 with OS pid = 2289
OSD kill succeeded for process 0x85865e800
Sat Aug 16 07:13:00 2014

Solution:

The explanation is that the AUTO SQL TUNING TASK has been over-running, and as a protective measure it is automatically killed. As such, there is no fix for this; the solution is to disable the job and manually execute it when needed. Here is how to do that:

To Disable the job:

BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/


To Enable it when needed.

BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/
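
When a manual run is needed, the task can be executed with DBMS_AUTO_SQLTUNE (a minimal sketch, run as SYS; this mirrors the DBMS_AUTO_SQLTUNE example earlier on this page):

SET SERVEROUTPUT ON
DECLARE
  l_task_name VARCHAR2(50);
BEGIN
  l_task_name := DBMS_AUTO_SQLTUNE.execute_auto_tuning_task;
  DBMS_OUTPUT.put_line('Executed: ' || l_task_name);
END;
/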


Reference:

“How to Avoid or Prevent Hung Auto Tuning Task Messages [ID 1344499.1]”

Tuesday, August 12, 2014

Query Result Cache in Oracle Database 11g

Oracle 11g allows the results of SQL queries to be cached in the SGA and reused to improve performance.

Setup
=====

Set up the following schema objects to see how the SQL query cache works.

    CREATE TABLE qrc_tab (
      id  NUMBER
    );

    INSERT INTO qrc_tab VALUES (1);
    INSERT INTO qrc_tab VALUES (2);
    INSERT INTO qrc_tab VALUES (3);
    INSERT INTO qrc_tab VALUES (4);
    INSERT INTO qrc_tab VALUES (5);

    CREATE OR REPLACE FUNCTION slow_function(p_id  IN  qrc_tab.id%TYPE)
      RETURN qrc_tab.id%TYPE DETERMINISTIC AS
    BEGIN
      DBMS_LOCK.sleep(1);
      RETURN p_id;
    END;
    /

    SET TIMING ON

The function contains a one second sleep so we can easily detect if it has been executed by checking the elapsed time of the query.

Test It
=======

Query the test table using the slow function and check the elapsed time. Each run takes approximately five seconds: a one-second sleep for each of the five rows queried.

    SELECT slow_function(id) FROM qrc_tab;

    SLOW_FUNCTION(ID)
    -----------------
                    1
                    2
                    3
                    4
                    5

    5 rows selected.

    Elapsed: 00:00:05.15
    SQL>

Adding the RESULT_CACHE hint to the query tells the server to attempt to retrieve the information from the result cache. If the information is not present, it will cache the results of the query provided there is enough room in the result cache. Since we have no cached results, we would expect the first run to take approximately five seconds, but subsequent runs to be much quicker.

    SELECT /*+ result_cache */ slow_function(id) FROM qrc_tab;

    SLOW_FUNCTION(ID)
    -----------------
                    1
                    2
                    3
                    4
                    5

    5 rows selected.

    Elapsed: 00:00:05.20

    SELECT /*+ result_cache */ slow_function(id) FROM qrc_tab;

    SLOW_FUNCTION(ID)
    -----------------
                    1
                    2
                    3
                    4
                    5

    5 rows selected.

    Elapsed: 00:00:00.15
    SQL>

RESULT_CACHE_MODE
=================

The default action of the result cache is controlled by the RESULT_CACHE_MODE parameter. When it is set to MANUAL, the RESULT_CACHE hint must be used for a query to access the result cache.

    SHOW PARAMETER RESULT_CACHE_MODE

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    result_cache_mode                    string      MANUAL
    SQL>

If we set the RESULT_CACHE_MODE parameter to FORCE, the result cache is used by default, but we can bypass it using the NO_RESULT_CACHE hint.

    ALTER SESSION SET RESULT_CACHE_MODE=FORCE;

    SELECT slow_function(id) FROM qrc_tab;

    SLOW_FUNCTION(ID)
    -----------------
                    1
                    2
                    3
                    4
                    5

    5 rows selected.

    Elapsed: 00:00:00.14

    SELECT /*+ no_result_cache */ slow_function(id) FROM qrc_tab;

    SLOW_FUNCTION(ID)
    -----------------
                    1
                    2
                    3
                    4
                    5

    5 rows selected.

    Elapsed: 00:00:05.14
    SQL>

How to determine if an index needs to be rebuilt.

We can get an idea of the current state of the index by using the ANALYZE INDEX VALIDATE STRUCTURE command.

The VALIDATE STRUCTURE command can be safely executed without affecting the optimizer.

The VALIDATE STRUCTURE command populates the SYS.INDEX_STATS table only.


SQL> ANALYZE INDEX INDEX1 VALIDATE STRUCTURE;

Statement processed.

SQL> SELECT name, height, lf_rows, lf_blks, del_lf_rows FROM index_stats;

NAME                 HEIGHT    LF_ROWS    LF_BLKS DEL_LF_ROWS
---------------- ---------- ---------- ---------- -----------
INDEX1                    2          1          3           6

1 row selected.

There are two rules of thumb to help determine if the index needs to be rebuilt (a query combining both checks follows this list).

1)     If the index has height greater than four, rebuild the index.

2)     The deleted leaf rows should be less than 20%.
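
Both rules can be checked together against INDEX_STATS immediately after the ANALYZE ... VALIDATE STRUCTURE (a minimal sketch; INDEX_STATS only holds the result of the most recent validation in the current session):

SELECT name,
       height,
       lf_rows,
       del_lf_rows,
       ROUND(del_lf_rows * 100 / GREATEST(lf_rows, 1), 2) AS pct_deleted
FROM   index_stats;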

If it is determined that the index needs to be rebuilt, this can easily be done with the following command:

ALTER INDEX <index_name> REBUILD;   (or ALTER INDEX <index_name> REBUILD ONLINE;)

It is not recommended to execute this command during normal operating hours.

Rebuilding an index uses the base table as its data source, which requires a lock on the table, and the index is also unavailable during the rebuild.

In this example, the HEIGHT column clearly shows the value 2. On the height rule, this is not a good candidate for rebuilding.

 

Monday, August 4, 2014

Oracle GoldenGate Veridata Installation and Configuration




Oracle GoldenGate Veridata is a product that enables us to compare two sets of data and identify and report on data that is out of synchronization. This is helpful in a 24×7 replication environment where taking downtime to compare sets of data is not an option, and Veridata can run concurrently with the Extract and Replicat processes.
Let us look at the different components of GoldenGate Veridata.

Veridata Server - Coordinates the different Veridata tasks, performs data comparisons and sorting where required, and handles reporting.
Repository - Uses an Oracle or MySQL database to store data related to user preferences, configuration information, job runtime statistics, etc.
Veridata Agent - Makes the required database calls on behalf of the Veridata server to fetch and return the rows required for comparisons.
Veridata Web - A browser-based interface to create comparison configurations, run jobs and view reports.

Note: Veridata also has a command-line interface called vericom, which performs most of the tasks that can be performed via the web interface.







We are installing Veridata on Linux x86. I have downloaded the file V35574-01.zip from the E-Delivery web site.

The Veridata Server requires a database user with the required privileges and also a database role called VERIDATA_ROLE. We are using our existing database user, GGS_OWNER, which we used in earlier tutorials, and have granted the role to this user.
SQL> create role veridata_role;
Role created.
SQL> grant create session, create table, create view, create procedure, create synonym to veridata_role;
Grant succeeded.

We then unzip this file and it creates another file called  GoldenGate_Veridata_redhatAS40_x86.sh
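
From there, the installer is typically launched from the shell (a hedged sketch of the unzip-and-run sequence; the actual installation options come from the Veridata installer prompts themselves):

unzip V35574-01.zip
chmod +x GoldenGate_Veridata_redhatAS40_x86.sh
./GoldenGate_Veridata_redhatAS40_x86.sh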