DOYENSYS Knowledge Portal




We welcome you to visit our Doyensys Knowledge Portal.




Monday, February 15, 2016

Unable To View Workflow Status Diagram

Refer to My Oracle Support Note ID 746806.1.


Applies to:

Oracle Workflow Cartridge – Version: 11.5.9 to 12.0.6
This problem can occur on any platform.

Symptoms
When the Status Diagram is clicked, the following message appears: "You have encountered an unexpected error. Please contact the System Administrator for assistance."

The issue can be reproduced at will with the following steps:

    1. Log in as SYSADMIN
    2. Select the Workflow Administrator responsibility
    3. Select Status Monitor
    4. Provide the details of the item type
    5. Click on Status Diagram. The error page now appears.

Setting the FND: Diagnostics profile option for the user who sees the error changes the message to "You have encountered an unexpected error. Click here for details. Please contact the System Administrator for assistance."
When the Click Here link is clicked, the error stack trace shows:
oracle.apps.fnd.framework.OAException: java.lang.NullPointerException

## Detail 0 ##
java.lang.NullPointerException
java.lang.NullPointerException

or
## Detail 0 ##
java.lang.NullPointerException
at java.util.Hashtable.get(Hashtable.java:315)
at oracle.apps.fnd.util.OracleTimezone.getTimezoneNameFromCode(OracleTimezone.java:171)
at oracle.apps.fnd.framework.OANLSServices.getTimezoneName(OANLSServices.java:1611)
at oracle.apps.fnd.wf.monitor.webui.GraphMonitorCO.processRequest(GraphMonitorCO.java:107)
at oracle.apps.fnd.framework.webui.OAWebBeanHelper.processRequest(OAWebBeanHelper.java:525)


Cause:


The Server Timezone profile option at Site level is not set.

The NullPointerException is raised inside OA Framework code that the Workflow code calls.
iAS needs to instantiate the class file and, if it finds the profile value to be null, it throws a NullPointerException. Once the profile is set, the JSP is cached and subsequent requests simply read the cached class. The error started occurring after EBS was made timezone-aware, so the profile needs to be set to the time zone of the database to provide the offset calculation for user connections.


Solution:


1. Set the profile option Server Timezone at Site level to the value of the database time zone:
select DBTIMEZONE from dual;
2. Bounce the Apache server.
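
For reference, the current Site-level value can be checked before and after the change. This is a minimal sketch; it assumes the standard FND profile tables and that the profile's display name is exactly "Server Timezone", so verify both in your instance.

-- Site level corresponds to level_id 10001; the display name below is an assumption
SELECT p.user_profile_option_name, v.profile_option_value
  FROM fnd_profile_options_vl p, fnd_profile_option_values v
 WHERE p.profile_option_id = v.profile_option_id
   AND v.level_id = 10001
   AND p.user_profile_option_name = 'Server Timezone';

-- Value the profile option should match
SELECT DBTIMEZONE FROM dual;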

ORA-07445: exception encountered: core dump [qkxrPXformUnm()+46] [SIGSEGV]

While running Supplier Payment History in Oracle EBS R12, we get this error.

Error:
====


ORA-07445: exception encountered: core dump [qkxrPXformUnm()+46] [SIGSEGV]

Solution:
 ======


To fix this, add the parameter below to the database parameter file and restart the database.

_replace_virtual_columns= false
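
When the instance runs from an spfile, the same hidden parameter can be set from SQL*Plus instead of editing a pfile. This is a sketch only; hidden (underscore) parameters must be enclosed in double quotes and are normally changed only under Oracle Support guidance.

-- Assumes an spfile is in use; the change takes effect after a restart
ALTER SYSTEM SET "_replace_virtual_columns" = FALSE SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP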

AutoConfig could not successfully instantiate the following files adcrdb.sh INSTE8

DB autoconfig Error:
===============


[AutoConfig Error Report]
The following report lists errors AutoConfig encountered during each
phase of its execution. Errors are grouped by directory and phase.


[INSTANTIATE PHASE]
AutoConfig could not successfully instantiate the following files:
Directory: /d01/UAT/db/tech_st/11.2.0.3/appsutil/install/UAT_tsebsdb
adcrdb.sh INSTE8
AutoConfig is exiting with status 1


Cause:
=====


The adcrdb.sh template is missing on the target database tier.

Solution:
======
Check that adcrdb.sh exists under $ORACLE_HOME/appsutil/template on the target DB node; if it is not found, copy it from the source DB node and rerun adconfig.pl.

FNDCPASS was not able to decrypt password for user ‘ANONYMOUS’ during applsys password change.

Cause:
=====
 
When changing the APPS password with FNDCPASS, a password that contains special characters must be enclosed in double quotes (" "); otherwise FNDCPASS fails with this error.

Solution:
======

FNDCPASS apps/"Fancy$3" 0 Y system/oracle SYSTEM APPLSYS "Fancy$123"

ORA-609 : opiodr aborting process unknown ospid

Cause:

=====

The ORA-609 error is thrown when a client connection of any kind fails to complete, or aborts the connection
process, before the server process is completely spawned.
Beginning with 10gR2, the default value for the inbound connect timeout is 60 seconds.


This can also be triggered when a DB session is killed/aborted manually from the OS prompt.


Solution:
======

Increase the value of INBOUND_CONNECT_TIMEOUT in both the listener.ora and the server-side sqlnet.ora file as a preventive measure.
If the problem is due to connection timeouts, an increase in the following parameters should eliminate or reduce the occurrence of ORA-609 errors.


Sqlnet.ora: SQLNET.INBOUND_CONNECT_TIMEOUT=180

 
Listener.ora: INBOUND_CONNECT_TIMEOUT_listener_name=120
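
The value currently in effect on the listener side can be confirmed from the Listener Control utility (this modifier is listed in the LSNRCTL reference further down this page):

LSNRCTL> show inbound_connect_timeout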

Backup Fails Because of Control File Enqueue: ORA-00230

Error:
=====
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 12/30/2015 22:48:44

ORA-00230: operation disallowed: snapshot control file enqueue unavailable


In this scenario, a backup job fails because RMAN cannot make a snapshot control file.

Cause:
=====
When RMAN needs to back up or resynchronize from the control file, it first creates a snapshot or consistent image of the control file. If one RMAN job is already backing up the control file while another needs to create a new snapshot control file, then you may see the following message:

Waiting for snapshot control file enqueue

Under normal circumstances, a job that must wait for the control file enqueue waits for a brief interval and then successfully obtains the enqueue. RMAN makes up to five attempts to get the enqueue and then fails the job. The conflict is usually caused when two jobs are both backing up the control file, and the job that first starts backing up the control file waits for service from the media manager.

Solution:
======

To determine which job is holding the conflicting enqueue:
1.      Start a new SQL*Plus session and log in as SYSDBA.
2.      Execute the following query to identify the session holding the enqueue:

                    SELECT s.SID, USERNAME AS "User", PROGRAM, MODULE,
                    ACTION, LOGON_TIME "Logon", l.*
                    FROM V$SESSION s, V$ENQUEUE_LOCK l
                    WHERE l.SID = s.SID
                    AND l.TYPE = 'CF'
                    AND l.ID1 = 0 AND l.ID2 = 2;
                    If a lock is held, the output looks like this:

                    SID User Program               Module                          Action  Logon
                    --- ---- --------------------- ------------------------------- ------- ---------
                      4 SYS  rman@rman (TNS V1-V3) backup full datafile: c10000210 STARTED 16-NOV-15

This situation typically arises when a job is writing to a tape drive but the drive is waiting for a new tape to be inserted. If you start a new job in the meantime, you will probably receive the enqueue message, because the first job cannot complete until the new tape is loaded.
After you have determined which job is creating the enqueue, you can do one of the following:

==> Wait until the job holding the enqueue completes
==> Cancel the current job and restart it after the job holding the enqueue completes
==> Cancel the job creating the enqueue
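
If you decide to cancel the job holding the enqueue and its RMAN client cannot be stopped cleanly, the holding session can be killed from SQL*Plus. The sketch below simply generates the KILL SESSION statement for the SID reported by the query above (SID 4 in the sample output); review the generated statement before running it.

SELECT 'alter system kill session ''' || sid || ',' || serial# || ''' immediate;' AS kill_cmd
  FROM v$session
 WHERE sid = 4;   -- replace 4 with the SID returned by the enqueue query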

Friday, February 12, 2016

ICM Log Files location can be found from SQL Query


SELECT logfile_name FROM (
    SELECT fcp.logfile_name, fcp.concurrent_process_id, fcp.last_update_date,
           fcp.creation_date, fcp.session_id, fcp.process_status_code
    FROM   fnd_concurrent_processes fcp, fnd_concurrent_queues fcq
    WHERE  fcp.concurrent_queue_id  = fcq.concurrent_queue_id
    AND    fcp.queue_application_id = fcq.application_id
    AND    fcq.manager_type = '0'
    ORDER BY fcp.session_id DESC) a
WHERE ROWNUM < 4;




Tuesday, February 9, 2016

Renaming constraints

    SQL> show user; 
    USER is "MAHE" 
     
    SQL> select dbms_metadata.get_ddl('TABLE', 'T1') from dual; 
     
    DBMS_METADATA.GET_DDL('TABLE','T1') 
    --------------------------------------------------------- 
     
      CREATE TABLE "MAHE"."T1" 
       ( "A" NUMBER 
       ) SEGMENT CREATION IMMEDIATE 
     
     
    SQL> alter table T1 add constraint t1_mypk primary key (a); 
     
    Table altered. 
     
    SQL> select dbms_metadata.get_ddl('TABLE', 'T1') from dual; 
     
    DBMS_METADATA.GET_DDL('TABLE','T1') 
    --------------------------------------------------------- 
     
      CREATE TABLE "MAHE"."T1" 
       ( "A" NUMBER, 
      CONSTRAINT "T1_MYPK" PRIMARY KEY ("A") 
     
     
    SQL> alter table T1 rename constraint T1_MYPK to T1_PK;  
     
    Table altered. 
     
    SQL> select dbms_metadata.get_ddl('TABLE', 'T1') from dual; 
     
    DBMS_METADATA.GET_DDL('TABLE','T1') 
    --------------------------------------------------------- 
     
      CREATE TABLE "MAHE"."T1" 
       ( "A" NUMBER, 
      CONSTRAINT "T1_PK" PRIMARY KEY ("A") 
     
     
      DONE****
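
Note that renaming the constraint does not rename the index that enforces it. If the primary key index was created implicitly with the constraint's old name, it can be renamed separately; a sketch, assuming the index is still called T1_MYPK:

    SQL> select index_name from user_indexes where table_name = 'T1';
    SQL> alter index T1_MYPK rename to T1_PK;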

Renaming redolog members

To do this you have to shut down the database, move the redo log files to the new destination at the OS level, start the database in mount mode, rename the log members, and then open the database.

    [oracle@localhost admin]$ sqlplus / as sysdba 
     
    SQL*Plus: Release 12.1.0.1.0 Production on Wed Mar 12 16:16:04 2014 
     
    Copyright (c) 1982, 2013, Oracle.  All rights reserved. 
     
     
    Connected to: 
    Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production 
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options 
     
    SQL> set lines 180                         
    SQL> col member format a50 
    SQL> select GROUP#, MEMBER, CON_ID from V$LOGFILE; 
     
        GROUP# MEMBER        CON_ID 
    ---------- -------------------------------------------------- ---------- 
      1 /app/oracle/oradata/TEST/redo01.log         0 
      2 /app/oracle/oradata/TEST/redo02.log         0 
      3 /app/oracle/oradata/TEST/redo03.log         0 
     
    SQL> shutdown immediate; 
    Database closed. 
    Database dismounted. 
    ORACLE instance shut down. 
    SQL> host mv /app/oracle/oradata/TEST/redo01.log /app/oracle/oradata/TEST/redo01_renamed.log 
     
    SQL> host mv /app/oracle/oradata/TEST/redo02.log /app/oracle/oradata/TEST/redo02_renamed.log 
     
    SQL> host mv /app/oracle/oradata/TEST/redo03.log /app/oracle/oradata/TEST/redo03_renamed.log 
     
    SQL> host ls -l /app/oracle/oradata/TEST/*log 
    -rw-r-----. 1 oracle oinstall 52429312 Feb  7 15:59 /app/oracle/oradata/TEST/redo01_renamed.log 
    -rw-r-----. 1 oracle oinstall 52429312 Mar  3 13:00 /app/oracle/oradata/TEST/redo02_renamed.log 
    -rw-r-----. 1 oracle oinstall 52429312 Mar 12 16:17 /app/oracle/oradata/TEST/redo03_renamed.log 
     
    SQL> startup mount; 
    ORACLE instance started. 
     
    Total System Global Area  626327552 bytes 
    Fixed Size      2291472 bytes 
    Variable Size    473958640 bytes 
    Database Buffers   146800640 bytes 
    Redo Buffers      3276800 bytes 
    Database mounted. 
    SQL> alter database rename file '/app/oracle/oradata/TEST/redo01.log','/app/oracle/oradata/TEST/redo02.log','/app/oracle/oradata/TEST/redo03.log' 
      2  to '/app/oracle/oradata/TEST/redo01_renamed.log','/app/oracle/oradata/TEST/redo02_renamed.log','/app/oracle/oradata/TEST/redo03_renamed.log'; 
     
    Database altered. 
     
    SQL> alter database open; 
     
    Database altered. 
     
    SQL> select GROUP#, MEMBER, CON_ID from V$LOGFILE; 
     
        GROUP# MEMBER        CON_ID 
    ---------- -------------------------------------------------- ---------- 
      1 /app/oracle/oradata/TEST/redo01_renamed.log        0 
      2 /app/oracle/oradata/TEST/redo02_renamed.log        0 
      3 /app/oracle/oradata/TEST/redo03_renamed.log        0  

Done*****

Block change tracking for faster Incremental RMAN backups


The details of blocks changed since the last backup are recorded in the block change tracking file. RMAN uses this file to identify the changed blocks and back up only those, instead of scanning all blocks, which saves time and space.

The background process Block Change Tracking Writer (CTWR) writes the modified block details to the block change tracking file.

The change tracking file is read during backups and the required blocks are accessed directly.

To enable block change tracking 

alter database enable block change tracking using file '/u02/oracle/oradata/GOLD/block_change_track.dbf';

When using OMF, block change tracking can be enabled without specifying a file name:

alter database enable block change tracking;

The change tracking file is then created in the directory specified by DB_CREATE_FILE_DEST.


Now check:

col filename for a70
select filename,status from v$block_change_tracking;

FILENAME                                                               STATUS
---------------------------------------------------------------------- ----------
/u02/oracle/oradata/GOLD/block_change_track.dbf                        ENABLED


To Disable
-----------------------

SQL>alter database disable block change tracking;


To Check
-----------------------

select status from V$BLOCK_CHANGE_TRACKING;
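
To confirm that subsequent incremental backups actually use the tracking file, V$BACKUP_DATAFILE can be checked after the next incremental backup; a minimal sketch:

-- USED_CHANGE_TRACKING = YES means RMAN read the tracking file for that datafile
select file#, incremental_level, used_change_tracking
from v$backup_datafile
where incremental_level > 0
order by completion_time desc;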

Solaris EM12c Agent Unreachable with Error “peer not authenticated”

./emctl status agent
Oracle Enterprise Manager Cloud Control 12c Release 2 
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Status agent Failure:unable to connect to http server at https://myhost.mydomain.ge:3872/emd/lifecycle/main/. [peer not authenticated]


Solution: Stop the agent:

$ ./emctl stop agent

Verify that the java process has stopped:

  $ ps -ef |grep java

If the java process is still running, you will see something like the following; kill it manually:

$AGENT_HOME/core/12.1.0.2.0/jdk/bin/sparcv9/java
$kill -9 pid

Update Config File:

$cd $AGENT_HOME/agent_inst/sysman/config
$cp s_jvm_options.opt s_jvm_options.opt.bkp

Add the following line to s_jvm_options.opt

-Dsun.security.pkcs11.enable-solaris=false

Restart Agent and Test Upload:

$ cd $AGENT_HOME/bin
$ ./emctl start agent
$ ./emctl upload agent

Suspend and Resume in Oracle Database


The ALTER SYSTEM SUSPEND statement stops all input and output (I/O) to datafiles (file header and file data) and control files. The suspended state lets us back up a database without I/O interference. When the database is suspended, all preexisting I/O operations are allowed to complete and any new database accesses are placed in a queued state. The suspend command is not specific to an instance. In an Oracle Real Application Clusters environment, when we issue the suspend command on one system, internal locking mechanisms propagate the halt request across instances, thereby quiescing all active instances in a given cluster. However, if a new instance is started while another instance is being suspended, the new instance will not be suspended.

The ALTER SYSTEM RESUME statement resumes normal database operations. The SUSPEND and RESUME commands can be issued from different instances. For example, if instances 1, 2, and 3 are running, and we issue an ALTER SYSTEM SUSPEND statement from instance 1, then we can issue a RESUME statement from instance 1, 2, or 3 with the same effect.

The suspend/resume feature is not a suitable substitute for normal shutdown operations, because copies of a suspended database can contain uncommitted updates.

To check

SQL> alter system suspend;
System altered

SQL> select database_status  from v$instance;

DATABASE_STATUS
------------------------
SUSPENDED

SQL> alter system resume ;
System altered

SQL> select database_status from v$instance ;

DATABASE_STATUS
-------------------------
ACTIVE

Locking statistics for a table

You can lock the statistics of a table if you do not want it to be analyzed automatically by the scheduled statistics jobs.

exec dbms_stats.lock_table_stats('&owner','&table name');
exec dbms_stats.lock_table_stats('GOLD','TEST');

To list all tables with locked statistics in a particular schema 

select table_name, stattype_locked from dba_tab_statistics where owner ='&owner' and stattype_locked is not null;

To unlock the stats of a particular table

exec DBMS_STATS.UNLOCK_TABLE_STATS('&owner','&table name');

To generate the unlock commands for all tables with locked stats in a particular schema:

select 'exec DBMS_STATS.UNLOCK_TABLE_STATS ('''|| owner ||''','''|| table_name ||''');' from dba_tab_statistics where owner = '&owner' and stattype_locked is not null;

To check:

STATTYPE_LOCKED is NULL when the stats are not locked and ALL when the stats are locked.

SQL> SELECT TABLE_NAME , stattype_locked FROM dba_tab_statistics where owner = 'GOLD';
TABLE_NAME                     STATT
------------------------------ -----
TEST                           ALL      <--- locked
TEST1                                   <--- unlocked
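
The same can be done for an entire schema in one call; a sketch using the schema-level DBMS_STATS procedures:

exec dbms_stats.lock_schema_stats('GOLD');
exec dbms_stats.unlock_schema_stats('GOLD');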


Friday, February 5, 2016

Query to find Concurrent Program Information


SELECT cpv.user_concurrent_program_name "Concurrent Program Name",
       cpv.concurrent_program_name      "Program Short Name",
       efv.application_name             "Application",
       cpv.enabled_flag                 "Enabled Flag",
       cpv.output_file_type             "Output Format",
       fu.user_name                     "Created By (userid)",
       (SELECT meaning
          FROM fnd_lookup_values_vl flv
         WHERE UPPER (flv.lookup_type) = 'CP_EXECUTION_METHOD_CODE'
           AND flv.Lookup_code = efv.execution_method_code
       )                                "Execution Method",
       efv.executable_name              "Executable Name",
       efv.execution_file_name          "Execution Filename"
  FROM fnd_executables_form_v      efv,
       fnd_concurrent_programs_vl  cpv,
       fnd_user                    fu
 WHERE efv.executable_id  = cpv.executable_id
   AND efv.application_id = cpv.application_id
   AND cpv.created_by     = fu.user_id
   AND cpv.user_concurrent_program_name = '&program_name' -- // change it
 ORDER BY cpv.user_concurrent_program_name;

List all the registered concurrent programs by module using a SQL script


SELECT   SUBSTR(a.application_name,1,60) Application_NAME
,        b.application_short_name SHORT_NAME
,        DECODE(SUBSTR(cp.user_concurrent_program_name,4,1),':'
         ,      'Concurrent Manager Executable'
         ,      'Subprogram or Function') TYPE
,        SUBSTR(d.concurrent_program_name,1,16) PROGRAM 
,        SUBSTR(cp.user_concurrent_program_name,1,55) USER_PROGRAM_NAME
FROM     applsys.FND_CONCURRENT_PROGRAMS_TL cp, applsys.FND_CONCURRENT_PROGRAMS d, applsys.FND_APPLICATION_TL a, applsys.fnd_application b
WHERE    cp.application_id = a.application_id
AND      d.CONCURRENT_PROGRAM_ID = cp.CONCURRENT_PROGRAM_ID 
AND      a.APPLICATION_ID = b.APPLICATION_ID
AND      b.application_short_name LIKE UPPER('PA')
UNION ALL
SELECT   SUBSTR(a.application_name,1,60) c1
,        b.application_short_name c2
,        'Form Executable' c3
,        SUBSTR(f.form_name,1,16) c4
,        SUBSTR(d.user_form_name,1,55) c5
FROM     applsys.fnd_form f, applsys.FND_APPLICATION_TL a, applsys.fnd_application b, applsys.FND_FORM_TL d
WHERE    f.application_id = a.application_id
AND      d.FORM_ID = f.FORM_ID
AND      a.APPLICATION_ID = b.APPLICATION_ID
AND      b.application_short_name LIKE UPPER('PA')
ORDER BY 1,2,3,4;

Wednesday, February 3, 2016

Query for Archive generation/hour for last one week


SELECT * FROM (
SELECT * FROM (
SELECT TO_CHAR(FIRST_TIME, 'DD/MM') AS "DAY"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '00', 1, 0)), '999') "00:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '01', 1, 0)), '999') "01:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '02', 1, 0)), '999') "02:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '03', 1, 0)), '999') "03:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '04', 1, 0)), '999') "04:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '05', 1, 0)), '999') "05:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '06', 1, 0)), '999') "06:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '07', 1, 0)), '999') "07:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '08', 1, 0)), '999') "08:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '09', 1, 0)), '999') "09:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '10', 1, 0)), '999') "10:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '11', 1, 0)), '999') "11:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '12', 1, 0)), '999') "12:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '13', 1, 0)), '999') "13:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '14', 1, 0)), '999') "14:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '15', 1, 0)), '999') "15:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '16', 1, 0)), '999') "16:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '17', 1, 0)), '999') "17:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '18', 1, 0)), '999') "18:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '19', 1, 0)), '999') "19:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '20', 1, 0)), '999') "20:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '21', 1, 0)), '999') "21:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '22', 1, 0)), '999') "22:00"
, TO_NUMBER(SUM(DECODE(TO_CHAR(FIRST_TIME, 'HH24'), '23', 1, 0)), '999') "23:00"
FROM V$LOG_HISTORY
WHERE extract(year FROM FIRST_TIME) = extract(year FROM sysdate)
GROUP BY TO_CHAR(FIRST_TIME, 'DD/MM')
) ORDER BY TO_DATE(extract(year FROM sysdate) || DAY, 'YYYY DD/MM') DESC
) WHERE ROWNUM < 8;

Archive Log Generation - Daily & Hourly Basis

Archivelog generation on a daily basis:

set pages 1000
select trunc(COMPLETION_TIME,'DD') Day, thread#, round(sum(BLOCKS*BLOCK_SIZE)/1048576) MB,count(*) Archives_Generated from v$archived_log
group by trunc(COMPLETION_TIME,'DD'),thread# order by 1;


Archive log generation on an hourly basis:

set pages 1000
select trunc(COMPLETION_TIME,'HH') Hour,thread# , round(sum(BLOCKS*BLOCK_SIZE)/1048576) MB,count(*) Archives from v$archived_log
group by trunc(COMPLETION_TIME,'HH'),thread#  order by 1 ;


Hourly Basis Tabular Form:

  SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '00', 1, NULL))
            "00-01",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '01', 1, NULL))
            "01-02",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '02', 1, NULL))
            "02-03",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '03', 1, NULL))
            "03-04",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '04', 1, NULL))
            "04-05",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '05', 1, NULL))
            "05-06",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '06', 1, NULL))
            "06-07",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '07', 1, NULL))
            "07-08",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '08', 1, NULL))
            "08-09",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '09', 1, NULL))
            "09-10",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '10', 1, NULL))
            "10-11",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '11', 1, NULL))
            "11-12",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '12', 1, NULL))
            "12-13",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '13', 1, NULL))
            "13-14",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '14', 1, NULL))
            "14-15",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '15', 1, NULL))
            "15-16",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '16', 1, NULL))
            "16-17",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '17', 1, NULL))
            "17-18",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '18', 1, NULL))
            "18-19",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '19', 1, NULL))
            "19-20",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '20', 1, NULL))
            "20-21",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '21', 1, NULL))
            "21-22",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '22', 1, NULL))
            "22-23",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '23', 1, NULL))
            "23-00",
         COUNT (*) TOTAL
    FROM V$ARCHIVED_LOG
WHERE ARCHIVED='YES'
GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')
ORDER BY TO_DATE (DAY, 'DD/MM/YYYY');

Monday, February 1, 2016

LSNRCTL-Listener Control Utility

LSNRCTL> help
LSNRCTL> version [listener_name]    -- get the version information of the listener

LSNRCTL> start [listener_name]   -- starts the listener
LSNRCTL> stop [listener_name]    -- stops the listener
LSNRCTL> status [listener_name]   -- get the status of listener
LSNRCTL> services [listener_name]   -- get the service information of the listener
LSNRCTL> reload [listener_name]    -- reload the parameter files and SIDs
LSNRCTL> save_config [listener_name]    -- saves configuration changes to parameter file
LSNRCTL> trace OFF | USER | ADMIN | SUPPORT [listener_name]    -- set tracing to the specified level
LSNRCTL> spawn [listener_name] spawn_alias [(ARGUMENTS='arg0, arg1,...')]
LSNRCTL> change_password [listener_name]   -- changes the password of the listener

LSNRCTL> set [below_modifier]
password                           rawmode                          
displaymode                        trc_file                          
trc_directory                      trc_level                        
log_file                           log_directory                    
log_status                         current_listener                  
inbound_connect_timeout            startup_waittime                  
save_config_on_stop                dynamic_registration              
enable_global_dynamic_endpoint     connection_rate_limit
LSNRCTL> show [below_modifier]
rawmode                            displaymode                      
rules                              trc_file                          
trc_directory                      trc_level                        
log_file                           log_directory                    
log_status                         current_listener                  
inbound_connect_timeout            startup_waittime                  
snmp_visible                       save_config_on_stop              
dynamic_registration               enable_global_dynamic_endpoint    
oracle_home                        pid                              
connection_rate_limit

LSNRCTL> quit
LSNRCTL> exit

Segments with highest I/O activity




set pagesize 200
set linesize 120
col segment_name format a20
col owner format a10
select segment_name, object_type, total_physical_reads
from (select owner || '.' || object_name as segment_name, object_type,
             value as total_physical_reads
      from v$segment_statistics
      where statistic_name in ('physical reads')
      order by total_physical_reads desc)
where rownum <= 10;

Top SQL (Buffer Gets)



set serverout on size 1000000
declare
  top5  number;
  text1 varchar2(4000);
  x     number;
  len1  number;
  cursor c1 is
    select buffer_gets, substr(sql_text,1,4000)
    from v$sqlarea
    order by buffer_gets desc;
begin
  dbms_output.put_line('Reads'||' '||' Text');
  dbms_output.put_line('-----'||' '||'---------------------------------------------------');
  dbms_output.put_line(' ');
  open c1;
  for i in 1 .. 5 loop
    fetch c1 into top5, text1;
    dbms_output.put_line(rpad(to_char(top5),9)||' '||substr(text1,1,66));
    len1 := length(text1);
    x := 66;
    while len1 > x-1 loop
      dbms_output.put_line('" '||substr(text1,x,64));
      x := x + 64;
    end loop;
  end loop;
  close c1;
end;
/

Latch Contention (top 5 latches)




set linesize 120
col name format a30
select * from
(select name, gets,misses, sleeps
from v$latch
order by sleeps desc)
where rownum < 6;

Top SQL (Physical Reads)




set serverout on size 1000000
set feedback off
declare
  top5  number;
  text1 varchar2(4000);
  x     number;
  len1  number;
  cursor c1 is
    select disk_reads, substr(sql_text,1,4000)
    from v$sqlarea
    order by disk_reads desc;
begin
  dbms_output.put_line('Reads'||' '||' Text');
  dbms_output.put_line('-----'||' '||'----------------------------------------------------');
  dbms_output.put_line(' ');
  open c1;
  for i in 1 .. 5 loop
    fetch c1 into top5, text1;
    dbms_output.put_line(rpad(to_char(top5),9)||' '||substr(text1,1,66));
    len1 := length(text1);
    x := 66;
    while len1 > x-1 loop
      dbms_output.put_line('" '||substr(text1,x,64));
      x := x + 64;
    end loop;
  end loop;
  close c1;
end;
/