DOYENSYS Knowledge Portal




We welcome you to visit our Doyensys Knowledge Portal.




Tuesday, December 2, 2014

ORA-28000 "the account is locked" in the standby database

Scenario:
=======

The user attempts to log on and gets the error ORA-28000 "the account is locked" in the primary database, which is configured with Data Guard.  The database administrator unlocks this user's account in the primary database.  Still, the user cannot connect to the standby and gets the same ORA-28000 error, even though the account status shows OPEN in the standby.

In Standby :
=========

SYS@DOYEN > conn SARAVANA/saravana123123
ERROR:
ORA-28000: the account is locked

Warning: You are no longer connected to ORACLE.

SYS@DOYEN > select username,account_status from dba_users where username='SARAVANA';

USERNAME                                                                                   ACCOUNT_STATUS
------------------------------------------------------------------------------------------ 
SARAVANA                                                                                               OPEN


SYS@DOYEN > @check_db

                                                                                                                 Remote
       DbId Name      Log Mode     Open Mode            Database Role    Created             Resetlogs           Archive
----------- --------- ------------ -------------------- ---------------- ------------------- ------------------- --------
  603299473 DOYEN   ARCHIVELOG   READ ONLY WITH APPLY PHYSICAL STANDBY 11/16/2013 20:16:17 07/22/2014 03:54:21 ENABLED


Solution :
=======

This is because the standby is open read-only and cannot update any tables.  When a user's account is locked on the standby database, it is locked only in memory there.


A privileged user (SYSDBA, the database administrator) must log on to the standby and unlock the account there.  The message ORA-28015 "Account unlocked, but the database is open for read-only access" confirms that the account is now unlocked in the standby database.  From then on, the user can log on to the standby database without getting any error.

SYS@DOYEN > ALTER USER SARAVANA ACCOUNT UNLOCK;
ALTER USER SARAVANA ACCOUNT UNLOCK
*
ERROR at line 1:
ORA-28015: Account unlocked, but the database is open for read-only access

SYS@DOYEN > conn SARAVANA/saravana123123
Connected.


Reference :
========

ORA-28000 "the account is locked" in the standby database, even after the account was unlocked in the primary. (Doc ID 1600401.1)

Thursday, November 27, 2014

ASM Diskgroup free space check script

#!/bin/bash
###############################################################################
# File Name   : asmdiskgroup_free_space.sh
# Author      : Vimal Raju
# Date        : 27/11/2014
# Description : asmdiskgroup size check
#
###############################################################################
#############################################################
#
# Display usage information
#
usage () {

  echo " "
  echo " Checking the asmdiskgroup free space "
  echo "       -h Display this message"
  echo "       -e specify the environment file with location (Mandatory) (Example : /home/oracle/db.env ) "
  echo "       -m specify the email address (Mandatory) (Example : test@gmail.com ) "
  echo "       -p Specify the Percentage of the asmdiskgroup size check (default : 80 )"
  echo "       -l Specify the Log file location (default : /tmp )  "
  echo "     Example : asmdiskgroup_free_space.sh -e /home/oracle/db.env -m support@doyensys.com "

  exit 1

}

validate() {

  INDFILE=${LOG_LOC}/asmdiskgroup_size_check.ind
  # To check if the indicator file is there
  if [ -f ${INDFILE} ]; then
    echo "Indicator file ${INDFILE} exists!"
    exit 1
  else
    touch ${INDFILE}
  fi

  if [ ${#ENV_FILE} -eq 0 ]; then
    echo "No Environment file name specified!"
    usage
  fi

  if [ ${#MAIL_TO} -eq 0 ]; then
    echo "No Mail address specified!"
    usage
  fi

  if [ -f ${LOG_LOC}/asmdiskgroup_trigger_tmp.log ]; then
    echo ""
  else
    touch ${LOG_LOC}/asmdiskgroup_trigger_tmp.log
  fi

  if [ -f ${LOG_LOC}/asmdiskgroup_trigger.log ]; then
    echo ""
  else
    touch ${LOG_LOC}/asmdiskgroup_trigger.log
  fi

  if [ -f ${LOG_LOC}/asmdiskgroup_check.log ]; then
    echo ""
  else
    touch ${LOG_LOC}/asmdiskgroup_check.log
  fi

}

asmdiskgroup() {

sqlplus -s / as sysdba <<EOF
set feed off
SET LINESIZE  145
SET PAGESIZE  9999
SET VERIFY    off
spool ${LOG_LOC}/asmdiskgroup_check.log
COLUMN group_name             FORMAT a20           HEAD 'Disk Group|Name'
COLUMN sector_size            FORMAT 99,999        HEAD 'Sector|Size'
COLUMN block_size             FORMAT 99,999        HEAD 'Block|Size'
COLUMN allocation_unit_size   FORMAT 999,999,999   HEAD 'Allocation|Unit Size'
COLUMN state                  FORMAT a11           HEAD 'State'
COLUMN type                   FORMAT a6            HEAD 'Type'
COLUMN total_mb               FORMAT 999,999,999   HEAD 'Total Size (MB)'
COLUMN used_mb                FORMAT 999,999,999   HEAD 'Used Size (MB)'
COLUMN pct_used               FORMAT 999.99        HEAD 'Pct. Used'
Select * from (SELECT
    name                                     group_name
  , sector_size                              sector_size
  , block_size                               block_size
  , allocation_unit_size                     allocation_unit_size
  , state                                    state
  , type                                     type
  , total_mb                                 total_mb
  , (total_mb - free_mb)                     used_mb
  , ROUND((1- (free_mb / total_mb))*100, 2)  pct_used
FROM
    v\$asm_diskgroup
ORDER BY
    name
) where pct_used > ${PERCENTAGE}  ;
spool off
EOF

trig=`cat ${LOG_LOC}/asmdiskgroup_check.log|wc -l`
check_tmp=`cat ${LOG_LOC}/asmdiskgroup_trigger_tmp.log`

if [ "${trig}" != 0 ]
then
echo C > ${LOG_LOC}/asmdiskgroup_trigger.log
elif [ "${check_tmp}" = 'T' ]
then
echo N > ${LOG_LOC}/asmdiskgroup_trigger.log
fi

check_crit=`cat ${LOG_LOC}/asmdiskgroup_trigger.log `
if [ "${check_crit}" = 'C' ]
then
cat ${LOG_LOC}/asmdiskgroup_check.log|mailx -s "Critical : ASM DISK GROUP Total allocated > ${PERCENTAGE}" $MAIL_TO
echo T > ${LOG_LOC}/asmdiskgroup_trigger_tmp.log
elif [ "${check_crit}" = 'N' ]
then
cat ${LOG_LOC}/asmdiskgroup_check.log|mailx -s "Normal : ASM DISK GROUP Total allocated < ${PERCENTAGE}" $MAIL_TO
echo "" > ${LOG_LOC}/asmdiskgroup_trigger.log
echo "" > ${LOG_LOC}/asmdiskgroup_trigger_tmp.log
fi

}


PERCENTAGE=80
LOG_LOC=/tmp
ENV_FILE=
MAIL_TO=

while getopts :hp:l:e:m: opt $*
    do
      case $opt in
        h) usage
           ;;
        p) PERCENTAGE=${OPTARG}
           echo PERCENTAGE set to ${PERCENTAGE}
           ;;
        l) LOG_LOC=${OPTARG}
           echo LOG_LOC set to ${LOG_LOC}
           ;;
        e) ENV_FILE=${OPTARG}
           echo ENV_FILE set to ${ENV_FILE}
           ;;
        m) MAIL_TO=${OPTARG}
           echo MAIL_TO set to ${MAIL_TO}
           ;;
        *) echo "******************************************************************"
           echo "${JOBNAME}: Invalid option found on command line: ${OPTARG}"
           echo "******************************************************************"
           usage
           ;;
      esac
    done


validate

. $ENV_FILE

asmdiskgroup

if [ -f ${INDFILE} ]; then
      rm -f ${INDFILE}
fi
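
Both scripts are meant to run unattended and send mail only when the threshold state changes, so they fit naturally under cron. A minimal crontab entry for the ASM check might look like the following (a sketch; the script path, environment file and mail address are assumptions to adapt to your environment, and the tablespace script below can be scheduled the same way):

# Run the ASM disk group check every 30 minutes as the oracle user (crontab -e)
0,30 * * * * /home/oracle/scripts/asmdiskgroup_free_space.sh -e /home/oracle/db.env -m support@doyensys.com >> /tmp/asmdiskgroup_cron.log 2>&1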

Tablespace free space check script

#!/bin/bash
###############################################################################
# File Name   : tablespace_free_space.sh
# Author      : Vimal Raju
# Date        : 27/11/2014
# Description : Tablespace size check
#
###############################################################################
#############################################################
#
# Display usage information
#
usage () {

  echo " "
  echo " Checking the Tablespace free space "
  echo "       -h Display this message"
  echo "       -e specify the environment file with location (Mandatory) (Example : /home/oracle/db.env ) "
  echo "       -m specify the email address (Mandatory) (Example : test@gmail.com,test1@gmail.com ) "
  echo "       -p Specify the Percentage of the tablespace size check (default : 80 )"
  echo "       -l Specify the Log file location (default : /tmp )  "
  echo "     Example : tablespace_free_space.sh -e /home/oracle/db.env -m support@doyensys.com "

  exit 1

}

validate() {

  INDFILE=${LOG_LOC}/tablespace_size_check.ind
  # To check if the indicator file is there
  if [ -f ${INDFILE} ]; then
    echo "Indicator file ${INDFILE} exists!"
    exit 1
  else
    touch ${INDFILE}
  fi

  if [ ${#ENV_FILE} -eq 0 ]; then
    echo "No Environment file name specified!"
    usage
  fi

  if [ ${#MAIL_TO} -eq 0 ]; then
    echo "No Mail address specified!"
    usage
  fi

  if [ -f ${LOG_LOC}/tablespace_trigger_tmp.log ]; then
    echo ""
  else
    touch ${LOG_LOC}/tablespace_trigger_tmp.log
  fi

  if [ -f ${LOG_LOC}/tablespace_trigger.log ]; then
    echo ""
  else
    touch ${LOG_LOC}/tablespace_trigger.log
  fi

  if [ -f ${LOG_LOC}/tablespace_check.log ]; then
    echo ""
  else
    touch ${LOG_LOC}/tablespace_check.log
  fi

}


tablespace() {

sqlplus -s / as sysdba <<EOF
set feed off
spool ${LOG_LOC}/tablespace_check.log
col tablespace_name for a20
Select * from (select  tbs.tablespace_name,
        tot.bytes/(1024*1024) "Total Space in MB",
        round(tot.bytes/(1024*1024)-  sum(nvl(fre.bytes,0))/(1024*1024),2) "Used in MB",
        round(sum(nvl(fre.bytes,0))/(1024*1024),2) "Free in MB",
        round((1-sum(nvl(fre.bytes,0))/tot.bytes)*100,2) Pct,
        decode(
            greatest((1-sum(nvl(fre.bytes,0))/tot.bytes)*100, 90),
            90, '', '*'
        ) Pct_warn
from    dba_free_space fre,
        (select tablespace_name, sum(bytes) bytes
        from    dba_data_files
        group by tablespace_name) tot,
        dba_tablespaces tbs
where   tot.tablespace_name    = tbs.tablespace_name
and     fre.tablespace_name(+) = tbs.tablespace_name
group by tbs.tablespace_name, tot.bytes/(1024*1024), tot.bytes
order by 5 desc, 1) where Pct > ${PERCENTAGE}  ;
spool off
EOF

trig=`cat ${LOG_LOC}/tablespace_check.log|wc -l`
check_tmp=`cat ${LOG_LOC}/tablespace_trigger_tmp.log`

if [ "${trig}" != 0 ]
then
echo C > ${LOG_LOC}/tablespace_trigger.log
elif [ "${check_tmp}" = 'T' ]
then
echo N > ${LOG_LOC}/tablespace_trigger.log
fi

check_crit=`cat ${LOG_LOC}/tablespace_trigger.log `
if [ "${check_crit}" = 'C' ]
then
cat ${LOG_LOC}/tablespace_check.log|mailx -s "Critical : Tablespace Total allocated > ${PERCENTAGE}" $MAIL_TO
echo T > ${LOG_LOC}/tablespace_trigger_tmp.log
elif [ "${check_crit}" = 'N' ]
then
cat ${LOG_LOC}/tablespace_check.log|mailx -s "Normal : Tablespace Total allocated < ${PERCENTAGE}" $MAIL_TO
echo "" > ${LOG_LOC}/tablespace_trigger.log
echo "" > ${LOG_LOC}/tablespace_trigger_tmp.log
fi

}


PERCENTAGE=80
LOG_LOC=/tmp
ENV_FILE=
MAIL_TO=

while getopts :hp:l:e:m: opt $*
    do
      case $opt in
        h) usage
           ;;
        p) PERCENTAGE=${OPTARG}
           echo PERCENTAGE set to ${PERCENTAGE}
           ;;
        l) LOG_LOC=${OPTARG}
           echo LOG_LOC set to ${LOG_LOC}
           ;;
        e) ENV_FILE=${OPTARG}
           echo ENV_FILE set to ${ENV_FILE}
           ;;
        m) MAIL_TO=${OPTARG}
           echo MAIL_TO set to ${MAIL_TO}
           ;;
        *) echo "******************************************************************"
           echo "${JOBNAME}: Invalid option found on command line: ${OPTARG}"
           echo "******************************************************************"
           usage
           ;;
      esac
    done


validate

. $ENV_FILE

tablespace

if [ -f ${INDFILE} ]; then
      rm -f ${INDFILE}
fi

Tuesday, November 25, 2014

Query String Parse Error


Issue:

We had a request to refresh FND_TOP alone. We stopped the application processes and refreshed FND_TOP from production.  As expected, the forms did not open, so we had to relink the FND module using adadmin.  After that it worked perfectly fine.
The issue was that if we tried to open any concurrent request log file using the “Tools -> Copy File” option, it gave an error saying:
Query String Parse Error.
temp_id=3114441888 
Solution:
We had to forcefully relink the FNDWRR.exe executable to make it work. 
$cd $FND_TOP/bin
$ adrelink.sh force=y "fnd FNDWRR.exe" 
Remarks:
If the above solution does not work, you may refer to the MetaLink notes below. 
Note 112934.1 Query String Parse Error When Opening a Log File 
Note 157578.1 Query String Parse Error when Viewing Request Out or Log Files
Note 743888.1 ERROR IN CONCURRENT REQUEST AT 'QUERY STRING PARSE ERROR.TEMP_ID 

Monday, November 3, 2014

Commands to rebuild Index, Partitioned Index, LOB Index, IOT (Index Organized Table)

Find below the commands to rebuild an index, a partitioned index, a LOB index, and an IOT (index-organized table).

To rebuild the index:

set pagesize 1000

SELECT 'ALTER INDEX ' || OWNER ||'.'||index_name ||' REBUILD tablespace TARGET_TABLESPACE PARALLEL 8 ONLINE COMPUTE STATISTICS;'
from dba_indexes where tablespace_name='SOURCE_TABLESPACE';

To rebuild the partitioned index:

set pagesize 1000

SELECT 'ALTER INDEX ' || INDEX_OWNER ||'.'||index_name ||' REBUILD PARTITION '||PARTITION_NAME||' tablespace TARGET_TABLESPACE Parallel;' from dba_ind_partitions where tablespace_name='SOURCE_TABLESPACE';

To rebuild the LOB index:


select 'alter table '||owner||'.'||table_name||' move tablespace TARGET_TABLESPACE '||chr(10)||
'LOB ('||'"'||column_name||'"'||') store as '||segment_name||chr(10)||
'(tablespace TARGET_TABLESPACE);'
from dba_lobs
where owner in ('<owner_name>')
and tablespace_name='SOURCE_TABLESPACE';


To rebuild the index organized table index (IOT):


1.       Find the table name for the IOT index

select owner,table_name,index_name,tablespace_name from dba_indexes where index_name='IOT_INDEX_NAME';


2.       Move that table to rebuild its index

alter table OWNER.TABLE_NAME move tablespace TARGET_TABLESPACE;
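
The queries in this post only generate the ALTER statements; they still have to be spooled and executed. Below is a minimal sketch of doing that for the plain index rebuilds (the spool file name is an assumption, and SOURCE_TABLESPACE/TARGET_TABLESPACE remain placeholders to replace):

#!/bin/bash
# Generate the rebuild statements for one tablespace into a script file
sqlplus -s "/ as sysdba" <<EOF
set pagesize 0 linesize 200 feedback off heading off verify off
spool /tmp/rebuild_indexes.sql
SELECT 'ALTER INDEX ' || owner || '.' || index_name ||
       ' REBUILD TABLESPACE TARGET_TABLESPACE ONLINE;'
FROM dba_indexes
WHERE tablespace_name = 'SOURCE_TABLESPACE';
spool off
EOF

# Review /tmp/rebuild_indexes.sql, then run it:
sqlplus -s "/ as sysdba" @/tmp/rebuild_indexes.sql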

EXPDP/IMPDP failing with UDE-31623: operation generated ORACLE error 31623

While exporting we got the following error,


UDE-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3326
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4551
ORA-06512: at line 1



The solution for the above error is to increase the streams pool size.
We changed the streams_pool_size from 64M to 128M and restarted the export process, which then completed successfully.


Before the change:

SYS> show parameter stream

NAME               TYPE          VALUE
------------------ ------------- ---------
streams_pool_size  big integer   64M
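
The change itself is a single ALTER SYSTEM command; a minimal sketch, assuming an spfile is in use and the SGA has enough headroom for the larger pool:

sqlplus -s "/ as sysdba" <<EOF
-- increase the streams pool; scope=both changes the running instance and the spfile
alter system set streams_pool_size=128M scope=both;
EOF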


After the change:

SYS> show parameter stream

NAME               TYPE          VALUE
------------------ ------------- ---------
streams_pool_size  big integer   128M

Database startup fails with ORA-27300: OS system dependent operation:semget failed with status: 28

We got the following error while starting up the database,

SYS> startup mount pfile=initabc.ora
ORA-32006: PARALLEL_SERVER initialization parameter has been deprecated
ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpbitsper


The solution is to increase the semaphore values in the kernel parameters.

We increased the semaphore values in the kernel parameters as below and were able to start up the database after the change:


From:

[oracle@abc] cat /etc/sysctl.conf | grep -i sem
kernel.sem = 250 32000 100  128

To:

[oracle@abc] cat /etc/sysctl.conf | grep -i sem
kernel.sem = 250 32000 100  200
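
After editing /etc/sysctl.conf, the new values must be loaded into the running kernel before starting the instance again; a minimal sketch, run as root:

# Reload kernel parameters from /etc/sysctl.conf and verify the semaphore settings
/sbin/sysctl -p
/sbin/sysctl kernel.sem    # should now report: 250 32000 100 200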

Enterprise Manager Cloud Control 12c Agent Keeps Going Down With java.lang.OutOfMemoryError: unable to create new native thread

If you are getting the below error while starting the 12c OEM agent,

 java.lang.OutOfMemoryError: unable to create new native thread

Then check the below log files of the 12c OEM agent:

$AGENT_HOME/sysman/log/emagent.nohup

emagent.nohup:


Agent is going down due to an OutOfMemoryError

----- Sat Sep  6 01:28:02 2014::31358::Checking status of EMAgent : 1685 -----
----- Sat Sep  6 01:28:02 2014::31358::EMAgent exited at Sat Sep  6 01:28:02 2014 with return value 57. -----
----- Sat Sep  6 01:28:02 2014::31358::EMAgent will be restarted because of an Out of Memory Exception. -----
----- Sat Sep  6 01:28:02 2014::31358::EMAgent is Thrashing. Exiting loop. -----
----- Sat Sep  6 01:28:02 2014::31358::Commiting Process death. -----
----- Sat Sep  6 01:28:02 2014::31358::writeAbnormalExitTimestampToAgntStmp: exitCause=OOM : restartRequired=0 -----
----- Sat Sep  6 01:28:02 2014::31358::Exited due to Thrash. -----


gcagent.log


2014-09-06 01:27:55,220 [57:GC.Executor.0 (oracle_database:abc.oracle.com7:problemTbsp_10i_Dct)] ERROR - Critical error:
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:640)
        at java.lang.UNIXProcess$1.run(UNIXProcess.java:141)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:103)
        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
        at oracle.sysman.gcagent.common.SafeProcessBuilder.start(SafeProcessBuilder.java:120)
       at oracle.sysman.gcagent.util.channel.OSChannel.runStreamingCommand(OSChannel.java:414)
        at oracle.sysman.gcagent.util.channel.OSChannel.runCommand(OSChannel.java:449)

We are getting the above error because the kernel nproc limit is set too low and needs to be increased.

The more targets the agent is monitoring, the higher the nproc limit must be. We have more than 20 instances running on this server, so we increased the nproc limit to unlimited as below and started the agent without any issues:

From:

[oracle@abc] ulimit -Su
1024

[oracle@abc] ulimit -Hu
16384


To:

[oracle@abc] ulimit -Su
unlimited

[oracle@abc] ulimit -Hu
unlimited
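
The ulimit commands only affect the current shell, so the higher limit is normally persisted through the PAM limits configuration; a sketch of what we would add, assuming /etc/security/limits.conf is in effect (on RHEL 6, /etc/security/limits.d/90-nproc.conf may also need the same change, as it can override limits.conf). Log in again as oracle and restart the agent (emctl stop agent / emctl start agent) afterwards.

# Run as root: persist the nproc limit for the oracle user
cat >> /etc/security/limits.conf <<'EOF'
oracle   soft   nproc   unlimited
oracle   hard   nproc   unlimited
EOF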

Tuesday, October 28, 2014

Oracle Application User(s) List With Age Threshold

The below script will list the Oracle Application users with the specified age limit:


select papf.employee_number,papf.full_name, papf.date_of_birth
from per_all_people_f papf,fnd_user fu
where papf.person_id=fu.employee_id 
and substr(papf.date_of_birth,8) <= 56  --year (2014-58=56)
--Here the user with the birth year less than or equal to 56 will be listed
and fu.END_DATE is NULL
--Specify whether to list end dated user or active users
and trunc(sysdate) between papf.effective_start_date and 
papf.effective_end_date order by DATE_OF_BIRTH



For a clearer view, run the query in Toad.

Saturday, October 25, 2014

To unlock OID super user !!!


If you are getting the below error in OID while using ldap, it means the super user account is locked. 

[oracle@xyz bin]$ ./ldapsearch -h xyz.oracle.com -p 389 -D cn=orcladmin -w abc -s sub -b dc=na,dc=xyz,dc=net uid=George* 
ldap_bind: DSA is unwilling to perform 
ldap_bind: additional info: Password Policy Error :9001: cn=orcladmin : Your account is locked. Please contact the Administrator. 

Please use the following command to unlock the super user in OID. 

cd $ORACLE_HOME/ldap/bin 

./oidpasswd conn= unlock_su_acct=true 

When you run the above command, it will ask for the super user’s password. 


Wednesday, October 22, 2014

Oracle database 12c - Deprecation of catupgrd.sql and Introduction of catctl.pl Utility

Oracle Database 12c introduces the new Parallel Upgrade Utility, catctl.pl. This utility replaces the catupgrd.sql script that was used in earlier releases.
Although you can still use the catupgrd.sql script, it is deprecated starting with Oracle Database 12c and will be removed in future releases.
Oracle recommends database upgrades be performed with the new Parallel Upgrade Utility, catctl.pl.

If you choose to run the catupgrd.sql script instead of running catctl.pl, doing so now requires an additional input parameter as follows:

SQL> @catupgrd.sql PARALLEL=NO

If you run catupgrd.sql without the parameter, then Oracle displays the following error message:

NOTE:

The catupgrd.sql script is being deprecated in the 12.1 release of Oracle Database. Customers are encouraged to use catctl.pl as the replacement for catupgrd.sql when upgrading the database dictionary.

                cd $ORACLE_HOME/rdbms/admin
                $ORACLE_HOME/perl/bin/perl catctl.pl -n 4 catupgrd.sql

Refer to the Oracle Database Upgrade Guide for more information.



Saturday, October 18, 2014

Query to find the current undo and temp tablespace usage at the time of process execution

For undo
------------
set lines 130
col SID_SERIAL format a20
col PROGRAM format a25
col Undo format a30
SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
NVL(s.username, 'None') orauser,
s.program,
r.name undoseg,
t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
FROM sys.v_$rollname r,
sys.v_$session s,
sys.v_$transaction t,
sys.v_$parameter x
WHERE s.taddr = t.addr
AND r.usn = t.xidusn(+)
AND x.name = 'db_block_size';



For Temp
------------

SELECT A.tablespace_name tablespace, D.mb_total,
SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM v$sort_segment A,
(
SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
FROM v$tablespace B, v$tempfile C
WHERE B.ts#= C.ts#
GROUP BY B.name, C.block_size
) D
WHERE A.tablespace_name = D.name
GROUP by A.tablespace_name, D.mb_total;

Friday, October 17, 2014

How to clone Oracle 10g and 11g binaries in silent mode

For the purpose of demonstration, I will clone an existing 11.2.0.2 (11gR2) ORACLE_HOME on the source server.



Directories Needed:
------------------------------

Source Server:
-----------------------

hostname=source.com

ORACLE_BASE=/opt/app/oracle
ORACLE_HOME=/opt/app/oracle/product/11.2.0.2
Staging Directory = /home/oracle/clone  [This directory will be used as a staging directory for the binaries]


Target Server:
-------------------

hostname=target.com

Create the following directories if required
------------------------------------------------------------

ORACLE_BASE=/opt/app/oracle
ORACLE_HOME=/opt/app/oracle/product/11.2.0.2 [This should be empty]
Staging Directory= /home/oracle/clone


Procedure to Clone:
----------------------------

Log on to the source server:
--------------------------------------

ssh oracle@source.com

a) Choose the version of the ORACLE_HOME you intend to clone and ‘cd’ to that directory. In our case it is the 11.2.0.2 home.

cd /opt/app/oracle/product/11.2.0.2/


b) Create a tar ball of the source ORACLE_HOME in the staging directory /home/oracle/clone

tar -cvf /home/oracle/clone/source_11202.tar ./


c) Once the tar file is created, scp it to the remote server target.com

scp /home/oracle/clone/source_11202.tar oracle@target.com:/home/oracle/clone/


d) After the copying is done, logon to the target server

ssh oracle@target.com


e) Go to the staging directory where you have copied the tar ball. In our case it is /home/oracle/clone on the target server

cd /home/oracle/clone


f) Make sure you have the ORACLE_HOME directory created on the target server, then untar the tar file

i) cd /opt/app/oracle/product/11.2.0.2

ii) tar -xvf /home/oracle/clone/source_11202.tar


g) Once the untar is done, run the runInstaller command in silent mode

/opt/app/oracle/product/11.2.0.2/oui/bin/runInstaller -silent -clone ORACLE_HOME="/opt/app/oracle/product/11.2.0.2" ORACLE_HOME_NAME="ANY NAME" ORACLE_BASE="/opt/app/oracle"


h) The above step takes a while; once the setup is successful, switch to the root user and run the following configuration script

/opt/app/oracle/product/11.2.0.2/root.sh
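
Optionally, a quick sanity check of the cloned home (a sketch; it simply confirms that the binaries run and that the home is registered in the central inventory):

export ORACLE_HOME=/opt/app/oracle/product/11.2.0.2
$ORACLE_HOME/bin/sqlplus -V              # should report an 11.2.0.2 SQL*Plus release
$ORACLE_HOME/OPatch/opatch lsinventory   # lists the cloned home and its installed patches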

Creating private database links without user credentials


The following process can be followed to create a private database link for a user whose login credentials are unknown.

In this example, the password of the user "6543" is unknown.

In order to create a private database link for "6543", we first create a procedure in that schema as the SYS user, grant the CREATE DATABASE LINK privilege to the "6543" user, and execute the procedure. This creates the database link. Afterwards, revoke the grant given to the "6543" user.

CREATE PROCEDURE "6543".cre_db_lnk AS
BEGIN
    EXECUTE IMMEDIATE 'CREATE DATABASE LINK DATABASELINKNAME '
            ||'CONNECT TO username IDENTIFIED BY password '
            ||'USING ''HOST''';
END cre_db_lnk;
/

GRANT create database link TO "6543";

execute "6543".cre_db_lnk;

revoke create database link from "6543";

DROP PROCEDURE "6543".cre_db_lnk;

select * from dba_db_links where db_link='DATABASELINKNAME' and owner ='6543';
To copy files from one server to another without prompting for a password


For example, there are two servers, hosta and hostb.

To enable passwordless scp from hosta to hostb, do the following:


a. Log in to the hosta server and, from the home directory, go to the .ssh/ directory

    cd .ssh/

b. Open and copy the contents of the file "id_dsa.pub"

c. Log in to the hostb server (navigate to the .ssh/ directory) and append the content copied from the hosta server to the file "authorized_keys" on the hostb server.

d. Try the copy process

Note: if the .ssh directory is missing on the server, create it using the "ssh-keygen -t dsa" command; this will create the .ssh directory in the home location. Then proceed with steps a, b, c and d.
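
The same steps expressed as commands, run as the oracle user on hosta (a sketch following the DSA example above; the test file is a placeholder):

ssh-keygen -t dsa                                   # only if ~/.ssh and the key do not exist yet
cat ~/.ssh/id_dsa.pub | ssh oracle@hostb 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
scp /tmp/testfile oracle@hostb:/tmp/                # should now copy without a password prompt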


Recovering a standby database from missing archive logs


There are scenarios where the standby database lags far behind the primary database, leading to an archive gap. This can happen when archive logs go missing on the primary database or get corrupted and there are no valid backups of them.

In such cases, an incremental backup can be used as one of the methods to roll the physical standby database forward and bring it back in sync with the primary database.


Step 1: Take a note of the Current SCN of the Physical Standby Database.

Standby Database:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
991247

Note down the CURRENT_SCN value of the standby database (991247) to proceed further.


Step 2 : Cancel the Managed Recovery Process on the Standby database.

Standby Database:

SQL>alter database recover managed standby database cancel;


Step 3: On the Primary database, take the incremental SCN backup from the SCN that is currently recorded on the standby database (991247)

Connect to the primary database and take the incremental SCN backup.

Primary Database:

[oracle@dev ~]$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Sun Mar 25 15:44:45 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SSPM (DBID=1624493265)

RMAN> backup incremental from scn 991247 database format '/u02/bkp/stnd_backp_%U.bak';

Starting backup at 25-MAR-12

using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=39 device type=DISK
backup will be obsolete on date 01-APR-12
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA_NEW/sspm/datafile/system.256.778803539
input datafile file number=00002 name=+DATA_NEW/sspm/datafile/sysaux.257.778803541
input datafile file number=00003 name=+DATA_NEW/sspm/datafile/undotbs1.258.778803541
input datafile file number=00004 name=+DATA_NEW/sspm/datafile/users.259.778803543
channel ORA_DISK_1: starting piece 1 at 25-MAR-12
channel ORA_DISK_1: finished piece 1 at 25-MAR-12
piece handle=/u02/bkp/stnd_backp_10n6p3nl_1_1.bak tag=TAG20120325T154639 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45

using channel ORA_DISK_1
backup will be obsolete on date 01-APR-12
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 25-MAR-12
channel ORA_DISK_1: finished piece 1 at 25-MAR-12
piece handle=/u02/bkp/stnd_backp_11n6p3p4_1_1.bak tag=TAG20120325T154639 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 25-MAR-12


Step 4: Take a standby controlfile backup of the Primary database.

Connect to the Primary database and create the standby controlfile backup.

Primary Database :

RMAN> backup current controlfile for standby format '/u02/stnd_%U.ctl';

Starting backup at 25-MAR-12
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including standby control file in backup set
channel ORA_DISK_1: starting piece 1 at 25-MAR-12
channel ORA_DISK_1: finished piece 1 at 25-MAR-12
piece handle=/u02/stnd_12n6p3qt_1_1.ctl tag=TAG20120325T154845 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 25-MAR-12

Step 5: Transfer the backups from the Primary Server to the Standby Server.
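
For example, with scp (a sketch; the paths and host names follow the example in this post):

# Copy the incremental backup pieces and the standby controlfile backup to the standby host
scp /u02/bkp/stnd_backp_*.bak oracle@uat:/u02/bkp/
scp /u02/stnd_12n6p3qt_1_1.ctl oracle@uat:/u02/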


Step 6: On the standby server, connect to the Standby Database through RMAN and catalog the copied incremental backups so that the controlfile of the Standby Database is aware of these incremental backups.

I had the incremental backup pieces copied to the location '/u02/bkp' on the standby server.

Standby Database:

[oracle@uat ~]$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Sun Mar 25 15:51:02 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SSPM (DBID=1624493265, not open)

RMAN> catalog start with '/u02/bkp';
Starting implicit crosscheck backup at 25-MAR-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK
Crosschecked 15 objects
Finished implicit crosscheck backup at 25-MAR-12

Starting implicit crosscheck copy at 25-MAR-12
using channel ORA_DISK_1
Crosschecked 2 objects
Finished implicit crosscheck copy at 25-MAR-12

searching for all files in the recovery area
cataloging files...
File Name: +arch/SSSB/ARCHIVELOG/2012_03_25/thread_1_seq_200.453.778846881
File Name: +arch/SSSB/ARCHIVELOG/2012_03_25/thread_1_seq_201.454.778846881
File Name: +arch/SSSB/ARCHIVELOG/2012_03_25/thread_1_seq_202.455.778846881
File Name: +arch/SSSB/ARCHIVELOG/2012_03_25/thread_1_seq_203.456.778846881
File Name: +arch/SSSB/ARCHIVELOG/2012_03_25/thread_1_seq_137.457.778846881
File Name: +arch/SSSB/ARCHIVELOG/2012_03_25/thread_1_seq_204.458.778846881
File Name: +arch/SSSB/ARCHIVELOG/2012_03_25/thread_1_seq_205.459.778846883
.
.
.
.

File Name: +arch/SSSB/ARCHIVELOG/2012_03_25/thread_1_seq_856.947.778861691
File Name: +arch/SSSB/ARCHIVELOG/2012_03_25/thread_1_seq_858.949.778861709
File Name: +arch/SSSB/ARCHIVELOG/2012_03_25/thread_1_seq_857.950.778861719

searching for all files that match the pattern /u02/bkp

List of Files Unknown to the Database
=====================================
File Name: /u02/bkp/stnd_backp_10n6p3nl_1_1.bak
File Name: /u02/bkp/stnd_backp_11n6p3p4_1_1.bak

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u02/bkp/stnd_backp_10n6p3nl_1_1.bak
File Name: /u02/bkp/stnd_backp_11n6p3p4_1_1.bak


Step 7: Recover the standby database with the cataloged incremental backup pieces.


RMAN> recover database noredo;

Starting recover at 25-MAR-12
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=32 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: +DATA/sssb/datafile/system.274.778865099
destination for restore of datafile 00002: +DATA/sssb/datafile/sysaux.275.778865193
destination for restore of datafile 00003: +DATA/sssb/datafile/undotbs1.276.778865259
destination for restore of datafile 00004: +DATA/sssb/datafile/users.277.778865273
channel ORA_DISK_1: reading from backup piece /u02/bkp/stnd_backp_10n6p3nl_1_1.bak
channel ORA_DISK_1: piece handle=/u02/bkp/stnd_backp_10n6p3nl_1_1.bak tag=TAG20120325T154639
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

Finished recover at 25-MAR-12


Step 8 : Shutdown the physical standby database, start it in nomount stage and restore the standby controlfile backup that we had taken from the primary database.

Standby Database:

RMAN> shutdown immediate

database dismounted
Oracle instance shut down

RMAN> startup nomount

connected to target database (not started)
Oracle instance started

Total System Global Area     659730432 bytes

Fixed Size                     2216264 bytes
Variable Size                398462648 bytes
Database Buffers             255852544 bytes
Redo Buffers                   3198976 bytes

RMAN> restore standby controlfile from '/u02/stnd_12n6p3qt_1_1.ctl';

Starting restore at 25-MAR-12
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
output file name=+DATA/sssb/controlfile/current.273.778864875
Finished restore at 25-MAR-12

Step 9: Shutdown the standby database and mount it again, so that the standby database comes up mounted with the new controlfile that was restored in the previous step.

Standby Database:

RMAN> shutdown immediate

Oracle instance shut down

RMAN> startup mount

connected to target database (not started)
Oracle instance started
database mounted

Total System Global Area     659730432 bytes

Fixed Size                     2216264 bytes
Variable Size                398462648 bytes
Database Buffers             255852544 bytes
Redo Buffers                   3198976 bytes



Step 10: On the standby database, start the Managed Recovery Process.

Standby Database:


SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL> select process,status,sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH      CONNECTED             0
ARCH      CONNECTED             0
ARCH      CONNECTED             0
ARCH      CONNECTED             0
RFS       IDLE                  0
RFS       IDLE                  0
RFS       IDLE               1010
RFS       IDLE                  0
MRP0      WAIT_FOR_LOG          0

9 rows selected.


Step 11: On the Primary database, check the maximum archivelog sequence generated.

Primary Database:

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD# MAX(SEQUENCE#)
------- ----------------
1       1009


Step 12: Check the maximum archivelog sequence that has been applied on the Physical standby database.


SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

THREAD# MAX(SEQUENCE#)
------- ----------------
1       1009

So, here we can see from Steps 11 and 12 that the maximum archivelog sequence generated on the Primary database is sequence# 1009 and the maximum applied on the Physical Standby Database is also 1009, which means that the Standby database is in sync with the Primary Database. You can verify this by generating an archive sequence on the Primary database and checking whether it is shipped and applied on the standby database.

Primary Database:

SQL> alter system switch logfile;

System altered.

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD# MAX(SEQUENCE#)
------- ----------------
1       1010

Standby Database:


SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

THREAD# MAX(SEQUENCE#)
------- ----------------
1       1010


Now the standby database is in sync with the Primary Database.

Here you go !!

Wednesday, September 10, 2014

Internal error code, arguments [kcratr_nab_less_than_odr]

Solution for kcratr_nab_less_than_odr argument

This error may occur due to an improper shutdown, and it throws the error

ORA-00600: internal error code, arguments: [kcratr_nab_less_than_odr]

while issuing the command alter database open.

Option 1:

SQL>Startup mount ;

SQL>Show parameter control_files

Query 1
------------

SQL> select a.member, a.group#, b.status from v$logfile a ,v$log b where a.group#=b.group# and b.status='CURRENT' ;

Note down the name of the redo log

SQL> Shutdown abort ;

Take an OS-level backup of the controlfile (this is to ensure we have a backup of the current state of the controlfile)

SQL> Startup mount ;

SQL> recover database using backup controlfile until cancel ;

Enter the location of the redo log shown as CURRENT in Query 1 when prompted for recovery

Hit Enter

SQL> Alter database open resetlogs ;


Option 2:


Recreate the controlfile using the Controlfile recreation script

With database in mount stage

rman target /

rman> spool log to '/tmp/rman.log';

Rman> list backup ;

Rman> exit

Keep this log handy

Go to sqlplus

SQL> Show parameter control_files 

Keep this location handy.

SQL> oradebug setmypid 

SQL> Alter session set tracefile_identifier='controlfilerecreate' ;

SQL> Alter database backup controlfile to trace ;

SQL> oradebug tracefile_name   --> This command will give the path and name of the trace file


Go to this location, open the trace file, and select the controlfile recreation script with the NO Resetlogs option.


SQL> Shutdown immediate;


Rename the existing controlfile to <originalname>_old ---> This is important, as we need a backup of the existing controlfile since we plan to recreate it

SQL> Startup nomount

Now run the Controlfile recreation script with NO Resetlogs mode

SQL> Alter database open ;

For database versions 10g and above:

Once the database is opened, you can recatalog the RMAN backup information listed in /tmp/rman.log using

RMAN> catalog start with '<location of backup piece>' ;



It is recommended to take a hot backup of the database once the recovery is completed.

Friday, September 5, 2014

Enable Auditing for DB user login attempts in 11g


1 - Add initialization parameters & bounce instance:

alter system set audit_trail=db scope=spfile;
alter system set audit_file_dest='/u01/app/oracle/mysid/mydir/' scope=spfile;


2 - Enable auditing of failed login attempts:

SQL> audit create session whenever not successful;


To view all the login attempts, whether successful or failed:

col os_username format a15
col username format a15
col terminal format a15
col to_char(timestamp) format a35
select os_username,username,terminal,to_char(timestamp,'MM-DD-YYYY HH24:MI:SS') from dba_audit_trail;


To view the failed login attempts:

set lines 130
col OS_USERNAME for a20
col USERNAME for a20
col USERHOST for a20
select OS_USERNAME,USERNAME,USERHOST,to_char(timestamp,'MM-DD-YYYY HH24:MI:SS'), returncode
from dba_audit_trail
where returncode > 0;


We can also see the SYS user's failed and successful login attempts in the audit_file_dest files.

Monday, September 1, 2014

Using adrci Tool in 11g



Oracle 11g introduces the adrci tool. This tool is used to examine the contents of the ADR repository and also to package information related to a specific problem into a compressed (zip) file to send to Oracle Support.

To invoke the tool, log in as the Oracle software owner and type adrci at the command line.
To list the tool's command-line options, type the following:

adrci -help

To display the tool commands, type the following command at the adrci command line:

adrci>help

To obtain guiding information about specific command, type the following:

adrci>help show incident

adrci commands will run on the ADR root (known, when dealing with this tool, as the ADR base).

To display the ADR base directory the tool is handling, type the following in the adrci command:

adrci>show base

The adrci tool can deal with all the Oracle homes under the ADR base. If you want to specify which home should be handled by the tool, you must set the current homepath. If you do not specify the current homepath, all the homes will be handled by the tool.

To display homes under the ADR root, issue the following command:

adrci>show homes

To display the current ADR homepath, issue the following command:

adrci>show homepath

To set the current ADR home, issue the following command:
adrci>set homepath diag\rdbms\ora11g\ora11g

Note: You can specify multiple homes as current homepaths. In this case, the adrci tool will deal with all the specified current homepaths. However, not all adrci commands can work with multiple current homepaths.

Any text output from adrci can be captured and saved in an external text file using the spool command:

adrci>spool /u01/myfiles/myadrci.txt
adrci> ...
adrci>spool off

Using adrci to View the Alert Log

By default, adrci displays the alert log in your default editor. You can use the SET EDITOR
command to change your default editor.

adrci>set editor notepad.exe

To display the contents of the alert log file (XML tags will be excluded), issue the following command:

adrci>show alert

To see only the last 30 messages in the alert, modify the command as follows:
adrci>show alert -tail 30

To display messages containing ORA-600 errors, issue the following command:

adrci>show alert -p "MESSAGE_TEXT LIKE '%ORA-600%'"

Using adrci to List Trace Files

Trace files can be listed in adrci using the following command:

adrci>show tracefile

Using adrci to View Incidents

Use the following command to obtain a report about all the incidents in the current homepath(s):
adrci>show incident

If you want to obtain further details about an incident, issue the command with the -p (predicate string) option:

adrci>show incident -mode detail -p "incident_id=112564"

You can use many fields in the predicate options. To list all available fields, issue the command describe incident.
Using adrci to Package Incidents

With the adrci tool, you can package all the diagnostic files related to specific problems into a ZIP file to submit to Oracle Support. To do so, you use special commands called IPS, as shown in the following steps:

1. Create a logical package: use the ips create package command to create an empty logical package, as shown in the example below. The package will be given a serially generated number.

adrci>ips create package

2. Add diagnostic data to the logical package: this is done with the ips add incident command, as shown below:

adrci>ips add incident 112564 package 1

There are also forms of the ips create package command that enable you to perform steps 1 and 2 in one command. Following are those command formats:

ips create package problem
ips create package problem key
ips create package incident
ips create package time

3. Generate the physical package. The files related to the incident will be collected in a ZIP file. The following example shows the command to perform this task:

adrci>ips generate package 1 in /u01/myfiles/incidents

If you decide to add or change any diagnostic data later, you can do so by generating an incremental ZIP file. Modify the command as follows to achieve that:

adrci>ips generate package 1 in /u01/myfiles/incidents incremental

You will notice that the generated file has the phrase INC in its name, indicating that it is an incremental ZIP file.

The behavior of ips commands is controlled by various configuration options. To display those configuration options, use the command ips show configuration.