DOYENSYS Knowledge Portal








Sunday, October 25, 2015

RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece from nfs mount point



RMAN> restore controlfile from '/mount/nfs/mydb/20141213/control_132343_gdfgvdf.bak';

Starting restore at 20.12.2014 07:28
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=235 device type=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 12/20/2014 06:28:43
RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece


Check the alert log :

Alert log content:
==============

Sat Dec 20 07:28:43 2014
WARNING:NFS mount of file /mount/nfs/mydb/20141213/control_132343_gdfgvdf.bak on filesystem /mount/nfs/mydb/20141213/control_132343_gdfgvdf.bak done with incorrect options
WARNING:Expected NFS mount options: rsize>=32768,wsize>=32768,hard,
WARNING:NFS mount of file /mount/nfs/mydb/20141213/control_132343_gdfgvdf.bak on filesystem /mount/nfs/mydb/20141213/control_132343_gdfgvdf.bak done with incorrect options
WARNING:Expected NFS mount options: rsize>=32768,wsize>=32768,hard,

Solution:

The mount options that need to be used for NFS volumes on Linux are:
rsize=32768, wsize=32768, hard, actimeo=0

Along with the error, warning messages are printed in the alert log showing the mount options to be used. Sometimes you may also need to do the following to allow Data Pump to work with NFS-mounted files:

Set the event in the init.ora, to disable the mount point parameter checking:

event="10298 trace name context forever, level 32" 

         
Note that event 10298 skips all mount checks, so it must be set with care. Verify the settings with your system administrator to ensure no datafiles are corrupted.
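As a rough sanity check, the minimums Oracle warns about can be verified against a mount's option string (as reported in /proc/mounts). This is only a sketch, not an official tool, and the sample option strings below are made up for illustration:

```shell
#!/bin/sh
# Check whether an NFS mount option string satisfies the minimums Oracle
# warns about: rsize>=32768, wsize>=32768, and the 'hard' option present.
# In practice the option string would come from something like:
#   grep '/mount/nfs' /proc/mounts | awk '{print $4}'
check_nfs_opts() {
    opts=$1
    # Pull out the numeric rsize= and wsize= values, if present.
    rsize=$(printf '%s' "$opts" | tr ',' '\n' | sed -n 's/^rsize=//p')
    wsize=$(printf '%s' "$opts" | tr ',' '\n' | sed -n 's/^wsize=//p')
    # Look for 'hard' as a whole comma-delimited token.
    case ",$opts," in *,hard,*) hard=yes ;; *) hard=no ;; esac
    [ -n "$rsize" ] && [ "$rsize" -ge 32768 ] &&
      [ -n "$wsize" ] && [ "$wsize" -ge 32768 ] &&
      [ "$hard" = yes ]
}

# Sample option strings (hypothetical):
check_nfs_opts "rw,hard,rsize=32768,wsize=32768,actimeo=0" && echo "options OK"
check_nfs_opts "rw,soft,rsize=8192,wsize=8192" || echo "options BAD"
```

Running the checker against each Oracle-related NFS mount before a restore avoids hitting RMAN-06172 at the worst possible time.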



"ORA-01422: exact fetch returns more than requested number of rows" When Running DBMS_SQLTUNE.ACCEPT_SQL_PROFILE


When trying to accept a SQL profile, it fails with "ORA-01422: exact fetch returns more than requested number of rows":
execute DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(task_name => '<TASK_NAME>',task_owner => '<TASK_OWNER>', replace => TRUE);
*
ERROR at line 1:
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "SYS.DBMS_SQLTUNE_INTERNAL", line 16446
ORA-06512: at "SYS.PRVT_SQLPROF_INFRA", line 31
ORA-06512: at "SYS.DBMS_SQLTUNE", line 7544
ORA-06512: at "SYS.DBMS_SQLTUNE", line 7568
ORA-06512: at line 1


This problem occurs because there are two or more distinct executions recorded for the same task_id. You can see this with the following select:

SELECT f.execution_name,
  f.finding_id,
  r.rec_id,
  r.flags
FROM dba_advisor_objects o,
  dba_advisor_findings f,
  dba_advisor_recommendations r
WHERE o.task_id IN
  (SELECT DISTINCT task_id
   FROM dba_advisor_tasks
   WHERE task_name = '<TASK_NAME>')
AND f.task_id = o.task_id
AND r.task_id = o.task_id;

EXECUTION_NAME                 FINDING_ID     REC_ID      FLAGS
------------------------------ ---------- ---------- ----------
EXEC_9047                               1          1          1
EXEC_9047                               2          2          4
EXEC_9048                               3          3          1
EXEC_9048                               4          4          4
EXEC_9087                               5          5          1
EXEC_9087                               6          6          4

The root cause of this issue is still being investigated via Bug 16758043 - ORA-01422 WHEN EXECUTING DBMS_SQLTUNE.ACCEPT_SQL_PROFILE.

Solution:

There is currently no fix for this issue. As a workaround, dropping the tuning task will clear the problem. To drop the tuning task execute:

exec DBMS_SQLTUNE.DROP_TUNING_TASK('<TASK_NAME>');

You will need to recreate and rerun the tuning task afterwards.
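The drop-and-recreate workaround can be scripted. The sketch below only writes the SQL to a file for review; the task name, the sql_id, and the sqlplus invocation are placeholders, not values from this post:

```shell
#!/bin/sh
# Generate a SQL*Plus script implementing the ORA-01422 workaround:
# drop the affected tuning task, then (commented out) recreate and rerun it.
# TASK_NAME is a placeholder; pass the real task name as the first argument.
TASK_NAME="${1:-MY_TASK}"
SQLFILE=reset_tuning_task.sql

cat > "$SQLFILE" <<EOF
-- Workaround for Bug 16758043: drop the tuning task hitting ORA-01422,
-- then recreate and re-execute it before accepting the profile again.
exec DBMS_SQLTUNE.DROP_TUNING_TASK('${TASK_NAME}');
-- Recreate the task for the same statement (sql_id below is a placeholder):
-- variable tname varchar2(64)
-- exec :tname := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'your_sql_id', task_name => '${TASK_NAME}');
-- exec DBMS_SQLTUNE.EXECUTE_TUNING_TASK('${TASK_NAME}');
EOF

echo "Wrote $SQLFILE - review it, then run: sqlplus / as sysdba @$SQLFILE"
```

Keeping the generated file around also documents which task was dropped and why.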

Friday, October 23, 2015

                      EBS -- Data pump expdp error ORA-33272 , ZPB, DBMS_AW, dba_aws


While exporting an EBS 11i database using Data Pump, you can get an ORA-33272 error on Analytic Workspaces.

These errors are mostly due to the versions of the AW objects. Suppose you have upgraded your EBS database from 9i to 10g, and AW objects belonging to the 9i version of the database have remained behind. In such a situation, you can encounter the ORA-33272 error on future Data Pump exports.

An example of this kind of error is below:

ORA-39127: unexpected error from call to export_string :=SYS.DBMS_AW_EXP.schema_info_exp('ZPB',1,1,'10.02.00.00.00',newblock)
ORA-33272: Analytic workspace ZPB.ZPBCODE cannot be opened.
ORA-00600: internal error code, arguments: [xschkmk:!dict], [], [], [], [], [], [], []
ORA-06512: at "SYS.DBMS_AW", line 67
ORA-06512: at "SYS.DBMS_AW", line 131
ORA-06512: at "SYS.DBMS_AW", line 792
ORA-06512: at "SYS.DBMS_AW", line 1142
ORA-06512: at "SYS.DBMS_AW_EXP", line 517
ORA-06512: at line 1
ORA-06512: at "SYS.DB.

For the solution, use the following query to list the version of the objects in question (ZPB is the schema name of the failing objects in the example above):

select * from all_aws where owner='ZPB';

SQL> select * from dba_aws;

Example output:
OWNER AW_NUMBER AW_NAME AW_V PAGESPACES GENERATIONS FROZEN
---------- ---------- -------------------- ---- ---------- ----------- ------
ZPB 1000 ZPBCODE 9.1 292 2
SYS 129 AWREPORT 10.2 9 2
SYS 128 AWXML 10.2 64 2
ZPB 1001 ZPBANNOT 9.1 7 1
ZPB 1002 ZPBDATA 9.1 7 1
SYS 125 AWMD 10.2 363 2
SYS 124 EXPRESS 10.2 46 2
SYS 126 AWCREATE 10.2 21 2
SYS 127 AWCREATE10G 10.2 9 2

Analyze the output of the query and, if you have older versions, delete them.
In the example above, the ZPB AWs are 9.1 and need to be deleted.

You can use dbms_aw.execute to do that:

exec dbms_aw.execute('aw delete zpb.zpbcode');
exec dbms_aw.execute('aw delete zpb.zpbannot');
exec dbms_aw.execute('aw delete zpb.zpbdata');

Log in as ZPB:
SQL> set serverout on
SQL> call dbms_aw.execute('AW DELETE ZPBCODE');
SQL> call dbms_aw.execute('AW DELETE ZPBANNOT');
SQL> call dbms_aw.execute('AW DELETE ZPBDATA');
If you receive errors deleting the AWs, drop the associated AW$ tables as SYS:
SQL> drop table AW$ZPBCODE;
SQL> drop table AW$ZPBANNOT;
SQL> drop table AW$ZPBDATA;

select * from dba_aws;

SYS 125 AWREPORT 9.1 9 2
SYS 124 AWXML 9.1 50 2
SYS 123 AWCREATE10G 9.1 9 2
SYS 122 AWCREATE 9.1 18 2
SYS 121 AWMD 9.1 205 2
SYS 120 EXPRESS 9.1 38 2


Also, if all the SYS AWs are still at 9.1 in dba_aws (as in the output above), you may need to remove OLAP, clean up the leftover objects, and add it back (Doc ID 1560841.1).
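The per-AW deletes shown earlier can be generated for any list of stale workspaces. This sketch (the AW names are copied from the ZPB example) writes the calls to a file for review in SQL*Plus rather than executing anything:

```shell
#!/bin/sh
# Generate 'aw delete' calls for a list of stale analytic workspaces.
# The list mirrors the ZPB example above; substitute your own owner.name pairs.
AWS="zpb.zpbcode zpb.zpbannot zpb.zpbdata"
SQLFILE=drop_stale_aws.sql

: > "$SQLFILE"   # truncate/create the output script
for aw in $AWS; do
    echo "exec dbms_aw.execute('aw delete ${aw}');" >> "$SQLFILE"
done

cat "$SQLFILE"   # review before running it in SQL*Plus
```

Reviewing the generated file first matters here, because deleting an AW is not reversible without a restore.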


Tuesday, October 13, 2015

                Enabling RESTful Management Services

WebLogic Server 12c introduces the possibility of monitoring WebLogic Server using RESTful
Web Services with new RESTful Management Services.

Steps to do it...

Carry out the following steps to enable RESTful Management Services:

1. Access the Administration Console with your web browser at

http://prod01.domain.local:7001/console

2. Click on the Lock & Edit button to create a new change session.

3. Navigate to Settings for Domain | General by clicking Home and then Domain or by
clicking the PROD_DOMAIN link.

4. Check the Enable RESTful Management Services checkbox

5. Restart WebLogic Administration Server and all Servers of the domain.

Output:

http://prod01.domain.local:7001/management/tenantmonitoring/servers
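Once the servers are restarted, that monitoring resource can be fetched over HTTP. A hedged sketch follows: the weblogic/welcome1 credentials are placeholders, the URL is the one shown above, and the script only prints the curl command rather than issuing it:

```shell
#!/bin/sh
# Build the curl command for the RESTful Management Services monitoring URL.
# HOST, PORT, and the credentials are placeholders - adjust for your domain.
HOST=prod01.domain.local
PORT=7001
URL="http://${HOST}:${PORT}/management/tenantmonitoring/servers"

# Ask for JSON instead of the default HTML rendering of the resource.
CMD="curl --user weblogic:welcome1 -H 'Accept: application/json' $URL"

# Printed only; remove the echo to actually issue the request.
echo "$CMD"
```

The same resource path also accepts Accept: application/xml if you prefer XML output.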
            Choosing the JRockit garbage collection Mode


Getting ready

A JVM startup argument will be added using the Administration Console, so make sure
the Administration Server is running.

Steps to do it...

To change the JVM garbage collection mode:

1. Access the Administration Console by pointing your web browser to

http://adminhost.domain.local:7001/console

2. Click on the Lock & Edit button to start a new edit session.

3. Expand the Environment tree to the left and then click on Servers.

4. Click on the PROD_Server01 link and then click on the Server Start tab.

5. Add the following to the Arguments field and click on the Save button:

-Xgc:pausetime

6. Click on the Activate Changes button.

7. Restart PROD_Server01.

8. Repeat the preceding steps for Managed Servers PROD_Server02, PROD_Server03,
and PROD_Server04.



Recovering the WebLogic admin password


The WebLogic Administrator username and password are used to start up the WebLogic
Server instances. They are stored encrypted in the boot.properties file.

Here are the steps to recover the username and password from the boot.properties file of the PROD_DOMAIN domain.

1. Log in as the wls user to shell and set the domain environment variables for the
domain you want to recover:

[wls@prod01]$ cd $DOMAIN_HOME/bin
[wls@prod01]$ . ./setDomainEnv.sh

2. Start WLST:

[wls@prod01]$ $WL_HOME/common/bin/wlst.sh

3. Run the following WLST commands to display the username and password:

from weblogic.security.internal import BootProperties
BootProperties.load("/oracle/Middleware/user_projects/domains/PROD_DOMAIN/servers/PROD_AdminServer/security/boot.properties", false)
prop = BootProperties.getBootProperties()
print "username: " + prop.getOneClient()
print "password: " + prop.getTwoClient()

The username and password will be displayed on the screen.


Wednesday, October 7, 2015

After Server Reboot Database not started automatically in Windows

Issue:

Whenever the user rebooted the Windows server, the below error occurred.

Error:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor


Analysis:

We checked the service in "services.msc"; its startup type was set to "Automatic".

When we checked the status, SQL*Plus reported "connected to an idle instance", so we immediately started the database manually.


Cause:

The database was not started automatically. After starting it manually and analysing the issue, we found the instance was not configured with the auto-startup option.


Solution:

We edited the instance configuration using "oradim" utility.

Command used:

oradim -EDIT -SID <database name> -STARTMODE  auto -SPFILE 


-> Retested the issue; the database came up automatically after a reboot, and the issue was fixed.


Monday, October 5, 2015

STEP BY STEP PROCEDURE TO EXPORT DATA FROM HIGHER ENVIRONMENT AND IMPORT INTO LOWER ENVIRONMENT

STEP 1: CREATE DIRECTORY FOR SCOTT USER IN SOURCE DB 11.2 VERSION:

SQL> create or replace directory test_dir as '/home/oracle/dump';

Directory created.

SQL> grant read,write on directory test_dir to scott;

Grant succeeded.

SQL> conn scott/XXXX

Connected.

SQL> create table testversion(version varchar2(20));

Table created.

SQL> insert into testversion values('oracle11gr2');

1 row created.

SQL> commit;


Commit complete.

STEP 2: EXPORT TABLE USING DATAPUMP IN SOURCE DB 11.2 USING VERSION PARAMETER AND COPY THE DUMPFILES TO TARGET SERVER:

expdp scott/xxxxxxxx directory=test_dir dumpfile=testver.dmp tables=testversion
version=10.2 reuse_dumpfiles=yes

Export: Release 11.2.0.1.0 - Production on Sun Jan 23 16:06:47 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** directory=test_dir dumpfile=testver.dmp tables=testversion
version=10.2 reuse_dumpfiles=yes
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SCOTT"."TESTVERSION"                       4.968 KB       1 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /HOME/ORACLE/DUMP/TESTVER.DMP
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 16:06:54

cd /home/oracle/dump

scp TESTVER.DMP oracle@servername:/home/oracle/dump

STEP 3: CREATE DIRECTORY FOR SCOTT USER IN TARGET DB 10.2 VERSION AND IMPORT IT:

SQL> create or replace directory test_dir as '/home/oracle/dump';

Directory created.

SQL> grant read,write on directory test_dir to scott;

Grant succeeded.

impdp scott/xxxxxxxx directory=test_dir dumpfile=testver.dmp tables=testversion remap_tablespace=users_tbs:users

Import: Release 10.2.0.1.0 - 64bit Production on Sunday, 23 January, 2011 16:08:37

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit
Production
With the Partitioning, OLAP and Data Mining options
Master table "SCOTT"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_TABLE_01":  scott/******** directory=test_dir d
umpfile=testver.dmp tables=testversion remap_tablespace=users_tbs:users
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."TESTVERSION"                       4.968 KB       1 rows

Job "SCOTT"."SYS_IMPORT_TABLE_01" successfully completed at 16:08:39



Saturday, October 3, 2015

How to check the undo status with active, expired, unexpired details

col value for a15
select
  A.value, B.TOTALMB , C_ACTIVE.ACTIVEMB, C_EXPIRED.EXPIREDMB, C_UNEXPIRED.UNEXPIREDMB,
     (B.TOTALMB - C_ACTIVE.ACTIVEMB - C_UNEXPIRED.UNEXPIREDMB) AvailableBeforeGetting1055
from v$parameter A,
    (select tablespace_name, trunc(sum(bytes)/1024/1024) TOTALMB from dba_data_files group by tablespace_name) B,
     (select tablespace_name, trunc(sum(bytes)/1024/1024) ACTIVEMB from dba_undo_extents where STATUS='ACTIVE' group by tablespace_name) C_ACTIVE,
     (select tablespace_name, trunc(sum(bytes)/1024/1024) EXPIREDMB from dba_undo_extents where STATUS='EXPIRED' group by tablespace_name) C_EXPIRED,
     (select tablespace_name, trunc(sum(bytes)/1024/1024) UNEXPIREDMB from dba_undo_extents where STATUS='UNEXPIRED' group by tablespace_name) C_UNEXPIRED
where A.name = 'undo_tablespace'
  and B.tablespace_name = A.value
  and C_ACTIVE.tablespace_name = A.value
  and C_EXPIRED.tablespace_name = A.value
  and C_UNEXPIRED.tablespace_name = A.value;

How to create histogram using shell script

#usage create_histogram.ksh <INSTANCE> <DBPASSWD> <ORACLE_HOME>
if [ "$#" -eq "3" ] ; then
export ORACLE_SID=$1
export DBPASSWD=$2
export ORACLE_HOME=$3
export MAILEE="xyz@doyensys.com"
. /home/oracle/ora_PROD.env

echo "Creating Histogram ...."  > /home/oracle/siebel_stats_gathering/sbltesthist.log
$ORACLE_HOME/bin/sqlplus -s <<EOF>> /home/oracle/siebel_stats_gathering/sbltesthist.log
cai_dba/${DBPASSWD}
set pagesize 0
set linesize 100
select 'Started At : '||to_char(sysdate,'DD/MON/YYYY HH:MI:SS') from dual;

analyze table SIEBEL.S_SRV_REQ COMPUTE STATISTICS FOR COLUMNS template_flg SIZE 10;
select 'Ended At   : '||to_char(sysdate,'DD/MON/YYYY HH:MI:SS') from dual;
select 'Started At : '||to_char(sysdate,'DD/MON/YYYY HH:MI:SS') from dual;
ANALYZE TABLE siebel.S_SRV_REQ ESTIMATE STATISTICS FOR COLUMNS PRDINT_ID;

select 'Ended At   : '||to_char(sysdate,'DD/MON/YYYY HH:MI:SS') from dual;
EOF
mailx -s "Histogram creation in ${ORACLE_SID}/`hostname` Completed" $MAILEE < /home/oracle/siebel_stats_gathering/sbltesthist.log
else
echo "usage create_histogram.ksh <INSTANCE> <DBPASSWD> <ORACLE_HOME>"
fi

How to gather stats for single table using shell script

#usage S_ACT_EMP_stats.ksh  <DBPASSWD>
. /home/oracle/ora_PROD.env

export MAILEE="xyz@doyensys.com"

$ORACLE_HOME/bin/sqlplus -s $CONNECT_DBA << EOF > /home/oracle/siebel_stats_gathering/S_ACT_EMP.log
set head off
set pagesize 0
set linesize 100
set feed off
set trimspool off
set time on
set timing on



select 'Started At : '||to_char(sysdate,'DD/MON/YYYY HH:MI:SS') from dual;

exec dbms_stats.gather_table_stats(-
OWNNAME =>'SIEBEL', -
tabname => 'S_ACT_EMP', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,-
method_opt => 'FOR ALL COLUMNS SIZE AUTO', -
cascade => TRUE, -
degree => 4 -
);

select 'Ended At   : '||to_char(sysdate,'DD/MON/YYYY HH:MI:SS') from dual;
EOF


mailx  -s "Stats collection for S_ACT_EMP table Completed" $MAILEE < /home/oracle/siebel_stats_gathering/S_ACT_EMP.log

How to kill the spid for inactive sessions logged in more than 15 minutes

SELECT 'kill -9 ' || b.spid
FROM v$session a, v$process b
WHERE a.paddr = b.addr
AND ( (a.username = 'TEST_USER' AND a.status = 'INACTIVE' and a.seconds_in_wait > 900)
   OR (a.username = 'SECURITY'  AND a.status = 'INACTIVE' and a.seconds_in_wait > 7500) );

kill -9 11055262  

How to kill inactive sessions logged in more than 15 minutes.

SELECT 'alter system kill session ''' || SID || ',' || SERIAL# || ''' immediate;' FROM V$SESSION WHERE (USERNAME = 'WATSON' AND STATUS = 'INACTIVE' and seconds_in_wait > 900);

'ALTERSYSTEMKILLSESSION'''||SID||','||SERIAL#||'''IMMEDIATE;'
--------------------------------------------------------------------------------
alter system kill session '513,2378' immediate;
alter system kill session '613,3053' immediate;
alter system kill session '820,351' immediate;
alter system kill session '843,5830' immediate;
alter system kill session '909,116' immediate;
alter system kill session '952,1333' immediate;

Thursday, October 1, 2015

Convert physical standby database to snapshot standby database


A snapshot standby database is a fully updateable standby database created by converting a physical standby database into a snapshot standby database.
Primary Database
Oracle Database version: 11.2.0.3 Enterprise Edition
Primary database: 
DO_PRIMARY
Details with respect to the primary database:

SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE    OPEN_MODE
 ------------ ---------------- ---------------- --------------------
 OPEN         do_primary           PRIMARY          READ WRITE

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD# MAX(SEQUENCE#)
 ---------- --------------
1                  243


Standby database Details:

Oracle database version: 11.2.0.3 Enterprise Edition
Standby database name: 
DO_PSTBY
Details with respect to the physical standby database:


SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE    OPEN_MODE
------------ ---------------- ---------------- ----------------
OPEN         do_pstby          PHYSICAL STANDBY READ ONLY WITH APPLY

SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

THREAD# MAX(SEQUENCE#)
 ---------- --------------
 1            243

SQL>
SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
NO

You can observe that the standby database is in sync with the primary database. The output below shows that the Fast Recovery Area is configured on the physical standby database.

SQL> show parameter db_recovery_file_dest

NAME                         TYPE         VALUE
---------------------------  -----------  -------------
db_recovery_file_dest        string       +FRA_DO
db_recovery_file_dest_size   big integer  6000M

SQL>

Step 1: Cancel the Managed Recovery Process (MRP) on the physical standby database, shut it down and place it in Mount mode.

SQL> alter database recover managed standby database cancel;
Database altered.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
Database mounted.

Step 2: Once the standby database is mounted, convert the Physical standby database to snapshot standby database.

SQL> alter database convert to snapshot standby;
Database altered.

Step 3: You can now open the snapshot standby database and check its mode.

SQL> alter database open;
Database altered.

SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance;

STATUS      INSTANCE_NAME   DATABASE_ROLE    OPEN_MODE
----------- --------------- ---------------- ------------------
OPEN        do_pstby         SNAPSHOT STANDBY READ WRITE



Small Test on the snapshot standby database.

1. Create a user called "SNAPDOTEST".

2. Create a table called "DOTEST" whose owner is "SNAPDOTEST" and insert some records into it. You can also update some of the records.

SQL> create user snapdotest identified by doyensys;
User created.

SQL> grant connect,resource to snapdotest;
Grant succeeded.

SQL> conn snapdotest/doyensys@do_pstby
Connected.
SQL>
SQL> create table dotest(code number, name char(20));
Table created.

SQL> insert into dotest values (500,'MAX');
1 row created.

SQL> insert into dotest values(750,'MARK');
1 row created.

SQL> commit;
Commit complete.

SQL> select * from dotest;
CODE       NAME
---------- --------------------
500        MAX
750        MARK

SQL> update snapdotest.dotest set code=550 where name='MAX';
1 row updated.

SQL> commit;
Commit complete.

SQL> select * from snapdotest.dotest;
CODE       NAME
---------- --------------------
550        MAX
750        MARK

In the meantime, you can also see that redo from the primary database is received by the snapshot standby database but not applied.
On the primary database the latest sequence generated is 248, and on the standby database the RFS process is idle for sequence 249.
Primary:

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD#    MAX(SEQUENCE#)
---------- --------------
 1         248

Standby:
SQL> select process,status,sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
 --------- ------------ ----------
 ARCH      CLOSING               1
 ARCH      CONNECTED             0
 ARCH      CONNECTED             0
 ARCH      CONNECTED             0
 RFS       IDLE                  0
 RFS       IDLE                249
 RFS       IDLE                  0

7 rows selected.
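For completeness, a snapshot standby can later be converted back to a physical standby; the updates made while it was open read-write are discarded via the implicit guaranteed restore point created during the conversion. This sketch writes the documented reverse sequence to a SQL*Plus script (the file name is arbitrary) rather than running it:

```shell
#!/bin/sh
# Write the reverse-conversion steps (snapshot -> physical standby) to a
# script for later execution in SQL*Plus as SYSDBA on the standby.
SQLFILE=convert_back_to_physical.sql

cat > "$SQLFILE" <<'EOF'
-- Run on the snapshot standby as SYSDBA.
shutdown immediate
startup mount
alter database convert to physical standby;
-- A clean restart is required after the conversion.
shutdown immediate
startup mount
-- Resume redo apply; the accumulated redo from the primary is then applied.
alter database recover managed standby database disconnect from session;
EOF

echo "Wrote $SQLFILE"
```

After redo apply catches up, v$database.database_role on the standby shows PHYSICAL STANDBY again.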