DOYENSYS Knowledge Portal




Welcome to the Doyensys Knowledge Portal.




Thursday, March 31, 2016

Granting read-only access on V$ views to a user

SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 31 02:51:14 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> grant select on v$instance to gnos;
grant select on v$instance to gnos
                *
ERROR at line 1:
ORA-02030: can only select from fixed tables/views


SQL> select owner, object_name from dba_objects where object_name in ('V$INSTANCE','V$DATABASE');

OWNER                 OBJECT_NAME
--------------------------------------------------------------------------------
PUBLIC                V$DATABASE
PUBLIC                V$INSTANCE





SQL> SELECT TABLE_OWNER, TABLE_NAME FROM DBA_SYNONYMS WHERE SYNONYM_NAME in ('V$INSTANCE','V$DATABASE');

TABLE_OWNER                       TABLE_NAME
--------------------------------------------------------------------------------
SYS                               V_$DATABASE
SYS                               V_$INSTANCE


SQL> grant select on V_$DATABASE to gnos;

Grant succeeded.

SQL>  grant select on V_$INSTANCE to gnos;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
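The lookup-and-grant sequence from the session above can be scripted. A minimal sketch, assuming the same sample grantee gnos and the two views from the example; it only generates the grant statements (the V$ names are public synonyms on fixed views, so the grants must target the SYS-owned V_$ views):

```shell
#!/bin/bash
# Map a V$ synonym name to its underlying SYS view (V$FOO -> V_$FOO)
# and emit the matching grant statement.
grant_stmt() {
  local user=$1 view=$2
  # strip the leading "V$" and rebuild the name with the underscore
  printf 'grant select on V_$%s to %s;\n' "${view#V\$}" "$user"
}

# Example run with the two views from the note above; gnos is the sample grantee.
for v in 'V$DATABASE' 'V$INSTANCE'; do
  grant_stmt gnos "$v"
done
```

The generated statements can then be fed to SQL*Plus as SYS, exactly as in the session above.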

Auto-apply patches from PROD to DR in WebLogic Server



Scripts


#!/bin/bash
############################################################# Move_bkp.sh #############################################
cd /data/bea/user_projects/domains/domain_FirstGen/ || exit 1
for f in *.ear
do
   cp -f "$f" "/data/bea/user_projects/domains/domain_FirstGen/${f%.ear}.ear_bkp_$(date +%m%d%y)"
   rm -f "$f"
done
############################################################## Test1.sh ###############################################
ssh root@192.168.1.1 "bash -s" < /xgen/xgtest3/script/move_bkp.sh >> /xgen/xgtest3/script/1.log

scp -r /xgen/xgtest3/bea/user_projects/domains/domain_FirstGen/*.ear root@192.168.1.1:/data/bea/user_projects/domains/domain_FirstGen

Scripts Explanation


#!/bin/bash
############################################################# Move_bkp.sh #############################################
cd /data/bea/user_projects/domains/domain_FirstGen/ || exit 1
for f in *.ear
do
   cp -f "$f" "/data/bea/user_projects/domains/domain_FirstGen/${f%.ear}.ear_bkp_$(date +%m%d%y)"
   rm -f "$f"
done


Move_bkp

            The move_bkp script backs up the existing ear files at the DR site (renaming each with a date suffix) before the new ear files are copied over from PROD.
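The same backup step can also be done with mv, which renames in a single operation instead of the cp + rm pair. A sketch, with the domain path shown only as an example:

```shell
#!/bin/bash
# Rename every *.ear in a directory to *.ear_bkp_MMDDYY, as move_bkp.sh does,
# but with a single mv per file instead of cp followed by rm.
backup_ears() {
  local dir=$1 stamp
  stamp=$(date +%m%d%y)
  for f in "$dir"/*.ear; do
    [ -e "$f" ] || continue                   # skip when no .ear files exist
    mv "$f" "${f%.ear}.ear_bkp_${stamp}"      # app.ear -> app.ear_bkp_MMDDYY
  done
}

# e.g. backup_ears /data/bea/user_projects/domains/domain_FirstGen
```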


Scripts for Test1

ssh root@192.168.1.1 "bash -s" < /xgen/xgtest3/script/move_bkp.sh >> /xgen/xgtest3/script/1.log

scp -r /xgen/xgtest3/bea/user_projects/domains/domain_FirstGen/*.ear root@192.168.1.1:/data/bea/user_projects/domains/domain_FirstGen



Test1

This is the main script: it invokes move_bkp on the DR host to back up the old ear files, then copies the new ear files from PROD to DR.

Script to automatically apply or move WebLogic / middleware patches from PROD to DR



Scripts

#!/bin/bash
############################################################## XML ###############################################
rm -rf /home/DR_test/remonte_scp_xml.sh
rm -rf /home/DR_test/remonte_scp_jasper.sh
rm -rf /home/DR_test/remonte_scp_jrxml.sh
find /home/DR_test/mxl -name "*.xml" >> /home/DR_test/demo_xml.log
ssh root@192.168.1.1 "bash -s" < /home/DR_test/remonte_xml.sh >> /home/DR_test/export_xml.log

file1="/home/DR_test/demo_xml.log"
file2="/home/DR_test/export_xml.log"

comm -23 $file1 $file2 >> /home/DR_test/test.log

c=`cat /home/DR_test/test.log | wc -l`
i=1
while [ $i -le $c ]
do
cat /home/DR_test/test.log  |awk "FNR=="$i"" |awk '{print " scp -rp " $1 " root@192.168.1.1:" $1 }' >> /home/DR_test/remonte_scp_xml.sh
i=`expr $i + 1`
done

rm /home/DR_test/demo_xml.log
rm /home/DR_test/export_xml.log
rm /home/DR_test/test.log


############################################################## JASPER ###############################################
find /home/DR_test/mxl -name "*.jasper" >> /home/DR_test/demo_jasper.log
ssh root@192.168.1.1 "bash -s" < /home/DR_test/remonte_jasper.sh >> /home/DR_test/export_jasper.log

file1="/home/DR_test/demo_jasper.log"
file2="/home/DR_test/export_jasper.log"
comm -23 $file1 $file2 >> /home/DR_test/test.log

c=`cat /home/DR_test/test.log | wc -l`
i=1
while [ $i -le $c ]
do
cat /home/DR_test/test.log  |awk "FNR=="$i"" |awk '{print " scp -rp " $1 " root@192.168.1.1:" $1 }' >> /home/DR_test/remonte_scp_jasper.sh
i=`expr $i + 1`
done

rm /home/DR_test/demo_jasper.log
rm /home/DR_test/export_jasper.log
rm /home/DR_test/test.log

############################################################## JRXML ################################################

find /home/DR_test/mxl -name "*.jrxml" >> /home/DR_test/demo_jrxml.log
ssh root@192.168.1.1 "bash -s" < /home/DR_test/remonte_jrxml.sh >> /home/DR_test/export_jrxml.log

file1="/home/DR_test/demo_jrxml.log"
file2="/home/DR_test/export_jrxml.log"

comm -23 $file1 $file2 >> /home/DR_test/test.log

c=`cat /home/DR_test/test.log | wc -l`
i=1
while [ $i -le $c ]
do
cat /home/DR_test/test.log  |awk "FNR=="$i"" |awk '{print " scp -rp " $1 " root@192.168.1.1:" $1 }' >> /home/DR_test/remonte_scp_jrxml.sh
i=`expr $i + 1`
done

rm /home/DR_test/demo_jrxml.log
rm /home/DR_test/export_jrxml.log
rm /home/DR_test/test.log
sh /home/DR_test/remonte_scp_xml.sh
sh /home/DR_test/remonte_scp_jasper.sh
sh /home/DR_test/remonte_scp_jrxml.sh
find /home/DR_test/mxl -name "*.xml"
find /home/DR_test/mxl -name "*.jasper"
find /home/DR_test/mxl -name "*.jrxml"
Scripts Explanation

#!/bin/bash
############################################################## XML ###############################################
rm -rf /home/DR_test/remonte_scp_xml.sh
rm -rf /home/DR_test/remonte_scp_jasper.sh
rm -rf /home/DR_test/remonte_scp_jrxml.sh
find /home/DR_test/mxl -name "*.xml" >> /home/DR_test/demo_xml.log
# remonte_xml.sh runs the corresponding listing on the remote side: find /home/DR_test/mxl -name "*.xml"
ssh root@192.168.1.1 "bash -s" < /home/DR_test/remonte_xml.sh >> /home/DR_test/export_xml.log
file1="/home/DR_test/demo_xml.log"
file2="/home/DR_test/export_xml.log"

comm -23 $file1 $file2 >> /home/DR_test/test.log

c=`cat /home/DR_test/test.log | wc -l`
i=1
while [ $i -le $c ]
do
cat /home/DR_test/test.log  |awk "FNR=="$i"" |awk '{print " scp -rp " $1 " root@192.168.1.1:" $1 }' >> /home/DR_test/remonte_scp_xml.sh
i=`expr $i + 1`
done

rm /home/DR_test/demo_xml.log
rm /home/DR_test/export_xml.log
rm /home/DR_test/test.log



XML

    The XML section copies the xml files that are missing at the DR site. First it lists the xml files in PROD and in DR, storing the lists in demo_xml.log and export_xml.log. It then compares the two files and stores the output in test.log. Finally it counts the lines of test.log into the variable c and, while the counter i is less than or equal to c, appends one scp command per file to copy it from source to destination.
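One caveat worth noting: comm -23 assumes both input files are already sorted, and find output is not guaranteed to be. A minimal sketch of a safe comparison using process substitution (the function name and file names are illustrative, not from the original scripts):

```shell
#!/bin/bash
# List the entries present in the PROD file list but absent from the DR list.
# comm requires sorted input, so both lists are sorted on the fly.
missing_on_dr() {
  local prod_list=$1 dr_list=$2
  comm -23 <(sort "$prod_list") <(sort "$dr_list")
}
```

The output of such a function can be fed straight into the scp-generating loop shown above.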


Scripts for JASPER

 ############################################################## JASPER ###############################################
find /home/DR_test/mxl -name "*.jasper" >> /home/DR_test/demo_jasper.log
ssh root@192.168.1.1 "bash -s" < /home/DR_test/remonte_jasper.sh >> /home/DR_test/export_jasper.log

file1="/home/DR_test/demo_jasper.log"
file2="/home/DR_test/export_jasper.log"
comm -23 $file1 $file2 >> /home/DR_test/test.log

c=`cat /home/DR_test/test.log | wc -l`
i=1
while [ $i -le $c ]
do
cat /home/DR_test/test.log  |awk "FNR=="$i"" |awk '{print " scp -rp " $1 " root@192.168.1.1:" $1 }' >> /home/DR_test/remonte_scp_jasper.sh
i=`expr $i + 1`
done

rm /home/DR_test/demo_jasper.log
rm /home/DR_test/export_jasper.log
rm /home/DR_test/test.log


JASPER

The JASPER section does the same for jasper files: it lists the jasper files in PROD and DR into demo_jasper.log and export_jasper.log, compares the two files into test.log, counts the lines into the variable c, and loops while i is less than or equal to c, appending one scp command per file to copy it from source to destination.






Scripts for JRXML


############################################################## JRXML ################################################

find /home/DR_test/mxl -name "*.jrxml" >> /home/DR_test/demo_jrxml.log
ssh root@192.168.1.1 "bash -s" < /home/DR_test/remonte_jrxml.sh >> /home/DR_test/export_jrxml.log

file1="/home/DR_test/demo_jrxml.log"
file2="/home/DR_test/export_jrxml.log"

comm -23 $file1 $file2 >> /home/DR_test/test.log

c=`cat /home/DR_test/test.log | wc -l`
i=1
while [ $i -le $c ]
do
cat /home/DR_test/test.log  |awk "FNR=="$i"" |awk '{print " scp -rp " $1 " root@192.168.1.1:" $1 }' >> /home/DR_test/remonte_scp_jrxml.sh
i=`expr $i + 1`
done

rm /home/DR_test/demo_jrxml.log
rm /home/DR_test/export_jrxml.log
rm /home/DR_test/test.log
sh /home/DR_test/remonte_scp_xml.sh
sh /home/DR_test/remonte_scp_jasper.sh
sh /home/DR_test/remonte_scp_jrxml.sh

Script to generate AWR reports for 7 days and send them as a mail attachment

$cat awr_7days2.sh

export ORACLE_HOME=/PROD/app/oracle/product/11.2.0/db_1
export PATH=$PATH:$ORACLE_HOME/bin
$ORACLE_HOME/bin/sqlplus -s '/as sysdba' <<EOF
@/home/oracle/gangai/awrrpt_script/7_days_24hr_AWR_report.sql


@/home/oracle/gangai/awrrpt_script/master_awr_control.sql
EOF
exit 0


$ cat awr_report_2.sh

for inst in PROD
do
export ORACLE_SID=$inst
# oratab lookup (overridden by the hard-coded path on the next line):
# export ORACLE_HOME=`grep $ORACLE_SID /home/oracle/gangai/oratab|awk -F : '{print $2}'`
export ORACLE_HOME=/PROD/app/oracle/product/11.2.0/db_1
export PATH=$PATH:$ORACLE_HOME/bin
export MAILEE="gangai.nathan@doyensys.com"
./awr_7days2.sh $inst
#mailx -s "AWR report in ${ORACLE_SID}/`hostname` Completed" $MAILEE < /home/oracle/gangai/awrrpt_script/${ORACLE_SID}_*.html
cd /home/oracle/gangai/awrrpt_script/
tar cvf html.tar *.html
mailx -s "AWR report in ${ORACLE_SID}/`hostname` Completed" -a html.tar $MAILEE < /dev/null
done



$ cat awrrpt_script/7_days_24hr_AWR_report.sql
set lines 300;
set serveroutput on
spool /home/oracle/gangai/awrrpt_script/master_awr_control.sql
select 'set head off' from dual;
declare
cursor c is
select d.dbid id,d.name name,min(s.snap_id) bs,max(s.snap_id) es from dba_hist_snapshot s,gv$database d where trunc(s.begin_interval_time) = trunc(sysdate) group by d.name,d.dbid
union
select d.dbid id,d.name name,min(s.snap_id) bs,max(s.snap_id) es from dba_hist_snapshot s,gv$database d where trunc(s.begin_interval_time) = trunc(sysdate-1) group by d.name,d.dbid
union
select d.dbid id,d.name name,min(s.snap_id) bs,max(s.snap_id) es from dba_hist_snapshot s,gv$database d where trunc(s.begin_interval_time) = trunc(sysdate-2) group by d.name,d.dbid
union
select d.dbid id,d.name name,min(s.snap_id) bs,max(s.snap_id) es from dba_hist_snapshot s,gv$database d where trunc(s.begin_interval_time) = trunc(sysdate-3) group by d.name,d.dbid
union
select d.dbid id,d.name name,min(s.snap_id) bs,max(s.snap_id) es from dba_hist_snapshot s,gv$database d where trunc(s.begin_interval_time) = trunc(sysdate-4) group by d.name,d.dbid
union
select d.dbid id,d.name name,min(s.snap_id) bs,max(s.snap_id) es from dba_hist_snapshot s,gv$database d where trunc(s.begin_interval_time) = trunc(sysdate-5) group by d.name,d.dbid
union
select d.dbid id,d.name name,min(s.snap_id) bs,max(s.snap_id) es from dba_hist_snapshot s,gv$database d where trunc(s.begin_interval_time) = trunc(sysdate-6) group by d.name,d.dbid
union
select d.dbid id,d.name name,min(s.snap_id) bs,max(s.snap_id) es from dba_hist_snapshot s,gv$database d where trunc(s.begin_interval_time) = trunc(sysdate-7) group by d.name,d.dbid;
begin
for c1 in c
loop
if c1.bs > 0 then
dbms_output.put_line('spool /home/oracle/gangai/awrrpt_script/'||c1.name||'_'
||c1.bs||'_'||c1.es||'.html');
dbms_output.put_line('select output from table(dbms_workload_repository.awr_report_html( '||c1.id||',1,'||
c1.bs||','||
c1.es||',0 ));');
dbms_output.put_line('spool off');
end if;
end loop;
end;
/
spool off;
set heading off
set pages 50000
set linesize 1500
set trimspool on
set trimout on
set term off
set verify off;
set feedback off;
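The PL/SQL block above emits, for each day's snapshot range, a spool / report / spool-off trio into master_awr_control.sql. The same text generation can be sketched in shell to make the shape of the output concrete (the dbid, name, and snapshot ids below are sample values, not from the original):

```shell
#!/bin/bash
# Emit the three SQL*Plus lines the PL/SQL cursor loop produces for one
# day's snapshot range: spool target, AWR report call, spool off.
emit_awr_cmds() {
  local dbid=$1 name=$2 bs=$3 es=$4
  printf 'spool /home/oracle/gangai/awrrpt_script/%s_%s_%s.html\n' "$name" "$bs" "$es"
  printf 'select output from table(dbms_workload_repository.awr_report_html(%s,1,%s,%s,0));\n' "$dbid" "$bs" "$es"
  printf 'spool off\n'
}

# Example: emit_awr_cmds 123456789 PROD 1010 1034
```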

Query to check concurrent history

set pagesize 200
set linesize 200
col "Who submitted" for a25
col "Status" for a10
col "Parameters" for a20
col USER_CONCURRENT_PROGRAM_NAME for a42
SELECT distinct t.user_concurrent_program_name,
r.REQUEST_ID,
to_char(r.ACTUAL_START_DATE,'dd-mm-yy hh24:mi:ss') "Started at",
to_char(r.ACTUAL_COMPLETION_DATE,'dd-mm-yy hh24:mi:ss') "Completed at",
decode(r.PHASE_CODE,'C','Completed','I','Inactive','P','Pending','R','Running','NA') phasecode,
decode(r.STATUS_CODE, 'A','Waiting', 'B','Resuming', 'C','Normal', 'D','Cancelled', 'E','Error', 'F','Scheduled', 'G','Warning', 'H','On Hold', 'I','Normal', 'M',
'No Manager', 'Q','Standby', 'R','Normal', 'S','Suspended', 'T','Terminating', 'U','Disabled', 'W','Paused', 'X','Terminated', 'Z','Waiting') "Status",r.argument_text "Parameters",substr(u.description,1,25) "Who submitted",round(((nvl(v.actual_completion_date,sysdate)-v.actual_start_date)*24*60)) Etime
FROM
apps.fnd_concurrent_requests r ,
apps.fnd_concurrent_programs p ,
apps.fnd_concurrent_programs_tl t,
apps.fnd_user u, apps.fnd_conc_req_summary_v v
WHERE
r.CONCURRENT_PROGRAM_ID = p.CONCURRENT_PROGRAM_ID
AND r.actual_start_date >= (sysdate-30)
--AND r.requested_by=22378
AND   r.PROGRAM_APPLICATION_ID = p.APPLICATION_ID
AND t.concurrent_program_id=r.concurrent_program_id
AND r.REQUESTED_BY=u.user_id
AND v.request_id=r.request_id
--AND r.request_id ='2260046' in ('13829387','13850423')
and t.user_concurrent_program_name like '%%'
order by to_char(r.ACTUAL_COMPLETION_DATE,'dd-mm-yy hh24:mi:ss');

Query to check log switch between PROD and DR

set linesize 170
set pagesize 100
column  day     format a15              heading 'Day'
column  d_0     format a3               heading '00'
column  d_1     format a3               heading '01'
column  d_2     format a3               heading '02'
column  d_3     format a3               heading '03'
column  d_4     format a3               heading '04'
column  d_5     format a3               heading '05'
column  d_6     format a3               heading '06'
column  d_7     format a3               heading '07'
column  d_8     format a3               heading '08'
column  d_9     format a3               heading '09'
column  d_10    format a3               heading '10'
column  d_11    format a3               heading '11'
column  d_12    format a3               heading '12'
column  d_13    format a3               heading '13'
column  d_14    format a3               heading '14'
column  d_15    format a3               heading '15'
column  d_16    format a3               heading '16'
column  d_17    format a3               heading '17'
column  d_18    format a3               heading '18'
column  d_19    format a3               heading '19'
column  d_20    format a3               heading '20'
column  d_21    format a3               heading '21'
column  d_22    format a3               heading '22'
column  d_23    format a3               heading '23'
select
        substr(to_char(FIRST_TIME,'DY, YYYY/MM/DD'),1,25) day,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'00',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'00',1,0))) d_0,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'01',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'01',1,0))) d_1,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'02',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'02',1,0))) d_2,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'03',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'03',1,0))) d_3,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'04',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'04',1,0))) d_4,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'05',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'05',1,0))) d_5,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'06',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'06',1,0))) d_6,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'07',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'07',1,0))) d_7,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'08',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'08',1,0))) d_8,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'09',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'09',1,0))) d_9,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'10',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'10',1,0))) d_10,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'11',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'11',1,0))) d_11,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'12',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'12',1,0))) d_12,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'13',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'13',1,0))) d_13,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'14',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'14',1,0))) d_14,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'15',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'15',1,0))) d_15,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'16',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'16',1,0))) d_16,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'17',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'17',1,0))) d_17,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'18',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'18',1,0))) d_18,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'19',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'19',1,0))) d_19,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'20',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'20',1,0))) d_20,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'21',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'21',1,0))) d_21,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'22',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'22',1,0))) d_22,
        decode(sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'23',1,0)),0,'-',sum(decode(substr(to_char(FIRST_TIME,'HH24'),1,2),'23',1,0))) d_23
from
        v$log_history
group by
        substr(to_char(FIRST_TIME,'DY, YYYY/MM/DD'),1,25)
order by
        substr(to_char(FIRST_TIME,'DY, YYYY/MM/DD'),1,25) desc;

set linesize 80
set pagesize 14
clear columns
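The query buckets log switches by day and hour. The same bucketing idea, reduced to a shell one-liner over a list of HH:MM timestamps, looks like this (a sketch for illustration only; the real data of course comes from v$log_history):

```shell
#!/bin/bash
# Count timestamps per hour: read HH:MM lines on stdin, keep the hour
# field, then tally each distinct hour. Output: "HH count" per line.
count_per_hour() {
  cut -d: -f1 | sort | uniq -c | awk '{print $2, $1}'
}

# Example: printf '00:15\n00:45\n13:05\n' | count_per_hour
```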

Auto startup / shutdown of the database when the OS starts / restarts



Create the file as the root user:

1. Copy the oracle.txt file to /etc/init.d/oracle.
2. chmod 750 /etc/init.d/oracle
3. chkconfig --add oracle
4. To start the custom listener, edit $ORACLE_HOME/bin/dbstart and $ORACLE_HOME/bin/dbshut as shown below.
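For step 3 to work, the init script must carry the chkconfig header comments; chkconfig --add reads them to register the run levels and priorities. A sketch of the header (the run levels 345 and priorities 99/10 are typical values, not taken from the original script):

```shell
#!/bin/sh
# chkconfig: 345 99 10
# description: Starts and stops the Oracle database and listeners
#
# The two comment lines above are parsed by `chkconfig --add oracle`:
# run levels 3, 4, 5; start priority 99; stop priority 10.
```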




cat  /etc/init.d/oracle
#!/bin/sh
#
# /etc/rc.d/init.d/dbase
# Description: Starts and stops the Oracle database and listeners
# See how we were called.
ORAHOME1=/u01/app/oracle/product/11.2.0/db_1
ORAHOME2=/u01/app/oracle/product/11.2.0/PROD

case "$1" in
  start)
        echo -n "Starting All Oracle Databases: "
        echo "----------------------------------------------------" >> /var/log/dbase.log
        date +"! %T %a %D : Starting Oracle Databases as part of system up." >> /var/log/dbase.log
        echo "----------------------------------------------------" >> /var/log/dbase.log
                echo -n "Starting Oracle Listeners:PROD"
        su - oracle -c "$ORAHOME2/bin/lsnrctl start PROD" >> /var/log/dbase.log
        echo "Done."
        su - oracle -c dbstart >> /var/log/dbase.log
        echo "Done."
                echo -n "Starting Oracle Listeners:ORCL "
        su - oracle -c "$ORAHOME1/bin/lsnrctl start ORCL" >> /var/log/dbase.log
        echo "Done."
        echo ""
        echo "----------------------------------------------------" >> /var/log/dbase.log
        date +"! %T %a %D : Finished." >> /var/log/dbase.log
        echo "----------------------------------------------------" >> /var/log/dbase.log

        echo ""
        echo "----------------------------------------------------" >> /var/log/dbase.log
        date +"! %T %a %D : Finished." >> /var/log/dbase.log
        echo "----------------------------------------------------" >> /var/log/dbase.log
                touch /var/lock/subsys/oracle
        ;;
  stop)
        echo -n "Shutting Down Oracle Listeners: "
        echo "----------------------------------------------------" >> /var/log/dbase.log
        date +"! %T %a %D : Shutting Down Oracle Databases because of system down." >> /var/log/dbase.log
        echo "----------------------------------------------------" >> /var/log/dbase.log
        su - oracle -c "$ORAHOME1/bin/lsnrctl stop ORCL" >> /var/log/dbase.log
        echo "Done. ORCL"
                su - oracle -c "$ORAHOME2/bin/lsnrctl stop PROD" >> /var/log/dbase.log
        echo "Done. PROD"
        rm -f /var/lock/subsys/oracle
        echo -n "Shutting Down All Oracle Databases: "
        su - oracle -c dbshut >> /var/log/dbase.log
        echo "Done."
        echo ""
        echo "----------------------------------------------------" >> /var/log/dbase.log
        date +"! %T %a %D : Finished." >> /var/log/dbase.log
        echo "----------------------------------------------------" >> /var/log/dbase.log
        ;;
  restart)
        echo -n "Restarting Oracle Databases: "
        echo "----------------------------------------------------" >> /var/log/dbase.log
        date +"! %T %a %D : Restarting Oracle Databases as part of system up." >> /var/log/dbase.log
        echo "----------------------------------------------------" >> /var/log/dbase.log
        su - oracle -c dbshut >> /var/log/dbase.log
        su - oracle -c dbstart >> /var/log/dbase.log
        echo "Done."
        echo -n "Restarting Oracle Listeners: ORCL"
        su - oracle -c "$ORAHOME1/bin/lsnrctl stop ORCL" >> /var/log/dbase.log
        su - oracle -c "$ORAHOME1/bin/lsnrctl start ORCL" >> /var/log/dbase.log
                 echo "Done. ORCL"
                echo -n "Restarting Oracle Listeners: PROD"
        su - oracle -c "$ORAHOME2/bin/lsnrctl stop PROD" >> /var/log/dbase.log
        su - oracle -c "$ORAHOME2/bin/lsnrctl start PROD" >> /var/log/dbase.log
        echo "Done. PROD"
        echo ""
        echo "----------------------------------------------------" >> /var/log/dbase.log
        date +"! %T %a %D : Finished." >> /var/log/dbase.log
        echo "----------------------------------------------------" >> /var/log/dbase.log
        touch /var/lock/subsys/oracle
        ;;
  *)
        echo "Usage: oracle {start|stop|restart}"
        exit 1
esac












To start the custom listener, go to $ORACLE_HOME/bin/dbstart and edit it like this:


#PROD
ORACLE_HOME_LISTNER=/u01/app/oracle/product/11.2.0/PROD
if [ ! $ORACLE_HOME_LISTNER ] ; then
  echo "ORACLE_HOME_LISTNER is not SET, unable to auto-start Oracle Net Listener"
  echo "Usage: $0 ORACLE_HOME"
else
  LOG=$ORACLE_HOME_LISTNER/listener.log

  # Set the ORACLE_HOME for the Oracle Net Listener, it gets reset to
  # a different ORACLE_HOME for each entry in the oratab.
  export ORACLE_HOME=$ORACLE_HOME_LISTNER

  # Start Oracle Net Listener
  if [ -x $ORACLE_HOME_LISTNER/bin/tnslsnr ] ; then
    echo "$0: Starting Oracle Net Listener" >> $LOG 2>&1
    $ORACLE_HOME_LISTNER/bin/lsnrctl start PROD >> $LOG 2>&1 &
    VER10LIST=`$ORACLE_HOME_LISTNER/bin/lsnrctl version | grep "LSNRCTL for " | cut -d' ' -f5 | cut -d'.' -f1`
    export VER10LIST
  else
    echo "Failed to auto-start Oracle Net Listener using $ORACLE_HOME_LISTNER/bin/tnslsnr"
  fi
fi
To stop the custom listener, go to $ORACLE_HOME/bin/dbshut and edit the file the same way:

#PROD
# Add this to bring down the Oracle Net Listener
ORACLE_HOME_LISTNER=/u01/app/oracle/product/11.2.0/PROD
if [ ! $ORACLE_HOME_LISTNER ] ; then
  echo "ORACLE_HOME_LISTNER is not SET, unable to auto-stop Oracle Net Listener"
  echo "Usage: $0 ORACLE_HOME"
else
  LOG=$ORACLE_HOME_LISTNER/listener.log

  # Set the ORACLE_HOME for the Oracle Net Listener, it gets reset to
  # a different ORACLE_HOME for each entry in the oratab.
  export ORACLE_HOME=$ORACLE_HOME_LISTNER

  # Stop Oracle Net Listener
  if [ -f $ORACLE_HOME_LISTNER/bin/tnslsnr ] ; then
    echo "$0: Stopping Oracle Net Listener" >> $LOG 2>&1
    $ORACLE_HOME_LISTNER/bin/lsnrctl stop PROD >> $LOG 2>&1 &
  else
    echo "Failed to auto-stop Oracle Net Listener using $ORACLE_HOME_LISTNER/bin/tnslsnr"
  fi
fi

Script to check the number of connections and CPU load at peak time in the database


$ cat metrix_connections.sh

# output log: /home/oracle/gangai/metrix_op.log (written via the spool below)
a=`cat $1`; export a

for i in $a

do

sqlplus -S username/passwd@$i << EOF
spool metrix_op.log append

@/home/oracle/gangai/Metrix/metrix_connections.sql


exit;
EOF


done


$ cat metrix_connections.sql

set echo off;
set heading off;
set feedback off;
set verify off;
set verify off;
set trimspool off;
set lines 150;
set colsep '';
set space 0;
set pagesize 0;
select wm_concat(distinct(to_char(pvm.measured_date, 'DD-MON-RRRR:HH24'))) "Peak Hr",round(pvm.total_users) "Total Users",
round((select avg(a.total_users)
from stats$totalusers a
where a.measured_date between to_date('11-dec-2015:10:00', 'DD-MON-RRRR:HH24:MI') and to_date('11-dec-2015:13:00', 'DD-MON-RRRR:HH24:MI'))) "Avg Users"
from stats$totalusers pvm
where pvm.total_users =
(select max((pvm1.total_users)) highest_total_users
from stats$totalusers pvm1
where pvm1.measured_date between to_date('11-dec-2015:10:00', 'DD-MON-RRRR:HH24:MI')
and to_date('11-dec-2015:13:00', 'DD-MON-RRRR:HH24:MI'))
and pvm.measured_date between to_date('11-dec-2015:10:00', 'DD-MON-RRRR:HH24:MI')
and to_date('11-dec-2015:13:00', 'DD-MON-RRRR:HH24:MI')
group by (pvm.total_users);




$ cat metrix_cpu.sh
a=`cat $1`; export a

for i in $a

do

sqlplus -S username/passwd@$i << EOF
spool metrix_op.log append

@/home/oracle/gangai/Metrix/metrix_cpu.sql


exit;
EOF


done



$cat metrix_cpu.sql


set echo off;
set heading off;
set feedback off;
set verify off;
set verify off;
set trimspool off;
set lines 150;
set colsep '';
set space 0;
set pagesize 0;
select wm_concat(distinct(to_char(b1.start_date, 'DD-MON-RRRR:HH24'))) "Peak_Hr", round((b1.user_cpu+b1.system_CPU+b1.wait_cpu)) "Max_CPU",
round((select avg(b2.user_cpu+b2.system_CPU+wait_cpu)  from stats$vmstat2 b2
where
b2.start_date between to_date('18-dec-2015:01:00', 'DD-MON-RRRR:HH24:MI') and to_date('18-dec-2015:13:00', 'DD-MON-RRRR:HH24:MI'))) "Avg_CPU"
from stats$vmstat2 b1
where (b1.user_cpu+b1.system_CPU+b1.wait_cpu) =
(select max((b3.user_cpu+system_CPU+wait_cpu)) max_cpu
from stats$vmstat2 b3
where b3.start_date between to_date('18-dec-2015:01:00', 'DD-MON-RRRR:HH24:MI')
and to_date('18-dec-2015:13:00', 'DD-MON-RRRR:HH24:MI'))
and b1.start_date between to_date('18-dec-2015:01:00', 'DD-MON-RRRR:HH24:MI') and to_date('18-dec-2015:13:00', 'DD-MON-RRRR:HH24:MI')
group by (b1.user_cpu+b1.system_CPU+b1.wait_cpu);



$cat list1.log

db1
db2
db3
db4


Execution steps:

Modify the date as per your requirement.


$ ./metrix_connections.sh list1.log

It will display the peak-hour connection figures for each database in the list.

$ ./metrix_cpu.sh list1.log

It will display the peak-hour CPU figures for each database in the list.

$

Script to check disk space on all remote servers in the network

#!/bin/bash
# passwordless ssh login is required

# list the remote servers below, separated by spaces

remote_srv=(192.168.1.1 192.168.1.2  )

do_ssh() {
    ssh $s "$@"
    echo -e
}

header() {
    echo "#==================={$@}==========================#"
}

n=${#remote_srv[@]} # number of IPs in the array

for (( i=0;i<$n;i++)); do
            echo -e
            echo "$(tput bold)$(tput setaf 2)* Connected Server: ${remote_srv[$i]}  @@ $(date) @@"
            echo "* Fetching info...$(tput sgr 0)"
            s=${remote_srv[$i]}

            header "Disk Usage"
            do_ssh df -h
done

## EOF

RMAN Automation script to clone the database from DR site





#!/bin/ksh
##  Script to auto clone DOYENDB based on RMAN Duplicate
## Created by M.GANGAINATHAN
NOTIFY_LIST="gnathan@1800flowers.com"
LOGFILE=/u03/oracle/CLONE/DOYENDB/logs/DOYENDB_CLONE.txt
LOGFILE1=/u03/oracle/CLONE/DOYENDB/logs/rman_clone.txt
echo "Start of DOYENDB CLONE CREATION at `date +%D-%T`">$LOGFILE
/bin/mailx -s "RPT:- DOYENDB CLONING PROCESS Started at :- `date`" $NOTIFY_LIST <$LOGFILE
echo "Beginning Maintenance Window for OEM DOYENDB  at `date +%D-%T`">>$LOGFILE
. /orahome/env/oracle_Agent12C_env
cd $ORACLE_HOME/bin
./emctl start blackout DOYENDB_CLONE DOYENDB
echo "Ending Maintenance Window for OEM DOYENDB  at `date +%D-%T`">>$LOGFILE

# ENV file
. /orahome/env/oracle_DOYENDB_11g_env

# Stopping Listeners For preparation of Cloning
echo "Stopping LISTENER DOYENDB PROCESS  at `date +%D-%T`">>$LOGFILE
sh -x /u03/oracle/CLONE/DOYENDB/stop_listener.sh
cat /u03/oracle/CLONE/DOYENDB/logs/stop_DOYENDB_LISTENER.txt >>$LOGFILE
echo "Stopping LISTENER DOYENDB PROCESS  at `date +%D-%T`">>$LOGFILE

echo "Beginning Dropping Database DOYENDB  at `date +%D-%T`">>$LOGFILE
${ORACLE_HOME}/bin/sqlplus -s "/ as sysdba" <<-EOF >>$LOGFILE
shutdown immediate;
startup nomount;
alter database mount exclusive;
alter system enable restricted session;
drop database;
EOF
echo "Finished Dropping Database  DOYENDB at `date +%D-%T`">>$LOGFILE

echo " Starting DB in nomount state " >>$LOGFILE
echo "Beginning Starting Database DOYENDB in MOUNT STATE  at `date +%D-%T`">>$LOGFILE
${ORACLE_HOME}/bin/sqlplus -s "/ as sysdba" <<-EOF >>$LOGFILE
startup nomount pfile='/orahome/app/oracle/product/11204/DOYENDB/dbs/initDOYENDB.ora';
create spfile from pfile='/orahome/app/oracle/product/11204/DOYENDB/dbs/initDOYENDB.ora';
EOF
echo "End of Starting Database DOYENDB in MOUNT STATE  at `date +%D-%T`">>$LOGFILE

# Starting CLONE Listeners For preparation of Cloning
echo "Starting LISTENER DOYENDB PROCESS  at `date +%D-%T`">>$LOGFILE
sh -x /u03/oracle/CLONE/DOYENDB/start_listener_CLONE.sh
cat /u03/oracle/CLONE/DOYENDB/logs/start_DOYENDB_LISTENER.txt >>$LOGFILE
echo "Starting LISTENER DOYENDB PROCESS  at `date +%D-%T`">>$LOGFILE


echo " Starting RMAN CLONE PROCESS at `date +%D-%T`">>$LOGFILE
$ORACLE_HOME/bin/rman target /@ATLPROD_DD_RESTORE auxiliary / CMDFILE /u03/oracle/CLONE/DOYENDB/SQL/rman_clone.sql LOG $LOGFILE1
echo " Starting RMAN CLONE PROCESS END at `date +%D-%T`">>$LOGFILE
echo " LOGS FOR RMAN CLONE DUMP "
echo "===================================================="
echo "===================================================="
cat /u03/oracle/CLONE/DOYENDB/logs/rman_clone.txt >>$LOGFILE
echo "===================================================="
echo "===================================================="
echo " LOGS FOR RMAN CLONE DUMP "

echo " Starting DB in nomount state at `date +%D-%T` " >>$LOGFILE
echo "Beginning Starting Database DOYENDB in FOR PFILE CREATION at `date +%D-%T`">>$LOGFILE
${ORACLE_HOME}/bin/sqlplus -s "/ as sysdba" <<-EOF >>$LOGFILE
shutdown immediate;
startup mount;
alter database noarchivelog;
alter database open;
EOF
echo "End of Starting Database DOYENDB in OPEN STATE  at `date +%D-%T`">>$LOGFILE

# Starting Listeners After Cloning
echo "Starting LISTENER DOYENDB PROCESS  at `date +%D-%T`">>$LOGFILE
sh -x /u03/oracle/CLONE/DOYENDB/start_listener.sh
cat /u03/oracle/CLONE/DOYENDB/logs/start_DOYENDB_LISTENER.txt >>$LOGFILE
echo "Starting LISTENER DOYENDB PROCESS Ended at `date +%D-%T`">>$LOGFILE

echo "Starting of POST CLONE Database ACTIVITY at `date +%D-%T`">>$LOGFILE
${ORACLE_HOME}/bin/sqlplus -s "/ as sysdba" <<-EOF >>$LOGFILE
PROMPT DROPPING THE PUBLIC SYNONYMS
@/u03/oracle/CLONE/DOYENDB/SQL/post_clone.sql
@/u03/oracle/CLONE/DOYENDB/SQL/list.sql
EOF

echo " Starting of POST CLONE Database ACTIVITY :- Dropping Private DB Links at `date +%D-%T`">>$LOGFILE
sh -x /u03/oracle/CLONE/DOYENDB/drop_dblink.sh /u03/oracle/CLONE/DOYENDB/SQL/list.log
cat /u03/oracle/CLONE/DOYENDB/logs/drop_dblink_unixscript.txt >> $LOGFILE
echo " Starting of POST CLONE Database ACTIVITY Finished :- Dropping Private DB Links at `date +%D-%T`">>$LOGFILE

echo "Starting of POST CLONE1 Database ACTIVITY at `date +%D-%T`">>$LOGFILE
${ORACLE_HOME}/bin/sqlplus -s "/ as sysdba" <<-EOF >>$LOGFILE
PROMPT DROPPING THE PUBLIC SYNONYMS
@/u03/oracle/CLONE/DOYENDB/SQL/post_clone1.sql
EOF

echo "LOGS for POST CLONE Database ACTIVITY at `date +%D-%T`">>$LOGFILE
cat /u03/oracle/CLONE/DOYENDB/logs/drop_dblink.txt >>$LOGFILE
cat /u03/oracle/CLONE/DOYENDB/logs/post_clone.txt >>$LOGFILE
echo "LOGS for POST CLONE Database ACTIVITY Ended at `date +%D-%T`">>$LOGFILE

# Stopping CLONE Listeners For preparation of Cloning
echo "Stopping LISTENER DOYENDB PROCESS  at `date +%D-%T`">>$LOGFILE
sh -x /u03/oracle/CLONE/DOYENDB/stop_listener_CLONE.sh
cat /u03/oracle/CLONE/DOYENDB/logs/stop_DOYENDB_LISTENER.txt >>$LOGFILE
echo "Stopping LISTENER DOYENDB PROCESS Ended at `date +%D-%T`">>$LOGFILE
echo "End of DOYENDB CREATION  at `date +%D-%T`">>$LOGFILE

echo "Beginning Stopping Maintenance Window for OEM DOYENDB  at `date +%D-%T`">>$LOGFILE
. /orahome/env/oracle_Agent12C_env
cd $ORACLE_HOME/bin
./emctl stop blackout DOYENDB_CLONE
echo "Ending Maintenance Window for OEM DOYENDB  at `date +%D-%T`">>$LOGFILE

/bin/mailx -s "RPT:-DOYENDB CLONING PROCESS Ended at :-`date` " $NOTIFY_LIST <$LOGFILE

set `date`
cp $LOGFILE $LOGFILE.$2$3$4
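The closing two lines use a classic shell idiom: `set `date`` replaces the positional parameters with the words of the `date` output, so `$2$3$4` expands to the month, day and time (e.g. `Mar3102:51:14`), giving the copied log a date-stamped suffix. A minimal standalone sketch of the same idiom:

```shell
# `set` overwrites $1..$n with the words of the `date` output,
# e.g. "Thu Mar 31 02:51:14 EST 2016" -> $2$3$4 = "Mar3102:51:14".
set $(date)
suffix="$2$3$4"
echo "log copy would be named: clone.log.$suffix"
```

Note that the time portion contains colons, which is legal in file names on Unix but can be awkward to copy elsewhere; `date +%m%d%y` (as used in the WebLogic backup script above) avoids that.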





#!/bin/sh
. /orahome/env/oracle_DOYENDB_11g_env
LOGFILE=/u03/oracle/CLONE/DOYENDB/logs/stop_DOYENDB_LISTENER.txt
echo "LISTENER STOP PROCESS STARTED AT :-`date`" > $LOGFILE
echo " Stopping LISTENER_DOYENDB_OEM " >> $LOGFILE
lsnrctl << EOF
set current_listener LISTENER_DOYENDB_OEM
stop
exit
EOF
echo " Stopping LISTENER_DOYENDB_JOBS" >> $LOGFILE
lsnrctl << EOF
set current_listener LISTENER_DOYENDB_JOBS
stop
exit
EOF
echo "LISTENER STOP PROCESS ENDED AT :-`date`" >> $LOGFILE
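The two heredoc stanzas above are identical apart from the listener name. If more listeners are added, a small helper that generates the lsnrctl command stream avoids the duplication (a sketch, not part of the original script):

```shell
# Emit the lsnrctl commands needed to stop one listener by name.
listener_stop_cmds() {
  printf 'set current_listener %s\nstop\nexit\n' "$1"
}
# Usage: listener_stop_cmds LISTENER_DOYENDB_OEM | lsnrctl
```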

 cat /home/oracle/CLONE/DOYENDB/SQL/
PROMPT DROPPING PRIVATE DBLINKS
@/export/home/oracle/CLONE/DOYENDB/SQL/drop_existin_dblink.sql
spool /export/home/oracle/CLONE/DOYENDB/logs/drop_dblink.txt
set echo on
@/export/home/oracle/CLONE/DOYENDB/SQL/drop_dblink.sql
spool off

oracle@dbatlbocoprod01:~/CLONE/DOYENDB$ cat /export/home/oracle/oraprocs/DOYENDB/logs/start_DOYENDB_LISTENER.txt
LISTENER STOP PROCESS STARTED AT :-Wednesday, March 30, 2016 03:00:16 PM
 Stopping LISTENER_DOYENDB_OEM
 Stopping LISTENER_DOYENDB_JOBS
LISTENER STOP PROCESS ENDED AT :-Wednesday, March 30, 2016 03:00:17 PM

Wednesday, March 30, 2016

Cause: The field PARAMETER.CONFIG could not be located or read

Problem Summary
------------------------

In R12.2.4, after a clone, the concurrent output/log file shows the error "Cannot read value from field PARAMETER.CONFIG".


Problem Description
-------------------------

After the clone, the following error appears in the request log and output files, even though the request itself completes with a status of Normal.


Error in the Output/log file :
---------------------------------

Cause: The field PARAMETER.CONFIG could not be located or read.

Action: This error is normally the result of an incorrectly-entered field name string in a trigger, or a field name string that does not uniquely specify a field in your form.

Correct your trigger logic to precisely specify a valid field.


Env Details:
--------------

2 DB nodes, 2 CM nodes, 2 APP nodes


To diagnose the issue:
-------------------------

Enable FNDFS tracing in the listener file on all CM nodes:
--------------------------------------------------------------------------

( SID_DESC = ( SID_NAME = FNDFS  )
                 ( ORACLE_HOME = /u01/app/doyuat/fs1/EBSapps/10.1.2 )
                 ( PROGRAM = /u01/app/DOYuat/fs1/EBSapps/appl/fnd/12.0.0/bin/FNDFS )
                 (envs='EPC_DISABLED=TRUE,NLS_LANG=American_America.UTF8,LD_LIBRARY_PATH=/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib32:/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib:/usr/X11R6/lib:/usr/openwin/lib:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/server:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/native_threads:/u01/app/DOYuat/fs1/EBSapps/appl/sht/12.0.0/lib,SHLIB_PATH=/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib32:/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib:/usr/X11R6/lib:/usr/openwin/lib:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/server:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/native_threads:/u01/app/DOYuat/fs1/EBSapps/appl/sht/12.0.0/lib,LIBPATH=/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib32:/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib:/usr/X11R6/lib:/usr/openwin/lib:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/server:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/native_threads:/u01/app/DOYuat/fs1/EBSapps/appl/sht/12.0.0/lib,APPLFSTT=DOYUAT_806_BALANCE;DOYUATB_FO;DOYUATB;DOYUAT;DOYUATA;DOYUAT_FO;DOYUATA_FO,APPLFSWD=/u01/app/DOYuat/fs1/inst/apps/DOYUAT_doyensys01/appl/admin;/u01/app/DOYuat/fs1/inst/apps/DOYUAT_doyensys01/appltmp;/u01/app/DOYuat/fs1/FMW_Home/Oracle_EBS-app1/applications/oacore/html/oam/nonUix/launchMode/restricted,FNDFS_LOGGING=ON,FNDFS_LOGFILE=/u01/app/DOYuat/fs1/inst/apps/DOYUAT_doyensys01/logs/ora/10.1.2/network/FNDFS_DOYuat.log' )))


Entry to add inside the listener's envs='...' string (before its closing quote) to enable tracing and set the log file path:
---------------------------------------------------------------------------

FNDFS_LOGGING=ON,FNDFS_LOGFILE=/u01/app/doyuat/fs1/inst/apps/DOYUAT_doyensys01/logs/ora/10.1.2/network/FNDFS_doyuat.log

Now bounce the listener and check for the request log/output files.

a) Retest the issue.

b) The FNDFS log output is written to the location specified in the listener file, on whichever node the particular request's log/output file is fetched from.

c) In this case, the FNDFS log showed that the connection errored out when connecting to the node using the TWO_TASK alias.
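Once the trace file exists, the failures can be pulled out quickly from the shell (a sketch; the path is whatever you set in FNDFS_LOGFILE above):

```shell
# Scan an FNDFS trace file for connect errors and TWO_TASK references.
fndfs_errors() {
  grep -i -E 'ORA-|TNS-|error|TWO_TASK' "$1"
}
# fndfs_errors /u01/app/doyuat/fs1/inst/apps/DOYUAT_doyensys01/logs/ora/10.1.2/network/FNDFS_doyuat.log
```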


Solution :
-----------

1. Check the TWO_TASK on the Application nodes (e.g. it can be DOYUAT_BALANCE).
2. Check the APPLFSTT parameter in the Application tier listener file; it should include the TWO_TASK value.
3. The same TWO_TASK entry must be included in the APPLFSTT value of the listener file on both CM nodes.
4. The TWO_TASK values of the CM nodes themselves may differ (e.g. DOYUAT1 and DOYUAT2 on each node), but each CM listener file must still include the Application tier's TWO_TASK in APPLFSTT.
5. The same alias must be added to the tnsnames.ora file on both CM nodes, through which the Application node connects to the CM node.
6. As a permanent fix, change the value of APPLFSTT in the context (XML) file and run AutoConfig on the CM nodes.


Following tns entry needs to be added in the tnsnames.ora in Both CM Nodes :
-------------------------------------------------------------------------------

DOYUAT_BALANCE=
        (DESCRIPTION=
            (ADDRESS_LIST=
                (LOAD_BALANCE=YES)
                (FAILOVER=YES)
                (ADDRESS=(PROTOCOL=tcp)(HOST=doyensys02.doyen.net)(PORT=1526))
                (ADDRESS=(PROTOCOL=tcp)(HOST=doyensys01.doyen.net)(PORT=1526))
            )
            (CONNECT_DATA=
                (SERVICE_NAME=DOYUAT)
            )
        )


The listener file on both CM nodes after adding the entry to the APPLFSTT parameter:


( SID_DESC = ( SID_NAME = FNDFS  )
                 ( ORACLE_HOME = /u01/app/doyuat/fs1/EBSapps/10.1.2 )
                 ( PROGRAM = /u01/app/DOYuat/fs1/EBSapps/appl/fnd/12.0.0/bin/FNDFS )
                 (envs='EPC_DISABLED=TRUE,NLS_LANG=American_America.UTF8,LD_LIBRARY_PATH=/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib32:/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib:/usr/X11R6/lib:/usr/openwin/lib:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/server:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/native_threads:/u01/app/DOYuat/fs1/EBSapps/appl/sht/12.0.0/lib,SHLIB_PATH=/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib32:/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib:/usr/X11R6/lib:/usr/openwin/lib:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/server:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/native_threads:/u01/app/DOYuat/fs1/EBSapps/appl/sht/12.0.0/lib,LIBPATH=/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib32:/u01/app/DOYuat/fs1/EBSapps/10.1.2/lib:/usr/X11R6/lib:/usr/openwin/lib:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/server:/u01/app/DOYuat/fs1/EBSapps/10.1.2/jdk/jre/lib/i386/native_threads:/u01/app/DOYuat/fs1/EBSapps/appl/sht/12.0.0/lib,APPLFSTT=DOYUAT_BALANCE;DOYUAT_806_BALANCE;DOYUATB_FO;DOYUATB;DOYUAT;DOYUATA;DOYUAT_FO;DOYUATA_FO,APPLFSWD=/u01/app/DOYuat/fs1/inst/apps/DOYUAT_doyensys01/appl/admin;/u01/app/DOYuat/fs1/inst/apps/DOYUAT_doyensys01/appltmp;/u01/app/DOYuat/fs1/FMW_Home/Oracle_EBS-app1/applications/oacore/html/oam/nonUix/launchMode/restricted,FNDFS_LOGGING=ON,FNDFS_LOGFILE=/u01/app/DOYuat/fs1/inst/apps/DOYUAT_doyensys01/logs/ora/10.1.2/network/FNDFS_DOYuat.log' )   )  )

Tuesday, March 29, 2016


Try to Schedule Sales Order In New Test Instance - Get Message "Summary Concurrent program is running. Try ATP Later"
====================================================


On : 11.5.10.2 version, Scheduling and Sourcing

When attempting to book the sales order,
the following error occurs.
Error message in the application is:
Summary Concurrent program is running.  Try ATP Later

OM debug log file shows
ERROR
-----------------------
-----------------Loading MRP Result---------------
MRP COUNT IS 1
SCHEDULE ACTION CODE SCHEDULE

X_RETURN_STATUS E
RR: L2
AFTER ACTION SCHEDULE : E
SCH SHIP DATE
SCH ARR DATE
SHIP_FROM 2453
Exiting schedule_line: 11-JUN-12 10:48:57
schedule return status E
scheduling flow error ORA-0000: normal, successful completion
In WF save messages
ENTER Save_API_Messages
L_MSG_DATA=Scheduling failed.


STEPS
-----------------------
The issue can be reproduced at will with the following steps:
1. PS OM Super User
2. Orders/Returns-->
3. Book the sales order
4. The error occurs.

BUSINESS IMPACT
-----------------------
The issue has the following business impact:
Due to this issue, users cannot book the sales order.


SOLUTION :
=========

1. Run the following SQL on the APS destination instance.

update MSC_APPS_INSTANCES set SUMMARY_FLAG = NULL;
commit;
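A quick check afterwards (a sketch; INSTANCE_CODE is a standard column of MSC_APPS_INSTANCES, but verify it against your release):

```sql
-- The flag should now be NULL for every instance row
select instance_code, summary_flag
  from msc_apps_instances;
```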

Somehow the SUMMARY_FLAG in MSC_APPS_INSTANCES had been set to 2. That value is normal while either of the programs
Load ATP Summary Based on Collected Data or Load ATP Summary Based on Planning Output is running:
the program sets the flag (which produces this message) and clears it again on completion, so that
order scheduling can continue. The program is launched by a user, or automatically when the profile
MSC: Enable ATP Summary Mode is set to Yes.

In this case the instance appears to have been cloned while the flag was still set to 2.

Note: this program should not be used with a distributed install; in that configuration both instances
should have the profile MSC: Enable ATP Summary Mode set to No.
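To confirm the profile on each instance, the option can be looked up by its user-visible name against the standard FND profile tables (a sketch; run as APPS):

```sql
-- List current values of the ATP summary-mode profile at every level
select o.profile_option_name, v.level_id, v.profile_option_value
  from fnd_profile_options_vl o,
       fnd_profile_option_values v
 where o.user_profile_option_name = 'MSC: Enable ATP Summary Mode'
   and o.profile_option_id = v.profile_option_id;
```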


Planning Ods Load Errors: ORA-00920: invalid relational operator
=========================================


On : 12.1.3 version, ATP based on collected data

When attempting to run the MSCPDCW module (Planning ODS Load Worker),
the following error occurs.

ERROR
-----------------------
21-MAR 11:08:54 : Procedure MSC_CL_RPO_ODS_LOAD.LOAD_IRO_DEMAND started.
21-MAR 11:08:54 : ORA-00920: invalid relational operator
21-MAR 11:08:54 : ORA-00920: invalid relational operator
21-MAR 11:08:54 : Sending message :-9999999 to monitor.
21-MAR 11:08:54 : Error_Stack...
21-MAR 11:08:54 : ORA-00920: invalid relational operator

21-MAR 11:08:54 : Error_Backtrace...
21-MAR 11:08:54 : ORA-06512: at "APPS.MSC_CL_COLLECTION", line 3013
ORA-06512: at "APPS.MSC_CL_COLLECTION", line 3232



SOLUTION :
=======

To implement the solution, please execute the following steps:

1. Download and review the readme and pre-requisites for VCP 12.1.3.6 Patch 12695646 or later

2. Ensure that you have taken a backup of your system before applying 
the recommended patch. 

3. Apply the patch in a test environment. 

4. Confirm the following file versions: 
msc patch/115/import/US/mscprg.ldt 120.86.12010000.61 or higher.

5. Retest the issue. 

6. Migrate the solution as appropriate to other environments.
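The file-version check in step 4 can be done from the shell with the AD `adident` utility, falling back to `grep` where adident is not on the PATH (a sketch; `$MSC_TOP` comes from the EBS environment file):

```shell
# Print the $Header revision line of mscprg.ldt; after the patch it
# should show 120.86.12010000.61 or higher.
f="$MSC_TOP/patch/115/import/US/mscprg.ldt"
adident Header "$f" 2>/dev/null || grep -a 'Header' "$f" 2>/dev/null || echo "cannot read $f"
```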