DOYENSYS Knowledge Portal








Wednesday, February 29, 2012

R12 Configurations in DMZ

DMZ ... easy to think of as the external URL.

[I won't explain what a DMZ is here; plenty of material is available in MOS Doc 380490.1. Below are the straight steps to configure the DMZ (i.e. the external URL), so you can follow the configuration end to end.]

R12 Configuration in DMZ

1. Copy and clone the web tier, or copy the full application tier and enable only the web services, onto a separate server that will become the DMZ node.
[For the cloning procedure, please follow MOS (Metalink) document 406982.1.]

2. From the primary application tier node (not the web tier / new DMZ node),
execute the following SQL to switch the profile option hierarchy to SERVRESP for the DMZ configuration:
sqlplus apps/apps @$FND_TOP/patch/115/sql/txkChangeProfH.sql SERVRESP

3. Run AutoConfig on all application tier nodes, including the new DMZ node (a consolidated shell sketch of steps 2 and 3 appears at the end of this post):

cd $ADMIN_SCRIPTS_HOME
sh adautocfg.sh

4. Make the profile option changes required for the DMZ configuration:
i. Log in to Oracle E-Business Suite as the SYSADMIN user using the internal URL.
ii. Select the System Administrator responsibility.
iii. Navigate to Profile > System.
iv. In the 'Find System Profile Values' window, check the Server box and select the server that you want to designate as the external web tier, then query for %NODE%TRUST%. You will see a profile option named 'Node Trust Level'.
v. The value of this profile option at site level will be Normal; leave that setting unchanged.
vi. Set the value of this profile option to External at the server level. The site level value should remain set to Normal.
vii. Save your changes.

5. Modify the module-specific profile options that are required to map the DMZ URL.

For example:
Profile name : POS: External URL
Sample value : http://testebsdmz.xyz.com:8000/pls/TEST

Profile name : POS: Internal URL
Sample value : http://testebsapp.xyz.com:8000/pls/TEST
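
For reference, here is a minimal shell sketch of steps 2 and 3, run from the primary application tier with the APPS environment file already sourced. The APPS password, the node list, and passwordless ssh between the nodes are assumptions for illustration only; adjust them to your environment.

#!/bin/bash
# Sketch only: switch the profile hierarchy to SERVRESP (step 2), then run
# AutoConfig on every application tier node, including the new DMZ node (step 3).

APPS_PWD='apps'                                  # assumption: replace with the real APPS password
NODES="testebsapp.xyz.com testebsdmz.xyz.com"    # assumption: all application tier / DMZ nodes

# Step 2: run once, from the primary application tier node
sqlplus -s apps/"$APPS_PWD" @"$FND_TOP"/patch/115/sql/txkChangeProfH.sql SERVRESP

# Step 3: run AutoConfig on each node (assumes the environment file is sourced
# in the remote shell so that $ADMIN_SCRIPTS_HOME resolves there)
for node in $NODES; do
  ssh "$node" 'cd $ADMIN_SCRIPTS_HOME && sh adautocfg.sh'
done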

Tuesday, February 21, 2012

To debug oracle application web server issues after cloning


The AOL/J Setup Test suite consists of a collection of tests that you can run to determine whether your web server is configured properly. The suite is made up of Java Server Pages (JSPs) designed to troubleshoot AOL/J setup problems. These JSPs exercise various features of AOL/J and provide feedback on the results (they do not change any settings). The test suite is accessed from the URL:

http://<host_name>:<port_number>/OA_HTML/jsp/fnd/aoljtest.jsp
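
As a quick sanity check after a clone, you can confirm the test page is being served before opening it in a browser. A sketch only; the host name and port below are placeholders for your web tier.

# Sketch: verify the AOL/J test page responds (expect an HTTP 200 status line)
curl -sI http://testebsapp.xyz.com:8000/OA_HTML/jsp/fnd/aoljtest.jsp | head -1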

Monday, February 20, 2012

To display the details of available server CPU resources in UNIX

Display the number of CPUs in HP-UX

In HP-UX, the ioscan command is used to display the number of processors available on the server.


root> ioscan -C processor | grep processor | wc -l
16

Display number of CPUs in Solaris



In Sun Solaris, the psrinfo command can be used to count the number of CPUs on the server.

root> psrinfo -v|grep "Status of processor"|wc -l
       2
For details about the Solaris CPUs, the -v (verbose) option can be used with the psrinfo command

root> psrinfo -v

Status of processor 0 as of: 12/13/00 14:47:41
  Processor has been on-line since 11/05/00 13:26:42.
  The sparcv9 processor operates at 450 MHz,
  and has a sparcv9 floating point processor.
Status of processor 2 as of: 12/13/00 14:47:41
  Processor has been on-line since 11/05/00 13:26:43.
  The sparcv9 processor operates at 450 MHz,
  and has a sparcv9 floating point processor.

Display number of CPUs in Linux


To see the number of CPUs on a Linux server, you can cat the /proc/cpuinfo file.  In the example below we see that our Linux server has 4 CPUs.

root> cat /proc/cpuinfo|grep processor|wc -l
      4
 
Display number of CPUs in AIX

The lsdev command can be used to count the number of CPUs on an IBM AIX server:


root> lsdev -C|grep Process|wc -l
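
If you manage a mix of platforms, the per-OS commands above can be wrapped in a small dispatch script. This is a sketch only, assuming the commands behave on each platform as shown above.

#!/bin/sh
# Sketch: print the CPU count using the platform-appropriate command
case "$(uname -s)" in
  HP-UX)  ioscan -C processor | grep processor | wc -l ;;
  SunOS)  psrinfo | wc -l ;;
  Linux)  grep -c ^processor /proc/cpuinfo ;;
  AIX)    lsdev -C | grep Process | wc -l ;;
  *)      echo "unsupported platform" >&2; exit 1 ;;
esac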


*********************************************************************************************************

To report the current CPU Activity

sar -u 1 5

The sar utility can be used to report various information about server resource usage.

In the above example the parameters used are as follows:
   -u reports CPU utilization.
   1 is the sampling interval in seconds.
   5 is the number of samples to collect.
   So the command displays CPU utilization every second, five times.


It reports the percentage of CPU time spent in user mode (%usr), in system mode (%sys), waiting for I/O (%wio), and idle (%idle). On AIX the output also includes physc, the number of physical processors consumed, as in the sample below.

Sample Output


14:08:17    %usr    %sys    %wio   %idle   physc
14:08:18      25       0       0      75    4.00
14:08:19      25       0       0      74    4.00
14:08:20      25       0       0      75    4.00
14:08:21      25       0       0      75    4.00
14:08:22      25       0       0      75    4.03
Average       25       0       0      75    4.01
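
If you only want the averages for scripting, the summary line can be pulled out with awk. A sketch, assuming the AIX-style layout shown above where %idle is the second-to-last column; field positions differ on other platforms.

# Sketch: extract the average %idle from the sar summary line
sar -u 1 5 | awk '/^Average/ { print "Average idle: " $(NF-1) "%" }'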

Finding files in Unix based on specific criteria


FIND FILES THAT WERE MODIFIED/ACCESSED WITHIN A SPECIFIED TIME

            $ find /<your path> -mtime -1 -type f
This command returns files modified within the last 24 hours.
The -mtime option matches files based on when they were last modified, measured in 24-hour periods. For example, to find files modified more than 30 days ago you would use the -mtime +30 option.
  • -mtime +30   means modified more than 30 days ago.
  • -mtime -30   means modified less than 30 days ago.
  • -mtime 30    if you skip + or - it means exactly 30 days ago.
      -type f      restricts the search to files (not directories).
To list the files in the directory tree that were modified within the past five minutes, type
$ find /<your path> -mmin -5

To return a list of files that were accessed within the last 24 hours, use the -atime option:
$ find /<your path> -atime -1 -type f
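
A common use of -mtime on an E-Business Suite host is clearing out old concurrent request log and output files. A sketch only; $APPLCSF, the file patterns, and the 30-day cutoff are assumptions to adapt to your own retention policy.

# Sketch: list request log/output files older than 30 days under $APPLCSF
find "$APPLCSF" -type f \( -name "*.req" -o -name "*.out" -o -name "*.log" \) -mtime +30 -print
# once you are happy with the list, append:  -exec rm -f {} +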
******************************************************************

FINDING FILES WITH A SPECIFIC EXTENTION WITHIN A DIRECTORY

         $ find /<your path> -name "*.cfg"
The -name option matches file names against the specified pattern.

******************************************************************

FINDING FILES BASED ON THE FILE-PERMISSIONS


Files with execute permission for group:
$ find /<your path> -perm -g=x -type f
Files with execute permission for others:
$ find /<your path> -perm -o=x -type f
Here 'g' denotes group, 'o' denotes others, and x denotes the execute permission. The leading '-' before the mode tells find to match files that have at least these bits set; without it, -perm matches only files whose permissions are exactly the given mode.
******************************************************************

KILLING ACTIVE UNIX PROCESSES FOR SPECIFIC COMPONENTS


ps -ef | grep applmgr | grep <Component Name> | grep -v grep | awk '{print $2}' | xargs kill -9

For example to kill all active forms processes, we can use the following command:


ps -ef |grep applmgr |grep frm | grep -v grep |awk '{print $2}' |xargs kill -9 
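
On platforms that ship pgrep/pkill, the same thing can be expressed more compactly. A sketch, assuming the processes run as the applmgr OS user; as with the pipeline above, reserve kill -9 for processes that ignore a normal kill.

# Sketch: kill all processes owned by applmgr whose command line matches "frm"
pkill -9 -u applmgr -f frm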

Friday, February 17, 2012

Purging Data in Oracle Applications


Purge Concurrent Request and/or Manager Data Program :
Sometimes when we submit a request, it takes half a minute or more for the submission to commit; this happens when a large number of rows have accumulated in the concurrent processing tables. This type of performance issue can be resolved by running the "Purge Concurrent Requests and/or Manager Data" concurrent program.

The "Purge Concurrent Requests and/or Manager Data" concurrent program deletes information about completed concurrent requests from a set of tables.
It can also delete the log and output files for those concurrent requests from your UNIX file system. Most companies run this program and delete all records older than 7 to 31 days.

In other words: when the tables FND_CONCURRENT_REQUESTS and FND_CONCURRENT_PROCESSES hold a large number of rows, performance degrades.

Run the Purge Concurrent Request and/or Manager Data program on a regular basis, with a frequency that depends on the volume of requests being run.
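
To gauge how much there is to purge before scheduling the program, you can count completed requests older than your intended retention window. A sketch, assuming a 30-day cutoff and that phase_code = 'C' marks completed requests; replace the password placeholder.

# Sketch: count completed concurrent requests older than 30 days
APPS_PWD='apps'   # assumption: replace with the real APPS password
sqlplus -s apps/"$APPS_PWD" <<'EOF'
SELECT COUNT(*)
  FROM fnd_concurrent_requests
 WHERE phase_code = 'C'
   AND request_date < SYSDATE - 30;
EOF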


How to run the Purge Concurrent Request and/or Manager Data program :
-------------------------------------------------------------------------------------------------------------------

1. Log in to the application with the System Administrator responsibility.
2. Navigate to Request > Run > Single Request.
3. Query up Purge Concurrent Requests.


Program options:
-------------------------
ENTITY = ALL   : Purge of concurrent requests, concurrent managers, request log files,
manager log files and report output files. The following tables are purged:
-  Fnd_Concurrent_Processes
-  Fnd_Dual
-  Fnd_Concurrent_Requests
-  Fnd_Run_Requests
-  Fnd_Conc_Request_Arguments
-  Fnd_Env_Context
-  Deletes concurrent requests' log and out files from the OS

ENTITY = MANAGER   : Purge of concurrent managers and manager log files.
The following tables are purged:
-  Fnd_Concurrent_Processes
-  Fnd_Dual
-  Deletes concurrent manager log files from the OS

ENTITY = REQUEST   : Purge of concurrent requests, request log files and output files.
The following tables are purged:
-  Fnd_Concurrent_Requests
-  Fnd_Run_Requests
-  Fnd_Conc_Request_Arguments
-  Fnd_Dual
-  Deletes concurrent requests' log and out files from the OS

Mode                  :  AGE      Number of days.
                         COUNT    Number of records.

Mode Value            :  Valid values are 1 - 9999999.
User Name             :  Application username.
Oracle ID             :  Oracle ID.
Program Application   :  Application.
Program               :  Program.
Manager Application   :  Application associated with the concurrent manager.
Manager               :  Concurrent manager.
Resp. Application     :  Application associated with the responsibility.
Responsibility        :  Responsibility, or "All".
Report                :  No   Run the program but do not generate a report.
                         Yes  Run the program and generate a report.


WARNING : The only option that purges all of these tables is "ENTITY = ALL".
It is best to use this option so that the concurrent requests and
concurrent processes tables stay synchronised.

Detailed information of the tables that will be purged:

FND_CONCURRENT_REQUESTS
This table contains a complete history of all concurrent requests.
FND_RUN_REQUESTS
When a user submits a report set, this table stores information about the
reports in the report set and the parameter values for each report.
FND_CONC_REQUEST_ARGUMENTS
This table records arguments passed by the concurrent manager to each program
it starts running.
FND_DUAL
This table records when requests do not update database tables.
FND_CONCURRENT_PROCESSES
This table records information about Oracle Applications and operating system
processes.
FND_CONC_STAT_LIST
This table collects runtime performance statistics for concurrent requests.
FND_CONC_STAT_SUMMARY
This table contains the concurrent program performance statistics generated by
the Purge Concurrent Request and/or Manager Data program. The Purge Concurrent
Request and/or Manager Data program uses the data in FND_CONC_STAT_LIST to
compute these statistics.
FND_CONC_PP_ACTIONS
Stores the post-request processing actions (e.g., print, notify) for each
submitted request. There is a concurrent_request_id here for each request_id
in FND_CONCURRENT_REQUESTS.
FND_RUN_REQ_PP_ACTIONS
Stores the post-request processing actions (e.g., print, notify) for
submitted request set programs that are stored in FND_RUN_REQUESTS.
FND_ENV_CONTEXT
This table stores the environment name and value for each concurrent
process.

Purge Obsolete Workflow Runtime Data Concurrent Program :
------------------------------------------------------------------------------------------------

This program purges obsolete runtime information associated with work items, obsolete design information (such as activities that are no longer in use and expired users and roles), and obsolete runtime information not associated with work items,
such as notifications or Oracle XML Gateway transactions that were not handled through a workflow process.
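
Before submitting the program, it can be useful to see how much purgeable data has accumulated per item type. A sketch, assuming that work items with a non-null end_date are candidates for purging once their persistence rules allow it; replace the password placeholder.

# Sketch: count closed workflow items by item type
APPS_PWD='apps'   # assumption: replace with the real APPS password
sqlplus -s apps/"$APPS_PWD" <<'EOF'
SELECT item_type, COUNT(*)
  FROM wf_items
 WHERE end_date IS NOT NULL
 GROUP BY item_type
 ORDER BY COUNT(*) DESC;
EOF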

1.  Navigate to the Submit Requests form in Oracle Applications to submit the Purge Obsolete Workflow Runtime Data concurrent program. When you install and set up Oracle Applications and Oracle Workflow,
your system administrator needs to add this concurrent program to a request security group for the responsibility that you want to run this program from.

2.  Submit the Purge Obsolete Workflow Runtime Data concurrent program as a request.

3.  In the Parameters window, enter values for the following parameters:


Item Type            :  Item type associated with the obsolete runtime data you want to delete. Leave this argument null to delete obsolete runtime data for all item types.

Item Key             :  A string generated from the application object's primary key. The string uniquely identifies the item within an item type. If null, the program purges all items of the specified item type.

Age                  :  Minimum age of data to purge, in days, if the persistence type is set to 'TEMP'. The default is 0.

Persistence Type     :  Persistence type to be purged, either 'TEMP' for Temporary or 'PERM' for Permanent. The default is 'TEMP'.

Core Workflow Only   :  Enter 'Y' to purge only obsolete runtime data associated with work items, or 'N' to purge all obsolete runtime data as well as obsolete design data. The default is 'N'.

Transaction Type     :  The Oracle XML Gateway transaction type to purge. Leave this argument null to purge the runtime data for all transaction types.

Transaction Subtype  :  The Oracle XML Gateway transaction subtype to purge. The transaction subtype is a code for a particular transaction within the application specified by the transaction type.
Leave this argument null to purge the runtime data for all transactions of the specified transaction type.


4.  Choose OK to close the Parameters window.


5.  When you finish modifying the print and run options for this request, choose Submit to submit the request.

Gather Schema Statistics - Oracle Applications

Gather Schema Statistics :
The Gather Schema Statistics program generates statistics that quantify the data distribution and storage characteristics of tables, columns, indexes, and partitions.

The cost-based optimizer (CBO) uses these statistics to calculate the selectivity of predicates and to estimate the cost of each execution plan.

As a general rule, run Gather Schema Statistics under the following circumstances:

1.  After there has been a significant change in data in either content or volume.

2.  After importing data.

3.  Any time end-users notice deterioration in performance in routine day-to-day business transactions or when running concurrent programs.

4.  Run on a regular basis (weekly at a minimum) and anytime after application of patch, conversion, etc.


The Modifications Threshold defines the percentage of change beyond which statistics are gathered for an object.
The default is 10%, meaning statistics are collected for any table that has changed by more than 10% via DML since the last gather; otherwise the table is skipped.


How to run Gather Schema Statistics concurrent program:

1. Log on to Oracle Applications with
    Responsibility = System Administrator

2. Submit Request Window
    Navigate to: Concurrent > Requests

3. Query for the Gather Schema Statistics

4. Enter the appropriate parameters. The program can be run for a specific schema by entering the schema name, or enter 'ALL' to gather statistics for every Oracle Applications schema in the database.

5. Submit the Gather Schema Statistics program (the underlying FND_STATS package can also be called directly from SQL*Plus, as sketched below).
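
For a one-off gather of a single schema, the FND_STATS package that the concurrent program wraps can be invoked directly. A sketch, using the GL schema as an example and leaving the sample size and degree at their defaults; replace the password placeholder.

# Sketch: gather statistics for one schema via FND_STATS from the command line
APPS_PWD='apps'   # assumption: replace with the real APPS password
sqlplus -s apps/"$APPS_PWD" <<'EOF'
-- defaults are used for the estimate percent and degree (assumption)
EXEC fnd_stats.gather_schema_stats('GL');
EOF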


Parameters :
------------------
Schema Name:  Schema for which statistics are to be gathered. Specify ALL for all Oracle Applications schemas

Percent:  The sampling percentage. If left blank, the default value of 10 is used. The valid range is from 0 to 100

Degree:  The degree of parallelism to be used for gathering statistics. If a degree is not provided, it defaults to the minimum of parallel_max_servers and cpu_count.

Backup Flag:  If NOBACKUP is used, the GATHER_SCHEMA_STATS procedure will not back up the current statistics, which makes it run faster.

Restart Request ID:  If a Gather Schema Statistics run fails for whatever reason, the concurrent request can be re-submitted and will pick up where the failed run left off, provided you supply the concurrent request_id of the failed run.

History Mode:  Last Run - history records for each object are maintained only for the last gather statistics run. Each subsequent run overwrites the previous history record for the object. This is the default behavior.

Gather Options:  GATHER - all tables and indexes of the specified schema are selected for statistics gathering. This is the default.

Modifications Threshold:  Applicable only to GATHER AUTO and LIST AUTO Options

Invalidate Dependent Cursors:  This flag indicates whether cursors dependent on the table being analyzed should be invalidated or not. By default, dependent cursors are invalidated.


How to gather statistics for a custom schema when submitting the Gather Schema Statistics concurrent request:


When we submit Gather Schema Statistics with the parameter ALL, the concurrent request completes successfully, and DBAs may not realize that the custom schemas have not been analyzed.

SQL > select count(table_name) from dba_tables where last_analyzed is not null and owner = '<CUSTOM_SCHEMA_NAME>';

Here you will see that none of the tables in the custom schema have been analyzed.


The Gather Schema Statistics program gathers statistics for all standard schemas; however, it can skip custom schemas even though they are registered in Oracle Applications.

Reason:

Whenever a custom schema is registered in Oracle Applications, entries are made in two tables,
i.e. FND_ORACLE_USERID and FND_APPLICATION_TL.
However, when Gather Schema Statistics is submitted, it uses the query below to get the list of schemas:
SQL > select distinct upper(oracle_username) sname
        from fnd_oracle_userid a,
             fnd_product_installations b
       where a.oracle_id = b.oracle_id
       order by sname;
Note : When a custom schema is registered this way, no entry is made in FND_PRODUCT_INSTALLATIONS, and hence the schema is not picked up by the above query.
Solution :
Make an entry in FND_PRODUCT_INSTALLATIONS so that the custom schema is picked up by Gather Schema Statistics. Follow the steps below.
Go to the Alert Manager responsibility and navigate to the Installations form under the Systems menu.
Define the custom application in this form: go to the last record and add an entry for the custom application. Once this is done, a row is inserted into FND_PRODUCT_INSTALLATIONS.
Submit Gather Schema Statistics again, then query DBA_TABLES and you will see that statistics are now being gathered for the custom schemas as well (a verification sketch follows below).
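
A quick way to verify the change took effect; XXCUST is a placeholder for your custom schema, and the connection assumes a DBA-privileged account.

# Sketch: confirm the custom schema's tables now have fresh statistics
DBA_PWD='manager'   # assumption: replace with a DBA-privileged user/password
sqlplus -s system/"$DBA_PWD" <<'EOF'
SELECT table_name, last_analyzed
  FROM dba_tables
 WHERE owner = 'XXCUST'
 ORDER BY last_analyzed;
EOF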