DOYENSYS Knowledge Portal








Tuesday, July 26, 2016

Oracle Big Data SQL

Oracle Big Data SQL supports queries against vast amounts of big data stored in multiple data sources, including Apache Hive, HDFS, Oracle NoSQL Database, and Apache HBase. You can view and analyze data from various data stores together, as if it were all stored in an Oracle database.
Using Oracle Big Data SQL, you can query data stored in a Hadoop cluster using the complete SQL syntax. You can execute the most complex SQL SELECT statements against data in Hadoop, either manually or using your existing applications, to tease out the most significant insights.
Oracle Big Data SQL is licensed separately from Oracle Big Data Appliance.
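To give a flavor of how this looks in practice, here is a minimal sketch (the Hive table moviedemo.movie_ratings, the DEFAULT_DIR directory object, and the customers table are hypothetical names; it assumes Oracle Big Data SQL is already installed and configured):

-- Expose a Hive table to Oracle Database through the ORACLE_HIVE access driver
CREATE TABLE movie_ratings_ext (
  movie_id    NUMBER,
  customer_id NUMBER,
  rating      NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY DEFAULT_DIR
  ACCESS PARAMETERS (com.oracle.bigdata.tablename=moviedemo.movie_ratings)
)
REJECT LIMIT UNLIMITED;

-- The Hadoop-resident data can then be joined, filtered, and aggregated
-- with ordinary Oracle SQL
SELECT c.cust_name, AVG(r.rating) AS avg_rating
FROM   movie_ratings_ext r, customers c
WHERE  r.customer_id = c.customer_id
GROUP  BY c.cust_name;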

Software Component Overview for Oracle Big Data

The major software components perform three basic tasks:
  • Acquire
  • Organize
  • Analyze and visualize
The best tool for each task depends on the density of the information and the degree of structure. Figure 1-2 shows the relationships among the tools and identifies the tasks that they perform.



Software for Oracle Big Data

The Oracle Linux operating system and Cloudera's Distribution including Apache Hadoop (CDH) underlie all other software components installed on Oracle Big Data Appliance. CDH is an integrated stack of components that have been tested and packaged to work together.
CDH has a batch processing infrastructure that can store files and distribute work across a set of computers. Data is processed on the same computer where it is stored. In a single Oracle Big Data Appliance rack, CDH distributes the files and workload across 18 servers, which compose a cluster. Each server is a node in the cluster.
The software framework consists of these primary components:
  • File system: The Hadoop Distributed File System (HDFS) is a highly scalable file system that stores large files across multiple servers. It achieves reliability by replicating data across multiple servers without RAID technology. It runs on top of the Linux file system on Oracle Big Data Appliance.
  • MapReduce engine: The MapReduce engine provides a platform for the massively parallel execution of algorithms written in Java. Oracle Big Data Appliance 3.0 runs YARN by default.
  • Administrative framework: Cloudera Manager is a comprehensive administrative tool for CDH. In addition, you can use Oracle Enterprise Manager to monitor both the hardware and software on Oracle Big Data Appliance.
  • Apache projects: CDH includes Apache projects for MapReduce and HDFS, such as Hive, Pig, Oozie, ZooKeeper, HBase, Sqoop, and Spark.
  • Cloudera applications: Oracle Big Data Appliance installs all products included in Cloudera Enterprise Data Hub Edition, including Impala, Search, and Navigator.

The Oracle Big Data Solution

Oracle Big Data Appliance is an engineered system comprising both hardware and software components. The hardware is optimized to run the enhanced big data software components.
Oracle Big Data Appliance delivers:
  • A complete and optimized solution for big data
  • Single-vendor support for both hardware and software
  • An easy-to-deploy solution
  • Tight integration with Oracle Database and Oracle Exadata Database Machine
Oracle provides a big data platform that captures, organizes, and supports deep analytics on extremely large, complex data streams flowing into your enterprise from many data sources. You can choose the best storage and processing location for your data depending on its structure, workload characteristics, and end-user requirements.
Oracle Database enables all data to be accessed and analyzed by a large user community using identical methods. By adding Oracle Big Data Appliance in front of Oracle Database, you can bring new sources of information to an existing data warehouse. Oracle Big Data Appliance is the platform for acquiring and organizing big data so that the relevant portions with true business value can be analyzed in Oracle Database.
For maximum speed and efficiency, Oracle Big Data Appliance can be connected to Oracle Exadata Database Machine running Oracle Database. Oracle Exadata Database Machine provides outstanding performance in hosting data warehouses and transaction processing databases. Moreover, Oracle Exadata Database Machine can be connected to Oracle Exalytics In-Memory Machine for the best performance of business intelligence and planning applications. The InfiniBand connections between these engineered systems provide high parallelism, which enables high-speed data transfer for batch or query workloads.


Figure 1-1 Oracle Engineered Systems for Big Data

Creating Multiple Indexes on the Same Set of Columns

You can create multiple indexes on the same set of columns when the indexes are different in some way.


To create multiple indexes on the same set of columns, the following prerequisites must be met:
  • You must meet the privilege prerequisites described in "Creating Indexes".
  • Only one index on the same set of columns can be visible at any point in time. If you are creating a visible index, then any existing indexes on the set of columns must be invisible. Alternatively, you can create an invisible index on the set of columns.

For example, the following steps create a B-tree index and a bitmap index on the same set of columns in the oe.orders table:

1. Create a B-tree index on the customer_id and sales_rep_id columns in the oe.orders table:

CREATE INDEX oe.ord_customer_ix1    ON oe.orders (customer_id, sales_rep_id);


The oe.ord_customer_ix1 index is visible by default.



2. Alter the index created in Step 1 to make it invisible:

ALTER INDEX oe.ord_customer_ix1 INVISIBLE;


Alternatively, you can add the INVISIBLE clause in Step 1 to avoid this step.

3. Create a bitmap index on the customer_id and sales_rep_id columns in the oe.orders table:

CREATE BITMAP INDEX oe.ord_customer_ix2    ON oe.orders (customer_id, sales_rep_id);
The oe.ord_customer_ix2 index is visible by default.



If the oe.ord_customer_ix1 index created in Step 1 is visible, then the CREATE BITMAP INDEX statement in this step returns an error.
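As a quick sanity check (not part of the numbered steps above), you can confirm that both indexes now exist on the same columns and see which one is visible by querying the data dictionary:

SELECT index_name, index_type, visibility
FROM   all_indexes
WHERE  owner = 'OE'
AND    table_name = 'ORDERS'
AND    index_name IN ('ORD_CUSTOMER_IX1', 'ORD_CUSTOMER_IX2');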

Creating an Invisible Index

An invisible index is an index that is ignored by the optimizer unless you explicitly set the OPTIMIZER_USE_INVISIBLE_INDEXES initialization parameter to TRUE at the session or system level.
To create an invisible index: 
  • Use the CREATE INDEX statement with the INVISIBLE keyword.
    The following statement creates an invisible index named emp_ename for the ename column of the emp table:


CREATE INDEX emp_ename ON emp(ename)
      TABLESPACE users
      STORAGE (INITIAL 20K NEXT 20K)
      INVISIBLE;
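To show how the invisible index is used afterwards, a short sketch of the typical follow-up statements:

-- Let the optimizer consider invisible indexes for the current session only
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;

-- Or make the index visible once you are satisfied with it
ALTER INDEX emp_ename VISIBLE;

-- Check the current visibility in the data dictionary
SELECT index_name, visibility FROM user_indexes WHERE index_name = 'EMP_ENAME';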

Creating an Unusable Index

When you create an index in the UNUSABLE state, it is ignored by the optimizer and is not maintained by DML. An unusable index must be rebuilt, or dropped and re-created, before it can be used.
If the index is partitioned, then all index partitions are marked UNUSABLE.
The database does not create an index segment when creating an unusable index.
The following procedure illustrates how to create unusable indexes and query the database for details about the index.
To create an unusable index:

If necessary, create the table to be indexed.
For example, create a hash-partitioned table called hr.employees_part as follows:


sh@PROD> CONNECT hr
Enter password: ********
Connected.

hr@PROD> CREATE TABLE employees_part
  2    PARTITION BY HASH (employee_id) PARTITIONS 2
  3    AS SELECT * FROM employees;

Table created.

hr@PROD> SELECT COUNT(*) FROM employees_part;

  COUNT(*)
----------
       107

Create an index with the keyword UNUSABLE.

The following example creates a locally partitioned index on employees_part, naming the index partitions p1_i_emp_ename and p2_i_emp_ename, and making p1_i_emp_ename unusable:


hr@PROD> CREATE INDEX i_emp_ename ON employees_part (employee_id)
  2    LOCAL (PARTITION p1_i_emp_ename UNUSABLE, PARTITION p2_i_emp_ename);

Index created.


(Optional) Verify that the index is unusable by querying the data dictionary.


The following example queries the status of index i_emp_ename and its two partitions, showing that only partition p1_i_emp_ename is unusable:


hr@PROD> SELECT INDEX_NAME AS "INDEX OR PARTITION NAME", STATUS
  2  FROM   USER_INDEXES
  3  WHERE  INDEX_NAME = 'I_EMP_ENAME'
  4  UNION ALL
  5  SELECT PARTITION_NAME AS "INDEX OR PARTITION NAME", STATUS
  6  FROM   USER_IND_PARTITIONS
  7  WHERE  PARTITION_NAME LIKE '%I_EMP_ENAME%';

INDEX OR PARTITION NAME        STATUS
------------------------------ --------
I_EMP_ENAME                    N/A
P1_I_EMP_ENAME                 UNUSABLE
P2_I_EMP_ENAME                 USABLE

(Optional) Query the data dictionary to determine whether storage exists for the partitions.

For example, the following query shows that only index partition p2_i_emp_ename occupies a segment. Because you created p1_i_emp_ename as unusable, the database did not allocate a segment for it.

hr@PROD> COL PARTITION_NAME FORMAT a14
hr@PROD> COL SEG_CREATED FORMAT a11
hr@PROD> SELECT p.PARTITION_NAME, p.STATUS AS "PART_STATUS",
  2         p.SEGMENT_CREATED AS "SEG_CREATED"
  3  FROM   USER_IND_PARTITIONS p
  4  WHERE  p.INDEX_NAME = 'I_EMP_ENAME';

PARTITION_NAME PART_STA SEG_CREATED 
-------------- -------- ----------- 
P2_I_EMP_ENAME USABLE   YES       
P1_I_EMP_ENAME UNUSABLE NO
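As noted above, an unusable index or index partition must be rebuilt (or dropped and re-created) before it can be used. A minimal sketch of that final step for this example:

-- Rebuild only the partition that was created UNUSABLE
ALTER INDEX i_emp_ename REBUILD PARTITION p1_i_emp_ename;

-- Re-running the status queries above should now show P1_I_EMP_ENAME as USABLE,
-- with a segment allocated for it.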

Advanced Index Compression

Advanced Index Compression works well on all supported indexes, including indexes that are not good candidates for the existing prefix compression feature, such as indexes with no, or few, duplicate values in the leading columns of the index.

Creating an index using advanced index compression reduces the size of all supported unique and non-unique indexes. Advanced Index Compression improves the compression ratios significantly while still providing efficient access to the index.
Before enabling advanced index compression, the database must be at a compatibility level of 12.1.0 or higher. You enable advanced index compression using the COMPRESS ADVANCED LOW clause. For example, the following statement enables advanced index compression during the creation of the hr.emp_mn_ix index:

CREATE INDEX hr.emp_mn_ix ON hr.employees(manager_id, department_id)   COMPRESS ADVANCED LOW;

You can also specify the COMPRESS ADVANCED LOW clause during an index rebuild. For example, during rebuild, you can enable advanced index compression for the hr.emp_manager_ix index as follows:
ALTER INDEX hr.emp_manager_ix REBUILD COMPRESS ADVANCED LOW;

The COMPRESSION column in the ALL_INDEXES, ALL_IND_PARTITIONS, and ALL_IND_SUBPARTITIONS views shows whether an index is compressed.
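For example, a query along these lines shows the compression setting for the two indexes used above:

SELECT index_name, compression
FROM   all_indexes
WHERE  owner = 'HR'
AND    index_name IN ('EMP_MN_IX', 'EMP_MANAGER_IX');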

About Incidents in OEM

An incident is a single occurrence of a problem. When a problem (critical error) occurs multiple times, an incident is created for each occurrence. Incidents are timestamped and tracked in the Automatic Diagnostic Repository (ADR). Each incident is identified by a numeric incident ID, which is unique within the ADR. When an incident occurs, the database:
  • Makes an entry in the alert log.
  • Sends an incident alert to Cloud Control.
  • Gathers first-failure diagnostic data about the incident in the form of dump files (incident dumps).
  • Tags the incident dumps with the incident ID.
  • Stores the incident dumps in an ADR subdirectory created for that incident.
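To see where the ADR (and therefore the incident dumps) lives for your instance, you can query the standard V$DIAG_INFO view; a small example (the exact rows returned vary by release):

SELECT name, value
FROM   v$diag_info
WHERE  name IN ('ADR Base', 'ADR Home', 'Diag Incident', 'Active Incident Count');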

Database Smart Flash Cache

If your database is running on Solaris or Oracle Linux, you can optionally add another memory component: Database Smart Flash Cache. Database Smart Flash Cache is an extension of the SGA-resident buffer cache, providing a level 2 cache for database blocks. It can improve response time and overall throughput for both read-intensive online transaction processing (OLTP) workloads and ad hoc queries and bulk data modifications in a data warehouse environment. Database Smart Flash Cache resides on one or more flash disk devices, which are solid state storage devices that use flash memory. Database Smart Flash Cache is typically more economical than additional main memory, and is an order of magnitude faster than disk drives.




Consider adding Database Smart Flash Cache when all of the following are true:
  • Your database is running on the Solaris or Oracle Linux operating systems. Database Smart Flash Cache is supported on these operating systems only.
  • The Buffer Pool Advisory section of your Automatic Workload Repository (AWR) report or STATSPACK report indicates that doubling the size of the buffer cache would be beneficial.
  • db file sequential read is a top wait event.
  • You have spare CPU.
Sizing Database Smart Flash Cache

As a general rule, size Database Smart Flash Cache to be between 2 times and 10 times the size of the buffer cache. Any multiplier less than two would not provide any benefit. If you are using automatic shared memory management, make Database Smart Flash Cache between 2 times and 10 times the size of SGA_TARGET. Using 80% of the size of SGA_TARGET instead of the full size would also suffice for this calculation.
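As a rough sketch of how the flash cache is configured (the device path and sizes below are only examples; with an 8 GB buffer cache, the 2x-10x guideline gives a range of 16 GB to 80 GB):

-- Point Database Smart Flash Cache at one or more flash devices and size it
ALTER SYSTEM SET db_flash_cache_file = '/dev/rdsk/c2t0d0s0' SCOPE=SPFILE;
ALTER SYSTEM SET db_flash_cache_size = 64G SCOPE=SPFILE;

-- Restart the instance for the new flash cache settings to take effect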



Friday, July 15, 2016

Check the load average on the Solaris platform

                                 Commands to check the load average on the Solaris platform

The following commands are available by default in the Solaris operating system (Solaris 10u11).


  • prstat -a
  • sar 1 5 
  • iostat -xnctz 5
  • prstat -s cpu -n 5
  • top (a separate package on Solaris 10; included in the OS by default from Solaris 11 onwards)


prstat -a ---> To monitor all the activities that are running on the server.


The prstat utility iteratively examines all active processes on the system and reports statistics based on the selected output mode and sort order. 
prstat provides options to examine only processes matching specified PIDs, UIDs, zone IDs, CPU IDs, and processor set IDs.

OUTPUT:

oracle@server1 # prstat -a
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 15480 oracle     10G   10G cpu37    0    0  98:21:42 0.8% oracle/1
  4569 oracle     10G   10G cpu65    0    0  91:38:37 0.7% oracle/1
 23579 oracle     10G   10G cpu74    0    0   0:00:19 0.4% oracle/2
 28441 oracle     10G   10G sleep   10    0   0:04:32 0.4% oracle/2
 27062 oracle     10G   10G sleep   59    0   2:12:17 0.1% oracle/2
  1733 root      156M   72M sleep  110    -  20:55:28 0.1% ologgerd/11
  2721 root      101M   62M sleep   59    0  12:36:04 0.0% orarootagent.bi/45
  1691 root       86M   51M cpu52  100    -  10:46:55 0.0% osysmond.bin/13
  1656 root      127M   94M sleep   59    0  11:37:19 0.0% orarootagent.bi/25
  8247 oracle     10G   10G sleep   52    0   5:44:01 0.0% oracle/1
  1657 grid       92M   56M sleep   59    0   7:21:54 0.0% gipcd.bin/10
 25471 oracle     10G   10G sleep   59    0   2:49:20 0.0% oracle/1
  2295 grid     1323M 1053M sleep   34    0   5:28:15 0.0% oracle/1
     5 root        0K    0K sleep    0  -20   7:15:43 0.0% vmtasks/128
  3095 oracle    107M   83M sleep   59    0   4:46:21 0.0% oraagent.bin/18
  2443 root      175M  131M sleep   59    0   5:31:32 0.0% crsd.bin/50
  2652 grid      126M  100M cpu82   39    0   2:46:33 0.0% oraagent.bin/26
    74 oracle   4456K 4168K cpu84   59    0   0:00:00 0.0% prstat/1
  7266 oracle     10G   10G cpu101  39    0   0:04:38 0.0% oracle/1
  1873 root       77M   45M sleep   59    0   3:35:23 0.0% octssd.bin/13
  1713 grid      103M   71M sleep  110    -   3:20:26 0.0% ocssd.bin/20
  1372 root      143M  102M sleep   59    0   3:34:27 0.0% ohasd.bin/34
  1604 grid      115M   90M sleep   59    0   2:53:32 0.0% oraagent.bin/29
   102 root     9016K 7208K cpu120  19    0   0:00:00 0.0% perl/1
 22267 oracle     10G   10G sleep   59    0   0:01:15 0.0% oracle/2
 20401 oracle     10G   10G sleep   59    0   0:00:26 0.0% oracle/1
  8256 oracle     10G   10G sleep   60  -20   1:14:45 0.0% oracle/1
  8253 oracle     10G   10G sleep   60  -20   1:11:11 0.0% oracle/1
  8250 oracle     10G   10G sleep   60  -20   1:15:31 0.0% oracle/1
  8277 oracle     10G   10G sleep   59    0   1:00:01 0.0% oracle/1
 NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
   374 oracle     15G   15G    23% 224:27:16 2.4%
    62 root      751M  867M   1.3%  83:13:57 0.2%
    48 grid     1862M 1997M   3.1%  31:55:16 0.1%
     6 daemon   7864K   10M   0.0%   0:42:26 0.0%
     1 smmsp    2952K   10M   0.0%   0:00:11 0.0%
     1 noaccess  140M  108M   0.2%   0:21:44 0.0%



Total: 492 processes, 1435 lwps, load averages: 4.93, 4.98, 5.22




sar 1 5  ---> To monitor system activities.

Use the sar command to perform the following tasks:

  • Organize and view data about system activity.

  • Access system activity data on a special request basis.

  • Generate automatic reports to measure and monitor system performance, as well as special request reports to pinpoint specific performance problems. 
For information about how to set up the sar command to run on your system, as well as a description of these tools, see Collecting System Activity Data Automatically (sar).

OUTPUT:

oracle@server1 # sar 1 5


SunOS server1 5.10 Generic_147147-26 sun4v    07/15/2016

18:56:53    %usr    %sys    %wio   %idle
18:56:54       3       1       0      96
18:56:55       4       1       0      95
18:56:56       4       0       0      96
18:56:57       4       1       0      95
18:56:58       4       0       0      96

Average        4       1       0      96



iostat -xnctz 5  ---> To check I/O activity per device (for example, a particular mount point's usage).

The iostat utility iteratively reports terminal, disk, and tape I/O activity, as well as CPU utilization. 
The first line of output is for all time since boot; each subsequent line is for the prior interval only.

OUTPUT:

oracle@server1 # iostat -xnctz 5
   tty         cpu
 tin tout  us sy wt id
   0    4   7  1  0 93
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.4    3.0    5.1    8.2  0.0  0.0    1.3   13.3   0   1 md/d10
    0.2    3.0    2.5    7.5  0.0  0.0    0.0   12.4   0   1 md/d11
    0.2    3.0    2.5    7.5  0.0  0.0    0.0   11.7   0   1 md/d12
    0.0    0.0    0.0    0.0  0.0  0.0    0.0   13.2   0   0 md/d20
    0.0    0.0    0.0    0.0  0.0  0.0    0.0   15.0   0   0 md/d21
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    9.5   0   0 md/d22
    0.6    3.0   16.5   43.6  0.0  0.0    2.5   12.1   1   2 md/d30
    0.3    3.0    8.2   42.8  0.0  0.0    0.0   10.7   0   1 md/d31
    0.3    3.0    8.3   42.8  0.0  0.0    0.0   10.5   0   1 md/d32
    0.5    7.3   10.8   50.9  0.0  0.1    0.0   11.3   0   3 c1t0d0
    0.5    7.3   10.8   50.9  0.0  0.1    0.0   10.9   0   3 c1t1d0
  271.7  104.6 30567.0 2274.1  0.9  4.0    2.4   10.5   1  53 c6t60080E500017EF140000624A565942BEd0
  337.0    6.8 2940.5 2705.3  0.0  0.2    0.0    0.6   0   9 c6t60080E500017F1B2000009FC5646E2DBd0
    0.0   41.7    0.0 1335.1  0.0  0.0    0.0    0.3   0   1 rmt/0
    0.0   42.3    0.0 1354.9  0.0  0.0    0.0    0.3   0   1 rmt/1
   tty         cpu
 tin tout  us sy wt id
   0  259   3  1  0 96
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.4    0.0    3.2  0.0  0.0    9.4    6.9   0   0 md/d30
    0.0    0.4    0.0    3.2  0.0  0.0    0.0    6.8   0   0 md/d31
    0.0    0.4    0.0    3.2  0.0  0.0    0.0    6.4   0   0 md/d32
    0.0    1.0    0.0    3.5  0.0  0.0    0.0    6.4   0   1 c1t0d0
    0.0    1.0    0.0    3.5  0.0  0.0    0.0    7.2   0   1 c1t1d0
  162.9   28.4 39476.4 5889.6  0.0  0.2    0.0    1.1   0  17 c6t60080E500017EF140000624A565942BEd0
   tty         cpu
 tin tout  us sy wt id
   0  129   3  1  0 97
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.2    0.0    4.5  0.0  0.0    9.5    7.6   0   0 md/d10
    0.0    0.2    0.0    4.5  0.0  0.0    0.0    4.8   0   0 md/d11
    0.0    0.2    0.0    4.5  0.0  0.0    0.0    7.5   0   0 md/d12
    0.0    1.0    0.0   10.1  0.0  0.0    3.9    8.2   0   1 md/d30
    0.0    1.0    0.0   10.1  0.0  0.0    0.0    8.1   0   1 md/d31
    0.0    1.0    0.0   10.1  0.0  0.0    0.0    6.6   0   1 md/d32
    0.0    2.0    0.0   15.0  0.0  0.0    0.0    7.3   0   1 c1t0d0
    0.0    2.0    0.0   15.0  0.0  0.0    0.0    6.2   0   1 c1t1d0
  282.9   14.0 68544.6 1275.8  0.0  0.3    0.0    0.9   0  24 c6t60080E500017EF140000624A565942BEd0
   tty         cpu
 tin tout  us sy wt id
   0  170   3  1  0 96
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.8   23.2    6.4   28.3  0.0  0.3    1.3   12.2   3   4 md/d10
    0.4   23.2    3.2   28.3  0.0  0.3    0.0   11.8   0   4 md/d11
    0.4   23.2    3.2   28.3  0.0  0.3    0.0   11.8   0   4 md/d12
    0.2    0.2    1.6    1.6  0.0  0.0    3.8    7.3   0   0 md/d30
    0.0    0.2    0.0    1.6  0.0  0.0    0.0    5.7   0   0 md/d31
    0.2    0.2    1.6    1.6  0.0  0.0    0.0    7.2   0   0 md/d32
    0.4   26.0    3.2   31.2  0.0  0.3    0.0   11.9   0   6 c1t0d0
    0.6   26.0    4.8   31.2  0.0  0.3    0.0   11.7   0   6 c1t1d0
    5.0   23.0   39.4 4819.6  0.0  0.0    0.0    1.6   0   5 c6t60080E500017EF140000624A565942BEd0
   tty         cpu
 tin tout  us sy wt id
   0  170   3  1  0 96
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.2    0.0    0.6  0.0  0.0    0.0    9.1   0   0 md/d10
    0.0    0.2    0.0    0.6  0.0  0.0    0.0    9.0   0   0 md/d11
    0.0    0.2    0.0    0.6  0.0  0.0    0.0    5.5   0   0 md/d12
    0.0    1.2    0.0   10.2  0.0  0.0    3.3    6.6   0   1 md/d30
    0.0    1.2    0.0   10.2  0.0  0.0    0.0    6.2   0   1 md/d31
    0.0    1.2    0.0   10.2  0.0  0.0    0.0    5.8   0   1 md/d32
    0.0    2.0    0.0   11.1  0.0  0.0    0.0    6.0   0   1 c1t0d0
    0.0    2.0    0.0   11.1  0.0  0.0    0.0    5.9   0   1 c1t1d0
    5.2   30.6   45.8 6398.5  0.0  0.1    0.0    1.5   0   5 c6t60080E500017EF140000624A565942BEd0





prstat -s cpu -n 5  ---> To check the top CPU-consuming processes on the server.

OUTPUT:

oracle@server1 # prstat -s cpu -n 5
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 15480 oracle     10G   10G cpu37    0    0  98:26:24 0.8% oracle/1
 24149 oracle     10G   10G cpu66    0    0   0:01:26 0.8% oracle/2
  4569 oracle     10G   10G cpu110   0    0  91:43:03 0.7% oracle/1
 28441 oracle     10G   10G sleep   10    0   0:06:43 0.3% oracle/2
   802 oracle     10G   10G sleep    0    0   0:00:07 0.2% oracle/2
Total: 501 processes, 1449 lwps, load averages: 5.93, 5.21, 5.23

top ---> To check memory, swap, load, and paging space.

The top program provides a dynamic real-time view of a running system. It can display system summary information as well as a list of processes or threads currently being managed by the kernel.


OUTPUT:
oracle@mserver1 # top
load averages:  5.83,  5.26,  5.22;               up 9+00:03:32                                                                                                19:09:39
490 processes: 480 sleeping, 10 on cpu
CPU states: 95.0% idle,  4.5% user,  0.5% kernel,  0.0% iowait,  0.0% swap
Kernel: 10137 ctxsw, 3627 trap, 8210 intr, 29003 syscall, 61 flt
Memory: 64G phys mem, 38G free mem, 50G total swap, 50G free swap

   PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
  2135 oracle     1   0    0    0K    0K cpu/101   0:32  0.78% oracle
 25485 oracle     2  20    0    0K    0K cpu/37   0:44  0.77% oracle
 15480 oracle     1  20    0    0K    0K cpu/19  98.6H  0.76% oracle
  4569 oracle     1  20    0    0K    0K cpu/110  91.9H  0.72% oracle
 22136 oracle     1   0    0    0K    0K cpu/46  41:29  0.70% oracle
  2193 oracle     2  51    0    0K    0K sleep    0:03  0.19% oracle
  1723 oracle     2  33    0    0K    0K sleep    0:59  0.16% oracle
  1733 root      11 110  -20    0K    0K cpu/62  20.9H  0.07% ologgerd
 27062 oracle     2  59    0    0K    0K sleep  133:24  0.06% oracle
 25471 oracle     1  59    0    0K    0K sleep  169:40  0.06% oracle
  1691 root      13 100  -20    0K    0K sleep  647:38  0.05% osysmond.bin
  1656 root      25  59    0    0K    0K sleep  698:01  0.04% orarootagent.bi
  2721 root      13  59    0    0K    0K cpu/74 756:50  0.04% orarootagent.bi
  8247 oracle     1  59    0    0K    0K sleep  344:35  0.03% oracle
  8265 oracle     2  59    0    0K    0K sleep   32:50  0.02% oracle
  1657 grid      10  59    0    0K    0K sleep  442:21  0.02% gipcd.bin
  2295 grid       1  49    0    0K    0K cpu/56 328:36  0.02% oracle
  8263 oracle     2  59    0    0K    0K sleep   33:05  0.02% oracle
  8262 oracle     2  59    0    0K    0K sleep   33:06  0.02% oracle
  2443 root      47  59    0    0K    0K sleep  331:50  0.02% crsd.bin
  3095 oracle    17  59    0    0K    0K sleep  286:38  0.02% oraagent.bin
  8264 oracle     2  59    0    0K    0K sleep   32:47  0.02% oracle
  8256 oracle     1  60  -20    0K    0K sleep   74:50  0.01% oracle
  1372 root      33  59    0    0K    0K sleep  214:39  0.01% ohasd.bin
  1873 root      13  59    0    0K    0K sleep  215:35  0.01% octssd.bin
  8250 oracle     1  60  -20    0K    0K sleep   75:36  0.01% oracle
  8277 oracle     1  49    0    0K    0K sleep   60:08  0.01% oracle
  1713 grid      20 110  -20    0K    0K sleep  200:38  0.01% ocssd.bin
  8253 oracle     1  60  -20    0K    0K sleep   71:16  0.01% oracle
  1604 grid      29  59    0    0K    0K sleep  173:43  0.01% oraagent.bin


OATM Migration Appears to Have Hung. How Can I Monitor Its Progress?




The Oracle Applications Tablespace Model (OATM) was another long-awaited feature introduced in 11.5.10. Prior to 11.5.10, each product had its own set of tablespaces by default; OATM consolidates them into a small number of shared tablespaces, greatly reducing the number of tablespaces to manage.
 


When you run the migration, the screen output tells you which log files you need to monitor, for example:



1) Migration processes for tables with LONG and LONG RAW columns started.

Please monitor the log file $APPL_TOP/admin/log/fndmlong20050120230037.log for errors


2) Sequential migration process started. Please monitor the log file

$APPL_TOP/admin/log/fndemseq20050120230038.log for errors


3) Parallel migration processes started. Please monitor the log file

$APPL_TOP/admin/log/fndemcmd20050120230048.log for errors


Ref: (Doc ID 1073783.1)

Alert Log: Shutdown Waiting for Active Calls to Complete.



Error:

You are attempting to shut down the database and the database hangs. The alert log contains the following message:

   SHUTDOWN: Waiting for active calls to complete

There are no other error messages in the alert log.


Solution:

Locate and kill any client connections to the database at the Unix level, as follows:
1. Locate any client connections to the database using ps, and grep for any
   processes belonging to this instance (ORACLE_SID).

       Example: ps -ef | grep V733
2. Look for processes that include a 'LOCAL=NO' designation.
       Example: osupport 6235 1 0 Nov 24  0:01 oracleV733  (LOCAL=NO)
3. Kill the Unix process(es) with the 'LOCAL=NO' designation.
       Example: kill -9 6235
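If you want to confirm the operating system PIDs from inside the database before killing anything, a standard V$SESSION/V$PROCESS join such as the following lists the server processes behind the client connections:

SELECT s.sid, s.serial#, p.spid AS os_pid, s.username, s.program
FROM   v$session s, v$process p
WHERE  s.paddr = p.addr
AND    s.type  = 'USER';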

Explanation:

The database is waiting for PMON to clean up processes, but PMON is unable to clean them up. The client connections to the server are causing the SHUTDOWN IMMEDIATE or SHUTDOWN NORMAL to hang. Killing them allows PMON to clean up and release the associated Oracle processes and resources.

What resources are we talking about?

1) Any uncommitted transactions must be rolled back.
2) Any temporary space (sort segments / LOBs / session temporary tables) must be freed.
3) The session itself and any memory consumed by the session must be released.
4) Internal locks / enqueues must be cleaned up.

Often Oracle (SMON or PMON depending on whether Shared Server is used)
will wait for the OS to terminate the process(es) associated with the session.