
Saturday, April 30, 2011

How to do a successful Oracle 11g upgrade?

I recommend that users upgrading to Oracle 11g check for duplicate SYS/SYSTEM
objects before the upgrade. This duplicate check is not part of the 11g
upgrade documentation or the manual installation documentation.
 
I recommend following all of the steps provided in your installation
documentation, and using the following procedure to find any duplicates
before you proceed with your upgrade.
 
Oracle should include this step in the upgrade documentation to help make
upgrades successful.
 
Problem Description: 
==================== 
 
If the same data dictionary object has been created under both the SYS and the 
SYSTEM schema, errors will often occur when trying to use the database 
features associated with these objects. 
 
Problem Explanation: 
==================== 
 
During the installation of Oracle many scripts are run which create the 
underlying data dictionary objects. Most of these scripts are run at database 
creation time, but others are only run if specific database features (such as 
replication or shared pool management) are needed. These scripts are usually 
run manually after the database has been created. 
 
Solution Description: 
====================== 
 
In order to clean up the duplicate objects, you first need to run a SQL script 
to find the names of the duplicate objects. You can then manually drop the 
objects or use a 'SQL generating SQL' script to generate a list of DROP 
commands. 
 
Below is a SQL*Plus script that will list all objects that have been created 
in both the SYS and SYSTEM schema: 
 
column object_name format a30 
select object_name, object_type 
from dba_objects 
where object_name||object_type in 
   (select object_name||object_type  
    from dba_objects 
    where owner = 'SYS') 
and owner = 'SYSTEM'; 
 
The output from this script will either be 'zero rows selected' or will look 
something like the following: 
 
OBJECT_NAME                    OBJECT_TYPE 
------------------------------ ------------- 
ALL_DAYS                       VIEW 
CHAINED_ROWS                   TABLE 
COLLECTION                     TABLE 
COLLECTION_ID                  SEQUENCE 
DBA_LOCKS                      SYNONYM 
DBMS_DDL                       PACKAGE 
DBMS_SESSION                   PACKAGE 
DBMS_SPACE                     PACKAGE 
DBMS_SYSTEM                    PACKAGE 
DBMS_TRANSACTION               PACKAGE 
DBMS_UTILITY                   PACKAGE 
 
If the select statement returns any rows, this is an indication that at 
least one script has been run as both SYS and SYSTEM. 
 
Since most data dictionary objects should be owned by SYS (see exceptions 
below) you will want to drop the objects that are owned by SYSTEM in order to 
clear up this situation. 
 
EXCEPTION TO THE RULE 
===================== 
THE REPLICATION SCRIPTS (XXX) CORRECTLY CREATE OBJECTS WITH THE SAME NAME IN 
THE SYS AND SYSTEM ACCOUNTS. LISTED BELOW ARE THE OBJECTS USED BY REPLICATION 
THAT SHOULD BE CREATED IN BOTH ACCOUNTS. DO NOT DROP THESE OBJECTS FROM THE 
SYSTEM ACCOUNT IF YOU ARE USING REPLICATION. DOING SO WILL CAUSE REPLICATION 
TO FAIL! 
 
The following objects are duplicates that will show up (and should not be removed)
when running this script in 8.1.x and higher.
 
Without replication installed:
 
INDEX           AQ$_SCHEDULES_PRIMARY
TABLE           AQ$_SCHEDULES
 
If replication is installed by running catrep.sql:
 
INDEX           AQ$_SCHEDULES_PRIMARY
PACKAGE         DBMS_REPCAT_AUTH
PACKAGE BODY    DBMS_REPCAT_AUTH
TABLE           AQ$_SCHEDULES
 
When a database is upgraded to 11g using DBUA, the following duplicate 
objects are also created:
 
 
OBJECT_NAME                    OBJECT_TYPE 
------------------------------ ------------- 
HELP                           TABLE 
HELP_TOPIC_SEQ                 INDEX 
 
The objects created by sqlplus/admin/help/hlpbld.sql must be owned by SYSTEM 
because when SQL*Plus retrieves the help information, it refers to the SYSTEM 
schema only. DBCA runs this script as the SYSTEM user when it creates the 
database, but DBUA runs this script as the SYS user when upgrading the 
database (reported as unpublished BUG 10022360). You can drop the ones in the 
SYS schema.
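
For example, the SYS-owned copy created by DBUA could be dropped as follows 
(dropping the table also drops its HELP_TOPIC_SEQ index): 

DROP TABLE sys.help; 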
 
Now that you have a list of duplicate objects, you can simply issue the 
appropriate DROP command to get rid of each object that is owned by the SYSTEM 
user. 
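
For example, using object names from the sample output above (a sketch only; 
verify that each object really is an unwanted duplicate before dropping it): 

DROP TABLE system.chained_rows; 
DROP PACKAGE system.dbms_ddl; 
DROP SYNONYM system.dba_locks; 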
 
If the list of objects is large then you may want to use the following 
SQL*Plus script to automatically generate an SQL script that contains 
the appropriate DROP commands: 
 
set pause off 
set heading off 
set pagesize 0 
set feedback off 
set verify off 
spool dropsys.sql 
select 'DROP ' || object_type || ' SYSTEM.' || object_name || ';' 
from dba_objects 
where object_name||object_type in 
   (select object_name||object_type  
    from dba_objects 
    where owner = 'SYS') 
and owner = 'SYSTEM'; 
spool off 
exit 
 
You will now have a file in the current directory named dropsys.sql that 
contains all of the DROP commands. You will need to run this script as a 
normal SQL script as follows: 
 
$ sqlplus 
SQL*Plus: Release 3.3.2.0.0 - Production on Thu May  1 14:54:20 1997 
Copyright (c) Oracle Corporation 1979, 1994.  All rights reserved. 
Enter user-name: system 
Enter password: manager 
SQL> @dropsys 
 
Note: You may receive one or more of the following errors: 
 
      ORA-2266 (unique/primary keys in table 
      referenced by enabled foreign keys):  
 
      If you encounter this error, some of the tables 
      you are dropping have constraints that prevent the table from being 
      dropped. To fix this problem you will have to manually drop the 
      objects in a different order than the script does. 
 
      ORA-2429 (cannot drop index used for enforcement of 
      unique/primary key): 
 
      This is similar to the ORA-2266 error except that it points to 
      an index. You will have to manually disable the constraint 
      associated with the index and then drop the index. 
 
      ORA-1418 (specified index does not exist): 
 
      This occurs because the table that the index was created on 
      has already been dropped which also drops the index. When the 
      script tries to drop the index it is no longer there and thus 
      the ORA-1418 error. You can safely ignore this error.

Monday, February 21, 2011

Oracle Enterprise Manager 11g for Oracle Database 11g Release 2

What's new in OEM 11g R2


  • Compression Advisor: A Compression Advisor has been added in Oracle Database 11g Release 2 to facilitate choosing the correct compression level for your data. The Compression Advisor is available through Oracle Enterprise Manager 11g Grid Control console. As part of the existing Advisor framework in Oracle Database 11g, the compression advisor analyzes objects in the database, discovers the possible compression ratios that could be achieved, and recommends optimal compression settings.
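
The advisor can also be invoked directly through the DBMS_COMPRESSION package. Below is a minimal sketch, assuming the GET_COMPRESSION_RATIO signature from the 11gR2 documentation; the scratch tablespace, schema, and table names are examples:

    SET SERVEROUTPUT ON
    DECLARE
      l_blkcnt_cmp    PLS_INTEGER;   -- estimated blocks if compressed
      l_blkcnt_uncmp  PLS_INTEGER;   -- estimated blocks uncompressed
      l_row_cmp       PLS_INTEGER;
      l_row_uncmp     PLS_INTEGER;
      l_cmp_ratio     NUMBER;        -- estimated compression ratio
      l_comptype_str  VARCHAR2(100); -- description of the compression type
    BEGIN
      DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
        'USERS', 'SCOTT', 'EMP', NULL,    -- scratch tablespace, owner, table, partition
        DBMS_COMPRESSION.COMP_FOR_OLTP,   -- compression level to evaluate
        l_blkcnt_cmp, l_blkcnt_uncmp, l_row_cmp, l_row_uncmp,
        l_cmp_ratio, l_comptype_str);
      DBMS_OUTPUT.PUT_LINE('Estimated ratio: ' || l_cmp_ratio ||
                           ' (' || l_comptype_str || ')');
    END;
    /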


  • Instance Caging: Oracle Enterprise Manager 11g introduces a more flexible alternative to server partitioning for resource management. Instance caging lets you limit the CPU count per instance, providing resource management that behaves consistently across operating systems, unlike the resource managers built into many operating systems. It also has lower administration overhead than traditional virtualization techniques.
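
A minimal sketch of enabling instance caging from SQL*Plus; the CPU limit and plan name are examples, and a resource manager plan must be active for the cage to be enforced:

    -- cap this instance at 4 CPUs
    alter system set cpu_count = 4 scope=both;

    -- activate a resource manager plan so the CPU limit is enforced
    alter system set resource_manager_plan = 'DEFAULT_PLAN' scope=both;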


  • Global RAC Diagnostics: Oracle Enterprise Manager 11g extends Automatic Database Diagnostic Monitor (ADDM) for Oracle Database 11gR2 by greatly enhancing cluster-wide performance analysis for Real Application Clusters (RAC) databases. For RAC environments, ADDM analyzes the cluster and reports on issues that affect the entire database as well as its individual instances. DBAs can now use ADDM to perform database-wide analysis of global resources, such as high-load SQL, global cache interconnect traffic, network latency issues, skew in instance response times, and I/O capacity.
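
A database-wide ADDM run can also be driven manually through the DBMS_ADDM package. A minimal sketch, assuming AWR snapshots 100 and 101 exist (the task name and snapshot IDs are examples):

    var tname varchar2(60)
    exec :tname := 'global_addm_example'
    exec dbms_addm.analyze_db(:tname, 100, 101)   -- begin/end AWR snapshot IDs

    set long 1000000
    select dbms_addm.get_report(:tname) from dual;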


  • Tuning Standby Databases: Active Session History (ASH) is now available for performance diagnostics on standby databases. Using Grid Control, DBAs can quickly drill down into problem areas that are affecting performance and resolve them with minimal impact.
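
On an Active Data Guard standby, the ASH data can also be queried directly. A minimal sketch listing the top wait events sampled over the last 30 minutes:

    select event, count(*) samples
    from   v$active_session_history
    where  sample_time > sysdate - 30/1440
    and    event is not null
    group  by event
    order  by samples desc;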



  • High Availability Console: Accessed through Grid Control, the new High Availability Console gives you a configuration-wide summary view, with charts, of RAC and HA setups rather than individual targets. It provides HA configuration best practices and standards, along with recommendations from the new Maximum Availability Architecture Advisor, to help you implement an HA system. The HA Console also uses EM's powerful workflows and provisioning automation to minimize downtime for operations such as single-instance to MAA conversions.

    Thursday, January 6, 2011

    What's new in PostgreSQL 9.0.2

    PostgreSQL 9.x has two good features:

    Hot Standby and Streaming Replication are the two new features that mark version 9.0 as a landmark in PostgreSQL's development. 

    Other benefits:
    • Avoid failure when trying to report an impending transaction wraparound condition from outside a transaction. This oversight prevented recovery after transaction wraparound got too close, because database startup processing would fail, backpatched to all versions.
    • Fix recovery from base backup when the starting checkpoint WAL record is not in the same WAL segment as its redo point, backpatched to all supported versions.
    • Force default wal_sync_method to be fdatasync for Linux, backpatched to all supported versions.
    • Fix memory leak with ANALYZE and complex index expressions, backpatched to all supported versions.
    • Fix memory leak in contrib/xml2 functions, backpatched to all supported versions.
    • Fix for segfault in pgcrypto when attempting to access an undefined public key, backpatched to all supported versions.
    • Prevent inlining an SQL function with multiple OUT parameters, backpatched to all supported versions.
    • Fix bugs in WAL replay logic for GIN indexes, backpatched to 8.2.
    • Fix segfault when accept() or a call made right after accepting a connection fails when server is compiled with GSSAPI support, backpatched to 8.3.
    • Fix error handling in tempfile deletion with log_temp_files active, backpatched to 8.3.
    • Prevent slowing of autovacuum workers due to a leak in cost parameters over the life of a worker, backpatched to 8.3.
    • Check for too many children before returning "okay to proceed", backpatched to 8.4.
    • Improve performance of internal tuplestore structures for window functions, backpatched to 8.4.
    The 9.0.2 update also contains several fixes for issues with features introduced or changed in version 9.0:
    • Fix low-risk potential denial of service against RADIUS login.
    • Make OFF keyword unreserved.
    • Fix bug in tracking of latest removed WAL segment in streaming replication.
    • Fix for buffer overrun in pg_upgrade.
    • Fix memory leak in autovacuum launcher.
    • Fix for race condition in transaction ID assignment and lock acquisition in hot standby.
    • Improve performance for parallel restore with many blobs.

    Monday, December 20, 2010

    Cool SCAN feature 11gR2

    SCAN Concepts

    • Single Client Access Name (SCAN) is the virtual hostname to provide for all clients connecting to the cluster (as opposed to the per-node VIP hostnames used in 10g and 11gR1).  
    • SCAN is a domain name registered to at least one and up to three IP addresses, either in the domain name service (DNS) or the Grid Naming Service (GNS).
    • By default, the name used as the SCAN is also the name of the cluster, and it must be globally unique throughout your enterprise. The default value for the SCAN is based on the local node name. The SCAN name must be at least one character long and no more than 15 characters in length, must be alphanumeric, cannot begin with a numeral, and may contain hyphens (-). If you require a SCAN that is longer than 15 characters, select an Advanced installation.
    • For installation to succeed, the SCAN must resolve to at least one address.
    • SCAN VIP addresses must be on the same subnet as virtual IP addresses and public IP addresses.
    • Oracle strongly recommends that you do not configure SCAN VIP addresses in the hosts file. If you do use the hosts file to resolve the SCAN name, you can have only one SCAN IP address.
    • If the hosts file is used to resolve the SCAN hostname, you will receive a Cluster Verification Utility failure at the end of installation (see Note 887471.1 for more details).
    • For high availability and scalability, Oracle recommends that you configure the SCAN to use DNS Round Robin resolution to three addresses.
    • Because the SCAN is associated with the cluster as a whole, rather than to a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database.
    • Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN. Clients using the SCAN can also access the cluster using EZCONNECT.
    • Grid Infrastructure starts the local listener LISTENER on all nodes to listen on the local VIPs, and SCAN listener LISTENER_SCAN1 (up to three cluster-wide) to listen on the SCAN VIP(s); an 11gR2 database will by default set local_listener to the local LISTENER and remote_listener to the SCAN listener.
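
    For illustration, a client could connect through the SCAN either with EZCONNECT
    or with a conventional tnsnames.ora entry; the host, port, and service names
    here are examples:

        sqlplus scott@//myclu-scan.example.com:1521/orcl

        ORCL =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = myclu-scan.example.com)(PORT = 1521))
            (CONNECT_DATA = (SERVICE_NAME = orcl)))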

    Oracle 11gR2 Basics

    The Basics of 11gR2:
    • The clusterware is now known as Grid Infrastructure.
    • CRS and ASM binaries reside under a single Oracle Home known as GRID Home.
    • GRID Home and the Oracle Home must be installed in different locations.
    • Oracle Grid Infrastructure cannot be placed under Oracle Base since permissions will be changed to root.
    • OCR and Voting File can no longer be placed on RAW or block devices. They need to be placed either in ASM or on Shared NFS Storage.
    • The voting file no longer needs to be backed up using the dd command; in fact, doing so is no longer supported. The contents of the voting file are backed up into the OCR.

    Oracle 11g Known bugs

    Known bugs:
    • Hugepages not being used: first check that /etc/security/limits.conf is set correctly for 'memlock'; you can also check 'ulimit -l' (see the sketch after this list). However, there is a bug in 11.2.0.1 where, if the database is started by srvctl rather than sqlplus, hugepages are not used. The workaround is to edit either $GRID_HOME/bin/ohasd or /etc/init.d/ohasd and add 'ulimit -l unlimited' explicitly. This bug is supposedly fixed in 11.2.0.2.
    • If a loopback database link is used, the SQL will hang forever waiting on 'DFS Lock Wait'. To date the issue persists, and there does not seem to be a fix from Oracle. Unofficially, I know that one of the underscore parameters fixes the issue, but it would essentially disable the majority of cluster features.
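
    A minimal sketch of the memlock settings referenced in the hugepages bullet
    above; the user name is an example, and the value should be at least as large
    as your SGA (in KB):

        # /etc/security/limits.conf
        oracle   soft   memlock   unlimited
        oracle   hard   memlock   unlimited

        # workaround for the 11.2.0.1 srvctl issue: add this line near the top
        # of $GRID_HOME/bin/ohasd (or /etc/init.d/ohasd)
        ulimit -l unlimited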

    Saturday, September 18, 2010

    Oracle update performance speed best practices


    The SQL standard for DML UPDATE statements can be complex and convoluted, but there are best practices that can help you write efficient UPDATE statements:

        * Run updates in batch mode
        * Use CTAS in lieu of large updates
        * Include the SET condition in the WHERE clause
        * Simplify the WHERE predicates
        * Have a small, separate data cache for high DML tables

    Running updates in batch mode

    If you can take your application offline during the monthly update, you can use several update performance features to speed up the job:

        * Drop indexes/constraints and rebuild after mass update:  Dropping indexes before a mass update and rebuilding them afterwards can improve update performance significantly. Oracle removes index entries without re-balancing the index tree (a "logical delete"), but this is still time-consuming, especially if you have many indexes on the target table.  Also note that you can rebuild the dropped indexes in NOLOGGING mode.

        * Parallelize the updates:  If you have an SMP server you can use Oracle parallel DML, as sketched below.  You can also manually parallelize the update by breaking it into multiple jobs and submitting them simultaneously using dbms_scheduler, cron or nohup.
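
    A minimal sketch of the parallel DML approach; the degree of parallelism is an
    example, and parallel DML must be enabled at the session level:

        alter session enable parallel dml;

        update /*+ parallel(mytab, 8) */ mytab
        set    status = 'new'
        where  status = 'old';

        commit;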

    Use CTAS in lieu of large updates

    When you are updating the majority of rows in a table, using Create Table As Select (CTAS) is often more efficient than a standard update.  For example, assume that the following update changes 75% of the table rows:

        update
           mytab
        set
           status = 'new'
        where
           status = 'old';

    In this case, a parallelized CTAS may perform far faster (Note: Make sure that you have an SMP server before using the parallel degree option):

        create table new_mytab NOLOGGING as
        select /*+ full(mytab) parallel(mytab,35) */
           decode(status, 'old', 'new', status) status,
           col2, col3, col4
        from mytab;

        -- rebuild indexes, triggers and constraints to new_mytab

        rename mytab to bkup_mytab;
        rename new_mytab to mytab;
        

    Include the SET condition in the WHERE clause

    This note shows a case where the developer forgot to include the SET condition in the UPDATE WHERE clause, causing high redo waits (log file parallel write waits, log file sync waits).  Simply including the existing state of the SET clause can result in a huge performance improvement for UPDATE statements:

    -- zillion row update
    UPDATE HISTORY SET FLAG=0 WHERE CLASS='X';

    -- hundred row update
    UPDATE HISTORY SET FLAG=0 WHERE CLASS='X' AND FLAG!=0;
    The SELECT FOR UPDATE is not a good locking strategy because there are many things that can go wrong.  Instead of SELECT FOR UPDATE, savvy Oracle developers adopt alternative mechanisms, such as a re-read upon update commit, where the transaction re-reads the rows and ensures that they have not changed since the original read, as sketched below.
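
    A minimal sketch of that re-read-on-update approach; the key and status
    columns here are hypothetical:

        -- re-check the originally read value in the UPDATE itself; if
        -- sql%rowcount = 0, another session changed the row: re-read and retry
        update mytab
        set    status = 'new'
        where  id     = :id
        and    status = :status_as_read;
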
    Have a small, separate data cache for high DML tables

    As more people adopt 64-bit servers with giant data buffers, we see a delay caused by the database writer process having to scan through giant data buffers seeking dirty blocks.  Many shops are replacing their platter-style disks with solid-state disks, and creating a very small data buffer, just for the updates.  The book Oracle Tuning: The Definitive Reference notes that if you are still using traditional disks, many shops segregate high-updates objects (tables & indexes) into a separate blocksize so that they can have a separate, small data buffer.

        "By segregating high activity tables into a separate, smaller data buffer, Oracle has far less RAM frames to scan for dirty block, improving the throughput and also reducing CPU consumption. This is especially important for high update tables with more than 100 row changes per second."

    MOSC Note 223299.1 also emphasizes the importance of multiple blocksizes, listing the multiple buffer regions as among the most important tuning parameters in Oracle9i.
    Simplify the WHERE clause predicates

    The most common issue with UPDATE statements is the need for a complex SELECT in the WHERE clause to identify the rows to be updated.  The best techniques for simplifying UPDATE WHERE clauses include:

        * Predicate pushing
        * Rewriting subqueries as outer joins (if possible)
        * Oracle SQL subquery unnesting
        * Partitioning Pruning
        * Avoid IN and NOT IN - It is a good idea to discourage the use of the NOT IN clause (which invokes a sub-query) and to prefer NOT EXISTS (which invokes a correlated sub-query), as sketched below.
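
    For example (the customers and orders tables here are hypothetical):

        -- NOT IN (invokes a sub-query, and behaves surprisingly with NULLs):
        select c.cust_id
        from   customers c
        where  c.cust_id not in (select o.cust_id from orders o);

        -- NOT EXISTS (invokes a correlated sub-query):
        select c.cust_id
        from   customers c
        where  not exists (select 1 from orders o where o.cust_id = c.cust_id);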