Search This Blog

Thursday, January 6, 2011

What's new in PostgreSQL 9.0.2

PostgreSQL 9.x introduces two standout features:

Hot Standby and Streaming Replication are the new features that mark version 9.0 as a landmark in PostgreSQL's development.

Other benefits:
  • Avoid failure when trying to report an impending transaction wraparound condition from outside a transaction. This oversight prevented recovery after transaction wraparound got too close, because database startup processing would fail, backpatched to all versions.
  • Fix recovery from base backup when the starting checkpoint WAL record is not in the same WAL segment as its redo point, backpatched to all supported versions.
  • Force default wal_sync_method to be fdatasync for Linux, backpatched to all supported versions.
  • Fix memory leak with ANALYZE and complex index expressions, backpatched to all supported versions.
  • Fix memory leak in contrib/xml2 functions, backpatched to all supported versions.
  • Fix for segfault in pgcrypto when attempting to access an undefined public key, backpatched to all supported versions.
  • Prevent inlining an SQL function with multiple OUT parameters, backpatched to all supported versions.
  • Fix bugs in WAL replay logic for GIN indexes, backpatched to 8.2.
  • Fix segfault when accept() or a call made right after accepting a connection fails when server is compiled with GSSAPI support, backpatched to 8.3.
  • Fix error handling in tempfile deletion with log_temp_files active, backpatched to 8.3.
  • Prevent slowing of autovacuum workers caused by leakage in the cost parameter over the life of a worker, backpatched to 8.3.
  • Check for too many children before returning "okay to proceed", backpatched to 8.4.
  • Improve performance of internal tuplestore structures for window functions, backpatched to 8.4.
The 9.0.2 update also contains several fixes for issues with features introduced or changed in version 9.0:
  • Fix low-risk potential denial of service against RADIUS login.
  • Make OFF keyword unreserved.
  • Fix bug in tracking of latest removed WAL segment in streaming replication.
  • Fix for buffer overrun in pg_upgrade.
  • Fix memory leak in autovacuum launcher.
  • Fix for race condition in transaction ID assignment and lock acquisition in hot standby.
  • Improve performance for parallel restore with many blobs.
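After applying the update, the new version and the changed Linux default can be confirmed from psql (a minimal sketch; the exact output depends on your installation):

```sql
-- Should report 9.0.2 once the minor update is applied
SELECT version();

-- On Linux, the default is fdatasync as of this release
SHOW wal_sync_method;
```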

Monday, December 20, 2010

Cool SCAN feature 11gR2

SCAN Concepts

  • Single Client Access Name (SCAN) is the virtual hostname provided for all clients connecting to the cluster (as opposed to the per-node VIP hostnames in 10g and 11gR1).
  • SCAN is a domain name registered to at least one and up to three IP addresses, either in the domain name service (DNS) or the Grid Naming Service (GNS).
  • By default, the name used as the SCAN is also the name of the cluster and must be globally unique throughout your enterprise. The default value for the SCAN is based on the local node name. SCAN name must be at least one character long and no more than 15 characters in length, must be alphanumeric - cannot begin with a numeral and may contain hyphens (-). If you require a SCAN that is longer than 15 characters, then select an Advanced installation.
  • For installation to succeed, the SCAN must resolve to at least one address.
  • SCAN VIP addresses must be on the same subnet as virtual IP addresses and public IP addresses.
  • Oracle strongly recommends that you do not configure SCAN VIP addresses in the hosts file. If you do use the hosts file to resolve the SCAN name, you can have only one SCAN IP address.
  • If the hosts file is used to resolve the SCAN hostname, the Cluster Verification Utility will report a failure at the end of installation (see Note 887471.1 for more details).
  • For high availability and scalability, Oracle recommends that you configure the SCAN to use DNS Round Robin resolution to three addresses.
  • Because the SCAN is associated with the cluster as a whole, rather than to a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database.
  • Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN. Clients using the SCAN can also access the cluster using EZCONNECT.
  • Grid Infrastructure will start local listener LISTENER on all nodes to listen on local VIP, and SCAN listener LISTENER_SCAN1 (up to three cluster wide) to listen on SCAN VIP(s); 11gR2 database by default will set local_listener to local LISTENER, and remote_listener to SCAN listener.
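As a sketch of what client configuration against the SCAN looks like, a client can use either an EZCONNECT string or a conventional tnsnames.ora entry (the hostname, port, and service name below are hypothetical):

```
# EZCONNECT from the command line:
#   sqlplus scott@//cluster01-scan.example.com:1521/orcl

# Equivalent tnsnames.ora entry:
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster01-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl))
  )
```

Because the SCAN resolves to the whole cluster, the same entry keeps working when nodes are added or removed.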

Oracle 11gR2 Basics

The Basics of 11gR2:
  • The clusterware is now known as Grid Infrastructure.
  • CRS and ASM binaries reside under a single Oracle Home known as GRID Home.
  • GRID Home and the Oracle Home must be installed in different locations.
  • Oracle Grid Infrastructure cannot be placed under Oracle Base, since its permissions are changed to root during installation.
  • OCR and Voting File can no longer be placed on RAW or block devices. They need to be placed either in ASM or on Shared NFS Storage.
  • The voting file no longer needs to be backed up with the dd command; doing so is no longer supported. The contents of the voting file are backed up into the OCR.
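As an illustration, the voting files and OCR backups can be inspected with the clusterware tools (run from the Grid home as an appropriate user; the output depends on your cluster):

```
$ crsctl query css votedisk    # lists voting files, now stored in ASM or on shared NFS
$ ocrconfig -showbackup        # lists the automatic OCR backups
```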

Oracle 11g Known bugs

Known bugs:
  • Hugepages not being used: first check that /etc/security/limits.conf sets 'memlock' correctly; you can also check 'ulimit -l'. However, there is a bug in 11.2.0.1 where hugepages are not used if the database is started by srvctl rather than SQL*Plus. The workaround is to edit either $GRID_HOME/bin/ohasd or /etc/init.d/ohasd and add 'ulimit -l unlimited' explicitly. This bug is reportedly fixed in 11.2.0.2.
  • If a loopback database link is used, the SQL hangs forever waiting on 'DFS Lock Wait'. To date the issue persists, and there does not appear to be a fix from Oracle. Unofficially, one of the underscore parameters works around the issue, but it would essentially disable the majority of cluster features.
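A quick way to check whether hugepages are actually in use is to compare HugePages_Total against HugePages_Free while the instance is up, and to verify the memlock limit (a minimal sketch for Linux):

```shell
# If HugePages_Free stays equal to HugePages_Total while the instance
# is running, the SGA is not using hugepages.
grep -i '^HugePages' /proc/meminfo

# The memlock limit must cover the SGA size (in KB); 'unlimited' is simplest.
ulimit -l
```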

Saturday, September 18, 2010

Oracle update performance speed best practices


The SQL for DML UPDATE statements can be complex and convoluted, but several best practices can help you write efficient UPDATE statements:

    * Run updates in batch mode
    * Use CTAS in lieu of large updates
    * Include the SET condition in the WHERE clause
    * Simplify the WHERE predicates
    * Have a small, separate data cache for high DML tables

Running updates in batch mode

If you can take your application offline during the monthly update, you can use several update performance features to speed up the job:

    * Drop indexes/constraints and rebuild after the mass update:  Dropping indexes before a mass update and rebuilding them afterwards can improve update performance significantly. Oracle removes index entries without re-balancing the index tree (a "logical delete"), but this is still time-consuming, especially if you have many indexes on the target table.  Also note that you can rebuild the dropped indexes in NOLOGGING mode.

    * Parallelize the updates:  If you have an SMP server you can run Oracle parallel DML.  You can also manually parallelize the update by breaking it into multiple jobs and submitting them simultaneously using dbms_scheduler, cron, or nohup.
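As a sketch of the parallel DML route (the table name and degree here are illustrative), note that parallel DML must be enabled at the session level before the hint takes effect:

```sql
-- Parallel DML is disabled by default; enable it for this session first
ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(mytab, 8) */ mytab
   SET status = 'new'
 WHERE status = 'old';

-- Parallel DML requires a commit (or rollback) before the table can be queried again
COMMIT;
```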

Use CTAS in lieu of large updates

When you are updating the majority of rows in a table, using Create Table As Select (CTAS) often performs better than a standard update.  For example, assume that the following update changes 75% of the table rows:

    update
       mytab
    set
       status = 'new'
    where
       status = 'old';

In this case, a parallelized CTAS may perform far faster (Note: Make sure that you have an SMP server before using the parallel degree option):

    create table new_mytab NOLOGGING as
    select  /*+ full(mytab) parallel(mytab,35) */
       decode(status, 'old', 'new', status) status,
       col2, col3, col4
    from mytab;

    -- rebuild indexes, triggers and constraints to new_mytab

    rename mytab to bkup_mytab;
    rename new_mytab to mytab;
    

Include the SET condition in the WHERE clause

This note shows a case where the developer forgot to include the SET condition in the UPDATE WHERE clause, causing high redo waits (log file parallel write waits, log file sync waits).  Simply adding the existing state of the SET column to the WHERE clause can yield a huge performance improvement for UPDATE statements:

-- zillion row update
UPDATE HISTORY SET FLAG=0 WHERE CLASS='X'

-- hundred row update
UPDATE HISTORY SET FLAG=0 WHERE CLASS='X' AND FLAG!=0
SELECT FOR UPDATE is not a good locking strategy because many things can go wrong.  Instead of SELECT FOR UPDATE, savvy Oracle developers adopt alternative mechanisms, such as re-reading the rows at update time and ensuring that they have not changed since the original read.
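One way to sketch this re-read alternative is optimistic locking against a version column; the id and version columns on HISTORY below are hypothetical additions:

```sql
-- Read the row and remember its version
SELECT flag, version
  INTO :old_flag, :old_version
  FROM history
 WHERE id = :id;

-- At update time, succeed only if the row is unchanged since the read
UPDATE history
   SET flag = 0,
       version = version + 1
 WHERE id = :id
   AND version = :old_version;

-- SQL%ROWCOUNT = 0 means another session changed the row: re-read and retry
```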
Have a small, separate data cache for high DML tables

As more shops adopt 64-bit servers with giant data buffers, we see delays caused by the database writer process having to scan through those giant buffers seeking dirty blocks.  Many shops are replacing their platter-style disks with solid-state disks and creating a very small data buffer just for the updates.  The book Oracle Tuning: The Definitive Reference notes that if you are still using traditional disks, many shops segregate high-update objects (tables and indexes) into a separate blocksize so that they can have a separate, small data buffer.

    "By segregating high-activity tables into a separate, smaller data buffer, Oracle has far fewer RAM frames to scan for dirty blocks, improving throughput and also reducing CPU consumption. This is especially important for high-update tables with more than 100 row changes per second."

MOSC Note:223299.1 also embraces the importance of multiple blocksizes, listing the multiple buffer regions as among the most important tuning parameters in Oracle9i.
Simplify the WHERE clause predicates

The most common issue with updates is the need for a complex SELECT statement in the WHERE clause to identify the rows to be updated.  The best techniques for simplifying UPDATE WHERE clauses include:

    * Predicate pushing
    * Rewriting subqueries as outer joins (if possible)
    * Oracle SQL subquery unnesting
    * Partitioning Pruning
    * Avoid IN and NOT IN - It is a good idea to discourage the use of the NOT IN clause (which invokes a subquery) and to prefer NOT EXISTS (which invokes a correlated subquery).
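For instance, a NOT IN subquery can usually be rewritten as a NOT EXISTS correlated subquery (the table and column names here are illustrative):

```sql
-- NOT IN form: invokes a subquery, and returns no rows at all
-- if bad_credit.cust_nbr is ever NULL
SELECT cust_id
  FROM customers
 WHERE cust_id NOT IN (SELECT cust_nbr FROM bad_credit);

-- NOT EXISTS form: correlated subquery, usually friendlier to the optimizer
SELECT c.cust_id
  FROM customers c
 WHERE NOT EXISTS (SELECT 1 FROM bad_credit b WHERE b.cust_nbr = c.cust_id);
```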

Monday, September 6, 2010

11g R2 New Features

Application Development

Application Express:
  • Application Date Format
  • Custom Themes
  • BLOB Support
  • JavaScript Libraries
  • Report Printing (Enhanced)
  • Forms Conversion
  • Security (Enhanced)
  • Runtime-Only Install
General Features:
  •  Pro*C
  •  “WITH HOLD” Option for Cursors
  •  8-byte Numeric host Variable
  •  Pro*COBOL
  •  8-byte Numeric host Variable
JDBC
  • TZ Patching
  • SecureFile Zero-Copy
OCI
  • 8-byte Numeric host Variable
Availability


  • Backup and Recovery:
  • Automatic Block Repair
  • OSB Cloud Module (Amazon Service)
  • DUPLICATE
  • Without Target Connection
  • NOREDO and “UNDO TABLESPACE”
  • Tablespace Checks
  • TSPITR (Enhanced)
  • “SET NEWNAME” format and clauses  
Online Maintenance and Upgrade:
  • Edition Based Redefinition
  • “CREATE OR REPLACE TYPE” to Allow FORCE
  • Fine-Grained Dependencies for Triggers
  • IGNORE_ROW_ON_DUPKEY_INDEX Hint (Insert Statement)

 Data Guard:

  • Compressed Table (Logical Standby and LogMiner)
  • Real-Time Query Lagging
  • FAN Support
  • 30 Standby Databases
BI and Data Warehousing

Analytics:
  • Analytic Functions 2.0
  • Recursive WITH
Data Loading:
  • EXECUTE Privilege for Directory Objects
  • ORACLE_LOADER Access Driver Extended
Partitioning:
  • Virtual Columns in PK or FK
  • System-Managed Indexes for List Partitions
Performance and Scalability:
  • In-Memory Parallel Execution
  • Auto DOP
  • DBMS_PARALLEL_EXECUTE Package
  • On-Commit Fast Refresh (Significant Performance Improvement)
Warehouse Builder:
  • Mapping Editor
  • Advanced Find
  • Copy and Paste of Operators and Attributes
  • Grouping and Spotlighting of Objects
  • Business Intelligence Tool Integration (BI Standard and Enterprise)
  • Current Configuration Dropdown in Design Center
  • Flat File Imports (Enhanced Support)
  • Table Function (Enhanced Support)
  • Experts Available in Editor Menu
  • Expression Editing in Operator Edit
  • Management/UI of Locations Registered (Enhanced)
  • Key Lookup Operator (Enhanced)
  • Mapping Debugger (Enhanced)
  • JDeveloper-Style UI
  • Operator References Included in Generated PL/SQL Code
  • Quick Mapper
  • Repository Browser (Enhanced)
  • Simplified Repository Upgrades
  • LONG Data Type Data Extraction (Supported)
  • Sub-Queries in Join Operator (Supported)
Clustering :RAC in-use
  • Cluster Time Service
  • Oracle Cluster Registry (Enhancements)
  • Voting Disk (Enhancements)
  • Grid Plug and Play
  • Oracle Restart
  • Policy-Based Cluster/Capacity Management
  • Clusterware Resource Modeling (Enhanced)
  • Role-Separated Management
  • Agent Development Framework
  • Zero Downtime Patching (Clusterware and RAC)
  • Out-of-Place Clusterware Upgrade
  • Server Control (SRVCTL)
  • Oracle Restart (Support)
  • Grid Plug and Play (Support)
  • Single Instance Database in a Cluster (Support)
  • Universal Connection Pool (UCP)
  • Data Guard Integration
  • RAC Integration
  • JDBC (Support)
  • Enterprise Manager
  • Clusterware Resource Management
  • Provisioning (Clusterware and RAC)
  • Grid Plug and Play (Support)
  • Oracle Restart (Support)
  • Universal Installer
  • Removing RAC (Support)
  • Installation Rollback
  • Oracle Restart Integration
  • Out-of-Place Upgrade (Support)
  • Fixup Scripts and Grid Infrastructure Checks
  • Typical Installation (Option)
  • Single Client Access Name (SCAN)
  • Downgrade Database Configured with DBControl
  • Java API for RAC FAN Events
  • Intelligent Platform Management Interface (IPMI) Integration
  • ASM Integration with Clusterware
  • Configuration Assistants
  • New Features (Support)
  • Removing RAC (Support)
  • Cluster Verification Utility
  • New Features (Enhanced)
  • Integration into Universal Installer

Database General:
  • Flashback Data Archive DDL (Support)
  • Instance Caging
  • Scheduler:
  • E-mail Notification
  • File Watcher
  • Multiple Destination Jobs
  • Remote Database Jobs
  • Utilities:
  • Data Pump Legacy Mode
  • IPv6:
  • JDBC Thin Clients (Support)
  • JVM and Java Debuggers (Support)
  • Smart Flash Cache
  • Stored Outline Migration to SQL Plan Management
  • Client Result Cache Using Table Annotations (Support)
  • 4 KB Sector Disk Drive (Support)
Security
  • Audit Data Management:
  • Audit Trail Cleanup
  • Encryption Key Management
  • Tablespace Master Key Rekey
Server Management
  • Automatic Storage Management (ASM) for all Data
  • ASM Dynamic Volume Manager (DVM)
  • ASM Cluster File System (ACFS)
  • ASM FS Snapshot
  • Oracle Cluster Registry (OCR) and Voting Disk
  • ASM Intelligent Data Placement
  • ASM Configuration Assistant
  • ASM File Access Control
  • ASMCMD (Extension)
  • Enterprise Manager Integration
  • ACFS (Support)
  • ASM File Access Control
  • EMCA for New RAC Configurations (Support)
  • Patch Application with DBConsole (Single-Instance)
  • Automatic Patching of Time Stamp w/ Time Zone Data
  • Prevent Data Loss with Local Time Zone Data Type
  • Segment Creation on Demand
  • Zero-Sized Unusable Indexes and Index Partitions
  • Metadata SXML Comparison Tool
  • Compare Period Report
  • Compare SQL Tuning Sets
  • Sampling for Active Data Guard
  • Exadata Simulation
  • Global RAC ASH Report and ADDM Backward Compatibility
  • Replay capture on MTS and Streams (Support)
  • SQL Tuning Advisor (Support)
  • Parallel Query (Auto DOP)
  • Action Plan Recording
  • Export SQL Tuning Set (STS)
  • Replay Synchronization Controls
Multimedia and DICOM:
  • Extraction of Requested Attributes Only
  • Client-Side DICOM Attribute Extraction
  • DICOM (Enhancements)
  • Watermarking and Image Processing (Enhancements)

Spatial:
  • 3D Visualization Framework
  • Network Data Model (Enhancements)
  • GeoRaster JAVA API
  • Raster Reprojections and Ground Control Point-Based Georeferencing
  • Routing and Geocoding (Enhancements)

SecureFiles:

  • Database File System (DBFS)
  • LZO Compression (Support)

XML DB:

  • Binary XML (Enhancements)
  • XML DB Repository Performance (Enhancements)
  • XMLIndex (Enhancements)
  • XMLType Partitioning


Oracle JDBC encryption

How to configure an encrypted connection to protect data over the wire, plus checksumming, which prevents certain classes of network attacks.

Preliminary - Gather Information

The DBA must have installed the Oracle server software with the Advanced Security option, and must have configured the database listener to at least allow encrypted sessions and checksumming.

Install Oracle Client

The standard Oracle client software is sufficient on the client side. There are no special client editions of the software to consider. Install the client software and test it as per usual.


Configuring the Client for Encryption

Edit your $ORACLE_HOME/network/admin/sqlnet.ora file on the client machine. For this example, we'll assume the DBA has set up your instance to use RC4_256 for encryption and MD5 for checksums. We'll also assume you want your connection to be encrypted, or else the connection attempt should fail.

To configure this, add the following lines to your sqlnet.ora :

SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (MD5)
SQLNET.ENCRYPTION_TYPES_CLIENT = (RC4_256)
SQLNET.ENCRYPTION_CLIENT = required
SQLNET.CRYPTO_CHECKSUM_CLIENT = required
SQLNET.CRYPTO_SEED = 'insert a random string from 10-70 characters here'

The full set of options for both ENCRYPTION_CLIENT and CRYPTO_CHECKSUM_CLIENT are: accepted, rejected, requested, required

The full set of options for ENCRYPTION_TYPES_CLIENT are: RC4_256, RC4_128, RC4_56, RC4_40, AES256, AES192, AES128, 3DES168, 3DES112, DES, DES40 (Note: Oracle recommends RC4_256 for performance reasons.)

The full set of options for CRYPTO_CHECKSUM_TYPES_CLIENT are: MD5, SHA1

The CRYPTO_SEED must be quoted in single or double quotes.


It's possible to specify more than one encryption or checksum algorithm. To do so, separate them by commas like so:

SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256,RC4_256,DES)

They will be tried in order until a match is found. If no match is found, then the connection is refused (if either client or server require encryption) or allowed but is unencrypted (all other cases). This may be useful if two different instances have a disjoint set of accepted encryption algorithms.

Note: There is no plausible reason we can think of to configure the client or server to always reject encrypted connections. However, if one side is set to reject, and the other to require, the result is what you'd expect: a connection can never be established.

Verifying encryption is working

To verify that encryption is working, using a packet sniffer to inspect the connection to the Oracle instance is both tedious and error prone: it is not trivial to distinguish compressed data streams from encrypted ones.

A more straightforward and reasonably solid approach is to enable client-side tracing and let Oracle's diagnostic information verify if encryption is working. To do this, edit your sqlnet.ora file, and temporarily add the following lines:

TRACE_LEVEL_CLIENT=10
TRACE_DIRECTORY_CLIENT=(a fully qualified directory to which you have write permissions)

NOTE: On Windows, TRACE_DIRECTORY_CLIENT is reported to be ignored. Instead, look under:

"C:\Documents and Settings\\Oracle\oradiag_\diag\clients\user_\host_<...>\trace\"

Use sqlplus to log in to your instance. We recommend not using more elaborate or GUI tools for this test, as many such tools automatically issue a number of queries on login, which makes the trace logs extremely verbose. After you have logged in successfully, log out.

In the TRACE_DIRECTORY_CLIENT, you should see a file named cli_NNNNN.trc, where NNNNN is an integer. (Note for Windows: the trace files are reported to begin with ora_ rather than cli_.) Grep or visually inspect this file, looking for the string 'encryption'. For example, a session that has successfully set up an encrypted connection will contain a line such as:

[24-OCT-2006 10:00:32:087] na_tns: encryption is active, using RC4_256

A cleartext connection would instead look like:

[24-OCT-2006 10:07:34:707] na_tns: encryption is not active
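The inspection can be scripted; this sketch writes a sample trace line to a temporary file (the file name and path are illustrative) and greps for the marker string:

```shell
# Simulate a trace line; a real file is cli_NNNNN.trc in TRACE_DIRECTORY_CLIENT
printf '[24-OCT-2006 10:00:32:087] na_tns: encryption is active, using RC4_256\n' > /tmp/cli_demo.trc

# Report whether the session negotiated encryption
if grep -q 'encryption is active' /tmp/cli_demo.trc; then
  echo "encrypted"
else
  echo "cleartext"
fi
# prints "encrypted" for the sample line above
```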

If you are familiar with Oracle client traces, please note that encrypted session negotiation will not show up at trace levels below 10. You must use an ADMIN or SUPPORT trace level.

Once testing is complete, you will probably want to remove TRACE_LEVEL_CLIENT and TRACE_DIRECTORY_CLIENT from your sqlnet.ora file.

Reference:
http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/clntsec.htm#insertedID5