Oracle RAC has come a long way from its introduction in Oracle 9i to the current Oracle RAC 12c release. I remember the days when we had to deal with RAC issues ranging from performance to stability, which gradually improved as the product matured.
Features continued to be added with each release, and 12c now has a very flexible architecture, able to fulfill numerous requirements using different configurations. Not only is it a robust high-availability database solution, it is also capable of providing the infrastructure to host other applications. Here is a brief review of these features and their functionality.
The following are some of the important new features introduced in Oracle RAC 12c (12.1.0.1).
- Flex Cluster
- Flex ASM
- IPv6 Support for PUBLIC, SCAN, VIP, GNS address
- Global Data Services
- Online Resource Attribute Modification
- ASM Disk Group: Shared ASM Password File
- Valid Node Checking: Restricting Service Registration
- Shared GNS
- CHM Enhancements for Flex Clusters
- Windows: Support for Oracle Home User
- OUI: Enhancements and Improvements
- Application Continuity
- Transaction Idempotence and Java Transaction Guard
The following are some of the new features introduced in Oracle RAC 12c (12.1.0.2).
- Rapid Home Provisioning
- Oracle Clusterware support for Diagnosability Framework (ADR)
- Oracle Trace File Analyzer Collector
- Automatic Installation of GIMR
Oracle 12c RAC New Features
Oracle Flex ASM
This feature reduces overhead on the database servers by allowing an ASM instance to run remotely, on a different node: all metadata requests can be served by a non-local ASM instance. Database instances can also fall back to a remote ASM instance during planned or unplanned downtime. In addition, this removes the single point of failure of depending on a single local ASM instance for storage management: if the local ASM instance fails, database instances simply reconnect to any of the surviving ASM instances in the cluster.
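For illustration, a few commands (run from the Grid Infrastructure home on a cluster node) show how you might check whether a cluster is running Flex ASM; the output details will of course depend on your environment:

```shell
# Report the cluster's ASM mode (indicates whether Flex mode is enabled)
asmcmd showclustermode

# Show which nodes currently host an ASM instance
srvctl status asm -detail

# Show the ASM configuration, including the number of ASM instances
# (the cardinality) that Flex ASM keeps running across the cluster
srvctl config asm
```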
ASM Disk Scrubbing
This monitors the disks in ASM disk groups and discovers logical corruptions. Previously, such corruptions were typically discovered only when an RMAN backup job ran. Disk scrubbing will try to repair those logical corruptions automatically, without the DBA even knowing!
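Scrubbing can also be invoked on demand with ALTER DISKGROUP. A sketch, assuming a diskgroup named DATA with ASM redundancy (repair works from the mirror copies):

```shell
sqlplus -S / as sysasm <<'EOF'
-- Scrub the whole diskgroup at low I/O impact
ALTER DISKGROUP data SCRUB POWER LOW;
-- REPAIR asks ASM to fix any logical corruption it finds
ALTER DISKGROUP data SCRUB REPAIR POWER LOW;
EOF
```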
Shared Password file in ASM
A single password file can now be stored in the ASM diskgroup and can be shared by all nodes. No need to have individual copies for each instance.
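A sketch of creating a shared password file directly in ASM; the diskgroup and database names here are placeholders:

```shell
# Create the password file for database "orcl" inside the DATA diskgroup
# (orapwd prompts for the SYS password)
orapwd file='+DATA/orapworcl' dbuniquename='orcl'

# Verify where the cluster thinks the password file lives
srvctl config database -db orcl | grep -i password
```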
ASM Multiple Diskgroup Rebalance and Disk Resync Enhancements
Resync power limit – Lets you control the speed of a disk resync by specifying a power value, much like rebalance power; multiple disk groups can also be rebalanced concurrently.
Disk resync checkpoint – An interrupted resync resumes from the last checkpoint instead of starting from the beginning, giving faster recovery from instance failures.
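As an illustration, the resync power can be specified when bringing a disk back online; the diskgroup and disk names below are placeholders:

```shell
sqlplus -S / as sysasm <<'EOF'
-- Bring a previously offlined disk back online, allowing up to
-- 8 units of resync I/O; if the resync is interrupted, it resumes
-- from the last checkpoint rather than starting over
ALTER DISKGROUP data ONLINE DISK data_0001 POWER 8;
EOF
```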
Grid Infrastructure Rolling Migration Support for One-off Patches
When a one-off patch is applied to an ASM instance, the databases it serves can be pointed to a different ASM instance, so the patch can be applied in a rolling fashion.
Oracle Clusterware Flex Cluster
This feature may sound similar to Flex ASM, but it is not: it is a new type of cluster introduced in Oracle 12c. To understand it, we need to look at its two main components, Hub Nodes and Leaf Nodes.
Hub nodes are the full-fledged nodes you see in the 11g RAC architecture today: each runs the complete Clusterware stack, with shared storage (including the voting disk), the interconnect network, and the other usual components. Leaf nodes, on the other hand, are lightweight nodes with no shared storage and a minimal Clusterware footprint; each leaf node connects to the cluster through a hub node.
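Node roles in a Flex Cluster can be inspected and changed with crsctl; a brief sketch:

```shell
# Check whether the cluster is running in standard or flex mode
crsctl get cluster mode status

# Show the configured role of the local node (hub or leaf)
crsctl get node role config

# Change the local node's role to leaf (takes effect after
# Clusterware is restarted on that node)
crsctl set node role leaf
```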
Grid Home Server
This new feature allows you to keep a single golden Oracle home on one of the nodes, with all other nodes acting as clients of that golden home. You only have to patch the single golden Oracle home, and the client nodes pick up the change from there.
The purpose of leaf nodes is to host application servers and other software on the Oracle 12c Clusterware infrastructure. Leaf nodes will not run any database instances, and if a leaf node goes down there is no impact on the hub nodes. This gives you the flexibility to run leaf nodes on virtual machines while hub nodes run on the actual physical machines.
Application Continuity
This feature helps minimize the application downtime caused by temporary failures in the infrastructure or the database servers. It sits between the application and the database, working at the JDBC driver layer. If a recoverable failure occurs, it is recovered automatically and transparently to the application, which observes only a small additional latency on the affected transactions. Together with Transaction Guard, Oracle ensures that in-flight transactions complete at most once, eliminating the chance of duplicate transactions.
IPv6 Support
IPv6 was supported in Oracle Database 11gR2, but only for standalone databases. With 12c, clients can now connect to databases in RAC environments over IPv6 as well. The interconnect, however, still supports only IPv4. This feature helps customers meet PCI, SOX, and other security compliance standards.
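For illustration, a client-side TNS alias using an IPv6 address might look like the following; the address, alias, and service name are assumptions, and note that IPv6 literals are enclosed in square brackets:

```shell
# Append an example IPv6 TNS alias to the client's tnsnames.ora
cat >> $ORACLE_HOME/network/admin/tnsnames.ora <<'EOF'
ORCL6 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = [2001:db8::25])(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl.example.com)))
EOF
```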
Multiple SCANs Per Subnet
You can now configure multiple SCANs per subnet, per cluster. This is made available primarily to provide redundancy.
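A sketch of adding a second SCAN on an additional cluster network; the network number, subnet, and SCAN name here are all assumptions:

```shell
# Define a second cluster network (network number 2) on its own subnet
srvctl add network -netnum 2 -subnet 192.168.2.0/255.255.255.0

# Add a SCAN and its listener on that network, then start the listener
srvctl add scan -scanname scan2.example.com -netnum 2
srvctl add scan_listener -netnum 2
srvctl start scan_listener -netnum 2
```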
The new rhpctl utility, part of Rapid Home Provisioning, will improve the patching process.
OUI Auto-runs root.sh
The Oracle Universal Installer can now execute the root.sh script on all nodes for you; you no longer have to run it on each node manually.
While it is good to be familiar with the new features, it is equally if not more important to be aware of which RAC features are being deprecated, so that plans can be made to move away from them and alternatives can be chosen.
Oracle Restart
The Oracle Restart feature, provided as part of Oracle Grid Infrastructure, has been deprecated and will be desupported in a future release.
RAW/Block Storage Devices
Oracle Database 12c and Oracle Clusterware 12c no longer support raw storage devices. Any database files on raw devices must be moved to Oracle ASM before upgrading to Oracle Clusterware 12c.
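One common way to move datafiles off raw devices is an RMAN image copy into ASM; a sketch, assuming a diskgroup named DATA and a database that can be kept in MOUNT state for the switch:

```shell
rman target / <<'EOF'
# Copy all datafiles into the DATA diskgroup as image copies
BACKUP AS COPY DATABASE FORMAT '+DATA';
# With the database mounted (not open), repoint the control file
# at the new copies inside ASM
SWITCH DATABASE TO COPY;
EOF
```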