Channel: GoldenGate – Oracle DBA – Tips and Techniques

GoldenGate 12.2 supports INVISIBLE columns


Oracle GoldenGate 12.2 now supports replication of tables with INVISIBLE columns, which was not possible in earlier releases.

Let us look at an example.

We create a table on both the source and target databases with a COMMISSION column that is both INVISIBLE and VIRTUAL.

SQL> create table
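A minimal sketch of such a table definition (the table name, column names and the virtual expression here are assumptions for illustration):

SQL> create table emp_comm
     (empno      number primary key,
      salary     number,
      commission number invisible generated always as (salary * 0.10) virtual);

The same statement would be run in both the source and target schemas.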

Tuning Integrated Replicat performance using EAGER_SIZE parameter


Is Oracle GoldenGate really designed for batch processing or “large” transactions? I am not sure what the official Oracle position is, but I would hazard a guess and say probably not. That kind of workload is perhaps better suited to an ETL product such as Oracle Data Integrator.

GoldenGate considers

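For the Integrated Replicat, EAGER_SIZE is passed to the database inbound server through DBOPTIONS INTEGRATEDPARAMS; a hedged sketch of a Replicat parameter file (the group name, credential alias, schema names and threshold value are illustrative):

REPLICAT rep1
USERIDALIAS oggadmin
DBOPTIONS INTEGRATEDPARAMS (PARALLELISM 4, EAGER_SIZE 15000)
MAP source.*, TARGET target.*;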

GoldenGate 12.2 New Feature – Check and validate parameter files using chkprm


In GoldenGate 12.2 we can now validate parameter files before deployment.

There is a new utility called chkprm which can be used for this purpose.

To run the chkprm utility we provide the name of the parameter file and can optionally indicate what process this parameter file belongs to using the COMPONENT keyword.

Let us look at an example.

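A hedged sketch of an invocation from the GoldenGate home (the executable ships as checkprm; the parameter file name and component value here are illustrative):

$ cd $OGG_HOME
$ ./checkprm ./dirprm/ext1.prm COMPONENT EXTRACT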

GoldenGate 12.2 New Feature – INFO and GETPARAMINFO


New in Oracle GoldenGate 12.2 is the ability to get detailed help about the usage of a particular parameter (INFO), as well as information about the active parameters associated with a running Extract, Replicat or Manager process (GETPARAMINFO).

 

INFO
 
In this example we see all the information

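A hedged sketch of both commands from GGSCI (the parameter name and process group are illustrative):

GGSCI> INFO PARAM TRANLOGOPTIONS
GGSCI> SEND EXTRACT EXT1 GETPARAMINFO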

Configuring a Downstream Capture database for Oracle GoldenGate


Oracle GoldenGate versions 11.2 and above enable downstream capture of data from a single source or multiple sources. This feature is specific to Oracle databases. It helps customers meet the common IT requirement of limiting the new processes installed on their production source system.

This feature requires some configuration

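At a high level, the source database must ship its redo to the downstream mining database; a hedged sketch of the redo transport settings on the source (the DB unique names srcdb and dwnstrm and the TNS service name are assumptions):

SQL> alter system set log_archive_config='dg_config=(srcdb,dwnstrm)' scope=both;

SQL> alter system set log_archive_dest_2='service=dwnstrm async noregister valid_for=(online_logfiles,primary_role) db_unique_name=dwnstrm' scope=both;

SQL> alter system set log_archive_dest_state_2=enable scope=both;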

How to configure high availability for Oracle GoldenGate on Exadata


This note describes the procedure used to configure high availability for Oracle GoldenGate 12.2 on Oracle Database Machine (Exadata X5-2) using Oracle Database File System (DBFS), Oracle Clusterware and Oracle Grid Infrastructure Agent.

 

The note also describes how we can create different DBFS file

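A hedged sketch of registering GoldenGate with the Grid Infrastructure Agent (XAG) via agctl (resource names, paths and the exact flag set vary by XAG version; all values here are illustrative):

$ ./agctl add goldengate GG_SOURCE \
    --gg_home /u03/app/oracle/goldengate \
    --instance_type source \
    --nodes exadb01,exadb02 \
    --vip_name gg_vip \
    --filesystems dbfs_mount \
    --databases ora.srcdb.db \
    --oracle_home /u01/app/oracle/product/12.1.0.2/dbhome_1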

GoldenGate INSERTALLRECORDS and OGG-01154 SQL error 1400


The GoldenGate INSERTALLRECORDS parameter can be used when the requirement is to maintain, on the target database, transaction history or change data capture (CDC) tables which keep track of the changes a table undergoes at the row level.

So every INSERT, UPDATE or DELETE statement on

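A hedged sketch of a Replicat parameter file using INSERTALLRECORDS (the group name, credential alias, schema and table names are illustrative):

REPLICAT rephist
USERIDALIAS oggadmin
INSERTALLRECORDS
MAP src.orders, TARGET hist.orders_hist;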

GoldenGate 12c Performance Tuning Webinar


I will be conducting two sessions of a webinar on GoldenGate Performance Tuning Tips and Techniques.

Use the link below to register for this FREE webinar!

https://attendee.gotowebinar.com/rt/6709628250976917251

Hurry as space is limited for this free webinar.

 

 


GoldenGate Performance Tuning Webinar


The Oracle GoldenGate Performance Tuning Webinar was well received by over 200 attendees over two separate sessions.

The feedback received was very positive, and I am sharing the slide deck, which can be downloaded from the link below:

Download the presentation ….

 

Installing and Configuring Oracle GoldenGate Veridata 12c


This note demonstrates how to install and configure Oracle GoldenGate Veridata 12c, both the server and the agent.

At a high level the steps include:

  • Install Veridata Server
  • Create the GoldenGate Veridata Repository Schema using RCU
  • Configure WebLogic domain for Oracle GoldenGate Veridata
  • Start Admin and Managed Servers
  • Create the

Installing and Configuring Oracle GoldenGate Monitor 12c (12.1.3.0)


GoldenGate Monitor is a web-based monitoring console that provides a real-time graphical overview of all the Oracle GoldenGate instances in our enterprise.

We can view statistics and alerts as well as monitor the performance of all the related GoldenGate components in all environments in our enterprise from a single console.


Oracle GoldenGate 12c Release 3 (12.3.0.1.0) Microservices Architecture


Microservices Architecture is an approach to developing applications in which an application is built as a suite of small, modular services that are deployed independently.

Each module supports a specific business goal and uses a simple, lightweight and well-defined interface to communicate with the other services.

Oracle GoldenGate Microservices Architecture (MA) follows this approach: it is based on REST APIs which enable us to configure, monitor, and manage Oracle GoldenGate services using a web-based user interface.

So now in Oracle GoldenGate 12.3, we have two architectures available for deploying GoldenGate – the (original) Classic Architecture and the new Microservices Architecture.

The Microservices Architecture in Oracle GoldenGate comprises five main components:

  • Service Manager
  • Administration Server
  • Distribution Server
  • Receiver Server
  • Performance Metrics Server

Service Manager: Enables us to administer, monitor and manage other services available with Microservices Architecture.  Through Service Manager, we can manage one or multiple Oracle GoldenGate deployments on a local host.

Administration Server: The central entity for managing the various components of a GoldenGate deployment. The Administration Server can create and manage local Extract and Replicat processes even without access to the server where Oracle GoldenGate is installed. Tasks like creating or altering Extract and Replicat processes, creating credentials for GoldenGate security, viewing report files, adding supplemental log data, and creating checkpoint and heartbeat tables can all be performed from a web browser as well as from the command-line Admin Client.

Distribution Server: A high-performance application which functions as a networked data distribution agent. The Distribution Server distributes one or more trails to one or more destinations and can also perform filtering operations if configured to do so. It supports a number of communication protocols, including the classic Oracle GoldenGate protocol for communication between the Distribution Server and the Collector, WebSockets for SSL-secured HTTPS-based streaming, and proxy support for cloud-based environments.

The Distribution Server is used to set up a Path between the source and target deployments.

Receiver Server: Provides the central service that handles all incoming trail files and communicates with the Distribution Server over the network.

Performance Metrics Server: Collects and stores performance data related to a GoldenGate deployment. It enables us to monitor performance metrics using a web application and to use the data to tune deployments for maximum performance.
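Because all of these services expose REST APIs, they can also be driven without the web UI; a hedged sketch using curl against the Service Manager (the host, port, credentials and deployment name are illustrative):

$ curl -s -u oggadmin:oracle http://localhost:9001/services/v2/deployments

$ curl -s -u oggadmin:oracle http://localhost:9001/services/v2/deployments/ogg_deployment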

 

Let us look at an example of GoldenGate 12c Release 3 Microservices Architecture (MA) at work!

 

Demo environment setup

 

We will be using a single Oracle 12c Release 2 database for this demo with two schemas – SOURCE and TARGET.

We will create the MYOBJECTS table in both schemas and perform some DML activity on the source table and note how replication is performed and managed using GoldenGate Microservices.

 

 

SQL> alter database force logging;

Database altered.


SQL> alter database add supplemental log data;

Database altered.


SQL> alter system set enable_goldengate_replication=true;

System altered.


SQL> create user oggadmin identified by oracle;

User created.



SQL> grant dba to oggadmin;

Grant succeeded.



SQL> exec dbms_goldengate_auth.grant_admin_privilege('OGGADMIN');

PL/SQL procedure successfully completed.


SQL> create user source identified by oracle
    default tablespace users
 temporary tablespace temp;

User created.

SQL> create user target identified by oracle
      default tablespace users
   temporary tablespace temp;

User created.


SQL> grant connect,resource,create table to source,target;

Grant succeeded.


SQL> alter user source quota unlimited on users;

User altered.


SQL> alter user target quota unlimited on users;

User altered.


SQL> conn source/oracle

Connected.


SQL> create table myobjects
     as select * from all_objects
     where 1=2;

Table created.


SQL> conn target/oracle

Connected.

SQL> /

Table created.


SQL> conn source/oracle

Connected.


SQL> alter table myobjects add constraint pk_myobjects
   primary key (object_id);

Table altered.


SQL> conn target/oracle

Connected.


SQL> /

Table altered.


Download the MA software (Oracle GoldenGate 12.3.0.1.0 Microservices for Oracle) from OTN.

 

Install Oracle GoldenGate 12.3.0.1.0 Microservices


Create a deployment using Oracle GoldenGate Configuration Assistant

 

Launch the Configuration Assistant via the oggca.sh script located in the $OGG_HOME/bin directory. Through the Configuration Assistant we can create the Service Manager as well as configure the deployment. 

A single Service Manager can support a number of deployments.

 

[oracle@linux01 Disk1]$ export TNS_ADMIN=/u02/app/oracle/product/12.2.0/dbhome_1/network/admin

[oracle@linux01 Disk1]$ export OGG_HOME=/u04/app/oracle/ogg_ma

[oracle@linux01 Disk1]$ cd $OGG_HOME/bin

[oracle@linux01 bin]$ ./oggca.sh


Note: in this example we are not configuring any wallets or certificates required for SSL security.

 

 

Allocate the ports – enter 9001 for the Administration Server port; the other ports are assigned automatically by clicking in the port number fields.


[root@linux01 etc]# /u04/app/oracle/ogg_sm/bin/registerServiceManager.sh

----------------------------------------------------

     Oracle GoldenGate Install As Service Script   

----------------------------------------------------

OGG_HOME=/u04/app/oracle/ogg_ma

OGG_CONF_HOME=/u04/app/oracle/ogg_sm/etc/conf

OGG_USER=oracle

Running OracleGoldenGateInstall.sh...

 

 

 

 

Connect to Oracle GoldenGate Service Manager


Connect to Oracle GoldenGate Administration Server


Create the Credentials for the OGGADMIN user


Add Supplemental Logging for the SOURCE schema


Create the CHECKPOINT Table

 

 

 

Create the Classic Extract EXT1


Edit the parameter file and add the TABLE parameter


View details of the EXT1 Classic Extract


Connect to the Distribution Server

Click on the Distribution Server link in the Service column.


Create the Path


View details of the distribution path PUMP1


Connect to the Administration Server and create the Replicat process


Edit the parameter file and change the MAP and TARGET parameters


Perform DML on the source table

 

On the SOURCE schema MYOBJECTS  table, we will now run an INSERT statement.

 

SQL> conn source/oracle

Connected.


SQL> insert into myobjects 

   select * from all_objects;


56332 rows created.


SQL> commit;


Commit complete.

View details of the replicat process REP1 and note the number of rows which have been processed.


Connect to the Distribution Server and note the statistics of PUMP1 process

 

 

Connect to the Receiver Server and note the statistics related to the Network and I/O due to the last transaction which was processed.

Click on the Action drop-down ….


Connect to the Performance Metrics Server and note the performance metrics related to extract EXT1

 

 

 

Click on EXT1

 

 

 

Oracle GoldenGate 12c Release 3 New Feature Parallel Replicat


One of the new features introduced in GoldenGate 12c Release 3 (12.3.0.1) is the Parallel Replicat feature.

So now, in addition to the Classic Replicat, Coordinated Replicat and Integrated Replicat options, we have one more Replicat option available.

On the surface, the Parallel Replicat appears very similar to the Integrated Replicat: we can control the number of applier processes manually, and the apply process can also be auto-tuned, with additional applier processes added on the fly based on the workload the Replicat is handling. This is managed by the Parallel Replicat parameters APPLY_PARALLELISM, MIN_APPLY_PARALLELISM and MAX_APPLY_PARALLELISM.

In addition, similar to the EAGER_SIZE parameter used by the Integrated Replicat to define what a ‘large’ transaction is, the Parallel Replicat has an equivalent parameter called CHUNK_SIZE.

We also have a parameter called SPLIT_TRANS_RECS which we can use to break a large transaction into logically smaller pieces which can then be applied in parallel. Dependencies are managed and maintained as well.

What is different from the Integrated Replicat is that there is no requirement to set STREAMS_POOL_SIZE, and no log mining server related processing happens inside the database.
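A hedged sketch of how these parameters sit together in a Parallel Replicat parameter file (the group name, credential alias and schema names are illustrative; a fixed APPLY_PARALLELISM would be used instead of the MIN/MAX pair if auto-tuning is not wanted):

REPLICAT rpar1
USERIDALIAS oggadmin
MIN_APPLY_PARALLELISM 2
MAX_APPLY_PARALLELISM 8
SPLIT_TRANS_RECS 10000
MAP source.*, TARGET target.*;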

Let us look at an example of using the Parallel Replicat feature.

The example assumes the following:

  • Oracle database software is 12c Release 2 and the source and target databases have been configured appropriately for Oracle GoldenGate replication
  • Oracle GoldenGate 12c Release 3 Micro Services software has been installed
  • A deployment called test_ogg_123 has been created via Oracle GoldenGate 12.3 Service Manager
  • Credential Store has been configured
  • TRANDATA has been configured at the schema level
  • Checkpoint Table has been created
  • SOURCE and TARGET schemas have been created
  • MYSALES table has been created in both schemas (script below)
SQL> create table mysales
 (id number,
flag number ,
 product varchar2(20),
channel_id number,
cust_id number ,
 amount_sold number,
order_date date,
ship_date date)
;

 

We will see how to use the web interfaces as well as the command line Admin Client to configure Parallel Replicat.

 

Launch Service Manager

 

Create a Classic Extract

 

 

 

Add the MYSALES table to the extract parameter file

 

 

 

Create the distribution path – very similar to creating the Extract Pump process in the Classic Architecture.

 

 

Create the Parallel Replicat. In this case we are creating a non-integrated Parallel Replicat.

 

 

Add the MYSALES table to the replicat parameter file

 

We next use the Admin Client to add some other parameters to the replicat parameter file.

We are changing the value of the parameter MAP_PARALLELISM from the default value of 2 to 4 – this controls the number of mapper processes which will scan or process the trail file.

The default value for APPLY_PARALLELISM is 4 which controls the number of apply processes.

The parameter SPLIT_TRANS_RECS will break up the transaction into units of 10000 rows each and these will be applied in parallel.

 

[oracle@rac03 bin]$ ./adminclient
Oracle GoldenGate Administration Client for Oracle
Version 12.3.0.1.0 OGGCORE_12.3.0.1.0_PLATFORMS_170721.0154
Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.
Linux, x64, 64bit (optimized) on Jul 21 2017 07:16:02
Operating system character set identified as UTF-8.

OGG (not connected) 1> connect http://rac03.localdomain:9001 deployment test_ogg_123 as oggadmin password oracle

OGG (http://192.168.56.102:9001 test_ogg_123) 29> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
ADMINSRVR   RUNNING  
DISTSRVR    RUNNING  
PMSRVR      RUNNING  
RECVSRVR    RUNNING  
EXTRACT     RUNNING     EXT1        00:00:00      00:00:09   
REPLICAT    RUNNING     REP1        00:00:00      00:00:10   


OGG (http://192.168.56.102:9001 test_ogg_123) 30> edit params rep1

replicat rep1
useridalias oggadmin domain OracleGoldenGate
MAP_PARALLELISM 4
SPLIT_TRANS_RECS 10000
MAP source.mysales, TARGET target.mysales;

 

Check that the extract and replicat process are both up and running.

 

 

On the source database, issue the INSERT statement which will populate the MYSALES table with 200,000 rows and commit the transaction.

 SQL> insert into mysales
 select
 rownum,
 rownum + 1,
 'Samsung Galaxy S7',
 mod(rownum,5),
 mod(rownum,1000) ,
 5000,
 trunc(sysdate - 10000 + mod(rownum,10000)),
 trunc(sysdate - 9999 + mod(rownum,10000))
 from dual connect by level<=2e5
 ;

200000 rows created.

SQL> commit;

 

View the statistics of the extract process – similar to STATS ext1 LATEST command. Do the same for the Parallel Replicat.

 

 

Note the positions in the trail files that the Extract and the Distribution Server path PUMP1 are writing to.

 

 

 

We can see that the Parallel Replicat process is processing trail file rt000000000, and that in the Microservices Architecture environment the trail file is now located under the deployment's top-level folder, in the var/lib/data sub-directory.

 

OGG (http://192.168.56.102:9001 test_ogg_123) 40> info rep1

No EXTRACT groups found, but some coordinated threads may have been excluded

REPLICAT   REP1      Last Started 2017-10-19 16:55   Status RUNNING
Parallel
Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
Process ID           10459

Log Read Checkpoint  File /u01/app/oracle/test_ogg_123/var/lib/data/rt000000000
                     2017-10-19 21:06:15.715609  RBA 42758112

 

Connect to the Performance Metrics Server home page from the Service Manager home page, and we can see the individual performance metrics for the REP1 Parallel Replicat process as well as for the 4 mapper processes (REP1M0*) and the 4 applier processes (REP1A0*).

 

 

Oracle GoldenGate 18c Upgrade


This note outlines the procedure followed to upgrade GoldenGate 12.3 to the latest 18c version (18.1.0.0.0).

Note:

  • If we are upgrading from Oracle GoldenGate 11.2.1.0.0 or earlier, we also need to upgrade the Replicat checkpoint table via the GGSCI command UPGRADE CHECKPOINTTABLE [owner.table]
  • If we are using trigger-based DDL replication support, then additional steps need to be carried out which are described in more detail in the GoldenGate Upgrade documentation outlined in the URL below:

https://docs.oracle.com/en/middleware/goldengate/core/18.1/upgrade/upgrading-release-oracle-database.html#GUID-9B490BE5-F0AE-44D1-B63C-F5299B9DFD16

In this example, the source database version is higher than 11.2.0.4 and we are using Integrated Extract where DDL capture support is integrated into the database logmining server.
 
 

  • Verify that there are no open and uncommitted transactions

 

GGSCI (rac01.localdomain) 2> send ext1 showtrans
Sending SHOWTRANS request to EXTRACT EXT1 ...
No transactions found.

GGSCI (rac01.localdomain) 3> send ext1 logend
Sending LOGEND request to EXTRACT EXT1 ...
YES

 

  • Stop the Extract (and Pump)

 

GGSCI (rac01.localdomain) 5> stop extract * 

Sending STOP request to EXTRACT EXT1 ...
Request processed.

Sending STOP request to EXTRACT PUMP1 ...
Request processed.

 

  • Ensure Replicat has finished processing all current DML and DDL data in the Oracle GoldenGate trails before stopping the replicat

Issue the command SEND REPLICAT with the STATUS option until it returns a status of “At EOF” to indicate that it finished processing all of the data in the trail file.
 

GGSCI (rac01.localdomain) 4> send rep1 status 
Sending STATUS request to REPLICAT REP1 ...
  Current status: At EOF
  Sequence #: 2
  RBA: 1,538
  0 records in current transaction.

GGSCI (rac01.localdomain) 6> stop replicat * 

Sending STOP request to REPLICAT REP1 ...
Request processed.

 

  • Stop the Manager process

 

GGSCI (rac01.localdomain) 7> stop mgr !

Sending STOP request to MANAGER ...
Request processed.
Manager stopped.

 

  • Take a backup of the current Oracle GoldenGate installation directory on the source and target systems, as well as any working directories that have been placed on a shared file system for a cluster configuration, such as dirprm, dircrd, dirchk, BR, dirwlt, dirrpt, etc.

We do not need to back up the dirdat folder which contains the trail files.

It is recommended to upgrade both the source as well as target Oracle GoldenGate environments at the same time.

If we are not upgrading Replicat on the target systems at the same time as the source, add the following parameter to the Extract parameter file(s) to specify the version of Oracle GoldenGate that is running on the target.

This parameter causes Extract to write a version of the trail that is compatible with the older version of Replicat.

{EXTTRAIL | RMTTRAIL} file_name FORMAT RELEASE major.minor

For example:

EXTTRAIL ./dirdat/lt FORMAT RELEASE 12.3

  • On both the source and target GoldenGate environments, install Oracle GoldenGate 18c (18.1.0) using Oracle Universal Installer (OUI) into the existing Oracle GoldenGate directory.

Note: Ensure the checkbox to start the Manager is not ticked.
 

[oracle@rac01 sf_software]$ cd 181000_fbo_ggs_Linux_x64_shiphome
[oracle@rac01 181000_fbo_ggs_Linux_x64_shiphome]$ cd fbo_ggs_Linux_x64_shiphome/
[oracle@rac01 fbo_ggs_Linux_x64_shiphome]$ cd Disk1
[oracle@rac01 Disk1]$ ./runInstaller 

 

 

 


 

  • Execute the ulg.sql script located in the GoldenGate software root directory as SYSDBA. This script converts the existing supplemental log groups to the format required by the new release.

 

[oracle@rac01 goldengate]$ sqlplus sys as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 8 11:26:34 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password: 

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> @ulg.sql
Oracle GoldenGate supplemental log groups upgrade script.
Please do not execute any DDL while this script is running. Press ENTER to continue.


PL/SQL procedure successfully completed.

 

  • After the installation/upgrade is completed, alter the primary Extract process as well as the associated data pump Extract processes to write to a new trail sequence number via the ETROLLOVER command.

Reposition both the existing Extract Pump as well as the Replicat processes to start reading from and processing the new trail file.

[oracle@rac01 goldengate]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 18.1.0.0.0 OGGCORE_18.1.0.0.0_PLATFORMS_180928.0432_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Sep 29 2018 04:22:21
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2018, Oracle and/or its affiliates. All rights reserved.



GGSCI (rac01.localdomain) 1> info all 

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED                                           
EXTRACT     STOPPED     EXT1        00:00:00      01:13:55    
EXTRACT     STOPPED     PUMP1       00:00:00      01:13:55

GGSCI (rac01.localdomain) 2> alter extract ext1 etrollover 

2019-01-08 00:44:13  INFO    OGG-01520  Rollover performed.  For each affected output trail of Version 10 or higher format, after starting the source extract, issue ALTER EXTSEQNO for that trail's reader (either pump EXTRACT or REPLICAT) to move the reader's scan to the new trail file;  it will not happen automatically.
EXTRACT altered.


GGSCI (rac01.localdomain) 3> alter extract pump1 etrollover 

2019-01-08 00:44:51  INFO    OGG-01520  Rollover performed.  For each affected output trail of Version 10 or higher format, after starting the source extract, issue ALTER EXTSEQNO for that trail's reader (either pump EXTRACT or REPLICAT) to move the reader's scan to the new trail file;  it will not happen automatically.
EXTRACT altered.


GGSCI (rac01.localdomain) 4> info ext1 detail 

EXTRACT    EXT1      Initialized   2019-01-07 14:36   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:00:53 ago)
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2019-01-07 23:29:38
                     SCN 0.3272690 (3272690)

  Target Extract Trails:

  Trail Name                                       Seqno        RBA     Max MB Trail Type

  ./dirdat/ogg1/lt                                     3          0        500 EXTTRAIL  


GGSCI (rac01.localdomain) 5> alter pump1 extseqno 3 extrba 0
EXTRACT altered.


GGSCI (rac01.localdomain) 6> info pump1 detail 

EXTRACT    PUMP1     Initialized   2019-01-08 00:45   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:00:08 ago)
Log Read Checkpoint  File /acfs_oh/app/goldengate/dirdat/ogg1/lt000000003
                     First Record  RBA 0

  Target Extract Trails:

  Trail Name                                       Seqno        RBA     Max MB Trail Type

  ./dirdat/ogg2/rt                                     3          0        500 RMTTRAIL  

  
GGSCI (rac01.localdomain) 7> alter rep1 extseqno 3 extrba 0

2019-01-08 00:46:08  INFO    OGG-06594  Replicat REP1 has been altered. Even the start up position might be updated, duplicate suppression remains active in next startup. To override duplicate suppression, start REP1 with NOFILTERDUPTRANSACTIONS option.

REPLICAT (Integrated) altered.

  • Start all the GoldenGate processes in the new GoldenGate 18c environment
GGSCI (rac01.localdomain) 8> start mgr
Manager started.


GGSCI (rac01.localdomain) 9> info all 

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     STARTING    EXT1        00:00:00      00:02:06    
EXTRACT     STARTING    PUMP1       00:00:00      00:00:50    
REPLICAT    STARTING    REP1        00:00:00      00:00:11    


GGSCI (rac01.localdomain) 10>

GGSCI (rac01.localdomain) 10> !
info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXT1        00:00:00      00:00:06    
EXTRACT     RUNNING     PUMP1       00:00:00      00:00:07    
REPLICAT    RUNNING     REP1        00:00:00      00:00:03    

Oracle GoldenGate Automatic Conflict Detection and Resolution (CDR)


Automatic Conflict Detection and Resolution is a new feature that is specific to Oracle GoldenGate 12c (12.3.0.1) and Oracle Database 12c Release 2 (12.2.0.1) and above.

We can now configure and manage Oracle GoldenGate automatic conflict detection and resolution in the Oracle Database via the DBMS_GOLDENGATE_ADM package as well as monitor CDR using a number of data dictionary views.

This is done using the ADD_AUTO_CDR procedure which is part of the Oracle database DBMS_GOLDENGATE_ADM package.

Prior to GoldenGate 12.3, we had to use the Replicat COMPARECOLS and RESOLVECONFLICT parameters for CDR, for example:

MAP SH.test_cdr, TARGET SH.test_cdr,&
COMPARECOLS (ON UPDATE ALL, ON DELETE ALL ),&
RESOLVECONFLICT (INSERTROWEXISTS,(DEFAULT,OVERWRITE));

There are two methods used for Automatic CDR:

a) Latest Timestamp Conflict Detection and Resolution
b) Delta Conflict Detection and Resolution

This note provides an example of Automatic CDR using the Latest Timestamp Conflict Detection and Resolution method.

The environment for this example consists of two CDBs (CDB1, CDB2) located on the same VirtualBox VM, each containing a single Pluggable Database (PDB1, PDB2). We have configured an Active-Active GoldenGate environment which replicates data between PDB1 and PDB2 and vice-versa.

So on the GoldenGate environment side we have this setup:

EXT1>>PUMP1>>REP1>>PDB2
EXT2>>PUMP2>>REP2>>PDB1

We will simulate a conflict by inserting a single row with the same primary key value into the same table in PDB1 and PDB2 at the same time. We do this via cron jobs which call shell scripts that perform the DML activity.

The table name is TEST_CDR and the schema is HR.

Steps:

Execute the ADD_AUTO_CDR procedure and specify the table to configure for Automatic CDR

Note: we do this in both databases, connecting as the GoldenGate administration user we have created, C##OGGADMIN.

This will add an invisible column called CDRTS$ROW to the TEST_CDR table, and also create a ‘tombstone’ table containing the LCRs of deleted rows, which is used to handle conflicts related to DELETEs and INSERTs.

In the Replicat parameter file we need to add a parameter – MAPINVISIBLECOLUMNS.
 

SQL> conn c##oggadmin/oracle@pdb1
Connected.

SQL> BEGIN
  DBMS_GOLDENGATE_ADM.ADD_AUTO_CDR(
    schema_name => 'HR',
    table_name  => 'TEST_CDR',
  record_conflicts => TRUE);
END;
/    2    3    4    5    6    7  

PL/SQL procedure successfully completed.


SQL> COLUMN TABLE_OWNER FORMAT A15
COLUMN TABLE_NAME FORMAT A15
COLUMN TOMBSTONE_TABLE FORMAT A15
COLUMN ROW_RESOLUTION_COLUMN FORMAT A25

SELECT TABLE_OWNER,
       TABLE_NAME, 
       TOMBSTONE_TABLE,
       ROW_RESOLUTION_COLUMN 
  FROM ALL_GG_AUTO_CDR_TABLES
  ORDER BY TABLE_OWNER, TABLE_NAME;SQL> SQL> SQL> SQL> SQL>   2    3    4    5    6  

TABLE_OWNER	TABLE_NAME	TOMBSTONE_TABLE ROW_RESOLUTION_COLUMN
--------------- --------------- --------------- -------------------------
HR		TEST_CDR	DT$_TEST_CDR	CDRTS$ROW

 

View Column Group information

A column group is a logical grouping of one or more columns in a replicated table enabled for Automatic CDR where conflict detection and resolution is performed on the columns in the column group separately from the other columns in the table.

When we configure the TEST_CDR table for Automatic CDR with the ADD_AUTO_CDR procedure, all the columns in the table are added to a default column group. To define other column groups for the same table, run the ADD_AUTO_CDR_COLUMN_GROUP procedure.
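A hedged sketch of defining an additional column group with this procedure (the column list and group name are illustrative):

SQL> BEGIN
  DBMS_GOLDENGATE_ADM.ADD_AUTO_CDR_COLUMN_GROUP(
    schema_name       => 'HR',
    table_name        => 'TEST_CDR',
    column_list       => 'REC_DESC',
    column_group_name => 'REC_DESC_CG');
END;
/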

The documentation states the following about Column Groups:

“Column groups enable different databases to update different columns in the same row at nearly the same time without causing a conflict. When column groups are configured for a table, conflicts can be avoided even if different databases update the same row in the table. A conflict is not detected if the updates change the values of columns in different column groups”
 

SQL> COLUMN TABLE_OWNER FORMAT A10
COLUMN TABLE_NAME FORMAT A10
COLUMN COLUMN_GROUP_NAME FORMAT A17
COLUMN COLUMN_NAME FORMAT A15
COLUMN RESOLUTION_COLUMN FORMAT A23

SELECT TABLE_OWNER,
       TABLE_NAME, 
       COLUMN_GROUP_NAME,
       COLUMN_NAME,
       RESOLUTION_COLUMN 
  FROM ALL_GG_AUTO_CDR_COLUMNS
  ORDER BY TABLE_OWNER, TABLE_NAME; 

TABLE_OWNE TABLE_NAME COLUMN_GROUP_NAME COLUMN_NAME	RESOLUTION_COLUMN
---------- ---------- ----------------- --------------- -----------------------
HR	   TEST_CDR   IMPLICIT_COLUMNS$ REC_ID		CDRTS$ROW
HR	   TEST_CDR   IMPLICIT_COLUMNS$ REC_DESC	CDRTS$ROW

 
Create two shell scripts which will each perform an INSERT into the TEST_CDR table, and execute both scripts at the same time via cron.

Note: the primary key column is REC_ID, and we are inserting a row into the table in both PDB1 and PDB2 using the same value for REC_ID, which will cause a conflict that needs to be resolved.
 

[oracle@rac02 ~]$ vi cdb1_dml.sh
 #!/bin/bash
 export ORACLE_HOME=/acfs_oh/product/12.2.0/dbhome_1
 export ORACLE_SID=cdb1_2
 PATH=$PATH:$ORACLE_HOME/bin
 sqlplus -s system/G#vin2407@pdb1<<EOF
 insert into test_cdr (rec_id,rec_desc) values (1,'INSERT @ PDB1');
 commit;
 EOF

[oracle@rac02 ~]$ chmod +x cdb1_dml.sh

[oracle@rac02 ~]$ vi cdb2_dml.sh
 #!/bin/bash
 export ORACLE_HOME=/acfs_oh/product/12.2.0/dbhome_1
 export ORACLE_SID=cdb2_2
 PATH=$PATH:$ORACLE_HOME/bin
 sqlplus -s system/G#vin2407@pdb2<<EOF
 insert into test_cdr (rec_id,rec_desc) values (1,'INSERT @ PDB2');
 commit;
 EOF

[oracle@rac02 ~]$ chmod +x cdb2_dml.sh

[oracle@rac02 ~]$ crontab -e
 20 14 * * * /home/oracle/cdb1_dml.sh
 20 14 * * * /home/oracle/cdb2_dml.sh

[oracle@rac02 ~]$ crontab -l
 20 14 * * * /home/oracle/cdb1_dml.sh
 20 14 * * * /home/oracle/cdb2_dml.sh

 

After the shell scripts have been automatically executed via cron, verify the row which has finally been inserted into the TEST_CDR table in both databases.

Note: the row with the values (1, ‘INSERT @ PDB1’) has been discarded.
 

SQL> conn system/G#vin2407@pdb1
Connected.

SQL> select * from hr.test_cdr;

    REC_ID REC_DESC
---------- --------------------
	 1 INSERT @ PDB2

SQL> conn system/G#vin2407@pdb2
Connected.

SQL> /

    REC_ID REC_DESC
---------- --------------------
	 1 INSERT @ PDB2

 
Note the value of the hidden column CDRTS$ROW: 09-JAN-19 06.24.02.210285 AM. This is used to resolve the INSERT conflict.
 

SQL> alter table hr.test_cdr  modify CDRTS$ROW visible;

Table altered.

SQL> select * from hr.test_cdr;

    REC_ID REC_DESC
---------- --------------------
CDRTS$ROW
---------------------------------------------------------------------------
	 1 INSERT @ PDB2
09-JAN-19 06.24.02.210285 AM

 

Who has won and who has lost?

If we query the DBA_APPLY_ERROR_MESSAGES view in PDB1 we can see the APPLIED column has the value ‘WON’ while the same column in PDB2 has the value of ‘LOST’.

We can also see that the CDRTS$ROW column has been used to resolve the INSERT ROW EXISTS conflict.

This means that the row that was changed in PDB2 has been applied on PDB1 (WON), while the row that was changed on PDB1 (and replicated to PDB2) has been discarded at PDB2 (LOST).
 

SQL>  conn system/G#vin2407@pdb1
Connected.

SQL> select OBJECT_NAME, CONFLICT_TYPE,APPLIED_STATE,CONFLICT_INFO
  2  from DBA_APPLY_ERROR_MESSAGES;

OBJECT_NAM CONFLICT_TYPE      APPLIED CONFLICT_INFO
---------- ------------------ ------- --------------------
TEST_CDR   INSERT ROW EXISTS  WON     CDRTS$ROW:W

SQL> conn system/G#vin2407@pdb2
Connected.

SQL> /

OBJECT_NAM CONFLICT_TYPE      APPLIED CONFLICT_INFO
---------- ------------------ ------- --------------------
TEST_CDR   INSERT ROW EXISTS  LOST    CDRTS$ROW:L

 

How was the conflict resolved using the Latest Timestamp Method?

The conflict is resolved using this criterion:

“If the timestamp of the row LCR is earlier than the timestamp in the table row, then the row LCR is discarded, and the table values are retained.”

So when a change is made in PDB1, the EXT1 Extract captures the change and writes it to the local trail file, PUMP1 transmits the trail over the network, and it is processed by REP1, which inserts into database PDB2.

On the other hand, when a change is made in PDB2, the EXT2 Extract captures the change and writes it to the local trail file, PUMP2 transmits the trail over the network, and it is processed by REP2, which inserts into database PDB1.

If we look at the trail file processed by REP1 (which inserts into PDB2) using the logdump utility, we can see that the timestamp value of the CDRTS$ROW column is 2019-01-09:06:24:02.210285000.
 

2019/01/09 14:24:02.000.630 Insert               Len    65 RBA 2771 
Name: PDB2.HR.TEST_CDR  (TDR Index: 2) 
After  Image:                                             Partition 12   G  s   
 0000 0500 0000 0100 3101 0011 0000 000d 0049 4e53 | ........1........INS  
 4552 5420 4020 5044 4232 0200 1f00 0000 3230 3139 | ERT @ PDB2......2019  
 2d30 312d 3039 3a30 363a 3234 3a30 322e 3231 3032 | -01-09:06:24:02.2102  
 3835 3030 30                                      | 85000  
Column     0 (x0000), Len     5 (x0005)  
 0000 0100 31                                      | ....1  
Column     1 (x0001), Len    17 (x0011)  
 0000 0d00 494e 5345 5254 2040 2050 4442 32        | ....INSERT @ PDB2  
Column     2 (x0002), Len    31 (x001f)  
 0000 3230 3139 2d30 312d 3039 3a30 363a 3234 3a30 | ..2019-01-09:06:24:0  
 322e 3231 3032 3835 3030 30                       | 2.210285000  

If we look at the trail file processed by REP2 (which inserts into PDB1) using the logdump utility, we can see that the timestamp value of the CDRTS$ROW column is 2019-01-09:06:24:02.205639000.
 

2019/01/09 14:24:02.000.529 Insert               Len    65 RBA 3394 
Name: PDB1.HR.TEST_CDR  (TDR Index: 2) 
After  Image:                                             Partition 12   G  s   
 0000 0500 0000 0100 3101 0011 0000 000d 0049 4e53 | ........1........INS  
 4552 5420 4020 5044 4231 0200 1f00 0000 3230 3139 | ERT @ PDB1......2019  
 2d30 312d 3039 3a30 363a 3234 3a30 322e 3230 3536 | -01-09:06:24:02.2056  
 3339 3030 30                                      | 39000  
Column     0 (x0000), Len     5 (x0005)  
 0000 0100 31                                      | ....1  
Column     1 (x0001), Len    17 (x0011)  
 0000 0d00 494e 5345 5254 2040 2050 4442 31        | ....INSERT @ PDB1  
Column     2 (x0002), Len    31 (x001f)  
 0000 3230 3139 2d30 312d 3039 3a30 363a 3234 3a30 | ..2019-01-09:06:24:0  
 322e 3230 3536 3339 3030 30                       | 2.205639000  

 
So in the one case, the CDRTS$ROW value already in the table is higher than the value contained in the trail file, and the incoming row is ignored and not applied to the database; in the other case, the timestamp value in the trail file is higher than the database column value, and the existing row is overwritten and replaced.
 

Check the CDR statistics

 

GGSCI (rac01.localdomain) 4> stats rep2 latest reportcdr

Sending STATS request to REPLICAT REP2 ...

Start of Statistics at 2019-01-09 15:04:40.


Integrated Replicat Statistics:

	Total transactions            		           1.00
	Redirected                    		           0.00
	Replicated procedures         		           0.00
	DDL operations                		           0.00
	Stored procedures             		           0.00
	Datatype functionality        		           0.00
	Event actions                 		           0.00
	Direct transactions ratio     		           0.00%

Replicating from PDB2.HR.TEST_CDR to PDB1.HR.TEST_CDR:

*** Latest statistics since 2019-01-09 14:24:26 ***
	Total inserts                   	           1.00
	Total updates                   	           0.00
	Total deletes                   	           0.00
	Total discards                  	           0.00
	Total operations                	           1.00
	Total CDR conflicts                    	           1.00
	CDR resolutions succeeded              	           1.00
	CDR INSERTROWEXISTS conflicts          	           1.00

End of Statistics.

Oracle GoldenGate 18c New Features


Oracle GoldenGate 18c now provides support for some features which were introduced in Oracle Database 12c – namely Identity Columns and In-Database Row Archival.

Identity columns enable us to specify that a column should be automatically populated from a system-generated sequence, similar to an AUTO_INCREMENT column in MySQL or an IDENTITY column in SQL Server.

The Oracle 12c Information Lifecycle Management (ILM) feature called In-Database Archiving gives the database the ability to distinguish between active data and ‘older’, inactive data, while storing all of the data in the same database.

When we enable row archival for a table, a hidden column called ORA_ARCHIVE_STATE is added to the table. This column is automatically assigned a value of 0 to denote current data, and we can decide which rows in the table are to be considered candidates for row archiving; those rows are assigned the value 1.

Once the older data is distinguished from the current data, we can archive and compress it to reduce the size of the database, or move it to a cheaper storage tier to reduce the cost of storing the data.

Note that Oracle GoldenGate support for these features requires Oracle Database 18c and above. It also requires usage of the Integrated Extract and Integrated Replicat or Integrated Parallel Replicat.

 
Identity Columns

Note that the identity column POSITION_ID in the table is automatically populated.
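The JOB_POSITIONS table used here would have been created along these lines (a reconstruction for illustration; the exact DDL and column sizes are assumed):

SQL> create table hr.job_positions
     (position_id   number generated always as identity,
      position_name varchar2(50));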
 

SQL> insert into hr.job_positions
  2  (position_name)
  3  values
  4  ('President');

1 row created.

SQL> insert into hr.job_positions
  2  (position_name)
  3   values
  4  ('Vice-President');

1 row created.

SQL>  insert into hr.job_positions
  2  (position_name)
  3   values
  4  ('Manager');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from hr.job_positions;

POSITION_ID POSITION_NAME
----------- --------------------
	  1 President
	  2 Vice-President
	  3 Manager

 

Verify the extract has captured the changes
 

GGSCI (rac01.localdomain) 3> stats ext1 latest 

Sending STATS request to EXTRACT EXT1 ...

Start of Statistics at 2019-01-16 12:01:19.

Output to ./dirdat/ogg1/lt:

Extracting from PDB1.HR.JOB_POSITIONS to PDB1.HR.JOB_POSITIONS:

*** Latest statistics since 2019-01-16 12:00:15 ***
	Total inserts                   	           3.00
	Total updates                   	           0.00
	Total deletes                   	           0.00
	Total discards                  	           0.00
	Total operations                	           3.00

End of Statistics.

 
Verify replication has been performed on the target table
 


SQL>  select * from hr.job_positions;

POSITION_ID POSITION_NAME
----------- --------------------
	  1 President
	  2 Vice-President
	  3 Manager

 
 
In-Database Row Archival
 

Enable row archival for the SYSTEM.MYOBJECTS table. This table is based on the data dictionary view ALL_OBJECTS.


SQL> alter table system.myobjects row archival;

Table altered.

SQL> select distinct ora_archive_state from system.myobjects;

ORA_ARCHIVE_STATE
--------------------------------------------------------------------------------
0

 

We now perform the row archival. Data older than 01-JUL-18 is considered as ‘old’ and needs to be archived. Use the ORA_ARCHIVE_STATE=DBMS_ILM.ARCHIVESTATENAME(1) clause in the UPDATE statement to achieve this row archival.

If we query the table after the archival is performed, we see that it now shows only 310 rows and not 71710 rows!
 

SQL>  select count(*) from system.myobjects;

  COUNT(*)
----------
     71710

SQL> select count(*) from system.myobjects where created < '01-JUL-18';

  COUNT(*)
----------
     71400

SQL> select count(*) from system.myobjects where created > '01-JUL-18';

  COUNT(*)
----------
       310

SQL>  update system.myobjects
 set ORA_ARCHIVE_STATE=DBMS_ILM.ARCHIVESTATENAME(1)
  where created <'01-JUL-18';  2    3  

71400 rows updated.

SQL> commit;

Commit complete.

SQL> select count(*) from system.myobjects;

  COUNT(*)
----------
       310

 
Verify the extract has captured this UPDATE statement
 

GGSCI (host01.localdomain as c##oggadmin@ORCLCDB/PDB1) 19> stats ext1 latest 

Sending STATS request to EXTRACT EXT1 ...

Start of Statistics at 2019-01-19 10:37:54.

Output to ./dirdat/lt:

Extracting from PDB1.SYSTEM.MYOBJECTS to PDB1.SYSTEM.MYOBJECTS:

*** Latest statistics since 2019-01-19 10:26:27 ***
	Total inserts                   	       71710.00
	Total updates                   	       71400.00
	Total deletes                   	           0.00
	Total discards                  	           0.00
	Total operations                	      143110.00

End of Statistics.

 

Note that replication has also been performed on the target table, and with row archival enabled on the target table as well, we see just 310 rows present in the table.
 

GGSCI (host02.localdomain) 10> stats rep1 latest 

Sending STATS request to REPLICAT REP1 ...

Start of Statistics at 2019-01-19 10:43:44.


Integrated Replicat Statistics:

	Total transactions            		           2.00
	Redirected                    		           0.00
	Replicated procedures         		           0.00
	DDL operations                		           0.00
	Stored procedures             		           0.00
	Datatype functionality        		           0.00
	Event actions                 		           0.00
	Direct transactions ratio     		           0.00%

Replicating from PDB1.SYSTEM.MYOBJECTS to PDB2.SYSTEM.MYOBJECTS:

*** Latest statistics since 2019-01-19 10:43:07 ***
	Total inserts                   	       71710.00
	Total updates                   	       71400.00
	Total deletes                   	           0.00
	Total discards                  	           0.00
	Total operations                	       143110.00

End of Statistics.

SQL>  select count(*) from system.myobjects;

  COUNT(*)
----------
       310

Oracle GoldenGate 12c on DBFS for RAC and Exadata


GoldenGate 12c (12.2) New Features


GoldenGate 12.2 New Feature – Self-describing Trail Files

