GoldenGate numeric data validation with REGEXP

Here is a simple validation mapping to verify whether the value in the trail file is numeric.

I had to use this where the source column was a VARCHAR2 and the target was a numeric field.

The requirement: if the value in the trail file is not numeric, just put a zero in the field.


MAP SOURCE.T1, TARGET TARGET.T1, &
SQLEXEC (ID lookup_st, &
QUERY " select REGEXP_INSTR(:vac_num, '[a-z]', 1, 1, 0, 'i') as ac_num_true from dual ", &
PARAMS (vac_num = num)), &
COLMAP (num = @IF (lookup_st.ac_num_true = 0, num, 0), text = text);
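As a quick sanity check of the lookup, the same REGEXP_INSTR call can be run directly against DUAL; the sample values below are made up for illustration:

-- Returns 0 when the value contains no letters (the COLMAP then keeps the value),
-- otherwise the position of the first letter (the COLMAP then maps a zero).
select REGEXP_INSTR('12345', '[a-z]', 1, 1, 0, 'i') as numeric_value,
       REGEXP_INSTR('12a45', '[a-z]', 1, 1, 0, 'i') as non_numeric_value
from dual;

-- NUMERIC_VALUE NON_NUMERIC_VALUE
-- ------------- -----------------
--             0                 3

Note that this check only rejects letters. If spaces, punctuation or other non-digit characters should also be treated as non-numeric, a pattern such as '[^0-9]' is an option, keeping in mind it would also flag decimal points and minus signs.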


Install and Configure Oracle GoldenGate 12.3 Microservices Architecture

The objective of this document is to demonstrate how to install and configure the new Oracle 12.3 GoldenGate Microservices Architecture.

For the purpose of demonstration and training, I have created one VM database (oradb1) and will set up unidirectional replication between two existing schemas: HR_SOURCE to HR_TARGET.

The Oracle GoldenGate Microservices Architecture has six main components:

  • Service Manager
    • A Service Manager acts as a watchdog for other services available with Microservices Architecture.
  • Administration Server
    • An Administration Server supervises, administers, manages, and monitors processes operating within an Oracle GoldenGate deployment for both active and inactive processes.
  • Distribution Server
    • A Distribution Server is an application that functions as a networked data distribution agent in support of conveying and processing data and commands in a distributed networked deployment. It is a high-performance application that can handle multiple commands and data streams from multiple source trail files concurrently.
  • Receiver Server
    • A Receiver Server is the central control service that handles all incoming trail files. It interoperates with the Distribution Server and provides compatibility with the classic architecture pump for remote classic deployments.
  • Performance Metrics Server
    • The Performance Metrics Server uses the metrics service to collect and store instance deployment performance results. This metrics collection and repository is separate from the administration layer information collection.
  • Admin Client
    • The Admin Client is a command line utility (similar to the classic GGSCI utility). It uses the REST API published by the Microservices Architecture (MA) Servers to accomplish control and configuration tasks in an Oracle GoldenGate deployment; a connection sketch follows this list.
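As a quick illustration of the Admin Client, here is a minimal session sketch. The host, port, deployment name and credentials (oradb1, 9011, ogg_deploy1, oggadmin) are placeholders for this demo environment, not values from the document, and the prompt is abbreviated:

$ $OGG_HOME/bin/adminclient
> CONNECT http://oradb1:9011 DEPLOYMENT ogg_deploy1 AS oggadmin PASSWORD ********
> INFO ALL

The same operations are also available over plain REST calls to the individual servers, which is what the Admin Client uses under the covers.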

View the entire document here: Install Oracle GoldenGate 12.3 Microservices

Replicat abending with database error 1403

Replicat abending with database error 1403 (no data found) during the initial startup, while processing a compressed update.

This error is seen very frequently in a production environment when implementing GoldenGate for the first time.

During the instantiation of the target database, the Extract is started with BEGIN NOW and then the export/RMAN backup is started. If at that point there is an open transaction that has not yet been committed, it can be missed by both processes. When that open transaction is committed later and an update/delete is fired against those rows, and the Extract captures that change, guess what happens?

The Replicat will not be able to find the row to apply that later update/delete, and it abends with error 1403.

There is a way to avoid this situation: alter the Extract to begin at the start time of the oldest open (long-pending) transaction shown in gv$transaction/v$transaction.

Use the query below to find it:

SQL> select min(START_TIME) from gv$transaction;

MIN(START_TIME)
--------------------
10/25/16 21:13:04

Alter the extract with the begin timestamp to process the oldest open transaction:
# delete the existing extract and pump trails
# create a new trail sequence for extract and point the pump to read the new trail sequence

# ggsci> ALTER EXTRACT <extract_name>, BEGIN 2016-10-25 21:13:04
# ggsci> ALTER EXTRACT <extract_name>, THREAD <n>, BEGIN 2016-10-25 21:13:04   (per thread, for a RAC source)

# alter the replicat to the new trail sequence and start the processes.

Of course, remember to use HANDLECOLLISIONS until the Replicat has moved past the point in time up to which your original trail had been processed (a parameter sketch follows below).
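For illustration, a minimal Replicat parameter sketch with HANDLECOLLISIONS enabled, followed by the GGSCI command to switch it off once the overlap window has been applied; the group name (rep1), credential alias and schema names are placeholders:

REPLICAT rep1
USERIDALIAS ogg_target
-- Resolve duplicate-row and missing-row errors while the overlap window is replayed
HANDLECOLLISIONS
MAP <source_schema>.*, TARGET <target_schema>.*;

# Once the Replicat has moved past the instantiation point:
# ggsci> SEND REPLICAT rep1, NOHANDLECOLLISIONS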

And for next time, make sure you run the GV$TRANSACTION SQL above first, and create the extract process to start capturing from the time returned by GV$TRANSACTION.

Extract Index DDL from expdp

I recently wanted to drop the indexes during a load and recreate them afterwards.

There are several ways to do that, but this time, since I already had a schema export taken, I wanted to extract the index DDL from the dump file.

Here are the script files:

JDEPROD@proa1-/cloudfs/DBA/scripts$ cat expdp_JDE8_schemas.sh
export NOW=$(date +"%F_%H%M%S")
export ORACLE_SID=JDEPROD
export ORAENV_ASK=NO
. oraenv

echo "ORACLE HOME is: "$ORACLE_HOME
mv /cloudfs/DBA/scripts/JDE82_schemas_exp_*.* /cloudfs/DBA/scripts/old_dump

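# Metadata-only export of the listed JDE schemas (CONTENT=METADATA_ONLY, no table data)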
$ORACLE_HOME/bin/expdp \’/ as sysdba\’ \
schemas=JDE,PRODDTA,PRODCTL,PD920 \
directory=DATA_PUMP_DIR \
CONTENT=METADATA_ONLY \
dumpfile=JDE82_schemas_exp_$NOW.dmp \
logfile=JDE82_schemas_exp_$NOW.log

mv /u01/app/oracle/admin/JDEPROD/dpdump/JDE82_schemas_exp_$NOW.* /cloudfs/DBA/scripts

 

$ cat 2_impdp_index_only.sh
export NOW=$(date +"%F_%H%M%S")
export ORACLE_SID=JDEPROD
export ORAENV_ASK=NO
. oraenv

echo "ORACLE HOME is: "$ORACLE_HOME

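# Write only the CREATE INDEX DDL to a SQL file (sqlfile=...); nothing is actually imported into the database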
$ORACLE_HOME/bin/impdp \’/ as sysdba\’ \
schemas=PRODCTL,PRODDTA \
directory=EXP_DIR \
CONTENT=METADATA_ONLY \
INCLUDE="INDEX" \
sqlfile=7_create_indexes.sql \
dumpfile=JDE82_schemas_exp_2017-07-06_122035.dmp \
logfile=JDE82_schemas_exp.log
JDEPROD@droa1-/cloudfs/DBA/scripts

Output in the 7_create_indexes.sql file:

$ cat 7_create_indexes.sql | more
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: SCHEMA_EXPORT/TABLE/INDEX/INDEX
-- CONNECT PRODDTA
CREATE UNIQUE INDEX "PRODDTA"."F0000194_0" ON "PRODDTA"."F0000194" ("SYEDUS", "SYEDBT", "SYEDTN", "SYEDLN")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE( PCTINCREASE 0)
TABLESPACE "PRODDTAI" PARALLEL 4 ;

ALTER INDEX "PRODDTA"."F0000194_0" NOPARALLEL;
CREATE UNIQUE INDEX "PRODDTA"."F00022_0" ON "PRODDTA"."F00022" ("UKOBNM")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE( PCTINCREASE 0)
TABLESPACE "PRODDTAI" PARALLEL 4 ;

ALTER INDEX "PRODDTA"."F00022_0" NOPARALLEL;
CREATE UNIQUE INDEX "PRODDTA"."F00023_0" ON "PRODDTA"."F00023" ("DNCTID", "DNSY", "DNORNN", "DNKCO", "DNDCT", "DNCTRY", "DNFY", "DNOBNM")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE( PCTINCREASE 0)
TABLESPACE "PRODDTAI" PARALLEL 4 ;

..
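With the CREATE INDEX DDL safely captured in the SQL file, the drop side can be scripted as well. A minimal sketch, assuming the indexes of interest belong to PRODDTA/PRODCTL and that none of them is enforcing an enabled primary/unique key constraint (those would need the constraint disabled or handled separately); the spool file name is just an example:

-- Generate DROP INDEX statements for the schemas being loaded.
-- Review the spooled output before running it.
SET PAGESIZE 0 LINESIZE 200 FEEDBACK OFF
SPOOL drop_indexes.sql
SELECT 'DROP INDEX "' || owner || '"."' || index_name || '";'
FROM   dba_indexes
WHERE  owner IN ('PRODDTA', 'PRODCTL')
ORDER  BY owner, index_name;
SPOOL OFF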