Run SQLcl commands from SQL Developer

SQLcl has many new little tricks that can help us on a daily basis. Below is an example of how to execute SQLcl commands from SQL Developer.

Here is the info <table> command, which displays the table's structure:

info <table>

And here is the ddl <table name> command, which generates the DDL for the same table:

ddl <table name>
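As a minimal sketch, assuming a table named EMP exists in the current schema (the table name is an assumption), the two commands look like this in the SQL Developer worksheet:

```
SQL> info emp
SQL> ddl emp
```

info prints the column definitions and constraints for the table, while ddl generates the CREATE TABLE statement via DBMS_METADATA.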


Extract Index DDL from expdp

I recently wanted to drop the indexes during a load and recreate after.

There are several ways to do that, but since I had a schema export already taken, this time I wanted to extract the index DDL from the dump file.

Here are the script files:

JDEPROD@proa1-/cloudfs/DBA/scripts$ cat expdp_JDE8_schemas.sh
export NOW=$(date +"%F_%H%M%S")
export ORACLE_SID=JDEPROD
export ORAENV_ASK=NO
. oraenv

echo "ORACLE HOME is: "$ORACLE_HOME
mv /cloudfs/DBA/scripts/JDE82_schemas_exp_*.* /cloudfs/DBA/scripts/old_dump

$ORACLE_HOME/bin/expdp \’/ as sysdba\’ \
schemas=JDE,PRODDTA,PRODCTL,PD920 \
directory=DATA_PUMP_DIR \
CONTENT=METADATA_ONLY \
dumpfile=JDE82_schemas_exp_$NOW.dmp \
logfile=JDE82_schemas_exp_$NOW.log

mv /u01/app/oracle/admin/JDEPROD/dpdump/JDE82_schemas_exp_$NOW.* /cloudfs/DBA/scripts
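The NOW variable stamps each dump and log file name with a sortable timestamp, so repeated runs don't collide. A standalone sketch of the naming convention used above:

```shell
# Same format string as the export script; e.g. 2017-07-06_122035
NOW=$(date +%F_%H%M%S)
DUMPFILE="JDE82_schemas_exp_$NOW.dmp"
echo "$DUMPFILE"
```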

$ cat 2_impdp_index_only.sh
export NOW=$(date +"%F_%H%M%S")
export ORACLE_SID=JDEPROD
export ORAENV_ASK=NO
. oraenv

echo "ORACLE HOME is: "$ORACLE_HOME

$ORACLE_HOME/bin/impdp \’/ as sysdba\’ \
schemas=PRODCTL,PRODDTA \
directory=EXP_DIR \
CONTENT=METADATA_ONLY \
INCLUDE="INDEX" \
sqlfile=7_create_indexes.sql \
dumpfile=JDE82_schemas_exp_2017-07-06_122035.dmp \
logfile=JDE82_schemas_exp.log
JDEPROD@droa1-/cloudfs/DBA/scripts

Output in the 7_create_indexes.sql file:

$ cat 7_create_indexes.sql | more
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: SCHEMA_EXPORT/TABLE/INDEX/INDEX
-- CONNECT PRODDTA
CREATE UNIQUE INDEX "PRODDTA"."F0000194_0" ON "PRODDTA"."F0000194" ("SYEDUS", "SYEDBT", "SYEDTN", "SYEDLN")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE( PCTINCREASE 0)
TABLESPACE "PRODDTAI" PARALLEL 4 ;

ALTER INDEX "PRODDTA"."F0000194_0" NOPARALLEL;
CREATE UNIQUE INDEX "PRODDTA"."F00022_0" ON "PRODDTA"."F00022" ("UKOBNM")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE( PCTINCREASE 0)
TABLESPACE "PRODDTAI" PARALLEL 4 ;

ALTER INDEX "PRODDTA"."F00022_0" NOPARALLEL;
CREATE UNIQUE INDEX "PRODDTA"."F00023_0" ON "PRODDTA"."F00023" ("DNCTID", "DNSY", "DNORNN", "DNKCO", "DNDCT", "DNCTRY", "DNFY", "DNOBNM")
PCTFREE 10 INITRANS 2 MAXTRANS 255
STORAGE( PCTINCREASE 0)
TABLESPACE "PRODDTAI" PARALLEL 4 ;

..
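Since the goal was to drop the indexes before the load and recreate them afterwards, one way to build the matching drop script is to derive it from the generated sqlfile. This is a hypothetical helper, not part of the original scripts; it uses a one-line sample file here, but in practice you would point it at 7_create_indexes.sql:

```shell
# Build a sample input; replace with 7_create_indexes.sql in real use
printf 'CREATE UNIQUE INDEX "PRODDTA"."F00022_0" ON "PRODDTA"."F00022" ("UKOBNM")\n' > sample_indexes.sql

# Turn each CREATE [UNIQUE] INDEX statement into a DROP INDEX statement
awk '/^CREATE (UNIQUE )?INDEX / {
    for (i = 1; i <= NF; i++)
        if ($i == "INDEX") { print "DROP INDEX " $(i+1) ";"; break }
}' sample_indexes.sql
# prints: DROP INDEX "PRODDTA"."F00022_0";
```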

Purge DDL tables in GoldenGate for multiple instances

When setting up DDL replication with Integrated Capture, there are a couple of tables that get populated on a regular basis. Oracle provides a purge process that you can add to the manager parameter file, and it will purge them as configured.

e.g:

 userid ogg, password xxxx  
 PURGEDDLHISTORY MINKEEPDAYS 3, MAXKEEPDAYS 5, FREQUENCYMINUTES 30  
 PURGEMARKERHISTORY MINKEEPDAYS 3, MAXKEEPDAYS 5, FREQUENCYMINUTES 30  

This works really well when you have one instance that uses GoldenGate to replicate.

However, if you have two or more database instances with DDL replication enabled from the same GoldenGate home, you run into a restriction: multiple database logins in the same manager parameter file are not supported.

e.g:

 userid ogg@SID1, password xxxx  
 PURGEDDLHISTORY MINKEEPDAYS 3, MAXKEEPDAYS 5, FREQUENCYMINUTES 30  
 PURGEMARKERHISTORY MINKEEPDAYS 3, MAXKEEPDAYS 5, FREQUENCYMINUTES 30  
 userid ogg@SID2, password xxxx  
 PURGEDDLHISTORY MINKEEPDAYS 3, MAXKEEPDAYS 5, FREQUENCYMINUTES 30  
 PURGEMARKERHISTORY MINKEEPDAYS 3, MAXKEEPDAYS 5, FREQUENCYMINUTES 30  

At this time, the only ways around this restriction are to have multiple GoldenGate homes, one for each database instance the extracts need to connect to, or to create a temporary customized purge process as shown below.

This procedure deletes rows older than 10 days.

 CREATE OR REPLACE PROCEDURE GGATE.PURGE_GGS_HISTORY IS     
    BEGIN  
    delete from GGS_MARKER where sysdate - to_date(OPTIME, 'yyyy-mm-dd hh24:mi:ss')  > 10;  
    delete from GGS_DDL_HIST where sysdate - to_date(OPTIME, 'yyyy-mm-dd hh24:mi:ss')  > 10;  
    commit;  
    EXCEPTION  
      WHEN NO_DATA_FOUND THEN  
        NULL;  
      WHEN OTHERS THEN  
        -- Consider logging the error and then re-raise  
        RAISE;  
 END PURGE_GGS_HISTORY;  
 /  
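The procedure above only runs the deletes, so it still needs to be scheduled. A minimal sketch using DBMS_SCHEDULER follows; the job name and the daily 2 a.m. schedule are assumptions, not from the original setup:

```sql
-- Hypothetical scheduling of the custom purge; adjust name and interval as needed
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'GGATE.PURGE_GGS_HISTORY_JOB',  -- assumed job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'GGATE.PURGE_GGS_HISTORY',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/
```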

It is important to point out that DDL replication needs those records in order to replicate DDL to the target; if for some reason the purge deletes a row before it has been replicated, DDL replication will break.

That is why the supported PURGEDDLHISTORY/PURGEMARKERHISTORY parameters are the reliable and preferred method.

From GoldenGate Documentation:

Caution:

“Use caution when purging the history tables. They are critical to the integrity of the DDL synchronization processes. Premature purges are non-recoverable through Oracle GoldenGate. To prevent any possibility of permanent DDL data loss, make regular backups of the Oracle GoldenGate DDL schema.”

https://docs.oracle.com/goldengate/1212/gg-winux/GWURF/gg_parameters120.htm#GWURF594

https://docs.oracle.com/goldengate/1212/gg-winux/GWURF/gg_parameters121.htm#GWURF596

How to add a new not null column in a table without providing a default value?

The following shows how to add a new NOT NULL column, without providing a default value, to a table that already has rows.

SQL> create table scott.testn ( x number not null);

Table created.

SQL> insert into scott.testn values ( 11 );

1 row created.

SQL> commit;

Commit complete.

-- First step add new column without the NULL constraint
SQL> alter table scott.testn add nn number;

Table altered.

-- Then modify the column to be NOT NULL using NOVALIDATE
SQL> alter table scott.testn modify nn not null novalidate;

Table altered.

-- Shows the table doesn't show the NOT NULL constraint on the column NN yet
SQL> desc scott.testn;
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------------
 X                                         NOT NULL NUMBER
 NN                                                 NUMBER

-- As shown above, even though DESC doesn't display it yet, the NOT NULL constraint is already enforced: new NULL values cannot be inserted
SQL> insert into scott.testn values (12, null);
insert into scott.testn values (12, null)
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("SCOTT"."TESTN"."NN")

-- Before validating the constraint we must update the rows that have NULL values; first, find the constraint name
SQL> select column_name, constraint_name from dba_cons_columns where table_name = 'TESTN' and owner = 'SCOTT';

COLUMN_NAME          CONSTRAINT_NAME
-------------------- ------------------------------
X                    SYS_C005261
NN                   SYS_C005262

-- Update the existing rows that are NULL
SQL> update scott.testn set nn = 1 where nn is null;

1 row updated.

SQL> commit;

Commit complete.

-- Now validate the constraint
SQL> alter table scott.testn modify constraint SYS_C005262 validate;

Table altered.

-- Shows the NOT NULL constraint on the column NN
SQL> desc scott.testn;
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------------
 X                                         NOT NULL NUMBER
 NN                                        NOT NULL NUMBER

-- Shows that a new NOT NULL column cannot be added directly, as the table is not empty
SQL> alter table scott.testn add mm number not null;
alter table scott.testn add mm number not null
*
ERROR at line 1:
ORA-01758: table must be empty to add mandatory (NOT NULL) column

-- Shows that NOVALIDATE does not help either when adding a new NOT NULL column to a non-empty table
SQL> alter table scott.testn add mm number not null novalidate;
alter table scott.testn add mm number not null novalidate
*
ERROR at line 1:
ORA-01758: table must be empty to add mandatory (NOT NULL) column
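To recap, the working sequence from the example above (the constraint name SYS_C005262 comes from the dba_cons_columns query; yours will differ):

```sql
ALTER TABLE scott.testn ADD nn NUMBER;                          -- 1. add the column nullable
ALTER TABLE scott.testn MODIFY nn NOT NULL NOVALIDATE;          -- 2. constrain new rows only
UPDATE scott.testn SET nn = 1 WHERE nn IS NULL;                 -- 3. backfill existing rows
COMMIT;
ALTER TABLE scott.testn MODIFY CONSTRAINT SYS_C005262 VALIDATE; -- 4. validate existing rows
```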