Example of using a SQL profile to switch to a different execution plan

Below are two SQL statements: the first uses the index AAA_IDX and the second does a full table scan. In this case we want the first statement (the one using the index) to use the same execution plan as the second one.

SQL statements:
select * from aaa e where ename = 'aaa';
select /*+ FULL(e) */ * from aaa e where ename = 'aaa';

By querying v$sql we find the sql_id, child_number and plan_hash_value, and also check that there isn't a SQL profile already attached to the SQL.

SQL> select sql_id, child_number, sql_profile, plan_hash_value, sql_text from v$sql where sql_id IN ('63cg18v928540', '0tjtg6yqqbbxk');

SQL_ID CHILD_NUMBER SQL_PROFILE PLAN_HASH_VALUE
------------- ------------ ------------------------------------------------- ---------------
SQL_TEXT
-------------------------------------------------
63cg18v928540 0 2022030255
select * from aaa e where ename = 'aaa'

0tjtg6yqqbbxk 0 864433273
select /*+ FULL(e) */ * from aaa e where ename = 'aaa'

Display the execution plan of the second SQL with the outline option:
SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor('0tjtg6yqqbbxk', 0, 'outline'));
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

SQL_ID 0tjtg6yqqbbxk, child number 0
-------------------------------------
select /*+ FULL(e) */ * from aaa e where ename = 'aaa'

Plan hash value: 864433273

--------------------------------------------------------------------------
| Id | Operation         | Name | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |      |      |       |     2 (100)|          |
|* 1 | TABLE ACCESS FULL | AAA  |    1 |     5 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Outline Data
-------------

/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
DB_VERSION('11.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "E"@"SEL$1")
END_OUTLINE_DATA
*/

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("ENAME"='aaa')
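The hint list between BEGIN_OUTLINE_DATA and END_OUTLINE_DATA is exactly what gets passed to sqlprof_attr in the next step. As a rough helper (Python, run outside the database, against saved dbms_xplan output), the hints can be scraped like this; the parsing assumes the one-hint-per-line layout shown above, so verify it against your own plan text:

```python
import re

def extract_outline_hints(plan_text):
    """Collect the hints between BEGIN_OUTLINE_DATA and END_OUTLINE_DATA
    from dbms_xplan.display_cursor(..., 'outline') output."""
    m = re.search(r"BEGIN_OUTLINE_DATA(.*?)END_OUTLINE_DATA", plan_text, re.S)
    if not m:
        return []
    # one hint per line; strip whitespace and skip blanks
    return [line.strip() for line in m.group(1).splitlines() if line.strip()]

# sample text copied from the outline section above
sample = """/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
DB_VERSION('11.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "E"@"SEL$1")
END_OUTLINE_DATA
*/"""

hints = extract_outline_hints(sample)
print(hints)
```

Each element can then be quoted as an argument to sqlprof_attr(), e.g. profile => sqlprof_attr('FULL(@"SEL$1" "E"@"SEL$1")').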

From the generated outline, take the FULL hint and attach it to the first SQL via a SQL profile to change its execution plan to a full table scan.
define SQL_ID = '63cg18v928540';

DECLARE
  clsql_text CLOB;
BEGIN
  SELECT sql_fulltext INTO clsql_text FROM v$sqlarea WHERE sql_id = '&SQL_ID';
  dbms_sqltune.import_sql_profile(
    sql_text    => clsql_text,
    profile     => sqlprof_attr('FULL(@SEL$1 E@SEL$1)'),
    name        => 'PROFILE_&SQL_ID',
    force_match => true);
END;
/

Now when we execute the first SQL it uses the same execution plan as the second one, i.e. a full table scan.
SQL> select * from aaa e where ename = 'aaa';
Execution Plan
----------------------------------------------------------
Plan hash value: 864433273

--------------------------------------------------------------------------
| Id | Operation         | Name | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |      |    1 |     5 |     2   (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL | AAA  |    1 |     5 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("ENAME"='aaa')

Note
-----
- SQL profile "PROFILE_63cg18v928540" used for this statement

How to generate an execution plan from an AWR snapshot?

Using $ORACLE_HOME/rdbms/admin/awrsqrpt.sql one can generate the execution plan for a SQL statement from AWR by passing a start snap id, end snap id and SQL ID.

It asks for most of the same inputs as awrrpt.sql: type of report (html/text), start snap id, end snap id, SQL ID and report name, and writes the output to the file specified.

Example:
SQL> @?/rdbms/admin/awrsqrpt.sql
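To run the report repeatedly without answering the prompts each time, one can generate a small driver script that pre-sets the SQL*Plus define variables before invoking awrsqrpt.sql. This is a sketch only: it assumes the AWR scripts honor pre-set defines (as they generally do), and the define names are inferred from the interactive prompts, so verify them against your version of the script. The snap ids and SQL ID are example values, not from a real database.

```python
# Build a SQL*Plus driver script for a non-interactive AWR SQL report.
# ASSUMPTIONS: define names (begin_snap, end_snap, sqlid, ...) match the
# prompts of awrsqrpt.sql in your release; example values below are made up.
begin_snap, end_snap, sql_id = 384, 385, '63cg18v928540'

driver = "\n".join([
    "define report_type = 'html';",
    "define num_days = 0;",
    f"define begin_snap = {begin_snap};",
    f"define end_snap = {end_snap};",
    f"define sqlid = '{sql_id}';",
    f"define report_name = /tmp/awrsql_{sql_id}_{begin_snap}_{end_snap}.html",
    "@?/rdbms/admin/awrsqrpt.sql",
])

with open('awrsql_driver.sql', 'w') as f:
    f.write(driver + "\n")
# Then run: sqlplus / as sysdba @awrsql_driver.sql
```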

How to schedule snapshots for STATSPACK?

One can automate the taking of STATSPACK snapshots by scheduling a DBMS_JOB or through crontab.

Schedule automatic STATSPACK snapshots through DBMS_JOB - executing @?/rdbms/admin/spauto.sql schedules statspack.snap to run once every hour, which is the default.

Output:
SQL> connect perfstat@TESTDB
Password: **********
SQL> @?/rdbms/admin/spauto.sql

PL/SQL procedure successfully completed.

Job number for automated statistics collection for this instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Note that this job number is needed when modifying or removing
the job:

JOBNO
----------
2

Job queue process
~~~~~~~~~~~~~~~~~
Below is the current setting of the job_queue_processes init.ora
parameter – the value for this parameter must be greater
than 0 to use automatic statistics gathering:

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes integer 1000

Next scheduled run
~~~~~~~~~~~~~~~~~~
The next scheduled run for this job is:

JOB NEXT_DATE NEXT_SEC
---------- --------- --------
2 27-DEC-11 19:00:00

One can view the job by executing the following SQL.

SQL> select * FROM User_jobs WHERE job = 2;

JOB LOG_USER PRIV_USER
---------- ------------------------------ ------------------------------
SCHEMA_USER LAST_DATE LAST_SEC THIS_DATE THIS_SEC NEXT_DATE
------------------------------ --------- -------- --------- -------- ---------
NEXT_SEC TOTAL_TIME B
-------- ---------- -
INTERVAL
--------------------------------------------------------------------------------
FAILURES
----------
WHAT
--------------------------------------------------------------------------------
NLS_ENV
--------------------------------------------------------------------------------
MISC_ENV INSTANCE
---------------------------------------------------------------- ----------
1 SYS SYS
SYS 27-DEC-11
19:00:00 0 N
trunc(SYSDATE+1/24,'HH')

statspack.snap;
NLS_LANGUAGE='AMERICAN' NLS_TERRITORY='AMERICA' NLS_CURRENCY='$' NLS_ISO_CURRENC
Y='AMERICA' NLS_NUMERIC_CHARACTERS='.,' NLS_DATE_FORMAT='DD-MON-RR' NLS_DATE_LAN
GUAGE='AMERICAN' NLS_SORT='BINARY'
0102000202000000 1

To change the snapshot interval one can execute dbms_job.interval; the example below changes the interval to collect every half hour (the first argument is the job number reported by spauto).
SQL> exec dbms_job.interval(1, 'trunc(SYSDATE+1/48,''HH'')');

PL/SQL procedure successfully completed.
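The 1/24 and 1/48 in the interval expressions are fractions of a day, since SYSDATE arithmetic works in units of days. A quick sanity check of that arithmetic:

```python
# dbms_job intervals use SYSDATE arithmetic, which is in units of days,
# so the fraction added determines the snapshot frequency.
def fraction_to_minutes(fraction_of_day):
    return fraction_of_day * 24 * 60

print(fraction_to_minutes(1 / 24))   # hourly
print(fraction_to_minutes(1 / 48))   # every half hour
print(fraction_to_minutes(1 / 144))  # every ten minutes
```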

SQL> select * FROM User_jobs;

JOB LOG_USER PRIV_USER
---------- ------------------------------ ------------------------------
SCHEMA_USER LAST_DATE LAST_SEC THIS_DATE THIS_SEC NEXT_DATE
------------------------------ --------- -------- --------- -------- ---------
NEXT_SEC TOTAL_TIME B
-------- ---------- -
INTERVAL
--------------------------------------------------------------------------------
FAILURES
----------
WHAT
--------------------------------------------------------------------------------
NLS_ENV
--------------------------------------------------------------------------------
MISC_ENV INSTANCE
---------------------------------------------------------------- ----------
1 SYS SYS
SYS 27-DEC-11
19:00:00 0 N
trunc(SYSDATE+1/48,'HH')

statspack.snap;
NLS_LANGUAGE='AMERICAN' NLS_TERRITORY='AMERICA' NLS_CURRENCY='$' NLS_ISO_CURRENC
Y='AMERICA' NLS_NUMERIC_CHARACTERS='.,' NLS_DATE_FORMAT='DD-MON-RR' NLS_DATE_LAN
GUAGE='AMERICAN' NLS_SORT='BINARY'
0102000202000000 1

How to delete or truncate Statspack data?

One can truncate the data in STATSPACK or delete a range of stored snapshots.

To truncate STATSPACK data

Using $ORACLE_HOME/rdbms/admin/sptrunc.sql, one can truncate STATSPACK data. To run the script you will need to connect as PERFSTAT user.

perfstat@TESTDB> @?/rdbms/admin/sptrunc.sql

Warning
~~~~~~~
Running sptrunc.sql removes ALL data from Statspack tables. You may
wish to export the data before continuing.

About to Truncate Statspack Tables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If would like to exit WITHOUT truncating the tables, enter any text at the
begin_or_exit prompt (e.g. 'exit'), otherwise if you would like to begin
the truncate operation, press <return>

Enter value for begin_or_exit:

To delete a range of STATSPACK data, run $ORACLE_HOME/rdbms/admin/sppurge.sql; it prompts for the starting and ending snap ids.
One can first run the following SQL to show all the snap ids on the current instance.
select s.snap_id
     , to_char(s.snap_time,'dd Mon YYYY HH24:mi:ss') snap_date
     , s.baseline
     , s.snap_level "level"
     , di.host_name host_name
     , s.ucomment
  from perfstat.stats$snapshot s
     , perfstat.stats$database_instance di
     , v$database v
     , v$instance i
 where s.dbid = v.dbid
   and di.dbid = v.dbid
   and s.instance_number = i.instance_number
   and di.instance_number = i.instance_number
   and di.startup_time = s.startup_time
 order by di.db_name, i.instance_name, s.snap_id
/

SNAP_ID SNAP_DATE             B      level
---------- --------------------- - ----------
HOST_NAME
----------------------------------------------------------------
UCOMMENT
--------------------------------------------------------------------------------
1 27 Dec 2011 18:30:56 5
MACHINE
2 27 Dec 2011 18:30:58 6
MACHINE

Script to automate generating AWR reports for a day

-- The following anonymous PL/SQL block generates a script (rpt.sql) which can be run to produce AWR reports for one day. As written it covers yesterday, creating one report between each pair of snapshots.
set serveroutput on
set feedback off
spool rpt.sql
set termout off
set linesize 200
set trimspool on
DECLARE
  dbid   v$database.dbid%TYPE;
  dbname v$database.name%TYPE;
  -- snapshot history: get snap ids
  CURSOR l_cur IS
    SELECT snap_id, begin_interval_time, end_interval_time
      FROM dba_hist_snapshot
     WHERE begin_interval_time >= trunc(SYSDATE) - 1
       AND end_interval_time <= trunc(SYSDATE) + 1/24
     ORDER BY begin_interval_time;
  l_rec        l_cur%ROWTYPE;
  l_prev_rec   l_cur%ROWTYPE;
  l_begin_date VARCHAR2(30);
  l_end_date   VARCHAR2(30);
BEGIN
  SELECT dbid, name INTO dbid, dbname FROM v$database;

  OPEN l_cur;
  FETCH l_cur INTO l_rec;
  WHILE l_cur%FOUND
  LOOP
    l_prev_rec := l_rec;
    FETCH l_cur INTO l_rec;
    IF l_cur%FOUND THEN
      l_begin_date := to_char(l_prev_rec.begin_interval_time, 'YYYYMMDD_HH24MI');
      l_end_date   := to_char(l_prev_rec.end_interval_time, 'YYYYMMDD_HH24MI');
      dbms_output.put_line('-- ' || l_begin_date || ' - ' || l_end_date);
      dbms_output.put_line('define inst_num = 1;');
      dbms_output.put_line('define num_days = 0;');
      dbms_output.put_line('define inst_name = ''' || dbname || ''';');
      dbms_output.put_line('define db_name = ''' || dbname || ''';');
      dbms_output.put_line('define dbid = ' || dbid || ';');
      dbms_output.put_line('define report_type = ''html'';');
      dbms_output.put_line('define begin_snap = ' || l_prev_rec.snap_id);
      dbms_output.put_line('define end_snap = ' || l_rec.snap_id);
      dbms_output.put_line('define report_name = /tmp/awr_' || dbname
                           || '_' || l_begin_date || '_' || l_end_date || '.html');
      dbms_output.put_line('@?/rdbms/admin/awrrpti');
    END IF;
  END LOOP;
  CLOSE l_cur;
END;
/

Sample Output from rpt.sql:
-- 20111125_0000 - 20111125_0100
define inst_num = 1;
define num_days = 0;
define inst_name = 'TESTDB';
define db_name = 'TESTDB';
define dbid = 2547329494;
define report_type = 'html';
define begin_snap = 384
define end_snap = 385
define report_name = /tmp/awr_TESTDB_20111125_0000_20111125_0100.html
@?/rdbms/admin/awrrpti
-- 20111125_0100 - 20111125_0200
define inst_num = 1;
define num_days = 0;
define inst_name = 'TESTDB';
define db_name = 'TESTDB';
define dbid = 2547329494;
define report_type = 'html';
define begin_snap = 385
define end_snap = 386
define report_name = /tmp/awr_TESTDB_20111125_0100_20111125_0200.html
@?/rdbms/admin/awrrpti
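For reference, the same generation logic can be prototyped outside the database. This is a minimal Python sketch over a made-up snapshot list; it simplifies the PL/SQL above by treating each snapshot's begin time as the interval boundary, and the dbid, instance name and snap ids are example values only:

```python
from datetime import datetime, timedelta

def awr_script(dbid, dbname, snaps):
    """Emit awrrpti define blocks for each consecutive pair of snapshots.
    snaps: list of (snap_id, begin_time) tuples ordered by time."""
    lines = []
    for (begin_id, begin_ts), (end_id, end_ts) in zip(snaps, snaps[1:]):
        b = begin_ts.strftime('%Y%m%d_%H%M')
        e = end_ts.strftime('%Y%m%d_%H%M')
        lines += [
            f"-- {b} - {e}",
            "define inst_num = 1;",
            "define num_days = 0;",
            f"define inst_name = '{dbname}';",
            f"define db_name = '{dbname}';",
            f"define dbid = {dbid};",
            "define report_type = 'html';",
            f"define begin_snap = {begin_id}",
            f"define end_snap = {end_id}",
            f"define report_name = /tmp/awr_{dbname}_{b}_{e}.html",
            "@?/rdbms/admin/awrrpti",
        ]
    return "\n".join(lines)

# made-up snapshot list standing in for dba_hist_snapshot
base = datetime(2011, 11, 25)
snaps = [(384 + i, base + timedelta(hours=i)) for i in range(3)]
print(awr_script(2547329494, 'TESTDB', snaps))
```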

How to view/change statistics retention and space used to store stats history?

See the oldest statistics history available
SQL> SELECT DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY FROM DUAL;

GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------------
25-OCT-11 10.00.55.093000000 PM -07:00

See the stats retention
SQL> select dbms_stats.get_stats_history_retention from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
31

Modify retention
SQL> exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(60);

Space currently used to store statistics history in SYSAUX, in KB; increasing the retention will increase its size
SQL> select occupant_desc, space_usage_kbytes from v$sysaux_occupants where OCCUPANT_DESC like '%Statistics%';

OCCUPANT_DESC SPACE_USAGE_KBYTES
---------------------------------------------------------------- ------------------
Server Manageability - Optimizer Statistics History 17920
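As a rough back-of-the-envelope check, the history space can be expected to grow roughly in proportion to the retention. This is only a linear estimate under that assumption (actual growth depends on how often stats are gathered); the numbers are taken from the example above:

```python
# Rough, linear estimate of optimizer-stats history growth in SYSAUX
# if retention is raised from 31 to 60 days.
# ASSUMPTION: space scales linearly with retention days.
current_kb = 17920   # space used at 31-day retention (from the query above)
current_days = 31
new_days = 60

estimated_kb = current_kb * new_days / current_days
print(round(estimated_kb))
```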

How to restore statistics and view statistics history?

First check whether any statistics history exists for the SCOTT.TEST table; in this scenario there is none yet
SQL> select * FROM DBA_TAB_STATS_HISTORY where owner = 'SCOTT' and table_name = 'TEST';

no rows selected

Display the timestamp of the last time the table and its indexes were analyzed
SQL> select last_analyzed from dba_indexes where table_name = 'TEST' and table_owner = 'SCOTT';

LAST_ANALYZED
--------------------
2006-MAY-09 00:00:25
2006-MAY-09 00:00:25
2006-MAY-09 00:00:25
2006-MAY-09 00:00:25
2006-MAY-09 00:00:25
2006-MAY-09 00:00:25
2006-MAY-09 00:00:25

SQL> select last_analyzed from dba_tables where table_name = 'TEST' and owner = 'SCOTT';

LAST_ANALYZED
--------------------
2011-JUL-21 00:20:34

Gather/Update stats on the table
SQL> exec dbms_stats.gather_table_stats('SCOTT', 'TEST', cascade=>true);

Check the stats on the table via the last_analyzed date; it shows the dates were updated
SQL> select last_analyzed from dba_indexes where table_name = 'TEST' and table_owner = 'SCOTT';

LAST_ANALYZED
--------------------
2011-NOV-26 10:38:57
2011-NOV-26 10:39:14
2011-NOV-26 10:39:18
2011-NOV-26 10:39:19
2011-NOV-26 10:39:28
2011-NOV-26 10:39:28
2011-NOV-26 10:39:28

SQL> select last_analyzed from dba_tables where table_name = 'TEST' and owner = 'SCOTT';

LAST_ANALYZED
--------------------
2011-NOV-26 10:38:55

When stats are gathered, dbms_stats backs up the previous stats; the history can be viewed in DBA_TAB_STATS_HISTORY and used to restore them
SQL> select * FROM DBA_TAB_STATS_HISTORY where owner = 'SCOTT' and table_name = 'TEST';

OWNER TABLE_NAME STATS_UPDATE_TIME
--------------- ------------------------------ ----------------------------
SCOTT TEST 11-11-26 10:38:56,021074 -07

Restore the stats as of yesterday, i.e. to a time before the stats were gathered
SQL> exec dbms_stats.restore_table_stats('SCOTT', 'TEST', TO_DATE('26-NOV-2011 11:00', 'DD-MON-YYYY HH24:MI') - 1);

Checking the last_analyzed dates confirms the stats were restored
SQL> select last_analyzed from dba_indexes where table_name = 'TEST' and table_owner = 'SCOTT';

LAST_ANALYZED
--------------------
2006-MAY-09 00:00:25
2006-MAY-09 00:00:25
2006-MAY-09 00:00:25
2006-MAY-09 00:00:25
2006-MAY-09 00:00:25
2006-MAY-09 00:00:25
2006-MAY-09 00:00:25

SQL> select last_analyzed from dba_tables where table_name = 'TEST' and owner = 'SCOTT';

LAST_ANALYZED
--------------------
2011-JUL-21 00:20:34

dbms_stats.restore_table_stats itself backs up the stats it overwrites, so another row appears in the history
SQL> select * FROM DBA_TAB_STATS_HISTORY where owner = 'SCOTT' and table_name = 'TEST';

OWNER TABLE_NAME STATS_UPDATE_TIME
--------------- ------------------------------ ----------------------------
SCOTT TEST 11-11-26 11:53:24,517857 -07
SCOTT TEST 11-11-26 10:38:56,021074 -07

Monitoring AIX

On AIX, nmon can be used to monitor a machine (CPU, memory, network traffic, I/O, top processes, etc.). Here is the link to download nmon: http://www.ibm.com/developerworks/wikis/display/WikiPtype/nmon

Example to collect nmon data and extract output to a file:
$ nmon_aix53 -fTNWLA -I 0.001 -s 300 -c 288 -m /home/nmon_data
This collects data once a day into the directory /home/nmon_data, taking one snapshot every 5 minutes (-s 300) for 288 snapshots (-c 288), which gives an excellent level of graph detail. It also collects top processes with user command lines (T), NFS stats (N), Workload Manager stats without subclasses (W), large-page stats (L) and asynchronous I/O details (A). The reporting threshold (-I) is 0.001 percent of a CPU.
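The -s and -c values multiply out to one file per day:

```python
# Check what -s 300 -c 288 covers: 288 snapshots at 300-second intervals.
interval_seconds = 300
snapshots = 288

total_hours = interval_seconds * snapshots / 3600
print(total_hours)  # 24.0 -> one nmon output file covers a full day
```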

The output from nmon can be graphed using the nmon analyser, which reads nmon output into Excel and produces graphs and reports to aid analysis:
http://www.ibm.com/developerworks/wikis/display/Wikiptype/nmonanalyser