Friday, January 9, 2009

RMAN Improvements Over Versions

Back in the days of 9iR2, RMAN was becoming very popular, but there were features where end-user requirements were not met to their fullest extent. Over the subsequent releases Oracle has addressed these issues, and RMAN has become one of my favorite tools.

One of the biggest improvements in RMAN came in Oracle 10g with the introduction of block change tracking. However, another feature left us longing for more: the "KEEP FOREVER" option that lets us keep a backup forever (or for as long as we desire), so to speak.
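As an aside, block change tracking is enabled with a single statement; the tracking-file path below is purely an illustrative placeholder:

```sql
-- Enable block change tracking (10g and later); file location is an example only
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE 'C:\ORACLE\PRODUCT\ORADATA\O10203\CHANGE_TRACKING.F';

-- Verify it is active
SELECT status, filename FROM v$block_change_tracking;
```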

In Oracle 9i (9.2.0.7 to be precise), I tried to use this feature to retain my Oracle ERP 11i backups until critical phases of the project were complete, say CRP II or UAT. In 9i, when the retention policy was set to a number of redundant copies and one copy of the database backup was meant to be kept forever, we ran into an issue where the newest nth backup would not actually happen.
In other words, if redundancy was set to, say, 5, and the first backup was "kept forever", then the 6th backup was not actually retained for restore. Instead, the backup commands would complete successfully without the physical backup taking place. The only option, as I recall, was to keep incrementing the redundancy until the "keep forever" backup was due to expire, then do catalog maintenance to clean up subsequent backups before returning to the original redundancy setting. To say the least, this was not very helpful.
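A minimal sketch of the setup that exposed this behavior, reconstructed from memory (the tag name is illustrative):

```sql
-- Retention by number of redundant copies
CONFIGURE RETENTION POLICY TO REDUNDANCY 5;

-- One long-term backup, exempt from the retention policy
BACKUP DATABASE TAG 'erp11i_crp2' KEEP FOREVER LOGS;
```

With this in place, the 9i behavior described above meant the sixth regular backup silently produced no backup pieces.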

Then in Oracle 10g, I was delighted to see this issue resolved, but one last nagging issue remained with long-term backups: the requirement that the backup be in a consistent state in order to have the archived logs deleted. If the backup was taken in an inconsistent state, Oracle kept every single archived log for recovery. I used to wonder why this requirement was enforced, since an inconsistent backup, when used for restore, will be recovered to a point in time or SCN and opened with "resetlogs". In that case there is no need for "all" the archived logs to be kept up until this very minute.

Finally, this critical issue has been answered in Oracle 11g.

Oracle 11g introduced a new feature for RMAN backups: a "restore point" that tracks the last SCN of a long-term inconsistent backup. Gone is the "keep forever logs | nologs" syntax; instead we use "keep forever restore point XXX". This gives us the ability to delete all the archived logs generated after the long-term backup, saving a lot of space that would otherwise be needed to retain them.

So with 11g, we can now take a long-term inconsistent backup with no storage overhead for all the archived logs generated since. That's great!
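To illustrate how such a backup might later be used, here is a rough sketch (not a tested recipe; the tag and restore point name match the 11g example that follows, and the exact steps vary by situation):

```sql
-- Sketch: restoring from a long-term KEEP backup to its restore point
-- (run from RMAN with the database mounted from that backup's control file)
RESTORE DATABASE FROM TAG 'full_keep_forever';
RECOVER DATABASE TO RESTORE POINT abc;
ALTER DATABASE OPEN RESETLOGS;
```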

Below is the RMAN output for a 10g and an 11g "keep forever" backup, respectively.


10g Keep Forever Backup

RMAN> backup full database tag='full_keep_forever' keep forever logs;

Starting backup at 04-DEC-08
using channel ORA_DISK_1
using channel ORA_DISK_2
backup will never be obsolete
archived logs required to recover from this backup will expire when this backup expires
skipping datafile 6; already backed up 2 time(s)
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00004 name=C:\ORACLE\PRODUCT\ORADATA\O10203\USERS01.DBF
input datafile fno=00008 name=C:\ORACLE\PRODUCT\ORADATA\O10203\RMAN_TEST_TBS01.DBF
input datafile fno=00005 name=C:\ORACLE\PRODUCT\ORADATA\O10203\USERS02.DBF
channel ORA_DISK_1: starting piece 1 at 04-DEC-08
channel ORA_DISK_2: starting full datafile backupset
channel ORA_DISK_2: specifying datafile(s) in backupset
input datafile fno=00001 name=C:\ORACLE\PRODUCT\ORADATA\O10203\SYSTEM01.DBF
input datafile fno=00002 name=C:\ORACLE\PRODUCT\ORADATA\O10203\UNDOTBS01.DBF
input datafile fno=00003 name=C:\ORACLE\PRODUCT\ORADATA\O10203\SYSAUX01.DBF
input datafile fno=00007 name=C:\ORACLE\PRODUCT\ORADATA\O10203\NEW_RW_TBS01.DBF
channel ORA_DISK_2: starting piece 1 at 04-DEC-08
channel ORA_DISK_2: finished piece 1 at 04-DEC-08
piece handle=F:\RMANBACKUP\E3K1DTCM_1_1 tag=FULL_KEEP_FOREVER comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:03:21
channel ORA_DISK_1: finished piece 1 at 04-DEC-08
piece handle=F:\RMANBACKUP\E2K1DTCK_1_1 tag=FULL_KEEP_FOREVER comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:03:38
Finished backup at 04-DEC-08

Starting Control File and SPFILE Autobackup at 04-DEC-08
piece handle=C:\ORACLE\PRODUCT\DB102\DATABASE\C-4123959065-20081204-01 comment=NONE
Finished Control File and SPFILE Autobackup at 04-DEC-08

11g Keep Forever Backup

RMAN> backup full database tag='full_keep_forever' keep forever restore point abc;

Starting backup at 04-DEC-08
current log archived

using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=C:\ORACLE\PRODUCT\ORADATA\ORA11\SYSAUX01.DBF
input datafile file number=00001 name=C:\ORACLE\PRODUCT\ORADATA\ORA11\SYSTEM01.DBF
input datafile file number=00003 name=C:\ORACLE\PRODUCT\ORADATA\ORA11\UNDOTBS01.DBF
input datafile file number=00004 name=C:\ORACLE\PRODUCT\ORADATA\ORA11\USERS01.DBF
input datafile file number=00005 name=C:\ORACLE\PRODUCT\ORADATA\ORA11\EISCAT01.DBF
input datafile file number=00006 name=C:\ORACLE\PRODUCT\ORADATA\ORA11\HYPUSER01.DBF
input datafile file number=00007 name=C:\ORACLE\PRODUCT\ORADATA\ORA11\PROD_SRC01.DBF
channel ORA_DISK_1: starting piece 1 at 04-DEC-08
channel ORA_DISK_1: finished piece 1 at 04-DEC-08
piece handle=C:\ORACLE\PRODUCT\11G\DB_1\DATABASE\11K1DTMI_1_1 tag=FULL_KEEP_FOREVER comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:02:58


current log archived
using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=191 RECID=31 STAMP=672593814
channel ORA_DISK_1: starting piece 1 at 04-DEC-08
channel ORA_DISK_1: finished piece 1 at 04-DEC-08
piece handle=C:\ORACLE\PRODUCT\11G\DB_1\DATABASE\12K1DTSQ_1_1 tag=FULL_KEEP_FOREVER comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 04-DEC-08
channel ORA_DISK_1: finished piece 1 at 04-DEC-08
piece handle=C:\ORACLE\PRODUCT\11G\DB_1\DATABASE\13K1DTT9_1_1 tag=FULL_KEEP_FOREVER comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01

using channel ORA_DISK_1
backup will never be obsolete
archived logs required to recover from this backup will be backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 04-DEC-08
channel ORA_DISK_1: finished piece 1 at 04-DEC-08
piece handle=C:\ORACLE\PRODUCT\11G\DB_1\DATABASE\14K1DTTN_1_1 tag=FULL_KEEP_FOREVER comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 04-DEC-08

RMAN>

Friday, November 7, 2008

Virtual Tables, Virtual Columns…How about Virtual Tablespaces?

Oracle Database has always been at the forefront when it comes to providing ground-breaking technologies that are critical to businesses and improve organizations’ bottom lines. Two features that have been greatly helpful for information management and data retrieval are virtual tables (a.k.a. views) and, more recently in 11g, virtual columns: column values derived from data physically stored in other columns, thereby reducing the amount of space used to store data. Another feature, introduced in 10g to manage the explosive growth of data, is the “BIGFILE” tablespace.


Not too long ago, databases were measured in the hundreds of GBs, and the individual tablespaces that made them up were a factor smaller. The primary reason BIGFILE tablespaces were introduced was to tackle the growing data volume of today’s databases and the sheer complexity and time it takes DBAs to manage the logical and physical storage layers, namely tablespaces and data files. Also, the introduction of Automatic Storage Management (ASM), which removed some of the storage layers from the database’s standpoint and espoused the SAME (Stripe And Mirror Everything) concept, provided a compelling reason to adopt BIGFILE tablespaces.
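For reference, a bigfile tablespace consists of a single, very large data file; a sketch, with name, path, and sizes purely illustrative:

```sql
-- One tablespace, one (auto-extending) data file; names and sizes are examples
CREATE BIGFILE TABLESPACE big_data_tbs
  DATAFILE 'C:\ORACLE\PRODUCT\ORADATA\ORA11\BIG_DATA_TBS01.DBF'
  SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;
```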

There is a problem, however. When tablespaces were reasonably small and hence manageable in size, life was a little easier from a restore and recovery perspective.

Let me illustrate this point. One of the great features of Oracle database technology is the ability to recover the database to a point in time (provided the underlying requirements are met). There have been instances where a single table had to be restored from a backup for whatever reason. In the past, this type of recovery was very simple: just restore the SYSTEM tablespace and the tablespace that contains the object of interest, and you are done. Of course, there are a couple of other things to take care of, but essentially that is all we needed to do to recover the object. Once this was complete, we could export the object and import it wherever it was needed. This type of tablespace restore and recovery was fast and simple because the tablespaces were not too big.
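The flow described above is essentially what RMAN automates as tablespace point-in-time recovery (TSPITR); a sketch only, with tablespace name, timestamp, and auxiliary destination as illustrative placeholders:

```sql
-- RMAN tablespace point-in-time recovery (sketch; values are placeholders)
RECOVER TABLESPACE users
  UNTIL TIME "TO_DATE('04-DEC-08 09:00:00','DD-MON-RR HH24:MI:SS')"
  AUXILIARY DESTINATION 'C:\ORACLE\AUX';
```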

With advanced features like flashback database, standby databases, etc., one can argue that we can get the object back quickly and easily, but all of that has to happen within a certain time frame. More often than not, developers realize that something horrible has happened well beyond the recovery window available via flashback or standby databases. Also, not all organizations have adopted these technologies. In such circumstances, the only option is to recover the tablespace that contains the object by the method above. And here comes the problem.

With the BIGFILE tablespaces, the restore and recovery of the tablespace and subsequently the object takes a lot longer because of the sheer amount of data volume (data files that need to be restored).

To summarize, here are the pros and cons of the two approaches:

Small file Tablespaces:
- Difficult to manage
- Easier to restore and recover

Bigfile tablespaces:
- Easy to manage
- Difficult to restore and recover.

How about we come up with an approach that gives bigfile tablespaces the advantages of both bigfile and smallfile tablespaces?

How about virtual tablespaces, along the lines of virtual tables, and virtual columns?

We know how segments, and the extents of those segments, are created and managed in a typical tablespace. Tablespaces themselves are logical in nature; in other words, they are virtual, and it is the data files underneath them that hold the data objects. So wouldn’t it be great if we could add another virtual layer to BIGFILE tablespaces, called virtual tablespaces, on which database objects could be created?

I envision virtual tablespaces to be a layer in-between tablespaces and segments that make up the tablespace. The data segments and index segments are wholly contained in a virtual tablespace much like how it currently exists. This will provide the best of both worlds from a small file and a big file tablespace perspective.

1. From a maintenance viewpoint, this does not add any more overhead than what a bigfile tablespace entails.
2. When it comes to restore, because of data file multiplexing, which is very common during backups, the data files that make up a tablespace are spread across backup sets and hence all need to be scanned for a full restore.

However, by creating a virtual tablespace, we could potentially reduce the time it takes to restore and recover by accessing only the portion of the backup that makes up the virtual tablespace. The final question would then be how this “logical” entity is recovered as a physical entity from which we can recover the data objects of interest. That would probably need a little more work. I am not sure how practically feasible this is, but it would be hugely advantageous if we could have something like it.

What do you think? 

Please send me your comments.

Friday, August 8, 2008

Plan Capture and Playback

Performance tuning quite often revolves around execution plans: identifying the root cause of the problem and subsequently taking corrective action.

Oracle's CBO is getting better by the day at determining the optimal execution plan and thus providing the best possible performance. However, there is always room for improvement, and performance experts can help, since they happen to have better information about the "data". When we are tasked with SQL performance issues, an often-heard comment is, "this was working fine until last week and *NO CHANGE* was made to the system", yet performance has been horrible since morning. We need a fix.

In situations like these, we wish we had the execution plan from the "best performing" time so we could compare it to the current plan. In many cases, this helps us quickly nail down the reason for the poor SQL execution and apply the appropriate fix.

There are many ways to get plans, one of them being from the AWR data itself. But that would mean doing a lot more work and collecting more data than necessary. A simpler solution is to collect and store the execution plans for all SQL statements in the SQL cache at a repeating interval.

Assuming the SQL text itself remains unchanged, the SQL_ID will remain the same. This piece of information can be used to query for versions of plans for a given SQL statement and start narrowing down the problem.

The script below provides a simple solution to query and capture the execution plans for future troubleshooting.

**********************************************************************************
The script has been tested in both 10g and 11g environments. Please test the script in your sandbox environment and make sure it conforms to your needs and satisfaction before proceeding.
**********************************************************************************

Couple of comments:

1. The script prompts for SYS password to perform some "grants".
2. It expects a db username that will own the package, table and indexes to accomplish the task set forth.
3. I have commented couple of statements: "a drop table" before creating it, and a "drop job". I did not want to drop something with the same name in your schema inadvertently. Please make appropriate changes before executing.
4. The package currently captures the plan for a given SQL once per day. This is mainly to avoid capturing the same SQL/plan more often than necessary. You may have to comment out the "where clause" in cursor c1 if this is not desirable and you want the plan captured no matter what.
5. The scheduler job's repeat interval is set to "1 minute". Make changes to fit your needs.





prompt please enter sys password
accept sys_password
prompt please enter the username that will own package
accept username
prompt please enter password
accept password

conn sys/&&sys_password as sysdba
grant resource, connect, create job to &&username;
grant select on v_$sql_plan to &&username;

conn &&username/&&password

--drop table plan_capture_hist;
CREATE TABLE plan_capture_hist
(
  ex_plan          VARCHAR2(2000 BYTE)  NULL,
  sql_id           VARCHAR2(20 BYTE)    NULL,
  plan_hash_value  NUMBER               NULL,
  captured_date    DATE DEFAULT SYSDATE NULL,
  sql_hash_value   NUMBER               NULL,
  iteration        NUMBER               NULL,
  comments         VARCHAR2(2000 BYTE)  NULL,
  child_number     NUMBER(38)           NULL,
  rn               NUMBER               NULL
)
/
CREATE INDEX pch_sql_id ON plan_capture_hist (sql_id)
/
CREATE INDEX fn_pch_captured_dt ON plan_capture_hist (TRUNC(captured_date))
/
drop package xx_plan_capture;
CREATE OR REPLACE PACKAGE xx_plan_capture IS
  PROCEDURE sqlcache_capture;
  PROCEDURE sqlid_capture(v_sql_id IN VARCHAR2, v_child_number IN NUMBER);
END;
/
CREATE OR REPLACE PACKAGE BODY xx_plan_capture IS

  PROCEDURE sqlcache_capture IS
    v_sql_id    VARCHAR2(20);
    b           NUMBER := NULL;
    -- format options passed to dbms_xplan.display_cursor
    c           VARCHAR2(30) := 'TYPICAL ALLSTATS -NOTE';
    -- plan_capture iteration count for the day
    v_iteration NUMBER;
    -- plans in the SQL cache that have not yet been captured today
    CURSOR c1 IS
      SELECT DISTINCT a.sql_id, a.plan_hash_value, a.hash_value, a.child_number
        FROM sys.v_$sql_plan a
      MINUS
      SELECT sql_id, plan_hash_value, sql_hash_value, child_number
        FROM plan_capture_hist
       WHERE TRUNC(captured_date) = TRUNC(SYSDATE);
    CURSOR c2 IS
      SELECT ROWNUM rn, plan_table_output
        FROM (SELECT * FROM TABLE(dbms_xplan.display_cursor(v_sql_id, b, c)));
  BEGIN
    SELECT NVL2(MAX(iteration), MAX(iteration) + 1, 1)
      INTO v_iteration
      FROM plan_capture_hist
     WHERE TRUNC(captured_date) = TRUNC(SYSDATE);
    FOR c_plan_details IN c1 LOOP
      v_sql_id := c_plan_details.sql_id;
      b        := c_plan_details.child_number;
      FOR c_sql_plan IN c2 LOOP
        INSERT INTO plan_capture_hist
          (ex_plan, sql_id, plan_hash_value, sql_hash_value,
           iteration, child_number, rn)
        VALUES
          (c_sql_plan.plan_table_output, c_plan_details.sql_id,
           c_plan_details.plan_hash_value, c_plan_details.hash_value,
           v_iteration, c_plan_details.child_number, c_sql_plan.rn);
      END LOOP; -- c_sql_plan
    END LOOP; -- c_plan_details
    COMMIT;
  END sqlcache_capture;

  PROCEDURE sqlid_capture(v_sql_id IN VARCHAR2, v_child_number IN NUMBER) IS
    -- format options passed to dbms_xplan.display_cursor
    c           VARCHAR2(30) := 'TYPICAL ALLSTATS -NOTE';
    -- plan_capture iteration count for the day
    v_iteration NUMBER;
    CURSOR c1 IS
      SELECT DISTINCT a.sql_id, a.plan_hash_value, a.hash_value, a.child_number
        FROM sys.v_$sql_plan a
       WHERE a.sql_id = v_sql_id
         AND a.child_number = v_child_number;
    CURSOR c2 IS
      SELECT ROWNUM rn, plan_table_output
        FROM (SELECT * FROM TABLE(dbms_xplan.display_cursor(v_sql_id, v_child_number, c)));
  BEGIN
    SELECT NVL2(MAX(iteration), MAX(iteration) + 1, 1)
      INTO v_iteration
      FROM plan_capture_hist
     WHERE TRUNC(captured_date) = TRUNC(SYSDATE);
    dbms_output.put_line(v_iteration);
    FOR c_plan_details IN c1 LOOP
      dbms_output.put_line(c_plan_details.sql_id);
      FOR c_sql_plan IN c2 LOOP
        INSERT INTO plan_capture_hist
          (ex_plan, sql_id, plan_hash_value, sql_hash_value,
           iteration, child_number, rn)
        VALUES
          (c_sql_plan.plan_table_output, c_plan_details.sql_id,
           c_plan_details.plan_hash_value, c_plan_details.hash_value,
           v_iteration, c_plan_details.child_number, c_sql_plan.rn);
      END LOOP; -- c_sql_plan
    END LOOP; -- c_plan_details
    COMMIT;
  END sqlid_capture;

END; -- package body
/
--exec dbms_scheduler.drop_job('sql_plan_capture');
exec DBMS_SCHEDULER.create_job( job_name => 'sql_plan_capture', job_type => 'STORED_PROCEDURE' -
, job_action => 'xx_plan_capture.sqlcache_capture' -
, start_date => SYSTIMESTAMP -
, repeat_interval => 'FREQ=MINUTELY; INTERVAL=1' -
, end_date => NULL, enabled => TRUE, comments => 'Plan Capture for all SQL Cache Cursors');
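Once captures have accumulated, comparing plan versions for a statement is a simple query against the history table the script creates (the SQL_ID value here is an illustrative placeholder):

```sql
-- Compare captured plan versions for one statement; the SQL_ID is a placeholder
SELECT captured_date, iteration, plan_hash_value, rn, ex_plan
  FROM plan_capture_hist
 WHERE sql_id = '10d6hmvy8t74n'
 ORDER BY captured_date, iteration, rn;
```

A change in PLAN_HASH_VALUE between iterations is the quickest signal that the plan flipped.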

Tuesday, July 22, 2008

V$Views and Permissions. Interesting Issue.

I was working on creating a package to capture and store runtime execution plans for historical review and troubleshooting purposes. More on that later. While I was working on the package, I came across this interesting issue. I thought I might share this. In the snippet below, I have reduced the code to illustrate the issue at hand.

I named the schema "SQLCAPTURE". So going ahead with the example:

By logging in as SYS I create the SQLCAPTURE database user and assign privileges.

SQL> create user sqlcapture identified by sqlcapture;

User created.

SQL> grant dba,scheduler_admin to sqlcapture;

Grant succeeded.

Now I log in as SQLCAPTURE and Compile the Package.


The complete code in the sql_capture1 script:

CREATE OR REPLACE PACKAGE xx_plan_capture1 IS
PROCEDURE sqlcache_capture;
END;
/
CREATE OR REPLACE PACKAGE BODY xx_plan_capture1 IS
PROCEDURE sqlcache_capture IS
v_iteration NUMBER;
begin
SELECT count( a.sql_id ) into v_iteration
FROM v$sql_plan A where rownum <2;
dbms_output.put_line(v_iteration);
END; --sqlcache_capture

END; --package body
/

SQL> conn sqlcapture/sqlcapture
Connected.
SQL> @E:\sql_capture1
Package created.


Warning: Package Body created with compilation errors.

It returns an error. Let's find the offending statement:

SQL> sho err
Errors for PACKAGE BODY XX_PLAN_CAPTURE1:

LINE/COL ERROR
-------- --------------------------------------------------
5/1 PL/SQL: SQL Statement ignored
6/6 PL/SQL: ORA-00942: table or view does not exist
SQL> l 6
6* FROM v$sql_plan A where rownum <2;

Since privileges on V$ views must be granted on the underlying SYS-owned V_$ views (see Metalink Note 1062335.6), I do that as SYS.

SQL> sho user
USER is "SYS"
SQL> grant select on v_$sql_plan to sqlcapture;

SQL> sho user
USER is "SQLCAPTURE"
SQL> @E:\sql_capture1

Package created.


Package body created.

All is well; the package and package body are created successfully.

Now comes the interesting part.

By logging in as SYS, I revoke the grant on V_$SQL_PLAN
SQL> revoke select on v_$sql_plan from sqlcapture;

Then as SQLCAPTURE, I recompile the script.

SQL> sho user
USER is "SQLCAPTURE"
SQL> @E:\sql_capture1

Package created.


Package body created.
SQL> col object_name for a19
SQL> select object_name,object_type,status from user_objects where object_type like 'PACKAGE%';

OBJECT_NAME OBJECT_TYPE STATUS
------------------- ------------------- -------
XX_PLAN_CAPTURE1 PACKAGE VALID
XX_PLAN_CAPTURE1 PACKAGE BODY VALID

It Compiles!!!!!

Though at runtime it fails with ORA-00942, it is interesting to know that it compiled.

I guess the takeaway is that when coding your own scripts against V$ views, it is safer to reference the underlying (SYS) V_$ views with explicit grants. Otherwise, at runtime you might end up with ugly errors and unsuccessful execution of your procedures.

I could not reproduce this in 11g as the package compiles in 11g without even having to give the GRANT on the base table.

Saturday, February 9, 2008

Stored Outlines Last Usage Time

When it comes to Oracle performance tuning, one of the invaluable features DBAs and performance experts have in their toolbox is SQL stored outlines. Much has been written about stored outlines since their early days; one such interesting article can be found here.

Though the supplied PL/SQL package provides many useful routines for working with outlines, one critical item missing from the package set is a routine to indicate whether a stored outline is consistently being used and, more importantly, when a stored outline was last used. Granted, we can “clear” an outline’s usage flag and see whether it resets to confirm the outline is being used, but that seems too intrusive for a live production system. While working on this issue, I discovered a simpler way to determine the last time an outline was used for a given SQL.

Oracle 10g brought great improvements in the V$ views, providing a wealth of information without the need to access many different views, and this tradition has carried over to 10gR2 as well. The V$SQL view in 10gR2 has new columns that cater to stored outlines, one of which is LAST_ACTIVE_TIME. By querying the SQL_ID, OUTLINE_CATEGORY, and LAST_ACTIVE_TIME columns of V$SQL you can verify whether a stored outline is being actively used and when it was last used.
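The check boils down to a short query; a sketch, with the category name 'HV' matching the example that follows:

```sql
-- When was a cursor using the outline category HV last active?
SELECT sql_id, outline_category, last_active_time
  FROM v$sql
 WHERE outline_category = 'HV';
```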

Here is an example.

NOTE: The outline creation steps are assumed.

11:52:27 SQL> alter session set use_stored_outlines=HV;

Session altered.

11:55:43 SQL> select count(*) from ptn_tab where col2=2;

COUNT(*)
----------
300000
11:56:57 SQL> select name,used from dba_outlines where name='HV';

NAME USED
-------------------- ------
HV USED

11:58:00 SQL> exec dbms_outln.clear_used('HV');

PL/SQL procedure successfully completed.

11:58:10 SQL> select name,used from dba_outlines where name='HV';

NAME USED
-------------------- ------
HV UNUSED
11:58:18 SQL> select count(*) from ptn_tab where col2=2;

COUNT(*)
----------
300000

11:58:59
SQL> select name,used from dba_outlines where name='HV';

NAME USED
-------------------- ------
HV USED

11:59:04 SQL>

1 select sql_id,outline_category,exact_matching_signature,
2 force_matching_signature,last_active_time
3* from v$sql where sql_text like 'select count(*) from ptn_tab%'
11:59:06 SQL> /

SQL_ID OU EXACT_MATCHING_SIGNATURE FORCE_MATCHING_SIGNATURE
------------- -- ------------------------ ------------------------
LAST_ACTIVE_TIME
--------------------
10d6hmvy8t74n HV 0 0
08-FEB-2008 11:58:59