In the Oracle file structure there is a type of file called the redo log file, which records every transaction done to the database. There is a finite number of redo log files, and they are reused in a round-robin (merry-go-round) fashion. When the database runs in ARCHIVELOG mode, each redo log file is copied to an archived log before it is reused.
Aside from recovery purposes, archived logs can also serve as transaction tracking material. However, the log itself is in binary form, so we need a utility called LogMiner to decode the log file.
- Before you can use LogMiner, you need to install the package by executing the following SQL file as the SYS user:
$ORACLE_HOME/rdbms/admin/dbmslm.sql
- Then grant the necessary privilege to the user who will execute the LogMiner package (let's say user James):
grant execute_catalog_role to james;
- Then create synonym for the LogMiner package:
create public synonym dbms_logmnr for sys.dbms_logmnr;
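Put together, the one-time setup (run as SYS in SQL*Plus, where `?` expands to ORACLE_HOME) looks roughly like this; `james` is just an example user name:

```sql
-- Run as SYS: install the LogMiner package (on recent releases
-- DBMS_LOGMNR is already installed by default)
@?/rdbms/admin/dbmslm.sql

-- Let user JAMES execute the SYS-owned LogMiner packages
GRANT EXECUTE_CATALOG_ROLE TO james;

-- Optional convenience: refer to the package without the SYS prefix
CREATE PUBLIC SYNONYM dbms_logmnr FOR sys.dbms_logmnr;
```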
- You can query the list of archived logs (V$LOGFILE lists the online redo log members; the archived copies themselves are in V$ARCHIVED_LOG):
SELECT name LOGFILENAME FROM V$ARCHIVED_LOG;
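A slightly richer query is often useful, since each archived log covers a specific time and SCN range:

```sql
-- List archived logs: file name, when the covered changes started,
-- and the SCN range each log covers
SELECT name, first_time, first_change#, next_change#
  FROM v$archived_log
 ORDER BY first_time;
```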
There are two ways to read an archived log: read it on the database that produced it, or move it to another database and read it there. To read the log on the same database:
- Check where your archived logs are placed (you can check the initialization parameter log_archive_dest). Assume it is /u01/db/archive and the log you want to decode is 123.arc.
- Run the statement below to add the archived log to the LogMiner session:
exec DBMS_LOGMNR.ADD_LOGFILE('/u01/db/archive/123.arc');
- Then run the statement below to start LogMiner, using the online catalog as the dictionary:
exec DBMS_LOGMNR.START_LOGMNR(options => dbms_logmnr.dict_from_online_catalog);
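If you need to mine several logs in one session, you can keep calling ADD_LOGFILE before starting; a sketch, where the second file name is an assumed example:

```sql
-- Mine several archived logs in one session: add each file, then start.
-- DBMS_LOGMNR.NEW starts a fresh file list; ADDFILE appends to it.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE('/u01/db/archive/123.arc', DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.ADD_LOGFILE('/u01/db/archive/124.arc', DBMS_LOGMNR.ADDFILE);
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
```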
To read the log at a different database:
- Move the archived logs to the other database server (let's say you placed them at /u01/temp).
- Run the statement below on the source database to build a flat-file data dictionary, then copy the resulting file to the other server together with the logs (you can change the file name and directory; note that the dictionary builder lives in the DBMS_LOGMNR_D package):
exec DBMS_LOGMNR_D.BUILD('logmnr.dat', '/u01/temp');
- Then run the statement below to add the log file to the LogMiner session:
exec DBMS_LOGMNR.ADD_LOGFILE('/u01/temp/123.arc');
- Start the LogMiner:
exec DBMS_LOGMNR.START_LOGMNR(dictfilename => '/u01/temp/logmnr.dat');
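START_LOGMNR can also be restricted to a time or SCN window, which is handy when a log covers more than you need; the dates below are assumed examples:

```sql
-- Start LogMiner with the flat-file dictionary, limited to a time window
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    dictfilename => '/u01/temp/logmnr.dat',
    starttime    => TO_DATE('2015-01-01 09:00', 'YYYY-MM-DD HH24:MI'),
    endtime      => TO_DATE('2015-01-01 17:00', 'YYYY-MM-DD HH24:MI'));
END;
/
```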
Either way, you can query the log content through the view v$logmnr_contents. However, the data in the view is wiped once you close the session (or call DBMS_LOGMNR.END_LOGMNR), which means you need to add the log file and start LogMiner again to get the data back. Some useful columns include:
- username: the database user who made the change
- timestamp: when it happened
- sql_redo: the SQL statement of what was actually done
- sql_undo: the SQL statement that would undo the change
- scn: system change number, the unique identifier of this change
- seg_type_name: the object type
- table_space: the tablespace involved
- session#: the session number
- seg_owner: the object owner
- table_name: the table name
- operation: UPDATE, INSERT, DELETE, COMMIT or ROLLBACK
- session_info: includes information such as which machine the session came from
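Putting these columns to use, a typical audit query might look like this (the schema and table names are assumed examples):

```sql
-- Who changed JAMES.ORDERS, when, and how to undo each change
SELECT scn, timestamp, username, operation, sql_redo, sql_undo
  FROM v$logmnr_contents
 WHERE seg_owner = 'JAMES'
   AND table_name = 'ORDERS'
   AND operation IN ('INSERT', 'UPDATE', 'DELETE');

-- Release the LogMiner session once you have what you need
EXEC DBMS_LOGMNR.END_LOGMNR;
```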