vicker313 tech blog

February 2, 2012

Oracle PL/SQL Script to Find Table(s) Based on Column Name and Value

Filed under: Oracle — vicker313 @ 8:12 pm

Here is a simple Oracle PL/SQL script to find the table(s) that contain a given column with at least one row matching a given value. Just copy the script below and run it in SQL*Plus. It will prompt for the column name, the value to match, and the schema, then show each table name with its matching row count.

set serveroutput on
declare
    own varchar2(30) := upper('&owner_input');
    col varchar2(30) := upper('&column_input');
    val varchar2(20) := '&value_input';
    cursor c is select table_name
                from dba_tab_columns
                where owner = own
                and column_name = col;
    cnt number;
begin
    for r in c loop
        cnt := 0;
        execute immediate 'select count(*) from ' || own || '.' || r.table_name ||
            ' where ' || col || ' = ''' || val || '''' into cnt;
        dbms_output.put_line(r.table_name || ': ' || cnt);
    end loop;
end;
/
set serveroutput off

June 11, 2011

Copy or Move Oracle Database

Filed under: Oracle — vicker313 @ 12:18 pm

Consider the following scenario: I have an Oracle database that resides in /db/mydb. For some reason I need to copy or move the database somewhere else, let’s say /db2/mydb2.

To do this, follow the steps:

  1. Shutdown the database either using service or sql statement: shutdown immediate.
  2. Copy or move the database to /db2 and rename the new database directory to mydb2.
  3. Edit the initialization parameter file in mydb2 (something like initxxx.ora), changing all the paths to the new path (from /db/mydb to /db2/mydb2)
  4. Log in to sqlplus with NOLOG (run in terminal): sqlplus /nolog
  5. Connect to the idle instance (run in SQLPLUS): connect / as sysdba
  6. Create the SPFILE; be aware that the path of the initialization parameter file may vary (run in SQLPLUS): create spfile from pfile='/db2/mydb2/admin/pfile/initxxx.ora';
  7. Start database in mount mode (run in SQLPLUS): startup mount;
  8. Change the data file path to the new path (run in SQLPLUS): alter database rename file '/db/mydb/data.dbf', '/db/mydb/system.dbf' to '/db2/mydb2/data.dbf', '/db2/mydb2/system.dbf';
  9. Ensure you change all the data file paths! Otherwise the original database will not be able to start up! (Refer to the updates below)
  10. Open your database (run in SQLPLUS): alter database open;
  11. Now it is good to go.
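Assuming the example paths above, the SQL*Plus portion of steps 4 to 10 can be sketched as one session (the data file names are from the example; list every data file your database actually has):

```sql
-- terminal: sqlplus /nolog
connect / as sysdba
create spfile from pfile='/db2/mydb2/admin/pfile/initxxx.ora';
startup mount
-- repeat the rename for every data file the database uses
alter database rename file '/db/mydb/data.dbf' to '/db2/mydb2/data.dbf';
alter database rename file '/db/mydb/system.dbf' to '/db2/mydb2/system.dbf';
alter database open;
```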

If you want to run both databases at the same time, you need to rename the new database. Follow these steps:

  1. Rename the new database to mydb2
  2. Remember to change the database name in mydb2 parameter file as well.
  3. Log into SQLPLUS. At this point, be aware that the ORACLE_SID environment variable needs to point to the database you want to log in to. We want to log in to the original, so set it to mydb (terminal): export ORACLE_SID=mydb
  4. Connect to empty instance (SQLPLUS): connect / as sysdba
  5. Create the SPFILE again using the original parameter file (SQLPLUS): create spfile from pfile='/db/mydb/admin/pfile/initxxx.ora';
  6. Now you should be able to start your database (SQLPLUS): startup

Update 2013/2/24

You may find the data file paths using the following queries (thanks to Concerned_Netizen):

  • select name from v$datafile
  • select name from v$tempfile
  • select member from v$logfile

April 8, 2011

Setup Customized User Access Trace Log in Oracle Database

Filed under: Oracle — vicker313 @ 2:26 pm

Here is a simple way to trace user access (log in and log out) in Oracle, using a trigger and a table.

  1. Create a trace log table. For example (the logon trigger below also inserts a SESSION_USER column, so it is included here):
     CREATE TABLE tracelog (
        SESSION_ID        NUMBER(20),
        SESSION_USER      VARCHAR2(30 BYTE),
        OS_USER           VARCHAR2(30 BYTE),
        HOST              VARCHAR2(30 BYTE),
        IP_ADDRESS        VARCHAR2(30 BYTE),
        TERMINAL          VARCHAR2(30 BYTE),
        MODULE            VARCHAR2(30 BYTE),
        LAST_ACTION       VARCHAR2(32 BYTE),
        LOGON_DATE        DATE,
        LOGOFF_DATE       DATE
     );
  2. Grant insert and update privileges on the TRACELOG table to PUBLIC
  3. Create Log On Trigger. For example:
    CREATE OR REPLACE TRIGGER on_logon_trigger
    after logon on database
    begin
       insert into tracelog(session_id, session_user, os_user, host, ip_address, terminal, module, logon_date)
       select sys_context('USERENV', 'sessionid'),
              sys_context('USERENV', 'session_user'),
              sys_context('USERENV', 'os_user'),
              sys_context('USERENV', 'host'),
              sys_context('USERENV', 'ip_address'),
              sys_context('USERENV', 'terminal'),
              sys_context('USERENV', 'module'),
              sysdate
       FROM dual;
    end;
  4. Create Log Off Trigger. For example:
    CREATE OR REPLACE TRIGGER on_logoff_trigger
    before logoff on database
    begin
       update tracelog
       set logoff_date = sysdate
       where session_id = sys_context('USERENV', 'sessionid')
       and logoff_date is null;
    end;

Now you can trace user access by querying the TRACELOG table.
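With the triggers in place, a query like the following (column names taken from the example table above) lists recent sessions; sessions still open have a NULL LOGOFF_DATE:

```sql
select session_user, os_user, ip_address, logon_date, logoff_date
from tracelog
order by logon_date desc;
```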

February 21, 2010

Another Management Tool for Firebird, and Other DB…

Filed under: Firebird — vicker313 @ 9:39 am

I recommended Flamerobin as a Firebird management tool in an earlier post, because it is free and lightweight. Recently I found another tool called EMS SQL Manager for InterBase/Firebird. It is commercial software that has a free version. The free version does not expire, but it shows a splash screen that halts the program for a while during startup. However, I believe the wait is worth it, because the tool has a bunch of features that every developer wants.

To use EMS SQL Manager (I downloaded the standalone version, which needs no installation), you will need the Firebird access library found in the Firebird binary folder, depending on what type of server you want to connect to: for server-based Firebird, you need to tell the Manager where fbclient.dll is located; for embedded Firebird, you need to point it to fbembed.dll.

Other than Firebird, there are tools for other database such as:

  1. EMS SQL Manager for MySQL
  2. EMS SQL Manager for MSSQL
  3. EMS SQL Manager for Oracle

January 7, 2010

Migrate Data using Transportable Tablespace in Oracle

Filed under: Oracle — vicker313 @ 8:00 am

There are a few types of data migration methods: offline migration, migration using the exp and imp utilities, migration using the expdp and impdp utilities, and of course using a transportable tablespace. You can refer to Oracle Database Cross Platform Transportable Tablespace for more details.

To perform migration using transportable tablespace:

  1. At source database, make the tablespace readonly
    alter tablespace users read only;
  2. Export it as SYSDBA
    exp file=exp.dmp log=exp.log tablespaces=users transport_tablespace=y
  3. Copy over the dump file and the tablespace data files to the destination
  4. At source database, put the tablespace back to read write mode
    alter tablespace users read write;
  5. If the same tablespace exists in the destination database, take it offline and drop it.
    drop tablespace users including contents and datafiles;
  6. Lastly, import the copied dump file:
    imp file=exp.dmp log=exp.log tablespaces=users transport_tablespace=y datafiles=('/u01/users01.dbf', '/u01/users02.dbf')
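Before exporting, it may be worth checking that the tablespace is self-contained; a sketch using the DBMS_TTS package (run as SYSDBA on the source database):

```sql
exec dbms_tts.transport_set_check('USERS', true);
-- any rows returned mean the tablespace references objects outside the set
select * from transport_set_violations;
```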

If there is an error such as “Package DBMS_PLUGTS does not exist”, try:

  1. Execute this SQL file as the SYS user to create the package: {ORACLE_HOME}/rdbms/admin/dbmsplts.sql
  2. Log in as the user you want to import the data into, and create a synonym for the package
    create synonym dbms_plugts for sys.dbms_plugts;
  3. Re-run the import command

December 28, 2009

Reduce Size of TEMP Tablespace

Filed under: Oracle — vicker313 @ 8:00 am

Similar to Reduce Size of UNDO Tablespace, you need to recreate the temporary tablespace in order to reduce its size. Issuing a resize statement might result in the error below:

alter database tempfile '/u01/database/temp1.dbf' resize 250M
ORA-03297: file contains used data beyond requested RESIZE value

To recreate the temporary tablespace, you need to create a second temporary tablespace:

  1. Create second temporary tablespace
    create temporary tablespace temp2 tempfile '/u01/database/temp2.dbf' size 5M reuse autoextend on next 1M maxsize unlimited extent management local uniform size 1M;
  2. Set second temporary tablespace as default
    alter database default temporary tablespace temp2;
  3. Drop the first temporary tablespace
    drop tablespace temp including contents and datafiles;
  4. Recreate first temporary tablespace
    create temporary tablespace temp tempfile '/u01/database/TEMP01.dbf' size 300M reuse autoextend on next 50M maxsize unlimited extent management local uniform size 1M;
  5. Set first temporary tablespace as default
    alter database default temporary tablespace temp;
  6. Drop the second tablespace
    drop tablespace temp2 including contents and datafiles;
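After the last step you can confirm the new size and that TEMP is the default temporary tablespace again, for example:

```sql
select file_name, bytes/1024/1024 as size_mb from dba_temp_files;
select property_value from database_properties
where property_name = 'DEFAULT_TEMP_TABLESPACE';
```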

December 21, 2009

Reduce Size of UNDO Tablespace

Filed under: Oracle — vicker313 @ 8:00 am

The UNDO tablespace can grow quickly if long, uncommitted transactions are executed; sometimes it even takes up the whole hard disk and crashes the database, to the point where the database cannot even be started up.

To reduce the size of the data file (especially when it has taken all your hard disk space), we need to recreate the default UNDO tablespace, because a resize data file statement like the one below will not work:
alter database datafile 'undo1.dbf' resize 100M;

  1. Log in as sysdba
    connect / as sysdba
  2. Start the database if it is not already open (creating a tablespace requires an open database)
    startup
  3. Create a temporary UNDO tablespace anywhere with free space
    create undo tablespace undo2 datafile '/tmp/undo2.dbf' size 50M autoextend off;
  4. Set the temporary UNDO tablespace as the default UNDO tablespace
    alter system set undo_tablespace = undo2;
  5. Drop the original UNDO tablespace.
    drop tablespace undo1 including contents and datafiles;
  6. Recreate the original UNDO tablespace.
    create undo tablespace undo1 datafile '/u01/database/undo1.dbf' size 500M autoextend on next 5M maxsize 1000M;
  7. Set the original UNDO tablespace back as the default UNDO tablespace
    alter system set undo_tablespace = undo1;
  8. Drop the temporary UNDO tablespace
    drop tablespace undo2 including contents and datafiles;
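You can confirm which UNDO tablespace is active and its new size, for example (the tablespace name UNDO1 is from the example above):

```sql
show parameter undo_tablespace
select file_name, bytes/1024/1024 as size_mb
from dba_data_files
where tablespace_name = 'UNDO1';
```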

November 7, 2009

Drop Data File from Oracle Database

Filed under: Oracle — vicker313 @ 9:29 am

You want to start an Oracle database, but it fails because one particular data file is missing (for some reason), and you want to drop that data file in order to start the database:

  1. Log in SQLPLUS as SYSDBA
    sqlplus / as sysdba
  2. Start database in mount mode
    startup mount
  3. Then remove the missing data file with the statement below
    alter database datafile 'missing datafile' offline drop
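If you are not sure which data file is missing, you can list files that need attention while the database is mounted, for example:

```sql
-- run while the database is in mount mode
select file#, error from v$recover_file;
-- map the file number back to its path
select file#, name from v$datafile;
```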

October 28, 2009

Trace Oracle Database Transactions with LogMiner

Filed under: Oracle — vicker313 @ 8:54 am

In the Oracle file structure, one type of file is the redo log file, which records all the transactions applied to the database. There is a finite number of redo log files, and they are reused in a merry-go-round fashion. If the database has archive logging enabled, each redo log file is duplicated into an archived log before it is reused.

Aside from recovery purposes, archived logs can also serve as transaction tracking material. However, the log itself is in binary form, so we need a utility called LogMiner to decode it.

  1. Before you can use LogMiner you need to install the package by executing the installation SQL file as the SYS user (typically {ORACLE_HOME}/rdbms/admin/dbmslm.sql):
  2. Then grant the necessary privilege to the user who will execute the LogMiner package (let’s say user James):
    grant execute_catalog_role to james;
  3. Then create synonym for the LogMiner package:
    create public synonym dbms_logmnr for sys.dbms_logmnr;
  4. You can query the list of archived logs, for example: select name from v$archived_log;

There are two ways to read an archived log: read it at the database that produced it, or move it to another database and read it there. To read the log at the same database:

  1. Check where your archived logs are placed (you may check the log_archive_dest parameter). Assume it is /u01/db/archive and the log you want to decode is 123.arc.
  2. Run statement below to add the archived log to the dictionary:
    exec DBMS_LOGMNR.ADD_LOGFILE('/u01/db/archive/123.arc');
  3. Then run statement below to start the LogMiner:
    exec DBMS_LOGMNR.START_LOGMNR(options => dbms_logmnr.dict_from_online_catalog);

To read the log at a different database:

  1. Move the archived logs to the other database server (let’s say you placed them at /u01/temp)
  2. Run the statement below to build a data dictionary file for the archived log (you can change the file name and directory): exec DBMS_LOGMNR_D.BUILD('logmnr.dat', '/u01/temp');
  3. Then run the statement below to add the log file to the dictionary:
    exec DBMS_LOGMNR.ADD_LOGFILE('/u01/temp/123.arc');
  4. Start the LogMiner:
    exec DBMS_LOGMNR.START_LOGMNR(dictfilename => '/u01/temp/logmnr.dat');

Either way, you can query the log content through the view v$logmnr_contents. However, the data in the view is gone after you close the session, which means you need to add the log file and start LogMiner again to get it back. Some useful columns include:

  1. username: the database user used
  2. timestamp: when it happened
  3. sql_redo: the SQL statement of what was actually done
  4. sql_undo: the SQL statement to undo the transaction
  5. scn: system change number, the unique id of this transaction
  6. seg_type_name: object type
  7. table_space: which tablespace it came from
  8. session#: session number
  9. seg_owner: object owner
  10. table_name: table name
  11. operation: update, insert, delete, commit or rollback
  12. session_info: includes info such as the originating machine
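Putting it together, a sketch that queries the decoded contents for one table and then ends the LogMiner session (the owner and table names are examples):

```sql
select username, timestamp, operation, sql_redo, sql_undo
from v$logmnr_contents
where seg_owner = 'JAMES'
and table_name = 'ORDERS';
-- release the LogMiner session when done
exec dbms_logmnr.end_logmnr;
```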

October 21, 2009

Recovery Strategy for an Oracle Database with Archive Log Enabled

Filed under: Oracle — vicker313 @ 8:00 am

After we set up the Backup Strategy, we need to prepare for recovery. Basically it involves two major steps: restore and recover.

Restoring the database simply means uncompressing (e.g. untarring) the full backup from the backup tape:

  1. Uncompress the full backup from the tape (assume the tape device is /dev/st0)
    tar xvf /dev/st0
  2. Don’t try to start the database before you finish the next step, recovering the database.

The difference between restore and recover is this: restore directly extracts files from a backup, while recover rebuilds the data of an outdated database (such as one from a full backup) to its state at a particular moment.

  1. Remove all the archived logs in the archived log directory (assume it is /u01/oracle/archive)
    rm -fv /u01/oracle/archive/*
  2. Uncompress the daily backup (which contains the archived logs).
    tar xvf /dev/st0
  3. Start SQLPLUS as SYSDBA.
    sqlplus / as sysdba
  4. Start database in mount mode
    startup mount;
  5. In case you have problems starting up the database because the parameter file is not set, issue the command below and try again:
    create spfile from pfile = 'dest_of_your_parameter_file';
  6. Then issue command below to start recover from archived log:
    recover database using backup controlfile until cancel
  7. It will keep asking you to confirm the name of the archived log to apply. Press Enter, type the location of the archived log, or type cancel to stop the recovery.
  8. After recovery, open the database by issuing the command below. Be aware that you won’t be able to continue recovering archived logs after issuing this command; otherwise you need to start all over again from restoring the database:
    alter database open resetlogs;
  9. In case you have problems opening the database, add _allow_resetlogs_corruption=true to the parameter file, issue the command below and try again.
    create spfile from pfile = 'dest_of_your_parameter_file';
