Friday, January 9, 2009

Oracle Streams Step by Step setup

The first step is to create our streams administrator. I will follow the guidelines from the oracle docs exactly for this:
Connect as sysdba:
sqlplus / as sysdba

Create the streams tablespace (change the name and/or location to suit):
create tablespace streams_tbs datafile 'c:\temp\stream_tbs.dbf' size 25M
reuse autoextend on maxsize unlimited;

Create our streams administrator:

create user strmadmin identified by strmadmin
default tablespace streams_tbs
quota unlimited on streams_tbs;

I haven't quite figured out why, but we need to grant our administrator DBA privileges. I think this is a bad thing. There is probably a workaround where I could do some direct grants instead, but I haven't had time to track those down.
grant dba to strmadmin;

We also want to grant streams admin privs to the user.

BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'strmadmin',
grant_privileges => true);
END;
/
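
If you want to see exactly which privileges that call hands out (and review them before they run), GRANT_ADMIN_PRIVILEGE can also write the grants to a script instead of executing them. A minimal sketch, assuming a directory object named STRM_DIR already exists and points somewhere writable:

BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'strmadmin',
grant_privileges => false,
file_name => 'streams_admin_grants.sql',
directory_name => 'STRM_DIR');
END;
/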

The next steps we'll run as the HR user.
conn hr/hr

Grant all access to the employee table to the streams admin:
grant all on hr.employees to strmadmin;

We also need to create the employee_audit table. Note that I am adding three columns to this table that do not exist in the employees table.

CREATE TABLE employee_audit(
employee_id NUMBER(6),
first_name VARCHAR2(20),
last_name VARCHAR2(25),
email VARCHAR2(25),
phone_number VARCHAR2(20),
hire_date DATE,
job_id VARCHAR2(10),
salary NUMBER(8,2),
commission_pct NUMBER(2,2),
manager_id NUMBER(6),
department_id NUMBER(4),
upd_date DATE,
user_name VARCHAR2(30),
action VARCHAR2(30));

Grant all access to the audit table to the streams admin user:
grant all on hr.employee_audit to strmadmin;

We connect as the streams admin user:
conn strmadmin/strmadmin

We can create a logging table. You would NOT want to do this in a high-volume production system. I am doing this to illustrate user defined monitoring and show how you can get inside the capture process.
CREATE TABLE streams_monitor (
date_and_time TIMESTAMP(6) DEFAULT systimestamp,
txt_msg CLOB );

Here we create the queue. Unlike AQ, where you have to create a separate table, this step creates the queue and the underlying ANYDATA table.
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmadmin.streams_queue_table',
queue_name => 'strmadmin.streams_queue');
END;
/
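
As a quick sanity check that the queue and its underlying table were created, you can query USER_QUEUES while still connected as strmadmin:

select name, queue_table, enqueue_enabled, dequeue_enabled
from user_queues;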

This just defines that we want to capture DML and not DDL.
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'hr.employees',
streams_type => 'capture',
streams_name => 'capture_emp',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => false,
inclusion_rule => true);
END;
/

Tell the capture process that we want to know who made the change:
BEGIN
DBMS_CAPTURE_ADM.INCLUDE_EXTRA_ATTRIBUTE(
capture_name => 'capture_emp',
attribute_name => 'username',
include => true);
END;
/

We also need to tell Oracle where to start our capture. Change the source_database_name to match your database.
DECLARE
iscn NUMBER;
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'hr.employees',
source_database_name => 'ORCL',
instantiation_scn => iscn);
END;
/
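
To confirm that the instantiation SCN was recorded (not required, just a sanity check), you can look at DBA_APPLY_INSTANTIATED_OBJECTS:

select source_object_owner, source_object_name, instantiation_scn
from dba_apply_instantiated_objects;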

And the fun part! This is where we define our capture procedure. I'm taking this right from the docs, but I'm adding a couple of steps.

CREATE OR REPLACE PROCEDURE emp_dml_handler(in_any IN ANYDATA) IS
lcr SYS.LCR$_ROW_RECORD;
rc PLS_INTEGER;
command VARCHAR2(30);
old_values SYS.LCR$_ROW_LIST;
BEGIN
-- Access the LCR
rc := in_any.GETOBJECT(lcr);
-- Get the object command type
command := lcr.GET_COMMAND_TYPE();

-- I am inserting the XML equivalent of the LCR into the monitoring table.
insert into streams_monitor (txt_msg)
values (command ||
DBMS_STREAMS.CONVERT_LCR_TO_XML(in_any) );

-- Set the command_type in the row LCR to INSERT
lcr.SET_COMMAND_TYPE('INSERT');
-- Set the object_name in the row LCR to EMPLOYEE_AUDIT
lcr.SET_OBJECT_NAME('EMPLOYEE_AUDIT');

-- Set the new values to the old values for update and delete
IF command IN ('DELETE', 'UPDATE') THEN
-- Get the old values in the row LCR
old_values := lcr.GET_VALUES('old');
-- Set the new values in the row LCR to the old values
lcr.SET_VALUES('new', old_values);
-- Set the old values in the row LCR to NULL
lcr.SET_VALUES('old', NULL);
END IF;

-- Add a SYSDATE for upd_date
lcr.ADD_COLUMN('new', 'UPD_DATE', ANYDATA.ConvertDate(SYSDATE));
-- Add a user column
lcr.ADD_COLUMN('new', 'user_name',
lcr.GET_EXTRA_ATTRIBUTE('USERNAME') );
-- Add an action column
lcr.ADD_COLUMN('new', 'ACTION', ANYDATA.ConvertVarChar2(command));

-- Make the changes
lcr.EXECUTE(true);
commit;
END;
/

Create the DML handlers:
BEGIN
DBMS_APPLY_ADM.SET_DML_HANDLER(
object_name => 'hr.employees',
object_type => 'TABLE',
operation_name => 'INSERT',
error_handler => false,
user_procedure => 'strmadmin.emp_dml_handler',
apply_database_link => NULL,
apply_name => NULL);
END;
/
BEGIN
DBMS_APPLY_ADM.SET_DML_HANDLER(
object_name => 'hr.employees',
object_type => 'TABLE',
operation_name => 'UPDATE',
error_handler => false,
user_procedure => 'strmadmin.emp_dml_handler',
apply_database_link => NULL,
apply_name => NULL);
END;
/
BEGIN
DBMS_APPLY_ADM.SET_DML_HANDLER(
object_name => 'hr.employees',
object_type => 'TABLE',
operation_name => 'DELETE',
error_handler => false,
user_procedure => 'strmadmin.emp_dml_handler',
apply_database_link => NULL,
apply_name => NULL);
END;
/
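
To confirm that all three handlers are registered against hr.employees, you can check DBA_APPLY_DML_HANDLERS:

select object_owner, object_name, operation_name, user_procedure
from dba_apply_dml_handlers;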

Create the apply rule. This tells Streams, yet again, that we do in fact want to capture changes. The second call tells Streams where to put the info. Change source_database to match your database.

DECLARE
emp_rule_name_dml VARCHAR2(30);
emp_rule_name_ddl VARCHAR2(30);
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'hr.employees',
streams_type => 'apply',
streams_name => 'apply_emp',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => false,
source_database => 'ORCL',
dml_rule_name => emp_rule_name_dml,
ddl_rule_name => emp_rule_name_ddl);
DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
rule_name => emp_rule_name_dml,
destination_queue_name => 'strmadmin.streams_queue');
END;
/


We don't want to stop applying changes when there is an error, so:

BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'apply_emp',
parameter => 'disable_on_error',
value => 'n');
END;
/

Turn on the apply process:
BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'apply_emp');
END;
/

Turn on the capture process:
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'capture_emp');
END;
/
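
Both processes should now show a status of ENABLED. A quick way to check (and to spot a process that has aborted) is:

select capture_name, status from dba_capture;
select apply_name, status from dba_apply;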

Connect as HR and make some changes to Employees.
sqlplus hr/hr

INSERT INTO hr.employees VALUES(207, 'JOHN', 'SMITH',
'JSMITH@MYCOMPANY.COM',
NULL, '07-JUN-94', 'AC_ACCOUNT', 777, NULL, NULL, 110);
COMMIT;
UPDATE hr.employees SET salary=5999 WHERE employee_id=206;
COMMIT;
DELETE FROM hr.employees WHERE employee_id=207;
COMMIT;

It takes a few seconds for the data to make it to the logs and then back into the system to be applied. Run this query until you see data (remembering that it is not instantaneous):

SELECT employee_id, first_name, last_name, upd_Date, action
FROM hr.employee_audit
ORDER BY employee_id;

Then you can log back into the streams admin account:
sqlplus strmadmin/strmadmin

View the XML LCR that we inserted during the capture process:
set long 9999
set pagesize 0
select * from streams_monitor;

That's it! It's really not that much work to capture and apply changes using Oracle 10g. Of course, it's a little bit more work to cross database instances, but it's not that much.

One of the things that amazes me is how little code is required to accomplish this. The less code I have to write, the less code I have to maintain.

AQ 10g to 9i

NOTE: Streams is not available with Oracle 10g XE. Download and install EE. If you have 1 GB or more of RAM on your PC, you can download EE and use the DBCA to run two database instances. You do not physically need two machines to get this to work.

You need two Oracle instances for this. I used a 10g instance as my source (required) and a 9i database as my target (you could also use a 10g instance here). I called my first instance ORCL (how creative!) and I called my second instance SECOND. It works for me!

As I said, ORCL will be my source instance and SECOND will be my target instance. You should already have the CDC code from the last article running in ORCL. If you are not using the same names for your databases and you are not sure of the exact name of your databases (including domain), use global_name, i.e. select * from global_name;

ORCL must be in archivelog mode to run CDC. SECOND does not need archivelog mode. Having two databases running on a single PC in archivelog mode can really beat up a poor IDE drive.
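
If ORCL is not already in archivelog mode, the standard sequence to switch it over (run as sysdba; a sketch only, so adjust it for your own backup strategy) is:

shutdown immediate
startup mount
alter database archivelog;
alter database open;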

You already created your streams admin user in ORCL so now do the same thing in SECOND. The code below is mostly the same code that you ran on ORCL. I made a few minor changes in case you are running both instances on a single PC:
sqlplus / as sysdba

create tablespace streams_second_tbs datafile 'c:\temp\stream_2_tbs.dbf' size 25M
reuse autoextend on maxsize unlimited;

create user strmadmin identified by strmadmin
default tablespace streams_second_tbs
quota unlimited on streams_second_tbs;

grant dba to strmadmin;

Connect as strmadmin. You need to create an AQ table, AQ queue and then start the queue. That's what the code below does.
BEGIN
DBMS_AQADM.CREATE_QUEUE_TABLE(
queue_table => 'lrc_emp_t',
queue_payload_type => 'sys.anydata',
multiple_consumers => TRUE,
compatible => '8.1');


DBMS_AQADM.CREATE_QUEUE(
queue_name => 'lrc_emp_q',
queue_table => 'lrc_emp_t');


DBMS_AQADM.START_QUEUE (
queue_name => 'lrc_emp_q');
END;
/

You also need to create a database link. You have to have one from ORCL to SECOND but for debugging, I like a link in both. So, while you're in SECOND, create a link:
CREATE DATABASE LINK orcl.world
CONNECT TO strmadmin
IDENTIFIED BY strmadmin
USING 'orcl.world';

Log into ORCL as strmadmin and run the exact same command there. Most of the setup for this is exactly the same between the two instances.

Create your link on this side also.

CREATE DATABASE LINK second.world
CONNECT TO strmadmin
IDENTIFIED BY strmadmin
USING 'second.world';
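
Before setting up propagation, it's worth testing both links. A simple check from each side:

select * from global_name@second.world;   -- run this in ORCL
select * from global_name@orcl.world;     -- run this in SECOND

If either query hangs or errors, fix the link (or your tnsnames.ora) before continuing.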

Ok, now we have running queues in ORCL and SECOND. While you are logged into ORCL, you will create a propagation schedule. You DO NOT need to run this in SECOND.
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'hr.employees',
streams_name => 'orcl_2_second',
source_queue_name => 'strmadmin.lrc_emp_q',
destination_queue_name => 'strmadmin.lrc_emp_q@second.world',
include_dml => true,
include_ddl => FALSE,
source_database => 'orcl.world');
END;
/

This tells the database to take the data in the local lrc_emp_q and send it to the named destination queue.

We're almost done with the propagation now. We are going to modify the EMP_DML_HANDLER (from above) so that we get an enqueue block just above the execute statement:

CREATE OR REPLACE PROCEDURE emp_dml_handler(in_any IN ANYDATA) IS
lcr SYS.LCR$_ROW_RECORD;
rc PLS_INTEGER;
command VARCHAR2(30);
old_values SYS.LCR$_ROW_LIST;
BEGIN
-- Access the LCR
rc := in_any.GETOBJECT(lcr);
-- Get the object command type
command := lcr.GET_COMMAND_TYPE();
-- I am inserting the XML equivalent of the LCR into the monitoring table.
insert into streams_monitor (txt_msg)
values (command ||
DBMS_STREAMS.CONVERT_LCR_TO_XML(in_any) );
-- Set the command_type in the row LCR to INSERT
lcr.SET_COMMAND_TYPE('INSERT');
-- Set the object_name in the row LCR to EMPLOYEE_AUDIT
lcr.SET_OBJECT_NAME('EMPLOYEE_AUDIT');
-- Set the new values to the old values for update and delete
IF command IN ('DELETE', 'UPDATE') THEN
-- Get the old values in the row LCR
old_values := lcr.GET_VALUES('old');
-- Set the new values in the row LCR to the old values
lcr.SET_VALUES('new', old_values);
-- Set the old values in the row LCR to NULL
lcr.SET_VALUES('old', NULL);
END IF;
-- Add a SYSDATE value for the timestamp column
lcr.ADD_COLUMN('new', 'UPD_DATE', ANYDATA.ConvertDate(SYSDATE));
-- Add a user_name column for who made the change
lcr.ADD_COLUMN('new', 'user_name',
lcr.GET_EXTRA_ATTRIBUTE('USERNAME') );
-- Add an action column
lcr.ADD_COLUMN('new', 'ACTION', ANYDATA.ConvertVarChar2(command));


DECLARE
enqueue_options DBMS_AQ.enqueue_options_t;
message_properties DBMS_AQ.message_properties_t;
message_handle RAW(16);
recipients DBMS_AQ.aq$_recipient_list_t;
BEGIN
recipients(1) := sys.aq$_agent(
'anydata_subscriber',
'strmadmin.lrc_emp_q@second.world',
NULL);
message_properties.recipient_list := recipients;


DBMS_AQ.ENQUEUE(
queue_name => 'strmadmin.lrc_emp_q',
enqueue_options => enqueue_options,
message_properties => message_properties,
payload => anydata.convertObject(lcr),
msgid => message_handle);
EXCEPTION
WHEN OTHERS THEN
insert into streams_monitor (txt_msg)
values ('Anydata: ' || DBMS_UTILITY.FORMAT_ERROR_STACK );
END;


-- Make the changes
lcr.EXECUTE(true);
commit;
END;
/

The declaration section above created some variables required for an enqueue. We created a subscriber (that's the name of the consumer). We will use that name to dequeue the record in the SECOND instance.

We then enqueued our LCR as an ANYDATA datatype.

I put the exception handler there in case there are any problems with our enqueue.

That's all it takes. Insert some records into the HR.employees table and commit them. Then log into strmadmin@second and select * from the lrc_emp_t table. You should have as many records there as you inserted.
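
If you would rather pull a message off the queue on SECOND than just count rows in the queue table, you can dequeue it explicitly using the subscriber name we registered in the handler. A minimal sketch (run as strmadmin on SECOND with serveroutput on; the LCR arrives wrapped in an ANYDATA payload):

DECLARE
dequeue_options DBMS_AQ.dequeue_options_t;
message_properties DBMS_AQ.message_properties_t;
message_handle RAW(16);
payload SYS.ANYDATA;
BEGIN
dequeue_options.consumer_name := 'anydata_subscriber';
dequeue_options.wait := DBMS_AQ.NO_WAIT;
DBMS_AQ.DEQUEUE(
queue_name => 'strmadmin.lrc_emp_q',
dequeue_options => dequeue_options,
message_properties => message_properties,
payload => payload,
msgid => message_handle);
DBMS_OUTPUT.PUT_LINE('Dequeued a payload of type ' || payload.GetTypeName());
COMMIT;
END;
/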

There are not a lot of moving parts so there aren't many things that will go wrong. Propagation is where I have the most troubles. You can query DBA_PROPAGATION to see if you have any propagation errors.
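
A query along these lines shows each propagation and any error text:

select propagation_name, destination_dblink, error_message
from dba_propagation;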

That's it for moving the data from 10g to 9i.
Manual Creation of an LCR
This part of the paper is not required to move data from 10g to 9i. It is very handy if you need to manually create test data though. I recently had the need to create some test data in LCR format. If you have manual control of your LCR, you can test specific data issues a lot easier.

Anyway, I (and a coworker) searched the web and looked through various documentation but was not able to find a concise description of how to go about creating an LCR manually. So I decided to write one.

NOTE: Everything below is specifically for a ROW type LCR as opposed to a DDL type LCR. The concepts would be the same but the specific code would change.

First, a little definition. An LCR (Logical Change Record) is the format of the change data that Oracle captures from the redo logs and uses for Oracle Streams (and probably Data Guard, although I am guessing about that). The LCR has information about what the object is as well as the old and new values. The old and new values are exactly the same as :old and :new in a trigger.

You can see the definition of the LCR format by viewing the LCR XML Schema. http://download-east.oracle.com/docs/cd/B14117_01/server.101/b10727/ap_xmlsc.htm

An LCR is an object type. Actually, it is two table collections of an object type embedded within another object type. You can get the details of that from the documentation in the supplied PL/SQL Packages and Types documentation. http://www.oracle.com/pls/db102/to_toc?pathname=appdev.102%2Fb14258%2Ftoc.htm&remark=portal+%28Application+development%29

The short story is that an LCR stores object-level information (database, object name, owner, and command type) at the top level. Beneath that is column and data information in a name/value pair collection.

I tend to think of things like this in a relational format. If I put it in database terms, it might look something like:


Figure 1 - LCR Structure
Excuse the poor diagram. An artiste I am not.

You can do a describe to see the methods as well as view the documentation. The important thing to note is the constructor. Normally, a constructor has the same name as the object type. In this case, they chose to name it CONSTRUCT.
SQL> desc sys.lcr$_row_record
METHOD
------
STATIC FUNCTION CONSTRUCT RETURNS LCR$_ROW_RECORD
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
SOURCE_DATABASE_NAME VARCHAR2 IN
COMMAND_TYPE VARCHAR2 IN
OBJECT_OWNER VARCHAR2 IN
OBJECT_NAME VARCHAR2 IN
TAG RAW IN DEFAULT
TRANSACTION_ID VARCHAR2 IN DEFAULT
SCN NUMBER IN DEFAULT
OLD_VALUES LCR$_ROW_LIST IN DEFAULT
NEW_VALUES LCR$_ROW_LIST IN DEFAULT

Based on that info, populating the test LCR is relatively straightforward.

Just for your info, the type LCR$_ROW_LIST is a collection of LCR$_ROW_UNIT. Those types are also documented in the reference guide I mentioned above. You will not want to access those directly though. You can use the built-in LCR$_ROW_RECORD methods to populate those fields.

One thing to remember is that the data values you are dealing with are SYS.ANYDATA datatypes. That type has its own rules and deserves a blog entry all to itself.
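
As a tiny illustration of those rules, here is the convert/access round trip for a VARCHAR2 (the Convert* functions wrap a value, the Access* functions unwrap it, and GetTypeName tells you what is inside; run with serveroutput on):

DECLARE
v_any SYS.ANYDATA;
v_text VARCHAR2(30);
BEGIN
v_any := SYS.ANYDATA.ConvertVarchar2('hello');
v_text := SYS.ANYDATA.AccessVarchar2(v_any);
DBMS_OUTPUT.PUT_LINE(v_any.GetTypeName() || ': ' || v_text);
END;
/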

Here is a function that will create an empty LCR for you automatically for any table. Once you have the LCR you can modify the values to suit.

CREATE OR REPLACE FUNCTION create_lcr(
p_table_owner IN all_tables.owner%TYPE,
p_table_name IN all_tables.table_name%TYPE,
p_command IN VARCHAR2 )
RETURN sys.lcr$_row_record
AS
v_lcr sys.lcr$_row_record;
v_database global_name.global_name%TYPE;
BEGIN

-- verify the command type
IF p_command NOT IN ('INSERT', 'UPDATE', 'DELETE')
THEN
RETURN v_lcr;
END IF;

-- Get the database name
-- This could be parameterized
SELECT global_name
INTO v_database
FROM global_name;

-- Construct the LCR
v_lcr := sys.lcr$_row_record.construct(
source_database_name => v_database,
command_type => p_command,
object_owner => p_table_owner,
object_name => p_table_name );

-- You can override the values in the constructor by calling these methods
v_lcr.set_command_type(p_command);
v_lcr.set_object_name(p_table_name);
v_lcr.set_object_owner(p_table_owner);
v_lcr.set_source_database_name(v_database);

-- Loop through the columns and add new and old values
FOR c1 IN (
SELECT column_name, data_type
FROM all_tab_columns
WHERE owner = p_table_owner
AND table_name = p_table_name
ORDER BY column_id )
LOOP

-- Create an anydata based on column data type
-- You would expand this for all data types
-- I'm going to keep this example fairly simple
CASE c1.data_type
WHEN 'VARCHAR2'
THEN
v_lcr.add_column('new', c1.column_name, sys.AnyData.convertVarChar2(TO_CHAR(NULL)));
v_lcr.add_column('old', c1.column_name, sys.AnyData.convertVarChar2(TO_CHAR(NULL)));
WHEN 'DATE'
THEN
v_lcr.add_column('new', c1.column_name, sys.AnyData.convertDate(TO_DATE(NULL)));
v_lcr.add_column('old', c1.column_name, sys.AnyData.convertDate(TO_DATE(NULL)));
WHEN 'NUMBER'
THEN
v_lcr.add_column('new', c1.column_name, sys.AnyData.convertNumber(TO_NUMBER(NULL)));
v_lcr.add_column('old', c1.column_name, sys.AnyData.convertNumber(TO_NUMBER(NULL)));
END CASE;
END LOOP;

RETURN v_lcr;
END;
/

To call this function and manipulate it, you might write something like the following:
DECLARE
v_lcr sys.lcr$_row_record;
BEGIN
v_lcr := create_lcr( 'HR', 'EMPLOYEES', 'INSERT' );

-- Set some values
v_lcr.set_value('new', 'first_name', sys.anyData.convertVarchar2('Lewis'));
v_lcr.set_value('old', 'first_name', sys.anyData.convertVarchar2('George'));

-- Display Some Values
DBMS_OUTPUT.PUT_LINE(
'Database: ' || v_lcr.get_source_database_name() ||
', Object Owner: ' || v_lcr.get_object_owner() ||
', Object Name: ' || v_lcr.get_object_name() ||
', Command: ' || v_lcr.get_command_type() );

DBMS_OUTPUT.PUT_LINE(
'New First Name: ' || sys.anyData.accessVarchar2(v_lcr.get_value('new', 'first_name')) ||
', Old First Name: ' || sys.anyData.accessVarchar2(v_lcr.get_value('old', 'first_name')) );
END;
/

The output from this is:
Database: XE, Object Owner: HR, Object Name: EMPLOYEES, Command: INSERT
New First Name: Lewis, Old First Name: George

My Logminer Document

Oracle LogMiner

• Oracle LogMiner, which is part of the Oracle database, enables you to query online and archived redo log files.
• It can be used from the command-line interface.
• You can also use the LogMiner Viewer from Oracle Enterprise Manager.
• For example, if you insert some wrong data into a table or delete some important data, you can use the LogMiner utility to recover the database to its previous state.
• For that you need to query the V$LOGMNR_CONTENTS view.

• LogMiner can be used to track all the DDL and DML statements that are executed on the database.

LogMiner configuration:

There are four basic objects in the LogMiner configuration:
1) Source database
2) Mining database
3) LogMiner dictionary
4) Redo log files containing the data of interest

• Source database: the database that produces all the redo log files containing the data of interest.
• Mining database: the database that LogMiner uses when it analyzes the redo log files.
• LogMiner dictionary: LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data.

For example, consider the following SQL statement:
INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY) VALUES('IT_WT','Technical Writer', 4000, 11000);

Without the dictionary, LogMiner will display:
insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values
(HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'),
HEXTORAW('c229'),HEXTORAW('c3020b'));

• The redo log files contain the changes made to the database or the database dictionary.

• You must enable supplemental logging prior to generating the redo log files that will be analyzed by LogMiner.


To enable supplemental logging:

SQL> alter database add supplemental log data;

Database altered.

To check whether supplemental logging is enabled, view the SUPPLEMENTAL_LOG_DATA_MIN column in the V$DATABASE view.

If the value is YES or IMPLICIT, supplemental logging is enabled.

• Set utl_file_dir in the pfile or spfile so that the database can create files in the specified area. Here I set it like this:

utl_file_dir = /usr/tmp, /d01/oracle


Then execute dbms_logmnr_d.build('dictionary.ora','/d01/oracle');

Check that the file dictionary.ora has been created in the directory you specified in the procedure.

Then create some archived redo logs by using:

SQL> alter system archive log current;

Repeat this step 4-5 times to get 5 archived redo log files.


Then register your archived log files by using:

execute dbms_logmnr.add_logfile('/d01/oracle/oracle/archivedlog.arc', dbms_logmnr.addfile);

Add all the archived redo log files this way.

alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';

select low_scn, high_scn, low_time, high_time from v$logmnr_logs;

Note down the values, and then execute:

execute dbms_logmnr.start_logmnr(dictfilename => '<dict location>', starttime => to_date('<start time>','yyyy-mm-dd hh24:mi:ss'), endtime => to_date('<end time>','yyyy-mm-dd hh24:mi:ss'));
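
Once LogMiner has started, the reconstructed SQL is available in V$LOGMNR_CONTENTS (covered in more detail below). A minimal look at the results, followed by closing the session:

select username, sql_redo, sql_undo
from v$logmnr_contents
where seg_owner = 'HR';

execute dbms_logmnr.end_logmnr();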

LogMiner

Using the LogMiner utility, you can query the contents of online redo log files and archived log files. Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis.

LogMiner Configuration

There are three basic objects in a LogMiner configuration that you should be familiar with: the source database, the LogMiner dictionary, and the redo log files containing the data of interest:

* The source database is the database that produces all the redo log files that you want LogMiner to analyze.
* The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request.

LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data.

For example, consider the following SQL statement:

INSERT INTO HR.JOBS(JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY) VALUES('IT_WT','Technical Writer', 4000, 11000);



Without the dictionary, LogMiner will display:

insert into "UNKNOWN"."OBJ# 45522"("COL 1","COL 2","COL 3","COL 4") values

(HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'),

HEXTORAW('c229'),HEXTORAW('c3020b'));



* The redo log files contain the changes made to the database or database dictionary.



LogMiner Dictionary Options

LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to you. LogMiner gives you three options for supplying the dictionary:

* Using the Online Catalog

Oracle recommends that you use this option when you will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option.

* Extracting a LogMiner Dictionary to the Redo Log Files

Oracle recommends that you use this option when you do not expect to have access to the source database from which the redo log files were created, or if you anticipate that changes will be made to the column definitions in the tables of interest.

* Extracting the LogMiner Dictionary to a Flat File

This option is maintained for backward compatibility with previous releases. This option does not guarantee transactional consistency. Oracle recommends that you use either the online catalog or extract the dictionary from redo log files instead.

Using the Online Catalog

To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start LogMiner, as follows:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(-
  OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

Extracting a LogMiner Dictionary to the Redo Log Files

To extract a LogMiner dictionary to the redo log files, the database must be open and in ARCHIVELOG mode and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed. Therefore, the dictionary extracted to the redo log files is guaranteed to be consistent (whereas the dictionary extracted to a flat file is not).

To extract dictionary information to the redo log files, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a filename or location.

SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

Extracting the LogMiner Dictionary to a Flat File

When the LogMiner dictionary is in a flat file, fewer system resources are used than when it is contained in the redo log files. Oracle recommends that you regularly back up the dictionary extract to ensure correct analysis of older redo log files.

1. Set the initialization parameter, UTL_FILE_DIR, in the initialization parameter file. For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, enter the following in the initialization parameter file:

UTL_FILE_DIR = /oracle/database



2. Start the Database

SQL> startup

3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary file. For example, enter the following to create the file dictionary.ora in /oracle/database:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora','/oracle/database/', -
  DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);

Redo Log File Options

To mine data in the redo log files, LogMiner needs information about which redo log files to mine.

You can direct LogMiner to automatically and dynamically create a list of redo log files to analyze, or you can explicitly specify a list of redo log files for LogMiner to analyze, as follows:

Automatically

If LogMiner is being used on the source database, then you can direct LogMiner to find and create a list of redo log files for analysis automatically. Use the CONTINUOUS_MINE option when you start LogMiner.

Manually

Use the DBMS_LOGMNR.ADD_LOGFILE procedure to manually create a list of redo log files before you start LogMiner. After the first redo log file has been added to the list, each subsequently added redo log file must be from the same database and associated with the same database RESETLOGS SCN. When using this method, LogMiner need not be connected to the source database.

Example: Finding All Modifications in the Current Redo Log File

The easiest way to examine the modification history of a database is to mine at the source database and use the online catalog to translate the redo log files. This example shows how to do the simplest analysis using LogMiner.

Step 1 Specify the list of redo log files to be analyzed.

Specify the redo log files which you want to analyze.

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  LOGFILENAME => '/usr/oracle/ica/log1.ora', -
  OPTIONS => DBMS_LOGMNR.NEW);

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  LOGFILENAME => '/u01/oracle/ica/log2.ora', -
  OPTIONS => DBMS_LOGMNR.ADDFILE);

Step 2 Start LogMiner.

Start LogMiner and specify the dictionary to use.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

Step 3 Query the V$LOGMNR_CONTENTS view.

Note that there are four transactions (two of them were committed within the redo log file being analyzed, and two were not). The output shows the DML statements in the order in which they were executed; thus transactions interleave among themselves.

SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');



USR XID SQL_REDO SQL_UNDO

---- --------- ----------------------------------------------------

HR 1.11.1476 set transaction read write;



HR 1.11.1476 insert into "HR"."EMPLOYEES"( delete from "HR"."EMPLOYEES"

"EMPLOYEE_ID","FIRST_NAME", where "EMPLOYEE_ID" = '306'

"LAST_NAME","EMAIL", and "FIRST_NAME" = 'Mohammed'

"PHONE_NUMBER","HIRE_DATE", and "LAST_NAME" = 'Sami'

"JOB_ID","SALARY", and "EMAIL" = 'MDSAMI'

"COMMISSION_PCT","MANAGER_ID", and "PHONE_NUMBER" = '1234567890'

"DEPARTMENT_ID") values and "HIRE_DATE" = TO_DATE('10-JAN-2003

('306','Mohammed','Sami', 13:34:43', 'dd-mon-yyyy hh24:mi:ss')

'MDSAMI', '1234567890', and "JOB_ID" = 'HR_REP' and

TO_DATE('10-jan-2003 13:34:43', "SALARY" = '120000' and

'dd-mon-yyyy hh24:mi:ss'), "COMMISSION_PCT" = '.05' and

'HR_REP','120000', '.05', "DEPARTMENT_ID" = '10' and

'105','10'); ROWID = 'AAAHSkAABAAAY6rAAO';



OE 1.1.1484 set transaction read write;



OE 1.1.1484 update "OE"."PRODUCT_INFORMATION" update "OE"."PRODUCT_INFORMATION"

set "WARRANTY_PERIOD" = set "WARRANTY_PERIOD" =

TO_YMINTERVAL('+05-00') where TO_YMINTERVAL('+01-00') where

"PRODUCT_ID" = '1799' and "PRODUCT_ID" = '1799' and

"WARRANTY_PERIOD" = "WARRANTY_PERIOD" =

TO_YMINTERVAL('+01-00') and TO_YMINTERVAL('+05-00') and

ROWID = 'AAAHTKAABAAAY9mAAB'; ROWID = 'AAAHTKAABAAAY9mAAB';



OE 1.1.1484 update "OE"."PRODUCT_INFORMATION" update "OE"."PRODUCT_INFORMATION"

set "WARRANTY_PERIOD" = set "WARRANTY_PERIOD" =

TO_YMINTERVAL('+05-00') where TO_YMINTERVAL('+01-00') where

"PRODUCT_ID" = '1801' and "PRODUCT_ID" = '1801' and

"WARRANTY_PERIOD" = "WARRANTY_PERIOD" =

TO_YMINTERVAL('+01-00') and TO_YMINTERVAL('+05-00') and

ROWID = 'AAAHTKAABAAAY9mAAC'; ROWID ='AAAHTKAABAAAY9mAAC';



HR 1.11.1476 insert into "HR"."EMPLOYEES"( delete from "HR"."EMPLOYEES"

"EMPLOYEE_ID","FIRST_NAME", "EMPLOYEE_ID" = '307' and

"LAST_NAME","EMAIL", "FIRST_NAME" = 'John' and

"PHONE_NUMBER","HIRE_DATE", "LAST_NAME" = 'Silver' and

"JOB_ID","SALARY", "EMAIL" = 'JSILVER' and

"COMMISSION_PCT","MANAGER_ID", "PHONE_NUMBER" = '5551112222'

"DEPARTMENT_ID") values and "HIRE_DATE" = TO_DATE('10-jan- 2003

('307','John','Silver', 13:41:03', 'dd-mon-yyyy hh24:mi:ss')

'JSILVER', '5551112222', and "JOB_ID" ='105' and

"DEPARTMENT_ID"

TO_DATE('10-jan-2003 13:41:03', = '50' and ROWID =

'AAAHSkAABAAAY6rAAP';

'dd-mon-yyyy hh24:mi:ss'),

'SH_CLERK','110000', '.05',

'105','50');



OE 1.1.1484 commit;



HR 1.15.1481 set transaction read write;



HR 1.15.1481 delete from "HR"."EMPLOYEES" insert into "HR"."EMPLOYEES"(

where "EMPLOYEE_ID" = '205' and "EMPLOYEE_ID","FIRST_NAME",

"FIRST_NAME" = 'Shelley' and "LAST_NAME","EMAIL","PHONE_NUMBER",

"LAST_NAME" = 'Higgins' and "HIRE_DATE", "JOB_ID","SALARY",

"EMAIL" = 'SHIGGINS' and "COMMISSION_PCT","MANAGER_ID",

"PHONE_NUMBER" = '515.123.8080' "DEPARTMENT_ID") values

and "HIRE_DATE" = TO_DATE( ('205','Shelley','Higgins',

'07-jun-1994 10:05:01', and 'SHIGGINS','515.123.8080',

'dd-mon-yyyy hh24:mi:ss') TO_DATE('07-jun-1994 10:05:01',

and "JOB_ID" = 'AC_MGR' 'dd-mon-yyyy hh24:mi:ss'),

and "SALARY"= '12000' 'AC_MGR','12000',NULL,'101','110');

and "COMMISSION_PCT" IS NULL

and "MANAGER_ID"

= '101' and "DEPARTMENT_ID" =

'110' and ROWID =

'AAAHSkAABAAAY6rAAM';





OE 1.8.1484 set transaction read write;



OE 1.8.1484 update "OE"."PRODUCT_INFORMATION" update "OE"."PRODUCT_INFORMATION"

set "WARRANTY_PERIOD" = set "WARRANTY_PERIOD" =

TO_YMINTERVAL('+12-06') where TO_YMINTERVAL('+20-00') where

"PRODUCT_ID" = '2350' and "PRODUCT_ID" = '2350' and

"WARRANTY_PERIOD" = "WARRANTY_PERIOD" =

TO_YMINTERVAL('+20-00') and TO_YMINTERVAL('+20-00') and

ROWID = 'AAAHTKAABAAAY9tAAD'; ROWID ='AAAHTKAABAAAY9tAAD';



HR 1.11.1476 commit;

Step 4 End the LogMiner session.

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

Example of Mining Without Specifying the List of Redo Log Files Explicitly

The previous example explicitly specified the redo log file or files to be mined. However, if you are mining in the same database that generated the redo log files, then you can mine the appropriate list of redo log files by just specifying the time (or SCN) range of interest. To mine a set of redo log files without explicitly specifying them, use the DBMS_LOGMNR.CONTINUOUS_MINE option to the DBMS_LOGMNR.START_LOGMNR procedure, and specify either a time range or an SCN range of interest.

Example : Mining Redo Log Files in a Given Time Range

This example assumes that you want to use the data dictionary extracted to the redo log files.

Step 1 Determine the timestamp of the redo log file that contains the start of the data dictionary.

SQL> SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG
  WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG
  WHERE DICTIONARY_BEGIN = 'YES');

NAME                                          FIRST_TIME
--------------------------------------------  --------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf  10-jan-2003 12:01:34



Step 2 Display all the redo log files that have been generated so far.

This step is not required, but is included to demonstrate that the CONTINUOUS_MINE option works as expected, as will be shown in Step 4.

SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS
  WHERE LOW_TIME > '10-jan-2003 12:01:34';

NAME
----------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf

Step 3 Start LogMiner.

Start LogMiner by specifying the dictionary to use and the COMMITTED_DATA_ONLY, PRINT_PRETTY_SQL, and CONTINUOUS_MINE options.

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(-
  STARTTIME => '10-jan-2003 12:01:34', -
  ENDTIME => SYSDATE, -
  OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
             DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
             DBMS_LOGMNR.PRINT_PRETTY_SQL + -
             DBMS_LOGMNR.CONTINUOUS_MINE);

Step 4 Query the V$LOGMNR_LOGS view.

This step shows that the DBMS_LOGMNR.START_LOGMNR procedure with the CONTINUOUS_MINE option includes all of the redo log files that have been generated so far, as expected. (Compare the output in this step to the output in Step 2.)

SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS;

NAME
------------------------------------------------------
/usr/oracle/data/db1arch_1_207_482701534.dbf
/usr/oracle/data/db1arch_1_208_482701534.dbf
/usr/oracle/data/db1arch_1_209_482701534.dbf
/usr/oracle/data/db1arch_1_210_482701534.dbf



Step 5 Query the V$LOGMNR_CONTENTS view.

To reduce the number of rows returned by the query, exclude all DML statements done in the sys or system schema. (This query specifies a timestamp to exclude transactions that were involved in the dictionary extraction.)

Note that all reconstructed SQL statements returned by the query are correctly translated.

SQL> SELECT USERNAME AS usr, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) as XID,
  SQL_REDO FROM V$LOGMNR_CONTENTS
  WHERE SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM') AND
  TIMESTAMP > '10-jan-2003 15:59:53';



USR XID SQL_REDO

----------- -------- -----------------------------------

SYS 1.2.1594 set transaction read write;

SYS 1.2.1594 create table oe.product_tracking (product_id number not null,

modified_time date,

old_list_price number(8,2),

old_warranty_period interval year(2) to month);

SYS 1.2.1594 commit;



SYS 1.18.1602 set transaction read write;

SYS 1.18.1602 create or replace trigger oe.product_tracking_trigger

before update on oe.product_information

for each row

when (new.list_price <> old.list_price or

new.warranty_period <> old.warranty_period)

declare

begin

insert into oe.product_tracking values

(:old.product_id, sysdate,

:old.list_price, :old.warranty_period);

end;

SYS 1.18.1602 commit;



OE 1.9.1598 update "OE"."PRODUCT_INFORMATION"

set

"WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),

"LIST_PRICE" = 100

where

"PRODUCT_ID" = 1729 and

"WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and

"LIST_PRICE" = 80 and

ROWID = 'AAAHTKAABAAAY9yAAA';

OE 1.9.1598 insert into "OE"."PRODUCT_TRACKING"

values

"PRODUCT_ID" = 1729,

"MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:03',

'dd-mon-yyyy hh24:mi:ss'),

"OLD_LIST_PRICE" = 80,

"OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');



OE 1.9.1598 update "OE"."PRODUCT_INFORMATION"

set

"WARRANTY_PERIOD" = TO_YMINTERVAL('+08-00'),

"LIST_PRICE" = 92

where

"PRODUCT_ID" = 2340 and

"WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00') and

"LIST_PRICE" = 72 and

ROWID = 'AAAHTKAABAAAY9zAAA';



OE 1.9.1598 insert into "OE"."PRODUCT_TRACKING"

values

"PRODUCT_ID" = 2340,

"MODIFIED_TIME" = TO_DATE('13-jan-2003 16:07:07',

'dd-mon-yyyy hh24:mi:ss'),

"OLD_LIST_PRICE" = 72,

"OLD_WARRANTY_PERIOD" = TO_YMINTERVAL('+05-00');



OE 1.9.1598 commit;

Step 6 End the LogMiner session.

SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR()

Sunday, January 4, 2009

Sample Resume of an Oracle Apps DBA (2 yrs)

xxxxxxxxxxname Contact : +91 040 xxxxxxxxxxx
Email : xxxxx@xxxx.com


B.E. with 2+ years of experience as an Oracle Applications DBA

Summary:

• Having 2+ years of experience as an Oracle Applications 11i DBA.
• Strong skills and working knowledge in Oracle Applications 11i.
• Administration of Production, Test & Development instances of Oracle 11i Applications.
• Good experience in installing and patching the Oracle database and applications.
• Good experience in troubleshooting the Concurrent Manager.
• Good experience with Oracle Applications cloning.
• Cloning of R11i using Rapid Clone.
• Patch analysis for database and applications patches, including mini packs and one-off patches.
• Knowledge of setting up jobs for routine monitoring, DB shutdown, backup, DB startup, and database builds.
• Worked on 24/7 high-availability environments.

SKILLS:

Environment: SUN SOLARIS, LINUX
Tools: Oracle Enterprise Manager, SQL*Plus
ERP: Oracle Applications 11i
RDBMS: Oracle 9i
Oracle Utilities: SQL*Loader, Export, Import, PuTTY
11i Utilities: FNDLOAD, WFLOAD

EDUCATIONAL QUALIFICATIONS

• B.E. (ETC) from KITS (Deemed University), Bhubaneswar, in 2003


Professional Experience

Organisation:
Client:
Period:
Job profile: Oracle Applications DBA




ROLE:
• We have a support central tool where cases are queued up daily. We need to pick up cases and resolve them depending upon the severity of the bug.
• Monitoring and diagnosing database performance through monitoring of alerts.
• Database monitoring and cloning.
• Applying and analyzing patches for the database and applications, including mini packs and one-off patches.
• Starting and stopping apps services; Concurrent Manager setup.




RESPONSIBILITIES

• Installation and maintenance of Oracle products on SUN and LINUX platforms.
• Oracle Applications 11i installation.
• Cloning of Oracle Applications R11i.
• Assigning responsibilities to users.
• Code migration through FNDLOAD and WFLOAD.
• Registering and configuring remote application printers.
• Extensively worked on Oracle Applications AD utilities.
• Database administration tasks.
• Monitoring alert logs, space, and object-growth related problems in the databases.
• Oracle Applications system administration, such as managing concurrent programs and requests, application security, user registration, and assignment of responsibilities.
• Planning and managing database backups.
• Analysis of trace and log files to determine and resolve potential problem areas.
• Monitored and diagnosed database performance through monitoring of alert files and trace files.
• Post-installation troubleshooting.
• Cloning from the production database to the development and test databases.
• Performing a daily database refresh of the training and practice instances from the pre-production and production servers.
• Monitored concurrent processes / spawned processes so that performance of the system is maintained.
• Good knowledge of routine monitoring, database shutdown, backup, startup, etc.
• Knowledge of cloning Release 11i using Rapid Clone.
• Cloning of Oracle Applications with AutoConfig.
• Creating and sizing tablespaces and setting tablespace attributes.
• Patch analysis for database and applications patches, including family packs and one-off patches.
• Performed patch analysis for patches involving a lot of prerequisites.

Other Strong Points
• Good exposure to the Unix environment.
• Confident at troubleshooting, comfortable with UNIX commands.
• Willingness to work in shifts/flexible hours.
• Willingness to relocate/travel on requirement.




Important Unix Commands

Contents
• cat --- for creating and displaying short files
• chmod --- change permissions
• cd --- change directory
• cp --- for copying files
• date --- display date
• echo --- echo argument
• ftp --- connect to a remote machine to download or upload files
• grep --- search file
• head --- display first part of file
• ls --- see what files you have
• lpr --- standard print command (see also print )
• more --- use to read files
• mkdir --- create directory
• mv --- for moving and renaming files
• ncftp --- especially good for downloading files via anonymous ftp.
• print --- custom print command (see also lpr )
• pwd --- find out what directory you are in
• rm --- remove a file
• rmdir --- remove directory
• rsh --- remote shell
• setenv --- set an environment variable
• sort --- sort file
• tail --- display last part of file
• tar --- create an archive, add or extract files
• telnet --- log in to another machine
• wc --- count characters, words, lines
________________________________________
cat
This is one of the most flexible Unix commands. We can use to create, view and concatenate files. For our first example we create a three-item English-Spanish dictionary in a file called "dict."
% cat >dict
red rojo
green verde
blue azul
<control-d>
%
<control-d> stands for "hold the control key down, then tap 'd'". The symbol > tells the computer that what is typed is to be put into the file dict. To view a file we use cat in a different way:
% cat dict
red rojo
green verde
blue azul
%
If we wish to add text to an existing file we do this:
% cat >>dict
white blanco
black negro
<control-d>
%
Now suppose that we have another file tmp that looks like this:
% cat tmp
cat gato
dog perro
%
Then we can join dict and tmp like this:
% cat dict tmp >dict2
We could check the number of lines in the new file like this:
% wc -l dict2
8
The command wc counts things --- the number of characters, words, and lines in a file.
________________________________________
chmod
This command is used to change the permissions of a file or directory. For example to make a file essay.001 readable by everyone, we do this:
% chmod a+r essay.001
To make a file, e.g., a shell script mycommand executable, we do this
% chmod +x mycommand
Now we can run mycommand as a command.
To check the permissions of a file, use ls -l . For more information on chmod, use man chmod.
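chmod also accepts numeric (octal) modes, where the digits encode the owner, group, and other permissions. For example, to make essay.001 readable and writable by you but read-only for everyone else:
% chmod 644 essay.001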
________________________________________
cd
Use cd to change directory. Use pwd to see what directory you are in.
% cd english
% pwd
% /u/ma/jeremy/english
% ls
novel poems
% cd novel
% pwd
% /u/ma/jeremy/english/novel
% ls
ch1 ch2 ch3 journal scrapbook
% cd ..
% pwd
% /u/ma/jeremy/english
% cd poems
% cd
% /u/ma/jeremy
Jeremy began in his home directory, then went to his english subdirectory. He listed this directory using ls, found that it contained two entries, both of which happen to be directories. He cd'd to the directory novel, and found that he had gotten only as far as chapter 3 in his writing. Then he used cd .. to jump back one level. If he had wanted to jump back one level, then go to poems, he could have said cd ../poems. Finally he used cd with no argument to jump back to his home directory.
________________________________________
cp
Use cp to copy files or directories.
% cp foo foo.2
This makes a copy of the file foo.
% cp ~/poems/jabber .
This copies the file jabber in the directory poems to the current directory. The symbol "." stands for the current directory. The symbol "~" stands for the home directory.
________________________________________
date
Use this command to check the date and time.
% date
Fri Jan 6 08:52:42 MST 1995
________________________________________
echo
The echo command echoes its arguments. Here are some examples:
% echo this
this
% echo $EDITOR
/usr/local/bin/emacs
% echo $PRINTER
b129lab1
Things like PRINTER are so-called environment variables. This one stores the name of the default printer --- the one that print jobs will go to unless you take some action to change things. The dollar sign before an environment variable is needed to get the value in the variable. Try the following to verify this:
% echo PRINTER
PRINTER
________________________________________
ftp
Use ftp to connect to a remote machine, then upload or download files. See also: ncftp
Example 1: We'll connect to the machine fubar.net, then change directory to mystuff, then download the file homework11:
% ftp solitude
Connected to fubar.net.
220 fubar.net FTP server (Version wu-2.4(11) Mon Apr 18 17:26:33 MDT 1994) ready.
Name (solitude:carlson): jeremy
331 Password required for jeremy.
Password:
230 User jeremy logged in.
ftp> cd mystuff
250 CWD command successful.
ftp> get homework11
ftp> quit
Example 2: We'll connect to the machine fubar.net, then change directory to mystuff, then upload the file collected-letters:
% ftp solitude
Connected to fubar.net.
220 fubar.net FTP server (Version wu-2.4(11) Mon Apr 18 17:26:33 MDT 1994) ready.
Name (solitude:carlson): jeremy
331 Password required for jeremy.
Password:
230 User jeremy logged in.
ftp> cd mystuff
250 CWD command successful.
ftp> put collected-letters
ftp> quit
The ftp program sends files in ascii (text) format unless you specify binary mode:
ftp> binary
ftp> put foo
ftp> ascii
ftp> get bar
The file foo was transferred in binary mode, the file bar was transferred in ascii mode.
________________________________________
grep
Use this command to search for information in a file or files. For example, suppose that we have a file dict whose contents are
red rojo
green verde
blue azul
white blanco
black negro
Then we can look up items in our file like this;
% grep red dict
red rojo
% grep blanco dict
white blanco
% grep brown dict
%
Notice that no output was returned by grep brown. This is because "brown" is not in our dictionary file.
Grep can also be combined with other commands. For example, if one had a file of phone numbers named "ph", one entry per line, then the following command would give an alphabetical list of all persons whose name contains the string "Fred".
% grep Fred ph | sort
Alpha, Fred: 333-6565
Beta, Freddie: 656-0099
Frederickson, Molly: 444-0981
Gamma, Fred-George: 111-7676
Zeta, Frederick: 431-0987
The symbol "|" is called "pipe." It pipes the output of the grep command into the input of the sort command.
For more information on grep, consult
% man grep
________________________________________
head
Use this command to look at the head of a file. For example,
% head essay.001
displays the first 10 lines of the file essay.001 To see a specific number of lines, do this:
% head -20 essay.001
This displays the first 20 lines of the file.
________________________________________
ls
Use ls to see what files you have. Your files are kept in something called a directory.
% ls
foo letter2
foobar letter3
letter1 maple-assignment1
%
Note that you have six files. There are some useful variants of the ls command:
% ls l*
letter1 letter2 letter3
%
Note what happened: all the files whose name begins with "l" are listed. The asterisk (*) is the "wildcard" character. It matches any string.
________________________________________
lpr
This is the standard Unix command for printing a file. It stands for the ancient "line printer." See
% man lpr
for information on how it works. See print for information on our local intelligent print command.
________________________________________
mkdir
Use this command to create a directory.
% mkdir essays
To get "into" this directory, do
% cd essays
To see what files are in essays, do this:
% ls
There shouldn't be any files there yet, since you just made it. To create files, see cat or emacs.
________________________________________
more
More is a command used to read text files. For example, we could do this:
% more poems
The effect of this is to let you read the file "poems". It probably will not fit in one screen, so you need to know how to "turn pages". Here are the basic commands:
• q --- quit more
• spacebar --- read next page
• return key --- read next line
• b --- go back one page
For still more information, use the command man more.
________________________________________
mv
Use this command to change the name of file and directories.
% mv foo foobar
The file that was named foo is now named foobar
________________________________________
ncftp
Use ncftp for anonymous ftp --- that means you don't have to have a password.
% ncftp ftp.fubar.net
Connected to ftp.fubar.net
> get jokes.txt
The file jokes.txt is downloaded from the machine ftp.fubar.net.
________________________________________
print
This is a moderately intelligent print command.
% print foo
% print notes.ps
% print manuscript.dvi
In each case print does the right thing, regardless of whether the file is a text file (like foo), a postscript file (like notes.ps), or a dvi file (like manuscript.dvi). In these examples the file is printed on the default printer. To see what this is, do
% print
and read the message displayed. To print on a specific printer, do this:
% print foo jwb321
% print notes.ps jwb321
% print manuscript.dvi jwb321
To change the default printer, do this:
% setenv PRINTER jwb321
________________________________________
pwd
Use this command to find out what directory you are working in.
% pwd
/u/ma/jeremy
% cd homework
% pwd
/u/ma/jeremy/homework
% ls
assign-1 assign-2 assign-3
% cd
% pwd
/u/ma/jeremy
%
Jeremy began by working in his "home" directory. Then he cd'd into his homework subdirectory. Cd means "change directory". He used pwd to check to make sure he was in the right place, then used ls to see if all his homework files were there. (They were). Then he cd'd back to his home directory.
________________________________________
rm
Use rm to remove files from your directory.
% rm foo
remove foo? y
% rm letter*
remove letter1? y
remove letter2? y
remove letter3? n
%
The first command removed a single file. The second command was intended to remove all files beginning with the string "letter." However, our user (Jeremy?) decided not to remove letter3.
________________________________________
rmdir
Use this command to remove a directory. For example, to remove a directory called "essays", do this:
% rmdir essays
A directory must be empty before it can be removed. To empty a directory, use rm.
________________________________________
rsh
Use this command if you want to work on a computer different from the one you are currently working on. One reason to do this is that the remote machine might be faster. For example, the command
% rsh solitude
connects you to the machine solitude. This is one of our public workstations and is fairly fast.
See also: telnet
________________________________________
setenv
% echo $PRINTER
labprinter
% setenv PRINTER myprinter
% echo $PRINTER
myprinter
________________________________________
sort
Use this command to sort a file. For example, suppose we have a file dict with contents
red rojo
green verde
blue azul
white blanco
black negro
Then we can do this:
% sort dict
black negro
blue azul
green verde
red rojo
white blanco
Here the output of sort went to the screen. To store the output in file we do this:
% sort dict >dict.sorted
You can check the contents of the file dict.sorted using cat , more , or emacs .
________________________________________
tail
Use this command to look at the tail of a file. For example,
% tail essay.001
displays the last 10 lines of the file essay.001. To see a specific number of lines, do this:
% tail -20 essay.001
This displays the last 20 lines of the file.
________________________________________
tar
Use tar to create compressed archives of directories and files, and also to extract directories and files from an archive. Example:
% tar -tvzf foo.tar.gz
displays the file names in the compressed archive foo.tar.gz while
% tar -xvzf foo.tar.gz
extracts the files.
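To create a compressed archive in the first place, use the c flag. For example, to archive the directory essays into essays.tar.gz:
% tar -cvzf essays.tar.gz essays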
________________________________________
telnet
Use this command to log in to another machine from the machine you are currently working on. For example, to log in to the machine "solitude", do this:
% telnet solitude
See also: rsh.
________________________________________
wc
Use this command to count the number of characters, words, and lines in a file. Suppose, for example, that we have a file dict with contents
red rojo
green verde
blue azul
white blanco
black negro
Then we can do this
% wc dict
5 10 56 dict
This shows that dict has 5 lines, 10 words, and 56 characters.
The word count command has several options, as illustrated below:
% wc -l dict
5 dict
% wc -w dict
10 dict
% wc -c dict
56 dict

The Case Studies of SQL*Loader

The Case Studies

This chapter contains the following case studies:

Case 1: Loading Variable-Length Data

Loads stream format records in which the fields are delimited by commas and may be enclosed
by quotation marks. The data is found at the end of the control file.

Case 2: Loading Fixed-Format Fields:

Loads a datafile with fixed-length fields, stream-format records, all records the same length.

Case 3: Loading a Delimited, Free-Format File

Loads data from stream format records with delimited fields and sequence numbers. The data is
found at the end of the control file.

Case 4: Loading Combined Physical Records

Combines multiple physical records into one logical record corresponding to one database row

Case 5: Loading Data into Multiple Tables

Loads data into multiple tables in one run

Case 6: Loading Using the Direct Path Load Method

Loads data using the direct path load method

Case 7: Extracting Data from a Formatted Report

Extracts data from a formatted report

Case 8: Loading Partitioned Tables

Loads partitioned tables.

Case 9: Loading LOBFILEs (CLOBs)

Adds a CLOB column called RESUME to the table emp, uses a FILLER field (RES_FILE), and
loads multiple LOBFILEs into the emp table.

Case 10: Loading REF Fields and VARRAYs

Loads a customer table, which has a primary key as its OID and which stores order items in a
VARRAY and loads an order table which has a REF to the customer table and the order times in
a VARRAY.

Case Study Files

The distribution media for SQL*Loader contains files for each case:

• control files (for example, ULCASE1.CTL)

• data files (for example, ULCASE2.DAT)
• setup files (for example, ULCASE3.SQL)
If the sample data for the case study is contained in the control file, then there will be no .DAT
file for that case.

If there are no special setup steps for a case study, there may be no .SQL file for that case.
Starting (setup) and ending (cleanup) scripts are denoted by an S or E after the case number.

Table 4-1 lists the files associated with each case:

Table 4-1 Case Studies and Their Related Files

Case    .CTL    .DAT    .SQL
1        x                x
2        x       x
3        x                x
4        x       x        x
5        x       x        x
6        x       x        x
7        x       x        x  (S, E)
8        x       x        x
9        x       x        x
10       x                x

Additional Information: The actual names of the case study files are operating system-dependent.
See your Oracle operating system-specific documentation for the exact names.

Tables Used in the Case Studies

The case studies are based upon the standard Oracle demonstration database tables EMP and
DEPT owned by SCOTT/TIGER. (In some of the case studies, additional columns have been
added.)

Contents of Table EMP

(empno NUMBER(4) NOT NULL,
ename VARCHAR2(10),
job VARCHAR2(9),
mgr NUMBER(4),
hiredate DATE,
sal NUMBER(7,2),
comm NUMBER(7,2),
deptno NUMBER(2))

Contents of Table DEPT

(deptno NUMBER(2) NOT NULL,
dname VARCHAR2(14),
loc VARCHAR2(13))
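Written out as CREATE TABLE statements, these definitions would look roughly as follows (a sketch based on the column lists above; the demonstration scripts shipped with Oracle may differ in constraint names and storage details):

CREATE TABLE emp (
empno NUMBER(4) NOT NULL,
ename VARCHAR2(10),
job VARCHAR2(9),
mgr NUMBER(4),
hiredate DATE,
sal NUMBER(7,2),
comm NUMBER(7,2),
deptno NUMBER(2));

CREATE TABLE dept (
deptno NUMBER(2) NOT NULL,
dname VARCHAR2(14),
loc VARCHAR2(13));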

References and Notes

The summary at the beginning of each case study contains page number references, directing you
to the sections of this guide that discuss the SQL*Loader feature being demonstrated in more
detail.

In the control file fragment and log file listing shown for each case study, the numbers that appear
to the left are not actually in the file; they are keyed to the numbered notes following the listing.
Do not use these numbers when you write your control files.


Running the Case Study SQL Scripts

You should run the SQL scripts ULCASE1.SQL and ULCASE3.SQL through ULCASE10.SQL
to prepare and populate the tables. Note that there is no ULCASE2.SQL as Case 2 is handled by
ULCASE1.SQL.

Case 1: Loading Variable-Length Data

Case 1 demonstrates


A simple control file identifying one table and three columns to be loaded. See
Identifying Data in the Control File with BEGINDATA.

Including data to be loaded from the control file itself, so there is no separate datafile. See
Identifying Data in the Control File with BEGINDATA.

Loading data in stream format, with both types of delimited fields -- terminated and
enclosed. See Delimited Fields.
Control File

The control file is ULCASE1.CTL:

1) LOAD DATA

2) INFILE *

3) INTO TABLE dept

4) FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'

5) (deptno, dname, loc)

6) BEGINDATA
12,RESEARCH,"SARATOGA"
10,"ACCOUNTING",CLEVELAND
11,"ART",SALEM
13,FINANCE,"BOSTON"
21,"SALES",PHILA.
22,"SALES",ROCHESTER
42,"INT'L","SAN FRAN"

Notes:

The LOAD DATA statement is required at the beginning of the control file.

INFILE * specifies that the data is found in the control file and not in an external file.

The INTO TABLE statement is required to identify the table (DEPT) to be loaded into. By
default, SQL*Loader requires the table to be empty before it inserts any records.

FIELDS TERMINATED BY specifies that the data is terminated by commas, but may also
be enclosed by quotation marks. Datatypes for all fields default to CHAR.

The names of the columns to be loaded are enclosed in parentheses. Since no datatype is
specified, the default is a character of length 255.


BEGINDATA specifies the beginning of the data.

Invoking SQL*Loader

To run this example, invoke SQL*Loader with the command:

sqlldr userid=scott/tiger control=ulcase1.ctl log=ulcase1.log

SQL*Loader loads the DEPT table and creates the log file.

Additional Information: The command "sqlldr" is a UNIX-specific invocation. To invoke
SQL*Loader on your operating system, refer to your Oracle operating system-specific
documentation.

Log File

The following shows a portion of the log file:

Control File: ulcase1.ctl

Data File: ulcase1.ctl
Bad File: ulcase1.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 65536 bytes
Continuation: none specified
Path used: Conventional

Table DEPT, loaded from every logical record.
Insert option in effect for this table: INSERT

Column Name Position Len Term Encl
Datatype

1) DEPTNO FIRST * , O(")
CHARACTER

DNAME NEXT * , O(")
CHARACTER
2) LOC NEXT * WHT O(")
CHARACTER

Table DEPT:
7 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.


Space allocated for bind array: 65016 bytes (84 rows)
Space allocated for memory besides bind array: 0 bytes

Total logical records skipped: 0
Total logical records read: 7
Total logical records rejected: 0
Total logical records discarded: 0

Run began on Sun Nov 08 11:08:19 1998
Run ended on Sun Nov 08 11:08:20 1998

Elapsed time was: 00:00:01.16
CPU time was: 00:00:00.10

Notes:

Position and length for each field are determined for each record, based on delimiters in the
input file.

WHT signifies that field LOC is terminated by WHITESPACE. The notation O(") signifies
optional enclosure by quotation marks.
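
A quick way to confirm the load is to query DEPT afterwards; the seven departments listed after BEGINDATA should come back:

SQL> SELECT deptno, dname, loc FROM dept;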

Case 2: Loading Fixed-Format Fields

Case 2 demonstrates

• A separate datafile. See INFILE: Specifying Datafiles.
• Data conversions. See Datatype Conversions.
In this case, the field positions and datatypes are specified explicitly.

Control File

The control file is ULCASE2.CTL.

1) LOAD DATA
2) INFILE 'ulcase2.dat'
3) INTO TABLE emp
4) (empno POSITION(01:04) INTEGER EXTERNAL,


ename POSITION(06:15) CHAR,
job POSITION(17:25) CHAR,
mgr POSITION(27:30) INTEGER EXTERNAL,
sal POSITION(32:39) DECIMAL EXTERNAL,
comm POSITION(41:48) DECIMAL EXTERNAL,


5) deptno POSITION(50:51) INTEGER EXTERNAL)

Notes:


The LOAD DATA statement is required at the beginning of the control file.
The name of the file containing data follows the keyword INFILE.
The INTO TABLE statement is required to identify the table to be loaded into.
Lines 4 and 5 identify a column name and the location of the data in the datafile to be loaded
into that column. EMPNO, ENAME, JOB, and so on are names of columns in table EMP.
The datatypes (INTEGER EXTERNAL, CHAR, DECIMAL EXTERNAL) identify the
datatype of data fields in the file, not of corresponding columns in the EMP table.

Note that the set of column specifications is enclosed in parentheses.

Datafile

Below are a few sample data lines from the file ULCASE2.DAT. Blank fields are set to null
automatically.

7782 CLARK MANAGER 7839 2572.50 10
7839 KING PRESIDENT 5500.00 10
7934 MILLER CLERK 7782 920.00 10
7566 JONES MANAGER 7839 3123.75 20
7499 ALLEN SALESMAN 7698 1600.00 300.00 30
7654 MARTIN SALESMAN 7698 1312.50 1400.00 30

Invoking SQL*Loader

Invoke SQL*Loader with a command such as:

sqlldr userid=scott/tiger control=ulcase2.ctl log=ulcase2.log

EMP records loaded in this example contain department numbers. Unless the DEPT table is
loaded first, referential integrity checking rejects these records (if referential integrity constraints
are enabled for the EMP table).
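
Such a constraint, if it existed, might be declared along these lines (illustrative only; the constraint name is made up and the demonstration schema does not necessarily define it):

ALTER TABLE emp
ADD CONSTRAINT emp_deptno_fk FOREIGN KEY (deptno)
REFERENCES dept (deptno);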

Additional Information: The command "sqlldr" is a UNIX-specific invocation. To invoke
SQL*Loader on your operating system, refer to your Oracle operating system-specific
documentation.

Log File

The following shows a portion of the log file:

Control File: ulcase2.ctl

Data File: ulcase2.dat

Bad File: ulcase2.bad

Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50

Bind array: 64 rows, maximum of 65536 bytes
Continuation: none specified
Path used: Conventional

Table EMP, loaded from every logical record.
Insert option in effect for this table: INSERT

Column Name Position Len Term Encl
Datatype

--------------
EMPNO 1:4 4
CHARACTER
ENAME 6:15 10
CHARACTER
JOB 17:25 9
CHARACTER
MGR 27:30 4
CHARACTER
SAL 32:39 8
CHARACTER
COMM 41:48 8
CHARACTER
DEPTNO 50:51 2
CHARACTER

Table EMP:

7 Rows successfully loaded.

0 Rows not loaded due to data errors.

0 Rows not loaded because all WHEN clauses were failed.

0 Rows not loaded because all fields were null.

Space allocated for bind array: 65520 bytes (1092 rows)
Space allocated for memory besides bind array: 0 bytes

Total logical records skipped: 0
Total logical records read: 7
Total logical records rejected: 0
Total logical records discarded: 0

Run began on Sun Nov 08 11:09:31 1998
Run ended on Sun Nov 08 11:09:32 1998

Elapsed time was: 00:00:00.63
CPU time was: 00:00:00.16


Case 3: Loading a Delimited, Free-Format File

Case 3 demonstrates


Loading data (enclosed and terminated) in stream format. See Delimited Fields.

Loading dates using the datatype DATE. See DATE.

Using SEQUENCE numbers to generate unique keys for loaded data. See Setting a
Column to a Unique Sequence Number.

Using APPEND to indicate that the table need not be empty before inserting new records.
See Loading into Empty and Non-Empty Tables.

Using Comments in the control file set off by double dashes. See Control File Basics.

Overriding general specifications with declarations for individual fields. See Specifying
Field Conditions.
Control File

This control file loads the same table as in Case 2, but it loads three additional columns
(HIREDATE, PROJNO, LOADSEQ). The demonstration table EMP does not have columns
PROJNO and LOADSEQ. So if you want to test this control file, add these columns to the EMP
table with the command:

ALTER TABLE EMP ADD (PROJNO NUMBER, LOADSEQ NUMBER)

The data is in a different format than in Case 2. Some data is enclosed in quotation marks, some
is set off by commas, and the values for DEPTNO and PROJNO are separated by a colon.

1) -- Variable-length, delimited and enclosed data format

LOAD DATA
2) INFILE *
3) APPEND

INTO TABLE emp
4) FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
(empno, ename, job, mgr,

5) hiredate DATE(20) "DD-Month-YYYY",
sal, comm, deptno CHAR TERMINATED BY ':',
projno,

6) loadseq SEQUENCE(MAX,1))
7) BEGINDATA
8) 7782, "Clark", "Manager", 7839, 09-June-1981, 2572.50,, 10:101
7839, "King", "President", , 17-November-1981, 5500.00,, 10:102
7934, "Miller", "Clerk", 7782, 23-January-1982, 920.00,, 10:102
7566, "Jones", "Manager", 7839, 02-April-1981, 3123.75,, 20:101
7499, "Allen", "Salesman", 7698, 20-February-1981, 1600.00,
(same line continued) 300.00, 30:103
7654, "Martin", "Salesman", 7698, 28-September-1981, 1312.50,
(same line continued) 1400.00, 3:103
7658, "Chan", "Analyst", 7566, 03-May-1982, 3450,, 20:101
Notes:

Comments may appear anywhere in the command lines of the file, but they should not appear
in data. They are preceded with a double dash that may appear anywhere on a line.

INFILE * specifies that the data is found at the end of the control file.

Specifies that the data can be loaded even if the table already contains rows. That is, the table
need not be empty.

The default terminator for the data fields is a comma, and some fields may be enclosed by
double quotation marks (").

The data to be loaded into column HIREDATE appears in the format DD-Month-YYYY. The
length of the date field is a maximum of 20. If a length is not specified, then the length
depends on the length of the date mask.

The SEQUENCE function generates a unique value in the column LOADSEQ. This function
finds the current maximum value in column LOADSEQ and adds the increment (1) to it
to obtain the value for LOADSEQ for each row inserted.

BEGINDATA specifies the end of the control information and the beginning of the data.

Although each physical record equals one logical record, the fields vary in length so that
some records are longer than others. Note also that several rows have null values for
COMM.
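
The DD-Month-YYYY mask applied to HIREDATE is an ordinary Oracle date mask, so it can be tried interactively before the load; for example (illustrative):

SQL> SELECT TO_DATE('09-June-1981', 'DD-Month-YYYY') FROM dual;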

Invoking SQL*Loader

Invoke SQL*Loader with a command such as:

sqlldr userid=scott/tiger control=ulcase3.ctl log=ulcase3.log

Additional Information: The command "sqlldr" is a UNIX-specific invocation. To invoke
SQL*Loader on your operating system, see your Oracle operating system-specific
documentation.

Log File

The following shows a portion of the log file:

Control File: ulcase3.ctl
Data File: ulcase3.ctl


Bad File: ulcase3.bad
Discard File: none specified


(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 65536 bytes
Continuation: none specified
Path used: Conventional

Table EMP, loaded from every logical record.
Insert option in effect for this table: APPEND

Column Name Position Len Term Encl
Datatype

EMPNO FIRST * , O(")
CHARACTER
ENAME NEXT * , O(")
CHARACTER
JOB NEXT * , O(")
CHARACTER
MGR NEXT * , O(")
CHARACTER
HIREDATE NEXT 20 , O(") DATE DD-Month-YYYY
SAL NEXT * , O(")
CHARACTER
COMM NEXT * , O(")
CHARACTER
DEPTNO NEXT * : O(")
CHARACTER
PROJNO NEXT * , O(")
CHARACTER
LOADSEQ SEQUENCE (MAX, 1)

Table EMP:
7 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.

Space allocated for bind array: 65379 bytes (31 rows)
Space allocated for memory besides bind array: 0 bytes

Total logical records skipped: 0
Total logical records read: 7
Total logical records rejected: 0
Total logical records discarded: 0

Run began on Sun Nov 08 11:13:41 1998
Run ended on Sun Nov 08 11:13:46 1998

Elapsed time was: 00:00:04.83
CPU time was: 00:00:00.09

Case 4: Loading Combined Physical Records

Case 4 demonstrates:


Combining multiple physical records to form one logical record with CONTINUEIF; see
Assembling Logical Records from Physical Records.

Inserting negative numbers.

Indicating with REPLACE that the table should be emptied before the new data is
inserted; see Loading into Empty and Non-Empty Tables.

Specifying a discard file in the control file using DISCARDFILE; see Specifying the
Discard File.

Specifying a maximum number of discards using DISCARDMAX; see Specifying the
Discard File.

Rejecting records due to duplicate values in a unique index or due to invalid data values;
see Rejected Records.

Control File

The control file is ULCASE4.CTL:

LOAD DATA

INFILE 'ulcase4.dat'

1) DISCARDFILE 'ulcase4.dsc'

2) DISCARDMAX 999

3) REPLACE

4) CONTINUEIF THIS (1) = '*'
INTO TABLE emp
(empno POSITION(1:4) INTEGER EXTERNAL,
ename POSITION(6:15) CHAR,
job POSITION(17:25) CHAR,
mgr POSITION(27:30) INTEGER EXTERNAL,
sal POSITION(32:39) DECIMAL EXTERNAL,
comm POSITION(41:48) DECIMAL EXTERNAL,
deptno POSITION(50:51) INTEGER EXTERNAL,
hiredate POSITION(52:60) INTEGER EXTERNAL)

Notes:



DISCARDFILE specifies a discard file named ULCASE4.DSC.

DISCARDMAX specifies a maximum of 999 discards allowed before terminating the run
(for all practical purposes, this allows all discards).

REPLACE specifies that if there is data in the table being loaded, then SQL*Loader should
delete that data before loading new data.

CONTINUEIF THIS specifies that if an asterisk is found in column 1 of the current record,
then the next physical record after that record should be appended to it to form the logical
record. Note that column 1 in each physical record should then contain either an asterisk
or a non-data value.

Data File

The datafile for this case, ULCASE4.DAT, is listed below. Note the asterisks in the first position
and, though not visible, a new line indicator is in position 20 (following "MA", "PR", and so on).
Note that CLARK's commission is -10, and SQL*Loader loads the value, converting it to a
negative number.

*7782 CLARK
MANAGER 7839 2572.50 -10 2512-NOV-85
*7839 KING
PRESIDENT 5500.00 2505-APR-83
*7934 MILLER
CLERK 7782 920.00 2508-MAY-80
*7566 JONES
MANAGER 7839 3123.75 2517-JUL-85
*7499 ALLEN
SALESMAN 7698 1600.00 300.00 25 3-JUN-84
*7654 MARTIN
SALESMAN 7698 1312.50 1400.00 2521-DEC-85
*7658 CHAN
ANALYST 7566 3450.00 2516-FEB-84
* CHEN
ANALYST 7566 3450.00 2516-FEB-84
*7658 CHIN
ANALYST 7566 3450.00 2516-FEB-84
Rejected Records

The last two records are rejected, given two assumptions. If there is a unique index created on
column EMPNO, then the record for CHIN will be rejected because his EMPNO is identical to
CHAN's. If EMPNO is defined as NOT NULL, then CHEN's record will be rejected because it
has no value for EMPNO.
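
The unique index assumed here is the SCOTT.EMPIX index that also appears in the log below. If it does not already exist, it could be created with a statement like the following (a sketch, not necessarily how the demonstration scripts create it):

CREATE UNIQUE INDEX empix ON emp (empno);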

Invoking SQL*Loader

Invoke SQL*Loader with a command such as:

sqlldr userid=scott/tiger control=ulcase4.ctl log=ulcase4.log

Additional Information: The command "sqlldr" is a UNIX-specific invocation. To invoke
SQL*Loader on your operating system, see your operating Oracle system-specific


documentation.

Log File

The following is a portion of the log file:

Control File: ulcase4.ctl

Data File: ulcase4.dat
Bad File: ulcase4.bad
Discard File: ulcase4.dis

(Allow 999 discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 65536 bytes
Continuation: 1:1 = 0X2a (character '*'), in current physical record
Path used: Conventional

Table EMP, loaded from every logical record.
Insert option in effect for this table: REPLACE

Column Name Position Len Term Encl
Datatype

--------------
EMPNO 1:4 4
CHARACTER
ENAME 6:15 10
CHARACTER
JOB 17:25 9
CHARACTER
MGR 27:30 4
CHARACTER
SAL 32:39 8
CHARACTER
COMM 41:48 8
CHARACTER
DEPTNO 50:51 2
CHARACTER
HIREDATE 52:60 9
CHARACTER

Record 8: Rejected - Error on table EMP.
ORA-01400: cannot insert NULL into ("SCOTT"."EMP"."EMPNO")


Record 9: Rejected - Error on table EMP.
ORA-00001: unique constraint (SCOTT.EMPIX) violated


Table EMP:



7 Rows successfully loaded.

2 Rows not loaded due to data errors.

0 Rows not loaded because all WHEN clauses were failed.

0 Rows not loaded because all fields were null.

Space allocated for bind array: 65520 bytes (910 rows)
Space allocated for memory besides bind array: 0 bytes

Total logical records skipped: 0
Total logical records read: 9
Total logical records rejected: 2
Total logical records discarded: 0

Run began on Sun Nov 08 11:49:42 1998
Run ended on Sun Nov 08 11:49:42 1998

Elapsed time was: 00:00:00.69
CPU time was: 00:00:00.13

Bad File

The bad file, shown below, lists records 8 and 9 for the reasons stated earlier. (The discard file is
not created.)

*
CHEN ANALYST
7566 3450.00 2516-FEB-84
*
CHIN ANALYST
7566 3450.00 2516-FEB-84
Case 5: Loading Data into Multiple Tables

Case 5 demonstrates


Loading multiple tables. See Loading Data into Multiple Tables.

Using SQL*Loader to break down repeating groups in a flat file and load the data into
normalized tables -- one file record may generate multiple database rows

Deriving multiple logical records from each physical record. See Using Multiple INTO
TABLE Statements.

Using a WHEN clause. See Choosing which Rows to Load.

Loading the same field (EMPNO) into multiple tables.
Control File

The control file is ULCASE5.CTL.

-- Loads EMP records from first 23 characters

-- Creates and loads PROJ records for each PROJNO listed


-- for each employee
LOAD DATA
INFILE 'ulcase5.dat'
BADFILE 'ulcase5.bad'
DISCARDFILE 'ulcase5.dsc'


1) REPLACE

2) INTO TABLE emp(empno POSITION(1:4) INTEGER EXTERNAL,
ename POSITION(6:15) CHAR,
deptno POSITION(17:18) CHAR,
mgr POSITION(20:23) INTEGER EXTERNAL)

2) INTO TABLE proj
-- PROJ has two columns, both not null: EMPNO and PROJNO
3) WHEN projno != ' '

(empno POSITION(1:4) INTEGER EXTERNAL,
3) projno POSITION(25:27) INTEGER EXTERNAL) -- 1st proj

2) INTO TABLE proj
4) WHEN projno != ' '

(empno POSITION(1:4) INTEGER EXTERNAL,
4) projno POSITION(29:31) INTEGER EXTERNAL) -- 2nd proj

2) INTO TABLE proj
5) WHEN projno != ' '
(empno POSITION(1:4) INTEGER EXTERNAL,
5) projno POSITION(33:35) INTEGER EXTERNAL) -- 3rd proj

Notes:

REPLACE specifies that if there is data in the tables to be loaded (EMP and PROJ),
SQL*Loader should delete the data before loading new rows.


Multiple INTO clauses load two tables, EMP and PROJ. The same set of records is processed
three times, using different combinations of columns each time to load table PROJ.

WHEN loads only rows with non-blank project numbers. When PROJNO is defined as
columns 25...27, rows are inserted into PROJ only if there is a value in those columns.

When PROJNO is defined as columns 29...31, rows are inserted into PROJ only if there is a
value in those columns.

When PROJNO is defined as columns 33...35, rows are inserted into PROJ only if there is a
value in those columns.
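
The PROJ table is not part of the standard SCOTT schema. Based on the comment in the control file, a minimal definition might look like this (the column widths are assumptions):

CREATE TABLE proj (
empno NUMBER(4) NOT NULL,
projno NUMBER(3) NOT NULL);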

Data File

1234 BAKER 10 9999 101 102 103
1234 JOKER 10 9999 777 888 999
2664 YOUNG 20 2893 425 abc 102
5321 OTOOLE 10 9999 321 55 40


2134 FARMER 20 4555 236 456
2414 LITTLE 20 5634 236 456 40
6542 LEE 10 4532 102 321 14
2849 EDDS xx 4555 294 40
4532 PERKINS 10 9999 40
1244 HUNT 11 3452 665 133 456
123 DOOLITTLE 12 9940 132
1453 MACDONALD 25 5532 200

Invoking SQL*Loader

Invoke SQL*Loader with a command such as:

sqlldr userid=scott/tiger control=ulcase5.ctl log=ulcase5.log

Additional Information: The command "sqlldr" is a UNIX-specific invocation. To invoke
SQL*Loader on your operating system, see your Oracle operating system-specific
documentation.

Log File

The following is a portion of the log file:

Control File: ulcase5.ctl

Data File: ulcase5.dat
Bad File: ulcase5.bad
Discard File: ulcase5.dis

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 65536 bytes
Continuation: none specified
Path used: Conventional

Table EMP, loaded from every logical record.
Insert option in effect for this table: REPLACE

Column Name Position Len Term Encl
Datatype

EMPNO 1:4 4
CHARACTER
ENAME 6:15 10
CHARACTER
DEPTNO 17:18 2
CHARACTER
MGR 20:23 4


CHARACTER

Table PROJ, loaded when PROJNO != 0X202020(character ' ')
Insert option in effect for this table: REPLACE

Column Name Position Len Term Encl
Datatype

EMPNO 1:4 4
CHARACTER
PROJNO 25:27 3
CHARACTER

Table PROJ, loaded when PROJNO != 0X202020(character ' ')
Insert option in effect for this table: REPLACE

Column Name Position Len Term Encl
Datatype

EMPNO 1:4 4
CHARACTER
PROJNO 29:31 3
CHARACTER

Table PROJ, loaded when PROJNO != 0X202020(character ' ')
Insert option in effect for this table: REPLACE

Column Name Position Len Term Encl
Datatype

EMPNO 1:4 4
CHARACTER
PROJNO 33:35 3
CHARACTER


1) Record 2: Rejected - Error on table EMP, column DEPTNO.
1) ORA-00001: unique constraint (SCOTT.EMPIX) violated


1) Record 8: Rejected - Error on table EMP, column DEPTNO.
1) ORA-01722: invalid number


1) Record 3: Rejected - Error on table PROJ, column PROJNO.
1) ORA-01722: invalid number


Table EMP:
2) 9 Rows successfully loaded.
2) 3 Rows not loaded due to data errors.
2) 0 Rows not loaded because all WHEN clauses were failed.



2) 0 Rows not loaded because all fields were null.


Table PROJ:
3) 7 Rows successfully loaded.
3) 2 Rows not loaded due to data errors.
3) 3 Rows not loaded because all WHEN clauses were failed.
3) 0 Rows not loaded because all fields were null.


Table PROJ:
4) 7 Rows successfully loaded.
4) 3 Rows not loaded due to data errors.
4) 2 Rows not loaded because all WHEN clauses were failed.
4) 0 Rows not loaded because all fields were null.


Table PROJ:
5) 6 Rows successfully loaded.
5) 3 Rows not loaded due to data errors.
5) 3 Rows not loaded because all WHEN clauses were failed.
5) 0 Rows not loaded because all fields were null.


Space allocated for bind array: 65536 bytes (1024 rows)
Space allocated for memory besides bind array: 0 bytes

Total logical records skipped: 0
Total logical records read: 12
Total logical records rejected: 3
Total logical records discarded: 0

Run began on Sun Nov 08 11:54:39 1998
Run ended on Sun Nov 08 11:54:40 1998

Elapsed time was: 00:00:00.67
CPU time was: 00:00:00.16
Notes:

Errors are not encountered in the same order as the physical records due to buffering (array
batch). The bad file and discard file contain records in the same order as they appear in
the log file.

Of the 12 logical records for input, three rows were rejected (rows for JOKER, YOUNG, and
EDDS). No data was loaded for any of the rejected records.

Nine records met the WHEN clause criteria, and two (JOKER and YOUNG) were rejected
due to data errors.

Ten records met the WHEN clause criteria, and three (JOKER, YOUNG, and EDDS) were
rejected due to data errors.

Nine records met the WHEN clause criteria, and three (JOKER, YOUNG, and EDDS) were
rejected due to data errors.

Loaded Tables

These are results of this execution of SQL*Loader:

SQL> SELECT empno, ename, mgr, deptno FROM emp;
EMPNO ENAME MGR DEPTNO

1234 BAKER 9999 10
5321 OTOOLE 9999 10
2134 FARMER 4555 20
2414 LITTLE 5634 20
6542 LEE 4532 10
4532 PERKINS 9999 10
1244 HUNT 3452 11
123 DOOLITTLE 9940 12
1453 MACDONALD 5532 25

SQL> SELECT * from PROJ order by EMPNO;

EMPNO PROJNO

123 132
1234 101
1234 103
1234 102
1244 665
1244 456
1244 133
1453 200
2134 236
2134 456
2414 236
2414 456
2414 40
4532 40
5321 321
5321 40
5321 55
6542 102
6542 14
6542 321

Case 6: Loading Using the Direct Path Load Method

This case study loads the EMP table using the direct path load method and concurrently builds all
indexes. It illustrates the following functions:



Use of the direct path load method to load and index data. See Chapter 8, "SQL*Loader:
Conventional and Direct Path Loads".

How to specify the indexes for which the data is pre-sorted. See Pre-sorting Data for
Faster Indexing.

Loading all-blank numeric fields as null. See Loading All-Blank Fields.

The NULLIF clause. See NULLIF Keyword.
Note: Specify the name of the table into which you want to load data; otherwise, you will
see LDR-927. Specifying DIRECT=TRUE as a command-line parameter is not an option
when loading into a synonym for a table.

In this example, field positions and datatypes are specified explicitly.

Control File

The control file is ULCASE6.CTL.

LOAD DATA
INFILE 'ulcase6.dat'
INSERT
INTO TABLE emp


1) SORTED INDEXES (empix)

2) (empno POSITION(01:04) INTEGER EXTERNAL NULLIF empno=BLANKS,
ename POSITION(06:15) CHAR,
job POSITION(17:25) CHAR,
mgr POSITION(27:30) INTEGER EXTERNAL NULLIF mgr=BLANKS,
sal POSITION(32:39) DECIMAL EXTERNAL NULLIF sal=BLANKS,
comm POSITION(41:48) DECIMAL EXTERNAL NULLIF comm=BLANKS,
deptno POSITION(50:51) INTEGER EXTERNAL NULLIF deptno=BLANKS)

Notes:

The SORTED INDEXES clause identifies the indexes on which the data is sorted. This clause
indicates that the datafile is sorted on the columns in the EMPIX index. This clause allows
SQL*Loader to optimize index creation by eliminating the sort phase for this data when
using the direct path load method.

The NULLIF...BLANKS clause specifies that the column should be loaded as NULL if the
field in the datafile consists of all blanks. For more information, refer to Loading All-
Blank Fields.
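
Because a direct path load defers index maintenance until the end of the load, it can be worth confirming afterwards that the indexes on EMP are still valid; for example (illustrative query):

SQL> SELECT index_name, status FROM user_indexes WHERE table_name = 'EMP';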

Invoking SQL*Loader

Run the script ULCASE6.SQL as SCOTT/TIGER then enter the following at the command line:

sqlldr scott/tiger ulcase6.ctl direct=true log=ulcase6.log


Additional Information: The command "sqlldr" is a UNIX-specific invocation. To invoke
SQL*Loader on your operating system, see your Oracle operating system-specific
documentation.

Log File

The following is a portion of the log file:

Control File: ulcase6.ctl

Data File: ulcase6.dat
Bad File: ulcase6.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation: none specified
Path used: Direct

Table EMP, loaded from every logical record.
Insert option in effect for this table: REPLACE

Column Name Position Len Term Encl
Datatype

--------------
EMPNO 1:4 4
CHARACTER
NULL if EMPNO = BLANKS
ENAME 6:15 10
CHARACTER
JOB 17:25 9
CHARACTER
MGR 27:30 4
CHARACTER
NULL if MGR = BLANKS
SAL 32:39 8
CHARACTER
NULL if SAL = BLANKS
COMM 41:48 8
CHARACTER
NULL if COMM = BLANKS
DEPTNO 50:51 2
CHARACTER
NULL if DEPTNO = BLANKS

The following index(es) on table EMP were processed:
index SCOTT.EMPIX loaded successfully with 7 keys


Table EMP:
7 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.

Bind array size not used in direct path.
Space allocated for memory besides bind array: 0 bytes


Total logical records skipped: 0
Total logical records read: 7
Total logical records rejected: 0
Total logical records discarded: 0


Run began on Sun Nov 08 11:15:28 1998
Run ended on Sun Nov 08 11:15:31 1998


Elapsed time was: 00:00:03.22
CPU time was: 00:00:00.10


Case 7: Extracting Data from a Formatted Report

In this case study, SQL*Loader's string processing functions extract data from a formatted report.
It illustrates the following functions:

• Using SQL*Loader with an INSERT trigger; see the chapter on database triggers in
Oracle8i Application Developer's Guide - Fundamentals.

Use of the SQL string to manipulate data; see Applying SQL Operators to Fields.


Different initial and trailing delimiters; see Specifying Delimiters.

Use of SYSDATE; see Setting a Column to the Current Date.

Use of the TRAILING NULLCOLS clause; see TRAILING NULLCOLS.

Ambiguous field length warnings; see Conflicting Native Datatype Field Lengths and
Conflicting Character Datatype Field Lengths.
Note: This example creates a trigger that uses the last value of unspecified fields.

Data File

The following listing of the report shows the data to be loaded:

Today's Newly Hired Employees

Dept  Job       Manager   MgrNo  Emp Name   EmpNo  Salary     (Comm)
----  --------  --------  -----  ---------  -----  ---------  ------
20    Salesman  Blake      7698  Shepard     8061  $1,600.00    (3%)
                                 Falstaff    8066  $1,250.00    (5%)
                                 Major       8064  $1,250.00   (14%)

30    Clerk     Scott      7788  Conrad      8062  $1,100.00
                                 Ford        7369
                                 DeSilva     8063    $800.00
      Manager   King       7839  Provo       8065  $2,975.00

Insert Trigger

In this case, a BEFORE INSERT trigger is required to fill in department number, job name, and
manager's number when these fields are not present on a data line. When values are present, they
should be saved in a global variable. When values are not present, the global variables are used.

The INSERT trigger and the package defining the global variables are:

CREATE OR REPLACE PACKAGE uldemo7 AS -- Global Package Variables
last_deptno NUMBER(2);
last_job VARCHAR2(9);
last_mgr NUMBER(4);
END uldemo7;

/

CREATE OR REPLACE TRIGGER uldemo7_emp_insert
BEFORE INSERT ON emp
FOR EACH ROW
BEGIN
IF :new.deptno IS NOT NULL THEN
uldemo7.last_deptno := :new.deptno; -- save value for later
ELSE

:new.deptno := uldemo7.last_deptno; -- use last valid value
END IF;
IF :new.job IS NOT NULL THEN

uldemo7.last_job := :new.job;
ELSE


:new.job := uldemo7.last_job;
END IF;
IF :new.mgr IS NOT NULL THEN


uldemo7.last_mgr := :new.mgr;
ELSE
:new.mgr := uldemo7.last_mgr;


END IF;
END;
/

Note: The phrase FOR EACH ROW is important. If it was not specified, the INSERT trigger
would only fire once for each array of inserts because SQL*Loader uses the array interface.


Control File

The control file is ULCASE7.CTL.

LOAD DATA
INFILE 'ULCASE7.DAT'
APPEND
INTO TABLE emp


1) WHEN (57) = '.'
2) TRAILING NULLCOLS
3) (hiredate SYSDATE,
4) deptno POSITION(1:2) INTEGER EXTERNAL(3)
5) NULLIF deptno=BLANKS,


job POSITION(7:14) CHAR TERMINATED BY WHITESPACE
6) NULLIF job=BLANKS "UPPER(:job)",
7) mgr POSITION(28:31) INTEGER EXTERNAL

TERMINATED BY WHITESPACE, NULLIF mgr=BLANKS,
ename POSITION(34:41) CHAR TERMINATED BY WHITESPACE "UPPER(:ename)",
empno POSITION(45) INTEGER EXTERNAL TERMINATED BY WHITESPACE,

sal POSITION(51) CHAR TERMINATED BY WHITESPACE
8) "TO_NUMBER(:sal,'$99,999.99')",
9) comm INTEGER EXTERNAL ENCLOSED BY '(' AND '%'

":comm * 100"
)


Notes:

The decimal point in column 57 (the salary field) identifies a line with data on it. All other
lines in the report are discarded.

The TRAILING NULLCOLS clause causes SQL*Loader to treat any fields that are missing
at the end of a record as null. Because the commission field is not present for every
record, this clause says to load a null commission instead of rejecting the record when
only six fields are found instead of the expected seven.

Employee's hire date is filled in using the current system date.

This specification generates a warning message because the specified length does not agree
with the length determined by the field's position. The specified length (3) is used.

Because the report only shows department number, job, and manager when the value
changes, these fields may be blank. This control file causes them to be loaded as null, and
an RDBMS insert trigger fills in the last valid value.

The SQL string changes the job name to uppercase letters.

It is necessary to specify the starting position here. If the job field and the manager field were
both blank, then the job field's TERMINATED BY WHITESPACE clause would cause
SQL*Loader to scan forward to the employee name field. Without the POSITION clause,
the employee name field would be mistakenly interpreted as the manager field.

Here, the SQL string translates the field from a formatted character string into a number. The
numeric value takes less space and can be printed with a variety of formatting options.

In this case, different initial and trailing delimiters pick the numeric value out of a formatted
field. The SQL string then converts the value to its stored form.
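
Both SQL strings can be tested interactively before the load; for example, the salary conversion (illustrative):

SQL> SELECT TO_NUMBER('$1,600.00', '$99,999.99') FROM dual;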

Invoking SQL*Loader

Invoke SQL*Loader with a command such as:

sqlldr scott/tiger ulcase7.ctl ulcase7.log

Log File

The following is a portion of the log file:

1) SQL*Loader-307: Warning: conflicting lengths 2 and 3 specified for column
DEPTNO table EMP
Control File: ulcase7.ctl
Data File: ulcase7.dat

Bad File: ulcase7.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 65536 bytes
Continuation: none specified
Path used: Conventional

Table EMP, loaded when 57:57 = 0X2e(character '.')
Insert option in effect for this table: APPEND
TRAILING NULLCOLS option in effect

Column Name Position Len Term Encl
Datatype

--------------
HIREDATE SYSDATE
DEPTNO 1:2 3
CHARACTER
NULL if DEPTNO = BLANKS
JOB 7:14 8 WHT
CHARACTER
NULL if JOB = BLANKS
SQL string for column : "UPPER(:job)"
MGR 28:31 4 WHT
CHARACTER
NULL if MGR = BLANKS


ENAME 34:41 8 WHT
CHARACTER

SQL string for column : "UPPER(:ename)"
EMPNO NEXT * WHT
CHARACTER
SAL 51 * WHT
CHARACTER

SQL string for column : "TO_NUMBER(:sal,'$99,999.99')"
COMM NEXT * (
CHARACTER

%
SQL string for column : ":comm * 100"

2)Record 1: Discarded - failed all WHEN clauses.
Record 2: Discarded - failed all WHEN clauses.
Record 3: Discarded - failed all WHEN clauses.
Record 4: Discarded - failed all WHEN clauses.
Record 5: Discarded - failed all WHEN clauses.
Record 6: Discarded - failed all WHEN clauses.
Record 10: Discarded - failed all WHEN clauses.

Table EMP:
6 Rows successfully loaded.
0 Rows not loaded due to data errors.

2)
7 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.

Space allocated for bind array: 65286 bytes (81 rows)
Space allocated for memory besides bind array: 0 bytes

Total logical records skipped: 0
Total logical records read: 13
Total logical records rejected: 0
2) Total logical records discarded: 7

Run began on Sun Nov 08 11:16:30 1998
Run ended on Sun Nov 08 11:16:31 1998

Elapsed time was: 00:00:00.75
CPU time was: 00:00:00.09

Notes:

A warning is generated by the difference between the specified length and the length derived
from the position specification.

The 6 header lines at the top of the report are rejected, as is the blank separator line in the
middle.


Dropping the Insert Trigger and the Global-Variable Package

After running the example, use ULCASE7E.SQL to drop the insert trigger and global-variable
package.
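
The contents of ULCASE7E.SQL are not reproduced here, but the cleanup presumably amounts to something like the following (a sketch; the shipped script may differ):

DROP TRIGGER uldemo7_emp_insert;
DROP PACKAGE uldemo7;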

Case 8: Loading Partitioned Tables

Case 8 demonstrates


Partitioning of data. See Oracle8i Concepts.

Explicitly defined field positions and datatypes.

Loading using the fixed record length option.
Control File

The control file is ULCASE8.CTL. It loads the lineitem table with fixed length records,
partitioning the data according to shipdate.

LOAD DATA

1)
INFILE 'ulcase10.dat' "fix 129"

BADFILE 'ulcase10.bad'

TRUNCATE

INTO TABLE lineitem

PARTITION (ship_q1)

2)
(l_orderkey position (1:6) char,
l_partkey position (7:11) char,
l_suppkey position (12:15) char,
l_linenumber position (16:16) char,
l_quantity position (17:18) char,
l_extendedprice position (19:26) char,
l_discount position (27:29) char,
l_tax position (30:32) char,
l_returnflag position (33:33) char,
l_linestatus position (34:34) char,
l_shipdate position (35:43) char,
l_commitdate position (44:52) char,
l_receiptdate position (53:61) char,
l_shipinstruct position (62:78) char,
l_shipmode position (79:85) char,
l_comment position (86:128) char)

Notes:

Specifies that each record in the datafile is of fixed length (129 characters in this example).
See Input Data and Datafiles.

Identifies the column name and location of the data in the datafile to be loaded into each
column.


Table Creation

In order to partition the data the lineitem table is created using four (4) partitions according to the
shipment date:

create table lineitem
(l_orderkey number,
l_partkey number,
l_suppkey number,
l_linenumber number,
l_quantity number,
l_extendedprice number,
l_discount number,
l_tax number,
l_returnflag char,
l_linestatus char,
l_shipdate date,
l_commitdate date,
l_receiptdate date,
l_shipinstruct char(17),
l_shipmode char(7),
l_comment char(43))
partition by range (l_shipdate)
(
partition ship_q1 values less than (TO_DATE('01-APR-1996', 'DD-MON-YYYY'))
tablespace p01,
partition ship_q2 values less than (TO_DATE('01-JUL-1996', 'DD-MON-YYYY'))
tablespace p02,
partition ship_q3 values less than (TO_DATE('01-OCT-1996', 'DD-MON-YYYY'))
tablespace p03,
partition ship_q4 values less than (TO_DATE('01-JAN-1997', 'DD-MON-YYYY'))
tablespace p04
)


Input Data File

The datafile for this case, ULCASE8.DAT, is listed below. Each record is 129 characters in
length. Note that five(5) blanks precede each record in the file.

1 151978511724386.60 7.04.0NO09-SEP-6412-FEB-9622-MAR


96DELIVER IN

PERSONTRUCK iPBw4mMm7w7kQ zNPL i261OPP1 2731 73223658958.28.09.06NO12-FEB-9628-FEB-9620-APR-96TAKE

BACK RETURN

MAIL 5wM04SNyl0AnghCP2nx lAi1 3370 3713 810210.96 .1.02NO29-MAR-9605-MAR-9631-JAN-96TAKE

BACK RETURN

REG AIRSQC2C 5PNCy4mM1 5214 46542831197.88.09.06NO21-APR-9630-MAR-9616-MAY-96NONE

AIR Om0L65CSAwSj5k6k


1 6564 6763246897.92.07.02NO30-MAY-9607-FEB-9603-FEB96DELIVER
IN
PERSONMAIL CB0SnyOL PQ32B70wB75k 6Aw10m0wh

1 7403 160524 31329.6 .1.04NO30-JUN-9614-MAR-9601 APR-96NONE
FOB C2gOQj OB6RLk1BS15 igN

2 8819 82012441659.44 0.08NO05-AUG-9609-FEB-9711-MAR97COLLECT
COD
AIR O52M70MRgRNnmm476mNm

3 9451 721230 41113.5.05.01AF05-SEP-9629-DEC-9318-FEB-94TAKE
BACK RETURN
FOB 6wQnO0Llg6y

3 9717 1834440788.44.07.03RF09-NOV-9623-DEC-9315-FEB-94TAKE
BACK RETURN
SHIP LhiA7wygz0k4g4zRhMLBAM

3 9844 1955 6 8066.64.04.01RF28-DEC-9615-DEC-9314-FEB-94TAKE
BACK RETURN
REG AIR6nmBmjQkgiCyzCQBkxPPOx5j4hB 0lRywgniP1297

Invoking SQL*Loader

Invoke SQL*Loader with a command such as:

sqlldr scott/tiger control=ulcase8.ctl data=ulcase8.dat

Additional Information: The command "sqlldr" is a UNIX-specific invocation. To invoke
SQL*Loader, see the Oracle operating system-specific documentation.

Log File

The following shows a portion of the log file:

Control File: ulcase8.ctl

Data File: ulcase8.dat
File processing option string: "fix 129"
Bad File: ulcase10.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 65536 bytes
Continuation: none specified
Path used: Conventional

Table LINEITEM, partition SHIP_Q1, loaded from every logical record.
Insert option in effect for this partition: TRUNCATE

Column Name Position Len Term Encl
Datatype


L_ORDERKEY 1:6 6
CHARACTER
L_PARTKEY 7:11 5
CHARACTER
L_SUPPKEY 12:15 4
CHARACTER
L_LINENUMBER 16:16 1
CHARACTER
L_QUANTITY 17:18 2
CHARACTER
L_EXTENDEDPRICE 19:26 8
CHARACTER
L_DISCOUNT 27:29 3
CHARACTER
L_TAX 30:32 3
CHARACTER
L_RETURNFLAG 33:33 1
CHARACTER
L_LINESTATUS 34:34 1
CHARACTER
L_SHIPDATE 35:43 9
CHARACTER
L_COMMITDATE 44:52 9
CHARACTER
L_RECEIPTDATE 53:61 9
CHARACTER
L_SHIPINSTRUCT 62:78 17
CHARACTER
L_SHIPMODE 79:85 7
CHARACTER
L_COMMENT 86:128 43
CHARACTER

Record 4: Rejected - Error on table LINEITEM, partition SHIP_Q1.
ORA-14401: inserted partition key is outside specified partition


Record 5: Rejected - Error on table LINEITEM, partition SHIP_Q1.
ORA-14401: inserted partition key is outside specified partition


Record 6: Rejected - Error on table LINEITEM, partition SHIP_Q1.
ORA-14401: inserted partition key is outside specified partition


Record 7: Rejected - Error on table LINEITEM, partition SHIP_Q1.
ORA-14401: inserted partition key is outside specified partition


Record 8: Rejected - Error on table LINEITEM, partition SHIP_Q1.
ORA-14401: inserted partition key is outside specified partition


Record 9: Rejected - Error on table LINEITEM, partition SHIP_Q1.
ORA-14401: inserted partition key is outside specified partition



Record 10: Rejected - Error on table LINEITEM, partition SHIP_Q1.
ORA-14401: inserted partition key is outside specified partition

Table LINEITEM, partition SHIP_Q1:
3 Rows successfully loaded.
7 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.

Space allocated for bind array: 65532 bytes (381 rows)
Space allocated for memory besides bind array: 0 bytes

Total logical records skipped: 0
Total logical records read: 10
Total logical records rejected: 7
Total logical records discarded: 0

Run began on Sun Nov 08 11:30:49 1998
Run ended on Sun Nov 08 11:30:50 1998

Elapsed time was: 00:00:01.11
CPU time was: 00:00:00.14
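
Only the three records whose L_SHIPDATE falls before 01-APR-1996 belong in partition SHIP_Q1, so the result can be checked afterwards with a query such as (illustrative):

SQL> SELECT COUNT(*) FROM lineitem PARTITION (ship_q1);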

Case 9: Loading LOBFILEs (CLOBs)

Case 9 demonstrates

• Adding a CLOB column called RESUME to the table emp.
• Using a FILLER field (RES_FILE).
• Loading multiple LOBFILEs into the emp table.
Control File

The control file is ULCASE9.CTL. It loads new emp records with the resume for each employee
coming from a different file.

LOAD DATA
INFILE *
INTO TABLE EMP
REPLACE
FIELDS TERMINATED BY ','
( EMPNO INTEGER EXTERNAL,


ENAME CHAR,
JOB CHAR,
MGR INTEGER EXTERNAL,
SAL DECIMAL EXTERNAL,
COMM DECIMAL EXTERNAL,



DEPTNO INTEGER EXTERNAL,
1) RES_FILE FILLER CHAR,
2) "RESUME" LOBFILE (RES_FILE) TERMINATED BY EOF NULLIF RES_FILE= 'NONE'
)
BEGINDATA
7782,CLARK,MANAGER,7839,2572.50,,10,ulcase91.dat
7839,KING,PRESIDENT,,5500.00,,10,ulcase92.dat
7934,MILLER,CLERK,7782,920.00,,10,ulcase93.dat
7566,JONES,MANAGER,7839,3123.75,,20,ulcase94.dat
7499,ALLEN,SALESMAN,7698,1600.00,300.00,30,ulcase95.dat
7654,MARTIN,SALESMAN,7698,1312.50,1400.00,30,ulcase96.dat
7658,CHAN,ANALYST,7566,3450.00,,20,NONE

Notes:

This is a filler field. The filler field is assigned values from the datafield to which it is
mapped. See Secondary Data Files (SDFs) and LOBFILES for more information.

RESUME is loaded as a CLOB. The LOBFILE function specifies the name of the field that
holds the name of the file containing the data for the LOB column. See
Loading LOB Data Using LOBFILEs for more information.
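
Before this control file can run, the EMP table needs the RESUME column that Case 9 adds. Presumably the setup script does something equivalent to the following (a sketch; the shipped ULCASE9.SQL may differ):

ALTER TABLE emp ADD (resume CLOB);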

Input Data Files

>>ulcase91.dat<<
Resume for Mary Clark

Career Objective: Manage a sales team with consistent recordbreaking performance.
Education:        BA Business University of Iowa 1992
Experience:       1992-1994 - Sales Support at MicroSales Inc.
                  Won "Best Sales Support" award in 1993 and 1994
                  1994-Present - Sales Manager at MicroSales Inc.
                  Most sales in mid-South division for 2 years

>>ulcase92.dat<<
Resume for Monica King

Career Objective: President of large computer services company
Education:        BA English Literature Bennington, 1985
Experience:       1985-1986 - Mailroom at New World Services
                  1986-1987 - Secretary for sales management at New World Services
                  1988-1989 - Sales support at New World Services
                  1990-1992 - Saleman at New World Services
                  1993-1994 - Sales Manager at New World Services
                  1995 - Vice President of Sales and Marketing at New World Services
                  1996-Present - President of New World Services

>>ulcase93.dat<<
Resume for Dan Miller

Career Objective: Work as a sales support specialist for a services company
Education:        Plainview High School, 1996
Experience:       1996 - Present: Mail room clerk at New World Services

>>ulcase94.dat<<
Resume for Alyson Jones

Career Objective: Work in senior sales management for a vibrant and growing company
Education:        BA Philosophy Howard Univerity 1993
Experience:       1993 - Sales Support for New World Services
                  1994-1995 - Salesman for New World Services. Led in US sales in both 1994 and 1995.
                  1996 - present - Sales Manager New World Services. My sales team has beat its quota by at least 15% each year.

>>ulcase95.dat<<
Resume for David Allen

Career Objective: Senior Sales man for agressive Services company
Education:        BS Business Administration, Weber State 1994
Experience:       1993-1994 - Sales Support New World Services
                  1994-present - Salesman at New World Service. Won sales award for exceeding sales quota by over 20% in 1995, 1996.

>>ulcase96.dat<<
Resume for Tom Martin

Career Objective: Salesman for a computing service company
Education:        1988 - BA Mathematics, University of the North
Experience:       1988-1992 Sales Support, New World Services
                  1993-present Salesman New World Services


Invoking SQL*Loader

Invoke SQL*Loader with a command such as:

sqlldr sqlldr/test control=ulcase9.ctl data=ulcase9.dat

Additional Information: The command "sqlldr" is a UNIX-specific invocation. To invoke
SQL*Loader, see the Oracle operating system-specific documentation.

Log File

The following shows a portion of the log file:

Control File: ulcase9.ctl

Data File: ulcase9.ctl
Bad File: ulcase9.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 65536 bytes
Continuation: none specified
Path used: Conventional

Table EMP, loaded from every logical record.
Insert option in effect for this table: REPLACE

Column Name Position Len Term Encl
Datatype

EMPNO FIRST * ,
CHARACTER
ENAME NEXT * ,
CHARACTER
JOB NEXT * ,
CHARACTER
MGR NEXT * ,
CHARACTER
SAL NEXT * ,
CHARACTER
COMM NEXT * ,
CHARACTER
DEPTNO NEXT * ,
CHARACTER
RES_FILE NEXT * ,
CHARACTER


(FILLER FIELD)
"RESUME" DERIVED * WHT
CHARACTER


Dynamic LOBFILE. Filename in field RES_FILE
NULL if RES_FILE = 0X4e4f4e45(character 'NONE')


Table EMP:
7 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.

Space allocated for bind array: 63984 bytes (31 rows)
Space allocated for memory besides bind array: 0 bytes

Total logical records skipped: 0
Total logical records read: 7
Total logical records rejected: 0
Total logical records discarded: 0

Run began on Sun Nov 08 11:31:11 1998
Run ended on Sun Nov 08 11:31:19 1998

Elapsed time was: 00:00:08.14
CPU time was: 00:00:00.09

Case 10: Loading REF Fields and VARRAYs

Case 10 demonstrates


Loading a customer table, which has a primary key as its OID and which stores order
items in a VARRAY.

Loading an order table which has a REF to the customer table and the order items in a
VARRAY.
Control File

LOAD DATA
INFILE *
CONTINUEIF THIS (1) = '*'
INTO TABLE customers
replace
fields terminated by ","
(

cust_no char,
name char,
addr char


)
INTO TABLE orders
replace
fields terminated by ","
(


order_no char,
1) cust_no FILLER char,
2) cust REF (CONSTANT 'CUSTOMERS',
cust_no),
1) item_list_count FILLER char,
3) item_list varray count (item_list_count)

(
4) item_list column object(

5)
item char,
cnt char,
price char

)

)
)
6) BEGINDATA
*00001,Spacely Sprockets,15 Space Way,
*00101,00001,2,
*Sprocket clips, 10000, .01,
Sprocket cleaner, 10, 14.00
*00002,Cogswell Cogs,12 Cogswell Lane,
*00100,00002,4,
*one quarter inch cogs,1000,.02,
*one half inch cog, 150, .04,
*one inch cog, 75, .10,
*Custom coffee mugs, 10, 2.50

Notes:

This is a filler field. The filler field is assigned values from the datafield to which it is
mapped. See Secondary Data Files (SDFs) and LOBFILES for more information.

This field is created as a REF field.

item_list is stored in a VARRAY.

The second occurrence of item_list identifies the datatype of each element of the VARRAY.
Here, the datatype is a column object.

This list shows all attributes of the column object that are loaded for the VARRAY. The list
is enclosed in parentheses. See section Loading Column Objects for more information
about loading column objects.

The data is contained in the control file and is preceded by the keyword BEGINDATA.
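
Once loaded, the REF column and the VARRAY can be examined with ordinary SQL. Queries along these lines (illustrative, using only the column and attribute names from the control file) dereference the customer reference and unnest the item list:

SQL> SELECT o.order_no, DEREF(o.cust) FROM orders o;

SQL> SELECT o.order_no, i.item, i.cnt, i.price FROM orders o, TABLE(o.item_list) i;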

Invoking SQL*Loader

Invoke SQL*Loader with a command such as:

sqlldr sqlldr/test control=ulcase10.ctl

Additional Information: The command "sqlldr" is a UNIX-specific invocation. To invoke


SQL*Loader, see the Oracle operating system-specific documentation.

Log File

The following shows a portion of the log file:

Control File: ulcase10.ctl

Data File: ulcase10.ctl
Bad File: ulcase10.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 65536 bytes
Continuation: 1:1 = 0X2a (character '*'), in current physical record
Path used: Conventional

Table CUSTOMERS, loaded from every logical record.
Insert option in effect for this table: REPLACE

Column Name Position Len Term Encl
Datatype

CUST_NO FIRST * ,
CHARACTER
NAME NEXT * ,
CHARACTER
ADDR NEXT * ,
CHARACTER


Table ORDERS, loaded from every logical record.
Insert option in effect for this table: REPLACE


Column Name Position Len Term Encl
Datatype

ORDER_NO NEXT * ,
CHARACTER
CUST_NO NEXT * ,
CHARACTER


(FILLER FIELD)
CUST DERIVED REF

Arguments are:
CONSTANT 'CUSTOMERS'
CUST_NO


ITEM_LIST_COUNT NEXT * ,


CHARACTER
(FILLER FIELD)
ITEM_LIST DERIVED * VARRAY
Count for VARRAY
ITEM_LIST_COUNT

*** Fields in ITEM_LIST
ITEM_LIST DERIVED * COLUMN
OBJECT

*** Fields in ITEM_LIST.ITEM_LIST
ITEM FIRST * ,
CHARACTER
CNT NEXT * ,
CHARACTER
PRICE NEXT * ,
CHARACTER
*** End of fields in ITEM_LIST.ITEM_LIST

*** End of fields in ITEM_LIST

Table CUSTOMERS:
2 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.

Table ORDERS:
2 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.

Space allocated for bind array: 65240 bytes (28 rows)
Space allocated for memory besides bind array: 0 bytes

Total logical records skipped: 0
Total logical records read: 2
Total logical records rejected: 0
Total logical records discarded: 0

Run began on Sun Nov 08 11:46:13 1998
Run ended on Sun Nov 08 11:46:14 1998

Elapsed time was: 00:00:00.65
CPU time was: 00:00:00.16