
Agent logging

Logging on the agent is very important for debugging. It is a best practice to configure the agent's loggers to the desired levels, which may differ from the defaults. This is especially important when you are troubleshooting an issue.


ODI logging is based on the Java logging APIs. The ODI agent logs via OJDL (Oracle Java Diagnostic Logging). This logging can be configured via the XML configuration file "ODI-logging-config.xml" under $ODI_HOME\oracledi\agent\bin\.

The ODI logging system consists of several loggers and log handlers, similar to the standard Java logging components. Here is a good introduction to Java logging components. By default there are two handlers: a console handler, which logs to the console, and a file handler, which logs to a file.

The console handler is, by default, limited to the startup messages. When ODI novices talk about logging, they usually mean console logging. In a typical installation the agent is started as a background process, the console messages are routed to an out file (a text file), and that is the end of it. But a little-known fact is that ODI can also log directly to files, and that the level of logging can be controlled via the configuration file mentioned above. In this post we will talk mainly about the file handler.

Here are some important sections of the configuration file.

ODI-file-handler params:

level (log_handler parameter): The file handler does not place any restriction on the level of the messages being logged; it logs whatever messages are passed to it. That is why its level is set to "ALL".
path: The path to the log file. All log messages are appended to this file.
maxFileSize: Interestingly, the ODI handler does automatic log-file size management: once maxFileSize is reached, it renames the current file with a numeric suffix and starts logging to a new file. This makes sure the file size does not grow out of control. For example, if the log file ODIagent.log grows larger than maxFileSize, the handler renames ODIagent.log to ODIagent-1.log and starts writing to a new ODIagent.log file.
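As a concrete reference, the file-handler section of ODI-logging-config.xml looks roughly like the sketch below. This is an illustration based on the 11g file layout, not a verbatim copy; property names and default values can differ between ODI versions, so compare it with the file in your own installation (the log path here is a made-up example):

```xml
<log_handler name="ODI-file-handler"
             class="oracle.core.ojdl.logging.ODLHandlerFactory"
             level="ALL">  <!-- no restriction: logs whatever the loggers pass in -->
  <!-- Path is an example only; point it wherever you want the agent log -->
  <property name="path" value="../log/ODIagent.log"/>
  <!-- Rotation threshold in bytes: rename to ODIagent-1.log once reached -->
  <property name="maxFileSize" value="1000000"/>
  <property name="format" value="ODL-Text"/>
</log_handler>
```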


Loggers:

These are the components which send log messages to the handlers. The default level on the root logger (oracle.odi) is NOTIFICATION:16, which is inherited by all the sub-loggers (oracle.odi.XXXX). The root logger is configured to send all of its messages to both the console handler and the file handler. Notice that even though both handlers receive the same NOTIFICATION:16 messages, the console handler only logs messages of level NOTIFICATION:1, while the file handler logs all of them. So, if we want to increase the log level/granularity, we can simply change the level on the root logger to any of the levels below:

Java Equivalent            ODI Level
SEVERE.intValue()+100      INCIDENT_ERROR:1
SEVERE                     ERROR:1
WARNING                    WARNING:1
INFO                       NOTIFICATION:1
CONFIG                     NOTIFICATION:16
FINE                       TRACE:1
FINER                      TRACE:16
FINEST                     TRACE:32
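With that in hand, raising the granularity is a one-line change to the root logger element. Here is a sketch, again based on the 11g file shape (verify against your own copy), that bumps oracle.odi to TRACE:32, the most verbose level, while keeping both handlers attached:

```xml
<logger name="oracle.odi" level="TRACE:32" useParentHandlers="false">
  <handler name="ODI-file-handler"/>    <!-- logs everything it receives -->
  <handler name="ODI-console-handler"/> <!-- still prints only NOTIFICATION:1 -->
</logger>
```

Restart the agent after editing the file, since the configuration is read at startup.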

Monday, December 30, 2013
Posted by Arjun Lagisetty

Load monitoring using SNP_tables.

I had an interesting problem at work today. (This post was a long time in draft mode, so "today" was technically a long time ago, but I would still like to keep that tone.) I was asked to create a report on all the failures of a couple of scenarios for the previous day. For some strange reason, automatic load monitoring was not built into the original code (tch.. tch..). Instead of manually trudging through the entire Operator listing for failed scenarios, I set out to automate the process.

There are several ways to monitor ODI loads. Most novices use the technique illustrated in this diagram from the Oracle website to monitor the ODI load. (The red cross was added by me for dramatization.) I am not a big fan of reinventing the wheel, so I will point you to some good blog posts out there which describe the dangers of this technique (click here). Also, in my case, following this technique would have involved changing hundreds of existing packages (which was a no-go).

There are also recommendations like this one, alternatives to the above process that only work in some situations.

Here is my method of monitoring the load processes. 

I assume that you at least have read-only permissions on the work repository tables, which store the session state. The easy method is to create a data server in ODI Topology, plus a physical schema and a logical schema for the work repository. Then, in Designer, create a model for the work repository and reverse-engineer only the required tables/datastores.

We had a very simple requirement: notify the admin about the failed scenarios every day at a certain time. Please use your creativity to fit it to your own needs.

I created a procedure which reads the state of all the sessions that ran today and emails the admin if any of the scenarios errored out. We generated a scenario from this procedure and scheduled it in the morning, just before the backups.

I used the SNP_ tables to get the data and generate the report. Yes, it was a pain in the neck to guess what each column means. It would be considerate of Oracle to release a description of each column along with each release.

Query used:
SELECT   COUNT(*) FAIL_COUNT              -- Number of failed scenarios
    FROM SNP_SESS                         -- Table which contains the status of the scenarios
   WHERE SCEN_VERSION IS NOT NULL         -- Gives us only scenario executions
         AND SESS_STATUS = 'E'            -- Gives us the sessions which have Error status
         AND TRUNC(SESS_BEG) = TRUNC(SYSDATE) -- All the scenarios which ran today
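A count alone tells the admin that something failed but not what. The same table can feed a detail listing. The extra columns below (SESS_NO, SESS_NAME, SESS_END) are what I would expect to find in the work repository, but SNP_ column names do shift between ODI versions, so check them against your own SNP_SESS definition before relying on this:

```sql
SELECT   SESS_NO,    -- Session number, as shown in Operator
         SESS_NAME,  -- Name of the failed session
         SESS_BEG,   -- When the session started
         SESS_END    -- When it errored out
    FROM SNP_SESS
   WHERE SCEN_VERSION IS NOT NULL             -- Only scenario executions
         AND SESS_STATUS = 'E'                -- Only sessions in Error status
         AND TRUNC(SESS_BEG) = TRUNC(SYSDATE) -- Only sessions which began today
ORDER BY SESS_BEG
```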


If you are using load plans, you can define the email in an exception step. See more details here. We can define an email in case of exception for each load plan step, where each load plan step is the scenario we want to monitor.

Monday, December 16, 2013
Posted by Arjun Lagisetty


Copyright © ODI Pundits - Oracle Data Integrator - Maintained by Arjun Lagisetty