Archive for 2016

How to extract host names from a TNS file?

To extract host names we use a regular expression. You can use the regex feature of a standard editor like Notepad++ or an online tester like http://regexr.com/


Regex:
(HOST\s?=\s?)(.*?)(\))

Extract the second group from each match using the substitution:
$2\n
The easiest way to use the above regex is http://regexr.com/ (a scripted PowerShell alternative follows the steps below):

  1. Copy and paste the tns file into the text editor
  2. Copy and paste the expression into the expression editor
  3. Enable the global flag under "flags" at the top right so the entire file is matched
  4. In Tools > List, enter the group extractor $2\n
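If you would rather script the extraction, here is a minimal PowerShell sketch using the same regex; the path to the tnsnames.ora file is a placeholder.

# Minimal sketch: pull unique HOST values out of a TNS file.
# The path .\tnsnames.ora is a placeholder; point it at your file.
# Group 2 of the regex above holds the host name.
Select-String -Path .\tnsnames.ora -Pattern '(HOST\s?=\s?)(.*?)(\))' -AllMatches |
    ForEach-Object { $_.Matches } |
    ForEach-Object { $_.Groups[2].Value } |
    Sort-Object -Unique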

Regex is a very important tool in your toolset for complex string manipulation.

Example regex and file here:
http://regexr.com/3esbi


Tuesday, December 13, 2016
Posted by Arjun Lagisetty

How to name a context? Hint: Not GLOBAL

Most of the implementations I have seen do not give contexts optimal names.

Yes, the default context is GLOBAL, and you might think: what's wrong with that?

There is nothing wrong with it if all you have is one source/target pair: the dev source database loads to the dev target database. For production, I recommend having only one context, and it being GLOBAL.

In DEV, we can start with one context while we are loading from one dev source to one target. As time moves on, you might receive a request to load from a hotfix environment, which contains data not present in the dev source database. I have seen a couple of things happen at this point to load data from the new data source:
  1. We change the existing data server connection details to point to the new source. Very simple (good for a one-time load).
  2. We create a new data source and update the logical/physical mapping (on the GLOBAL context, which then has to be reverted again).

For one-time events, both approaches are OK. But slowly you start getting requests to load from the new data source again and again, and you find yourself repeating these procedures over and over.

My recommendation in this instance is to create a new context named as follows:

HOTFIX_DEV

<SRC>_<TARGET>
<SRC> --> code of the data source you are loading from
<TARGET> --> code of the data source you are loading to

A data source code can be any text that helps you identify that group of data sources.

When you name contexts in the above manner and perform the logical and physical mapping accordingly, you gain a couple of advantages:
  1. You can react to requests to load from multiple source instances instantly, without scope for error.
  2. You have a history of which loads were performed from which sources. For example, your testing team might ask, "When was a full load done from this data source?" The history, broken down by context, lets you answer these questions.


The ideal scenario might be one source per target, but we all know we live far from an ideal world, and this convention might make your life a little easier.
Wednesday, December 7, 2016
Posted by Arjun Lagisetty

Toad Shortcuts & Tips

Most Useful Shortcuts:

  Block Comment (CTRL+B): Comment the selected lines
  Uncomment (CTRL+SHIFT+B): Uncomment the selected lines
  Format Code (CTRL+SHIFT+F): Format and indent the code
  Describe (F4 or ALT+Enter): Opens a new window describing the table
  Lower Case (CTRL+L): Converts the selected text to lowercase
  Describe Select Query (CTRL+F9): Opens a new window describing the select statement
  Execute Statement (F9): Runs the current statement and shows results in the Data Grid
  Execute Script (F5): Runs the SQL as a script and shows results in the Script Output tab
  SQL Recall (F8): Opens a panel showing all the SQL statements previously run
  Previous SQL Recall Statement (ALT+Up): Acts like bash history navigation for SQL commands
  Next SQL Recall Statement (ALT+Down): Acts like bash history navigation for SQL commands
  Next Navigator Node (CTRL+PgDn): Goes to the next statement block in the editor
  Previous Navigator Node (CTRL+PgUp): Goes to the previous statement block in the editor
  Quick Template (CTRL+Space): Opens a prompt in the editor with templates for code blocks (BEGIN/END, FOR loop, etc.)



Helpful Views:

  View > Code Snippets: Provides sample code snippets such as hints, date formats, and conversion functions (a quick alternative to googling for syntax)
  View > Object Palette: Used for building queries by dragging object and column names into the editor
Posted by Arjun Lagisetty

Code Backup Strategy for Development Work Repositories

Problem:

Have you ever accidentally changed something in an interface and forgotten what was changed? You wish you had a copy of yesterday's interface so you could restore from it, or at least compare the current interface against the previous day's copy.

ODI developers typically make inconsistent use of the internal versioning system to save versions of the code. All these versions are stored in the master repository in BLOB format and litter the repository after a while. The method is also inconsistent: a developer might version an interface but not the data stores used in that interface. Making copies of objects is even more disastrous; it renders the repository horribly unreadable. Relying on DB backups means the restore process is all or nothing: you cannot choose which individual objects to restore. Given these challenges, we designed a process that backs up the code nightly without any developer intervention.


Solution:

I developed a package that exports, object by object, the object types listed in the table below from the development work repository. This code is hosted here. It is scheduled every night; the process backs the objects up to an archive file and retains the archives for 28 days (a sketch of the archive-and-retention step follows the table).


Note: 


  • This code contains some hard-coded directory values. I have yet to make it parameter-driven; due to lack of time I am publishing the code as is. You need to make changes to the procedure named "PROC_OBJECT_BACKUP".
  • The procedure is written for Windows and uses 7-Zip for archiving.
  • Objects are exported in the format <Prefix>_<Object_ID>.xml; there were errors when using object names in the file names.
  • It uses the Sunopsis Memory Engine's default physical schema.


Object Type       Child Objects    Prefix
PROJECT           No               PROJ
INTERFACES        Yes              POP
PROCEDURES        Yes              TRT
PACKAGES          No               PACK
VARIABLES         Yes              VAR
USER FUNCTIONS    Yes              UFUNC
MODELS            No               MOD
SUB-MODELS        No               SMOD
DATASTORE         Yes              TAB
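The nightly archive-and-retention step boils down to something like the following PowerShell sketch; the directory paths are hypothetical placeholders, not the values hard-coded in PROC_OBJECT_BACKUP.

# Hypothetical paths; the real values live in PROC_OBJECT_BACKUP.
$exportDir  = 'C:\odi_backup\export'    # where the nightly XML exports land
$archiveDir = 'C:\odi_backup\archive'
$sevenZip   = Join-Path $env:ProgramFiles '7-Zip\7z.exe'
$stamp      = Get-Date -Format 'MMddyyyyHHmmss'

# Compress tonight's exports into a timestamped archive.
& $sevenZip a (Join-Path $archiveDir "$stamp.7z") (Join-Path $exportDir '*.xml')

# Enforce the 28-day retention window.
Get-ChildItem $archiveDir -Filter '*.7z' |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-28) } |
    Remove-Item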

Potential uses: 

  1. The process can be used to restore the code if a developer accidentally corrupts it.
  2. We can modify the process to extract only changed objects nightly and check them into a version control system.
  3. We can modify the process to extract objects by tag, for export and migration to another environment.


Thursday, October 20, 2016
Posted by Arjun Lagisetty

YAJSW: Changing ODI params

If you installed the ODI agent as a Windows service on your host using YAJSW, as documented in this blog post on Oracle:

https://blogs.oracle.com/dataintegration/entry/running_odi_11gr1_standalone_a

then to change ODI params you must follow the steps below. Changing the odiparams.bat file will have no effect.


  1. Stop the service.
  2. Make a copy of the %YAJSW%/conf/wrapper.conf file used to create this service.
  3. Edit %YAJSW%/conf/wrapper.conf with the required changes:
    • If you want to increase the heap size, locate the -Xmx and -Xms parameters.
    • Change the configuration (an illustrative excerpt follows these steps).
  4. Restart the service via Windows Services.
Verify the change using the Process Explorer tool from Sysinternals.
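For reference, heap settings in a YAJSW wrapper.conf are typically passed as numbered wrapper.java.additional entries; the index numbers and sizes below are only examples, so match them to what is already in your file.

# Illustrative wrapper.conf entries; adjust the indexes and sizes to your file.
wrapper.java.additional.1=-Xms512m
wrapper.java.additional.2=-Xmx1024m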



Friday, September 30, 2016
Posted by Arjun Lagisetty

YAPSSSTB: Yet Another PowerShell Script for SQL Server Table Backups

Thanks to my very descriptive title, I can skip the obligatory introductory statement.

Requirement:

Back up snapshot fact tables after every snapshot load.
Archive the backups with a timestamped filename.

Solution:

Download the script and run a command like the following:

.\backup_sql_tables_ps.ps1 -ServerInstance INSTANCE_NAME -Database DATABASE_NAME -ExportPath C:\temp\ -TableNames DATABASE_NAME.schema.TABLE1,DATABASE_NAME.schema.TABLE2,DATABASE_NAME.schema.TABLE3

  • The script uses BCP to back up SQL Server objects (a sketch of the underlying call follows these notes).
  • It should be run on a machine where SQL Server is installed, as a user that has the required privileges and access to the BCP utility shipped with SQL Server.
  • It needs 7-Zip installed in $env:ProgramFiles\7-Zip\.
  • It creates folders named MMddyyyyHHmmsdata for the data files and MMddyyyyHHmmserror for the error and info detail files in the archive.
  • Each table's data is backed up to a file named TableName.dat (if you are trying to back up two tables with the same name from different schemas, this might not work for you).
  • If an archive name is not provided, it creates an archive named with the timestamp MMddyyyyHHmms.7z.
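Under the hood, each table export boils down to a BCP "out" call along the lines of the following sketch; the flags shown are illustrative (-T assumes a trusted connection, -n native format), and the actual flags in the script may differ.

# Sketch of the underlying BCP call for one table; paths and names are placeholders.
$dataDir = 'C:\temp\MMddyyyyHHmmsdata'   # data folder the script creates
bcp DATABASE_NAME.schema.TABLE1 out (Join-Path $dataDir 'TABLE1.dat') -S INSTANCE_NAME -T -n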

ODI Specific:

This command can be called from ODI if you have ODI and SQL Server installed on the same machine, and it should work without a sweat.
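For example, an OS command step in an ODI package could invoke it along these lines (a sketch; the script location is a hypothetical placeholder):

# Hypothetical script path; parameters as documented above.
powershell.exe -ExecutionPolicy Bypass -File C:\scripts\backup_sql_tables_ps.ps1 -ServerInstance INSTANCE_NAME -Database DATABASE_NAME -ExportPath C:\temp\ -TableNames DATABASE_NAME.schema.TABLE1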
Wednesday, September 14, 2016
Posted by Arjun Lagisetty

Start a Load Plan Remotely and Synchronously Using PowerShell

Recently, I moved to a company where all the hosts are Windows-based, so all my pretty Linux scripts went unloved and I had to dig into PowerShell and get my hands dirty. PowerShell is awesome.

Requirement:

Run a load plan from a remote Windows machine (our enterprise scheduler).
Wait for the load plan to complete.
Based on the outcome of the load plan, run another job.

Challenge:

As most of you are aware, Oracle provides no script to run a load plan remotely, so I had to create one. The script had to kick off the load plan, wait for it to complete, and then exit. I went with PowerShell instead of bat because of the many pre-existing cmdlets I can leverage.

I use the OdiInvoke web service installed with the standalone agent to trigger the load plan and to monitor its status.

Solution:

Download the script here 


All commands should be run in PowerShell, NOT cmd.

The script is well documented; the documentation can be viewed using the command:

Get-Help .\startlpremote_ps.ps1 -full

Sample command to execute the script.

.\startlpremote_ps.ps1 -lp_name LOAD_PLAN_NAME -context GLOBAL -work_repository WORKREP1 -agent_url http://myhost:20910/oraclediagent -odi_user SUPERVISOR -odi_password welcome1 -log_level 5 -synchronous 1 -lp_status_throw_error 1

Some notable features (these help in chaining commands in an external scheduler; see the sketch after this list):
  • Ability to run the command synchronously, i.e., wait for the load plan's result before exiting PowerShell.
  • Ability to report an error in the load plan as an error in PowerShell.
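A minimal chaining sketch, assuming -lp_status_throw_error 1 surfaces a failed load plan as a terminating error; the follow-up job is a hypothetical placeholder.

# Run the load plan synchronously; start the follow-up job only on success.
try {
    .\startlpremote_ps.ps1 -lp_name LOAD_PLAN_NAME -context GLOBAL -work_repository WORKREP1 `
        -agent_url http://myhost:20910/oraclediagent -odi_user SUPERVISOR -odi_password welcome1 `
        -log_level 5 -synchronous 1 -lp_status_throw_error 1
    Write-Host 'Load plan succeeded; starting the follow-up job.'
    # & .\run_followup_job.ps1   # hypothetical next step
}
catch {
    Write-Error "Load plan failed: $_"
    exit 1
}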

Note:


This script has been tested on PowerShell v2 on Windows Server 2008 against an ODI 11.1.1.6.0 standalone agent. Since the commands used are pretty basic, it should work in other environments with little or no modification.

Sample run:

  • I created a test load plan called LP_WAIT_3_MINS, which waits only 90 seconds before it errors out.
  • I ran the script with the parameters -synchronous 1 and -lp_status_throw_error 1.
The script waits for the load plan to complete and then throws an error statement.




Tuesday, September 6, 2016
Posted by Arjun Lagisetty

Use Case for "User Functions" in ODI.

This blog post has been in the cooking pot for a long time, almost half a decade, but a recent event brought my attention back to it.

We have a very complex interface with extensive logic in each mapped column, and that logic uses other columns which are themselves calculated.

Here are some of the requirements (these are not the actual requirements; they are toned down to ETL terms to make the point):

Requirements


1. Calculate the base price using a complex calculation encompassing 10 different columns:

CASE
  WHEN SOME_CONDITION THEN UNITPRC * UNITCOST * CONV_FACTOR
  WHEN SOME_OTHER_CONDITION THEN ANOTHER_FORMULA
  ...
END

2. Use the above base price to calculate 10 other measures, like royalty, etc.

Though these calculations are typically not done in BI, we needed to do them as projections for booked sales.


Take 1:

One design approach was to write the CASE statement for the base price directly into the mapping and, wherever the base price was used in another mapping, copy/paste the base price formula and apply the additional logic on top of it.

Here was my first red flag: I am always wary of copy/pasting. If I ever encounter a need for it, I re-evaluate my design.


Take 2:

We needed to reuse the base price in other calculations, so we took the most basic measure, i.e. the base price, and encapsulated it in a user function. This function did not need any parameters:

ufnc_baseprice()
{CASE
  WHEN SOME_CONDITION THEN UNITPRC * UNITCOST * CONV_FACTOR
  WHEN SOME_OTHER_CONDITION THEN ANOTHER_FORMULA
  ...
END}

Note: When you use a user function in a mapping, ODI replaces the function call with the actual text of the function at code-generation time.

Map base_price to ufnc_baseprice().

Call ufnc_baseprice() whenever you need the base_price measure.

Example:


Map Royalty to ufnc_baseprice() * royalty_rate.

This lends itself to clean, readable code and encapsulates the logic in one place.



Posted by Arjun Lagisetty

Query to get all the functions and their implementations

Below is a query to get the list of all user functions and their implementations from the work repository. We had to write it because the code we inherited used functions within functions, and we wanted to find out which functions used other functions inside them, so we could understand the impact of changing a function on the overall code base.



SELECT SNP_UFUNC.I_UFUNC
,SNP_UFUNC.UFUNC_NAME AS "FUNCTION_NAME"
,SNP_UFUNC.GROUP_NAME AS "GROUP"
,SNP_TXT_HEADER_DEF.FULL_TEXT AS "DEFINITION"
,SNP_TXT_HEADER_DESC.FULL_TEXT AS "DESCRIPTION"
,SNP_TXT_HEADER_FUNC_IMPL.FULL_TEXT AS "IMPLEMENTATION"
,SNP_UFUNC_TECHNO.TECH_INT_NAME AS "TECH_NAME"
FROM SNP_UFUNC
LEFT OUTER JOIN SNP_TXT_HEADER AS SNP_TXT_HEADER_DEF ON SNP_TXT_HEADER_DEF.I_TXT = SNP_UFUNC.I_TXT_DEF
LEFT OUTER JOIN SNP_TXT_HEADER AS SNP_TXT_HEADER_DESC ON SNP_TXT_HEADER_DESC.I_TXT = SNP_UFUNC.I_TXT_DESC
LEFT OUTER JOIN SNP_UFUNC_IMPL ON SNP_UFUNC_IMPL.I_UFUNC = SNP_UFUNC.I_UFUNC
LEFT OUTER JOIN SNP_TXT_HEADER AS SNP_TXT_HEADER_FUNC_IMPL ON SNP_TXT_HEADER_FUNC_IMPL.I_TXT = SNP_UFUNC_IMPL.I_TXT_IMPL
LEFT OUTER JOIN SNP_UFUNC_TECHNO ON SNP_UFUNC_TECHNO.I_UFUNC_IMPL = SNP_UFUNC_IMPL.I_UFUNC_IMPL
WHERE SNP_UFUNC_TECHNO.TECH_INT_NAME LIKE '%' -- CHANGE THIS IF YOU WANT THE DEFINITIONS FOR A SPECIFIC TECHNOLOGY ONLY
ORDER BY SNP_UFUNC.UFUNC_NAME

Monday, May 16, 2016
Posted by Arjun Lagisetty

Documenting Interfaces.



Way before the time of the Quick Edit tab in ODI, it was painful to get an overview of an interface. The questions and tasks listed below took time to work through:

  • What data sources are being used?
  • What are the joins?
  • What are the mapped columns?
  • Generate SQL for extracting the required details from the tables.
  • Format the SQL and print it out to a report.

Fumbling through the mapping tab was clumsy, and it took many clicks to get these answers. The Quick Edit tab did solve a lot of these issues: it does not have to render the whole mapping as a diagram, which saves a lot of time.

However, if I want to document an interface, it still takes an awful lot of time, and there is no way to export the interface in a text-readable format. Since ODI stores all the data in work repository tables, I can easily generate the documentation from these tables.

Below is my attempt to document interfaces from the interface tables. In the next post we will explore how to document procedures, packages, and variables, and export it all to human-readable ASCII text files. This gives us the ability to do simple searches within the code, for example, to find where a particular function is used in the interface(s).

I created some Python scripts to print the interfaces out to text files, but your approach can be different. I will attempt to clean up my Python scripts before I upload them, but here are the queries for your pleasure (a sketch of exporting the results to a text file follows the queries).
--Get list of sources for an interface
SELECT DISTINCT SNP_SOURCE_TAB.LSCHEMA_NAME || '.' || SNP_SOURCE_TAB.TABLE_NAME AS SOURCE_TABLE
FROM SNP_PROJECT
LEFT OUTER JOIN SNP_FOLDER
ON SNP_FOLDER.I_PROJECT = SNP_PROJECT.I_PROJECT
LEFT OUTER JOIN SNP_POP
ON SNP_POP.I_FOLDER = SNP_FOLDER.I_FOLDER
LEFT OUTER JOIN SNP_SOURCE_TAB SNP_SOURCE_TAB
ON SNP_SOURCE_TAB.I_POP = SNP_POP.I_POP
WHERE SNP_POP.POP_NAME = "ENTER_INTERFACE_NAME"
AND SNP_FOLDER.FOLDER_NAME = "ENTER_FOLDER_NAME"
AND SNP_PROJECT.PROJECT_NAME = "ENTER_PROJECT_NAME";


-- Get all the filters on an interface
SELECT DISTINCT S_TXT.TXT AS FILTER_TXT
FROM SNP_PROJECT
LEFT OUTER JOIN SNP_FOLDER
ON SNP_FOLDER.I_PROJECT = SNP_PROJECT.I_PROJECT
LEFT OUTER JOIN SNP_POP
ON SNP_POP.I_FOLDER = SNP_FOLDER.I_FOLDER
LEFT OUTER JOIN SNP_POP_CLAUSE
ON SNP_POP_CLAUSE.I_POP = SNP_POP.I_POP
LEFT OUTER JOIN SNP_TXT S_TXT
ON S_TXT.I_TXT = SNP_POP_CLAUSE.I_TXT_SQL
WHERE SNP_POP_CLAUSE.CLAUSE_TYPE = 3
AND SNP_POP.POP_NAME = "ENTER_INTERFACE_NAME"
AND SNP_FOLDER.FOLDER_NAME = "ENTER_FOLDER_NAME"
AND SNP_PROJECT.PROJECT_NAME = "ENTER_PROJECT_NAME";



-- Get all active mappings on the interface
SELECT SNP_POP.POP_NAME AS INTF_NAME
,SNP_POP.TABLE_NAME AS TARGET_TABLE_NAME
,SNP_DATA_SET.DS_NAME AS DATA_SET
,SNP_POP_COL.COL_NAME AS TARGET_COLUMN
,CASE

WHEN SNP_POP_COL.EXE_DB = 'T'
THEN SNP_TXT_HEADER2.FULL_TEXT
ELSE SNP_TXT_HEADER.FULL_TEXT
END AS MAPPING
,CASE

WHEN SNP_POP_COL.IND_INS = 1
THEN 'True'
ELSE 'False'
END AS INSERT_MAP
,CASE

WHEN SNP_POP_COL.IND_UPD = 1
THEN 'True'
ELSE 'False'
END AS UPDATE_MAP
FROM SNP_POP
LEFT OUTER JOIN SNP_POP_COL ON SNP_POP.I_POP = SNP_POP_COL.I_POP
LEFT OUTER JOIN SNP_POP_MAPPING ON SNP_POP_COL.I_POP_COL = SNP_POP_MAPPING.I_POP_COL
LEFT OUTER JOIN SNP_TXT_HEADER ON SNP_POP_MAPPING.I_TXT_MAP = SNP_TXT_HEADER.I_TXT
LEFT OUTER JOIN SNP_TXT_HEADER SNP_TXT_HEADER2 ON SNP_TXT_HEADER2.I_TXT = SNP_POP_COL.I_TXT_MAP
LEFT OUTER JOIN SNP_DATA_SET ON SNP_POP.I_POP = SNP_DATA_SET.I_POP
AND SNP_POP_MAPPING.I_DATA_SET = SNP_DATA_SET.I_DATA_SET
WHERE SNP_POP_COL.IND_ENABLE = 1
AND SNP_POP.POP_NAME = 'ENTER_INTERFACE_NAME'
ORDER BY DS_NAME
,SNP_POP_COL.COL_NAME;
-- Get the target model for the interface
SELECT SNP_MODEL.LSCHEMA_NAME || '.' || SNP_TABLE.TABLE_NAME AS "TARGET_TABLE_NAME"
FROM SNP_PROJECT
JOIN SNP_FOLDER
ON SNP_FOLDER.I_PROJECT = SNP_PROJECT.I_PROJECT
JOIN SNP_POP
ON SNP_POP.I_FOLDER = SNP_FOLDER.I_FOLDER
JOIN SNP_MODEL
ON SNP_MODEL.I_MOD = SNP_POP.I_MOD
JOIN SNP_TABLE
ON SNP_POP.I_TABLE = SNP_TABLE.I_TABLE
WHERE SNP_POP.POP_NAME = "ENTER_INTERFACE_NAME"
AND SNP_FOLDER.FOLDER_NAME = "ENTER_FOLDER_NAME"
AND SNP_PROJECT.PROJECT_NAME = "ENTER_PROJECT_NAME";

-- Get all the joins on the interface
SELECT DISTINCT PC.I_TXT_SQL AS GRP
,T1.TABLE_NAME AS LEFT_TABLE
,T2.TABLE_NAME AS RIGHT_TABLE
,NVL2(PC.I_TABLE2, DECODE(PC.IND_OUTER1, 1, 'LEFT ') || DECODE(PC.IND_OUTER2, 1, 'RIGHT ') || DECODE(PC.IND_OUTER1 + PC.IND_OUTER2, 0, 'INNER ', 'OUTER ') || 'JOIN', 'FILTER') CLAUSE_TYPE
FROM SNP_POP_CLAUSE PC
,SNP_POP P
,SNP_TABLE T1
,SNP_TABLE T2
,SNP_SOURCE_TAB ST1
,SNP_SOURCE_TAB ST2
,SNP_PROJECT PRJ
,SNP_FOLDER FLD
WHERE 1 = 1
AND P.I_POP = PC.I_POP
AND ST1.I_SOURCE_TAB = PC.I_TABLE1
AND ST2.I_SOURCE_TAB = PC.I_TABLE2
AND ST1.I_TABLE = T1.I_TABLE
AND ST2.I_TABLE = T2.I_TABLE
AND PC.CLAUSE_TYPE = 1
AND P.I_FOLDER = FLD.I_FOLDER
AND FLD.I_PROJECT = PRJ.I_PROJECT
AND SNP_POP.POP_NAME = "ENTER_INTERFACE_NAME"
AND SNP_FOLDER.FOLDER_NAME = "ENTER_FOLDER_NAME"
AND SNP_PROJECT.PROJECT_NAME = "ENTER_PROJECT_NAME";
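As promised above, here is a minimal sketch of dumping one of these queries to a text file. It assumes the work repository sits on SQL Server and that the SqlServer module's Invoke-Sqlcmd cmdlet is available; the server, database, and file names are placeholders (for an Oracle-hosted repository, swap in an Oracle client instead).

# Placeholders throughout: REPO_HOST, ODI_WORKREP, and the file names.
$sql = Get-Content .\interface_sources.sql -Raw   # one of the queries above, saved to a file
Invoke-Sqlcmd -ServerInstance 'REPO_HOST' -Database 'ODI_WORKREP' -Query $sql |
    Format-Table -AutoSize |
    Out-String -Width 4096 |
    Set-Content .\interface_doc.txt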


Here is a comical picture of the interface icon, just for laughs.
Thursday, May 12, 2016
Posted by Arjun Lagisetty

How to debug ODI Studio installations

Did you ever have an Oracle Data Integrator Studio installation fail on you without any error info? Well, it happened to me, and I was perplexed for a while. Most of the time, installing Studio is a straightforward process. But what do you do if it ever goes awry? Where do you start?

Tip 1: 

Start the installer from a command prompt. When you simply double-click the installer, you cannot see the logs written to the console, so it helps to launch it from the command line. For all you cmd noobs out there, see this link:

http://www.howtogeek.com/209694/running-an-.exe-file-via-command-prompt/

Tip 2:

Understand how the Oracle Universal Installer works. See this link to understand OUI.
http://docs.oracle.com/cd/B28359_01/em.111/b31207/b_oui_appendix.htm#OUICG373

Quick Tip:

Look in this location for log files (the path is displayed in the console when you launch the installer from the command prompt):

C:\Users\<Username>\AppData\Local\Temp\OraInstall2015-05-04_11-39-33AM\




Thursday, April 7, 2016
Posted by Arjun Lagisetty

Understanding Star transformation and Bitmap Indexes

Bitmap indexes and star transformation of BI queries play a critical role in achieving good performance on a dimensionally modeled database, so as a BI/ETL developer it is important to understand both concepts, which is the subject of this blog post.
There is plenty of good work on the public web covering these concepts, so instead of rehashing it, I will point you to some very good resources. Though this may be common knowledge for someone coming from the old-school Oracle data warehousing world, many ODI developers are plunged into data warehouse development without any exposure to data warehousing principles or Oracle database knowledge. This blog post is a good starting point for them.

What are bitmap indexes, what role do they play in a data warehouse, and what are some best practices? This chapter from the Oracle documentation covers:
  • Something to look out for
  • Bitmap Index on Single Table
  • Bitmap Index for Join!
https://docs.oracle.com/cd/B28359_01/server.111/b28313/indexes.htm


The following blog post will help you understand what star transformation is and how bitmap indexes are used in star transformations:
https://blogs.oracle.com/optimizer/entry/star_transformation


Posted by Arjun Lagisetty

Encrypting Data in ODI Over the Wire.

For us data geeks, security is the least of our concerns; it's somewhere in the back of our minds. Scratch that, maybe it's not even there. When it comes to ODI administration, some admins out there think that security is just creating restrictive profiles for user roles in the Security Navigator. That's only part of the security solution. But have you ever considered security over the wire? We move tonnes of data over the network, where it is prone to sniffing and tapping. Most of our workloads happen behind the firewall in our data center, but in this cloud era, where we share the network with others, it is important to consider encrypting data over the wire.

In this blog post, we limit the discussion to connecting to Oracle databases using the JDBC thin driver.

Pre-requisites:

These features are only available in Oracle 11g R2. Your Oracle DBA has to enable Oracle Advanced Security on the server side.

Configuration:

  • Navigate to Topology.
  • Select the Oracle data server you want to add the properties to.
  • On the Properties tab, click Add a Property.
  • Specify a Key identifying this property. This key is case-sensitive.
  • Specify a Value for the property.
  • From the File menu, click Save.
You should set the following four properties to encrypt the client connection.
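Per the Oracle JDBC documentation linked below, the four thin-driver client properties are the following; the values shown are examples, so choose algorithms per your security policy.

oracle.net.encryption_client=REQUIRED
oracle.net.encryption_types_client=( AES256 )
oracle.net.crypto_checksum_client=REQUIRED
oracle.net.crypto_checksum_types_client=( SHA1 )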



Optionally, you can also set additional parameters to enable authentication services.

See these links for more details:
https://docs.oracle.com/database/121/DBSEG/asojbdc.htm#DBSEG9613
https://docs.sdbor.edu/oracle/db11gr2/network.112/e40393/asojbdc.htm#ASOAG9608


Using Thick Drivers:
How to enable encryption using thick drivers?
https://kb.berkeley.edu/page.php?id=23274
How to use thick OCI drivers in ODI?
http://blog.whitehorses.nl/2011/06/21/using-the-thick-oci-drivers-in-odi-11g-ide/
Monday, February 22, 2016
Posted by Arjun Lagisetty

Copyright © ODI Pundits - Oracle Data Integrator - Maintained by Arjun Lagisetty