Steps to create a new index in OBIA.
In the previous blog post we explored how to create a new index in OBIA and how to set the proper flex fields on the constraint object to enable proper index maintenance. You can find that blog post here.
After setting the parameters on the flex fields, we have to load the table W_ETL_ALL_INDEXES. This can be done using the following scenario.
This step can be skipped if you are running a domain load plan.
BI Apps Project > Components > DW > Oracle > Load Index Metadata > Load Index Metadata > EXEC_LOAD_INDEX_METADATA Version 001
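If you prefer to launch it from a package rather than from the Designer tree, a minimal sketch using the OdiStartScen tool (the GLOBAL context code and the synchronous mode are assumptions; use your own context):

OdiStartScen "-SCEN_NAME=EXEC_LOAD_INDEX_METADATA" "-SCEN_VERSION=001" "-CONTEXT=GLOBAL" "-SYNC_MODE=1"

SYNC_MODE=1 runs the scenario synchronously, so a calling package waits for the metadata load to finish before moving on.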
How to purge a year's worth of ODI logs?
Problem:
What happens when you realize that you need to clear years' worth of data from the ODI work repository?
What's wrong with just purging via the purge option in Operator?
You are most likely to run out of undo tablespace with the following error:
ORA-30036: unable to extend segment by 8 in undo tablespace '<Undo Tablespace name>'
Solution:
Delete the logs in small batches, one day of logs at a time, using OdiTools and a procedure.
1. Create a procedure with the following "Command on Target" code:
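A hedged sketch of such a step: with the Technology set to ODI Tools and two hypothetical package variables, #P_FROM_DATE and #P_TO_DATE, that a surrounding package loop advances one day per iteration, the command could call the OdiPurgeLog tool:

OdiPurgeLog "-FROMDATE=#P_FROM_DATE" "-TODATE=#P_TO_DATE"

Check the ODI Tools reference for your release for the exact date format OdiPurgeLog expects; looping the package until the variables reach the current date clears the backlog one small batch at a time.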
Creating a MultiExport list (.lst) file from the ODI repository with SQL
Use the following code to create a .lst file from the ODI repository database. The file can be used for backup or migration.
This query returns the list of all project folders whose parent folder name starts with 'CUSTOM'. 1100 is the object type code for folder objects in the SNP_FOLDER table.
You can modify the SQL query to suit your needs; the spooling options deliver a clean file which can be used directly in the multi-export interface.
SET TRIM ON
SET HEADING ON
SET PAGESIZE 660
SET NEWPAGE NONE
SET LINESIZE 40
SET TRIMSPOOL ON
SET FEEDBACK OFF
SET UNDERLINE OFF
SPOOL C:\TEMP\MULTEXPORTLIST.lst
SELECT 1100 || ',' || CHILD_FOLDER.I_FOLDER AS DESIGNER
FROM OBIA_BIA_ODIREPO.SNP_FOLDER CHILD_FOLDER
JOIN OBIA_BIA_ODIREPO.SNP_FOLDER PAR_FOLDER
ON PAR_FOLDER.I_FOLDER = CHILD_FOLDER.PAR_I_FOLDER
WHERE PAR_FOLDER.FOLDER_NAME LIKE 'CUSTOM%';
SPOOL OFF;
SQL to get last analyzed date and row counts of multiple tables.
When we have a performance problem, we often have to retrieve certain details, such as row counts and last-analyzed dates, for multiple tables across multiple databases. In an effort to simplify that, I created the following query, which spools the details below to the file c:\temp\countlist.txt.
If you want to capture the same details for multiple databases, run this SQL on each database; because the spool appends, you will see all the results in one file.
- Row count of each table
- Last analyzed date of each table
- Number of Indexes on each table
- List of all index names on the table
SPOOL C:\temp\countlist.txt APPEND;
SET PAGESIZE 1000;
BREAK ON DATABASE;
SELECT TRIM (SYS_CONTEXT ('userenv', 'instance_name')) AS DATABASE,
       TRIM (OWNER) AS OWNER,
       TRIM (TABLE_NAME) AS TABLE_NAME,
       LAST_ANALYZED,
       TO_NUMBER (
          EXTRACTVALUE (
             XMLTYPE (
                DBMS_XMLGEN.GETXML (
                   'select count(*) c from ' || OWNER || '.' || TABLE_NAME)),
             '/ROWSET/ROW/C'))
          AS ROW_COUNTS,
       TO_NUMBER (
          EXTRACTVALUE (
             XMLTYPE (
                DBMS_XMLGEN.GETXML (
                   'SELECT COUNT(1) C from ALL_INDEXES WHERE TABLE_OWNER = '''
                   || OWNER || ''' AND TABLE_NAME= ''' || TABLE_NAME || '''')),
             '/ROWSET/ROW/C'))
          AS NUM_INDEXES,
       EXTRACT (
          XMLTYPE (
             DBMS_XMLGEN.GETXML (
                'SELECT INDEX_NAME ||''; '' C from ALL_INDEXES WHERE TABLE_OWNER = '''
                || OWNER || ''' AND TABLE_NAME= ''' || TABLE_NAME || '''')),
          '/ROWSET/ROW/C/text()')
          AS INDEX_NAMES
  FROM ALL_TABLES
 WHERE TABLE_NAME IN ('TAB1', 'TAB2', 'TAB3', 'TAB4')
 ORDER BY TABLE_NAME;
SPOOL OFF;
How to log in to a remote "Cloud" based ODI repository over SSH port forwarding?
Scenario:
When a database is hosted on a public cloud, security policies often prevent admins from exposing the direct DB port (1521) to the world. In this scenario, the admin asks Oracle developers to use SSH tunneling, a.k.a. port forwarding, to connect to the database from ODI Studio. So, how do we do this?
High-Level Steps:
- You need to have ssh access to the database server in addition to DB login credentials. (One time)
- Download and install putty (One time)
- Create a putty profile with a port forward, called mysshprofile (One time)
- Launch a putty session with mysshprofile (Before every login)
- Create a new connection in ODI Studio pointing to the local port (One time)
- Launch ODI Studio and connect to the remote database.
The Nitty Gritty:
How do you download and install putty?
You can find the answer here: https://www.htpcbeginner.com/install-putty-on-windows/
How does port forwarding work?
You can find the answer here: http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html
How do you connect to the remote database using port forwarding?
You can find the answer here: http://www.akadia.com/services/ssh_putty.html
Assumptions:
Remote database host IP: 192.168.205.205
SSH Port: 22
Unused local port on localhost: 9000
Remote database port: 1521
Remote database sid: orclsid
Connecting to remote server, creating a tunnel and creating a putty profile:
- Add the Source Port: a local port that listens for traffic and forwards it to the remote DB port. It should not be in use by any other program; I used port 9000. Add the Destination in Remote IP:DB Port format (see assumptions).
- Go back to the Session tab in the left menu and enter the profile name "mysshprofile" in the Saved Sessions field.
- Click Save, then Open.
- You will now be prompted for a username and password: enter the SSH login credentials you got from the DBA.
- Completing these steps creates a tunnel into the remote server. Please note: if you close the SSH session, you close the tunnel.
- Subsequently, you can connect to the database using the putty saved session; you will not need to repeat steps 1-5.
Connecting to ODI repository from ODI Studio:
Creating a connection to a cloud repository involves very similar steps to a regular database connection; the major difference is the JDBC URL.
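With the tunnel from the assumptions above in place, ODI Studio connects to localhost on the forwarded port rather than to the remote host. A sketch of the repository JDBC URL built from those assumptions (save the connection under a name such as MyRemoteRepConnection, used below):

jdbc:oracle:thin:@localhost:9000:orclsid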
Putting it all together:
Once the setup is done, you should:
- Create the SSH connection: open putty, load the saved session, and log in to the remote server.
- Connect to the ODI repo using MyRemoteRepConnection.
In the next installment, we will talk about how to automate the putty connection using plink.
How to add and maintain custom indexes in OBIA using ODI?
If you look at the ODI code base for OBIA (all 11.x versions), around every SDE or SIL mapping step you will see two sets of steps:
- Initialize steps (Before the map is run)
- Indexes are dropped
- Finalize steps (After the map is run)
- Indexes are created
- Stats are gathered
When we add a new index to a table in the DW schema, or when we delete an index, we do not have to add CREATE INDEX, DROP INDEX, or GATHER STATS steps to the ODI code base. There are hooks in the code that take care of these steps irrespective of the number of indexes. Index maintenance can be set up with a couple of simple steps in Designer, without writing a single line of code.
Let's start by understanding the steps involved in adding an index to a table.
Step 1: Create indexes on the database
Step 2: Reverse-engineer the model (typically named Oracle BI Applications), using the customized reverse-engineering of the OBIA-specific RKM
Step 3: Modify the options on the index.
Step 4: Execute the EXEC_LOAD_INDEX_METADATA scenario under BI Apps Project > Components > DW > Oracle > Load Index Metadata > Load Index Metadata > EXEC_LOAD_INDEX_METADATA Version 001 (to load the W_ETL_ALL_INDEXES table). If you are running a domain load plan, this step can be skipped.
There are three main changes to be made to the index (constraint object):
- Choose the "Key or Index Type" under the "Description" tab:
- Not Unique Index: if the index is not unique
- Primary Key: if the index is the primary key
- Alternate Key: if the index is unique but not the primary key
- Set "OBI Index Type" under the "Flexfields" tab
- Set "OBI Bitmap Index" under the "Flexfields" tab
Most indexes fall into the following categories. Let's evaluate which options to choose, per Step 3, for each type of index:
- Primary key index (BTREE)
- "Key or Index Type": Primary Key
- "OBI Index Type": ETL
- "OBI Bitmap Index": N
- Alternate key index (BTREE)
- "Key or Index Type": Alternate Key
- "OBI Index Type": ETL
- "OBI Bitmap Index": N
- Non-Unique Index (BTREE)
- "Key or Index Type": Not Unique Index
- "OBI Index Type": ETL
- "OBI Bitmap Index": N
- Non-Unique Index (BITMAP)
- "Key or Index Type": Not Unique Index
- "OBI Index Type": QUERY
- "OBI Bitmap Index": Y
Use Option values within Substitution API calls (Hint: Substitution Passes) [Nested odiRef Calls]
Here is a simple question: how do you use Option values within a substitution API call?
In our design, we created a procedure that took a table name as an option value (TABLE_NAME) and built a DML statement around it. For this procedure to work, we needed to use the odiRef.getOption() function within the odiRef.getTableName() function.
Here is my first take at the answer. HINT: This does not work.
<%= odiRef.getTableName("<%=odiRef.getOption("TABLE_NAME")%>") %>
Why does it not work?
As some of you are aware, ODI's procedure language is based on Java BeanShell. It is interpreted, i.e. the code is executed top-down and left to right within a pass. So the first function the interpreter encounters, getTableName(), executes first; getOption() has not been called yet at that point. Instead of receiving the option value, getTableName() receives the entire string odiRef.getOption("TABLE_NAME") as its input.
In most languages, code is parsed only once. ODI's code generator, by contrast, takes multiple passes through the code: <%...%> tags are resolved first, then <?...?>, then <@...@>. The trick is therefore to put the inner getOption() call in an earlier pass than the outer call, so the option value is already a plain string by the time getTableName() runs. To understand substitution passes in detail, please refer to the ODI cookbook, chapter 5.
To cut to the chase, here is the final answer for the above problem (inner call in the earlier <%...%> pass, outer call in the later <?...?> pass):
<?=odiRef.getTableName("<%=odiRef.getOption("TABLE_NAME")%>")?>
Also, refer to this blog post by Uli Bethke at Sonra for another example on substitution passes.
How to extract host names from a TNS file?
To extract host names, we use a regular expression. You can use the regular-expression search in editors like Notepad++, or use http://regexr.com/.
Regex:
(HOST\s?=\s?)(.*?)(\))
Extract the second group from the matches using:
$2\n
The easiest way to use the above regex is http://regexr.com/:
- Copy-paste the tns file into the text panel
- Copy-paste the expression into the expression editor
- Enable the global flag under "flags" at the top right to match the entire file
- In Tools > List, type the group extractor ($2\n)
Regex is a very important tool in your toolset for complex string manipulation.
Example regex and file here:
http://regexr.com/3esbi
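For instance, given a hypothetical tnsnames.ora entry like this:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )

the regex matches "HOST = dbhost01.example.com)", and the $2\n extractor in the List tool emits just dbhost01.example.com.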
How to name a context? Hint: Not GLOBAL
Most of the implementations I have seen do not give an optimal name to the context. Yes, the default context is GLOBAL, and you might think: what's wrong with it?
There is nothing wrong with it if all you have is one set of source and target: the dev source database loads into the dev target database. For production, I recommend having only one context, and that it be GLOBAL.
In DEV, we can start with one context while we are loading from one dev source into one target. As time moves on, you might receive a request to load from a hotfix environment which contains data that is not present in the dev source database. I have seen a couple of things happen in this instance to load data from the new source:
- Change the existing data server connection details to point to the new source - very simple (fine for a one-time load)
- Create a new data server and update the logical-to-physical mapping (on the GLOBAL context, which then has to be reverted)
For one-time events, both approaches are OK. But slowly you start getting requests to load from the new data source again and again, and you find yourself repeating these procedures over and over.
My recommendation in this instance would be to create a new context named as follows:
HOTFIX_DEV
i.e. <SRC>_<TARGET>, where:
<SRC> --> code of the data source you are loading from
<TARGET> --> code of the data source you are loading into
The data source code can be any text which helps you identify that group of data sources.
When you name the contexts in the above-mentioned manner and perform the logical and physical mapping accordingly, you have a couple of advantages:
- You can react to requests to load from multiple source instances instantly, without scope for error.
- You have a history of which loads were performed from which sources. E.g., your testing team might ask: "When was a full load done from this data source?" You can preserve the history to answer such questions based on context.
The ideal scenario might be one source per target, but we all know we live far from an ideal world, and this naming might make your life a little easier.
Toad Shortcuts & Tips
Most Useful Shortcuts:

Action | Shortcut | Description
Block Comment | CTRL+B | Comment the selected lines
Uncomment | CTRL+SHIFT+B | Uncomment the selected lines
Format Code | CTRL+SHIFT+F | Format and indent the code
Describe | F4 or ALT+Enter | Opens a new window describing the table
Lower case | CTRL+L | Converts the text to lowercase
Describe Select Query | CTRL+F9 | Opens a new window describing the select statement
Execute Statement | F9 | Runs the current statement and shows results in the Data Grid
Execute Script | F5 | Runs the editor contents as a script and shows results in the Script Output tab
SQL Recall | F8 | Opens a panel showing all the SQLs run
Previous SQL Recall statement | ALT+Up | Acts like bash-shell history navigation for SQL commands
Next SQL Recall statement | ALT+Down | Acts like bash-shell history navigation for SQL commands
Next Navigator Node | CTRL+PgDn | Goes to the next statement block in the editor
Previous Navigator Node | CTRL+PgUp | Goes to the previous statement block in the editor
Quick Template | CTRL+Space | Opens a prompt in the editor with templates for code blocks like BEGIN/END, FOR loop, etc.

Helpful Views:

View | Description
View > Code Snippets | Provides sample code snippets like hints, date formats, and conversion functions (a quick alternative to googling for syntax)
View > Object Palette | Used for building queries by dragging object and column names into the editor
Code Back up Strategy for Development Work Repositories
Problem:
Have you ever accidentally changed something in an interface and forgotten what you changed? You wish you had a copy of yesterday's interface so you could restore it, or at least compare the current interface with the previous day's copy.
ODI developers typically use the internal versioning system, inconsistently, to save versions of the code. All these versions are stored in the master repository in BLOB format and litter the repository after a while. The method is also unreliable: a developer might version an interface but not the data stores used in that interface. Making copies of objects is even more disastrous; it makes the repository horribly unreadable. And relying on DB backups means the restore process is all or nothing: you cannot individually choose which objects to restore. Given these challenges, we designed a process that backs up the code nightly without developer intervention.
Solution:
I developed a package to export, object by object, the following object types in the development work repository. The code is hosted here. It is scheduled every night, backs the objects up to an archive file, and retains the archives for 28 days.
Note:
- The code contains some hard-coded directory values. I have yet to make it more parameter-driven; due to lack of time, I am publishing it as is. You need to make changes to the procedure named "PROC_OBJECT_BACKUP".
- The procedure is written for Windows and uses 7-Zip to archive.
- Objects are exported in the format <Prefix>_<Object_ID>.xml. (Using object names in the file names caused errors.)
- It uses the Sunopsis Memory Engine's default physical schema.
Object Type | Child Objects (Yes/No) | Prefix
PROJECT | No child objects | PROJ
INTERFACES | With child objects | POP
PROCEDURES | With child objects | TRT
PACKAGES | No child objects | PACK
VARIABLES | With child objects | VAR
USER FUNCTIONS | With child objects | UFUNC
MODELS | No child objects | MOD
SUB-MODELS | No child objects | SMOD
DATASTORE | With child objects | TAB
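As a reference for what such a procedure does per object, ODI's OdiExportObject tool takes a class name and an object ID. A hedged sketch for one interface: #P_OBJECT_ID is a hypothetical variable fed from a repository query, the directory is illustrative, and the child-objects flag should be verified against your release's Tools reference:

OdiExportObject "-CLASS_NAME=SnpPop" "-I_OBJECT=#P_OBJECT_ID" "-EXPORT_DIR=C:\odi_backup" "-FILE_NAME=POP_#P_OBJECT_ID.xml" "-FORCE_OVERWRITE=yes" "-RECURSIVE_EXPORT=yes"

The prefixes in the table above mirror the repository class and table names (SnpPop/SNP_POP for interfaces, SnpTrt/SNP_TRT for procedures, and so on), which keeps the exported file names traceable back to the repository.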
Potential uses:
- This process can be used to restore the code if a developer accidentally corrupts it.
- We can modify the process to extract only changed objects nightly and check them into a version control system.
- We can modify the process to extract objects by tag, for export and migration to another environment.
YAJSW: Changing ODI params
If you installed the ODI agent as a Windows service on your host using YAJSW, as documented in this blog post on Oracle:
https://blogs.oracle.com/dataintegration/entry/running_odi_11gr1_standalone_a
then to change the ODI params, follow the steps below. Changing the odiparams.bat file will have no effect.
- Stop the service
- Make a copy of the %YAJSW%/conf/wrapper.conf file used to create this service.
- Edit %YAJSW%/conf/wrapper.conf with the required changes.
- Restart the service via Windows Services.
Verify the change using the Process Explorer tool from Sysinternals.
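For example, the JVM memory settings that odiparams.bat would normally drive end up in wrapper.conf as wrapper.java.additional.<n> entries. An illustrative (hypothetical) fragment; the actual property numbers must stay sequential and unique within your file:

# JVM arguments passed to the wrapped ODI agent
wrapper.java.additional.10 = -Xms512m
wrapper.java.additional.11 = -Xmx1024m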
YAPSSSTB: Yet Another PowerShell Script for SQL Server Table Backups
Thanks to my very descriptive title, I can skip the obligatory introduction.
Requirement:
Back up snapshot fact tables after every snapshot load.
Archive the backups with timestamped filenames.
Solution:
Download the script and run the following command
.\backup_sql_tables_ps.ps1 -ServerInstance INSTANCE_NAME -Database DATABASE_NAME -ExportPath C:\temp\ -TableNames DATABASE_NAME.schema.TABLE1,DATABASE_NAME.schema.TABLE2,DATABASE_NAME.schema.TABLE3
This script uses BCP to back up SQL Server objects.
It should be run on a machine where SQL Server is installed, with a user that has privileges and access to the BCP utility provided with SQL Server.
It needs 7-Zip installed in $env:ProgramFiles\7-Zip\.
It creates folders named MMddyyyyHHmmsdata for data files.
It creates folders named MMddyyyyHHmmserror for error and info detail files in the archive.
Each table's data is backed up in a file named TableName.dat. (If you are trying to back up two tables with the same name from different schemas, this might not work for you.)
If an archive name is not provided, it creates an archive with the timestamp name MMddyyyyHHmms.7z.
ODI Specific:
This command can be called from ODI if you have ODI and SQL Server installed on the same machine, and it should work without breaking a sweat.
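A hypothetical ODI procedure step wrapping the same call with the OdiOSCommand tool (the script path and log file names are illustrative):

OdiOSCommand "-OUT_FILE=C:\temp\table_backup.log" "-ERR_FILE=C:\temp\table_backup.err"
powershell.exe -ExecutionPolicy Bypass -File C:\scripts\backup_sql_tables_ps.ps1 -ServerInstance INSTANCE_NAME -Database DATABASE_NAME -ExportPath C:\temp\ -TableNames DATABASE_NAME.schema.TABLE1

Capturing stdout and stderr to files makes the BCP output visible from the Operator log when the step is scheduled after the snapshot load.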