Sending Cube Data into a Database

Create a new Turbo Integrator Process

For now we won’t set the data source for the TI process, because the data source will be a Cube View that we create as the first step (in the Prolog procedure).
So the first task is to create the view and assign dimension subsets that will filter the rows of the data source we will be sending into the destination system. An example of this, which we will actually use, is filtering the Scenario dimension to only Budget.
So click on the Advanced tab within the Turbo Integrator editor and then the Prolog sub-tab.

Code to Create a Blank View:

CubeName = 'SalesCube';
ViewName = 'Writeback View of ' | CubeName;
SubName = ViewName;
IF( ViewExists( CubeName, ViewName ) = 1 );
  ViewDestroy( CubeName, ViewName );
ENDIF;
ViewCreate( CubeName, ViewName );

Code to Create a Subset, Insert an Element into it and Save it to the view:

DimName = 'actvsbud';
IF( SubsetExists( DimName, SubName ) = 1 );
  SubsetDestroy( DimName, SubName );
ENDIF;
SubsetCreate( DimName, SubName );
SubsetElementInsert( DimName, SubName, 'Budget', 1 );
ViewSubsetAssign( CubeName, ViewName, DimName, SubName );
Due to the structure of my destination table, I will also filter this view to show only sales volumes (units): wherever there is a sales volume there is also revenue and cost, and I only wish to write a single SQL 'INSERT' statement per row of my data source. This matters because the destination table has separate columns for Revenue, Cost and Volume (as opposed to a separate row per measure).

Code to Create a Subset, Insert an Element into it and Save it to the view:

DimName = 'account1';
IF( SubsetExists( DimName, SubName ) = 1 );
  SubsetDestroy( DimName, SubName );
ENDIF;
SubsetCreate( DimName, SubName );
# 'Units' is assumed to be the sales-volume element; adjust to your model
SubsetElementInsert( DimName, SubName, 'Units', 1 );
ViewSubsetAssign( CubeName, ViewName, DimName, SubName );
Now that we have the code which creates a view of the cube and filters it to the Budget scenario and the Units measure, we need to save this TI process and execute it. This creates the view so that we can use it as a data source.

Setting the Data Source

  1. Click on the Data Source tab within the Turbo Integrator editor and set the Type to Cube View.
  2. Select your cube and view.
  3. Click 'Preview' for good measure and check that the filter (subset) you assigned has taken effect. If it hasn't, you have made a mistake in the code.
  4. Click on the 'Variables' tab. You will be prompted to update the variables or leave them as they are.
  5. Choose "Keep Derived Variables Only".
  6. You will now see a list of the fields/columns of the view you made earlier. You can name each field as you want.
  7. Change the 'Contents' column of each field/variable to 'Other'.
You have now defined the source data for the process; the next step is to prepare your destination. For this tutorial I will be sending the data into a MySQL database via a DSN ODBC connection. Note: a DSN ODBC connection is a must for this process; you cannot use an ADO connection string. I won't cover setting up a DSN ODBC connection as there are thousands of tutorials covering this already.
To open a connection to the output database, and to close that connection when we are done with it, we will use the two functions ODBCOpen and ODBCClose.
Within the Prolog procedure, add the ODBCOpen call with the DSN name and credentials. Within the Epilog, close the connection.
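
As a sketch, assuming the DSN is named 'mysql_out' (the DSN name and credentials below are placeholders for your own):

# Prolog
ODBCOpen( 'mysql_out', 'myuser', 'mypassword' );

# Epilog
ODBCClose( 'mysql_out' );
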
Now we can write the code which first retrieves the cost and revenue of the sales, then creates an SQL statement to push these into the destination table, and finally uses the ODBCOutput function to execute that statement against the destination database. This code goes in the Data procedure, which runs once per row of the source view.
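
A minimal sketch of the Data procedure, assuming a destination table called sales_budget and view variables named vPeriod, vRegion and vValue; these names, the 'Revenue'/'Cost' element names, and the dimension order in CellGetN are all placeholders you must adapt to your own cube:

# Data procedure - runs once per row of the source view.
# The row carries the Units value; fetch Revenue and Cost for the
# same intersection of the cube.
nVolume  = vValue;
nRevenue = CellGetN( CubeName, 'Budget', vRegion, 'Revenue', vPeriod );
nCost    = CellGetN( CubeName, 'Budget', vRegion, 'Cost', vPeriod );
SQL = 'INSERT INTO sales_budget (period, region, volume, revenue, cost) VALUES ('''
  | vPeriod | ''', ''' | vRegion | ''', '
  | NumberToString( nVolume ) | ', '
  | NumberToString( nRevenue ) | ', '
  | NumberToString( nCost ) | ');';
ODBCOutput( 'mysql_out', SQL );

Note that CubeName is available here because it was assigned in the Prolog; variables set in earlier procedures persist for the rest of the process.
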
The result is that the contents of your view are transferred to the destination table.
As it stands, if we ran this process multiple times the data would be duplicated in the destination table. We can solve that by clearing the table before we start. While this is suitable for a tutorial, a blanket delete like this is rarely practical in a business environment.
To clear the table I will use an SQL 'DELETE FROM' statement, executed in the Prolog after connecting to the ODBC output database.
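
For example, in the Prolog, after the ODBCOpen call (the table and DSN names are the same placeholders as above):

# Clear the destination table so reruns do not duplicate rows
ODBCOutput( 'mysql_out', 'DELETE FROM sales_budget;' );
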

This concludes the tutorial. Please note that tables and cubes are hardly ever the same between two implementations, especially if done by different consultancies, freelance consultants, or even by the vendor. As a result, any ETL-style process will need to be adjusted to suit the structures holding the business data.
