SCN: Blog List - SAP HANA Developer Center

SAP HANA Self Learning as Never Before (Part 1) - the first lesson for startups to learn HANA


As a technology advisor to the startups in the SAP Startup Focus Program, I have the opportunity to work with innovative startups from many different areas. SAP Startup Focus is a 12-month global program for startups with big data, predictive analytics and/or real-time data decision solutions. We make SAP HANA available to the startup community and help eligible startups accelerate the development of their solutions. We also help startups with validated HANA solutions accelerate market traction.


I'm part of the second phase of the program, which we call the Development Accelerator, in which we help startups build a Minimum Viable Product (MVP) within a one-year period of free technical support. As a deeply technical, hands-on member of the team, my job is to help the startups solve all kinds of technical problems, advise on architecture designs and act as the last line of defense for resolving technical obstacles. I also own the technical thinking and the creation of technical content for startup education; the team I am with has run many prototyping workshops (one-day classroom trainings) to train engineers from startups. We have trained thousands of startups all over the world.


I have reviewed much of the existing educational content. Thanks to the SAP product and development teams for creating the amazing SAP HANA Interactive Education (SHINE); it is a solid start for developers new to SAP HANA. Click the link to learn more about SHINE.


The reason we found that we cannot just reuse SHINE is the diversity of the startups: many of them aren't in the enterprise world, and it is hard for them to understand the data model of an SAP EPM system. Besides, startups are geeks and they want something fun, so I decided to create something interesting and closer to the startup mindset. We have used this content to train thousands of startups, and the feedback we received was that they very much enjoyed it, so I finally decided to share it with you through a series of blogs. This is the first blog, in which I will cover the basics.


OK, let's get started. I do want to tell you that I evaluated many open datasets, including Twitter (which you may have seen in another of my blogs), LinkedIn data and some other datasets; eventually the CrunchBase data stood out because it is so close to what I want. For those who don't know the CrunchBase data yet, the short description is that it is a dataset about startups, investors, competitors, fundings and acquisitions, so you can imagine it is very close to a startup's daily life.


Data Model


CrunchBase is a free database of technology companies and start-ups operated by TechCrunch, which comprises around 500,000 data points profiling companies, people, investors, fundings and acquisitions. Below is the number of points for each entity type in CrunchBase:

1.jpg

 

CrunchBase itself doesn't compare companies, and there is no option to aggregate, calculate or discover the relationships between the various datasets. By loading the data into an in-memory database like SAP HANA and using its data modeling tools or embedded analysis algorithms, some very interesting questions like the ones below can be answered in real time (a hedged query sketch follows the list):

 

  • What kind of companies have more opportunities to be invested in or acquired?
  • Who are the likely competitors of a company?
  • What is the location distribution of companies that have received investments over 3 rounds?
  • What is the shortest or average time to IPO?
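
To make the third question concrete, here is a minimal query sketch. It assumes the CrunchBase dump has been loaded into two column tables, COMPANIES and FUNDING_ROUNDS, with the column names used below; these names are my own placeholders, not the actual schema used later in this series.

SELECT c.COUNTRY_CODE, c.CITY, COUNT(*) AS NUM_COMPANIES
FROM COMPANIES c
JOIN (
    -- companies with more than 3 funding rounds
    SELECT COMPANY_ID
    FROM FUNDING_ROUNDS
    GROUP BY COMPANY_ID
    HAVING COUNT(*) > 3
) f ON f.COMPANY_ID = c.COMPANY_ID
GROUP BY c.COUNTRY_CODE, c.CITY
ORDER BY NUM_COMPANIES DESC;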


The diagram below shows the entity relationships. Each company can have zero or more funding rounds, acquisitions, IPOs, people who work or have worked for the company, competitors, and offices. The financial organizations are usually the venture capitalists.


2.jpg


You can imagine there are many ways to use the data to find the insights behind the startup and investor community. But don't forget that our mission here is to use it to demonstrate HANA capabilities; here are some examples:

 

  • Modeling: Investment history model to aggregate all the funding records of each financial organization
  • SQLScript Procedures: Define proprietary algorithms to calculate startup ranks based on the fundings, plus competition landscape analysis (a hedged procedure sketch follows this list)
  • Text Analysis: Extract sentiment results of company related information
  • Predictive Analysis: Investor clustering
  • Geospatial Analysis: Funding and acquisition location distributions
  • Visualization: Using SAPUI5 for Mobile, CVOM charts to show funding, acquisition records
  • XS Engine(OData & XSJS): Declare OData services or XSJS services for data exposure to UI layer
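
As a small taste of the SQLScript item above, here is a minimal, hedged sketch of a ranking procedure. The FUNDING_ROUNDS table and its columns are my own placeholders, and the "rank" is deliberately naive; the real procedures will be covered in the follow-up blogs.

-- Hypothetical input table: FUNDING_ROUNDS(COMPANY_ID, RAISED_AMOUNT_USD)
CREATE PROCEDURE RANK_STARTUPS (
    OUT result TABLE (COMPANY_ID INTEGER, TOTAL_RAISED DECIMAL(18,2), ROUNDS BIGINT, STARTUP_RANK BIGINT)
)
LANGUAGE SQLSCRIPT READS SQL DATA AS
BEGIN
    -- aggregate the funding records per company
    funding = SELECT COMPANY_ID,
                     SUM(RAISED_AMOUNT_USD) AS TOTAL_RAISED,
                     COUNT(*) AS ROUNDS
              FROM FUNDING_ROUNDS
              GROUP BY COMPANY_ID;

    -- naive rank: order companies by the total amount raised
    result = SELECT COMPANY_ID, TOTAL_RAISED, ROUNDS,
                    ROW_NUMBER() OVER (ORDER BY TOTAL_RAISED DESC) AS STARTUP_RANK
             FROM :funding;
END;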


Applications


OK, now let's take a look at the applications I have created.


1. Startup Profile, Ranking and Funding Visualization. (Is Twitter still a startup? Maybe I should use another company as an example.)


3.png


2. Competition Analysis, algorithms implemented in SQLScript to find out the competitors


4.png


3. Global Startup Funding Heat-map, use SAP HANA Geospatial Engine and Google Maps as the client


6.png


4. Investor Clustering by K-means, use SAP HANA Predictive Analysis Library


5.png


5. Company Sentiment Ratings, use SAP HANA Text Analysis


7.png


6. Discover Startups and Investors, use most of SAP HANA Platform features


8.png


Investor Profile Page


9.png


Request to you


Please forgive me for not having enough time to cover all the details and paste the source code here in a single blog, but I will do it in my subsequent blogs. Please do let me know which one(s) you would like by commenting below, and I will try to bring your preferred one(s) online first.



Creating a C# Application Using SAP HANA and ADO.NET


Introduction

The following tutorial demonstrates how to create a .NET application that retrieves information from an SAP HANA database. ADO.NET is a component of the .NET Framework that links a database connection to a set of predefined classes and is primarily used to access and manipulate data inside a relational database.  This tutorial implements a sample application to show the different techniques and options available to a HANA user when writing a solution in C# and .NET.

 

The accompanying code for this project is available for download here.

 

Prerequisites

To create the sample application in this tutorial, you must have an available HANA database installed. You must also have the appropriate credentials to access the data you need in that system. If you do not have a database on premise, you can sign up for a free developer account using HANA Cloud Platform.

 

The mock data used for this demo was taken from the SAP HANA Interactive Education (SHINE) schema, which you can find and import via the following documentation. This schema is useful for testing HANA application logic without the need to worry about migrating existing data.

 

The tools used for this sample application were:

Microsoft Visual Studio 2012

HANA SPS09 Rev 91

Windows 8

.NET Framework 4.5

 

Using ADO.NET with SAP HANA

In order to create an application using SAP HANA with ADO.NET, you must add the HANA Driver to your project. This driver will be installed by the HANA Client if the machine has Visual Studio installed beforehand.

 

To add it to your project, right click on your project’s References folder in the Solution Explorer and select “Add Reference…” Search for the “Sap.Data.Hana for .NET 4.5” reference in the .NET tab and click OK. If you are using a different version of .NET, please select that one from the list instead.

 

1.png

 

When using any of the HANA ADO.NET classes, you also need to reference the "Sap.Data.Hana" namespace in the appropriate files.

Once you have your environment configured, you can use ADO.NET. For those familiar with ADO.NET, you can start working right away by simply replacing the prefix of the classes you use with “Hana”.

 

For example, the following equivalent classes are available:

 

HANA class           ODBC equivalent
HanaDataAdapter      OdbcDataAdapter
HanaDataReader       OdbcDataReader
HanaCommand          OdbcCommand
HanaConnection       OdbcConnection

 

For those unfamiliar with ADO.NET, I put together a simple demo application that uses some of the features available. Although there are many different ways to accomplish similar results, I chose to write as much C# code as possible, for the sake of learning something new.

 

Creating the Application

This sample application takes advantage of the product data and employee data within the SHINE database. There are two different tabs of information, displayed in different ways. The first tab will display the different products available in a grid view, with additional details populated when a row is clicked. The second tab will have a simple TreeView of the employee data sorted according to gender.

 

2.png

 

To begin creating the application, create a new Windows Forms Application in Visual Studio by going to File > New > Project and selecting ‘Windows Forms Application’. At this point you can follow the steps above to add the ADO.NET driver into your project.

 

Double click on the form to create a loading event listener.  To hide your credentials from users of the application, create a connection string in your project’s App.config file.

 

<connectionStrings>
  <add name="Hana" connectionString="Server=HOST:PORT;UserName=USER;Password=PASSWORD" providerName="Sap.Data.Hana" />
</connectionStrings>


Use port 3##15, where the ## refers to your HANA instance number. For example, a 00 instance would refer to port 30015.

Next, add the following code to connect to your HANA database in this event listener.

 

conn = new HanaConnection( System.Configuration.ConfigurationManager.ConnectionStrings["Hana"].ConnectionString);
conn.Open();

Declare the HanaConnection outside of your Form_Load handler so that you can access it later. We are going to leave the connection open so that we can access the product details without having to reconnect every time. For applications with multiple users, the connection should be closed whenever possible to limit the number of connections being made to the database at a given time.

 

In this example, I’ve also saved constants with the schema name and important SHINE table names along with the HanaConnection. This is a good practice for larger applications where the schema may change, and can also help shorten the SQL query code significantly.

 

using Sap.Data.Hana;

namespace HANADemo
{
    public partial class Form1 : Form
    {
        HanaConnection conn;
        const string SCHEMA = "SAP_HANA_DEMO";
        const string PRODUCTS_TABLE = "sap.hana.democontent.epm.data::EPM.MD.Products";
        const string PARTNER_TABLE = "sap.hana.democontent.epm.data::EPM.MD.BusinessPartner";
        const string TEXT_TABLE = "sap.hana.democontent.epm.data::EPM.Util.Texts";
        const string EMPLOYEE_TABLE = "sap.hana.democontent.epm.data::EPM.MD.Employees";

        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            conn = new HanaConnection(System.Configuration.ConfigurationManager.ConnectionStrings["Hana"].ConnectionString);
            conn.Open();
        }
    }
}

 

In the Designer view, drag and drop a TabControl into your form. You can also change the text on your tab to reflect the data that will be inside, in this case ‘Employees’ and ‘Products’. Next, add a DataGridView from the Data tab of your toolbox into your Product tab. Optionally change the name of this DataGridView to ‘productGridView’.

 

3.png

 

In the same Form_Load event handler, create a new HanaAdapter with the query you would like to use to populate the DataGridView.

 

HanaDataAdapter dataAdapter = new HanaDataAdapter(
"SELECT t.TEXT AS \"Name\", p.PRODUCTID as \"Product ID\", p.CATEGORY as \"Category\"" +
" FROM \"" + SCHEMA + "\".\"" + PRODUCTS_TABLE + "\" p INNER JOIN \"" + SCHEMA + "\".\"" + TEXT_TABLE + "\" t ON t.TEXTID = p.NAMEID " + "INNER JOIN \"" + SCHEMA + "\".\"" + PARTNER_TABLE + "\" bp ON p.\"SUPPLIERID.PARTNERID\" = bp.PARTNERID", conn);

Create a new DataTable and use your adapter to fill the table.

 

DataTable testTable = new DataTable();
dataAdapter.Fill(testTable);

Finally, set the DataSource of your DataGridView to be the DataTable linked to your query.

 

productGridView.DataSource = testTable;
//Format the grid (optional)
productGridView.AutoResizeColumns(DataGridViewAutoSizeColumnsMode.AllCells);

Once you have done this, the results should look similar to the following.

 

4.png

 

You can also display more detailed information for each product by adding an event handler linked to the RowEnter event. To do so, select the DataGridView in the designer and view its properties by right-clicking and selecting properties. Enter the Events tab (the lightning bolt) and double click the RowEnter event.

 

5.png

 

Since we pulled the primary key (PRODUCTID) into the DataGridView, we can use it in a new query to grab more details about the product. Using a HanaCommand this time, we will get a HanaDataReader that we can use to access additional product details.

 

As an example, here is a HanaCommand that uses a very simple query to select the price of a product given its primary key.

 

HanaCommand cmd = new HanaCommand("SELECT PRICE FROM \"" + SCHEMA + "\".\"" + PRODUCTS_TABLE + "\" WHERE PRODUCTID = '" + PK_ID + "'", conn);
HanaDataReader productInfoReader = cmd.ExecuteReader();
productInfoReader.Read();
string price = productInfoReader.GetString(0);

As you can see, the DataReader classes are used to return rows that you can iterate through. In this case, because we are using a primary key in the query, only one result should be returned. However, when you are dealing with multiple results you can simply run:

 

while (productInfoReader.Read())
{
    //Use results
}

To get the primary key of the selected row, you can access the values of your DataGridView directly. In this case my DataGridView was called ‘productGridView’.

 

private void productGridView_RowEnter(object sender, DataGridViewCellEventArgs e)
{
    if (productGridView[1, e.RowIndex] != null)
    {
        string PK_ID = productGridView[1, e.RowIndex].Value.ToString();
        HanaDataReader productInfoReader = null;
        try
        {
            //Get the product description, category, and price from the database.
            HanaCommand cmd = new HanaCommand("SELECT t.TEXT, p.CATEGORY, " +
                "p.PRICE, p.CURRENCY FROM \"" + SCHEMA + "\".\"" + PRODUCTS_TABLE +
                "\" p INNER JOIN \"" + SCHEMA + "\".\"" + TEXT_TABLE +
                "\" t ON t.TEXTID = p.DESCID " +
                " WHERE p.PRODUCTID = '" + PK_ID + "'", conn);
            productInfoReader = cmd.ExecuteReader();
            productInfoReader.Read();
        }
        catch (Exception exc)
        {
            //For debugging purposes
            MessageBox.Show(exc.ToString());
        }
        finally
        {
            if (productInfoReader != null)
            {
                productInfoReader.Close();
            }
        }
    }
}

In order to display the data, I added a RichTextBox with the name ‘productDescription’, a PictureBox with the name ‘displayPicture’, and two Labels named ‘productPrice’ and ‘productName’ beside the DataGridView.

 

6.png

 

Feel free to change the properties of these elements to remove borders, change fonts and background colours, etc.

 

To display the images, I grouped the products into their respective categories and had a single image display for each category. I added each of the resources into the project and matched the file name to the categories in the SHINE database. I also added a default image for when the category was not found. To add images to your project:

 

  1. Expand the Properties folder of your project in the Solution Explorer.
  2. Double click on Resources.resx.
  3. Under ‘Add Resource’ select ‘Add Existing File’ and import your images.

 

Once this was complete, I set the properties of the various elements we added previously to reflect the product that was being selected.

The resulting code should now look similar to this.

 

private void productGridView_RowEnter(object sender, DataGridViewCellEventArgs e)
{
    if (productGridView[1, e.RowIndex] != null)
    {
        //Get the primary key (productID) and product name from the grid view
        string PK_ID = productGridView[1, e.RowIndex].Value.ToString();
        string name = productGridView[0, e.RowIndex].Value.ToString();
        HanaDataReader productInfo = null;
        try
        {
            //Get the product description, category, and price from the database.
            HanaCommand cmd = new HanaCommand("SELECT t.TEXT, p.CATEGORY, " +
                "p.PRICE, p.CURRENCY FROM \"" + SCHEMA + "\".\"" + PRODUCTS_TABLE +
                "\" p INNER JOIN \"" + SCHEMA + "\".\"" + TEXT_TABLE +
                "\" t ON t.TEXTID = p.DESCID " +
                " WHERE p.PRODUCTID = '" + PK_ID + "'", conn);
            productInfo = cmd.ExecuteReader();
            productInfo.Read();

            //Display the price and currency
            productPrice.Text = productInfo.GetString(2) + " " + productInfo.GetString(3);
            //Display the product name again
            productName.Text = name;

            //Display the product description and category using RTF
            string category = productInfo.GetString(1);
            productDescription.Rtf = "{\\rtf1\\ansi\\deff0 {\\fonttbl {\\f0 Consolas;}}\\f0\\fs20" +
                "{\\b Category: } " + category + "\\line {\\b Description: }" +
                productInfo.GetString(0) + "}";

            //Find the product image using the product category
            category = category.Replace(' ', '_');
            object imgObj = Resources.ResourceManager.GetObject(category);

            //Set and resize the product image
            displayPicture.Image = (imgObj == null) ? Resources._default : (Image)imgObj;
            displayPicture.SizeMode = PictureBoxSizeMode.Zoom;
        }
        catch (Exception exc)
        {
            MessageBox.Show(exc.ToString());
        }
        finally
        {
            if (productInfo != null)
            {
                productInfo.Close();
            }
        }
    }
}

And here is the updated application.

 

7.png

 

Filling TreeNodes

Now we can begin working on our second tab. Add a TreeView to your Employees tab. Use the ‘Enter’ event of your tab to load data into the tree view. In order to load the data into the view, simply iterate through the results of another query and sort the data into an array of TreeNodes. You can then create new parent TreeNodes with these arrays.

 

The final step is adding these parent TreeNodes to your TreeView, in this example named ‘genderTreeView’.

 

private void employeesTab_Enter(object sender, EventArgs e)
{
    try
    {
        //Read the employee data from the database
        HanaCommand data = new HanaCommand(
            "SELECT \"NAME.FIRST\", \"NAME.LAST\", ***" +
            " FROM \"" + SCHEMA + "\".\"" + EMPLOYEE_TABLE + "\"", conn);
        HanaDataReader reader = data.ExecuteReader();
        var maleList = new List<TreeNode>();
        var femaleList = new List<TreeNode>();

        //Iterate through each employee
        while (reader.Read())
        {
            //Create display data (full name)
            TreeNode employeeInfo = new TreeNode(reader.GetString(0) + " " + reader.GetString(1));

            //Sort the employee into the different categories.
            string gender = reader.GetString(2);
            if (gender == "M")
            {
                maleList.Add(employeeInfo);
            }
            else
            {
                femaleList.Add(employeeInfo);
            }
        }

        //Create TreeNodes with the sorted lists
        TreeNode maleEmployees = new TreeNode("Male Employees", maleList.ToArray());
        TreeNode femaleEmployees = new TreeNode("Female Employees", femaleList.ToArray());

        //Add the nodes to the view
        genderTreeView.Nodes.Add(maleEmployees);
        genderTreeView.Nodes.Add(femaleEmployees);
    }
    catch (Exception exc)
    {
        MessageBox.Show(exc.ToString());
    }
}

The resulting TreeView should look similar to the following.

 

8.png

Conclusion                           

The ADO.NET driver for HANA is a powerful tool that brings the full capabilities of the ADO.NET library into your HANA applications. I hope that this sample application has been useful for those interested in learning how to integrate their .NET apps with SAP HANA.

Using HANA Database Triggers to capture Selected Column updates


Background:

We had a specific requirement to capture updates to selected columns of a sensitive transaction table. Although this blog doesn't go into the specifics of that requirement, I hope that walking through the process of adding the database trigger and recording the changes in the requested structure will highlight some useful database features (database triggers, transition variables, sequences, identity columns) that are available in SAP HANA.

 

Disclaimer:

Database triggers should be approached with caution: large and complex database triggers are not a good design approach and are also very hard to maintain and support. This use case was to ensure we captured updates to sensitive data, regardless of the origin of those changes (app tier, DB, services, etc.), and the trigger code was kept lean, capturing just the old/new values, the column(s) that were updated, and the user and date/time of the update.

 

 

Our Sandbox:

db_version.PNG

 


CREATE TRIGGER - SAP HANA SQL and System Views Reference - SAP Library

 

Important notes in relation to SAP HANA db triggers: (As of SP9)

 

  • Events available on Insert/Update/Delete.
  • Can fire Before (validate data, prevent and fix erroneous data, etc.) or After (record and possibly take action based on content).
  • Update and Delete events have access to both old and new transition variables.
  • Statement-level triggers are currently only supported against ROW store tables.
  • You can't reference the original table, i.e. the table the trigger is defined on, in the trigger body. According to Rich in the following link, this limitation may be lifted in SPS10; search the page for "Trigger": New SQLScript Features in SAP HANA 1.0 SPS9
  • You can define up to 1024 triggers per single table and per DML operation. This means a table can have a maximum of 1024 insert triggers, 1024 update triggers and 1024 delete triggers at the same time.
  • Limited SQLScript syntax is supported; the following is not currently supported:
    • DDL or setting session variables
    • resultset assignment (select resultset assignment to a table type)
    • exit/continue commands (execution flow control)
    • cursor open/fetch/close (get each record of a search result by cursor and access records in a loop)
    • dynamic SQL execution (building SQL statements dynamically at SQLScript runtime)
    • return (end SQL statement execution)



Working Example:

Please excuse the simple nature of the fictitious tables created for this example; they are merely for illustrative purposes.


Transaction Table

 

PRIMARY KEY ( COUNTRY)

TrxTab2.PNG


Requirements


1. Fire only on Update

2. Capture only changes to the DOLLAR_VALUE & RATING fields.

3. Identify multiple updates on the same row using the same ID field

4. Record updates using an Insert into an audit table (country_acc_audit)

 

Create section

TrigCreateHeader1.PNG

Trigger Body


All the DDL SQL is available in the attached scripts; I am just highlighting some lines of interest here, and a hedged sketch of what such a trigger could look like follows the notes below.

Select statement2.PNG

 

  • Using the connection_id plus a sequence value to uniquely identify this update transaction.
  • The application_user_name may be relevant for folks who are connecting from an application layer through a common user (e.g., in the case of SAP applications, SAP<SID>), while the current user will hold the connection user (e.g. SAPSR3).


IfStmt2.PNG
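
For orientation, here is a hedged sketch of what the audit table and trigger could look like. The audit-table columns and the exact trigger body are my reconstruction from the requirements and screenshots above, not the original attached scripts.

-- Sequence used, together with the connection id, to group the rows written by one trigger firing
CREATE SEQUENCE COUNTRY_ACC_AUDIT_SEQ;

-- Audit table; ID is the identity column mentioned below
CREATE COLUMN TABLE COUNTRY_ACC_AUDIT (
    ID             BIGINT GENERATED BY DEFAULT AS IDENTITY NOT NULL,
    TRX_SESSION_ID NVARCHAR(64),
    COUNTRY        NVARCHAR(3),
    FIELD_NAME     NVARCHAR(30),
    OLD_VALUE      NVARCHAR(100),
    NEW_VALUE      NVARCHAR(100),
    UPDATED_BY     NVARCHAR(256),
    UPDATED_AT     TIMESTAMP
);

-- Row-level AFTER UPDATE trigger on the transaction table
CREATE TRIGGER COUNTRY_ACC_DETAILS_UPD_TRG
AFTER UPDATE ON COUNTRY_ACC_DETAILS
REFERENCING NEW ROW new_row, OLD ROW old_row
FOR EACH ROW
BEGIN
    DECLARE v_session NVARCHAR(64);
    -- connection id plus a sequence value uniquely identifies this update
    SELECT TO_NVARCHAR(CURRENT_CONNECTION) || '-' || TO_NVARCHAR(COUNTRY_ACC_AUDIT_SEQ.NEXTVAL)
        INTO v_session FROM DUMMY;

    -- requirement 2: only record changes to DOLLAR_VALUE and RATING
    IF :old_row.DOLLAR_VALUE <> :new_row.DOLLAR_VALUE THEN
        INSERT INTO COUNTRY_ACC_AUDIT
            (TRX_SESSION_ID, COUNTRY, FIELD_NAME, OLD_VALUE, NEW_VALUE, UPDATED_BY, UPDATED_AT)
        VALUES (:v_session, :old_row.COUNTRY, 'DOLLAR_VALUE',
                TO_NVARCHAR(:old_row.DOLLAR_VALUE), TO_NVARCHAR(:new_row.DOLLAR_VALUE),
                SESSION_CONTEXT('APPLICATIONUSER'), CURRENT_TIMESTAMP);
    END IF;

    IF :old_row.RATING <> :new_row.RATING THEN
        INSERT INTO COUNTRY_ACC_AUDIT
            (TRX_SESSION_ID, COUNTRY, FIELD_NAME, OLD_VALUE, NEW_VALUE, UPDATED_BY, UPDATED_AT)
        VALUES (:v_session, :old_row.COUNTRY, 'RATING',
                TO_NVARCHAR(:old_row.RATING), TO_NVARCHAR(:new_row.RATING),
                SESSION_CONTEXT('APPLICATIONUSER'), CURRENT_TIMESTAMP);
    END IF;
END;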

 

Test Scenarios

 

Update 1:

 

-- 1 row 2 field update
update country_acc_details
set dollar_value = '11000', rating = 11
where country = 'USA';
commit;

Audit table

Res1.PNG

Note: the ID field is an identity column on the audit table; it is a built-in sequence on the country_acc_audit table. Also note that trx_session_id is the same for both records.



CREATE COLUMN TABLE "COUNTRY_ACC_AUDIT" ("ID" BIGINT CS_FIXED GENERATED BY DEFAULT AS IDENTITY NOT NULL ,


-------------------------------------------------------------

Update 2:

 

 

-- 2 rows 1 column, same connection
update country_acc_details
set dollar_value = '1000'
where country IN ('IE', 'IND');
commit;

Audit table

Res2.PNG

-------------------------------------------------------------


Update 3:

 

-- new connection, 10 rows 2 field update
update country_acc_details
set dollar_value = '100000', rating = 1;
commit;

Audit table

Res3.PNG

 

 

Conclusion

There are quite a few limitations on what you can use within the trigger body; I would have liked to use session variables to tie together all the updates executed in the same connection. I also had some indexserver crashes on trigger creation and trigger execution for queries that joined the m_connections and m_service_threads tables. I am not sure whether it was directly related to those tables or to the join types, but I need to do more research before opening an incident.

Otherwise the trigger behaves as expected and, as you can see above, meets the requirements laid out.



Real-time train information for passengers who are travelling


My idea is that people should get real-time messages on their mobile phones with the details of the long-distance trains they are travelling on. Passengers should have real-time information from the time they board until the end of their journey. This helps passengers look for an alternative if the train has been cancelled. Suppose, for example, the train is at station W and there is a track problem that takes 2 hours to repair, and a passenger has to board at Y to attempt an exam. If he has no idea about the train and waits for it, he misses his exam. But if the passenger learns that the train will only leave W after 2 hours, he can take an alternative to his destination and avoid chaos at the last minute. As I have experienced this situation myself, I would suggest looking at whether SAP HANA cloud computing can solve this problem.

SAP HANA IoT With Arduino and Raspberry Pi: Part 1


1.png

Before starting the project, we should know why we selected both the Arduino and the Raspberry Pi.

The Raspberry Pi is a low-cost, credit-card-size computer with an ARM processor and a huge community to help you build applications. The Raspberry Pi can multitask: it can run multiple programs in the background while activated. For example, you can have a Raspberry Pi serving as both a print server and a VPN server at the same time.

The Arduino, on the other hand, is a microcontroller board that makes it much easier to integrate analog inputs, and the Arduino IDE is significantly easier to use than Linux. For example, if you wanted to write a program to blink an LED with a Raspberry Pi, you'd need to install an operating system and some code libraries, and that's just to start. On an Arduino, you can get an LED to blink in just eight lines of code. Since the Arduino isn't designed to run an OS or a lot of software, you can just plug it in and get started.

You can also leave an Arduino plugged in while it runs a single process for a long time, and just unplug it when you're not using it. This is why experts recommend the Arduino for beginners before moving on to the Pi.

As per Limor Fried, the founder of Adafruit, a DIY electronics store that offers parts and kits for both Arduino and Pi projects: "The Arduino is simpler, harder to 'break' or 'damage' and has much more learning resources at this time for beginners. With the Pi you have to learn some Linux as well as programming—such as Python. The Arduino works with any computer and can run off of a battery. You can also turn it on and off safely at any time. The Pi setup can be damaged by unplugging it without a proper shutdown."

While the Raspberry Pi shines in software applications, the Arduino makes hardware projects very simple; it's simply a matter of figuring out what you want to do. It may sound like the Raspberry Pi is superior to the Arduino, but that's only when it comes to software applications. The Arduino's simplicity makes it a much better bet for pure hardware projects.

The ultimate answer when deciding between the Pi and the Arduino is, "Why choose?" If you're looking to learn about IoT, each one will teach you something different. The Raspberry Pi and Arduino are complementary; ideally, experts suggest a scenario where the Arduino is the sensory workhorse, while the Pi doles out directions.

So we are going to do exactly that: in this three-step blog series we are going to use the Arduino to interface with the analog sensor and provide the data in digital format to the Pi, and the Pi will take care of the communication with SAP HANA.

The simplified steps are:


Step 1: Connect the Arduino to the computer and check that the analog input is working perfectly. For this experiment we are choosing a photo sensor, which will detect the intensity of light and send the data to the computer via serial port communication.

Step 2: Connect the Raspberry Pi to the Arduino and establish the same configuration that was achieved with the computer and the Arduino. Also set up a web server on the Raspberry Pi which can communicate over the internet.

Step 3: Store the data in the SAP HANA system from the Pi and display it using SAPUI5 in near real time (a hedged table sketch for this step follows below).
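
Looking ahead to Step 3, the Pi will need a table in SAP HANA to write the readings into. A minimal sketch, assuming a simple column table whose name and columns are my own placeholders (the actual objects will be defined in Part 3):

-- Hypothetical table for the light-sensor readings pushed from the Raspberry Pi
CREATE COLUMN TABLE SENSOR_READINGS (
    ID           BIGINT GENERATED BY DEFAULT AS IDENTITY,
    SENSOR_VALUE INTEGER,   -- raw analog value (0-1023) read by the Arduino
    OUTPUT_VALUE INTEGER,   -- mapped PWM value (0-255)
    READ_AT      TIMESTAMP
);

-- The Pi would insert one row per reading, for example:
INSERT INTO SENSOR_READINGS (SENSOR_VALUE, OUTPUT_VALUE, READ_AT)
VALUES (512, 127, CURRENT_TIMESTAMP);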

 

For this blog we are going to perform Step 1:

 

First, install the Arduino software from here on your computer; for me it is Windows. It looks like this after installation:

2.png

 

Also check the Serial port which is connected to Arduino and set the right port in your installed software

 

3.png

Now, for this demo, we are using the circuit diagram below:

4.png

And my circuit looks like this in the real world:

 

5.png

 

We are going to use code which reads the analog input and prints it to the serial output:

 

 

/*
  Analog input, analog output, serial output

  Reads an analog input pin, maps the result to a range from 0 to 255
  and uses the result to set the pulse width modulation (PWM) of an output pin.
  Also prints the results to the serial monitor.

  The circuit:
  * potentiometer connected to analog pin 0.
    Center pin of the potentiometer goes to the analog pin.
    Side pins of the potentiometer go to +5V and ground.
  * LED connected from digital pin 9 to ground.

  created 29 Dec. 2008
  modified 9 Apr 2012
  by Tom Igoe

  This example code is in the public domain.
*/

// These constants won't change. They're used to give names to the pins used:
const int analogInPin = A0;  // Analog input pin that the potentiometer is attached to
const int analogOutPin = 9;  // Analog output pin that the LED is attached to

int sensorValue = 0;  // value read from the pot
int outputValue = 0;  // value output to the PWM (analog out)

void setup() {
  // initialize serial communications at 9600 bps:
  Serial.begin(9600);
}

void loop() {
  // read the analog in value:
  sensorValue = analogRead(analogInPin);
  // map it to the range of the analog out:
  outputValue = map(sensorValue, 0, 1023, 0, 255);
  // change the analog out value:
  analogWrite(analogOutPin, outputValue);
  // print the results to the serial monitor:
  Serial.print("sensor = ");
  Serial.print(sensorValue);
  Serial.print("\t output = ");
  Serial.println(outputValue);
  // wait 200 milliseconds before the next loop
  // for the analog-to-digital converter to settle
  // after the last reading:
  delay(200);
}


Here we are reading the analog signal from the photo sensor via the Arduino; the Arduino then sends it via the serial port to the computer, where it is used to show the sensor's readings.

After writing the program, upload it to the Arduino.


6.png

And to see the magic, open the Serial Monitor at the top right of the IDE.

 

7.png

 

We have demonstrated the result of the step 1 in a short video which is here.

 

-Ajay Nayak

UI5CN

Hana Smart Data Integration - Inside Realtime streams

$
0
0

Most Hana adapters support realtime push of changes, not only for databases as sources but for everything, e.g. Twitter or adapters you wrote yourself. While the documentation contains all the information needed from an adapter developer and user perspective, I'd like to show some internals that might be helpful.

 

 

As shown in the blog entry about adapters and their architecture (see Hana SPS09 Smart Data Integration - Adapters) and the Adapter SDK manual (SAP HANA Data Provisioning Adapter SDK - SAP Library), adapters provide an interface to interact with the Hana database that revolves around remote sources (= the connection to the remote source system) and virtual tables (= the structure of the remote information).

For realtime the remote subscription is the central Hana object.

 

Hana remote subscriptions

 

The syntax for this command is quite self explanatory:

 

create remote subscription <subscriptionname> using (select * from <virtual_table_name> where ...) target [table | task | procedure] <target_name>;

 

Hana will send the passed SQL select of that command to the Smart Data Access layer and depending on the capabilities of the adapter, as much as possible is passed to the Adapter. The Adapter can do whatever it takes to get changes, send them to Hana and there the change rows are put either into a target table, a target task (=transformation) or a target stored procedure.

 

The remote subscription object contains all of this information, as a query on the catalog object shows:

 

select * from remote_subscriptions;

 

realtime_insight1.png

As seen from the adapter, the above create remote subscription command does nothing. All it does is basic logical validation, like checking whether the virtual table exists, whether the SQL is simple enough to be pushed to the adapter, and whether the target object exists and the selected columns match the target structure. All checks are performed on metadata Hana already has.

 

Activating a remote subscription

 

Replicating a table consists of two parts, the initial load to get all current data into the target tables and then applying the changes onward. But in what order?

The usual answer is to set the source system to read only, then perform the initial load, then activate the change data processing and allow users to modify data after. As the initial load can take hours, such down time is not very appreciated. Therefore we designed the remote subscription activation to support two phases.

 

First phase is initiated with the command

 

alter remote subscription <subscriptionname> queue;

 

With this command the Adapter will be notified to start capturing changes in the source. The adapter gets all the information required for that, a Hana connection to use, the SQL select so it knows the table, columns and potential filters. The only thing the adapter is required to do is to add a BeginMarker row into the stream of rows being sent. This is a single line of code in the Adapter and it will tell the Hana receiver that at this point the Adapter started to produce changes for this table.

 

Example:

The adapter is already replicating CUSTOMER; now the above remote-subscription-queue command is issued for a subscription using the remote table REGION. Such a stream of changes might look like this:

 

Transaction      Table      Change Type    Row
13:01:55.0000    CUSTOMER   insert         insert into customer(key, name) values (100, 'John');
13:01:55.0000               commit
13:02:56.0000    CUSTOMER   update         update customer set name = 'Franck' where key = 7;
13:02:56.0000    CUSTOMER   update         update customer set name = 'Frank' where key = 7;
13:02:56.0000               commit
13:47:33.0000    REGION     BeginMarker    BeginMarker for table REGION
13:55:10.0000    REGION     insert         insert into region(region, isocode) values ('US', 'US');
13:55:10.0000               commit

 

The Hana server takes all incoming change rows and processes them normally; only rows for subscriptions that are in queue mode, that is, where a BeginMarker was found in the stream but no EndMarker yet, are queued on the Hana server. In the above example the CUSTOMER rows end up in their target table as usual, while the target table for the REGION rows remains empty for now.

 

Therefore the initial load can be started and it does not have to worry about changes that happened. From the looks of the initial load, the target table is empty and not a single change will be loaded.

 

Once the initial load is finished, the command

 

alter remote subscription <subscriptionname> distribute;


should be executed. This will tell the adapter to add an EndMarker into the stream of data.

When the Hana server finds such an EndMarker row, it starts to empty the queue and apply the changes to the target table. All rows between the BeginMarker and EndMarker for the given table are loaded carefully, as it is unknown whether they were already covered by the initial load or not. Technically that is quite simple: the insert/update rows are loaded with an upsert command, hence they are either inserted if the initial load did not find them or updated if already present. Rows with the ChangeType Delete are deleted, of course.

All rows after the EndMarker are processed normally.
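
Putting the queue/distribute handshake together, the end-to-end sequence for one table looks roughly like this. It is a sketch: the remote subscription, virtual table and target table names are placeholders, and the initial load is shown as a plain INSERT ... SELECT from the virtual table.

-- 1. Create the subscription; no data flows yet
create remote subscription SUB_REGION
    using (select * from V_REGION)
    target table T_REGION;

-- 2. Start capturing changes; rows after the BeginMarker are queued in Hana
alter remote subscription SUB_REGION queue;

-- 3. Run the initial load while the changes queue up
insert into T_REGION select * from V_REGION;

-- 4. Ask the adapter for the EndMarker; the queued changes are applied
--    as upserts/deletes and from then on changes flow through normally
alter remote subscription SUB_REGION distribute;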

 

Error handling

 

During operation various errors can happen. The Adapter has a problem with the source and does raise an exception. The Adapter or the Agent itself dies. The network connection between Hana and Agent is interrupted. Hana was shutdown....

In all these cases the issue is logged as exception in a Hana catalog table.

 

select * from remote_subscription_exceptions;

 

realtime_insight2.png

 

In the above instance the connection between the Agent and Hana was interrupted. Therefore the adapter got a close() call and should have cleaned up everything, bringing itself into a state where nothing is active anymore. On the Hana side a remote subscription exception is shown with an EXCEPTION_OID. Using this unique row number the exception can be cleared and the connection re-established, using the command

 

process remote subscription exception 42 retry;

 

This command will reestablish the connection with the Agent and its Adapter, send all remote subscription definitions to the adapter again plus the information where the adapter should start again. The adapter then has to start reading the changes from this point onward.

 

 

Pausing realtime propagation

 

Another situation might be the need to either pause the capture of changes in the source or pause applying the changes to the target objects. This cannot be done at the remote subscription level, only for the entire remote source, using the commands

 

alter remote source <name> suspend capture;

alter remote source <name> suspend distribution;

 

and the reverse operation

 

alter remote source <name> resume capture;

alter remote source <name> resume distribution;

 

 

The magic of Change Types

 

Whenever an adapter creates realtime changes, these CDC Rows have a RowType, in the UI called Change Type, which is either insert, update, delete, or something else. This Change Type information is used when loading a target table or inside a task to process the data correctly.

For simple 1:1 replications the handling of the Change Type is quite straight forward, the Change Type is used as the loading command, so an insert row is inserted, a delete row deleted etc.

 

Therefore it is important that the adapter sends useful Change Types. Take the RSS Adapter with its virtual table RSSFEED. The adapter polls the URL and gets the latest news headlines, which should be loaded. The primary key of the virtual table is the URI of the news headline, and so is that of the replicated target table.

 

If the adapter sent all rows with Change Type = Insert, the first realtime transaction would insert the headlines and the second iteration would fail with a primary key violation. An RSS feed simply does not know what was changed or what has been received already. Not even the adapter knows that for sure, as the adapter might have been restarted; from its perspective it is then the first read, and it has no idea what happened before it was stopped.

 

One solution to this would be to send two rows, a Delete row plus an Insert row. That would certainly work but would cause a huge overhead in the database, as twice as many rows are sent, and deleting rows and inserting them again, even if nothing changed, is expensive as well.

 

The solution was to add more Change Types to simplify adapter development. In case of above RSS Adapter, the RowType Upsert was used.

 

Another special Change Type is the eXterminate value. Imagine a subscription using the SQL "select * from twitter where region = 'US'" and let's assume this filter cannot be passed to the Adapter but is executed in the Hana Federation layer.

So Twitter sends rows from all regions to Hana, in Hana the filter region = 'US' is applied and only the resulting ones are loaded. No problem. Except for Delete messages from Twitter. Because Twitter does not tell all values, only the TweetID of the Tweet to be removed. So the adapter does send a row with the column TweetID being filled, all other columns are null, especially the region column. Hence this delete row does not pass the filter region='US' and will never make it to the target table. Therefore, instead of sending such row as Delete, the Twitter adapter does send this row as eXterminate row.

This tells the applier process that only the primary key is filled and it does not use the filter condition on those rows.

 

Another Change Type is Truncate. Using this the Adapter can tell to delete many rows at once. An obvious example is, in a database source somebody emptied the source table using a truncate table command. The adapter might send a truncate row with all columns being NULL, instead of deleting every single row. But with the Truncate Change Type subsets of data can be deleted as well. All the Adapter has to do is sending a truncate row with some columns having a value. For example, an Adapter might send a truncate row with region='US' to delete all rows where the region = 'US'. That might sound as a weird example but imagine a source database command like "alter table drop partition".

 

Another use case of the Truncate Change Type goes together with Replace rows. Imagine an adapter that does not know which row has been changed, only that something changed within a dataset. Let's say it is a file adapter, and whenever a file appears in a directory the entire file is sent into the target table. It might happen that a file contained wrong data and hence is put into the directory again with the same name as previously. None of the above Change Types can deal with that situation: Insert would result in a primary key violation, and Upsert would work, but what if the file now contains fewer rows because one got deleted?

The solution is to send a first truncate row with the file name column being set, hence the command "delete from target where filename = ?" will be executed and now all rows of the file can be inserted again. But use the Change Type Replace instead of Insert. It does the same thing internally, all replace rows are being inserted but it helps to understand that these Replace rows belong to the previous Truncate row and additional optimizations and validations can be done.

 

All of the above Change Types work with tasks as targets as well. Understanding what each transform has to do for each row was hard, very hard in fact. But the advantage we get is that complete dataflows no longer work only in batch; they can transform realtime streams of data as well. No delta loads are needed; the initial-load dataflow can be turned into a realtime task receiving the changes. As of SPS09 this works for single tables only, but how to deal with joins in realtime is the next big thing.

End to end integrated Scenario of ECC, HANA and BO : HANA Modelling to IDT (Part 2)


Hi,

This document contains an end-to-end scenario of creating a Universe in IDT by fetching data from SAP HANA. IDT is the Information Design Tool under BusinessObjects (details about IDT are given in the steps below).

 

 

For Previous Steps :End to end integrated Scenario of ECC, HANA and BO :  ECC to HANA and HANA Modelling (Part 1)


For DASHBOARD SCENARIO : Step by Step procedure of  how to consume SAP HANA views in Dashboards.

 

The following are the steps involved:

 

Step-By-Step Development Process

 

http://scn.sap.com/servlet/JiveServlet/downloadImage/38-124232-688956/496-400/fnl.png

 

A. UNIVERSE DEVELOPMENT

1. Project Creation

 

Login to IDT.

  • Click START ---> ALL PROGRAMS ---> SAP BUSINESS INTELLIGENCE ---> SAP BUSINESSOBJECTS BI PLATFORM 4 CLIENT TOOLS ---> INFORMATION DESIGN TOOL

  2.png

 

NOTE : The information design tool is an SAP BusinessObjects metadata design environment that enables a designer to extract, define, and manipulate metadata from relational and OLAP sources to create and deploy SAP BusinessObjects universes.

 

 

 

 

 

Below is the home screen for creating a universe. The left navigation panel shows the already existing projects, as well as the repository resources.

3.png

 

NOTE : A universe is an organized collection of metadata objects that enable business users to analyze and report on corporate data in a non-technical language.

 

 

 

 

Here we create a project for developing the Universe.

  • Click FILE ---> NEW ---> PROJECT

4.png

 

 

Specify the project name and the workspace location. Devendra_Po_Analysis is the project name used in this scenario.

  • PROJECT_NAME ---> PROJECT_LOCATION ---> FINISH

5.png

 

 

 

 

2. Connection Setup

 

  • Insert a session or open an existing one from the Repository Resources tab in the left panel.

 

6.png

 

 

  • Enter credentials for creating session.

   System, User Name and Password. Click OK

7.png

 

NOTE : A session contains the Central Management Server (CMS) system name and authentication information needed to access resources stored in a repository. Workflows in the information design tool that require access to secured resources, prompt you to open a session.

 

 

 

  • CONNECTIONS and UNIVERSE are two folders under session.

8.png

 

 

  • After expanding Connections, we use one of the connections. HANA_CONN is the secured JDBC connection used here.

9.png

NOTE : A personal connection is created by one user and cannot be used by other users. The connection details are stored in the PDAC.LSI file.

A shared connection can be used by other users through a shared server. The connection details are stored in the SDAC.LSI file in the BusinessObjects installation folder. However, one cannot set rights and security on objects in a shared connection, nor can a universe be exported to the repository using a shared connection.

A secured connection overcomes these limitations. Through it, rights can be set on objects and documents. Universes can be exported to the central repository only through a secured connection. The connection parameters in this case are saved in the CMS.

 

 

 

 

 

  • Check whether connection is working properly or not.

   TEST CONNECTION option checks the connection.

10.png

 

 

 

  • Right Click CONNECTION_NAME ----> CREATE RELATIONAL CONNECTION SHORTCUT

11.png

NOTE : If you need to access data from tables in a regular RDBMS, then your connection should be a relational connection, but if your source is an application and the data is stored in a cube (technically there are tables involved which you normally do not see), then you would use an OLAP connection.

 

 

  • Select PROJECT_NAME from local project.

 

12.png

 

 

 

  • CONNC_NAME.cns file is added under PROJECT_NAME

 

13.png

 

 

3. Data Foundation

 

  • Create Data Foundation layer under project.
    • Right Click PROJECT_NAME ---> NEW ---> DATA FOUNDATION

    14.png

     

    NOTE : A data foundation can consume one or more connections, so you can bring tables from multiple databases into a single data foundation. The data foundation contains the tables, joins, and contexts.

     

     

    • Specify the Resource name and description for data foundation layer.

    15.png

     

     

     

    • Select Data foundation type.

       Here we use Single Source data type.

    16.png

     

    NOTE : Single-source data foundations support a single connection. The connection can be local or secured, which means you can publish universes based on the data foundation either locally or to a repository.

    Multisource-enabled data foundations support one or more connections. You can add connections when you create the data foundation and anytime later. Multisource-enabled data foundations only support secured connections, and universes based on this type of data foundation can only be published to a repository.

     

     

     

     

    • Select a connection from the list to set up the connection between the HANA system and the Universe.

        HANA_CONN.cns is the secured relational connection used here.

     

    17.png

     

     

     

     

    • DF_PO_ANA.dfx is data foundation layer used in current scenario.

     

    18.png

     

     

     

     

    • We insert tables from HANA system as shown below.
      • Click INSERT ----> INSERT TABLES

      19.png

       

       

       

      • Select the proper schema and tables which need to be called from HANA system.

      20.png

       

       

       

      • Click SCHEMA_NAME ----> TABLES_NAMES ----> FINISH

         The EKKO and EKPO tables under the ECC_NEW_HANA schema are selected.

       

      21.png

       

       

       

      • In the current screenshot, the tables have been inserted into the data foundation layer.

       

      22.png

       

       

       

      4. Business Layer

       

      • Right click PROJECT_NAME ----> NEW ----> BUSINESS LAYER

      23.png

       

       

      NOTE : Business layer is a collection of metadata objects that map to SQL or MDX definitions in a database, for example, columns, views, database functions, or pre-aggregated calculations. The metadata objects include dimensions, hierarchies, measures, attributes, and predefined conditions. Each object corresponds to a unit of business information that can be manipulated in a query to return data. Business layers can be created directly on an OLAP cube, or on a data foundation that is built on a relational database

       

       

       

       

       

      • Select type of data source for Business layer.

         Here Relational Data Foundation is used.

      24.png

       

       

       

       

      • Specify resource name and description for business layer.

      25.png

       

       

       

      • Select Data foundation file.
        • Select DF_NAME.dfx ----> FINISH

         

        26.png

         

         

         

         

         

        • All the dimensions and measures of the selected tables or views are automatically displayed in the left panel under the business layer.

        27.png

         

         

         

         

         

        • After creating business layer we need to publish it to repository.
          • Right click BUSINESS_LAYER ----> PUBLISH ----> TO A REPOSITORY

           

          28.png

           

           

           

           

           

          • Publishing exports the universe to the repository, so it is mandatory to run the integrity check on all connections first.

          29.png

           

           

           

          • Select proper workspace for storing. Here we have used DEVENDRA folder.

          30.png

           

           

          • Click FINISH.

             The universe is published successfully at the specified location.

           

          31.png

           

           

          For Next Step: End to end integrated Scenario of ECC, HANA and BO : IDT to Web Intelligence Reporting (Part 3)

          Debug XSJS Using HANA Studio Step By Step


          Hi Folks,


          When I was trying to debug and facing problems, I went through many blog posts and didn't find even a single post which gave all the steps needed to debug XSJS. Hence I am writing one, hoping someone will be happy to see it.



               1. To be able to debug, your HANA user requires the following debugger role:

                    sap.hana.xs.debugger::Debugger
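
                  If you have the required privileges, this repository role can be granted from the SQL console. A minimal sketch, assuming your HANA user is called MYUSER (replace it with your own user):

                  -- grant the activated repository role to your user
                  CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.hana.xs.debugger::Debugger', 'MYUSER');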


               2. Create Debug Configuration for your XS project.

          debug01.png

          debug02.png

          debyug03.png

               Click on Apply/Save button to save the debug configuration.


               3. Now create breakpoints wherever you want to debug. You can create breakpoints by double clicking on left most side of the code line.

          debug04.png

               4. Now run your project in a web browser (preferably Google Chrome). When your XS application runs, a unique session id is generated. The session id can be seen by opening the developer tools of the web browser. Press "F12" to open the developer tools and note down the session id.

          sess.png

               5. Now go to HANA Studio and start debugging. Select your project, click the debug icon and select the debug configuration you created in the earlier step.


          a.png

               6. Once you start debugging, HANA Studio will ask which session to debug, so select the session id you noted down in the earlier step.

          a.png


               7. Now go back to the web browser where your application is running and where you noted down the session id. Navigate to the point in the application where you want it to enter debug mode, i.e. where you created the breakpoints.

                  Once a breakpoint is reached, you will see a green line in HANA Studio indicating debug mode.

          final.png



          I hope it will help someone.


          --

          Regards,

          Rauf


             


          Try hanatrial using Python or nodejs


          A step-by-step example of how to connect to an SAP HANA trial instance using the Python or nodejs open source clients, PyHDB or node-hdb, and the HANA Cloud Platform console client.

           

          1. Download the SDK

          To connect to hanatrial instance you need the command line client, from SAP HANA Cloud Platform Tools. If not already installed within your eclipse installation, download the SDK of choice from SAP Development Tools for Eclipse repository.

          1_sdk.png

           

          Unpack it, start the bash shell on Linux or OSX, or the Command Prompt on Windows, and go to the tools subfolder of the SDK. This example was tested on Linux (Ubuntu 14.04); on Windows it should work the same way.

           

           

          2. If you are behind a proxy, configure the proxy in your shell

          following the readme.txt in tools folder.

           

           

          3. Open the tunnel to hanatrial instance

          following SAP HANA Cloud documentation,  like for example:

           

          Pass.png

           

          You can check the username, account name and HANA instance name in the hanatrial cockpit:

           

          Account.png

           

          Instance.png

           

           

          After entering the password, the tunnel is opened and the localhost proxy created, for hanatrial instance access:

           

          TunnelParams.png

           

          The default local port is 30015, but check whether yours is different.

           

           

          4. Connect from Python or nodejs Client

          Use the displayed parameters to connect from the Python client and, for example, display the table names:

           

          import pyhdb

          connection = pyhdb.connect('localhost', 30015, 'DEV_4C55S55VRW5Z1W3STRMBCWLE0', 'Gy5q95tQaGnOZbz')
          cursor = connection.cursor()
          cursor.execute('select * from tables')
          tables = cursor.fetchall()
          for table in tables:
              print table[1]

           

          Screen Shot 2015-04-24 at 15.09.46.png

           

          It works the same way for nodejs client, only connection parameters adapted from node-hdb Getting Started example:

           

          var hdb    = require('hdb');
          var client = hdb.createClient({
            host     : 'localhost',
            port     : 30015,
            user     : 'DEV_4C55S55VRW5Z1W3STRMBCWLE0',
            password : 'Gy5q95tQaGnOZbz'
          });

          client.on('error', function (err) {
            console.error('Network connection error', err);
          });

          client.connect(function (err) {
            if (err) {
              return console.error('Connect error', err);
            }
            client.exec('select * from DUMMY', function (err, rows) {
              client.end();
              if (err) {
                return console.error('Execute error:', err);
              }
              console.log('Results:', rows);
            });
          });

           

          Screen Shot 2015-04-24 at 15.35.02.png

          Join us at the SAP HANA Developers Expert Summit!



The SAP HANA Product Management team is inviting developers who are actively building applications on SAP HANA to this free one-day event, where we want to hear from you firsthand about your SAP HANA application development experiences and challenges: what is working for you and what is not.

If you’ve been spending your days banging away at the keyboard and building killer apps on SAP HANA, we want to talk to you. We need to talk to the developers who have been neck deep in creating tables and views via CDS (Core Data Services), writing SQLScript stored procedures, and exposing services such as OData and server-side JavaScript. We’d also like to talk to developers who are creating applications that leverage HANA-specific features such as spatial, predictive, and text analysis. So if you have worked with any of these topics, please consider joining us in Newtown Square or Palo Alto for this free event.

The plan is to have a set of brief update presentations for each topic, followed by a round of feedback sessions where you will be invited to sit down with the product manager responsible for that topic and tell them what your greatest successes have been and what your worst pain points are. This is exclusive access to the product managers within SAP who can help you influence the direction of the product for the better. Don’t miss this opportunity.


This is an interactive event with a small number of experienced, hands-on customer experts networking together and providing direct and candid feedback to SAP. To this end the number of registrations will be limited; all attendance requests will be given careful consideration, and we will contact you with a confirmation email and the next steps.


          We are running this event in two locations, register today for an invitation!  Please register for one location only.


          Register here for an invitation for Palo Alto - June 24th, 2015

          Register here for an invitation for Newtown Square - September 2nd, 2015
           

          The tentative agenda is as follows:



8:00 am – 9:00 am: Breakfast & Check-In
9:00 am – 9:20 am: Welcome & Introduction (Mike Eacrett)
9:30 am – 10:30 am: Topic Update Presentations I
  • Tooling & Lifecycle Management (Mark Hourani / Ron Silberstein)
  • Core Data Services (Thomas Jung)
  • Modeling (Lori Vanourek)
  • SQLScript (Rich Heilman)
10:30 am – 12:10 pm: Break-Out Feedback Sessions I
12:10 pm – 1:00 pm: Lunch
1:00 pm – 2:15 pm: Topic Update Presentations II
  • SAP HANA XS (Rich Heilman)
  • The Future of SAP HANA XS (Thomas Jung)
  • Predictive (Mark Hourani)
  • Spatial (Balaji Krishna)
  • Text (Anthony Waite)
2:15 pm – 2:30 pm: Afternoon Break
2:30 pm – 4:35 pm: Breakout Feedback Sessions II
4:45 pm – 5:00 pm: Closing Remarks (Mike Eacrett)
5:00 pm – 8:00 pm: Networking Reception & Dinner

           

          Sapphire: WHO'S WITH ME?


When I played rugby back in the day, if I was running with the ball and one of my teammates was on the outside, ready to turn the corner and score a try, they would yell out "I'M WITH YOU!"

           

           

          england-rugby-wallpapers-new-300x188.jpg

           

           

           

          As you and I get ready for a great week next week at Sapphire - I want to ask you this question: "WHO's WITH ME?" The HANA Platform and the HANA Cloud Platform teams have put together an awesome agenda and a comprehensive set of content for Sapphire, and we want you to come along with us. For the HANA Platform and HANA Cloud Platform, there will be:

           

• 15 customer theater presentations, from marquee brands such as Coca-Cola, Under Armour, Lockheed Martin, Schlumberger, Siemens and others.
          • 4 customer panels, one hosted by Steve Lucas and one by Irfan Khan
          • 46 Microforums, discussing everything from HANA Best Practices, to How to get up and running with BW on HANA, to the value of extending your applications with SAP's PaaS offering (HCP)
          • 63 Demo Theaters, from Dynamic Tiering to IoT to Big data to hybrid cloud to cloud extensions and more
          • 19 demo stations, with HANA experts ready to "show and tell" all kinds of cool things HANA and the HANA Cloud Platform can do for your business

           

           

          If you want to join me for my personal session participation (WHO'S WITH ME?), I will have a few things going on my agenda where you can join me:

           

          • Tuesday 12:30-1:30pm (BI12324): The HANA Platform Roadmap session.  I am fortunate to be co-presenting with Mike Eacrett, SAP VP of HANA Product Management
• Tuesday 1:30-2pm (SID 20299): Join Craig Parker from Genband to hear how they revolutionized their customers' experience with the HANA Cloud Platform
          • Tuesday 4:30-5pm:  Come join me in the SUSE booth as we discuss ways SAP and SUSE will help you migrate off of Oracle (boo!) onto an SAP database (yea!)
• Wednesday 11-11:45am (PT20275):  Hear from Cisco and from SAP consultants how you can choose an optimal use case for your company to get started with HANA.
          • Wednesday 4:30-5pm (PT20258): Theater session with Lockheed Martin's Stephan Gerali to understand how HANA is helping Lockheed Martin innovate
          • Thursday 8-9am (RT1630): Dialog and Q&A with the HANA Innovation Award Winners.  Don't miss this opportunity to hear from customers who have transformed their business with HANA
          • Thursday 5pm-5:30pm (SID 20301): Enjoy Prakash Darji, GM of our HCP unit, discuss HCP success with three customers in an interactive panel discussion

           

          Over and above my sessions, I want to invite you to join me at two very special events (WHO's WITH ME?):

           

           

• HANA Innovation Awards celebration (Tuesday night, 6:30pm, Orlando Hilton):  Come celebrate with us as we recognize top customers using HANA to simplify, accelerate and innovate their businesses.  Tuesday evening in the Orlando Ballroom at the Orlando Hilton (across the street from the Convention Center).  Everyone who attends gets the new Hasso Plattner/Bernd Leukert book, and another special giveaway.  Reserve your spot now by sending an email to nina.hunter@sap.com.
          • HANA Cloud Ice Breaker Reception (Wednesday night, 6:30pm, Minus5 Ice Bar):  Chill out at the Minus5 Ice Bar and mingle with SAP HCP and other SAP Cloud customers.  Don't know much about SAP's cloud initiatives?  Find out at the Ice Bar on Wed night.

           

          I'm looking forward to finding out WHO'S WITH ME? in my activities in Orlando at Sapphire.  Together we will score a try!!!

           

           

          NZ try.jpg

           

           

          As an added bonus, just as I believe HANA is the greatest in-memory data management and application platform ever, in my humble opinion this is the greatest Rugby Try ever scored:  1973 New Zealand vs. Barbarians (Gareth Edwards).  What is your opinion - about HANA and about the greatest Rugby try?

           

           

          Typescript definitions for HANA XS engine


          Hi All,

I've written a TypeScript definitions file for use with the SAP HANA XS engine.

           

This allows you to write your XS applications in TypeScript. Fire up your favorite TypeScript editor along with the definitions file:

           

          ts.png

           

This shows Atom with its TypeScript plugin, which provides syntax highlighting and typeahead features.

Saving the file compiles the TypeScript to JavaScript:

           

          compiled.png

           

The compiled JavaScript can then be uploaded to the HANA server and run there.
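As a rough sketch of what this looks like in practice (my own minimal example, assuming the definitions file declares the global $ API), an .xsjs handler written in TypeScript can be as simple as:

          /// <reference path="xsjs.d.ts" />
          // Minimal XS handler written in TypeScript; it compiles to plain JavaScript
          var who: string = $.request.parameters.get("name") || "XS developer";
          $.response.contentType = "text/plain";
          $.response.setBody("Hello, " + who);
          $.response.status = $.net.http.OK;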

           

           

          Future Work:

1. Improving the definitions file
2. Working with TypeScript in library files (xsjslib) is currently messy; the TypeScript files can be compiled into one JavaScript file using the compiler flag --out

           

          https://github.com/larshp/xsjs.d.ts

          Using #SQLServer 2014 Integration Services (#SSIS) with #SAPHANA – Part 1


          Now that SAP has awarded me SAP HANA Distinguished Engineer status, it’s time to get some new content out to the community. Over the last year, I’ve been working with Microsoft to see what works and what doesn’t work when using SQL Server 2014 Integration Services (SSIS) with SAP HANA. After all, SAP HANA supports ODBC connections and starting with SPS 9, developers can use the SAP HANA Data Provider for Microsoft ADO.NET. For customers using SQL Server as their data platform for SAP Business Suite / NetWeaver solutions, SSIS is part of your licensed version of SQL Server. So why pay for an additional Enterprise Information Management system, when just about everything you need is with your SQL Server license? Check out the “Enterprise Information Management using SSIS, MDS, and DQS Together [Tutorial]”. Along the way, I’ve learned what worked well and what you might want to avoid when using SSIS with SAP HANA.

           

In this blog series, I’ll share my experiences over the last year as a video blog, with videos hosted on YouTube. While they may not match the SAP HANA Academy YouTube videos in terms of production quality, they won’t have any marketing spin – this is developer to developer. Here is what you can expect in the series.

  1. Getting started with a free trial of Microsoft Azure. I’ll use Microsoft Azure’s 30-day free trial to create a client virtual machine for the tools. This means I have to complete this entire blog series within 30 days. This is the topic for this blog post.
            2. Creating a virtual machine with SQL Server 2014 and the Visual Studio Data Tools.
            3. Creating your first SSIS solution to import a flat file into SAP HANA.
            4. Best practices in security when creating SSIS packages.
            5. Copying data from a SQL Server database into an SAP HANA star-schema.
            6. Using SQL Server change data capture to incrementally update fact and dimension tables on SAP HANA.
            7. Copying data from SAP HANA into a SQL Server database.
            8. Using SSIS to extract data from SAP Business Suite into SAP HANA.
            9. Using SSIS to load data files in parallel to improve data loading performance.
            10. Trade-offs in using ODBC versus the SAP HANA Data Provider for Microsoft ADO.NET when loading text files.
            11. Using SQL Server Agent to run SSIS jobs.
            12. How to monitor SSIS job execution.
            13. How to do error handling during a connection failure to SAP HANA.
            14. How to do SSIS logging to debug issues and audit packages.

           

I’ll try to keep the videos to five minutes in length through creative editing of long-running operations. I’ll let you know when I edit out large chunks and how long the operation really took.

           

          Without further ado, here is the first video on the series.

Please let me know if you like this approach with the videos (with your ratings of course). If there is a particular topic you would like to see sooner, let me know. I hope you enjoy the series.

           

          Regards,
          Bill Ramos, SAP HANA Distinguished Engineer

          Follow me on:

          Twitter - http://twitter.com/billramo

          LinkedIn - https://www.linkedin.com/in/billramo

          SAP TechEd (#SAPtd) Lecture of the Week: Big Data Analytics Using SAP HANA Dynamic Tiering


          Greetings, TechEd enthusiasts!  I am pleased to re-introduce my presentation from TechEd 2014 on HANA dynamic tiering. HANA dynamic tiering is a feature that enhances HANA with an integrated disk-backed storage and processing tier – a warm store - for managing less frequently accessed data.  But wait – hasn’t SAP always promoted memory ONLY for real-time transaction processing and analytics?  Essentially, yes - SAP’s mantra is now memory FIRST.  For “hot” data, there is nothing better than HANA, which has been architected from the ground up as an all in memory solution with blazing performance.  However, not all data requires real-time access, and economies of scale are achieved through use of storage tiering technologies integrated with the HANA platform.  This broadens the HANA platform to encompass Big Data and large volume warm/cold data for a 360 degree enterprise view.
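For developers, the warm store surfaces as ordinary SQL. As a minimal sketch (not from the original talk), assuming the dynamic tiering option is installed and extended storage has been configured, with made-up table and column names:

          -- Warm data lives in the disk-backed extended store instead of in memory
          CREATE TABLE "SALES_HISTORY" (
             "ORDER_ID"   INTEGER,
             "ORDER_DATE" DATE,
             "AMOUNT"     DECIMAL(15,2)
          ) USING EXTENDED STORAGE;

          -- Queries look exactly the same as against an in-memory column table
          SELECT "ORDER_DATE", SUM("AMOUNT") FROM "SALES_HISTORY" GROUP BY "ORDER_DATE";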

           

          The first version of HANA dynamic tiering was released in the fall of 2014 as an add-on option to HANA SPS09.  My 2014 TechEd presentation gave an overview of the feature – the motivation for developing it, the use cases it is designed for, and technical details:

           

           

           

           

           

          HANA dynamic tiering in its first incarnation was targeted primarily at SAP BW on HANA customers who were looking to reduce their HANA footprint by moving less critical data out of memory and onto cheaper storage.  SAP BW integrated HANA dynamic tiering to automatically reposition potentially large persistent staging area tables and write-optimized DSOs to disk for a significantly reduced memory footprint.  The SP10 version of HANA dynamic tiering – currently under development - will bring improved HA/DR, query speed, and data aging capabilities which will make the feature attractive to HANA developers who are building applications that manage large amounts of data – much of which does not need the low latency performance of continuous in-memory residence.


          I am excited to look back to last year’s overview of HANA dynamic tiering, and also to look ahead to upcoming, improved versions of the capability.  Dynamic tiering will extend HANA’s reach into new problem spaces, and open up new opportunities for our customers.

SAP TechEd Strategy Talk: SAP's Platform-as-a-Service Strategy


In this Strategy Talk, recorded at SAP TechEd Bangalore 2015, Ashok Munirathinam, Director PaaS APJ, speaks about how SAP intends to focus the SAP HANA Cloud Platform for customers, partners, and developers to build new applications, extend on-premise applications, or extend cloud applications. In this session you can get an understanding of the platform today and the direction SAP is headed, learn about key partnerships and use cases for the platform as well as future capabilities under development, and understand the value and simplicity of cloud extensibility and how to engage with SAP in a simple way.



          Realtime Business Intelligence with Hana


The desire to enable Business Intelligence on current data has always been present, and multiple approaches have been suggested. One thing they had in common: they failed miserably because they never met expectations.

          With Hana we do have all the building blocks from a technical point of view to finally implement that vision.

           

          Requirements

           

1. Speed: A Business Intelligence solution with response times greater than a second will not be used by customers.
2. External data: Reporting within a single application is not Business Intelligence, it is dumb reporting. BI means comparing data, and the more there is to compare with, the more intelligent the findings will be.
3. Historically correct data: If past data is reported on, the result should stay the same, even if the master data changed. For example, last year's revenue per customer region should remain the same although a customer has since moved to a different location.
4. Data consistency: When data is filtered or grouped by a column, this column should have well-defined values, without duplicates that differ only in spelling. Consistency between tables is also important; for example, a sales line item without a matching sales order row would be a bad thing.

           

          The goal should obviously be all green in each of the categories

Scorecard: Speed | External Data | Historically Correct Data | Data Consistency

           

           

          What had been suggested in the past

           

To accomplish realtime Business Intelligence, two major approaches have been suggested in the past: near-realtime loads and EII.

           

The idea of a near-realtime data warehouse is simple: instead of loading the Data Warehouse once every night, load it every hour. Well, that is not very "near" realtime; so load it every 5 minutes, every minute, every second even?

This approach is feasible down to a certain frequency, but how long does a Data Warehouse delta run take? Part of the time is driven by the data volume, for sure. But assuming the data is loaded so frequently that most of the time there were no changes in the source system at all, this factor can be reduced to zero. Most of the time is usually spent finding out what has changed. One table has a timestamp-based delta, so a query reading all rows with a newer timestamp is executed. For other tables a change log/transaction log is read. And the majority of tables do not have any change indicator at all, so they are read entirely and compared with the target.

The above logic does not only take time, it costs resources as well: the source is constantly queried, "Is there a change?" "Is there a change?" "Is there a change?", for every single table.

While this approach has all the advantages of the Data Warehouse (fast query response times, no issue with adding external data, no issue preserving historical data), it is simply not feasible to build, aside from exceptional cases.

Scorecard: Speed | External Data | Historically Correct Data | Data Consistency

           

           

Another idea that became popular in the mid-2000s was to create a virtual data warehouse: you build a simple-to-understand data model via views, but data is not loaded into that model; instead it is queried from the various sources on request, hence the name Enterprise Information Integration (EII). All the complexity of the transformations is handled inside the database views instead of in an ETL tool. As the source data is queried directly, it returns current data by definition, and the entire delta logic can be spared.

This works as long as the queries against the source systems are highly independent (e.g. System 1: select quarter, sum(revenue); System 2: select quarter, business_year) and the source systems can produce the results quickly enough.

For typical Business Intelligence queries, neither condition is usually fulfilled.

Also, you often have to cut down on the amount of transformations being done, or query speed would suffer even more. A common example is standardizing search terms or finding duplicates in the master data. These things are either done during data entry, slowing down the person entering the data, or not done at all, with a negative impact on the decisions made due to wrong assumptions.

Hence, although the idea as such has its merits, it died quickly due to the bad query performance.

Scorecard: Speed | External Data | Historically Correct Data | Data Consistency

           

           

           

          The situation with Hana

           

          Data Federation - EII

From a technology point of view Hana supports EII; there it is called Smart Data Access (Data Federation). The pros and cons of EII remain the same, however. When reading from a Hana virtual table, the required data is requested from the remote database, so the overall query performance depends on the amount of data to be transferred, how long the remote database needs to produce the data, and the time to create the final query result in Hana.

And as only data that still exists can be queried, and changes in an ERP system are usually just that, changes, there is far too often no history available.
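To make this concrete, here is an illustration (not from the original post) of what Smart Data Access looks like in SQL, assuming a remote source named ERP_DB has already been created and using made-up schema and table names:

          -- Expose a remote ERP table in Hana without copying the data
          CREATE VIRTUAL TABLE "BI"."V_SALES_ORDERS"
             AT "ERP_DB"."<NULL>"."ERP"."SALES_ORDERS";

          -- The virtual table is queried like any local table; the work happens in the remote system
          SELECT COUNT(*) FROM "BI"."V_SALES_ORDERS";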

Scorecard: Speed | External Data | Historically Correct Data | Data Consistency

           

          Sidecar - S/4Hana

As a temporary workaround, until the ERP system itself runs on Hana and therefore benefits from Hana query performance, the side-by-side scenario is used. The idea is to copy the source database to Hana and keep it updated in realtime; all the queries that would take too long on the other database are executed within that Hana box instead. And once the entire ERP system runs on Hana, those queries can be kept unchanged but run against the ERP system tables directly.

So basically this is reporting on the ERP tables directly. Thanks to the raw computing power of Hana the speed is much better and this becomes feasible again, but it is not as fast as a data model optimized for queries. The reasons for this are listed in this blog post: Comparing the Data Warehouse approach with CalcViews - Overview

Another issue is again the history of changes. If a sales order entered last month gets updated and the amount reduced from 400 USD to 300 USD, the sum of revenue for last month will be different from what it was yesterday. In BW you would see the old amount of 400 USD in last month and another row with the amount -100 USD for today; hence the data is historically correct.

Scorecard: Speed | External Data | Historically Correct Data | Data Consistency

           

          Realtime Data Warehouse

One feature Hana gained with the Smart Data Integration option is the ability to combine realtime feeds with transformations. Previously this was not possible with any other tool because of the complexity: realtime had been used as a synonym for replication, meaning the source data is copied 1:1 into the target, just like in the sidecar approach above, and with it come the same downsides as EII. With Hana, however, a realtime subscription can push the data into a task instead of a table; inside the task the data is transformed and loaded into the query-optimized Data Warehouse data model.

          Therefore the advantages of realtime and Data Warehouse are combined without introducing more complexity.

1. Query speed is based on Hana, and all complex transformations are done whenever the data is changed, not every single time somebody queries the data.
2. External data is no problem; new sources can be added and harmonized with the other data easily.
3. Historically correct data is possible as well: either a change triggers an update in Hana, or the change information is added as a new row. In other words, a task might either load the target table directly, or a History Preserving transform is used prior to the target table.
4. Data consistency is no problem either. A realtime push of the source data preserves the source transaction, so if a sales line item is added and hence the sales order's total amount updated, both are done in one transaction in the source and in Hana; the Smart Data Integration feature takes care of that. Also, all transforms to standardize the data are available. Their execution takes a while, but that does not matter, as they process only the changed data, not all of it, and only once, not every time somebody queries the data.
Scorecard: Speed | External Data | Historically Correct Data | Data Consistency

           

S/4Hana with external data

Using the above technologies, federation and realtime transformations, external data can be added to the S/4Hana database in a different schema. This allows picking the proper technology for each case: for example, it was said that federation works only when the amount of data returned is small. Very often the remote dataset is tiny anyhow, so federation is perfect. And if it is not, the data can be brought into Hana in realtime, either by simply copying the data, and hence having to do all the harmonization with the ERP data at query time, or, even better, by pushing the realtime changes into a task object which does all the harmonization already. In that case the resulting view is as simple as a union-all of two identical table structures, both already in Hana.

While this approach allows for all the flexibility on the external data, the local ERP data has the same issues as before: missing history, data consistency, and not optimal speed due to the number of transformations done in the view.

Theoretically, realtime replication from the ERP system into another schema of the very same Hana database could be enabled to preserve the history, but that will not be liked a lot.

Scorecard: Speed | External Data | Historically Correct Data | Data Consistency

          Configure HTTPS for HANA XS on SP9


          Hello All,

I recently had to configure my server to use HTTPS with a signed certificate (signed by a certificate authority, not only self-signed). It took me quite a while because all the tutorials and manuals did not seem to work for me on SP9. After lots of research I finally made it, and I would like to share my findings with you so that you can save days of research. Please note that I’m working on the SAP internal cloud platform, so the steps below might need adjusting depending on your server location and configuration.

           

I found this post very helpful; it explains many things in detail and is a good place to start:

          http://scn.sap.com/community/developer-center/hana/blog/2013/11/02/outbound-https-with-hana-xs-part-1--set-up-your-hana-box-to-use-ssltls

           

           

The difference in the SP9 version is that it is configured by default to use HTTPS with a self-signed certificate, so you and other users will get red warning messages all over, warning that the connection is not safe. In SP9 you do not need to import sapgenpse or libsapcrypto.so because they are already there, and you do not need to configure the web dispatcher to use SSL and those libraries because that is already done.

           

          My system is internal SAP server hosted on SAP cloud platform. If you are using different platform/server the certificate request might be different for you.

          In my commands below I’m using place holders. Please replace them with the data of your server:

          [host_name] – in my case: mo123456

          [host_url] – in my case: mo123456.mo.sap.corp

          [instance_number] – in my case 00

[instance] – in my case MV1

           

First upload SAPNetCA_G2.cer to /usr/sap/[instance]/HDB[instance_number]/[host_name]/sec

I use the WinSCP tool for this.

If you are working on an SAP internal system you can find certificates here:

          https://security.wdf.sap.corp/SAPNetCA_G2/

If you use this link to sign certificates, please make sure that you set the response encoding to PKCS#7, because X.509 did not work for me.

           

Log on to your system via PuTTY as the admin user, [instance]adm, for example mv1adm.

Define two variables to shorten the commands later:

          export SECUDIR=/usr/sap/[instance]/HDB[instance_number]/[host_name]/sec

This is the folder that holds the signed certificates and where the request files will be placed.

           

export TEMPEXELIB=/usr/sap/[instance]/exe/linuxx86_64/HDB_1.00.090.00.1416514886_1804508

This is the location of sapgenpse. As of SP9 you don’t need to copy these files manually as in previous tutorials; you can run the program directly from this directory. Please note that the location might be slightly different in your case depending on the HANA version: check the folder /usr/sap/[instance]/exe/linuxx86_64/ for subfolders. In my case it’s HDB_1.00.090.00.1416514886_1804508. Please check that sapgenpse is in that folder.

           

SP9 comes with self-signed certificates (at least mine did), so you need to delete them before you import the new certificates signed by your certificate authority. Please delete the following files from the security folder /usr/sap/[instance]/HDB[instance_number]/[host_name]/sec: SAPSSLS.pse, sapsrv.pse, sapcli.pse

           

          Run sapgenpse to generate request:

          $TEMPEXELIB/sapgenpse get_pse -p $SECUDIR/SAPSSLS.pse -x '' -r $SECUDIR/SAPSSLS.req "CN=[host_url], OU=00, O=SAP, C=DE"

           

It’s important that the request and PSE files are named SAPSSLS; other tutorials use different names, and that did not work for me. The web dispatcher is already configured to look for the certificate under the SAPSSLS name, so it's easier to just replace those files.

           

          View the request:

          cat $SECUDIR/SAPSSLS.req

          Copy the text, sign it at your certificate authority, and copy the response text.

          Create new file for the response:

          vi $SECUDIR/SAPSSLS.cert

          Press “i” to start text editing.

Paste the response into the command line (in PuTTY it’s just a right mouse click).

          Press escape key and type:

          :wq

          Press enter/return key.

Alternatively, you can copy the request text into a text file on your local PC and upload it to the server as $SECUDIR/SAPSSLS.cert. However, I read in a couple of other posts that there might be problems with the way Windows editors encode the newline character, so it’s recommended to create the file under Linux.

          Import the certificate:

          $TEMPEXELIB/sapgenpse import_own_cert -c $SECUDIR/SAPSSLS.cert -p $SECUDIR/SAPSSLS.pse -x '' -r $SECUDIR/SAPNetCA_G2.cer

Check the output message to confirm the operation was successful.

           

          Create credentials for the file:

          $TEMPEXELIB/sapgenpse seclogin -p $SECUDIR/SAPSSLS.pse -x '' -O [instance]adm

          Make sure that only admin has access to this file:

          chmod 600 $SECUDIR/cred_v2

           

          Follow similar steps for sapsrv.

           

          $TEMPEXELIB/sapgenpse get_pse -p $SECUDIR/sapsrv.pse -x '' -r $SECUDIR/sapsrv.req "CN=[host_url], OU=00, O=SAP, C=DE"

          cat $SECUDIR/sapsrv.req

          Copy the text, sign it at your certificate authority, and copy the response text.

          vi $SECUDIR/sapsrv.cert

          Press “i”.

          Paste response text.

          Press esc, type  :wq

           

          $TEMPEXELIB/sapgenpse import_own_cert -c $SECUDIR/sapsrv.cert -p $SECUDIR/sapsrv.pse -r $SECUDIR/SAPNetCA_G2.cer

           

          $TEMPEXELIB/sapgenpse seclogin -p $SECUDIR/sapsrv.pse -x '' -O [instance]adm

           

          Create request for sapcli

          $TEMPEXELIB/sapgenpse gen_pse -p $SECUDIR/sapcli.pse -x '' "CN=[host_url], OU=00, O=SAP, C=DE"

In the post referenced above this request was not signed, so I just left it like this.

           

           

          There is no need for additional web dispatcher configuration.

          Afterwards it’s important to restart web dispatcher; I personally prefer to restart the whole server.

You can check the link https://[host_url]:43[instance_number]/sap/hana/xs/admin/
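With the placeholder values from above, that would be, for example: https://mo123456.mo.sap.corp:4300/sap/hana/xs/admin/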

          If the certificate was imported successfully you should not see any red warning messages.

           

          Best regards and good luck,

          Marcin

          Internet of Things Foosball - Part 1


With a long history of playing foosball inside the walls of the company I work for, we needed something that could take our game to a new level. Over the years there has been continuous disagreement among the players about who was the best and who had the highest winning rate. We wanted this sorted out.

           

Out of these disagreements, ideas kept evolving about what could be done to sort things out and how we could achieve it. Every day and every game played, the ideas kept stacking up, whether they were simple or crazy.

The company had its own HANA box for in-house development, but we were lacking ideas on how to create a use case/business case to experiment with the power of SAP HANA.

           

So when we visited SAP TechEd in Berlin (2014) we saw the light. There was this "new" concept of how to create big data: we saw it in the keynote, and there were sessions introducing the Internet of Things. After visiting the Hackers Lounge we were sure that these two, foosball and HANA, could bring out the best in each other.

           

          Now we had the vision, but nothing happened.

           

We were lacking time. The developers were busy enough with other projects for our customers, and we knew this could probably take days to implement. We could not pull developers from billed projects, as that would lower our income. So we had a new headache.

           

It was then very convenient that we got an email from Reykjavik University. They were asking companies in Iceland to submit proposals for final projects for undergraduates in the BSc in Computer Science. We decided to submit our project: Foosball IoT.

           

We decided to present this idea with a simple approach. The idea was this:

• Capture the "game" with sensors using an ARM-based computer (Raspberry Pi / Arduino).
• Everything sensed via the sensors should be pushed into HANA, because we want as much data as possible.
• Use the HANA XS platform for the application API.
• Use SAPUI5 as the UI.
• The UI should allow players to create a game, follow the score and browse through a variety of statistics.

           

          Next steps, the implementation and final result in part 2.

          Testing UI5 Apps


          Hi All,

           

          I've been developing some apps in SAP HANA for desktop and mobile viewports. Here's a tip for testing those apps without using the mobile phone simulator to check the rendering for mobile screens:

           

          In the <head> tag of the html file there is a block of code to initialise sap.ui libraries, themes and others. It looks like this:

           

<script id='sap-ui-bootstrap' type='text/javascript'
        src='https://sapui5.netweaver.ondemand.com/resources/sap-ui-core.js'
        data-sap-ui-theme='sap_bluecrystal'
        data-sap-ui-libs='sap.m'></script>

           

In order to enable testing for mobile screens, the following attribute should be added to the bootstrap <script> tag, before its closing bracket:

           

          data-sap-ui-xx-fakeOS='ios'

           

This attribute simulates an iPhone/iPad viewport.
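Putting the two pieces together, the bootstrap tag then looks like this (same URL and settings as above, just with the extra attribute added):

          <script id='sap-ui-bootstrap' type='text/javascript'
                  src='https://sapui5.netweaver.ondemand.com/resources/sap-ui-core.js'
                  data-sap-ui-theme='sap_bluecrystal'
                  data-sap-ui-libs='sap.m'
                  data-sap-ui-xx-fakeOS='ios'></script>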

           

          Hope this helps all SAP HANA starters.

           

          Regards,

           

          Alejandro Fonseca

          Twitter: @MarioAFC

          Part 2 - Creating a client VM on Azure with Visual Studio 2013 & SQL Server Data Tools for BI


Ok, so the title doesn't include SAP or HANA, but I'm getting there. In this video blog, I will walk you through the steps to create an Azure virtual machine with the free Visual Studio 2013 Community edition pre-installed. I then go through the process of downloading and installing the SQL Server Data Tools for BI. The video is almost 17 minutes in length, but the overall process took about 1 hour and 10 minutes. To go back to the index for the blog series, check out Part 1 – Using #SQLServer 2014 Integration Services (#SSIS) with #SAPHANA.

           

          NOTE: SSIS is not yet certified by the SAP ICC group. However, the content of this blog series is based on the certification criteria.

           

          On with the show!

           

Check me out at the HDE blog area at: The SAP HDE blog

          Follow me on twitter at:  @billramo
