Channel: SCN : Blog List - SAP HANA Developer Center

SAP HANA Run and Deployment options


When I first came in contact with SAP HANA I was a bit lost because I didn't understand the different run and deployment options. In the past you bought hardware, purchased an SAP license and downloaded the software from the marketplace to install it on your hardware. Nowadays things have changed: you have several other options. To give my colleagues and me an overview I created the following simple presentation. Maybe it will help you, too.

 

Note: This is neither a complete overview nor an official list of all options. It's just my personal view with my current knowledge in this field. Please add comments if you have suggestions or find errors.

Folie1.png

 

Folie2.png

 

Folie3.png

 

Folie4.png

 

Folie5.png

 

Here are the links from this presentation:

8 Hardware vendors

SAP Cloud Appliance Library

SAP HANA Cloud Platform

 

If you want to view the entire presentation you can do so at slideshare.


HADOOP HDFS Explorer built with HANA XS and SAPUI5


For those of you who have started to explore HADOOP, you might be familiar with HUE (Hadoop User Interface) or the Linux command line for moving files to and from the HADOOP Distributed File System (HDFS).

 

As a bit of a novelty I thought it'd be interesting to build a prototype of an HDFS Explorer, using HANA XS & SAPUI5.

 

 

HUE File Browser                                                            HANA: HADOOP HDFS Explorer                         

 

  

 

 

HUE is primarily built using PYTHON with a number of PYTHON plugins, such as DJANGO and MAKO.

 

My HANA HDFS Explorer is built using HANA XS, SAPUI5 (Horizontal Splitter, Tree & Table controls), a sprinkling of custom Javascript & the HADOOP WebHDFS REST API (accessible via xshttpdest). It's a very rough prototype, and is not intended to reproduce all the functionality of the HUE file browser.

 

NOTE: by default the webHDFS REST service typically runs on ports 50070 & 50075 on your HADOOP cluster. I needed to redirect these to 50001 & 50013 respectively to enable xshttpdest to work. For SAP HANA Cloud Platform the valid ports to use are documented here: SAP HANA Cloud Platform
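For reference, a minimal .xshttpdest file for the redirected NameNode port might look like the following sketch. The host name and settings are placeholder assumptions, not taken from the original project:

```
host = "hadoop.example.com";
port = 50001;
description = "HADOOP WebHDFS NameNode";
useSSL = false;
authType = none;
timeout = 30000;
```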

 

 

Here are a few screenshots comparing the original HUE version and the new HANA version.

Original HUE:


HANA XS Version (with Download option):

 

Select the file and then click the Download button:

 

 

The dataflow of HANA XS version is:

 

 

The HADOOP webHDFS REST service makes use of different URL parameters [operations (op)] for interacting with HDFS.

I only used the following operations in this example:

 

FILESTATUS          Which acts like 'dir' or 'ls' commands on a specified HDFS directory

OPEN                    Which acts like an ftp 'get' command.

          

NOTE: OPEN is called twice: first on the HADOOP NameNode to find the DataNode where the file is stored, then on that DataNode to download the file.
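The two-step flow can be sketched as follows. Hostnames and ports are placeholders, and the helper function is illustrative rather than part of the published library:

```javascript
// Sketch of the two-step WebHDFS OPEN flow.
// Step 1: ask the NameNode for the file; it answers with an HTTP 307
// redirect whose Location header points at the DataNode holding the data.
// Step 2: follow that redirect to download the file itself.

function buildOpenUrl(host, port, hdfsPath) {
  // WebHDFS URLs always start with the /webhdfs/v1 prefix
  return 'http://' + host + ':' + port + '/webhdfs/v1' + hdfsPath + '?op=OPEN';
}

var nameNodeUrl = buildOpenUrl('namenode.example.com', 50070, '/user/admin/report.txt');
// -> http://namenode.example.com:50070/webhdfs/v1/user/admin/report.txt?op=OPEN
// The 307 response's Location header then gives the DataNode URL, e.g.
// http://datanode1.example.com:50075/webhdfs/v1/user/admin/report.txt?op=OPEN&offset=0
```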

 

The full code is available to download here: https://github.com/AronMacDonald/HanaHdfsExplorer

Detecting World Cup GOAL using Twitter and SAP HANA


I am a software developer who wants to learn to develop applications on HANA. Yes, I have done HANA-related courses at openSAP, and I have gone through several videos provided by the SAP HANA Academy; they are excellent resources for getting familiar with HANA. However, I found myself wanting to learn more. Rather than going through HANA guides without a useful context, I challenged myself to develop a scenario/application that utilises HANA.

 

I came across this blog describing a strategy to detect World Cup goals using Twitter data, so I thought to myself: why not build this on HANA, since all major components (except the Raspberry Pi) are available and can be easily configured in HANA. The major components are:

  1. Connection with Twitter API using HANA XS Outbound  Connection
  2. Full-text indexes for text analysis to find GOAL
  3. Exponential smoothing algorithm to produce smoothed version of tweets data
  4. SAPUI5 for presentation

After that, I will compare the result with actual events and I will describe some improvements that can be made.

My source code is available here.

 

Requirements


- SAP HANA SPS6 or above is needed to make an XS outbound connection. I am using the SAP HANA developer edition rev 72 available in SAP CAL. To get your own developer edition, refer to http://scn.sap.com/docs/DOC-28294

- Twitter application created in your twitter account. Refer to the following link for more information http://scn.sap.com/docs/DOC-49203

 

 

1. Connection with Twitter API – HANA XS Outbound  Connection


To get data from Twitter into HANA, I need an XSJS outbound connection, an XSJS service to store tweets to the database, and an XSJS scheduler to run the service on a regular basis.

 

XSJS Outbound connection - twitter_connection.xshttpdest

outbound_connection.jpg

XSJS service - TwitterCollector.xsjs

In the service, I want to collect tweets that contain “#BELvsUSA”, which is the hashtag suggested by Twitter for the Belgium vs USA soccer game.

twitter_1.jpg

twitter_2.jpg

If you are familiar with the Twitter API, you may be wondering why my code is not using the HTTPS protocol, since Twitter only allows HTTPS. That is because I don’t have access to download the SAP cryptographic library (sapgenpse), which is the prerequisite for making an HTTPS outbound connection with HANA XS (reference). So I created a PHP service to act as an adapter to Twitter. I would have made an HTTPS connection directly to the Twitter API if I had access to the library.

 

 

2. Full-text indexes for text analysis


Once I have the tweet data, I resort to the full-text indexes that I have created on it for text analysis. I use LINGANALYSIS_FULL to break the tweets down into individual words. For example: user @juanvofficial tweeted “Well deserved goal by the Belgium team #BELvsUSA”. The index will provide the following information:

full_index.jpg

Why do I need to break the tweet down into words? It makes it easier for me to exclude text that does not indicate an actual goal. For example, @SmileyElie tweeted “#USA has the best goalkeeper ever!!! So impressed!!! #BELvsUSA”. The tweet contains “goal” but does not indicate that one of the teams has scored.
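To make the idea concrete, here is a minimal JavaScript sketch (names are illustrative, not from the original code) of the token-level check this enables: once the full-text index has split a tweet into tokens, goal detection can require the standalone token "goal", so "goalkeeper" no longer counts as a goal mention.

```javascript
// Check whether a tokenized tweet contains "goal" as a standalone word.
function mentionsGoal(tokens) {
  return tokens.some(function (t) { return t.toLowerCase() === 'goal'; });
}

mentionsGoal(['Well', 'deserved', 'goal', 'by', 'the', 'Belgium', 'team']);  // true
mentionsGoal(['USA', 'has', 'the', 'best', 'goalkeeper', 'ever']);           // false
```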

 

 

3. Exponential smoothing algorithm


The metric I chose to indicate a goal is the percentage of tweets mentioning a goal:

 

     No of tweets mentioning goal per minute * 100 / No of tweets per minute

 

After we get all tweets that mention GOAL, I apply an exponential smoothing algorithm, which is commonly used to remove noise from data. Let me first show you a line chart of the percentage of tweets mentioning a goal per minute before applying the algorithm.

 

percent_raw.jpg

I chose the single exponential smoothing algorithm, which is available in the SAP PAL library. After applying the algorithm, the result is shown as the green line, which is smoother than the original graph.
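For reference, single exponential smoothing can be sketched in a few lines of JavaScript. PAL's implementation exposes the smoothing factor alpha as a configurable parameter; the sketch below is only an illustration of the underlying recurrence, not PAL's code:

```javascript
// S_0 = x_0;  S_t = alpha * x_t + (1 - alpha) * S_{t-1}
function singleExpSmoothing(series, alpha) {
  var smoothed = [series[0]];  // seed with the first observation
  for (var i = 1; i < series.length; i++) {
    smoothed.push(alpha * series[i] + (1 - alpha) * smoothed[i - 1]);
  }
  return smoothed;
}

singleExpSmoothing([0, 10, 10, 10], 0.5);  // [0, 5, 7.5, 8.75]
```

Larger alpha values track the raw series more closely; smaller values smooth more aggressively.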

 

percent_combination.jpg

 

4. SAPUI5 for presentation


Oh well, you can see that the graphs above are SAPUI5 graphs.

 

 

Result


Now let’s compare our graphs with the actual events. There were 3 goals during the Belgium vs USA game in the round of 16 of World Cup 2014.

 

actual_result.jpg

 

Translating the timeline onto our graph, the goals happened at the bold green points in the graph below; after a goal happens, the percentage of tweets mentioning a goal increases to more than approximately 8%.

 

percent_smooth.jpg

The first goal by Belgium in extra time definitely got people excited, because a lot of people had been expecting a goal. The percentage went up again because of the second goal by Belgium, but there weren’t as many tweets as before. Twitter was busy again after the first goal by USA in the second half of extra time, which definitely raised hopes that USA would turn the game around. Unfortunately, USA could not score another goal and lost the game.

 

 

Improvements


1. Twitter API

I collect the data from search/tweets, which is a REST API source that provides relevant tweets from a limited corpus of recent tweets. To get more relevant search results, the Streaming API should be used, but it requires a more complex solution. However, for the purpose of this exercise I did not want to put too much effort into getting the tweets, and as you can see from the graph above, the tweets collected from the REST API give sufficient information to indicate when a goal happens.

 

2. Text analysis

Improvements can be made in the analysis, especially if you are working with data in different languages, e.g. “golazo” means goal in Spanish, and there could be many variations in other languages.

 

There is also a limitation with the text analysis I chose. For example, @Guevara_Caro tweeted “Were it not for Howard and Beasley, #USA would be losing by a couple of goals! #BEL has a great team. Let's pick it up!! #GoUSA #BELvsUSA”. Obviously the tweet contains the word goal, but the sentence does not indicate that a goal has happened. Regular expressions (which are available in R & other programming languages, but not in HANA SQL) should be used to capture goals in tweets more accurately.
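As a sketch of what such a regular expression might look like (my own illustration, not from the blog's code), the pattern below matches "goal" and stretched spellings like "Goooal" as standalone words, while ignoring "goalkeeper" and the plural "goals" used in hypothetical statements:

```javascript
// \b...\b requires word boundaries, so "goalkeeper" and "goals" don't match;
// the + quantifiers allow stretched spellings like "GOOOAL".
var goalPattern = /\bg+o+a+l+\b/i;

goalPattern.test('GOOOAL!!! #BELvsUSA');         // true
goalPattern.test('best goalkeeper ever');        // false
goalPattern.test('losing by a couple of goals'); // false
```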

 

 

Conclusion


Various components available in HANA make it easy for me to obtain unstructured data, perform analysis on it, and present the results using SAPUI5. In this case, goals can be identified using Twitter data. The solution is far from perfect, and the threshold of 8% may not indicate a goal in other World Cup matches, but I am hoping to gain more analytical knowledge and explore more interesting scenarios with SAP HANA in the near future.

Downloading Query Results to HADOOP HDFS


Most people using HANA XS by now are probably familiar with the SHINE demo content:

http://help.sap.com/hana/SAP_HANA_Interactive_Education_SHINE_en.pdf

 

It has a nice example of downloading the results of a query to EXCEL.

 

With a small tweak I've taken the idea one step further, enabling the results to be downloaded as a TAB-delimited text file onto the HADOOP Distributed File System (HDFS), using the HADOOP WebHDFS REST API CREATE operation.


Storing reports on HADOOP rather than a traditional file system might make these user-driven snapshots more accessible for future querying, perhaps even being consumed back into HANA via Smart Data Access.

 

Here is the new button I added to the SHINE PO Worklist example:

 

Click the 'Download to HADOOP' button.

 


Now let's see if a file was created on the HADOOP file system.

SUCCESS!!!!

 

[checked using HADOOP HDFS Explorer built with HANA XS and SAPUI5]

 

[checked using standard Hadoop User Interface HUE]

 

 

The webHDFS library I've built for this example is available at https://github.com/AronMacDonald/HanaHdfsExplorer

 

I also made the following small changes/additions to the SHINE Code:

 

poWorklistQuery.js

//Changes from ~ line 260 onwards  [After downloadExcel() ]
.
.
/****************************************************************************/
/****************** START   HADOOP MOD    ***********************************/
function downloadHADOOP() {
    //import a library for doing PUT & GET to webHDFS
    $.import("s0000716522trial.trial.HadoopHbase1.webHDFS", "webHDFS");
    var webHDFS = $.s0000716522trial.trial.HadoopHbase1.webHDFS.webHDFS; //$.HanaHbase.Hbase;
    var body = '';
    try {
        var query = 'SELECT TOP 25000 \"PurchaseOrderId\", \"PartnerId\", \"CompanyName\", \"CreatedByLoginName\", \"CreatedAt\", \"GrossAmount\" '
            + 'FROM \"_SYS_BIC\".\"s0000716522trial.trial.shine.data::purchaseOrderHeaderExt\" order by \"PurchaseOrderId\"';
        var conn = $.db.getConnection();
        var pstmt = conn.prepareStatement(query);
        var rs = pstmt.executeQuery();
        body = MESSAGES.getMessage('SEPM_POWRK', '002') + "\t" + // Purchase Order ID
               MESSAGES.getMessage('SEPM_POWRK', '003') + "\t" + // Partner ID
               MESSAGES.getMessage('SEPM_POWRK', '001') + "\t" + // Company Name
               MESSAGES.getMessage('SEPM_POWRK', '004') + "\t" + // Employee Responsible
               MESSAGES.getMessage('SEPM_POWRK', '005') + "\t" + // Created At
               MESSAGES.getMessage('SEPM_POWRK', '006') + "\n";  // Gross Amount
        while (rs.next()) {
            body += rs.getNString(1) + "\t" + rs.getNString(2) + "\t"
                  + rs.getNString(3) + "\t" + rs.getNString(4) + "\t"
                  + rs.getDate(5) + "\t" + rs.getDecimal(6) + "\n";
        }
        rs.close();
        pstmt.close();
    } catch (e) {
        $.response.status = $.net.http.INTERNAL_SERVER_ERROR;
        $.response.setBody(e.message);
        return;
    }
    //Path & Filename to save on HADOOP HDFS
    var pathFile = '/user/admin/hanaDownload/POList_';
    // append timestamp
    pathFile += new Date(Date.now()).toISOString()
                                    .replace("T", "")
                                    .replace(/\./g, '')
                                    .replace(/\-/g, '')
                                    .replace(/\:/g, '')
                                    .substring(0, 14) + '.txt';
    var webHDFSResponse = webHDFS.PutCreate(escape(pathFile), body);
    $.response.contentType = "application/json";
    $.response.setBody(JSON.stringify(webHDFSResponse));
    $.response.status = $.net.http.OK;
}
/****************** STOP    HADOOP MOD    ***********************************/
/****************************************************************************/
var aCmd = $.request.parameters.get('cmd');
switch (aCmd) {
case "filter":  getFilter();  break;
case "getTotalOrders":  getTotalOrders();  break;
case "Excel":  downloadExcel();  break;
/****************************************************************************/
/****************** START   HADOOP MOD    ***********************************/
case "HADOOP":  downloadHADOOP();  break;
/****************** STOP    HADOOP MOD    ***********************************/
/****************************************************************************/
case "getSessionInfo":  SESSIONINFO.fillSessionInfo();  break;
default:  $.response.status = $.net.http.INTERNAL_SERVER_ERROR;  $.response.setBody(MESSAGES.getMessage('SEPM_ADMIN', '002', aCmd));
}


Table.controller.js

//Changes from ~ line 37 onward
.
.
.
//Excel Download
if (oEvent.getSource().getId()=="btnExcel"){
    $.download('../../../services/poWorklistQuery.xsjs','cmd=Excel','GET');
    return;
}
/****************************************************************************/
/****************** START    HADOOP DOWNLOAD   ******************************/
//Download to HADOOP
if (oEvent.getSource().getId()=="btnHADOOP"){
    jQuery.ajax({
        url: "../../../services/poWorklistQuery.xsjs?cmd=HADOOP",
        method: 'GET',
        success: function(collection) {
            sap.ui.commons.MessageBox.show('Download to HADOOP Complete');
            return;
        },
        error: function(xhr, textStatus, errorThrown) { return; }
    });
    return;
}
/****************** STOP    HADOOP MOD    ***********************************/
/****************************************************************************/
//Check for selected item for all other events
var oTable = sap.ui.getCore().byId("poTable");
var data = oTable.getModel();
.
.
.

 

Table.view.js

//Changes from ~ line 134 onward
.
.



oToolbar1.addItem(oButton1);
/****************** START    HADOOP DOWNLOAD   ******************************/
/****************************************************************************/
var oButton2 = new sap.ui.commons.Button("btnHADOOP", {
    text : 'Download to HADOOP',
    tooltip : 'Download to HADOOP',
    press : function(oEvent){
        oController.onTBPress(oEvent, oController);
    }
});



oToolbar1.addItem(oButton2);
/****************** STOP    HADOOP MOD    ***********************************/
/****************************************************************************/
oTable.setToolbar(oToolbar1);

 

If you give it a try please let me know how you get on.

Preventing SQL Injection in HANA


Most developers creating HANA calculation views or stored procedures use input parameters in SQL queries to retrieve data from HANA tables. It is important to sanitize input parameters before they are used, so as to prevent SQL injection in the code. A safe way to prevent SQL injection is to create a utility procedure that does this for you; it can be reused in different procedures as needed.

 

Example:

Start by creating a new stored procedure named GET_QUERY_QUOTE, with an input parameter IV_FILTER_STRING NVARCHAR(256) and an output table parameter QUOTE_T with a column QUOTE NVARCHAR(256).

 

 

 

/********* Begin Procedure Script ************/

BEGIN

 

 

QUOTE_T = SELECT REPLACE(:IV_FILTER_STRING, '''', '''''') AS QUOTE FROM DUMMY;

 

 

END;

/********* End Procedure Script ************/

 

 

Capture.JPG

 

This procedure will prevent intentional SQL injection of code into your procedure.
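For illustration, the quote-doubling the procedure performs can be sketched in JavaScript (the function name is mine, not from the post): every single quote in the input is doubled, so the value can no longer terminate a string literal in dynamically assembled SQL.

```javascript
// Double every single quote, mirroring REPLACE(:IV_FILTER_STRING, '''', '''''')
function escapeSqlLiteral(value) {
  return value.replace(/'/g, "''");
}

escapeSqlLiteral("O'Brien");             // "O''Brien"
escapeSqlLiteral("x'; DROP TABLE T;--"); // "x''; DROP TABLE T;--"
```

Where the platform supports it, prepared statements with bound parameters remain the first line of defense; quote-doubling is a fallback for SQL that must be assembled dynamically.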

Real Time GPS Tracker using SAP HANA XS, SAPUI5, Cordova Geolocation Plugin and OData


Hello to all,

 

I would like to share how to build a simple Android GPS tracker app powered by SAP HANA, using XS, SAPUI5 with the Cordova Geolocation plugin, and OData. The idea is to send the latitude and longitude position from the Android phone to SAP HANA. The backend web application will then show the current location on a Google Map.

 

The app screenshots  will look like this:

 

  • Android Client
    10.jpg

 

  • Backend Web App
    5.jpg

 

Required Components

  • Android phone with GPS and 3G/4G enabled. I am using Samsung Galaxy S4 for the demo.
  • SAP HANA access. I am using the 30-day HANA version. Refer to http://scn.sap.com/docs/DOC-28191
  • SAPUI5 with Cordova Geolocation plugin.

 

6.jpg

 

The Android Client

 

This client will send the current position to SAP HANA via an OData service.

 

  • Create SAPUI5 index.html.
    In the onSuccess function, we'll get the latitude & longitude position, pass these values to the oEntry object, and create the OData request (POST method).

    function onSuccess(position) {
        var element = document.getElementById('geolocation');
        element.innerHTML = 'Latitude: ' + position.coords.latitude + '<br />' +
                            'Longitude: ' + position.coords.longitude + '<br />' +
                            '<hr />' + element.innerHTML;
        jQuery.sap.require("sap.ui.model.odata.datajs");
        var sUrl = "hana2.vm.cld.sr:8000/tracker/info/service/GPSTracker.xsodata";
        var oModel = new sap.ui.model.odata.ODataModel(sUrl, true, "userid", "password");
        var oEntry = {};
        oEntry.STIMESTAMP = '/Date(0000000000000)/';
        oEntry.SDATE      = '/Date(0000000000000)/';
        oEntry.LAT  = position.coords.latitude.toString();
        oEntry.LONG = position.coords.longitude.toString();
        // oData Create - the success/error callbacks fire asynchronously,
        // so report the outcome inside them rather than right after the call
        oModel.create('/GPSTracker', oEntry, null,
            function() { alert("Data Saved!"); },
            function() { alert("Create failed"); });
    }
    2.jpg

  • Create the Cordova Geolocation plugin
    Create a cordova project in C:\programs:

    - mkdir c:\programs

    - cd c:\programs

    - cordova create c:\programs\Geolocation com.enterprisemobility.Geolocation Geolocation

    - cd geolocation

    - cordova platform add android

    - cordova plugin add org.apache.cordova.geolocation

    Copy the index.html from the SAPUI5 project that we created in the previous step to c:\programs\geolocation\www and build the project:

    cordova build


    If the build succeeds, you will get the debug apk in C:\programs\Geolocation\platforms\android\ant-build. Install this apk on your device.
    4.jpg

The HANA XS


The key components in the HANA XS:

  • The HANA table GPSTRACKER for storing the latitude and longitude position.

    table.schemaName = "GPSTRACKER";

    table.tableType = COLUMNSTORE;

    table.description = "GPSTRACKER";

    table.loggingType = NOLOGGING;

    table.columns = [

    {name = "STIMESTAMP"; sqlType = TIMESTAMP;  nullable = true;},

    {name = "SDATE"; sqlType = DATE; nullable = true;},

    {name = "LAT"; sqlType = NVARCHAR; length = 50; nullable = true;},

    {name = "LONG"; sqlType = NVARCHAR; length = 50; nullable = true;}

    ];

    table.primaryKey.pkcolumns = ["STIMESTAMP"];

  • An OData service GPSTracker.xsodata with custom insert procedure.

    service namespace "tracker.info.service" {

    "GPSTRACKER"."tracker.info.init::GPSTracker" as "GPSTracker"

    create using "tracker.info.service::InsertRowGPSTracker";

    }

  • An Insert procedure InsertRowGPSTracker.procedure for inserting the latitude and longitude position to GPSTRACKER table.

    insert into "GPSTRACKER"."tracker.info.init::GPSTracker"

    values (current_timestamp, current_date,  lv_LAT,lv_LONG);


The Back-end Web App with Google Map


The back-end web app displays the timestamp, latitude & longitude coordinates in a table. Users can select an item to display the exact location on the Google map. The "Refresh Data" button updates the table with the new location received from the client.

 

In the GPS.view.js, we call the OData GPSTracker.xsodata/GPSTracker and bind it.

 

oController.oModel = new sap.ui.model.odata.ODataModel("hana2.vm.cld.sr:8000/tracker/info/service/GPSTracker.xsodata",true, "userid", "password");
vCol = "STIMESTAMP";
oControl = new sap.ui.commons.TextField({editable:false}).bindProperty("value",vCol);
oTable.addColumn(new sap.ui.table.Column({label:new sap.ui.commons.Label({text: "Timestamp" }),
template: oControl,
sortProperty: vCol,
filterProperty: vCol
}));
vCol = "LAT";
oControl = new sap.ui.commons.TextField({editable:false}).bindProperty("value",vCol);
oTable.addColumn(new sap.ui.table.Column({label:new sap.ui.commons.Label({text: "Latitude" }),
template: oControl,
sortProperty: vCol,
filterProperty: vCol
}));
vCol = "LONG";
oControl = new sap.ui.commons.TextField({editable:false}).bindProperty("value",vCol);
oTable.addColumn(new sap.ui.table.Column({label:new sap.ui.commons.Label({text: "Longitude" }),
template: oControl,
sortProperty: vCol,
filterProperty: vCol
}));
oTable.sort(oTable.getColumns()[0]);
// Google Map
oTable.attachRowSelectionChange(function(oEvent) {
var currentRowContext = oEvent.getParameter("rowContext");
var lat = oController.oModel.getProperty("LAT", currentRowContext);
var lang = oController.oModel.getProperty("LONG", currentRowContext);
oController.actSearch(lat, lang);
});
oTable.setModel(oController.oModel);
oTable.bindRows("/GPSTracker");
oPanel.addContent(oTable);

 

The oTable.attachRowSelectionChange function calls the Google Map actSearch function with the parameters lat and lang.

 

actSearch: function (lat, lang) {
this.geocoder = new google.maps.Geocoder();
var latlng = new google.maps.LatLng(lat, lang);
var mapOptions = {
center: latlng,
zoom: 16,
mapTypeId: google.maps.MapTypeId.ROADMAP
};
this.map = new google.maps.Map($('#map_canvas').get(0), mapOptions);
var marker = new google.maps.Marker({
position: latlng,
map: this.map,
title: 'You are here'
});
}

 

Source Code


You can find the complete source code on GitHub: ferrygun/SAPHanaGPSTracker · GitHub

The final structure is shown in the screenshots below:

 

8.jpg9.jpg

 

 

Run the App

 

After you have installed the apk on your device, enable GPS and tap the app to run it.

11.jpg
Upon receiving a GPS signal, the app will display the longitude and latitude position and send this information to the HANA server. You can stop it by clicking the Clear Watch button.
12.jpg

 

To run the backend web app, open the link http://hana2.vm.cld.sr:8000/tracker/info/index.html in a Chrome browser. Replace "hana2.vm.cld.sr" with your HANA server's address. You will see the current position received from the client.

7.jpg

 

 

Thank you for reading my blog and I hope you also enjoy developing the SAP HANA XS application.

Please feel free to drop me any feedback/comment.


P.S: this blog was inspired by http://scn.sap.com/community/developer-center/hana/blog/2013/11/17/latest-share-price-google-finance-odata-crud

Simple Door Monitoring System with HANA XS and Raspberry Pi


Intro

 

I have built a simple door sensor with SAP HANA and a Raspberry Pi to monitor door-opening events. The wireless door sensor is a magnetic gap sensor that sends an RF 433 MHz signal to the receiver when someone opens the door. The Raspberry Pi, with the receiver within range, listens for the incoming signal from the sensor. Once the signal is detected, the Raspberry Pi sends an OData HTTP POST request to the HANA server. You can monitor the event status from the web app (backend).

 

If you have never heard of the Raspberry Pi, you may refer to its website at http://www.raspberrypi.org/. Building the receiver will not be covered in this blog; please refer to this blog to build it: Adding 433 to your Raspberry Pi | Ninja Blocks

rpi.jpg

Slide1.JPG

HTTP POST Command: Curl

 

On the Raspberry Pi, curl (a Linux command-line tool) is used to post the HTTP request with the JSON data in body.json.

if (mySwitch.getReceivedValue() == 13981149) {
    system("curl -H 'Content-Type: application/json' --data @/home/pi/433Utils/RPi_utils/body.json http://vm.cld.sr:8000/rpi/sensor/service/RPISensor.xsoda$
}


The body.json contains the current timestamp, date and STATUS = Open. The timestamp and date will be auto-populated by the insert procedure on the HANA XS side.

{"STIMESTAMP":"\/Date(0000000000000)\/","SDATE":"\/Date(0000000000000)\/","STATUS":"Open"}
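The /Date(…)/ strings follow OData's legacy JSON date notation (milliseconds since the Unix epoch); here they are placeholders that the insert procedure overwrites server-side. As a small illustration (the helper name is mine, not from the original code), a real timestamp would be encoded like this:

```javascript
// Encode a JavaScript Date in OData's legacy /Date(ms)/ JSON notation.
function toODataDate(d) {
  return '/Date(' + d.getTime() + ')/';
}

toODataDate(new Date(0));              // "/Date(0)/"
toODataDate(new Date(1404770400000));  // "/Date(1404770400000)/"
```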


To run the receiver, just type sudo ./RFSniffer at the Raspberry Pi command prompt. This command listens for the incoming signal from the sensor and sends the HTTP POST OData request to HANA XS, where the insert procedure updates the HANA table.

sensor.jpg

Project Structure: Key Components


The structure and explanation are pretty much the same as in my earlier blog here: http://scn.sap.com/community/developer-center/hana/blog/2014/07/07/real-time-gps-tracker-using-sap-hana-xs-sapui5-cordova-geolocation-plugin-and-odata

aa.jpg

 

Let's focus here on the key components:

  • The RpiSensor table to store the status of the door events:b.jpg

 

  • The  XSODATA:
    xsodata.jpg

 

  • The InsertRowRPISensor procedure:
    c.jpg

 

  • And also the web app backend SAPUI5 to view the information:
    d.jpg

 

A complete source code can be found on GitHub: ferrygun/HANADoorSensor · GitHub

 

Summary

 

You could build the Internet of Things by combining the power of SAP HANA and a micro-computer/controller like the Raspberry Pi or Arduino with other sensors (e.g., GPS, temperature/humidity, contact sensors, etc.) and creating a dashboard to present the status of the sensors.

 

Thanks for reading my blog. I hope you enjoy it.

SAP HANA Text Join Issue/Resolution


This blog is to address a common problem related to a "Text Join" in SAP HANA graphical modelling (SAP HANA version SP08 & below).

 

For example, in the following screenshot the text join has been set up correctly, marking ‘SPRAS’ as the language column.

 

Blog1.png

 

And also the Default Language is set to “English”.

 

Blog2.png

 

Problem:

 

The model which has the text join will activate successfully, but in some cases the data preview will show ‘NULL’ values for the text description.

Blog3.png

Though the text description table has values for ‘English’:


Blog4.png

 

Resolution:

 

The text join only recognizes the single-character representation in the language column. So the value in the "SPRAS" column needs to be ‘E’ instead of ‘EN’ and ‘D’ instead of ‘DE’, etc.

 

Blog5.png

 

Once we make this change of “EN” to “E”, “DE” to “D”, etc., we notice that the data preview shows the correct text description.

Blog6.png


Big Data - SAP HANA - World Cup 2014


There is something magical.

When I read these articles:

http://global.sap.com/news-reader/index.epx?articleId=23079

http://blogs.wsj.com/cio/2014/07/10/germanys-12th-man-at-the-world-cup-big-data/

I also wished that Germany would win this game for SAP HANA.

 

Ever since I first attended SAP HANA sessions and read more about it, I have really admired how it works.

I always promote that more in our office among other technology people.  They are also now more enthusiastic.

They are also confused whether all this are true or not, whether they should also start SAP basics.

 

Most consultants working in SAP technology really wished this time that Germany would win.

We could read interesting Facebook posts like 'Officially supporting Germany, but in heart with Argentina'.

 

I was also so excited to see whether Germany would really win... Yes, they did it.

I feel so happy... sitting in India... after watching the match till 3:30 AM.

 

This morning my friend shared a very interesting comment.

 

She was sad for Argentina, as she is a strong supporter, but for SAP she was happy that the German team won.

 

True spirit of SAP consultants around the world.

 

My Oracle DBA calls SAP HANA 'SAHANA'... she is sweet and magical.

We are great fans of yours... waiting to see more magical products and solutions from SAP.


Let's build a robot! Part 3: Shake it!


Ok, it's been more than half a year since my previous post on building robots! I know, it's unacceptable, but other big and important things (such as organizing SAP Inside Track Singapore, internal SAP d-codes in many SAP locations, and finally planning to take over the world) were keeping me busy.

 

So let's continue where we left off.

 

Introduction

 

Do you remember the robotic arm from the previous post? Well, let's talk about it. Here's the story. Once at an internal SAP meeting I overheard a manager of a sales team telling all reporting employees that their annual performance reviews would be super streamlined: if the sales quota is met, they will get their bonus. If the sales quota is not met, they will not get the bonus*. Simple and easy, black and white. And of course all of the sales data is in HANA now, so quick access to it is easier than it was before. I thought it was an amazing idea, also because I realized that with such an approach we could fire the manager and replace him/her with a robot!

 

Because I do not like the concept of annual performance reviews, and prefer continuous feedback, I thought it would be really cool to build a sales manager robot that would interact with employees on a daily basis. It would consist of a hand only. Claw, to be more precise. The claw would be used to strangle a sales person (manager's subordinate) if the expected sales for the day were low. Once sales go up, the claw would let the sales person go, only to punish him or her if they underperformed again. Simple, isn't it? And exactly as the said manager would imagine it, I guess.

 

Let me describe it in a more visual way**.

 

DSC_7227.jpeg

Exhibit one: happy sales person, not knowing that the manager is about to review the sales results.


DSC_7231.jpeg

Exhibit two: the manager's claw (Clawager)


DSC_7229.jpeg

Exhibit three: the sales person being strangled by the Clawager because the sales numbers are low.


DSC_7228.jpeg

Exhibit four: the sales person caught up, sales got better, time to breathe in!

 

 

Now, let us analyze how the whole thing was implemented.

 

HANA

 

First, we created a schema and a table to store our sample revenue sales data. Obviously the data is used just to demonstrate the concept. In fact, we do not have daily data in our dataset, but it is fairly easy to modify the code accordingly.

 

create schema PLUGIN;
create column table "PLUGIN".revenue (ID INTEGER PRIMARY KEY, GROSS DOUBLE, QUARTER INTEGER);
INSERT INTO "PLUGIN".revenue values (1, 100.0, 1);
INSERT INTO "PLUGIN".revenue values (2, 40.0, 2);
INSERT INTO "PLUGIN".revenue values (3, 120.0, 3);
INSERT INTO "PLUGIN".revenue values (4, 320.0, 4);
INSERT INTO "PLUGIN".revenue values (5, 10.0, 5);
INSERT INTO "PLUGIN".revenue values (6, 0.0, 6);
INSERT INTO "PLUGIN".revenue values (7, 1000.0, 7);
INSERT INTO "PLUGIN".revenue values (8, 100.0, 8);
INSERT INTO "PLUGIN".revenue values (9, 500.0, 9);
INSERT INTO "PLUGIN".revenue values (10, 500.0, 10);
INSERT INTO "PLUGIN".revenue values (11, 1000.0, 11);
INSERT INTO "PLUGIN".revenue values (12, 100.0, 12);
INSERT INTO "PLUGIN".revenue values (13, 300.0, 13);
INSERT INTO "PLUGIN".revenue values (14, 300.0, 14);
INSERT INTO "PLUGIN".revenue values (15, 400.0, 15);
INSERT INTO "PLUGIN".revenue values (16, 10.0, 16);
INSERT INTO "PLUGIN".revenue values (17, 90.0, 17);
INSERT INTO "PLUGIN".revenue values (18, 150.0, 18);
INSERT INTO "PLUGIN".revenue values (19, 250.0, 19);
INSERT INTO "PLUGIN".revenue values (20, 50.0, 20);

Then we created a view to compute the moving average over the last 3 quarters.

 

create view "PLUGIN".MOVING_AVG as
select AVG(GROSS) AS MOVING_AVG from
(       select GROSS from "PLUGIN".revenue order by ID desc limit 3
)

Finally, we created an XS file ‘moving_avg.xsjs’ to expose the moving average of the sales data to the outside world. It retrieves the moving average and the last quarter data and computes the trend as the ratio between the two. We normalize the resulting value to a [0, 1] interval using a sigmoid curve.

The result is output as the final value to be consumed by the client (i.e. the Clawager's Arduino).

 

We originally wanted to use OData but stumbled across the problem that OData required a primary key. It was our lack of experience with HANA, as we later found out that OData can generate the keys automatically if configured correctly, but oh well, XS did the job as well.

 

moving_avg.xsjs

 

$.response.contentType = "text/html";
var conn = $.db.getConnection();
var pstmt = conn.prepareStatement("select MOVING_AVG FROM \"PLUGIN\".MOVING_AVG");
var latest = conn.prepareStatement("select GROSS from \"PLUGIN\".REVENUE ORDER BY ID DESC limit 1");
var mv_avg = 0;
var last_value = 0;
var rs = pstmt.executeQuery();
if (!rs.next()) {
    $.response.setBody("Failed to retrieve data");
    $.response.status = $.net.http.INTERNAL_SERVER_ERROR;
} else {
    mv_avg = rs.getDouble(1);      // moving average of the last 3 quarters
}
rs.close();
rs = latest.executeQuery();
if (!rs.next()) {
    $.response.setBody("Failed to retrieve data");
    $.response.status = $.net.http.INTERNAL_SERVER_ERROR;
} else {
    last_value = rs.getDouble(1);  // most recent quarter
}
rs.close();
pstmt.close();
latest.close();
conn.close();
// ratio of the latest value to the moving average, squashed into (0, 1) by a sigmoid
var trend = last_value / mv_avg;
trend = 1 / (1 + Math.exp(-trend));
trend = (trend - 0.5) * 2;
$.response.setBody(trend + "\n");
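The trend math can be sanity-checked outside of HANA. Here is the same computation reproduced in plain JavaScript, using the last three quarters from the sample revenue data above (150, 250, 50):

```javascript
// Plain-JavaScript re-implementation of the trend calculation from
// moving_avg.xsjs, so the normalization can be verified without a HANA instance.
function movingAverage(values, n) {
  // average of the last n values
  var tail = values.slice(-n);
  return tail.reduce(function (a, b) { return a + b; }, 0) / tail.length;
}

function trend(values) {
  var ratio = values[values.length - 1] / movingAverage(values, 3);
  var sigmoid = 1 / (1 + Math.exp(-ratio)); // positive ratios land in (0.5, 1)
  return (sigmoid - 0.5) * 2;              // rescale to (0, 1)
}

// last three quarters of the sample data: 150, 250, 50
console.log(trend([150, 250, 50]).toFixed(3)); // "0.165"
```

As long as revenue stays positive, the result is strictly between 0 and 1, which is exactly the range the Arduino side expects.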

Arduino

 

We wrote a short piece of python code that would connect to HANA from a laptop computer to get the service response. This response was then sent over serial to Arduino. Now, since it was done a few months ago, I cannot find this particular piece of code anymore. But in principle and in short, we followed the steps outlined in Connecting to HANA with Python using ODBC or oData and then sent the output digit over serial.
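Since the original Python script is lost, here is a rough Node.js sketch of the laptop-side logic. The `trendToDigit` helper and the 0–9 scale are my reconstruction, not the original code; the only hard constraint is that the Arduino below multiplies the received digit by 15 degrees:

```javascript
// Hypothetical reconstruction of the laptop-side bridge: turn the (0, 1)
// trend returned by moving_avg.xsjs into the single digit character that
// was written to the serial port for the Arduino.
function trendToDigit(trend) {
  var clamped = Math.max(0, Math.min(1, trend)); // guard against out-of-range input
  return String(Math.round(clamped * 9));        // "0" (strangle) .. "9" (let go)
}

console.log(trendToDigit(0.165)); // "1"
```

The digit would then be fetched over HTTP from the XS service and written to the serial port (e.g. with the `serialport` npm package).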


Clawager is built using two servos. One controls the spread of the claws, the other tilts the claw. In the end the tilt of the Clawager was controlled using a flex sensor (so that we could manually tilt it to grab an employee) and the spread was determined based on the values coming from HANA. Here's the Arduino code.


#include <Servo.h>

Servo bottomservo;    // create servo object to control a servo
Servo topservo;
int potpin = 2;       // analog pin used to connect the flex sensor
int val;              // variable to read the value from the analog pin
int incomingByte = 0; // this is what we're getting from our XS service

void setup()
{
  Serial.begin(9600);
  bottomservo.attach(9);  // attaches the servo on pin 9 - this is the flex sensor controlled servo
  topservo.attach(10);    // this is where we attach the actual claw
}

void loop()
{
  val = analogRead(potpin);             // reads the value of the flex sensor (value between 0 and 1023)
  // Serial.println(val);
  val = map(val, 128, 350, 0, 179);     // scale it to use it with the servo (value between 0 and 180)
  bottomservo.write(val);               // sets the servo position according to the scaled value
  if (Serial.available() > 0) {
    incomingByte = Serial.read();       // read the incoming byte
    Serial.print("I received: ");
    // Serial.println(incomingByte - 48, DEC);
    topservo.write((incomingByte - 48) * 15);  // digit 0-9 maps to a 0-135 degree spread
  }
  delay(500);                           // waits for the servo to get there
}

Final result

 

The whole setup was done in about two hours of work during an internal SAP event called PLUGin. The code was written by a few people during the event, and the biggest kudos go to Daniel Dahlmeier, Zbigniew Mandziejewicz and Marc Lester Tan from the SAP office in Singapore.

 

DSC_5652.jpg

Clawager being tested on a real human

 

DSC_5654.jpg

Full hardware setup of Clawager. From the left: battery pack, claw, Arduino, breadboard.

 

DSC_5657.jpg

Note the flex sensor on the far right (in focus).

 

DSC_5663.jpg

Photo: We also realized that if put upside down, the Clawager looks like a chicken. Which leads us to the final video below.

 

Video

This is just a fun summary of a fun project. If you observe carefully, you will see that the tilt of the head of Clawager is controlled by the flex servo. The spread, which appears to be random, is in fact the claw's response to the sales data coming from HANA. We thought it looked like a fun dance, and decided to put a colleague's badge on top of the claw, to make it more human.

 

 

I hope you enjoyed reading this summary as much as I did working on this project. Hoping that it will take me a bit less time to write the next post in the series, let me just say that I am currently working with Arduino Yún, which is a cloud-connected Arduino with Linux (Linino) on board. It's a perfect little microcontroller for small hardware projects. I am also playing with more powerful microcontrollers (such as Raspberry Pi) and computer vision. Perhaps this is the next area we are going to explore in this blog series!

 

* All the internal stories I ever share are completely made up; I will neither confirm nor deny whether similar situations actually happen at SAP.

** Big thanks to my kids for allowing me to use some of their toys for this blog post.

Metric² for SAP HANA (v2.0)


Since its release at TechEd Las Vegas last year, metric² has had well over a hundred downloads and a couple of loyal followers always offering advice and feedback (which I am truly thankful for). Last week we released version 2.0 of the free and open source app and subsequently put it onto GitHub under the MIT license, which you can find here.

 

The biggest changes of the new version of metric² include:

 

  • A new UI (Twitter bootstrap based)
  • The ability for users to create their own account
  • A few new metrics
  • No longer uses SAPUI5 (a blog post in itself!)
  • Refactored code: the front-end/UI was the biggest change, and the code has been split up into multiple library files (XSJSLib) and simplified for easier reading.

 

You can download the source code over at Github or you can download the HANA Delivery unit file (tgz) here.

 

The primary goal of the app is, and always has been, to help visualize your big data, along with providing a self-service tool to monitor and gain insights into the data in and around your HANA system, while using it to the fullest.

 

 

Short Survey

 

I am in the process of trying to understand where to take the app in terms of use and direction. If you have downloaded, used or seen the app and are interested in helping guide the discussion, I would greatly appreciate it if you took 1 minute to complete a short 5-question, multiple-choice survey here.

 

(Read further below for future updates and the mobile app.)

 

Below are a few screenshots of the new version (2.0):

 

Screen Shot 2014-07-06 at 9.11.31 PM.png

 

 

Screenshot2.png

 

 

 

 

Mobile App

 

If you are a HANA admin, or are simply managing your own dev instance for learning HANA, a new iOS app called metric2 - Admin will be in the App Store within the next month. It will be a free app with some pretty basic core metrics centered around HANA performance/availability. Below are a few screenshots of the upcoming mobile app:

 

 

 

Screen Shot 2014-07-15 at 3.37.38 PM.png

 

 

Next Release

 

A couple of new features for the upcoming release (v2.1) are centered around some new metrics, including support for iBeacon devices (upload a map, designate regions and you can monitor and track from the live dashboards). Another metric in the process of being developed is around quickly and easily showing twitter sentiment using your HANA instance. Using HANA Live views will also be included as a data source option for the metrics allowing you to make use of the predefined SAP designed views. From a code perspective I am in the process of pulling out widgets to be independent, upgradable and customizable without touching the core code.

 

As usual if you are using the app in your environment I am always looking for ways to improve it and the feedback provided so far is invaluable. Feel free to leave a comment or reach out to me at paschmann@metric2.com

Our experience with an Oracle Migration: Part 1


INTRODUCTION

 

This blog covers some of my experiences working on SAP HANA: the problems I faced during a migration project (from Oracle PL/SQL to SAP HANA SQLScript) that I am working on, and possible solutions to those problems.

 

The solutions that are explained in here may be helpful for developers who are facing similar kinds of problems. This blog will be useful for the beginners who are working on HANA development.

 

Usage of RESULTVIEW :

 

Problem :

An Oracle materialized view had to be converted to a HANA view, so I tried to create a RESULTVIEW in a procedure. I found that a RESULTVIEW created in a HANA Studio project could not be transported via the repository.

 

Solution:

  • RESULTVIEW is useful when created in the console (to create a runtime view of the data), but it is not transportable via the repository.
  • So as an alternative I created a procedure with an OUT parameter and used it to retrieve the result set.

 

 

 

 

Changing the preferences in HANA studio :

 

Problem :

Change the number of results shown after executing a query in SQLConsole.

 

Solution:

  • To get more than 1000 rows of a table as result of a select query I navigated to
  • WINDOW->preferences->SAP HANA->Runtime->Result  and typed the max number of rows to be displayed.

 

Preferences_1.jpg

Problem :

To change the delimiter while exporting the result (data) of a query or changing the NULL representation character or any other settings related to the data export.

 

Solution:

  • To change the export settings such as DELIMITER (data separator), NULL value representation, etc.,
  • Go to    WINDOW -> PREFERENCES -> SAP HANA ->RUNTIME -> COMMON

 
  prefences_2.jpg

 

Table types in project:

 

Problem:

            I tried to create a table type in my project the same way I had in the console, but it could not be transported via the repository.

 

Solution:

  • Creating table types in the console is different from creating them in a project.
  • When creating a table type in a project, HANA expects a .hdbdd file to be created and appropriate artifacts to be mapped to each of the table type columns.

 

 

 

Also, the artifact names are case sensitive. Here, I have defined the context as follows:

context abcd {
    type STR25 : String(25);
};

TT1.jpg

But when creating the table type, I wrote:

type TTT {
    "coLUM" : str25;
};

 

 

Which gives me the error (since str25 is written in lower case):

tt2.jpg

When I assign the column to the table type, I should use coLUM (case sensitive).


Look at the following code snippet:


TT3.JPG


create procedure sample (OUT abc "SRIVATSAN"."SRI.DEV.DDL::ABCD.TTT")
as
begin
DECLARE colum varchar(25);
colum := 'hello';
abc = select colum from dummy;
end;

 

When I execute this , I get the error as follows :


SAP DBTech JDBC: [1306] (at 138): return type mismatch: Attribute name "coLUM"  different from Attribute name: "COLUM" : line 7 col 1 (at pos 138).

 

This shows that the column name should match exactly the way it is defined in .hdbdd file.

 

create procedure sample (OUT abc "SRIVATSAN"."SRI.DEV.DDL::ABCD.TTT")
as
begin
DECLARE colum varchar(25);
colum := 'hello';
abc = select colum as "coLUM" from dummy;
end;

 

This code will work since the name we assign to the column (alias) is exactly the same as .hdbdd definition.

 

 

 

Using Row store tables in modeling:

 

Problem :

            When creating modeling views, I tried to use a table created as ROW type, which I could not accomplish.

 

Solution:

  • It is not possible to use ROW tables (created as row type) in modeling views; only columnar tables can be used.
  • When I tried to drag and drop a ROW store table onto the data foundation, it was not allowed.
  • Also, when I right-click the data foundation and choose Add Objects, it shows only the tables which are of COLUMN type.


rowtype1.jpg

 

Exception handling :

 

Problem:

            For a few error codes in HANA, I was unable to retrieve the error message using ::SQL_ERROR_MESSAGE.

 

Solution :

  • For a few error codes (::SQL_ERROR_CODE), HANA does not return the ::SQL_ERROR_MESSAGE (it returns a blank error description),
  • so to get the description, I used the system table M_ERROR_CODES from the SYS schema (SYS.M_ERROR_CODES), matching the error code against the CODE column and reading the DESCRIPTION column's value.

 

Here is a sample code ,

 

PROCEDURE "SRI.DEV.LIB::P_LOG_ERROR"
        (IN IN_ERROR_CODE INTEGER, IN IN_OBJECT NVARCHAR(100), IN IN_ERROR_MSG NVARCHAR(2000))
    LANGUAGE SQLSCRIPT SQL SECURITY INVOKER
AS
BEGIN
    DECLARE V_NEXTVAL INTEGER;
    DECLARE V_ERROR_MSG NVARCHAR(2000);

    SELECT "SRIVATSAN"."SRI.DEV.DDL::SEQ_ERRLOG".NEXTVAL
      INTO V_NEXTVAL
      FROM DUMMY;

    V_ERROR_MSG := IN_ERROR_MSG;

    IF V_ERROR_MSG = '' THEN
        SELECT DESCRIPTION INTO V_ERROR_MSG
          FROM SYS.M_ERROR_CODES WHERE CODE = :IN_ERROR_CODE;
    END IF;

    INSERT INTO "SRIVATSAN"."SRI.DEV.DDL::SCP_ERRORLOG"
        VALUES (:V_NEXTVAL,
                CURRENT_UTCTIMESTAMP,
                IN_ERROR_CODE,
                IN_OBJECT,
                :V_ERROR_MSG,
                CURRENT_USER);
END;

                      EXCEP1.JPG

 

This exception handling procedure will be called in all the main procedures with the 3 parameters

(::SQL_ERROR_CODE, '<Name of the object that caused the exception>', ::SQL_ERROR_MESSAGE) so as to insert the error details into a table.

 

Importing a CSV file to a SAP HANA table:

 

Problem :

            I faced issues importing CSV files into HANA tables. I was getting errors due to formatting issues with date and time values.

 

Solution:

  • While importing a CSV file into a HANA table, ensure that the DATE data type is in the format YYYY-MM-DD and the timestamp in YYYY-MM-DD HH:MM:SS.
  • To do this I opened the CSV file in MS Excel, selected all the columns with date/time, right-clicked, chose Format Cells and used the Custom option to type the exact format of the column (date / time / date time).
  • Saved the Excel file and then started importing the file into the table, which worked fine.
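If you would rather fix the CSV programmatically than in Excel, a small script can rewrite the date columns before import. A minimal Node.js sketch, assuming the source dates are in DD/MM/YYYY (adjust the split for your actual format):

```javascript
// Convert a DD/MM/YYYY string (assumed source format) into the
// YYYY-MM-DD format that the HANA CSV import expects for DATE columns.
function toHanaDate(s) {
  var p = s.split('/'); // ["DD", "MM", "YYYY"]
  return p[2] + '-' + p[1] + '-' + p[0];
}

console.log(toHanaDate('31/12/2014')); // "2014-12-31"
```

Apply the same idea per row while streaming the CSV; timestamps can be handled analogously by reformatting the time part.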

 

 

Altering the table columns:

 

Problem :

            I tried to change the data type of a column in a ROW type table, which was not supported.

 

Solution :

 

Only tables created as column store allow altering the data type of columns (e.g., altering a DATE column to TIMESTAMP is not allowed if the table is created as row store).

 

Look at the following row table,

create row table rowtype (col1 integer, col2 date);

 

Here, I have created a row store table called “rowtype” with two columns namely col1 as integer type and col2 as date type.

 

Now, I try to alter the table to change the data type of col2 from date to Timestamp ,

alter table rowtype alter (col2 timestamp);

 

I got the error as follows:

Could not execute 'Alter table rowtype alter (col2 timestamp)' in 220 ms 781 µs .

SAP DBTech JDBC: [7] (at 27): feature not supported: cannot modify data type of column: COL2: line 1 col 28 (at pos 27)

 

Now , the same is tried with the column type table ,

create column table coltype (col1 integer, col2 date);

 

Altering the column data type :

alter table coltype alter (col2 timestamp);

 

Statement 'Alter table coltype alter (col2 timestamp)'

successfully executed in 236 ms 633 µs  (server processing time: 15 ms 169 µs) - Rows Affected: 0

 

Increasing the length of a column (keeping the same data type), on the other hand, is possible in both row and column tables.

 

create row table rowtype (col1 varchar(20), col2 date);

 

alter table rowtype alter (col1 varchar(40));

 

Statement 'Alter table rowtype alter (col1 varchar(40))'

successfully executed in 278 ms 609 µs  (server processing time: 4 ms 254 µs) - Rows Affected: 0

Build an Interactive Voice Response (IVR) System connected to SAP HANA - Part 1


In the first part of this blog post, I would like to walk through how to build an IVR (Interactive Voice Response) system that is connected to a SAP HANA database or OData service. IVR is an automated telephony system that interacts with callers, gathers information and routes calls to the appropriate recipient.

 

For the demo, we will make a call to the hotline (IVR) number to inquire about the net value of a billing document. The system will ask you to type the 8-digit billing document number. Once you have entered the number, if the record exists, the system will tell you the net value.

 

2.jpg

 

Required Components

 

Software:

Hardware:

 

Preparing the PBX Server


We will be using a Raspberry Pi as an IVR server.

  1. Download the latest image of Asterisk from http://www.raspberry-asterisk.org/downloads/ and write it to the SD card.
    I am using the below version:
    1.jpg
    Once you have written the SD card, boot it up and run this command on the console to update with the latest components:
    raspbx-upgrade
  2. Configure the static IP-address.
    Edit the file /etc/network/interface by typing the following command:
    nano /etc/network/interfaces
    Remove the following line:
    iface eth0 inet dhcp

    And add the following lines:

    iface eth0 inet static
    address 192.168.0.12
    netmask 255.255.255.0
    gateway 192.168.0.1
    dns-nameservers 192.168.0.1


    Update those addresses to match your network. And finally run the following command to restart the network service:

    service networking restart
  3. Installing IncrediblePBX

    Execute the following  command on the Raspberry Pi console:
    cd /
    wget http://incrediblepbx.com/incrediblepbx11-raspbx.gz
    gunzip incrediblepbx11-raspbx.gz
    chmod +x incrediblepbx11-raspbx
    ./incrediblepbx11-raspbx
    It takes about 15~30 minutes to complete. It will ask you to reset the root password, the FreePBX and ARI admin passwords, the extension 701 password, the extension 701 voicemail password, the email delivery address for extension 701 and the telephone reminders password for scheduling reminders by phone. Please proceed to reset all the passwords.

 

Installing Node.JS


Just to ensure the Raspberry Pi is up to date and ready to go, please execute the following commands:


sudo apt-get update

sudo apt-get upgrade

Download the latest version of Node.JS:

 

sudo wget http://node-arm.herokuapp.com/node_latest_armhf.deb

Install it:

sudo dpkg -i node_latest_armhf.deb

Check if the installation is successful by typing the following command:

node -v

You will see the version if the installation is successful, for my case is:

v0.10.29

 

 

Installing SAP HANA Database Client for Node

npm.png

The next step that we are going to perform is to install the JavaScript client for node to connect to the SAP HANA database and execute the SQL query.


In the Raspberry Pi console, type the following command:


git clone https://github.com/SAP/node-hdb.git

cd node-hdb

npm install


 

Setup the Voice Modem Gateway

 

Setting up the voice modem gateway Linksys SPA3102 is fairly simple.

 

The most important point here is the connection to the IVR server (Raspberry Pi). I will cover this in more detail in Part 2.

 

The Line port is connected to the RJ-11 in the DSL/cable modem router, the Phone port is connected to your analog phone, and the Internet port is connected to the DSL/cable modem router.

Slide2.JPG

That's all for Part 1.  We will continue on the Part 2 to perform the following:

 

  • Configure Linksys SPA3102 voice modem gateway in order to connect to the IVR server.
  • Create and setup the HANA database. We will import the data from the CSV to the HANA table.
  • Create a dialplan. The dialplan specifies how to interpret digit sequences dialed by the user and how to convert those sequences into a meaningful action. In our case, the dialplan will execute the shell script to query the HANA SQL database in order to get the net value of the particular billing document number.
  • Create the Node.JS shell script to connect to HANA database or ODATA service (if the source is ODATA).
  • Test IVR system. To test the IVR configuration that we have done and check if everything is being setup correctly.
  • Demo video.

 

Stay tuned for Part 2.

Build an Interactive Voice Response (IVR) System connected to SAP HANA - Part 2


Once you have completed the basic setup required to build an IVR system as described in Part 1, let's continue our journey to complete the remaining tasks:

 

  • Create HANA table & view. Import the data from the CSV file to the HANA table.
  • Configure Linksys SPA3102 voice modem gateway in order to connect to the IVR Asterisk server.
  • Adding the VoIP gateway to the IVR Asterisk server
  • Create a dialplan. The dialplan specifies how to interpret digit sequences dialed by the user and how to convert those sequences into a meaningful action. In our case, the dialplan will execute the shell script to query the HANA SQL database in order to get the net value of the particular billing document number.
  • Create the Node.JS shell script to connect to HANA database or ODATA service (if the source is ODATA).
  • Test IVR system. To test the IVR configuration that we have done and check if everything is being setup correctly.
  • Demo video.

 

 

Create HANA Table & View


  • IVR.hdbschema

    schema_name="IVR";

  • KNA1.hdbtable

    table.schemaName = "IVR";

    table.tableType = COLUMNSTORE;

    table.description = "IVR";

    table.loggingType = NOLOGGING;

    table.columns = [

    {name = "MANDT"; sqlType = NVARCHAR; length = 3; nullable = false;},

    {name = "KUNNR"; sqlType = NVARCHAR; length = 10; nullable = false;},

    {name = "LAND1"; sqlType = NVARCHAR; length = 3; nullable = false;},

    {name = "NAME1"; sqlType = NVARCHAR; length = 35; nullable = false;},

    {name = "NAME2"; sqlType = NVARCHAR; length = 35; nullable = false;},

    {name = "ORT01"; sqlType = NVARCHAR; length = 35; nullable = false;},

    {name = "PSTLZ"; sqlType = NVARCHAR; length = 10; nullable = false;},

    {name = "REGIO"; sqlType = NVARCHAR; length = 3; nullable = false;},

    {name = "SORTL"; sqlType = NVARCHAR; length = 10; nullable = false;},

    {name = "STRAS"; sqlType = NVARCHAR; length = 35; nullable = false;},

    {name = "TELF1"; sqlType = NVARCHAR; length = 16; nullable = false;},

    {name = "TELFX"; sqlType = NVARCHAR; length = 31; nullable = false;}

    ];

    table.primaryKey.pkcolumns = ["KUNNR"];

  • VBRK.hdbtable

    table.schemaName = "IVR";

    table.tableType = COLUMNSTORE;

    table.description = "IVR";

    table.loggingType = NOLOGGING;

    table.columns = [

    {name = "MANDT"; sqlType = NVARCHAR; length = 3; nullable = false;},

    {name = "VBELN"; sqlType = NVARCHAR; length = 10; nullable = false;},

    {name = "FKART"; sqlType = NVARCHAR; length = 4; nullable = false;},

    {name = "WAERK"; sqlType = NVARCHAR; length = 5; nullable = false;},

    {name = "VKORG"; sqlType = NVARCHAR; length = 4; nullable = false;},

    {name = "NETWR"; sqlType = NVARCHAR; length = 15; nullable = false;},

    {name = "KUNAG"; sqlType = NVARCHAR; length = 10; nullable = false;}

    ];

  • ATTRIBUTE_IVR.attributeview
    Join KNA1.KUNNR with VBRK.KUNAG and import the data in CSV format into these two tables. I have attached the sample data in CSV format. Then perform a select query on the NETWR field, which contains the net value:
    select NETWR from "_SYS_BIC"."ivr.hana/ATTRIBUTE_IVR" Where VBELN='" + val + "'";
    view.jpg

 

Asterisk PBX Console


You have already installed the PBX server in Part 1. To view the console and verify it is running, execute the command asterisk -r in the Raspberry Pi console.

a0.jpg

If you want to quit the console, just type exit.

 

To control and manage the Asterisk configuration from the web console using FreePBX, just open the web browser and type http://<Raspberry_Pi_Address> :

 

a2.jpg

 

Log on to "FreePBX Administration" with user ID admin and the password that you have defined in Part 1 - Preparing the PBX server: Step 3 - Installing IncrediblePBX.

 

a3.jpg

Once you have successfully logged on, you will see the system resource consumption statistics and status.

 

Configure Linksys SPA3102

 

Linksys SPA3102 acts as a VoIP gateway between the PSTN line and the IP network.

 

Once you have done the step "Setup the Voice Modem Gateway" in Part 1, connect a handset to the Phone port and press **** to enter the configuration menu.

 

  • To get the IP address:
    dial 110# and note down the IP address
  • To enable Web Interface:
    dial 7932# followed by 1# and 1
  • To reset the admin's password, press **** followed by 73738#, and confirm with 1
  • Open the web browser, type http://<linksys_ip_address> and click on Admin Login and advanced:


a1.jpg

 

  • Wan Setup
    wan.jpg
  • Lan Setup
    lan.jpg
  • Voice > System
    voice.jpg
  • Voice > SIP
    sip1.jpgsip2.jpg
  • Regional (for Singapore)
    regional1.jpgregional3.jpg

    regional4.jpg
    In case of echo, adjust the value of FXS Port Impedance and FXS Port Output/Input Gain.

    More information about the regional settings, please refer to these links:
    http://www.3amsystems.com/wireline/tone-search.htm
    http://www.3amsystems.com/wireline/daa-search.htm

  • Line 1
    Set the SIP port of Line 1 to 5060.
    line1a.jpg
     
    Set the proxy to IP address of Raspberry PI (PBX server), user ID: line1 with password: papamama
    line1b.jpg
  • PSTN Line
    Set the SIP port of PSTN Line to 5061.
    pstnline1.jpg
      Set the proxy to IP address of Raspberry PI (PBX server), user ID: pstn with password: papamama
    pstnline2.jpg
      "S0<:123@192.168.0.12>" means that incoming PSTN calls will call the extension 123 on the PBX.   
    pstnline3.jpg
     
    Please make sure you set the correct value of Disconnect Tone and FXO Port Impedance. PSTN Answer Delay is the number of seconds before the SPA3102 will call the PBX.
    pstnline4.jpg
    pstnline5.jpg
    In case of echo,  adjust the value of SPA to PSTN Gain and PSTN to SPA Gain.

 

Adding the VoIP Gateway to the IVR Asterisk Server

 

From the web console FreePBX, select Connectivity> Trunk> Add SIP Trunk:

 

siptrunk1.jpg

We will add trunk line1 and pstn:

trunk2.jpg

trunk3.jpg

[line1]
disallow=all
type=friend
host=dynamic
context=internal
username=line1
secret=papamama
mailbox=line1@internal
nat=force_rport,comedia
canreinvite=no
dtmfmode=rfc2833
qualify=yes
allow=g722
allow=silk8
allow=silk16
allow=silk24
allow=ulaw
allow=alaw
allow=gsm
allow=h263
videosupport=yes

[pstn]
disallow=all
type=friend
host=dynamic
context=pstn
username=pstn
secret=papamama
mailbox=pstn@pstn
nat=force_rport,comedia
canreinvite=no
dtmfmode=rfc2833
qualify=yes
insecure=port,invite
allow=g722
allow=silk8
allow=silk16
allow=silk24
allow=ulaw
allow=alaw

 

Click Submit Changes and Apply Config.  Do the same for pstn trunk.
submit.jpgapply.jpg

The configuration will be saved in the file /etc/asterisk/sip_additional.conf. Just open the file and see if the config is there.

puttysip.jpg

 

Create Dialplan

 

 

We will create a simple IVR flow in the dialplan. We start with a greeting "Welcome to IVR HANA Demo" as described in the below diagram:

 

ivrflow.jpg

In the Raspberry Pi console, go to /etc/asterisk and modify the extensions_custom.conf:

 

cd /etc/asterisk

nano extensions_custom.conf


Add the following lines and save the file. Any incoming call from the pstn or line1 trunk will be routed to extension 123 on the PBX. Asterisk uses Google Text-to-Speech for the IVR agent.

 

[pstn]
exten => 123,1,Answer
exten => 123,n,Set(VOLUME(TX)=10)
exten => 123,n,Set(VOLUME(RX)=10)
exten => 123,n,Set(TIMEOUT(digit)=7)
exten => 123,n,Set(TIMEOUT(response)=10)
exten => 123,n,agi(googletts2.agi, "Welcome to IVR Hana Demo")
exten => 123,n,agi(googletts2.agi, "Please type your 8 digit billing document number after the beep")
exten => 123,n,NoOp(STORE NUMBER: ${DIGIT8})
exten => 123,n,Read(DIGIT8,beep,8)
exten => 123,n,GotoIf($["${DIGIT8}" = ""]?dial2)
exten => 123,n,agi(googletts2.agi, "Please hold for response.")
exten => 123,n,agi(ivrhana.sh,${DIGIT8})
exten => 123,n,NoOp(Received answer: ${answer})
exten => 123,n,GotoIf($["${answer}" = "No"]?dial1)
exten => 123,n,agi(googletts2.agi,"The net value is ${answer}. Thank you")
exten => 123,n,Hangup
exten => 123,n(dial1),agi(googletts2.agi,"No Result found")
exten => 123,n,Hangup
exten => 123,n(dial2),agi(googletts2.agi,"You did not type any number. Good bye")
exten => 123,n,Hangup

[internal]
exten => 123,1,Answer
exten => 123,n,Set(TIMEOUT(digit)=7)
exten => 123,n,Set(TIMEOUT(response)=10)
exten => 123,n,agi(googletts2.agi, "Welcome to IVR Hana Demo")
exten => 123,n,agi(googletts2.agi, "Please type your 8 digit billing document number after the beep")
exten => 123,n,NoOp(STORE NUMBER: ${DIGIT8})
exten => 123,n,Read(DIGIT8,beep,8)
exten => 123,n,GotoIf($["${DIGIT8}" = ""]?dial2)
exten => 123,n,agi(googletts2.agi, "Please hold for response.")
exten => 123,n,agi(ivrhana.sh,${DIGIT8})
exten => 123,n,NoOp(Received answer: ${answer})
exten => 123,n,GotoIf($["${answer}" = "No"]?dial1)
exten => 123,n,agi(googletts2.agi,"The net value is ${answer}. Thank you")
exten => 123,n,Hangup
exten => 123,n(dial1),agi(googletts2.agi,"No Result found")
exten => 123,n,Hangup
exten => 123,n(dial2),agi(googletts2.agi,"You did not type any number. Good bye")
exten => 123,n,Hangup

 

The variable DIGIT8 stores the 8-digit number input by the user and passes it to the shell script ivrhana.sh. The output/result will be stored in the variable answer.


Let's create the shell script. Go to /var/lib/asterisk/agi-bin and create  ivrhana.sh:

cd /var/lib/asterisk/agi-bin

nano ivrhana.sh

 

 

Add the following lines and save it:

#!/bin/bash

# Run the Node.js query script and set the value of 'answer'

VALUE=`/usr/local/bin/node /root/hana/node-hdb/ivrhana.js $1`

#VALUE=1234 # Static test value passed back for 'answer'

echo -e "SET VARIABLE answer $VALUE"

 

Create the Node.JS script: ivrhana.js

 

Go to /root/hana/node-hdb and create ivrhana.js:

cd /root/hana/node-hdb

nano ivrhana.js


Add the following lines and save it.

process.argv.forEach(function (val, index, array) {

 

 

        if (index==2) {

 

 

                var hdb    = require('hdb');

                var client = hdb.createClient({

                        host    : 'hana2.vm.cld.sr',

                        port    : 30015,

                        user    : 'SYSTEM',

                        password : 'password'

                });

 

 

                client.connect(function (err) {

                        if (err) {

                                return console.error('Connect error', err);

                        }

 

 

                        // note: val is concatenated directly into the SQL string here
                        var sql = "select NETWR from \"_SYS_BIC\".\"ivr.hana/ATTRIBUTE_IVR\" Where VBELN='" + val + "'";

                        //console.log(sql);

                        client.exec(sql, function (err, rows) {

                                client.end();

                                if (err) {

                                        return console.error('Execute error:', err);

                                }

 

 

                                if(!isEmptyObject(rows)) {

                                        console.log(rows[0].NETWR);

                                } else {

                                        console.log('No Result');

                                }

 

 

 

 

                        });

                });

 

 

                //console.log(index + ': ' + val);

        }

});

 

function isEmptyObject(obj) {

  return !Object.keys(obj).length;

}

 

 

The above script performs a query against the HANA database by executing the SELECT SQL statement and writes the result to the console. Update the host, user ID, and password with yours. To run the script, type the following command:

node ivrhana.js <billing_document_number>
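If you prefer Python to Node.js, the same lookup can be sketched with SAP's hdbcli driver. This is an assumption-laden sketch: hdbcli must be installed separately (pip install hdbcli), and the connection details are the placeholders from above. Note the parameterized query, which avoids the string concatenation used in ivrhana.js:

```python
# Sketch of the same billing-document lookup using the hdbcli driver.
import sys


def build_query(vbeln):
    # Parameterized SQL plus its bind values (no string concatenation).
    sql = ('select NETWR from "_SYS_BIC"."ivr.hana/ATTRIBUTE_IVR" '
           'where VBELN = ?')
    return sql, (vbeln,)


def fetch_net_value(vbeln, host="hana2.vm.cld.sr", port=30015,
                    user="SYSTEM", password="password"):
    from hdbcli import dbapi  # pip install hdbcli
    conn = dbapi.connect(address=host, port=port, user=user, password=password)
    try:
        cursor = conn.cursor()
        sql, params = build_query(vbeln)
        cursor.execute(sql, params)
        row = cursor.fetchone()
        return row[0] if row else "No Result"
    finally:
        conn.close()


if __name__ == "__main__" and len(sys.argv) > 1:
    print(fetch_net_value(sys.argv[1]))
```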

 

You can also connect to an OData service instead of the HANA database. Below is a snippet of JavaScript (getMaterialDescr.js) that queries the OData service "/sap/opu/odata/sap/ZGW_MATERIAL_SRV/Materials('0009620-081')" to get the material description:

 

 

var http = require('http'),
    xml2js = require('xml2js'); // note: required here, but the snippet extracts the value with getValue() instead

var username = 'username',
    password = 'password';

var sapgw = {
  host: 'sapnetweavergatewayserver.com',
  port: 8000, // Change the port number accordingly
  path: "/sap/opu/odata/sap/ZGW_MATERIAL_SRV/Materials('0009620-081')",
  headers: {
    'Authorization': 'Basic ' + new Buffer(username + ':' + password).toString('base64')
  }
};

var request = http.get(sapgw, function(res) {
  var body = "";
  res.on('data', function(data) {
    body += data;
  });
  res.on('end', function(result) {
    //console.log(body);
    var tag = 'd:MatlDesc'; // Print the material description
    var value = getValue(tag, body);
    console.log(value);
  });
  res.on('error', function(e) {
    console.log("Got error: " + e.message);
  });
});

function getValue(tag, xmlString) {
  var startTag = "<" + tag + ">";
  var endTag = "</" + tag + ">";
  var startPos = xmlString.search(startTag) + startTag.length;
  var endPos = xmlString.search(endTag);
  return xmlString.slice(startPos, endPos);
}
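The hand-rolled getValue() string search works for this payload, but it is fragile. For comparison, here is a hedged Python sketch of the same extraction using the standard-library XML parser; the sample payload below is invented purely for illustration:

```python
# Extract a single element's text from an OData Atom payload with the
# standard-library XML parser instead of string searching.
import xml.etree.ElementTree as ET

# 'd' is the OData data-services namespace that prefixes property
# elements such as <d:MatlDesc> in SAP Gateway responses.
NS = {'d': 'http://schemas.microsoft.com/ado/2007/08/dataservices'}


def get_value(tag, xml_string):
    root = ET.fromstring(xml_string)
    # .//d:<tag> finds the element anywhere below the root.
    node = root.find('.//d:' + tag, NS)
    return node.text if node is not None else None


# Invented minimal payload, just to demonstrate the extraction.
sample = ('<entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices">'
          '<content><d:MatlDesc>Notebook Basic 15</d:MatlDesc></content></entry>')
print(get_value('MatlDesc', sample))  # -> Notebook Basic 15
```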

 

Test IVR System

 

On your Raspberry Pi console, type:

asterisk -r

reload

sip reload

sip show peers

Once you have executed the last command, you will see that the status of the pstn line is active.

testivr.jpg

 

Dial your IVR hotline number and after the "beep", type in the billing document number "30247008". The system will speak out the net value "566.05".

If you didn't hear anything, please check the configuration on the Linksys SPA3102 and the Asterisk PBX again. The IVR hotline number is my landline number.

 

To debug, type the following command at the Asterisk's command prompt:

core set verbose 5

 

Redial again and see the information showed in the command prompt and check if there is any error. Below is the debug screenshot:

debug.jpg

Instead of calling the pstn line, you can also dial in from the network using a SIP client. Download and install Linphone on your iPhone/Android and configure the SIP address.

photo.PNG

 

 

Demo Video

 

 

 

 

Conclusion

 

In this blog, we have walked through how to configure the PBX server using Asterisk, write a dialplan, install Node.js and the SAP HANA Database Client for Node, create the HANA table and view, and write a JavaScript program that connects to the SAP HANA database/OData service.

You can improve the IVR system to respond to voice input using a voice recognition system and also to cover more complex business scenarios.

 

Please feel free to drop me an email or question, and see you next time!

SAP HANA & Excel - Bidirectional Data/Sync Data - Using Python & OData


The purpose of this exercise is to move data between Excel and SAP HANA in both directions, demonstrating bidirectional data transfer.

 

STEP 1: Creating a DB table in SAP HANA:

I have created a simple table with 2 fields using SAP River code. Alternatively, we can also use a table created by an .hdbtable file.

 

@OData

application Rivermohas98.HelloExcelApp {

  export entity exceldata {

    key element employ : String(10);

    element skill : String(10);

  }

}

        

STEP 2: Creating .XSODATA:


I haven't used the OData service created by SAP River, as Excel has difficulty reading it. So I have instead created an OData service using an .xsodata file named rivertab.xsodata

 

service namespace "rivermohas98.services" {

  "Rivermohas98"."Rivermohas98::HelloExcelApp.exceldata"

    as "Excel";

}


Testing of OData:


I have inserted a record in the table using Insert SQL.


Table.jpg

  oData Test.txt.jpg


STEP 3: Retrieve SAP HANA DB data in Excel:


Now I have an OData service which can be accessed. I have chosen an .XLSM file, considering that macros/VBA scripts can be saved in .XLSM files.


Calling OData using Power Query:

    1. My next step is to access the OData service in this Excel file. I am sure there must be multiple ways to access it; I tried using the Excel Power Query add-in.

excel odata feeds.jpg

          2. A pop-up will be displayed for the OData feed. I have entered the OData service URL which I created using .XSODATA.

odata2.jpg

          3. The Query Editor will be opened. Check the feed and click on Apply & Close.

odata3.jpg

 

          4. You will be prompted for a user ID and password. Alternatively, you can set the credentials here.

odata4.jpg

          5. The screenshot below will help you set the OData refresh settings. You can also see that the data has been retrieved from the SAP HANA DB.

odata5.jpg

 


STEP 4: Sending Data from Excel to SAP HANA DB:

 

Now let's see how to post the data. Data will be posted to SAP HANA by Python code.


  1. Installation of Python and XLRD library:

I have installed Python 3.3. The next important step is to install xlrd, a library for reading .XLS/.XLSM files in Python. I have downloaded the package "xlrd-0.9.3.tar.gz".

Once downloaded, unzip it, open a command prompt (Run -> cmd), navigate to the folder, and run "python setup.py install". Refer to other sources for further details on installing Python and xlrd.

 

    2.  Code in VBA:

A VBA routine has been written to trigger the Python file from Excel. Press ALT+F11 and use the code below to trigger your Python file.

Write the code in the ThisWorkbook module (the Workbook_AfterSave event lives there); the Workbook events offer multiple options for when to run the code. I want to trigger the Python code when the Excel file is saved.

 

vba.jpg

 

Private Sub Workbook_AfterSave(ByVal Success As Boolean)

Dim pyPrgm As String, pyScript As String

pyPrgm = "C:\Python33\python.exe "

pyScript = "C:\pyt\LoadExcel.py"

Call Shell(pyPrgm & pyScript, vbMaximizedFocus)

End Sub

 

Explanation of the lines highlighted in red:

pyPrgm = "C:\Python33\python.exe " -> This is the path where Python is installed.

pyScript = "C:\pyt\LoadExcel.py" -> This is where my python code stored to trigger OData.

 

 

     3.  Python Code:

Python code is in LoadExcel.py stored in the above mentioned path.

 

#! /usr/bin/python

# All import files may not require

 

import os

import requests

import json

import csv

import sys

import xlrd

 

# OData URL. I am using the OData service created by SAP River. Give the full address of the server.

appURL = 'http://v………..8008/Rivermohas98/odata/Rivermohas98.HelloExcelApp'

 

# Credentials

auth = 'userid','password'

 

s = requests.Session()

s.headers.update({'Connection': 'keep-alive'})

 

headers = {'X-CSRF-TOKEN': 'Fetch'}

r = s.get(url=appURL, headers=headers, auth=auth)

CSRFtoken = r.headers['x-csrf-token']

print("CSRFToken: " + CSRFtoken)

headers = {'X-CSRF-TOKEN': CSRFtoken}

url = appURL + "/exceldata"

 

# Your file has to be in this location. This is the path which we have given in Excel (VBA code). Full path is required

workbook = xlrd.open_workbook('C:\pyt\employ.xlsm')

 

# Give the worksheet  name

worksheet = workbook.sheet_by_name('Sheet1')

 

num_rows = worksheet.nrows - 1

num_cells = worksheet.ncols - 1

curr_row = 0

while curr_row < num_rows:

                curr_row += 1

                row = worksheet.row(curr_row)

                print(  row[0], row[1])

                data = '{"employ": "' + str(row[0].value) + '", "skill": "' + str(row[1].value) + '"}'

                # print("Check Data: " + data)

                r = s.post(url, data=data, headers=headers)

 

     4.  Test your Python Code:

 

You can test your Python Code from command line. Below screenshot is for reference.

python test.jpg

 

STEP 5: Post the data to SAP HANA DB from Excel:

 

     1. Now add an entry in Excel:

add entry.jpg

 

     2. Check the result in SAP HANA:


Now we have two records.

excel to sap.jpg

 

STEP 6: Retrieve the data in Excel from SAP HANA DB:


Now create one more record in the SAP HANA table and check whether it updates the Excel file. Insert the record as in Step 2 and check the result in Excel.

 

insert 3rd rec.jpg

 

 

final.jpg

 

Currently I have only used the OData POST operation. I believe further operations such as PUT and DELETE can be used, or the Python code could be triggered for the GET operation instead of using Excel's OData feeds.
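As a sketch of those further operations (hedged: this reuses the requests-based CSRF flow and the exceldata entity set from the POST example above; build_entity_url and the update/delete helpers are my own names, not an SAP API):

```python
# Sketch of OData PUT/DELETE against the same service.
# Network calls are kept inside functions; nothing runs on import
# except the URL demonstration at the bottom.


def build_entity_url(base_url, entity_set, key):
    # OData addresses a single entity as <entity-set>('<key>').
    return "%s/%s('%s')" % (base_url, entity_set, key)


def _csrf_headers(session, base_url, auth):
    # Fetch an X-CSRF-TOKEN the same way the POST example does.
    r = session.get(base_url, auth=auth, headers={'X-CSRF-TOKEN': 'Fetch'})
    return {'X-CSRF-TOKEN': r.headers['x-csrf-token']}


def update_entry(base_url, auth, employ, skill):
    import requests  # pip install requests (same library as the POST example)
    s = requests.Session()
    headers = _csrf_headers(s, base_url, auth)
    headers['Content-Type'] = 'application/json'
    body = '{"employ": "%s", "skill": "%s"}' % (employ, skill)
    return s.put(build_entity_url(base_url, 'exceldata', employ),
                 data=body, headers=headers, auth=auth)


def delete_entry(base_url, auth, employ):
    import requests
    s = requests.Session()
    headers = _csrf_headers(s, base_url, auth)
    return s.delete(build_entity_url(base_url, 'exceldata', employ),
                    headers=headers, auth=auth)


print(build_entity_url('http://server:8008/path/service', 'exceldata', 'John'))
```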

 

Thanks to Philip MUGGLESTONE for his video on Python @ SAP HANA Academy

Thanks for reading.


Introduction to Software Development on SAP HANA - Part I


Part II --->

SAP HANA is an in-memory data platform that is deployable as an appliance or in the cloud. At its core, it is an innovative in-memory relational database management system that makes full use of the capabilities of current hardware to increase application performance, to reduce cost of ownership, and to enable new scenarios and applications that were not possible before. With SAP HANA, you have the opportunity to build applications that integrate the business logic, control logic, and the database layer with unprecedented performance. As a developer, one of the key questions is how you can minimize data movements. The more you can do directly on the data in memory next to the CPUs, the better the application will perform.

 

Content

 

  1. Developing Applications for SAP HANA
  2. Database Tasks, Loading, and Modeling
  3. SQLScript Basics
  4. Exposing and Consuming Data with OData
  5. Exposing and Consuming Data with Server-Side JavaScript
  6. SAP HANA Advanced Development Options

 

Target Audience

 

Application developers.

 

1. Developing Applications for SAP HANA

 

1.1 SAP HANA Native Application Basics

 

SAP HANA is a platform, offered both as an appliance and as a cloud service. Inside this platform we can use SAP HANA both as a database and as an application platform. SAP HANA is first and foremost a database, and an in-memory one: all operations happen in memory. It is designed for massively parallel processing. HANA can store data both in a row format and in a column format.
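As a toy illustration of the difference (plain Python, nothing HANA-specific): aggregating one field in a row layout touches every record, while a column layout scans a single contiguous array, which is one reason columnar storage suits analytic scans and compresses well.

```python
# Toy illustration of row vs column storage layouts.
# Row store: one record per entry.
rows = [
    {"id": 1, "product": "A", "amount": 10.0},
    {"id": 2, "product": "B", "amount": 20.0},
    {"id": 3, "product": "A", "amount": 5.0},
]

# The same data organized by column: one list per field.
columns = {
    "id": [1, 2, 3],
    "product": ["A", "B", "A"],
    "amount": [10.0, 20.0, 5.0],
}

# Aggregating one field touches every record in a row store ...
total_rows = sum(r["amount"] for r in rows)

# ... but only one contiguous array in a column store.
total_cols = sum(columns["amount"])

print(total_rows, total_cols)  # -> 35.0 35.0
```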

 

Row-Format-and-Column-Format.png

 

Application server has been built into HANA itself. This application server is named SAP HANA Extended Application Services, or sometimes you will see it abbreviated as XS, XS standing for Extended Application Services.

 

SAP-HANA-XS-in-a-Nutshell.png

 

XS also means extra small, meaning the application server layer is extra small.

 

There are two main types of applications that could leverage SAP HANA. We have the native SAP HANA applications that will be the focus of this blog.

 

Native-SAP-HANA-Applications.png

 

Data intensive logic is written in the database using SQL and SQLScript. Server-side JavaScript is the main programming language for HANA Extended Application Services applications. There is also support for REST-based OData services with both JSON and Atom XML rendering. Furthermore, there is also support for XMLA and MDX access.

 

Non-native SAP HANA applications are also using application server in addition to the one that’s embedded inside of HANA.

 

Non-Native-SAP-HANA-Applications1.png

 

They use ABAP, Java, or .NET, and they use one of the open standard interfaces like JDBC or ODBC to connect to SAP HANA. The ABAP kernel and the ABAP language have been heavily optimized to connect to and consume HANA content.

 

We’ll show how to create schemas, tables, and then the views on top of those tables. We’ll learn about SQLScript. That is SAP’s language for creating stored procedures and allowing to put more and more of logic deeply down inside the database. We’ll learn how we can both expose and consume SAP HANA data via HTTP, leverage that built in Web server to serve out our user interface. We’ll also learn how to code service enablement and validation logic on the server side using JavaScript, and how to use the built in OData service generation framework. We’ll learn about SAP’s UI library for building HTML5-based applications, something that we call SAPUI5. We’ll learn about the complete end-to-end lifecycle management and how to build artifacts, how to translate them, how to package them for delivery through your own system landscape up from development to QA to production. And then finally we’ll look at SAP UI integration services. This is a special offering for building a lightweight portal that runs on top of HANA.

 

1.2 SAP HANA Application Development Tools

 

Tools available for a developer to build applications in SAP HANA. The primary application development tool is the SAP HANA Studio.

 

SAP-HANA-Studio-Editing-Tools.png

 

The studio is based upon Eclipse and it has very much the Eclipse look and feel.

 

The SAP HANA Client generally is used because it has the JDBC and ODBC drivers in it, but it contains a special piece of functionality for us as developers, as well.

 

SAP-HANA-Client.png

 

It contains a tool called regi. That stands for Repository Git. It’s actually this little tool, this little regi tool that’s part of HANA client, that does the check in and check out. The HANA studio itself doesn’t know how to do that.

 

1.3 SAP HANA Software Downloads

 

We have the SAP Community Network (SCN).

 

SAP-Community-Network-SCN.png

 

This is the main umbrella site that most people will go to if they need any information about how to do something with SAP, they need help, they need downloadable content, message boards, all these sort of things are available at the SAP Community Network.

 

Within the SAP Community Network, we have a subset of the community that’s designed for developers. And this is the SAP HANA Developer Center.

 

SAP-HANA-Developer-Center.png

 

You'll also find links where you can download various pieces of free software. For instance, you can download the SAP HANA studio and the SAP HANA client. There's a link to the SAP HANA Academy, which has a series of e-learning videos. But for people who are new to a topic area (maybe you're new to SAP HANA), one of the best learning resources is to watch the forums. Put a watch on them and just read the questions that other people are asking. You're going to learn so much by seeing their questions and the answers that are being posted to them.

 

1.4 Access to SAP HANA Systems (in the Cloud)

 

An SAP HANA system is available on the cloud.

 

1.5 Example Application

 

A complete Hello World example application is built.

 

2. Database Tasks, Loading, and Modeling

 

2.1 Database Schemas and Database Tables

 

The SAP HANA repository manages all artifacts. It stores data artifacts, data-intensive logic, control flow logic, and presentation logic.

 

SAP-HANA-Repository.png

 

The schema is a mandatory database object; all database objects have to belong to a schema.

 

Database-Schema-SAP_HANA_EPM_DEMO.png

 

Tables are created inside of a schema.

 

Database-Tables-for-SAP-HANA.png

 

2.2 Sequences and SQL Views

 

A database sequence is an incrementing list of numeric values.

 

Database-Sequences-for-SAP-HANA.png

 

It allows you to have a unique key that you will auto-increment as you insert new records into the database.

 

A SQL view is a basic join between two or more tables.

 

SQL-Views-for-SAP-HANA.png

 

2.3 Authorizations

 

SAP has delivered some default (built-in) roles, such as CONTENT_ADMIN, MODELING, and PUBLIC.

 

Roles-for-SAP-HANA.png

 

2.4 EPM Demo Schema

 

SAP has built and delivered a demo scenario, which can be used for learning and other purposes.

 

EPM-Demo-Schema.png

 

EPM stands for Enterprise Procurement Management. The EPM demo content has its own schema named SAP HANA EPM demo. Inside that schema there is a variety of tables, views, sequences, synonyms, and other content. There is a data generator, so you can control how much data that you want in this tool.

 

2.5 Single File Data Load of CSVs

 

Let's look at how we can set up an initial data load into a table, so that every time the table is activated in a new system, a base set of data is automatically loaded into it. To do this single-file load of comma-separated values, we need three files created in the content repository. First, we need the CSV file itself.

 

CSV-File.png

 

Most often, you will use Microsoft Excel to create or cleanse the data. Next we need the Table Import Model (TIM).

 

Table-Import-Model-TIM.png

 

This is the file that really defines the destination for the data: the database schema and table we want to insert into every time that table gets activated. And then finally there is a third file that we need to create, and that's the table import data (TID).

 

Table-Import-Data-TID.png

 

This is what connects the CSV file and the model so it connects the target and the base data, the CSV data that we want to load into that target table.

 

2.6 Attribute Views

 

The Attribute view is heavily optimized for processing of joins between multiple tables. The Attribute view may contain calculated columns, where we write formulas or perform conversions on data from other columns.

 

Calculated-Column.png

 

We can also have hierarchies. Hierarchies are drill-in capabilities.

 

2.7 Analytic Views

 

The Analytic View is designed to take advantage of the computational power of SAP HANA and, specifically, to help you with calculating aggregates. Analytic view is actually processed by a different engine than the Attribute view. The Analytic view is processed by the OLAP, or analytic engine, inside of HANA, as opposed to the join and primarily transactional based engine that processes the Attribute view.

 

Analytic-View.png

 

Analytic views always need at least one of what we’ll call a “measure”. A measure is basically anything that can be aggregated; therefore it must be a numeric-based column.

 

2.8 Calculation Views

 

We can choose whether we want to do the design of a Calculation View graphically or using SQLScript.

 

Calculation-View.png

 

The graphical approach allows us to diagram the flow of the logic in the graphical view. We have the ability to group, to union, to join, and perhaps to project again, so a graphical calculation view can have many different processing nodes. In the SQLScript approach, a text editor is opened where we can write SQLScript.

 

2.9 Analytic Privileges

 

SQL privileges are usually set at the schema level or at the table level.

 

Types-of-Privileges.png

 

System privileges are for administrative tasks or development tasks. These are set directly to the user and/or their role. Package privileges are all about controlling editing and activation rights at a package level within the SAP HANA content repository. Analytic privileges allow us to set authorizations at the row level as well. And really, you shouldn't have a view without a corresponding analytic privilege, particularly if you're accessing your views from reporting tools like the BusinessObjects tool framework, which absolutely requires that you have an analytic privilege.

 

3. SQLScript Basics

 

3.1 Introduction to SQLScript

 

SQLScript is SAP’s interface for applications to access SAP HANA. Calculations are executed in the database layer to get the maximum benefit from SAP HANA features such as fast column operations, query optimization, and parallel execution. SQLScript analyzes your statements and it will see which of them can be performed in parallel.

 

Performance-Gains.png

 

In this query we have a select from the product table (query one) and then we have a select from the text table (query two).

 

Code-Example.png

 

They can be compressed down into one query when executed in the database. Next, we have query three and four. Here we’re doing a select count for product text and we’re doing another select count for a different WHERE condition. These two can be processed completely in parallel.

 

The syntax for creating a stored procedure that uses SQLScript is pretty simple.

 

Procedure-Example.png

 

First we say CREATE PROCEDURE. Next we have the READS SQL DATA, and there we specify if this is a read only or read/write procedure.

 

3.2 Create an SQLScript Procedure with SELECT Statement

 

A simple SQLScript procedure is created.

 

3.3 Create an SQLScript Procedure with Calculation Engine (CE) Functions

 

The CE, or the calculation engine, is the main engine that processes all SQLScript statements.

 

Calculation-Engine.png

 

There are several engines inside of HANA.

 

Different-Engines-Different-Responsibilities.png

 

There’s a calculation engine. That’s where calculation views and SQLScript are executed. Then there’s an OLAP engine for analytic processing. That’s where analytical views are executed. And then there’s a join engine, and that’s where attribute views are processed. And all this is within the column store. There’s actually a separate row store engine as well. In addition to all these other engines, we have a SQL optimizer.

 

One of the places where you can potentially run into less-performant operations than what you might expect is when you have to transfer control and data between multiple engines. In particular, this is what potentially happens here if you have SQL statements inside SQLScript. They have to be handed off to the SQL optimizer. But if you can stay inside the calculation engine, then you save that trip over to the SQL optimizer for it to have to do some work. SAP has taken common SQL statements and implemented them directly inside the calculation engine so that we don’t have to go over to the SQL optimizer. And that’s what is called CE functions, or CE built-in functions. Here are some examples.

 

Calculation-Model-Example-with-CE-Functions.png

 

We have a CE function, CE_CALC_VIEW; that allows us to read data from a view. Then we have a CE_COLUMN_TABLE; that allows us to read data from a database table. We have CE_PROJECTION. That’s where we can supply a WHERE condition or narrow our field selection. We have the CE_JOIN. This would be similar to writing a JOIN statement between two tables or a view and a table.

 

3.4 Create an SQLScript Procedure with Imperative Logic

 

Imperative logic, in general, means that we are processing one record at a time. Unlike SQL or CE functions that perform bulk operations on an entire record set, this is one variable or one record at a time. In SQLScript, we have the following logic available:

 

  • IF, ELSE, ENDIF
  • WHILE Loops
  • FOR Loops
  • CASE

 

3.5 Using the SQLScript Debugger

 

The standard Eclipse debugger has several parts to it.

 

Debug-Screen.png

 

Up in the top left-hand corner we have the Debug Session. This shows us which server we’re connected to and which debug session we’re currently in. In the top right-hand corner, we have our Watch area, that would have a listing of our breakpoints and it would show us all of our variables. And from there we’ll be able to see values inside those variables as well. And in the bottom, we have the Code so we can see which line of code we’re currently debugging.

Part II --->

Introduction to Software Development on SAP HANA - Part II


<--- Part I

4. Exposing and Consuming Data with OData

 

4.1 Exposing and Consuming Data – Architecture

 

In the traditional architecture you have the database layer, the application server layer, and a client layer.

 

Classic-SAP-Architecture1.png

 

In HANA Extended Application Services architecture the application server is shrunk down, meaning the XS really stands for “extra small” and it runs inside the database itself.

 

SAP-HANA-XS-Architecture-I.png

 

We use more code pushdown, so we are able to put more data-intensive logic deeply down inside the database itself. We put most of our business logic down in the database layer. UI rendering has been moved out to the client. As client devices, even phones and tablets, have gotten more powerful, we can do the complete UI rendering on the client side. The XS application server layer does the service enablement.

 

The client device makes an HTTP connection to the HANA server.

 

SAP-HANA-XS-Architecture-II.png

 

Inside the HANA server we have three processes involved at the operating system level. The first process is called the ICM, which stands for Internet Communication Manager; this is the Web server itself. The Web server communicates with the XS Engine process. At the lowest level inside of HANA, we have the index server process. If you had to reduce HANA, the database itself, to one operating system process, the index server would be the most important one: it is the main core of the database itself.

 

4.2 SAPUI5

 

SAPUI5 is SAP’s UI development kit for building HTML5–based applications.

 

SAPUI5.png

 

SAPUI5 is a combination of open standards, some third-party open source libraries, and some SAP self-created technology. Some of the technologies we use inside of SAPUI5:

 

  • HTML5 is sort of an umbrella term; it covers new markup elements as well as extensions to the JavaScript language
  • jQuery is very common, very popular open source JavaScript-based UI library
  • CSS3 is for the style and layout
  • OData is like ODBC for the Web. It is an open and easily consumable way of accessing data and business logic over HTTP using standard HTTP concepts.
  • Open Ajax is used for interoperability; it is an open standard that lets multiple widgets and parts of a page, coming from different servers or applications, exchange data and events within that page using client-side JavaScript.
  • LESS is used for themes
  • D3.js is used for data-driven documents (charts and graphs)
  • ARIA is for accessibility
  • SAPUI5 Development Toolkit for HTML5 (SAPUI5 SDK). See SAP Note 1747308 Installation Guide UI development toolkit for HTML5 (SAPUI5).

 

SAPUI5 is used heavily in the analytics when we need to display charts and graphs, collaborations and feeds.

 

Positioning-of-SAPUI5-Technology.png

 

SAPUI5 is a new addition to SAP’s UI strategy and the traditional Web Dynpro technology is still used in the business suite, particularly for the expert users. SAPUI5 is the only technology that SAP delivers for HANA native applications.

 

4.3 Creating a User Interface with SAPUI5

 

A simple little “Hello World” application is built using SAPUI5 development tools.

 

4.4 OData Services

 

OData defines an open but very easy-to-consume standard for interoperating with data over the Web. Whereas Web services use SOAP as the body format, OData embraces a REST-based architecture. This is lighter weight, HTTP-based, and uses the HTTP verbs as the actions in an OData service. The OData Data Model describes data with the Entity Data Model (EDM).

 

OData-Basics.png

 

The OData Data Model defines which entities are used in a service and what the relationships between those entities are. The OData protocol is the communication protocol; it supports CREATE, READ, UPDATE, and DELETE operations by using the HTTP verbs. OData client libraries are prebuilt libraries that help with the consumption of OData services. An OData service exposes an endpoint that allows access to data in the SAP HANA database. It allows any external client or system that can speak HTTP to call the service or consume the data that is part of the OData service model.
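For illustration, the standard OData query options ($select, $filter, $top, $format) compose into a URL like this. The helper and the service path are invented for this sketch, and real code should also URL-encode the option values:

```python
# Sketch: composing an OData query URL from standard query options.
# build_odata_query is a hypothetical helper, not an SAP API; the
# service path and field names are illustrative only.


def build_odata_query(service_url, entity_set, select=None, filt=None,
                      top=None, fmt="json"):
    parts = []
    if select:
        parts.append("$select=" + ",".join(select))
    if filt:
        parts.append("$filter=" + filt)
    if top is not None:
        parts.append("$top=%d" % top)
    parts.append("$format=" + fmt)  # ask for JSON instead of Atom XML
    return "%s/%s?%s" % (service_url, entity_set, "&".join(parts))


url = build_odata_query("/path/to/salesOrders.xsodata", "SalesOrders",
                        select=["SALESORDERID", "GROSSAMOUNT"],
                        filt="CURRENCY eq 'USD'", top=10)
print(url)
```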

 

4.5 Creating a Simple OData Service

 

An OData service definition is built.

 

4.6 Creating a Complex OData Service

 

Associations between entities are built.

 

4.7 Calling an OData Service from the User Interface

 

A user interface is created that consumes and presents OData service.

 

5. Exposing and Consuming Data with Server-Side JavaScript

 

5.1 Server-Side JavaScript (XSJS)

 

On the server side, our programming language for all procedural logic is JavaScript. We have seen how we can use SQLScript as the main programming language for data-intensive logic down in the database. But although SQLScript has some imperative logic features, it is primarily designed for data-intensive operations. Server-side JavaScript is very well suited to lightweight procedural logic. JavaScript is a very open, very widely known programming language.

 

5.2 Creating an XSJS Service

 

A simple server-side JavaScript service is created.

 

Custom-Logic-Functions-and-Variables.png

 

5.3 Extending the XSJS Service

 

More functionality is added in the server-side JavaScript service.

 

5.4 Calling XSJS from the UI

 

A user interface is built using SAPUI5 that consumes custom XSJS services.

 

5.5 Debugging XS JavaScript

 

The debug perspective has Debug view, Breakpoints view, Variables view, Source-code view, and Outline view.

 

Debug-Perspective.png

 

6. SAP HANA Advanced Development Options

 

6.1 Lifecycle Management

 

Lifecycle Management manages software releases, enforces version control, manages product development, monitors software packages, and manages application packages.

 

Lifecycle-Management-Overview.png

 

SUM for SAP HANA is your software update manager. This is the tool that helps you import whole new support packages and revisions into your HANA system, but also to take additional software packages and install them into SAP HANA.

 

6.2 SAP HANA UI Integration Services

 

SAP HANA UI Integration Services is an additional UI development tool.

 

SAP-HANA-UI-Integration-Services.png

 

It is a higher-level user interface design tool. It's even something that you, as a designer, may not have to do yourself; you could give it to a functional person or a business process expert. They only have to drag and drop widgets and arrange them on a screen as they want them. This tool does not replace SAPUI5.

 

Reference

 

openSAP. (2013, December 12). Introduction to Software Development on SAP HANA. Retrieved from https://open.sap.com/course/hana1-1.

 

Additional Resources

 

SAP HANA Developer Guide

 

SAP HANA SQL and System Views Reference


Hacking SAP HANA Websockets


Disclaimer: This is not a production or documented feature - it's also more of a hijack than a hack 

 

 

 

I have been hoping for the inclusion of Websocket support on the HANA DB platform for a while now, and I was a little disappointed it was not packaged in the SPS08 release. My goal when building apps (or products) is to make use of the core platform it's running on as much as possible. I firmly believe that when convincing an IT department, or company, to implement a product or app, the first question is: "How much infrastructure does this need?". This can often be a deal breaker, and it is why I am such a big proponent of the HANA DB + XS App Server integration - it consolidates the requirements into a single investment. Having Websocket technology built directly into XS can be an additional selling point, which developers are starting to expect these days.

 

A little while ago I wrote a blog post on building a dashboard using the awesome Node.js package from Holger Koser, however I have really been wanting to use Websockets in the metric² platform since the get go. Some comments here and here are prime examples of my long lasting hope of seeing the technology being included in the XS Engine platform sooner rather than later. I recently had a little nudge again from John Patterson to dig back into the topic and did manage to hack something together. The most interesting part of this was that once I had it working, I was left wanting just a little more ...

 

Firstly, a little bit about Websockets and why I feel they are important to the app/web development world ...

 

Real-time: In the age of blazingly fast DBs, we need a UI and server-side integration layer (in our case XSJS files) which can display the data to the user with as little overhead as possible. Web Sockets support this by providing a low-latency, near real-time connection between a client and the server.

 

Light Weight: Currently we need to make multiple AJAX calls to our backend server (to the XSJS files) to either perform some complex logic or DML against our database. This is slow and also fairly network intensive, as each request (and response) requires additional handshakes and the packet size is considerably larger than just the intended content inside the package. With Web Sockets, the requests and responses are really just the content themselves.

 

Duplexity: Web Sockets by nature are Full-duplex, implying that we can send and receive data at the same time for multiple requests.

 

Persistence: Web Sockets provide an "open" connection between the server and client after the initial upgraded HTTP request handshake has been made. This lets us perform multiple server-side requests using the same connection. It also lets the server initiate a response without needing a request from the client side.

 

The importance of these 4 factors to the web development world and to HANA XS specifically is that this is the missing link for us to take Web applications to the next level.

 

[Cross-domain support is another great feature!] In this example I was able to successfully have the HTML + JS file local to my PC and execute it against my HANA XSWS service (via a public URL).

 

So onto the more interesting HANA specific parts ...

 

 

I initially realized that HANA XS was using Web Sockets in SPS06, when for some reason the XS Engine debugger was not loading correctly due to my user missing some permissions. After searching through the XS code I came across the folder where the debugger was saved, and it included an interesting file with the suffix xsws, i.e. XSWebService. After doing more digging I found that Websockets were being loaded in the WebDispatcher config file, and I was confident I could find a way to start using it for app development.

 

After spending some time trying to create my own HANA package with these file extension types, I realized that the name is somehow being filtered, and only this file, and more specifically in this location, can be executed; otherwise the XS Engine passes back a 403 (Forbidden). I was a little disappointed, but it didn't discourage me ... and I decided I would simply re-purpose my Debugger.xsws file for my own needs. After a quick backup, I was ready to do some coding ...

 

Essentially, an xsws file is just like any xsjs file, with the exception that it returns its responses through socket code instead of the regular xsjs response handling. You can do things like $.import for additional libraries as well as perform $.sql functions. Here is a small snippet from the code over on Github.

 

Debugger.xsws


$.ws.onmessage = function (evt) {
    handleIncomingRequest(evt);
};

$.ws.onerror = function (evt) {
    $.trace.debug("error on connection: " + evt.message);
    throw new Error(evt.message);
};

$.ws.onclose = function (evt) {
    $.trace.debug("connection closed, disabling debugger");
    debugObject.enabled = false;
    throw new Error("Close status " + evt.code + ":" + evt.reason);
};

And this is some of our client-side code making calls to the xsws service:

App.js

// Create a new WebSocket. This works fine
var socket = new WebSocket('ws://<ENTER YOUR HANA SERVER HERE>/sap/hana/xs/debugger/api/Debugger.xsws', 'xsCrossfire');

// Handle any errors that occur.
socket.onerror = function (error) {
    console.log('WebSocket Error: ' + error);
};

// Send the message through the WebSocket.
socket.send(message);

 

As you can see, the code and requests are very simple and straightforward; in fact, to me they are a little easier than jQuery + AJAX.

 

One caveat I did find was around connection persistence: in the event you have an error on the server side, the socket connection can break. In this case you would need an error-handling state that attempts a reconnect before submitting any new requests.
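A hedged sketch of what such an error-handling state could look like (the reconnect wiring and delays below are my own assumptions, not part of the hack): an exponential backoff keeps retries cheap, and a new socket is only created after the old one closes.

```javascript
// Hypothetical reconnect helper (sketch; delay values and names are assumptions).

// Exponential backoff, capped: 1s, 2s, 4s, ... up to 30s.
function backoffDelay(attempt) {
  return Math.min(1000 * Math.pow(2, attempt), 30000);
}

function connectWithRetry(url, protocol, onOpen) {
  var attempt = 0;
  function open() {
    var socket = new WebSocket(url, protocol);
    socket.onopen = function () {
      attempt = 0;      // reset the backoff once we are connected again
      onOpen(socket);
    };
    socket.onclose = function () {
      setTimeout(open, backoffDelay(attempt++)); // retry after a growing delay
    };
  }
  open();
}
```

With this in place, any pending request logic can simply wait for `onOpen` to fire again instead of sending on a dead socket.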

 

 

A quick screenshot of the running test app I developed, showing how the server sends persistence frames (pings/pongs) to validate the open connection, along with the client request and server response.


 

If you are interested in trying this out on your test or dev instance I have posted the code on Github. Follow these simple instructions to get up and running ...

 

1.) Using the Web IDE, open SAP -> HANA -> XS -> Debugger

2.) Make a backup of the Debugger.xsws file or simply comment out the contents.

3.) Paste the code into the file from Github

4.) Create the Websocket.html file and paste the contents of the Github file

5.) Create the app.js file and paste the contents of the Github file

6.) Open the Websocket.html file and enter any SQL statement

 

(Be sure you have the debugger security role)

 

As you can see from the files, the required methods for Web Sockets are really minimal and barely get in your way at all.

 

Conclusion

 

At the start of the article I mentioned I was left wanting a little more ... this was mainly because, having wanted Web Sockets for such a long time, I realized that using them alone is not really enough. In order for us to really take the XS Engine to the next level, we also need to consider a "Publish/Subscribe" feature and a server-side timer feature.

 

The Pub-Sub approach would essentially allow us to push data from the server side on a specific occurrence of an event, much like a DB trigger: if a new row gets inserted, push the newly inserted record to the registered subscriber.

 

The server side timer feature would allow us to create a server side timer (similar to a xsjob) which would persist and execute the server side function every x secs/mins/hours.

 

Overall I am pretty impressed with the opportunities Web sockets will bring to the XS Engine. I am hoping they will be included in the next release.

HANA - Development Comment Function for Better Teamwork


Hi,

 

If possible, add a development comment/note function to SAP HANA Studio.

I think this would be a nice feature for better and faster teamwork during development.

 

Kind Regards 

Tweets analysis with Node.js and SAP Hana


I had the idea to write this blog post because I was waiting for the first public beta of OS X Yosemite, and after the release of the beta, I was curious to know what people were saying about it.

 

The idea here is to get tweets posted with the #OSXYosemite hashtag, insert them into a SAP Hana table, and analyse the sentiment using Text Analysis.

 

An easy way to catch tweets is using Node.js, and with the help of the amazing "SAP Hana Database Client for Node" module, it stays incredibly simple.

 

Pre-requirements:

 

1. Node.js installed

2. Twitter dev account

3. SAP Hana environment

 

Let's go coding...

 

 

1. Creating SAP Hana table


I created an XS Project and a schema called TWEETS, and defined a new table named Tweets:

// Tweets.hdbtable
table.schemaName = "TWEETS";
table.tableType = COLUMNSTORE;
table.columns = [
    {name = "ID"; sqlType = BIGINT; nullable = false;},
    {name = "USER_NAME"; sqlType = NVARCHAR; length = 100; nullable = false;},
    {name = "TEXT"; sqlType = NVARCHAR; nullable = false; length = 340;},
    {name = "PROFILE_IMG_URL"; sqlType = NVARCHAR; nullable = true; length = 200;},
    {name = "KEYWORD"; sqlType = NVARCHAR; nullable = false; length = 100;}
];
table.primaryKey.pkcolumns = ["ID"];

This table is used to store the caught tweets; the Node.js app (described below) will insert data into this table.

 

2. Getting Tweets

 

As said before, this part was done in Node.js, using the HDB and NTwitter modules. Follow the steps below to create your "Tweets Catcher":

 

Install hdb module, using npm install hdb.

Install ntwitter module, using npm install ntwitter.

 

Create a JSON file with your Twitter access token values. In my case, I created a twitter.json file with the content:

// twitter.json
{    "consumer_key": <YOUR_API_KEY>,    "consumer_secret": <YOUR_API_SECRET>,    "access_token_key": <YOUR_ACCESS_TOKEN>,    "access_token_secret": <YOUR_ACCESS_TOKEN_SECRET>
}

 

Create another JSON file with your SAP Hana instance credentials, like the one below:

// hdb.json
{  "host": <HOSTNAME_OR_IP_ADDRESS>,  "port": <PORT>,  "user": <USERNAME>,  "password": <PASSWORD>
}

 

Finally, create the app.js file, with code below:


var hdb         = require('hdb'),
    hdbAuth     = require('./hdb.json'),
    Twitter     = require('ntwitter'),
    twitterAuth = require('./twitter.json');

var client  = hdb.createClient(hdbAuth);
var keyword = '#OSXYosemite';    // Tweets with this keyword will be caught
var twitter = new Twitter(twitterAuth);

function saveTweet(tweet) {
    client.connect(function (err) {
        if (err) {
            return console.error('Connect error', err);
        }
        var sql = 'INSERT INTO "TWEETS"."Tweets::Tweets" values(' + tweet.id + ', \'' +
                  tweet.user.screen_name + '\', \'' +
                  tweet.text.replace(/'/g, " ") + '\', \'' +
                  tweet.user.profile_image_url.replace(/'/g, '') + '\', \'' +
                  keyword + '\')';
        console.log('SQL:', sql);
        client.exec(sql, function (err, affectedRows) {
            client.end();
            if (err) {
                return console.error('Insert error:', err);
            }
            console.log('Added with success!');
        });
    });
}

function getTwitterStream() {
    twitter.stream('statuses/filter', { track: keyword }, function (stream) {
        stream.on('data', function (data) {
            console.log('New tweet!');
            saveTweet(data);
        });
        stream.on('error', function (err) {
            console.log('ERROR');
            console.log(err);
        });
    });
}

getTwitterStream();
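One thing worth noting about saveTweet: building the INSERT by string concatenation forces the quote-stripping workaround and is fragile against odd tweet content. A hedged alternative sketch using the hdb module's prepared statements (the helper names here are my own, not from the original app):

```javascript
// Hypothetical parameterized-insert helpers (sketch; names are assumptions).

// Map a raw tweet object to the column order of "TWEETS"."Tweets::Tweets".
function tweetToParams(tweet, keyword) {
  return [
    tweet.id,
    tweet.user.screen_name,
    tweet.text,
    tweet.user.profile_image_url,
    keyword
  ];
}

// Insert via a prepared statement, so quotes in tweet text need no escaping.
function saveTweetPrepared(client, tweet, keyword, done) {
  var sql = 'INSERT INTO "TWEETS"."Tweets::Tweets" VALUES (?, ?, ?, ?, ?)';
  client.prepare(sql, function (err, statement) {
    if (err) { return done(err); }
    statement.exec(tweetToParams(tweet, keyword), done);
  });
}
```

The placeholders also mean the same statement can be reused for every tweet in the stream.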

Start getting tweets: node app.js

 

The console output will be something like that:

GetTweets_—_node_—_144×44.png

 

I kept this app running for about 2 hours on the day the OS X Yosemite public beta was released, just to get some sample data. The Tweets table was populated with more than 150 tweets.

SAP_HANA_Development_-__TWEETS___Tweets__Tweets__-_Eclipse_-__Users_Sasai_Documents_Development_HANA_workspace.png

 

 

3. Analysing the sentiment


Since SP5, SAP Hana provides a service to analyse unstructured data, like texts. I created a FullText index analysing the Text column of the Tweets table, using the following SQL statement:


CREATE FullText INDEX "TWEETS_FT" ON "TWEETS"."Tweets::Tweets"(Text) TEXT ANALYSIS ON CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER';


SAP Hana created a new table called $TA_TWEETS_FT. Two columns of this new table are important for this exercise: TA_TYPE, which classifies the analysed text into a specific type, and TA_COUNTER, which shows the number of occurrences of that type. I'm looking for the following types: StrongPositiveSentiment, WeakPositiveSentiment, NeutralSentiment, WeakNegativeSentiment and StrongNegativeSentiment. As an option, you can use the types related to emoticons, like StrongPositiveEmoticon, etc.


The table populated:

SAP_HANA_Development_-__TWEETS____TA_TWEETS_FT__-_Eclipse_-__Users_Sasai_Documents_Development_HANA_workspace.png


The next step is to create the OData service, but before that, I created a view that the OData service will be based on. This view selects data from $TA_TWEETS_FT and returns two columns, one with the sentiment and the other with the percentage. The query used in the view:


SELECT "TA_TYPE", ROUND("UNIT_VALUE"/ "TOTAL_VALUE" * 100,2) AS "VALUE_PERCENTAGE" 
FROM 
( SELECT "TA_TYPE", SUM("TA_COUNTER") AS "UNIT_VALUE" FROM "TWEETS"."$TA_TWEETS_FT"   WHERE TA_TYPE IN ('StrongPositiveSentiment', 'WeakPositiveSentiment', 'NeutralSentiment', 'WeakNegativeSentiment', 'StrongNegativeSentiment')  GROUP BY "TA_TYPE" ) AS TABLE_UNIT,
( SELECT SUM("TA_COUNTER") AS "TOTAL_VALUE" FROM "TWEETS"."$TA_TWEETS_FT"   WHERE TA_TYPE IN ('StrongPositiveSentiment', 'WeakPositiveSentiment', 'NeutralSentiment', 'WeakNegativeSentiment', 'StrongNegativeSentiment') ) AS TABLE_TOTAL ORDER BY VALUE_PERCENTAGE DESC

And the result:

SAP_HANA_Development_-__Users_Sasai_Documents_Development_HANA_workspace_SQL_SELECT_SENTIMENTAL_sql_System__HDB_Host__104_40_128_187_Instance__00_Connected_User__SYSTEMSystem_Usage__Custom_System_-_Eclipse_-__Users_Sasai_Documents_Developme.png
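The ratio logic in the query can also be sketched in plain JavaScript — a hypothetical helper (not part of the original project) that turns a map of sentiment counts into the same percentage ranking:

```javascript
// Hypothetical helper mirroring the SQL percentage calculation (sketch).
// counts: e.g. { WeakPositiveSentiment: 3, NeutralSentiment: 1 }
function sentimentPercentages(counts) {
  var total = 0, type;
  for (type in counts) { total += counts[type]; }
  return Object.keys(counts)
    .map(function (t) {
      // equivalent of ROUND(unit / total * 100, 2)
      return { type: t, percentage: Math.round(counts[t] / total * 10000) / 100 };
    })
    .sort(function (a, b) { return b.percentage - a.percentage; }); // DESC
}
```

For the two-entry example above, this returns 75% for WeakPositiveSentiment and 25% for NeutralSentiment, in descending order, just like the view.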

 

The view I created, to be used by the OData service:

 

// SentimentalView.hdbview
schema="TWEETS";
query="SELECT \"TA_TYPE\", ROUND(\"UNIT_VALUE\"/ \"TOTAL_VALUE\" * 100,2) AS \"VALUE_PERCENTAGE\" FROM ( SELECT \"TA_TYPE\", SUM(\"TA_COUNTER\") AS \"UNIT_VALUE\" FROM \"TWEETS\".\"$TA_TWEETS_FT\" WHERE TA_TYPE IN ('StrongPositiveSentiment', 'WeakPositiveSentiment', 'NeutralSentiment', 'WeakNegativeSentiment', 'StrongNegativeSentiment') GROUP BY \"TA_TYPE\" ) AS TABLE_UNIT, ( SELECT SUM(\"TA_COUNTER\") AS \"TOTAL_VALUE\" FROM \"TWEETS\".\"$TA_TWEETS_FT\" WHERE TA_TYPE IN ('StrongPositiveSentiment', 'WeakPositiveSentiment', 'NeutralSentiment', 'WeakNegativeSentiment', 'StrongNegativeSentiment') ) AS TABLE_TOTAL ORDER BY VALUE_PERCENTAGE DESC";

 

4. Creating OData service

 

The OData service is used to provide the sentiment ranking to the UI, built using SAPUI5.

I created an OData service based on the SentimentalView view created in the previous step.

 

// Sentimental.xsodata
service {
 "Tweets::SentimentalView" key ("TA_TYPE");
}


5. Creating SAPUI5 Chart

 

The final step was the creation of a SAPUI5 application with just one view, to show a chart with the result of the OData service. The code of the view follows.

 

// Chart.view.js
createContent : function(oController) {
    var url = <URL_ODATA>;
    var oModel = new sap.ui.model.odata.ODataModel(url, true);
    var oDataset = new sap.viz.ui5.data.FlattenedDataset({
        dimensions : [{
            axis : 1,
            name : 'Sentimental',
            value : "{TA_TYPE}"
        }],
        measures : [{
            name : 'Percentual',
            value : '{VALUE_PERCENTAGE}'
        }],
        data : {
            path : "/SentimentalView"
        }
    });
    var oPieChart = new sap.viz.ui5.Pie({
        width : "80%",
        height : "400px",
        plotArea : {},
        title : {
            visible : true,
            text : 'OS X Yosemite Sentimental'
        },
        dataset : oDataset
    });
    oPieChart.setModel(oModel);
    return oPieChart;
}

 

The final result:

 

104_40_128_187_8000_Tweets_OSXYosemiteSentimentals_WebContent_.png

 

I hope this post is useful and gives you an idea of one kind of SAP Hana development.
