
Creating a HANA Workflow using HADOOP Oozie

There are several standard ways that HANA Procedures can be scheduled:

e.g:

1) HANA SPS7 XS  Job Scheduling (Job Scheduling | SAP HANA)

2) SAP DataService   (How to invoke SAP HANA stored procedures from D... | SAP HANA)

 

 

For those who use opensource HADOOP for managing Big Data, OOZIE can also be used to execute HANA procedures in a workflow.

 

For a good overview of HADOOP terms and definitions please refer to:

SAP HANA - Hadoop Integration # 1

 

 

A Big Data workflow integrating HADOOP and HANA might be:

[Diagram: Big Data workflow integrating HADOOP and HANA]

 

 

 

 

The focus of the remainder of this blog is to demonstrate how HANA server-side JavaScript (XSJS) can be used to execute HANA procedures [point d) in the diagram above] via an OOZIE workflow:

 

Oozie is currently described in Wikipedia as

" a workflow scheduler system to manage Hadoop jobs. It is a server-based Workflow Engine specialized in running workflow jobs with actions that run Hadoop MapReduce and Pig jobs. Oozie is implemented as a Java Web-Application that runs in a Java Servlet-Container.

For the purposes of Oozie, a workflow is a collection of actions (e.g. Hadoop Map/Reduce jobs, Pig jobs) arranged in a control dependency DAG (Direct Acyclic Graph). A "control dependency" from one action to another means that the second action can't run until the first action has completed. The workflow actions start jobs in remote systems (Hadoop or Pig). Upon action completion, the remote systems call back Oozie to notify the action completion; at this point Oozie proceeds to the next action in the workflow.

Oozie workflows contain control flow nodes and action nodes. Control flow nodes define the beginning and the end of a workflow (start, end and fail nodes) and provide a mechanism to control the workflow execution path (decision, fork and join nodes). Action nodes are the mechanism by which a workflow triggers the execution of a computation/processing task. Oozie provides support for different types of actions: Hadoop MapReduce, Hadoop file system, Pig, SSH, HTTP, eMail and Oozie sub-workflow. Oozie can be extended to support additional types of actions.

Oozie workflows can be parameterized (using variables like ${inputDir} within the workflow definition). When submitting a workflow job, values for the parameters must be provided. If properly parameterized (using different output directories), several identical workflow jobs can run concurrently. "

 

 

I think I've also read that Oozie was originally designed by Yahoo (parts of which later became Hortonworks) for managing their complex HADOOP workflows.

It is opensource and can be used with all distributions of HADOOP (e.g. Cloudera, Hortonworks, etc.).

 

Oozie workflows can be defined in XML, or visually via the Hadoop User Interface (Hue - The UI for Apache Hadoop).

 

Below I will demonstrate a very simple example workflow of HANA XSJS being called  to:

 

A)  Delete the Contents of a Table in HANA

B)  Insert a Single Record in the Table

 

To call procedures in HANA from HADOOP I created 2 small programs:

1) in HANA a generic HANA XSJS for calling procedures (callProcedure.xsjs)

2) In HADOOP a generic JAVA program for calling HANA XSJS (callHanaXSJS.java)

 

The HANA XSJS program has up to 7 input parameters:

iProcedure - is the procedure to be called  (mandatory)

iTotalParameters - is the number of additional input parameters used by the Procedure (Optional - default 0)

iParam1 to iParam5  - are the input parameters of the procedure.

 

In the body of the Response I provide the basic input and output info (including DB errors) in JSON format.

 

HANA: callProcedure.xsjs

var maxParam = 5;

var iProcedure       = $.request.parameters.get('iProcedure');

var iTotalParameters = $.request.parameters.get("iTotalParameters");

var iParam1          = $.request.parameters.get("iParam1");

var iParam2          = $.request.parameters.get("iParam2");

var iParam3          = $.request.parameters.get("iParam3");

var iParam4          = $.request.parameters.get("iParam4");

var iParam5          = $.request.parameters.get("iParam5");

 

var output = {};

 

output.inputParameters = {};

output.inputParameters.iProcedure = iProcedure;

output.inputParameters.iTotalParameters = iTotalParameters;

output.inputParameters.iParam1 = iParam1;

output.inputParameters.iParam2 = iParam2;

output.inputParameters.iParam3 = iParam3;

output.inputParameters.iParam4 = iParam4;

output.inputParameters.iParam5 = iParam5;

 

output.Response = [];

var result = "";

 

// Check inputs

//if (iProcedure === '') {

if (typeof iProcedure  === 'undefined' ) {

  result = "ERROR: '&iProcedure=' Parameter is Mandatory";

  output.Response.push(result);

  $.response.status = $.net.http.INTERNAL_SERVER_ERROR;

  $.response.setBody(JSON.stringify(output));

 

}

 

else {

 

  var conn = $.db.getConnection();

  var pstmt;

 

  if (typeof  iTotalParameters === 'undefined') {

  iTotalParameters = 0;

  }

 

  var sql = "call \"_SYS_BIC\".\"" + iProcedure + "\"(";

 

  if (iTotalParameters > 0 && iTotalParameters <= maxParam) {

  var i;

  for (i=0;i< iTotalParameters;i++) {

  if (i===0) { sql += "?"; }

  else {sql += ",?"; }

  }

  }

  else {

  if (iTotalParameters !== 0 ) {

  result = "WARNING: '&iTotalParameters-' Parameter shoule be between 0 and " +  maxParam;

  output.Response.push(result);

  }

  }

 

  sql += ")";

 

  output.inputParameters.sql = sql;

 

  try{

  //pstmt = conn.prepareStatement( sql );   //used for SELECT

  pstmt = conn.prepareCall( sql );          //used for CALL

 

  if (iTotalParameters >= 1) { pstmt.setString(1,iParam1);}

  if (iTotalParameters >= 2) { pstmt.setString(2,iParam2);}

  if (iTotalParameters >= 3) { pstmt.setString(3,iParam3);}

  if (iTotalParameters >= 4) { pstmt.setString(4,iParam4);}

  if (iTotalParameters >= 5) { pstmt.setString(5,iParam5);}

 

  // var hanaResponse = pstmt.execute();

  if(pstmt.execute()) {

  result = "OK:";

  var rs = pstmt.getResultSet();

  result += JSON.stringify(pstmt.getResultSet());

  }

 

  else {

         result += "Failed to execute procedure";

     }

  } catch (e) {

     result += e.toString();

  }

 

  conn.commit();

 

 

  conn.close();

 

  //var hanaResponse = [];

 

 

  output.Response.push(result);

  $.response.setBody(JSON.stringify(output));

}

 

 

The HADOOP Java program accepts a minimum of 4 input arguments:

arg[0]  - URL of a HANA XSJS, accessible via the HADOOP cluster

arg[1]  - HANA User name

arg[2]  - HANA Password

arg[3]  - HADOOP HDFS Output directory for storing response

arg[4 to n] - are used for the input parameters for the HANA XSJS called

 

HADOOP:  callHanaXSJS.java

package com.hanaIntegration.app;

 

/**

* Calls a HANA serverside javascript (xsjs)

*  INPUTS: (mandatory) HANA XSJS URL, username, password & HDFS output directory

*          (optional) n parameters/arguments

*  OUTPUT: writes the HANA XSJS response to a logfile on HDFS

*

*/

 

import java.io.IOException;

import java.io.InputStream;

import java.io.OutputStream;

import java.net.HttpURLConnection;

import java.net.URL;

import org.apache.commons.io.IOUtils;

import org.apache.hadoop.conf.Configuration;

import org.apache.hadoop.fs.FSDataOutputStream;

import org.apache.hadoop.fs.FileSystem;

import org.apache.hadoop.fs.Path;

 

 

public class callHanaXSJS

{

  public static void main(String[] args) throws IOException

    {

 

  String sUrl = args[0];

 

  //Append XSJS command parameters

  if (args.length > 4) {

  //append first parameter

  sUrl += "?" + args[4];

  //add subsequent

  for(int i= 5;i < args.length;i++) {

  sUrl += "&" + args[i];

  }

  }

 

  System.out.println("HANA XSJS URL is: " + sUrl);

        URL url = new URL(sUrl);

 

        HttpURLConnection conn = (HttpURLConnection)url.openConnection();

 

        String userpass = args[1] + ":" + args[2];   //args[1] user  args[2] password

        String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userpass.getBytes());

 

 

        conn.setRequestProperty ("Authorization", basicAuth);

 

        conn.connect();

        InputStream connStream = conn.getInputStream();

 

 

        // HDFS Output

 

        FileSystem hdfs = FileSystem.get(new Configuration());

        FSDataOutputStream outStream = hdfs.create(new Path(args[3], "HANAxsjsResponse.txt"));

        IOUtils.copy(connStream, outStream);

 

        outStream.close();

 

 

 

        connStream.close();

        conn.disconnect();

    }

}

NOTE: HADOOP Java programs are compiled as JARs and stored on HADOOP HDFS prior to execution by OOZIE.

 

With the small programs in place I will now show the setup in Oozie using HUE.

 

Below are screenshots from my small Hortonworks Hadoop HDP2.0 cluster running on EC2.

(For setting up your own cluster or downloading a test virtual machine see HDP 2.0 - The complete Hadoop 2.0 distribution for the enterprise.)

 

 

[Screenshot: the workflow in Hue]


Zoomed in a bit to the 2 workflow tasks:

[Screenshot]




The definition of the first workflow task is:

[Screenshots]


The JAR I created was:

/apps/hanaIntegration/callHanaXSJS-WF/lib/callHanaXSJS-1.0-SNAPSHOT.jar


The arguments passed to call a delete procedure (no parameters) in HANA are:

${hanaXsjsUrl}  ${user} ${password} ${outputDir} ${procedure_delete}

 

As this is the first task I also delete and create a directory to store the log files of each task.

This will store the JSON returned by the HANA XSJS.

 

 

The Second workflow task is:

[Screenshot]

The arguments passed to call an INSERT procedure (no parameters) in HANA are:

${hanaXsjsUrl}  ${user} ${password} ${outputDir} ${procedure_insert} ${insertTotalParams}  ${insertId}  ${insertField1}


The following XML workflow is then created at runtime:

[Screenshot]



I can then submit/schedule the workflow:

[Screenshot]


In my test I passed the following parameters to the workflow:

(NOTE: unfortunately the order of input parameters via HUE is currently messy. If manually creating the XML this can be tidied up into a more logical order.)


[Screenshot]


In a more logical sequence this is:

The following are used by both tasks:

${hanaXsjsUrl} http://ec2-54-225-226-245.compute-1.amazonaws.com:8000/OOZIE/OOZIE_EXAMPLE1/services/callProcedure.xsjs

${user} HANAUserID

${password} HanaPAssword

${outputDir} hdfs://ip-xx-xxx-xx-xx.ec2.internal:8020/apps/hanaIntegration/callHanaXSJS-log


Used by Delete task

${procedure_delete} iProcedure=OOZIE.OOZIE_EXAMPLE1.procedures/deleteTestTable


Used by Insert task

 

${procedure_insert} iProcedure=OOZIE.OOZIE_EXAMPLE1.procedures/create_record

${insertTotalParams} iTotalParameters=2

${insertId} iParam1=10

${insertField1} iParam2=fromHADOOP
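For reference, the two procedures referenced by ${procedure_delete} and ${procedure_insert} contain nothing more than a DELETE and an INSERT. A minimal SQLScript sketch of their logic is below (in practice these were repository procedures activated under _SYS_BIC; the table name "OOZIE"."TEST_TABLE" and its columns ID and FIELD1 are assumptions for illustration):

CREATE PROCEDURE "OOZIE"."DELETE_TEST_TABLE" ()
LANGUAGE SQLSCRIPT AS
BEGIN
  -- clear out all existing rows of the (assumed) test table
  DELETE FROM "OOZIE"."TEST_TABLE";
END;

CREATE PROCEDURE "OOZIE"."CREATE_RECORD" (IN iId INTEGER, IN iField1 NVARCHAR(100))
LANGUAGE SQLSCRIPT AS
BEGIN
  -- insert the single record passed in from the workflow (iParam1/iParam2)
  INSERT INTO "OOZIE"."TEST_TABLE" (ID, FIELD1) VALUES (:iId, :iField1);
END;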



Once the Workflow runs we see the following:

[Screenshots]


The following log files were created by each task:

[Screenshot]


The  Insert task Log file shows as:

[Screenshot]




Finally we can check in HANA to confirm the record has been created:

 

[Screenshot]




OK, for inserting one record this isn't very exciting and is a bit of overkill, but conceptually this enables the power of HADOOP and HANA to be harnessed and combined in a single workflow.

 

BTW: I'll upload my code onto GitHub shortly for those that are interested. I welcome all comments and enhancements to my code.


Dynamic Charts in HANA XS

SAPUI5 VIZ Charts are great but in some scenarios you may need functionality not yet supported:

 

For example:

D3 Path Transitions

 

[Animated GIF: 1.gif]

Above is an animated GIF of a HANA XS HTML page that calls a HANA XSJS every second and appends the latest result to the FAR RIGHT, shifting the existing results to the LEFT.

 

The HANA XSJS simply calls "select rand() from dummy".

 

The code to replicate this:

 

random.xsjs

function getRandom() {

 

 

  var list = [];

 

 

 

  function getRandom(rs) {

  return {

  "random" : rs.getDecimal(1)

  };

  }

 

  var body = '';

 

  

  try {

  var query = "select rand() from dummy";

  var conn = $.db.getConnection();

  var pstmt = conn.prepareStatement(query);

  var rs = pstmt.executeQuery();

 

 

  while (rs.next()) {

  list.push(getRandom(rs));

  }

 

 

  rs.close();

  pstmt.close(); }

  catch (e) {

  $.response.status = $.net.http.INTERNAL_SERVER_ERROR;

  $.response.setBody(e.message);

  return;

  }

 

 

 

  body = JSON.stringify({

  "entries" : list

  });

 

 

 

  $.response.contentType = 'application/json; charset=UTF-8';

  $.response.setBody(body);

  $.response.status = $.net.http.OK;

}

 

 

 

 

getRandom();

 

 

dynamicChart.html

<html><head> 

    <meta http-equiv='X-UA-Compatible' content='IE=edge' /> 

    <title>Hello World</title> 

 

    <script id='sap-ui-bootstrap'

        src='http://xxxx.xxxxx.xxxx.xxxx:8001/sap/ui5/1/resources/sap-ui-core.js'

        data-sap-ui-theme='sap_goldreflection' 

        data-sap-ui-libs='sap.ui.commons'></script>  

 

   <script src="http://xxxx.xxxxx.xxxx.xxxx:8001/sap/ui5/1/resources/sap/ui/thirdparty/d3.js"></script>

  

   <style>

    @import url(../style.css?aea6f0a);

 

  .x.axis line {

   shape-rendering: auto;

  }

 

  .line {

   fill: none;

   stroke: #000;

   stroke-width: 1.5px;

  }

  </style>

 

 

<script> 

 

 

 

 

 

 

  var vRandom = 0;

 

  var n = 40,

  random = d3.random.normal(0, .2);

 

 

  function chart(domain, interpolation, tick) {

 

   var data = d3.range(n).map(random);

 

 

   var margin = {top: 6, right: 0, bottom: 6, left: 40},

       width = 960 - margin.right,

       height = 120 - margin.top - margin.bottom;

 

 

   var x = d3.scale.linear()

       .domain(domain)

       .range([0, width]);

 

 

   var y = d3.scale.linear()

       .domain([-1, 1])

       .range([height, 0]);

 

 

   var line = d3.svg.line()

       .interpolate(interpolation)

       .x(function(d, i) { return x(i); })

       .y(function(d, i) { return y(d); });

 

 

   //var svg = d3.select("body").append("p").append("svg")

   // Custom Mode

   var svg = d3.select(".TickChart").append("svg")

       .attr("width", width + margin.left + margin.right)

       .attr("height", height + margin.top + margin.bottom)

       .style("margin-left", -margin.left + "px")

     .append("g")

       .attr("transform", "translate(" + margin.left + "," + margin.top + ")");

 

 

   svg.append("defs").append("clipPath")

       .attr("id", "clip")

     .append("rect")

       .attr("width", width)

       .attr("height", height);

 

 

   svg.append("g")

       .attr("class", "y axis")

       .call(d3.svg.axis().scale(y).ticks(5).orient("left"));

 

 

   var path = svg.append("g")

       .attr("clip-path", "url(#clip)")

     .append("path")

       .data([data])

       .attr("class", "line")

       .attr("d", line);

 

 

   tick(path, line, data, x);

  }

 

 

 

 

  var html1 = new sap.ui.core.HTML("html1", {

        // the static content as a long string literal

        content:

                "<div class='TickChart'>" +

  "</div>"

                ,

        preferDOM : false,                     

        // use the afterRendering event for 2 purposes

        afterRendering : function(e) {

       

                        

            // Call the Chart Function   DYNAMIC CHART

  chart([0, n - 1], "linear", function tick(path, line, data) {

 

  //

  var aUrl = '../services/random.xsjs';

  var vRand = 0;   //random();

 

     jQuery.ajax({

        url: aUrl,

        method: 'GET',

        dataType: 'json',

        success: function (myJSON) {

                vRandom = myJSON.entries[0].random;                      

              },

        error: function () {sap.ui.commons.MessageBox.show("OK",

    "ERROR",

  oBundle.getText("error_action") );  }

     });

     //sap.ui.core.BusyIndicator.show();

     //sap.ui.core.BusyIndicator.hide();

    

 

 

 

 

   // push a new data point onto the back

   data.push(vRandom); // random()

 

   // pop the old data point off the front

   data.shift();

 

   // transition the line

   path.transition()

       .duration(1000) // wait between reads    //1000 = 1 Second Refresh

       .ease("linear")

       .attr("d", line)

       .each("end", function() { tick(path, line, data); } );

  });

               

 

        }

    });

 

 

    html1.placeAt('content'); 

</script>

 

 

</head>

<body class='sapUiBody'>

    <div id='content'></div>

</body>

</html>

Import PostgreSQL Tables Containing Free-Text Data into SAP HANA

We at SAP Research-Boston have been using SAP HANA's data analytics capabilities for quite some time now in our research with medical datasets (e.g. the well-known MIMIC2 datasets). Thus, in our line of work, we regularly need to import data from various sources into our SAP HANA databases.

 

Luckily, SAP HANA provides a handful of features to import data in different ways from different kinds of sources. Importing data using .csv files is one of the effective methods for migrating data to SAP HANA from the popular DBMSs. HANA can also be extremely fast when importing .csv files on the server side, using its control (.ctl) files. Much has already been written all around the web on these processes of importing .csv files into SAP HANA (a very good overview can be found here: http://wiki.scn.sap.com/wiki/display/inmemory/Importing+CSV+files+into+SAP+HANA). However, one challenge that may not have been thoroughly discussed is dealing with .csv files that contain free-text or unrestricted natural language as the data to be imported. In this blog, I will be presenting the issues one may encounter when dealing with free-text data, and also the details of how to preprocess this free-text data to prepare .csv files that are ready to be imported into SAP HANA with zero problems. I will be using the PostgreSQL database as the source database for my examples.

 

The Problem

To migrate data to SAP HANA from any of the other popular database systems, we first need to build the table structures in SAP HANA, representing the data-types of the fields correctly following SAP HANA's standard. The list of supported data-types in SAP HANA can be found here: http://help.sap.com/hana/html/_csql_data_types.html.

 

After the table structures are built on SAP HANA, the next step is to prepare the .csv files using the source database system, where each .csv file contains the data of one table.

 

All the common database systems are equipped with the feature of exporting the data of a table as a .csv file. These .csv files usually follow the same structure, where each record (or row) is delimited by line-break or a newline character (\n). Moreover, the text-type values are usually enclosed by double quote characters ("), and in case a double-quote character appears within a text-type value, it is usually escaped by another double-quote character (") appearing immediately before it.

 

Now, PostgreSQL, like many other database systems, allows one to choose any character for this escape character. However, like most other databases, it always delimits the records by a newline character (\n), with no option to choose otherwise.

 

In contrast, when importing .csv files, SAP HANA allows one to choose any character that has been used for delimiting the records, which is generally chosen to be a newline character (\n) in most cases. However, when importing .csv files, SAP HANA uses a backslash character (\) as the only escape character, with no option to choose otherwise.

 

Therefore, when exporting a table of any database system, like PostgreSQL, as a .csv file, one should pay attention to the above restrictions in case the .csv file is meant to be imported into SAP HANA. Thus, the command to use in PostgreSQL is as follows:

 

COPY schema_name.table_name TO '/path/table_name.csv' WITH CSV QUOTE AS '"' ESCAPE AS '\';

 

A .csv file exported with the above command usually gets imported to SAP HANA with no problem, when using its control file-based server-side CSV import feature. However, dealing with free-text can be a little harder. Here's the reason why: Text-type fields often hold unrestricted natural language or free-text values, which can contain line-breaks or the newline characters (e.g. \n, or \r, or a combination of both) and also rarely backslash characters (\) that do not get escaped in the exported .csv files.

 

This creates a problem for SAP HANA when importing these .csv files, as its CSV parser (which is used during the control file-based import) wrongly assumes the start of a new record as soon as it encounters a newline character, even if it appears within the enclosure of a text-type value.

 

 

Solution

To solve this problem, we need to preprocess the source data in order to replace these newline characters that appear within the text-type values with “something” that will not confuse SAP HANA’s CSV parser. In our case, we chose to insert the html line-break tag (</br>) instead of the newline characters. Moreover, we also need to cleanup (i.e. remove) the backslash characters (\) appearing within the text-type values.

 

To apply this solution, some may choose to preprocess data on the exported .csv files, which I find to be cumbersome, as it requires processing these mammoth .csv files with a powerful (regular expression-based) text-file processing engine, that needs to be able to differentiate between newline characters appearing within text-values and the newline characters used to delimit records.

 

The solution I present here will preprocess the data on the source database system, and then output the .csv file in a way that is ready to be imported on SAP HANA without any problems.

 

The following are the steps for preprocessing the data on a PostgreSQL database:

 

STEP 1: Create a Temporary Copy of the Table

On PostgreSQL’s console, enter the following SQL to first create a temporary schema, and then create a copy of the table to be exported in this temporary schema:

 

CREATE SCHEMA temporary_schema;

CREATE TABLE temporary_schema.table_name AS (SELECT * FROM original_schema.table_name);

 

 

STEP 2: For Each Text-type Field/Column in the Table:

The text-type fields/columns are of data-types text, char, varchar, varchar2, nvarchar, nvarchar2 etc. Now, do the following for each such text-type field/column in the table:

 

STEP 2.1: Remove All Backslash Characters (\) from the Values of the Text-type Field:

Enter the following SQL on PostgreSQL’s console to remove all the backslash characters (\) from the values of the text-type field:

 

UPDATE temporary_schema.table_name SET field_name = REPLACE(field_name, '\','');

 

STEP 2.2: Replace All Newline Characters from the Values of the Text-type Field:

Enter the following SQL on PostgreSQL’s console to replace all the newline characters from the values of the text-type field, with the custom string "</br>":

 

UPDATE temporary_schema.table_name SET field_name = REGEXP_REPLACE(field_name, E'[\\n\\r]+', '</br>', 'g');

 

 

Thus, repeat steps 2.1 and 2.2 for each text-type field.

 

 

STEP 3: Export the Preprocessed Data as CSV:

Enter the following SQL on PostgreSQL’s console to export the preprocessed data of the table as a .csv file:

 

COPY temporary_schema.table_name TO '/path/table_name.csv' WITH CSV QUOTE AS '"' ESCAPE AS '\';

 

The “table_name.csv” file containing the preprocessed data will now be saved to “/path/” on the machine hosting the PostgreSQL database.

 

 

Note: All the SQL used in steps 1 to 3 can be combined into an SQL script and executed sequentially. Similar SQL commands can be used to preprocess the data on other database systems as well (like Oracle).
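For example, for a hypothetical table with two text-type columns (report_text and comments), the combined script would look roughly like this:

CREATE SCHEMA temporary_schema;
CREATE TABLE temporary_schema.table_name AS (SELECT * FROM original_schema.table_name);

-- Step 2: clean up each text-type column (repeat the pair of statements per column)
UPDATE temporary_schema.table_name SET report_text = REPLACE(report_text, '\','');
UPDATE temporary_schema.table_name SET report_text = REGEXP_REPLACE(report_text, E'[\\n\\r]+', '</br>', 'g');
UPDATE temporary_schema.table_name SET comments = REPLACE(comments, '\','');
UPDATE temporary_schema.table_name SET comments = REGEXP_REPLACE(comments, E'[\\n\\r]+', '</br>', 'g');

-- Step 3: export the preprocessed data
COPY temporary_schema.table_name TO '/path/table_name.csv' WITH CSV QUOTE AS '"' ESCAPE AS '\';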

 

 

STEP 4: Transfer the CSV File To the HANA Server

Use SCP or any FTP client to transfer the “table_name.csv” file to “/path-on-hana-server/” on the server hosting the SAP HANA database.

 

STEP 5: Prepare the Control File

Prepare a plain text file, named "table_name.ctl" with the following contents:

 

IMPORT DATA

INTO TABLE target_schema."table_name"

FROM '/path-on-hana-server/table_name.csv'

RECORD DELIMITED BY '\n'

FIELDS DELIMITED BY ','

OPTIONALLY ENCLOSED BY '"'

ERROR LOG 'table_name.err'

 

Then, save the “table_name.ctl” also on the server hosting the SAP HANA database. In this example, I will be saving it in the same location as the “table_name.csv” file, which is “/path-on-hana-server/”.

 

STEP 6: Execute the Import on SAP HANA

As mentioned earlier, please make sure that you already have an empty table, called “table_name”, in the “target_schema” on your SAP HANA instance. Please also make sure that this empty table has a correctly translated table structure with the SAP HANA data-types correctly identified. Please note that the list of supported data-types in SAP HANA can be found here: http://help.sap.com/hana/html/_csql_data_types.html

 

Now, execute the following command on SAP HANA Studio’s SQL Console:

 

IMPORT FROM '/path-on-hana-server/table_name.ctl';

 

This will start loading the preprocessed data on to the SAP HANA database.

 

If followed correctly, these steps will successfully load all the data of a table containing free-text, on to the SAP HANA database. However, please check the “table_name.err” file on the SAP HANA server to confirm that no error has occurred during this process.
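As an additional sanity check you can also compare row counts between the source table in PostgreSQL and the imported table in SAP HANA, for example from SAP HANA Studio's SQL console:

SELECT COUNT(*) FROM target_schema."table_name";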

 

Happy migrating your database to SAP HANA.

Debugging XSJS configuration on SAP HANA System ( Version SPS6 )

I have just started to learn SAP HANA. While going through the openSAP course I found certain deviations in the XSJS debugging configuration.


Reasons for deviation:

  • I was going through the first SAP HANA tutorial material on openSAP (Introduction to Software Development on SAP HANA, May-June 2013)
  • I used SAP HANA Version SPS6, Rev 68 (the SAP HANA Studio and Client installed are developer edition Revision 68)

 

I fixed the above deviation by referring to the SAP HANA discussion forum, and then I thought it would be better to share my findings here so that they help other new learners like me.

 

  1. Changing the SAP HANA system configuration. Here we need to add a debugger section to "xsengine.ini".
  2. Creating a Debug Configuration for XS JavaScript in the DEBUG perspective.
    • Earlier we were supposed to provide the listenport (from the HANA system configuration) as the port.
    • But now we have to provide the actual HTTP port 80XX (XX = instance number), e.g. for me it's 8000.
  3. Please add the "sap.hana.xs.debugger::Debugger" role. To do so, execute the below query from the HANA system SQL console.
    • CALL GRANT_ACTIVATED_ROLE('sap.hana.xs.debugger::Debugger','<ROLE_NAME>');


References:

debug server side JavaScript -- socket connection problem

Hana AWS - Serverside Javascript Debug timeout | SAP HANA

 

Complete Guide to XSJS Debugging:

http://help.sap.com/openSAP/HANA1/openSAP_HANA1_Week_05_Unit_05_Debugging_XSJS_Presentation.pdf

 

I hope that this content is useful for new learners. Experienced developers, please let me know in case I need to add more points here, or share your thoughts on how to improve this content.

 

-----------------------

Prakash Saurav

Exporting and Importing DATA to HANA with HADOOP SQOOP

For those that need a rich tool for managing data flows between SAP and non-SAP systems (supported by SAP), the first stop will probably be the SAP Data Services tool, formerly known as BODS.

 

Here are a couple of useful links:

 

https://help.sap.com/bods

Getting ready for Big Data with Data Services Webcast - Part 1

Configuring Data Services and Hadoop - Enterprise Information Management - SCN Wiki

 

 

With that said though, opensource HADOOP also has a tool for moving large amounts of data between HADOOP and RDBMS systems, known as SQOOP


 

If you need support beyond your own IT organisation then the leading HADOOP vendors (such as Cloudera and Hortonworks) offer support contracts, and will presumably enable you to enhance the tool as required.

 

 

Sqoop currently has 2 flavours: Version 1 and Version 2 (which is almost production ready).

Version 1 is a command line tool, which has been integrated with OOZIE to enable SQOOP to be easily used within a HADOOP workflow.

Version 2 has enhanced security and UI support, but isn't yet integrated with OOZIE (though that is apparently in the pipeline). As a workaround it can theoretically be used with OOZIE now if incorporated within shell scripts or a wrapper JAVA program.

 

To demonstrate sqoop I first need to create some test data in HANA:

 

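The table-creation screenshots are not reproduced here, but the test data can be set up with something like the following (a sketch; the later Sqoop commands only assume a table HADOOP.HANATEST1 with an ID column used for splitting, plus an identically structured target table HADOOP.HANATEST2, so the other columns are purely illustrative):

CREATE COLUMN TABLE HADOOP.HANATEST1 (
  ID INTEGER PRIMARY KEY,
  NAME NVARCHAR(100),      -- hypothetical payload column
  AMOUNT DECIMAL(10,2)     -- hypothetical payload column
);

INSERT INTO HADOOP.HANATEST1 VALUES (1, 'Record One', 10.50);
INSERT INTO HADOOP.HANATEST1 VALUES (2, 'Record Two', 20.75);

-- identically structured (empty) target table for the export test
CREATE COLUMN TABLE HADOOP.HANATEST2 AS (SELECT * FROM HADOOP.HANATEST1) WITH NO DATA;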




SQOOP2 (Version 2)


Sqoop2  has a new UI which has been added to the HADOOP User Interface (HUE)  from version 2.5 onwards.


The HUE website (Hue - Hadoop User Experience - The Apache Hadoop UI - Tutorials and Examples for Hadoop, HBase, Hive, Impala, Oozie, Pig…) has some nice videos demonstrating its features.


NOTE: To use SQOOP2 with HANA you first need to have copied the HANA JDBC driver ngdbc.jar (from the HANA Client download) to the SQOOP2 directory on your HADOOP cluster (e.g. /var/lib/sqoop2).


At the bottom of the below HUE screen capture you can see I've created 2 jobs for IMPORTING and EXPORTING from SAP

[Screenshot]




When you create the first job to HANA you will need to create a connection, which can be shared with subsequent jobs:

[Screenshot]


Add new connection:


[Screenshot]



With a connection to HANA created then the Job can be defined.



First let's define and run an IMPORT from SAP to HDFS:

[Screenshots]

Note: the 'Extractors' section enables the data to be extracted in parallel (in this case 5 parallel tasks)


Click Save and Run.


(I've skipped the detailed logging screens)



Finally the data is downloaded to HADOOP in 5 separate files (representing the 5 parallel tasks).

[Screenshot]

One of the task files is:

[Screenshot]


Don't worry about the files being split like this, with funny names; HADOOP loves it like this.


These files can now be very easily used by HADOOP HIVE or PIG etc. for batch processing, OR combined with HANA Smart Data Access to be brought back into HANA as a virtual table.

Smart Data Access with HADOOP HIVE & IMPALA

 

 

Now let's repeat the process in reverse to load the data back into HANA into a different table.


NOTE: SQOOP does NOT have complex field mapping rules, so SOURCE and TARGET must have the same column structure.

If you need complex mapping rules then you might be best off using SAP Data Services.

Alternatively you could use HADOOP PIG to first reformat the data into the correct TARGET format, prior to using SQOOP.


Now let's define and run an EXPORT from HDFS to SAP:

[Screenshots]


After 'Save and Run' we then have the following results in HANA:

[Screenshot]




Pretty Easy stuff really.


Once Sqoop2 is officially production ready then it's definitely worth doing a bit more stress testing with it.



SQOOP1 (Version 1)

 

Sqoop1 is a command line tool which should achieve similar results.

 

The following statements are used:

 

Import from HANA:

sqoop import --username SYSTEM --password xxxyyyy --connect jdbc:sap://xxx.xxx.xxx.xxx:30015/ --driver com.sap.db.jdbc.Driver --table HADOOP.HANATEST1 --target-dir /user/sqoop2/ht1001 --split-by id

 

NOTE: I'm not sure if it's just my HADOOP setup, but the Sqoop1 import fails for me with the following error: 'java.io.IOException: SQLException in nextKeyValue'. For the moment I'm happy with Sqoop2 for imports, so I'm not that fussed to investigate, but if anyone has the answer then I welcome the feedback.

 

 

Export to HANA:

sqoop export -D sqoop.export.records.per.statement=1 --username SYSTEM --password xxxxyyyy --connect jdbc:sap://xxx.xxx.xxx.xxx:30015/ --driver com.sap.db.jdbc.Driver --table HADOOP.HANATEST2 --export-dir /user/admin/HANATEST1_SQOOP1

 

Sqoop1 Export works for me. The results in HANA were:

[Screenshot]

 

NOTE: Sqoop1 and Sqoop2 appear to  handle strings slightly differently when exporting and importing so you just need to be careful with your format.

 

 

Sqoop1's advantage over Sqoop2 is that the command line can easily be added to an OOZIE workflow to enable a full workflow scenario to be processed. For a bit more detail on using OOZIE with HANA see Creating a HANA Workflow using HADOOP Oozie.

 

 

Do give Sqoop a try (whichever flavour) and let me know how you get on.

Text Analysis In SAP HANA

SAP HANA has introduced some new features, like Text Analysis, from SPS05 onwards.

 

With a few steps you can implement Text Analysis in the SAP HANA environment.

 

Following are the steps to implement Text Analysis In SAP HANA.

 

1. Create a column table (Text Analysis only takes VARCHAR, NVARCHAR, NCLOB, CLOB, BLOB).

 

CREATE COLUMN TABLE "TEST"."TEST" (ID NVARCHAR(50), TEXTANA NVARCHAR(5000), PRIMARY KEY(ID))


"TEST"."TEST"-- > SchemaName.TableName


2.Insert some records into Test table


insert into "TEST"."TEST" values("1","Barely hours after sitting on a hunger strike at the Jantar Mantar in Delhi")

insert into "TEST"."TEST" values("2","off-spinner Ravichandran Ashwin admitted that India have been below par in the ODIs against New Zealand and they want to avoid another series defeat when they face the Black Caps in the fourth ODI here Tuesday.")

insert into "TEST"."TEST" values("3","Ashwin said the tied match at the Eden Park in Auckland was disappointing.")

insert into "TEST"."TEST" values("4","Pune weather is good")

insert into "TEST"."TEST" values("5","Bangalore weather is also good")



3. Create FullText Index "TEST"."TEST_ANA" On "TEST"."TEST"("TEXTANA")

TEXT ANALYSIS ON

CONFIGURATION 'EXTRACTION_CORE';

 

4. Once you execute the query mentioned in Step 3, SAP HANA will generate the analysis table in the same schema where your source table resides.

 

5. You will find another table with the prefix $TA_ (here $TA_TEST_ANA) which will contain the results. Please check the attachment for the result table.

 

6. This is a very important point: as you have created the index, if you insert new values into your table, those values will be reflected in the result table ($TA_TEST_ANA) as well. This way you can apply Text Analysis to real-time data.
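To inspect the extracted results you can query the generated table directly, for example (a sketch; the TA_* columns shown are the commonly used ones, and the exact column set can vary by revision):

SELECT ID, TA_COUNTER, TA_TOKEN, TA_TYPE
FROM "TEST"."$TA_TEST_ANA"
ORDER BY ID, TA_COUNTER;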

 

  

 

Note: For the above example I have used SYSTEM as the user in the HANA DB (HDB).

Sentimental Analysis In SAP HANA

This is a very strong and important feature of SAP HANA.

 

Follow these steps to implement sentiment analysis. I am using the data from my previous blog for the sentiment analysis.

 

The following are the steps to implement sentiment analysis.

 

1.Create a column table

 

Sentimental Analysis only takes VARCHAR,NVARCHAR,NCLOB,CLOB,BLOB for applying analysis on the text or document column.


Sentiment analysis is supported only for English, French, German, Spanish, and Chinese.

 

CREATE COLUMN TABLE "TEST"."TEST" (ID NVARCHAR(50), TEXTANA NVARCHAR(5000), PRIMARY KEY(ID))


"TEST"."TEST"-- > SchemaName.TableName

2.Insert some records into Test table


insert into "TEST"."TEST" values('1','Barely hours after sitting on a hunger strike at the Jantar Mantar in Delhi')

insert into "TEST"."TEST" values('2','off-spinner Ravichandran Ashwin admitted that India have been below par in the ODIs against New Zealand and they want to avoid another series defeat when they face the Black Caps in the fourth ODI here Tuesday.')

insert into "TEST"."TEST" values('3','Ashwin said the tied match at the Eden Park in Auckland was disappointing.')

insert into "TEST"."TEST" values('4','Pune weather is good')

insert into "TEST"."TEST" values('5','Bangalore weather is also good')

insert into "TEST"."TEST" values('6','good better best')


3.

a.Create FullText Index "TEST"."SENTIMENT" On "TEST"."TEST"("TEXTANA")

TEXT ANALYSIS ON

CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER';


b. Create FullText Index "TEST"."SENTIMENT" On "TEST"."TEST"("TEXTANA") ASYNC FLUSH EVERY 1 MINUTES LANGUAGE DETECTION ('EN') TEXT ANALYSIS ON;

 

Both of the above SQL queries are essentially the same: in the first we let the SAP HANA engine apply the synchronization by default, and in the second we explicitly use ASYNC (asynchronous) flushing. Don't get confused when you are trying to execute the query.

 

4. Once you execute the query mentioned in Step 3, SAP HANA will generate the analysis table in the same schema where your source table resides.

 

5. You will find another table with the prefix $TA_ (here $TA_SENTIMENT) which will contain the results. Please check the attachment for the result table.

 

6. This analysis will give you the positive, negative or neutral sentiment of your text or document. Just for example, take the text below:

'bad good best'

bad - StrongNegativeSentiment

good - WeakPositiveSentiment

best - StrongPositiveSentiment


We can use sentiment analysis to read customers' sentiment about products, etc.
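For example, the sentiment tokens can be pulled out of the generated table with a query like this (a sketch; the table name follows the index name, here $TA_SENTIMENT, and the TA_* column names are the commonly used ones):

SELECT ID, TA_TOKEN, TA_TYPE
FROM "TEST"."$TA_SENTIMENT"
WHERE TA_TYPE LIKE '%Sentiment%';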

 

Please check the output table.

Confusing contradictions about HANA File Systems

If you're new to this HANA thing, and if you're a Basis Consultant, you normally start with reading the installation guides and related and referenced OSS Notes before beginning the installation of a new solution.

 

So did I.

 

While preparing our company for RDS Qualification of Business Suite on HANA Migration, I started with downloading the RDS materials and then I downloaded the guides for SAP HANA installation Version 1.0 SPS07 from https://help.sap.com/hana_appliance.

 

Also, as I'm not going to install the system on a certified hardware, I read the blog How to install the HANA server software on a virtual machine for installing the system on Amazon Web Services EC2 instances.

 

But, at the beginning, I noticed a contradiction between guides and materials. Here is the list of materials and contradiction between them.

 

Material ID     | Link to Material
Material 1 (M1) | http://help.sap.com/hana/SAP_HANA_Server_Installation_Guide_en.pdf
Material 2 (M2) | How to install the HANA server software on a virtual machine
Material 3 (M3) | 1793303 - Install and configure SUM for SAP HANA

 

According to M1, on page 16 (2.3.1 Recommended File System Layout), and M2, your HANA system's file systems have to be as follows:

 

File System       | Default Path     | Recommendations
Root              | /                | Recommended disk space = 10 GB
Installation path | /hana/shared/    | The installation path (mount directory) requires disk space equal to the default system RAM. (GB = RAM)
System instance   | /usr/sap         | The system instance directory requires at least 50 GB disk space
Data volume       | /hana/data/<SID> | The data path requires disk space equivalent to four times the size of the system RAM. (GB = 4*RAM)
Log volume        | /hana/log/<SID>  | The log path requires disk space equivalent to the default system RAM. (GB = RAM)

 

According to M3 (which was updated on 19.03.2013):

Standard locations of HANA components as of HANA SPS04:

  • SAP HANA database - /usr/sap/<SID> - this can be just a link but it must exist
  • SAP HANA client - /usr/sap/hdbclient
  • SAP HANA studio - /usr/sap/hdbstudio
  • SAP HANA studio update repository - /usr/sap/hdbstudio_update
  • SAP Host Agent - /usr/sap/hostctrl

If SAP HANA Database uses non-standard directory layout, SUM for SAP HANA cannot support it and the system has to be reinstalled.

If some of the other components are installed in different locations, you should remove them as the script and the subsequent update with SUM for SAP HANA will install them in the proper locations.

 

If we ignore M2 (which is a blog and not written by an SAP employee), how can we explain the difference between the guide and the SAP Note?

 

Is there any official statement for this? If yes, can you please share it?


SAP HANA Developer Edition v1.7.70

It's live now and we already see people talking about it! We are excited to announce that SAP HANA SPS7 revision 70 is now available as the new developer edition.

 

The process works just like before, however we've made some massive changes! That's also the reason you don't immediately see an "update" volume either. We added so much pre-loaded content, so many new configuration settings and a whole brand new experience that an "update" would probably conflict with your existing systems and cause you lots of problems.

 

The first thing we did was put up the brand new SAP HANA Studio, here. Some of you will notice that there is no Mac version there. This is because we don't have a supported Mac version at the moment, however it's already on its way with some very cool features for all of us Mac users out there. Internally we are already testing it and getting things squared away.

 

Next, we only actually updated the AWS images for revision 70. There are several reasons for this, but we can save that for another discussion. SPS7 is available in two sizes and again is fully pre-configured to meet the needs of any developer, and even has pre-loaded content from Thomas Jung and Rich Heilman to help you get started as quickly as possible.

 

  • 4 vCPU's, 34.2GB RAM 154GB disk
  • 8 vCPU's, 68.4GB RAM 154GB disk

 

We've also modified the instance to run the XS Engine through port 80, so there is no need to worry about changing or re-configuring port numbers or anything like that. This time, once you start your instance, you can copy the hostname to your browser and instead of the traditional "XS Engine is running" screen you'll now be prompted to login using the user "SYSTEM" and the default password "manager", and you'll get a quick start landing page with some links already in place to get you moving along!

 


 

Some of the configuration changes we made were automatically enabling the debugger, developer_mode and more as well as giving "SYSTEM" some of the default roles necessary to use a lot of the built in tools like the XS Web IDE, ADMIN, transport system, etc.

 

We are very excited about this change and we hope you enjoy it as well!

 

Grab your new developer edition today, just 10 quick steps to start the process!

 

  1. Go to http://aws.amazon.com (you can use your existing Amazon login) and be sure to sign up (you gotta give them your credit card number) for Amazon EC2 (Elastic Computing Cloud)
  2. Once you have signed up in be sure to go to your management console https://console.aws.amazon.com/console/home?# and on the left side select EC2 – once this page loads you will need to select the region you are working or will be working in (top right corner) - for example "Ireland" for the EU https://console.aws.amazon.com/ec2/v2/home?region=eu-west-1#
  3. On the left side scroll down until you see "key pairs" https://console.aws.amazon.com/ec2/v2/home?region=eu-west-1#KeyPairs: here you will need to create one – the reason for this is if you have large datasets or want to install the R language you'll need this to jump onto the server. The name of the key pair can be whatever you want.
  4. Now select your "account" page https://portal.aws.amazon.com/gp/aws/manageYourAccount within Amazon AWS and copy your account ID to the clipboard.
  5. Now head over to http://developers.sap.com and choose the HANA section http://scn.sap.com/community/developer-center/hana (oh make sure you are logged in)
  6. At the top of the HANA section you'll see the current SP and Revision that is available. Choose the "Developer Edition" http://scn.sap.com/docs/DOC-31722 if you go direct to Amazon you'll get our productive instance which has an additional 0.99 USD per hour charge. You'll also want to download the HANA Studio and HANA Client from here as well. The Studio is how we interact with the HANA Server (admin, monitoring, development, modeling) the Client is needed for native development, connecting to local tools (e.g. Excel)
  7. We have multiple hosts that provide the servers but in this case we'll sign up for the AWS one https://sapsolutionsoncloudsapicl.netweaver.ondemand.com/clickthrough/index.jsp?solution=han&provider=amazon
  8. Enter your Account ID that you copied earlier and select the region you chose as well then "Accept" the license.
  9. You will be redirected to AWS once your information is verified.
  10. Then follow the wizard

Decision Table (DT) in SAP HANA

As per the SAP HANA Developer Guide, the decision table is another strong feature of the SAP HANA in-memory database.

 

This feature comes bundled from SPS05 onwards.

 

In simple language, a decision table is a table which takes a decision to retrieve/update data based on business rules.

 

We can write our business logic in a simple Excel file and upload it into the decision table. A non-technical person could also do this task on behalf of an application developer if his/her business ideas are clear.

 

There are a few steps to create a decision table (DT).

 

1. Go to the Modeler view and create a package.

 

2. Once you right-click on the package you will find Decision Table; just click on that and a pop-up appears.

 

3.Give any name to your Decision Table and click Next and finish.

 

4. In the Studio you will find the DT panels (a left and a right panel of the DT). Drop a table from your schema into the left panel.

 

5. Create your data foundation; right-click on the columns and add them as attributes.

 

6. Remember the decision table needs two mandatory elements: 1. Conditions, 2. Actions.

 

7. Conditions and actions could be anything from the vocabulary (attributes, parameters).

 

8. Add some attributes as conditions and add parameters as actions.

 

Here we have to focus on one more thing: parameters.

9.

    a. Right-click on Parameters and click on New.

    b. Give a user-defined parameter name and select the data type of the parameter.

    c. Now your parameter is ready to be a part of the actions.

 

10. At the bottom of the left panel you will find two tabs: 1. Data Foundation, 2. Decision Table.

 

Click on Decision Table and provide some business rules according to your requirements.

 

11. You can select dynamic values for your conditions and actions.

 

12. Just click on the decision table conditions or actions and you will find the option to add dynamic values. After providing a specific condition you can use Alt+Enter.

 

13. If you want to apply your business rules to all values of a table column then use *.

 

14. Please find some screenshots of the decision table and the rules Excel file.

 

I hope the above steps will help you to create a decision table without any hurdles.

Sybase ESP Integration with SAP HANA

I am a newbie who has started exploring SAP HANA and Sybase Event Stream Processor. I was trying to figure out how I could load data from a text file into a SAP HANA server in real time. This file is not located on the SAP HANA server. Assume there is a text file, which could be a log file or any file that we continuously keep appending records to, and we need to load these records into the HANA server in real time.

 

Then I found Sybase Event Stream Processor and installed the free trial version. Sybase ESP has a SAP HANA Output Adapter that uses an ODBC connection to load information from Event Stream Processor into the SAP HANA server.

 

In this example, I thought of a scenario that there is a log file which has transaction logs. Each transaction is a line in the text file formatted as:

 

Transaction ID|Transaction Type|Transaction Status|Transaction Date|Details

 

So, I created a simple Java project to generate random transaction data and write this data to a log file called transactionLog.csv.


To be able to load data into HANA server from ESP, first you need to configure the HANA ODBC data source. Open ODBC Data Source Administrator and add HANA ODBC driver.

Figure 1: Creating new HANA ODBC datasource


Figure 2: Adding and testing datasource


After you have configured the ODBC data source successfully, go to %ESP_HOME%\bin\service.xml and add the following entry to your service.xml file.


<Service Name="HANAODBCService" Type="DB">

  <Parameter Name="DriverLibrary">esp_db_odbc_lib</Parameter>

  <Parameter Name="DSN">HDB</Parameter>

  <Parameter Name="User">***user***</Parameter>

  <Parameter Name="Password">***password***</Parameter>

</Service>


Then, I created an ESP Project and added a .ccl file. I used three tools from the palette.

  1. File/Hadoop CSV Input Adapter
  2. Input Window
  3. SAP Hana Output Adapter.

 

 


Figure 3: Event Processing


1. File/Hadoop CSV Input Adapter is added to read transactionLog.csv file.

TransactionAdapter Configuration:

    • Directory: Directory path of the data file at runtime.
    • File: File which you want the adapter to read (you can specify regex pattern as well)
    • Dynamic Loading Mode: Adapter supports three modes, namely static, dynamicFile and dynamicPath. You need to use either dynamicFile or dynamicPath mode if you need to keep polling the new appended content into the file. I set this parameter to dynamicFile.
    • Column Delimiter: | for this example
    • Has Header: False (the text file that I generated doesn’t contain the descriptions of the fields).
    • Poll Period (seconds): Period to poll the specified file. It is set to 5 seconds in this example.


2. Input Window has a schema, which defines the columns in the events. In this example, we can say that each transaction is an event. TransactionInputWindow’s schema has columns transactionId, transactionType, transactionDate, status and description.


3. SAP HANA Output Adapter is used to load data rapidly from Event Stream Processor into SAP HANA database table.

TransactionHANAOutputAdapter Configuration:

    • Database Service Name: HANAODBCService (service name defined in %ESP_HOME%\bin\service.xml)
    • Target Database Schema: Source schema in HANA server
    • Target Database Table Name: Table where the data is loaded into.


Finally, I created a corresponding HANA database table into which the output adapter loads transaction data. Then, I ran my log generator and Event Stream Processor. Transaction data was loaded successfully into the table.
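The target table simply mirrors the input window schema; a sketch of what it could look like (schema, table and column names and types are assumptions based on the fields described above):

CREATE COLUMN TABLE "SOURCE_SCHEMA"."TRANSACTIONS" (
  TRANSACTIONID INTEGER,
  TRANSACTIONTYPE NVARCHAR(50),
  TRANSACTIONDATE TIMESTAMP,
  STATUS INTEGER,            -- 0: Error, 1: Warning, 2: Success
  DESCRIPTION NVARCHAR(500)
);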


Before running log generator, the log file is empty and there is no event streamed into ESP and the HANA database table is empty as shown in the figures.


Figure 4: Transaction Input Window before generating data (0 rows)


Figure 5: Result of select query on HANA table before generating data (No rows retrieved)


After running log generator, transaction data written into log file is streamed into ESP via input adapter and loaded into HANA via HANA output adapter. 37,770 transaction records are added to the table.


Figure 6: Streaming transaction events after running log generator



Figure 7: Number of rows in HANA table after running log generator


Keep running log generator... New appended data is loaded into HANA table, the number of transactions has increased to 44,733 as seen in the figure.


Figure 8: Number of rows in HANA table after running log generator


After making sure that I am able to load the data into HANA, I created an attribute view and a calculation view.


Figure 9: Attribute View


An attribute view is created and calculated columns are added to format transaction date and status information.

Transaction Status:

0: Error

1: Warning

2: Success

 

Case() function under Misc Functions is used to format status information.


Figure 10: Case function

 


Figure 11: Calculation View


A Calculation View is created. Transaction data is grouped by transaction status.


After creating views, I created OData services to expose the views.


transactionView.xsodata

service namespace "experiment.services" {

       "experiment.model::VW_TRANSACTION"

       as "Transactions"

       keys generate local "ID";

}

 

status.xsodata

service namespace "experiment.services" {

       "experiment.model::CV_STATUS"

       as "TransactionStatus"

       keys generate local "ID";

}


Since the data is exposed, let’s consume it. I created a transaction view under my SAPUI5 project and added a table and a viz chart to show transaction data.

 


Figure 12: transaction.view.js


Figure 13: transaction.html


Below is the final ui for this example.


Figure 14: Transactions page

Quick overview of what's in a Hana Table with SAPUI5

I keep needing to load poorly defined data into Hana to do some analysis, but the analysis is not quite defined up front. Many of the sources have a large number of columns, and the first step is often to work out which columns might be interesting.

 

I've created a simple page to show a summary of the fields in a table. It's nothing which can't be done other ways, but it's a nice fast way to get a feel for some data, and start to see which columns might be worth further investigation.

It has mostly been used for time series data, and so it can also show a graph of selected columns, although this function is a bit more dependent on the actual data (e.g. it takes an average of the values to show each point), and is less well developed than the list of columns!

There are three parts - some xsjs to get the columns from the database and return it as JSON, similarly for the actual data in time buckets, and a SAPUI5 page to show it to the user.
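The column list can be read from HANA's TABLE_COLUMNS system view; a query along these lines returns the metadata the xsjs needs (a sketch, with the schema and table names taken from the example URL at the end):

SELECT COLUMN_NAME, DATA_TYPE_NAME, LENGTH
FROM SYS.TABLE_COLUMNS
WHERE SCHEMA_NAME = 'JCASSIDY'
  AND TABLE_NAME = 'DATA5'
ORDER BY POSITION;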

 

  • To see graphs, select one or more rows in the table, and click "Show Graphs".
  • To change the information displayed about columns, select "Names Only", "Min/Max" or "All Stats". You have to have at least Min/Max for the graphs to work, and All Stats can be a bit slow.
  • To show the full table (rather than paging), click Maximise
  • To see the table as text (to cut and paste elsewhere), click CSV.

 

This is not a production tool - although the user still needs to log on and have authorisation to see the data, it's still a potential route for inappropriate access!

You need to put the three attached files into an xs project on Hana. Then call the html with parameters SCHEMA, TABLE and TIMESTAMP. The first two parameters are the schema of the table, and the tablename, and the TIMESTAMP parameter is the field to use as the timestamp for the graphs.

For example (replace server with your server, and DataAnalysis with your package name):

     http://server:8000/DataAnalysis/dataSummary.html?SCHEMA=JCASSIDY&TABLE=DATA5&TIMESTAMP=Timestamp

The SAP B1 Add-on that saved me, Migrating SQL Server to HANA

We are about to go live with HANA at a customer.

 

And as you might know, there is a tool posted some time ago to help SAP B1 teams with the translation of queries and formatted searches.

Although the tool is really nice, it takes additional effort to take your SQL Server queries, formatted searches, views and SBO_Transaction_Notification to the real string that will actually work on HANA.

 

There's a manual review process that a human should do before having your queries running smoothly on HANA.

 

When I was actually translating my 320 queries I realized a few things:

 

  1. There's a prioritization that needs to be done (the users might not be using some of the queries...).
  2. I need something to save the time it takes to manually update the queries after ConverterLib translates them.
  3. We could save lots of time because the only 2 things that might not work in HANA but will on SQL Server are IFNULL and Convert. So I could have saved lots of time by changing some of the syntax details on SQL Server even before migrating to HANA!
  4. When you are about to go live you need to know what is working on HANA and what is not, otherwise you'll be caught in a big Excel jungle (there's a tiny little possibility of having to translate queries after going live).

 

After reaching point number 2, I couldn't help myself and I started writing code to make my life a little bit easier than before, using the library called ConverterLib. I created an add-on which helped me to:

  1. Prioritize the queries I want to translate
  2. A tool where I can actually make all the changes after using ConverterLib
  3. A tool where I can translate my queries on the SQL Server infrastructure
  4. A tool where I will be able to track what has and/or has not been translated.

 

Take a Look on Youtube: The idea that saved me when Migrating SAP B1 to HANA - YouTube

 

I'm on the first version of it... I have more new ideas ..

I can't help myself I need to write more code....

 

Still working on adding views and Stored Procedures....(Coming Soon!!!)

 

Hope you like this... and if you have more

please send me more ideas!!
sky is the limit...

 

 

Happy Migration to HANA

Get started on HANA Cloud Platform (HCP) XS development: build a basic REST service

Lately I’ve started building a POC of a mobile app for tourism. Given the low entry barrier of HANA Cloud Platform (HCP) as opposed to all the plumbing and IT set up required to operate and expose a REST app on the Internet, I thought HCP would be the way to go. I come from the world of Java Servlet development and I wanted to see whether I could apply my Java expertise as is in the HCP XS environment.  While laying the ground to the POC, I thought I would share my experience as a tutorial. So here is the SAP Guided Tour Tutorial, or SGT for short.


Scenario:


The SGT POC requires a service to create new touristic sites and a service to retrieve all the existing touristic sites.

At the time of writing this tutorial, I am using the HANA studio version:1.0.7000 and the HANA Cloud DB version: 1.00.70.00.386119


Prerequisites:


Sorry, I hate starting with prerequisites but we can’t escape it if we don’t want to waste too much time figuring out why things break.

  • Get your account (it’s a free small account and no credit card info is requested )

https://help.hana.ondemand.com/help/frameset.htm?868d804efd0b4eb788bdebd7b36a57a4.html

  • Download and install the HANA studio (a development environment  based on Eclipse)

https://help.hana.ondemand.com/help/frameset.htm?b0e351ada628458cb8906f55bcac4755.html


Create a HANA XS app:


Follow the steps described here: https://help.hana.ondemand.com/help/frameset.htm?3762b229a4074fc59ac6a9ee7404f8c9.html


Important notice: make sure you don’t skip Step 3 (3. Create a Subpackage). Otherwise you may receive an “insufficient privilege” error message. It seems that this is a security measure as the trial accounts run on a shared HANA server.

 

For the SGT POC, I created a package with the name “sgthanaxs” and subpackage with the name “sgt0”. P1940386592 is my user Id and p1940386592trial my account number.  Note that the subpackage doesn’t show under the cockpit.

 

(Image: cockpitPackage.PNG)

 

 

In the SAP HANA studio you should see:

 

(Image: package.PNG)

 

The Repositories tab of the SAP HANA development perspective looks like this:

 

(Image: repo.PNG)

 

Under your Project Explorer tab, make sure the package structure shows as highlighted.

 

(Image: projectHeader.PNG)

 

 

Create a table:

 

To persist our sites we need a HANA table. Although one can go with the HANA studio SQL console, I chose to go with the descriptive way. This way the HANA XS server will automatically create the table upon activation of the project. Here is the SITE table declaration:

 

//Touristic site table

 

table.schemaName = "NEO_6HNCW10OVUL4AF2BX9XZZJ02B";

table.tableType = COLUMNSTORE;

table.columns = [

{name = "site_id"; sqlType = INTEGER; nullable = false;},

       {name = "site_name"; sqlType = VARCHAR; nullable = false; length = 72;},

       {name = "site_type"; sqlType = VARCHAR; nullable = false; length = 16;}

     ];

table.primaryKey.pkcolumns = ["site_id"];

 

The hdbrole file looks like this:

 

role p1940386592trial.sgthanaxs.sgt0::model_access {

       applicationprivilege: p1940386592trial.sgthanaxs.sgt0::Basic;

       sqlobject p1940386592trial.sgthanaxs.sgt0::SITE: SELECT,INSERT;

}


The project explorer looks now as follow:

 

(Image: CreateTableProjectExp.PNG)

The orange barrel indicates that a file is activated and that I should see the SITE table created under the SGT schema. Don’t worry about the red cross.


(Image: SITETable.PNG)


Create the REST services:

 

Let’s create the “site” REST service.  Create a file with name site.xsjs and paste the following code:


$.response.contentType = "text/json";   

var output = "";

try {

       //connect to default schema

       var conn = $.db.getConnection();

    var pstmtSelect = conn.prepareStatement("SELECT * FROM \"p1940386592trial.sgthanaxs.sgt0::SITE\"");

       var rs = pstmtSelect.executeQuery();

       //build an Array of site JSON objects out of the result set

       var sites = [];

       while (rs.next()) {

             var site = {};

       site.siteId = rs.getString(1);

       site.siteName = rs.getString(2);

       site.siteType = rs.getString(3);

       sites.push(site);

     }

     output = output + JSON.stringify(sites);

           

         //close everything

     rs.close();

     pstmtSelect.close();

     conn.close();

           

         //return the HTTP response. OK is used by default

     $.response.setBody(output);

           

} catch (e) {

       //log the error

    $.trace.fatal(e.message);

       //return 500

    $.response.status = $.net.http.INTERNAL_SERVER_ERROR;

}


Activate the file and go to the HCP Trial cockpit. For my trial account I see:

 

(Image: cockpitAppUrl.PNG)

 

Note that on an HCP trial account SAML is the default sign-in protocol. So if your browser asks you if it should use your company’s sign-in certificate, you should select NO.

 

Click on the application URL and you should get an empty JSON array "[]", since the SGT's SITE table is empty.

 

To verify that the SQL SELECT really works, you can go back to the HANA studio and insert some records into the p1940386592trial.sgthanaxs.sgt0::SITE table, using an SQL INSERT in the SQL console or the HANA studio's data import wizard located under File>Import>SAP HANA Content>Data from Local File.

 

Here is an example of such an Insert statement:

 

insert into "NEO_6HNCW10OVUL4AF2BX9XZZJ02B"."p1940386592trial.sgthanaxs.sgt0::SITE" values(1,'my site','monument')

 

The second requirement is the ability to create new touristic sites. Let's enhance the previous site.xsjs script and add a RESTful flavor to it, using the HTTP GET verb for retrieving sites and the POST verb for creating sites. Overwrite the previous site.xsjs with the following:


/**

* get query parameters as a JSON object

* @returns JSON object

*/

function getRequestParameters() {

       var paramsObject = {};

       var i;

       //$.request.getParameter(parameterName) is not supported

       for (i = 0; i < $.request.parameters.length; ++i) {

              var name = $.request.parameters[i].name;

              var value = $.request.parameters[i].value;

        paramsObject[name] = value;

    }

       return paramsObject;

}

 

/**

* handle site GET request

*/

function doGet() {

     $.response.contentType = "text/json";

     var output = "";

     try {

              //connect to default schema

              var conn = $.db.getConnection();

        var pstmtSelect = conn.prepareStatement("SELECT * FROM \"p1940386592trial.sgthanaxs.sgt0::SITE\"");

              var rs = pstmtSelect.executeQuery();

              //build an Array of site JSON objects out of the result set

              var sites = [];

              while (rs.next()) {

                     var site = {};

            site.siteId = rs.getString(1);

            site.siteName = rs.getString(2);

            site.siteType = rs.getString(3);

            sites.push(site);

         }

         output = output + JSON.stringify(sites);

           

                //close everything

         rs.close();

         pstmtSelect.close();

         conn.close();

           

                //return the HTTP response. OK is used by default

         $.response.setBody(output);

           

       } catch (e) {

                   //log the error

           $.trace.fatal(e.message);

                     //return 500

           $.response.status = $.net.http.INTERNAL_SERVER_ERROR;

       }

}

 

/**

* handle site POST request

*/

function doPost() {

   

    $.response.contentType = "text/json";

       var output = "";

       try {

              //connect to default schema

              var conn = $.db.getConnection();

 

              //get the parameters of the POST body

              var paramsObject = getRequestParameters();

 

              //validate received parameters

              if (paramsObject.siteId == null

              || paramsObject.siteName == null

              || paramsObject.siteType == null

              || paramsObject.siteId.length <= 0

              || paramsObject.siteName.length <= 0

              || paramsObject.siteType.length <= 0) {

 

              $.trace.debug("Wrong parameters");

                  

                        //return 412

              $.response.status = $.net.http.PRECONDITION_FAILED;

         } else {

                     var pstmtInsert = conn.prepareStatement("INSERT INTO \"p1940386592trial.sgthanaxs.sgt0::SITE\" values(?,?,?)");

            pstmtInsert.setInteger(1, parseInt(paramsObject.siteId));

            pstmtInsert.setString(2, paramsObject.siteName);

            pstmtInsert.setString(3, paramsObject.siteType);

                     var numberRows = pstmtInsert.executeUpdate();

                     if(numberRows!==1){

               $.trace.fatal("something bad went wrong with the insert of a SITE");

                          //return 500

               $.response.status = $.net.http.INTERNAL_SERVER_ERROR;

                           return;

            }

           

            conn.commit();

            pstmtInsert.close();

                  

            $.trace.info("site created: " + paramsObject.siteId);

            

            doGet();

        }

       } catch (e) {

              $.trace.fatal(e.message);

                        //return 304 (Not Modified)

              $.response.status = $.net.http.NOT_MODIFIED;

       }

}

 

// only GET and POST are supported for the site service

if ($.request.method === $.net.http.GET) {

       doGet();

} else if ($.request.method === $.net.http.POST) {

       doPost();

} else {

       $.response.status = $.net.http.METHOD_NOT_ALLOWED;

}

 

Given that the sign-in protocol is SAML, it is not easy to test an HTTP POST using popular REST clients. So let's design an HTML form that calls the site.xsjs POST service to insert some data.

 

Create an HTML file with the name site.html and activate it.

 

<!DOCTYPE html>

<html>

<body>

<form name="input" action="https://s1hanaxs.hanatrial.ondemand.com/p1940386592trial/sgthanaxs/sgt0/site.xsjs" method="post">

siteId: <input type="text" name="siteId"><br>

siteName: <input type="text" name="siteName"><br>

siteType: <input type="text" name="siteType"><br>

<input type="submit" value="Create site">

</form>

</body>

</html>

 

Fill the fields and submit.

 

(Image: createSiteForm.PNG)

 

You should happily see the newly created site added to the response.


[{"siteId":"1","siteName":"my site","siteType":"monument"},{"siteId":"2","siteName":"my site2","siteType":"bridge"}]

 

At the end, the HANA Studio perspectives for the project should look like this:

 

(Image: projectExplorerEnd.PNG)

(Image: repositoriesEnd.PNG)

(Image: SystemEnd.PNG)

 

Note that in order to show all the artifacts under the subpackage “sgt0”, you should enable “Show all objects” under Window>Preferences>SAP HANA>Modeler>Content Presentation.


Conclusion:


I hope this can help some of you guys, and please let me know if you have problems.

Having the database, the source control view and the project structure under the same development environment is great. Apart from coding in JavaScript, I felt that the HANA XS development experience is very comparable to Servlet development.

As a developer, you can now build your mobile app or web site with the little tooling provided in this tutorial. For more support in building your enterprise apps, you are better off going with standard tools such as OData, SAPUI5, SAP Fiori, etc.


Some references:

 

http://scn.sap.com/community/developer-center/front-end/blog/2013/07/07/native-development-in-sap-hana-and-consuming-the-odata-services-in-sapui5

http://scn.sap.com/community/developer-center/cloud-platform/blog/2013/10/17/8-easy-steps-to-develop-an-xs-application-on-the-sap-hana-cloud-platform

http://help.sap.com/hana/SAP_HANA_XS_JavaScript_Reference_en/index.html

My Experience in SAP HANA Certification ( C_HANAIMP131)

Hi all,

 

 

               

(Image: Experience.jpg)
         
MY HANA Certification Experience (C_HANAIMP131) 

 

 

 

 

SAP HANA is currently the fastest-growing and most popular area for SAP certification.

I cleared the SAP Certified Application Associate (Edition 2013) exam. I would like to share some of my knowledge, which may be useful.

 

For detailed knowledge about the certification please go through the following link:

(http://training.sap.com/shop/certification/c_hanaimp131-sap-certified-application-associate-edition-2013---sap-hana-g/)

 

Syllabus :

(Image: syllabus.png)

 

EXAM SLOT BOOKING :

A) Select your paper code and book a slot for the examination. Once you are done you will receive a mail from SAP confirming the exam date.

 

PREPARATION :

A) HANA 100 : HANA STUDIO, ARCHITECTURE, DATA PROVISIONING, MODELLING, REPORTING.

 

B) HANA 300: HANA MODELLING, CONNECTING TABLES, ADVANCE MODELLING, FULL TEXT SEARCH, PROCESSING INFORMATION MODELS, MANAGING MODELLING CONTENT, SECURITY & AUTHORIZATIONS, DATA PROVISIONING USING SLT, BODS(ETL), DXC, FF

 

HA100 & HA300 are the most important training materials to clear the exam. Please go through each and every topic; you may get a question from anywhere.

 

My suggestion :: Spend more time on the HANA server, because the questions are scenario based and you really need hands-on HANA server experience.

 

C) If you are a beginner with HANA, please go through the SAP HANA videos provided by openSAP:  https://open.sap.com/

 

D) Once you finish studying HA100 & HA300, please spend more time on the SAP HANA server. You may get a trial version for 30 days.

http://scn.sap.com/docs/DOC-28191

E) You can find a lot of detailed videos about HANA at  http://www.saphana.com/community/hana-academy  and  http://www.youtube.com/

 

Note : Having BI, BO, BODS or SQL experience is an advantage.

 

The link below provides full information about SAP HANA.

http://scn.sap.com/community/hana-in-memory/blog/2013/08/17/hana-reference-for-developers--links-and-sap-notes#comment-431816

 

F)  Marks break down by content

 

  • Data Provisioning :: more than 10 questions (practice on the server is very important)
  • Security and Authorization :: more than 6 questions
  • Data modeling - Analytical views & Calculation views :: more than 10 questions, scenario based and tricky; be careful before you answer
  • Advanced data modeling :: more than 6 questions (I found these questions very critical)
  • Optimization of data models and reporting
  • Administration of data models :: authorizations on data modelling, mainly on schemas
  • Reporting :: more than 5 questions
  • Data modeling - SQL Script :: more than 3 questions; I found these very difficult
  • Data modeling - Attribute views :: more than 3 questions
  • Deployment scenarios of SAP HANA :: more than 4 questions
  • SAP HANA Live & Rapid Deployment Solutions for SAP HANA :: more than 4 questions

 

Exam time :

(Image: tk positive.jpg)

 

A) Please arrive 15 minutes before the examination and don't forget to carry ID (passport).

The examination may start 1-15 minutes late (don't worry); your exam time starts when you log in with the ID and password provided by them.

 

B) Once you log in to the system with your credentials, your time counts down in the top right corner,

and you have options at the bottom of the page: NEXT, Navigator and Flag.

 

Navigator : at the bottom right you can view an overview of all your questions (answered show green, unanswered show grey, flagged questions show red).

You can jump directly to a selected question number.

Flag : if you want to highlight a question (red).

 

C) Please go through the known questions first; flag the difficult questions and answer them later.

 

Note : 3 types of questions

1) SINGLE ANSWER: more than 40 questions

2) MULTIPLE ANSWERS: 2 answers - more than 20 questions; 3 answers - more than 5 questions.

In this section, if you select one wrong answer you lose the full mark.

3) Matching result: cross matching from table1 and table2, 2 questions.

 

Note : Don't fully depend on the Navigator (the green background in the assist option will appear even if you mark only 1 answer for a question which is supposed to have 3 right answers). A few tricky questions are asked where you will be tempted to choose the wrong answers. Beware of such questions.

 

 

 

Final SUBMIT button :

(Image: result3.jpg)

 

 

Once you finish your exam, you need to press the SUBMIT button.

Even if you don't press the SUBMIT button, after 3 hrs it automatically disconnects and displays the individual % result for each chapter and the overall total %.

 

NOTE : ***The SUBMIT button is shown from the 1st question onwards; please make sure you do not press it at any cost until you finish your exam.***


 

 

 

My best wishes to those who are going to prepare for HANA exam...

 

 

(Image: best regards.jpg)

       

           NAGASATISH.


SAP HANA: Replicating Data into SAP HANA using HDBSQL

Hi Folks,

 

This document is intended to focus on replication strategies we can use to load data into SAP HANA (without using SLT or BODS).

 

Problem Description:

 

1) To load data into SAP HANA ( Full or Delta)

2) On Load failure, entire load has to be rejected

3) Error message to be notified to the support team

 

 

 

(Image: Screen Shot 2014-01-27 at 3.51.43 PM.png)

We know that we can use either SLT or SAP BODS for loading data into SAP HANA. Depending on the type of requirement and other external factors we can choose either one of the two.

 

There are some discussions and polls as well about the best way to replicate data into SAP HANA (which encouraged me to write this document).

 

What is the Replication Technique used in your project to get the data into SAP HANA database? There you can see that the people who voted liked SLT the most (possibly due to the "real-time" replication that it supports).


Small-scale replication into HANA

 

Interesting Blog links and videos on the similar topic:

 

Best Practices for SAP HANA Data Loads

HANA Recommends Project: Bulk loading data usin... | SAP HANA

HANA Recommends Project: Bulk loading dat... | SAP HANA

HANA Recommends Project: Bulk load data using C... | SAP HANA

Scheduling a job in SAP HANA using  HDBSQL and windows task scheduler

SAP HANA Academy: Backup and Recovery - Scheduling Scripts

So we have different options to load data and practices to follow to get optimal loading performance into SAP HANA. Now let us check how we can use HDBSQL for loading.

 

Find the steps below:

 

1) Create an HDBUSERSTORE key (for secure logon)

2) Create a .ksh to sftp the flat file to SAP HANA Database server

3) Create a .ksh to use HDBUSERSTORE to login to SAP HANA

4) Call the Stored Procedure to load data into SAP HANA.


Find the HLD below:

 

(Image: Screen Shot 2014-02-17 at 7.15.36 PM.png)

 

1) Steps to Create a HDBUSERSTORE:

 

  • Log in to the file system and switch to the user (which will call the procedure)

          SU - <UserName>          password: *****

  • Set the user store Key:

           "/usr/sap/hdbclient/hdbuserstore" SET <USERSTORENAME> <HOSTNAME:PORT>  <USERNAME> <PASSWORD>

  • Check connection:

          "/usr/sap/hdbclient/hdbsql" -U <USERSTORENAME>


2) Create a .ksh to use HDBUSERSTORE to login to SAP HANA:

As mentioned in the videos above, you can use a shell script to frame the import statements and to call the generic procedure.
You will need to log in to the database using the HDBUSERSTORE key and then frame your import & CALL statements. Please find the sample code below:


export PATHNAME=/ngs/app/krishna
log_file=${PATHNAME}/data_load"_"`date +"%Y%m%d%H%M%S"`.log
loading_call_out=${PATHNAME}/data_load"_"`date +"%Y%m%d%H%M%S"`.out
loading_call_err=${PATHNAME}/data_load"_"`date +"%Y%m%d%H%M%S"`.err
echo "loading_call_out : ${loading_call_out}" >> ${log_file}
echo "loading_call_err : ${loading_call_err}" >> ${log_file}
echo "Calling KRISHNA.DATA_LOAD_USING_ARRAY" >> ${log_file}
#calling the proc
SQLQUERY=`echo "CALL KRISHNA.DATA_LOAD_USING_ARRAY"`
echo ${SQLQUERY} >> ${log_file}
/usr/sap/hdbclient/hdbsql -U HANAUSER -z <<EOF 1>${loading_call_out} 2>${loading_call_err}
${SQLQUERY}
\q
EOF

Note: As I am not a Unix guy, I am just sharing sample code which will log in to the SAP HANA DB and fire the queries. You may want to involve a Unix expert and write the shell script according to your requirements.

 

3) Generic Procedure to load data into SAP HANA :


Please find the sample generic procedure which will help to load data into SAP HANA below:


SAP HANA: Generic Procedure using Arrays to Load delta data into Tables


This procedure will help you to load data from the Staging table to Target table in SAP HANA. You can add the error handling and other checks as mentioned in the document as per your customer requirements.


Hope you enjoyed reading this blog. Awaiting your valuable feedback on this.

 

Yours,

Krishna Tangudu



Serving up Apples & Pears: Spatial Data and D3


 

D3 is a third party Data visualization tool included as standard in HANA XS.

SAPUI5 is great for standard stuff, but for going off-piste D3 is fantastic.  It just boggles my mind what Mike Bostock has created and shared with the world. Where does he find the time?  Check out many of the great examples at http://d3js.org/

 

Mike: If you ever read this - thank you, D3 is brilliant.

 

 

Back in the pure realm of HANA SPS07 we now have more advanced GIS features to use; check out http://help.sap.com/hana/SAP_HANA_Spatial_Reference_en.pdf

 

Using these features it's now possible to store geographic shapes, such as countries, regions, boundaries, buildings and perhaps even fruit trees.

 

Rather than just serving up some fast data facts from HANA I thought why not serve up some fruit.

 

In the beginning I used HANA XS & D3 to create the world (based on http://bl.ocks.org/mbostock/3757119)

I then defined a spatial data table and inserted some fruit (spatial data shapes - Polygons).

Finally HANA spat the fruit out onto the earth, reformatted into GeoJson. Charming!!!

(Image: Capture.JPG)

 

NOTE: No bitmap images are used in the creation of this. It is built exclusively with HANA XS and D3.

 

 

To recreate this on your HANA SPS7 box:

 

A) Create the world map with D3 in HTML. It's pretty straightforward, you can follow any of Mike's many examples.  (My completed HTML is included at the end)


B) Create a Spatial table in HANA for storing the fruit

CREATE COLUMN TABLE SpatialShapes(

id integer, shape ST_GEOMETRY, name nvarchar(20), color nvarchar(20));

 

C) Use a GIS tool to create your polygon (e.g. http://www.openjump.org/) then save in a suitable format (see SAP_HANA_Spatial_Reference_en.pdf section 2.3.2)

 

D) Import or Insert the shape into the new HANA SpatialShapes table.

In my case I inserted 2 fruit:

APPLE

INSERTINTO SpatialShapes VALUES(1, new ST_POLYGON('POLYGON ((-115.80622866283737 50.396875473355024, -125.85683522021442 49.748449243846835, -132.34109751529638 50.721088588109126, -140.44642538414882 50.07266235860092, -150.82124505627996 47.15474432581405, -159.57499915464064 41.96733448974847, -166.38347456447673 30.29566235860094, -169.62560571201772 17.97556399794519, -168.97717948250948 -3.42250157582529, -165.41083522021444 -11.852042559431862, -159.25078603988652 -22.875288461071204, -151.4696712857882 -32.277468788940055, -144.98540899070622 -37.464878625005625, -137.20429423660784 -42.00386223156299, -130.07160571201769 -43.9491409200876, -122.29049095791935 -43.3007146905794, -115.15780243332917 -42.32807534631711, -108.3493270234931 -42.652288461071194, -97.97450735136196 -43.9491409200876, -88.8965401382472 -39.410157313530206, -80.46699915464065 -30.980616329923645, -72.68588440054229 -18.01209173975972, -68.1469007939849 -4.395140920087595, -66.20162210546032 9.870236129092735, -68.1469007939849 21.866121374994382, -73.98273685955867 36.455711538928796, -81.43963849890294 44.88525252253535, -90.5176057120177 49.748449243846835, -98.94714669562426 50.721088588109126, -105.75562210546032 51.36951481761733, -110.94303194152589 50.396875473355024, -112.2398844005423 50.721088588109126, -108.3493270234931 62.06854760450257, -108.02511390873902 65.95910498155176, -109.64617948250951 67.90438367007634, -111.59145817103409 68.55280989958455, -112.88831063005047 66.93174432581405, -112.56409751529638 64.33803940778127, -113.21252374480459 59.474842686469785, -114.83358931857508 53.963219735650114, -115.80622866283737 50.396875473355024))'), 'Apple', 'red');

PEAR

INSERTINTO SpatialShapes VALUES(2, new ST_POLYGON('POLYGON ((43.10892629676332 9.24977905375759, 36.948877116435455 2.441303643921527, 31.761467280369878 -6.312450454439132, 28.84354924758299 -14.417778323291587, 27.5466967885666 -24.792597995422724, 29.81618859184529 -37.43690947083256, 34.679385313156764 -48.78436848722602, 43.10892629676332 -57.53812258558666, 53.48374596889447 -62.401319306898145, 65.4796312147961 -64.34659799542273, 69.69440170659938 -61.75289307738994, 76.50287711643544 -60.456040618373535, 82.33871318200922 -62.72553242165223, 91.41668039512396 -64.02238488066864, 106.0062705590584 -59.15918815935716, 113.46317219840265 -52.02649963476698, 120.92007383774693 -37.11269635607847, 121.5685000672551 -20.577827503619446, 115.08423777217314 -2.746106192144042, 107.30312301807479 12.81612331605268, 100.81886072299282 25.460434791462518, 90.44404105086167 46.2100741357248, 86.5534836738125 47.5069265947412, 83.63556564102561 48.1553528242494, 81.04186072299282 50.42484462752808, 82.9871394115174 64.04179544720022, 82.01450006725511 78.63138561113463, 74.55759842791085 78.63138561113463, 74.23338531315676 69.87763151277399, 76.17866400168134 63.39336921769201, 75.20602465741905 58.530172496380544, 74.88181154266495 55.288041348839556, 74.55759842791085 53.666975775069076, 70.34282793610758 52.69433643080677, 65.15541810004201 51.39748397179038, 61.913286952501025 50.42484462752808, 59.31958203446824 47.8311397094953, 56.725877116435456 42.96794298818382, 43.10892629676332 9.24977905375759))'), 'Pear', 'gold');

 

Check the output with:

select *, shape.ST_AsGeoJSON() as "GeoJSON" from SpatialShapes

 

E) Create a HANA XS Serverside Javascript GeoShapes.xsjs to serve up the shapes in GeoJson format

 

function getGeoShapes() {

  

       function createShapeEntry(rs) {

         

              var geometry = JSON.parse(rs.getNString(3)); // GeoJson is Object

         

              //D3 currently appears to need anti clockwise winding of Shapes

              //https://github.com/mbostock/d3/issues/1232

              geometry.coordinates[0] = geometry.coordinates[0].reverse();

         

              return {

                     "type":"Feature","properties": { "name" : rs.getNString(1), "color" : rs.getNString(2) },

                     "geometry": geometry,

                     "id" : rs.getInteger(4)

              };

       }

       var body = '';

       var list = [];

  

       try {

              var query = 'select name, color, shape.ST_AsGeoJSON() as "GeoJSON", id  from  <INSERT YOUR SCHEMA>.SpatialShapes where not name is null'; // and id = 23';

                      

              var conn = $.db.getConnection();

              var pstmt = conn.prepareStatement(query);

              var rs = pstmt.executeQuery();

 

              while (rs.next()) {

                     list.push(createShapeEntry(rs));

              }

 

              rs.close();

              pstmt.close();

         

       } catch (e) {

              $.response.status = $.net.http.INTERNAL_SERVER_ERROR;

              $.response.setBody(e.message);

              return;

       }

 

       body = JSON.stringify({

              "type":"FeatureCollection",

              "features": list

       });

 

       $.response.contentType = 'application/json; charset=UTF-8';

       $.response.setBody(body);

       $.response.status = $.net.http.OK;

}

 

var aCmd = $.request.parameters.get('cmd');

switch (aCmd) {

default:

       getGeoShapes();

}

 

 

F) Finally, add the HANA shapes onto the world map html (marked in Blue)


<html><head> 

    <meta http-equiv='X-UA-Compatible' content='IE=edge' /> 

    <title>Hello World</title> 

 

    <script id='sap-ui-bootstrap'

        src='../../../sap/ui5/1/resources/sap-ui-core.js' 

        data-sap-ui-theme='sap_goldreflection' 

        data-sap-ui-libs='sap.ui.commons'></script>  

 

    <script src="../../../sap/ui5/1/resources/sap/ui/thirdparty/d3.js"></script>


    <style type="text/css">

 

              svg {

                     width: 1280px;  //1980px;

                     height: 800px;  //800px

                    pointer-events: all;

              }

             

             

              path {

                     fill: #aaa;

                    stroke: #fff;

              }

 

 

    </style>        

 

<script> 

 

 

 

       var html1 = new sap.ui.core.HTML("html1", {

        // the static content as a long string literal

        content:

                           "<div class='Chart'>" +

                     "</div>"

                ,

        preferDOM : false,                     

        // use the afterRendering event for 2 purposes

        afterRendering : function(e) {

       

                              

      

                     // WORLD CHART

                     var feature;

          

                     var vScale = 200;

                     var width = 1280,

                     height = 800;

           

                     var projection = d3.geo.equirectangular()

                      .scale(vScale)

                      .translate([640, 400]);

 

                    

                    

                     var path = d3.geo.path()

                         .projection(projection);

                    

                     // Custom Mode

                     var svg = d3.select(".Chart").append("svg:svg")

                         .attr("width", width)

                         .attr("height", height)

                         ;

                    

                    

                     var graticule = d3.geo.graticule();

                    

                    

                     //WORLD

                     d3.json("world-countries.json", function(collection) {

 

                       feature = svg.selectAll("path.world")

                           .data(collection.features)

                           .enter().append("svg:path")

                           .attr("d", path)

                           .attr("class", "world")

                           .style("stroke-width", 2)

                                 .style("stroke", "white")

                                 .style("fill", "LightSlateGray ")

                           ;

                      

                      

                       svg.append("path")

                         .datum(graticule)

                           .style("fill", "none")

                           .style("stroke", "#777")

                           .style("stroke-width", ".5px")

                           .style("stroke-opacity", ".5")

                           .attr("d", path);

                          

                       addHanaShapes();

 

                     });

 

           

                     function addHanaShapes() {

                           // Serving up HANA shape objects               

                           d3.json("../services/GeoShapes.xsjs", function(collection) {

      

      

                             feature = svg.selectAll("path.hana")

                               .data(collection.features)

                               .enter().append("svg:path")

                                 .attr("d", path)

                                  .attr("class", "hana")

                                 .style("stroke-width", 0.5)

                                       .style("stroke", "black")

                                       .style("fill-opacity", 0.25)

                                       .style("fill", function(d, i) {

                                           return d.properties.color;

                                         })

                                 ;

                    

      

                     });

              }

                    

                          

        }

    });

 

    html1.placeAt('content'); 

</script>

 

</head>

<body class='sapUiBody'>

    <div id='content'></div>

</body>

</html>

 

 

If you give it a go then please do let me know how you get on.

Filtering Rules using SAP HANA Decision Table

Filters are often applied by business users or rule designers to control the output based on multiple parameters specific to an industry - for example filtering customers that have a specific plan in the telecom industry, filtering deals based on automobile manufacturer, model etc., filtering customers that have a specific policy in the life insurance sector, or filtering messages in your inbox. This blog will walk you through the process of defining rules and using them to filter content based on a specified action and/or input.

 

 

Facts

  • The Decision Table does not directly support filtering of rules. It is used in concert with a Calculation View to achieve filtering.
  • There could be several approaches based on the requirements, like performance, whether the filtering is to be done first and then the rules executed or vice versa, etc. In all approaches a Calculation View has to be used to filter, no matter at which stage you choose to filter.
  • This solution can be applied since HANA release SPS06

 

Usecase
There is an online company that offers discount coupons on car deals from various dealers based on the car model and manufacturer. A set of rules are run to decide the discount given by various dealers from different regions. Customers can use these discount coupons with the dealers when purchasing the car.

 

 

Solution
Here is a step-by-step guide that can be used to filter rules using a decision table, based on the use case described above.  The solution is divided into 3 sections: (a) Data model (b) Decision Table model (c) Consumption model

 

Note: All the images are based on modeling done in HANA Studio SPS07, but the same use case can be designed in the same way in SPS06 as well.

 

 

(a) Data Model

I have created three database tables named CAR, DEALER and DISCOUNT_COUPON. CAR table contains all the metadata about the car, DEALER contains all the metadata about the dealers and their location and DISCOUNT_COUPON contains discount information that would later be suggested to the customers who are looking for best buy before purchasing the car.

 

(Image: Image1.jpg)

 

 

(b) Decision Table Modeling

 

Data Foundation

  • Use the tables to create the data foundation of the decision table

 

(Image: Image2.jpg)

 

 

  • Add the Attributes from the data foundation to create Condition and Actions of decision table

 

 

(Image: Image3.jpg)

     Note: the Action is a parameter - DISCOUNT - which is set after the rules are executed.

 

Decision Table

  • Fill the decision table with Condition and Action values

(Image: Image4.jpg)

 

 

  • Finally, Save, Validate and Generate Decision Table

 

Note: This would generate the Result View that would be used in Calculation View.

For more details on modeling a decision table, refer to my blog series.

 

 

Calculation View

 

  • Use the result view of the decision table in Projection shown as Projection_1

        Note: You can find result view in “_SYS_BIC/<your-package>/<your-decision-table-name>_RV

  • In this Projection_1,  create Input Parameters and Filter.
    • Input parameters are the ones that you want the user to input, like Model and Manufacturer, based on which the deals from various dealers are suggested.

 

(Image: Image5.jpg)

 

    • Filter is based on the Model and Manufacturer entered by the user as INPUT PARAMETERS

        

(Image: Image6.jpg)

 

  • Finally, Save , Validate and Generate Calculation View
  • Test
      • To test the outcome of the Calculation View, use Data Preview of Calculation View

 

(Image: Image7.jpg)

(Image: Image9.jpg)

 

 

 

(c)  Consumption Model

The Calculation View can further be consumed using an OData service.

 

 

You can thus use a decision table to control the items that are consumed in your application, and bring the ability of controlled consumption to the database. Follow this blog to successfully create custom applications in HANA, especially where filtering rules are needed. Do write in with your suggestions and feedback. If you have any queries on filtering rules then drop me a comment; I would be happy to help you!

Hana SPS07, the spatial engine and taking a byte out of the Big Apple

Introduction

 

The Hana Spatial Engine became GA (general availability) as of SPS07.  Very exciting, no more ATAN2, SIN, COS and SQRT for me. 



Let me give you a straightforward example of what is traditionally required to calculate the distance between two pairs of GPS latitude/longitude coordinates.  This is my implementation in JavaScript:

 



myHplApp.controller.Distance = function(fromLatitude, fromLongitude, toLatitude, toLongitude) {

  var earthRadiusMetres      = 6371000; // in metres

  var distanceLatitude       = myHplApp.controller.toRad(fromLatitude - toLatitude);

  var distanceLongitude      = myHplApp.controller.toRad(fromLongitude - toLongitude);

  var lat1                   = myHplApp.controller.toRad(fromLatitude);

  var lat2                   = myHplApp.controller.toRad(toLatitude);

  var a = Math.sin(distanceLatitude / 2) * Math.sin(distanceLatitude / 2) +

          Math.sin(distanceLongitude / 2) * Math.sin(distanceLongitude / 2) *

          Math.cos(lat1) * Math.cos(lat2);

  var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a));

  var distance = (earthRadiusMetres * c).toFixed(2);

  return distance;

};
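(The toRad helper used above is not shown in the blog; presumably it is just a degrees-to-radians conversion along these lines:)

// Assumed helper, not part of the original snippet: convert degrees to radians
myHplApp.controller.toRad = function(degrees) {
  return degrees * Math.PI / 180;
};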

 

 

 

 

Compared with the requirement being delivered in Hana Spatial:

 

SELECT NEW ST_Point('POINT(58.641759 -3.068672)',4326).ST_Distance(NEW ST_Point('POINT(50.05935 -5.708054)',4326)) FROM dummy;



 

This simple example involves only two coordinate pairs, not 20, 50 or 100 describing a more complex geometric or geographic shape.  Now we have Spatial, let's get Hana to do all the heavy lifting for our GIS (Geographic Information System) needs!



Frequent visitors to the Hana Developer Center and Hana In-Memory forums may have seen fantastic blogs relevant to geo data processing, like Kevin Small's reverse geocoding and Aron MacDonald's spatial with D3.  It's clear this topic has community interest and I'm rather intimidated to be writing my first SCN blog in such esteemed company.

 

 

Regretfully you won't get any beautifully rendered maps nor tablet-enabled apps here. What I do want to share with you is the consolidated learning of a team effort this week in the forum, thanks to all, it's been fun.  I'm going to use Manhattan as an example, let's begin.

 

 

 

Shopping for data on 5th Avenue

 

We need to describe real world objects like streets, areas, roads and other points of interest.  We do this with spatial artifact types like points (points of interest), lines (streets, rivers) and polygons (lakes, counties, countries). 



To describe parts of Manhattan I'm interested in I need geographic data, and I'm going to use a GIS tool to get that.  I use Google Earth, a personal choice, whatever you find works for you.  First I'm interested in the whole of Manhattan, Central Park and Liberty Island.  I want to mark those areas out.  In Google Earth I can use the polygon tool:


(Image: blog_polygon.PNG)


This is cool because you can give an area a border and fill colour and set the opacity.  You can see from the screenshots below I've set Manhattan as red.

(Image: blog2_comp.jpg)



Now I want to plot 5th Ave, 59th St and Manhattan Bridge.  I've set these as yellow to stand out.  I have done this with the path tool in Google Earth, which looks like:


(Image: blog_path.PNG)



Finally some places of interest - my favourite, the Chrysler Building, followed by the Empire State and Statue of Liberty.  You do this with the placemark tool in Google Earth:

 

(Image: blog_pin.PNG)



Finished mapping Manhattan.

(Image: blog1.jpg)

 

 

 

You can organise your places rather nicely in Google Earth too, and toggle visibility.

 

(Image: blog_big_apple_folder.PNG)

 

 

OK, so we have plotted our places of interest, how do we get all these coordinates?  Right-click each place of interest to get the context menu, then choose Save Place As.  Set the type as Kml.


Open the file in a text editor (I use Notepad++).  For Central Park you will see something like the following - you are interested in the contents of the <coordinates> tag.

 

 

<coordinates>
-73.95804761112954,40.80030564443565,0 -73.98220944876016,40.76822446495113,0 -73.97245661014151,40.76435147208066,0 -73.94896121467113,40.79633309813332,0 -73.95804761112954,40.80030564443565,0
</coordinates>
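As a small aside (this sketch is mine, not from the original blog): the <coordinates> content is a list of space-separated "lon,lat,alt" triples, and a few lines of JavaScript can turn it into the comma-separated pairs used in the INSERT statements below:

// Hedged sketch: convert a KML <coordinates> string into coordinate pairs.
// Each KML triple is "lon,lat,alt"; the pair order below matches the inserts
// shown later in this post.
function kmlToWktRing(coordinates) {
    return coordinates.trim().split(/\s+/).map(function (triple) {
        var p = triple.split(",");        // p[0] = lon, p[1] = lat, p[2] = alt
        return p[1] + " " + p[0];
    }).join(", ");
}
// e.g. 'POLYGON((' + kmlToWktRing(centralParkCoordinates) + '))'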

 

 

 

Filling the Hana bag with spatial data goodness

 

Let's create a table to store our spatial data.

 

 

 

create column table SpatialLocations(

    id        integer,

    name      nvarchar(40),

    shape    ST_GEOMETRY(4326)

);

 

 

 

Q. What is type ST_GEOMETRY?

A. ST_GEOMETRY is the spatial data supertype which we are going to use to define our column shape to contain all our coordinates.  More on that in the Spatial Reference guide page 9.

 

 

Q. What does 4326 represent?

A. 4326 is the identifier for WGS84 (World Geodetic System, 1984) for a specific SRS (Spatial Reference System). For now think of it as a set of specifications that enables us to accurately transform groups of spatial coordinates into real world points and measures.

 

 

 

Now we have our Spatial Locations table created, let's fill it with our Manhattan data. 

IMPORTANT - Longitude is expected first, which is the opposite to that provided by Google Earth.

 

 

 

Central Park going into the "bag"

insert into SpatialLocations values(2, 'Central Park', new ST_POLYGON('POLYGON((

    40.80030564443565 -73.95804761112954,

    40.76822446495113 -73.98220944876016,

    40.76435147208066 -73.97245661014151,

    40.79633309813332 -73.94896121467113,

    40.80030564443565 -73.95804761112954

))'));

 

 

 

 

5th Avenue

insert into SpatialLocations values(3, '5th Avenue', new ST_LINESTRING('LINESTRING(

    40.80326767213774 -73.94457982661453,

    40.7315072365715 -73.99689578972514

)'));

 

 

 

 

And my favourite, the Chrysler

insert into SpatialLocations values(6, 'Chrysler Building', new ST_POINT('POINT(

    40.75157529253383 -73.97548823598672

)'));


 

 

Note how I'm describing each real world "shape" with a spatial shape type - ST_POLYGON, ST_LINESTRING, ST_POINT.  Also note with polygons you "close the loop", my first coordinate pair is also my last.  I have included all 9 Manhattan spatial shape table inserts as an attachment.


 

 

Taking a bite with SQL


Here's an example SQL statement determining the distance (in metres) between a point describing Times Square and spatial shape id 6 in our table, the Chrysler building.  Result 1058m.



Note here we specify SRID 4326 for Times Square, as the SRID (spatial reference identifier) needs to be common.  Soon an enhancement to deduce an implied SRID from the column value SRID will be with us.



select shape.st_distance(ST_GeomFromEWKT('SRID=4326;POINT(40.758977 -73.984746)'))

      from spatiallocations

      where id = 6;

 

 

(Image: blog5.PNG)

 



Q. How can I be sure the result is in metres?

A. Linear unit of measure associated with SRID, in view ST_SPATIAL_REFERENCE_SYSTEMS

(Image: blog4.PNG)

 

 


Another example of distance between the Chrysler (id 6) and Empire State (id 7) in our spatial locations table:

 

select A.name as "From", B.name as "To", A.shape.ST_DISTANCE(B.shape) as "Distance(m)"

      from SpatialLocations A , SpatialLocations B

      where A.id = 6 and B.id = 7;

(Image: blog7.PNG)

 

 

 

Munching consumption with XSJS

 

We have seen interrogation of the spatial shapes directly with SQL in the console.  I've also included consumption with an XSJS service (code below and attached - no warranty provided for lack of robustness!). 


Q. Tell me about Central Park?

A. bigApple.xsjs?cmd=poi&poi=2


 

(Image: blog20.JPG)

Q. Give me the distance between the Chrysler and Empire State?

A. bigApple.xsjs?cmd=dst&poiFrom=6&poiTo=7

 

{
  "shape": [
    {
      "From": "Chrysler Building",
      "To": "Empire State Building",
      "Distance(m)": 1148
    }
  ]
}



Q. Is the Chrysler within 1100m of the Empire State?

A. No.  bigApple.xsjs?cmd=wdst&dstM=1100&poiFrom=6&poiTo=7

{
  "shape": [
    {
      "From": "Chrysler Building",
      "To": "Empire State Building",
      "Distance(m)": "1100",
      "Within distance": "No"
    }
  ]
}

 




Q. Is the Chrysler within 1200m of the Empire State?

A. Yes.  bigApple.xsjs?cmd=wdst&dstM=1200&poiFrom=6&poiTo=7

{
  "shape": [
    {
      "From": "Chrysler Building",
      "To": "Empire State Building",
      "Distance(m)": "1200",
      "Within distance": "Yes"
    }
  ]
}

 


Q. Does 5th Avenue intersect Manhattan Bridge?

A. No. bigApple.xsjs?cmd=int&poiFrom=3&poiTo=5

{
  "shape": [
    {
      "From": "5th Avenue",
      "To": "Manhattan Bridge",
      "Intersects": "No"
    }
  ]
}




Q. Does 5th Avenue intersect 59th St.?

A. Yes. bigApple.xsjs?cmd=int&poiFrom=3&poiTo=4

{
  "shape": [
    {
      "From": "5th Avenue",
      "To": "59th St",
      "Intersects": "Yes"
    }
  ]
}



Q. Does 5th Avenue equal 59th St.?

A. No. bigApple.xsjs?cmd=eq&poiFrom=3&poiTo=4

{
  "shape": [
    {
      "POI A": "5th Avenue",
      "POI B": "59th St",
      "Spatially Equal": "No"
    }
  ]
}

 

 

 

Final Thoughts


Currently there is a lack of documentation, example code, cookbooks and developer orientated videos, especially considering the GA status.  Over the last year the Hana Developer community has indulged in a plethora of quality educational and technical resources for core topics.  The bar is set high.  However once I had a few examples on the go it was easy to work with, very code light.

 

 

Hana itself has evolved remarkably over the last year I have been developing with it as a solution - it's an effort to keep up.  There are a few gaps in the current Spatial Engine solution.  One is not being able to work with ST_GEOMETRY spatial types in SQLScript procedures.  I understand the Spatial team are making progress towards this for 7.2.  Another gap is the inability to declare entities with spatial-based columns in your CDS (Core Data Services) .hdbdd or hdbtable definition artifacts.  I also understand this is coming in SPS8/9 (TBD).  So improvements and added functionality to Spatial will come with each release, as they have for Hana core as a whole, with the learning materials to support.



In the meantime a HUGE thank-you to Gerrit Simon Kazmaier, chief technical architect for Spatial who has been providing great support in the community forum.  Perhaps whilst I have his attention - for some reason I was unable to successfully work with the ST_CONTAINS and ST_WITHIN methods for SRID 4326, to determine if for example Central Park lies within Manhattan.  I could not get ST_AREA to give me the size of Liberty Island.  It is not clear if some methods are restricted by SRID "type".  Any pointers here welcomed!

 

 

The Hana Spatial Engine opens up a whole host of exciting opportunities for delivering application capability with spatial awareness.  The consumer group is large and diverse, including:

 

  • Healthcare, for tracking of infection.
  • Enabling local authorities with decision support systems for urban planning and geographic analytics on crime.
  • Consumer goods companies delivering targeted customer marketing and effective sales delivery.

 

 

The future is exciting, it's spatial!

 

(Footnote: - no more fruit-orientated blogs from me)

 



XSJS service bigApple.xsjs


(You may wish to get the code from the attachment due to formatting concerns)


 

function bigApple() {

 

    var pstmt          = null,

        rs              = null,

        conn            = $.db.getConnection(),

        bodyContent    = '',

        myCmd          = $.request.parameters.get('cmd'),

        myPoi          = $.request.parameters.get('poi'),

        myPoiFrom      = $.request.parameters.get('poiFrom'),

        myPoiTo        = $.request.parameters.get('poiTo'),

        myDstM          = $.request.parameters.get('dstM'),

        myQuery        = null,

        geoShape        = [];

 

 

    var poiOut = function(val){

          var geometry = JSON.parse(val.getNString(3));

          return {

            "id": val.getInteger(1),

            "name": val.getString(2),

            "geometry": geometry

          };

    };

 

 

    var distanceOut = function(val){

          return {

            "From": val.getString(1),

            "To": val.getString(2),

            "Distance(m)": val.getInteger(3)

          };

    };

 

 

    var withinDistanceOut = function(val){

          return {

            "From": val.getString(1),

            "To": val.getString(2),

            "Distance(m)": myDstM,

            "Within distance": val.getInteger(3) ? "Yes" : "No"

          };

    };

 

 

    var intersectsOut = function(val){

        return {

            "From": val.getString(1),

            "To": val.getString(2),

            "Intersects": val.getInteger(3) ? "Yes" : "No"

        };

    };

 

 

 

    var equalOut = function(val){

        return {

            "POI A": val.getString(1),

            "POI B": val.getString(2),

            "Spatially Equal": val.getInteger(3) ? "Yes" : "No"

        };

    };

 

 

    function querySpatial(myQuery) {

      try { 

            pstmt = conn.prepareStatement(myQuery.query);

            if (myQuery.id) {

                pstmt.setInt(1,myQuery.id);

            } else if (myQuery.dstM) {

                pstmt.setInt(1,myQuery.dstM);

                pstmt.setInt(2,myQuery.idFrom);

                pstmt.setInt(3,myQuery.idTo);

            } else {

                pstmt.setInt(1,myQuery.idFrom);

                pstmt.setInt(2,myQuery.idTo);

            }

 

            rs = pstmt.executeQuery();

            while (rs.next()) {

                geoShape.push(myQuery.fnOut(rs));

            }

 

            bodyContent = JSON.stringify({

              "shape": geoShape

            });

     

            $.response.setBody(bodyContent); 

      } catch (e) {

            $.response.status = $.net.http.INTERNAL_SERVER_ERROR;

            $.response.setBody(e.message);

            return;

      }

    }

 

 

    try {

      switch (myCmd) {

            case "poi":  //Point of Interest

                myQuery = { query: 'select id, name, shape.ST_AsGeoJSON()' +

                                  ' from SpatialLocations where id = ?',

                            id:    parseInt(myPoi,10),

                            fnOut:  poiOut };

                querySpatial(myQuery);

                break;

           

            case "dst":  //Distance between two points

                myQuery = { query: 'select A.name as "From", B.name as "To",' +

                                        ' A.shape.ST_DISTANCE(B.shape) as "Distance(m)" from' +

                                        ' SpatialLocations A ,' +

                                        ' SpatialLocations B where A.id = ? and B.id = ?',

                            idFrom: parseInt(myPoiFrom,10),

                            idTo:  parseInt(myPoiTo,10),

                            fnOut:  distanceOut };

                querySpatial(myQuery);

                break;

           

            case "wdst": //Within Distance of two points

                myQuery = { query: 'select A.name as "From", B.name as "To",' +

                                        ' A.shape.ST_WithinDISTANCE(B.shape,?) as' +

                                        ' "Distance(m)" from SpatialLocations A,' +

                                        ' SpatialLocations B where A.id = ? and B.id = ?',

                            dstM: parseInt(myDstM,10),

                            idFrom: parseInt(myPoiFrom,10),

                            idTo:  parseInt(myPoiTo,10),

                            fnOut:  withinDistanceOut };

                querySpatial(myQuery);

                break;


            case "int":  //Two points intersect

                myQuery = { query: 'select A.name as "From", B.name as "To",' +

                                        ' A.shape.ST_Intersects(B.shape) as "Intersects"' +

                                        ' from SpatialLocations A ,' +

                                        ' SpatialLocations B where A.id = ? and B.id = ?',

                            idFrom: parseInt(myPoiFrom,10),

                            idTo:  parseInt(myPoiTo,10),

                            fnOut:  intersectsOut };

                querySpatial(myQuery);

                break;

           

            case "eq":  //Two spatial geometries equal

                myQuery = { query: 'select A.name as "POI A", B.name as "POI B",

                                        A.shape.ST_Equals(B.shape) as "Spatially Equal"

                                        from SpatialLocations A ,

                                        SpatialLocations B where A.id = ? and B.id = ?',

                            idFrom: parseInt(myPoiFrom,10),

                            idTo:  parseInt(myPoiTo,10),

                            fnOut:  equalOut };

                querySpatial(myQuery);

                break;

 

            default:
                $.response.status = $.net.http.INTERNAL_SERVER_ERROR;
                $.response.setBody('Invalid cmd ' + myCmd);

      };  //End myCmd

    } catch(e) {

      $.response.status = $.net.http.INTERNAL_SERVER_ERROR;

      $.response.setBody(e.message);

    } finally {

      if (rs != null) {

            rs.close();

      }

 

      if (pstmt != null) {

            pstmt.close();

      }

 

      if (conn != null) {

            conn.close();

      }

    }

}

 

 

bigApple();
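 

To round this off, below is a minimal usage sketch showing how the service could be called once deployed. The package path and the URL parameter names (cmd, poi, poifrom, poito, dstm) are assumptions based on the variables used above (myCmd, myPoi, myPoiFrom, myPoiTo, myDstM); adjust them to whatever is actually read from $.request.parameters at the top of the script.

// Hypothetical example requests - host, package path and parameter names are assumptions:

// Return POI 1 as GeoJSON
http://<hanahost>:80<instance>/<package>/bigApple.xsjs?cmd=poi&poi=1

// Distance in metres between POI 1 and POI 2
http://<hanahost>:80<instance>/<package>/bigApple.xsjs?cmd=dst&poifrom=1&poito=2

// Are POI 1 and POI 2 within 500 m of each other?
http://<hanahost>:80<instance>/<package>/bigApple.xsjs?cmd=wdst&dstm=500&poifrom=1&poito=2

// Do the two shapes intersect? Are they spatially equal?
http://<hanahost>:80<instance>/<package>/bigApple.xsjs?cmd=int&poifrom=1&poito=2
http://<hanahost>:80<instance>/<package>/bigApple.xsjs?cmd=eq&poifrom=1&poito=2

A successful call returns a JSON body of the form { "shape": [ ... ] }, where each array entry is the object built by the corresponding *Out function above; any error is returned with HTTP status 500 and the exception message as the body.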

How do you get your experts with a negative stance toward new things on board with HANA?

We have over 50 ABAP developers (senior experts). We primarily develop in the classic core modules (SD, MM, FI, CO, HR, PP, CS, IH, PS) on ERP systems / Business Suite.


We have three groups of developers:

Group 1: They can't wait to work on new architectures – they are open to everything, enjoy being pioneers and like to dig deep into the system.

Group 2: For these developers it makes no difference – it is not a problem for them to move to another architecture.

Group 3: They have no interest

-       in working on new architectures

-       in spending time learning new things

-       they are very closed to new things

-       they have negative statements about every topic


I am part of group 1. In my opinion, it is normal in IT to spend a lot of your free time on new topics in order to stay up to date. New topics and innovative things make the developer job very exciting. For me it's a regular process – and it's my own passion.


For two months we have had our own HANA system in our data centre as a playground :-) (Business Suite on SAP HANA). Some colleagues have completed the HANA certification, and we have taken our first steps in the system. For groups 1 and 2 everything is fine and they are having fun.


We have problems with group 3. They nitpick everything and spend a great deal of time looking for arguments against HANA. That's our "negative group". We copied our SAP system to a new system and performed a technical migration. Now they compare the old SAP system, which runs on an Oracle database, with the new SAP system running on HANA. They go through the standard ERP processes (offer / order / purchase order / goods movements / delivery / MM invoice / SD invoice / material master data / customer master data / vendor master data / conditions / financial postings / etc.). Their main argument is that they cannot see any performance improvement or added value from the investment. Our other problem is that these people have over 20 years of experience in ABAP development, so their opinion carries a lot of weight. Their other argument: IBM and Oracle are working on similar architectures, and with those we could keep the Open SQL syntax and our existing code.


Do you have similar problems getting acceptance from group 3?

Do you have tips / tricks for us?

Do you have ideas for winning over group 3?

Which standard components are really optimized for HANA?

In which standard components can we see a real performance improvement?

Are there standard use cases that show the differences?

What data volume do we need in the data model to see the differences?

What can we do to bring group 3 along with us?

 
