
Creating a connection to SAP HANA using Pentaho PDI


In this blog post we are going to learn how to create a HANA Database Connection within Pentaho PDI.

1) Go to the SAP HANA client installation path and copy “ngdbc.jar”

*You can get SAP HANA CLIENT & SAP HANA STUDIO from : https://hanadeveditionsapicl.hana.ondemand.com/hanadevedition/

 

1.png

2) Copy and paste the jar file to : <YourPentahoRootFolder>/data-integration/lib

2.png

3) Start Pentaho PDI and create a new Connection

* Make sure your JAVA_HOME environment variable is set correctly.

3.png

3_1.png

3_2.png

4) Create a transformation, then right-click on Database connections to create a new database connection

4.png

 

5) Select “Generic Database” connection type and Access as “Native(JDBC)”

 

5.png

6) Fill in the following parameters on the Settings tab:

Connection Name: NAMEYOURCONNECTION

Custom Connection URL: jdbc:sap://YOUR_IP_ADDRESS:30015

Custom Driver Class Name: com.sap.db.jdbc.Driver

User Name: YOURHANAUSER

Password: YOURHANAPASSWORD

6.png

 

7) Test your connection.

7.png
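
If you want to verify the same connection parameters outside of PDI, here is a minimal Python sketch using the jaydebeapi package (an assumption, not part of this blog's toolchain; host, credentials and the jar path are placeholders matching the settings above):

# A minimal connectivity check outside PDI (a sketch, assuming the
# jaydebeapi package is installed; host, credentials and jar path are
# placeholders matching the settings above).
import jaydebeapi

conn = jaydebeapi.connect(
    "com.sap.db.jdbc.Driver",            # the Custom Driver Class Name from step 6
    "jdbc:sap://YOUR_IP_ADDRESS:30015",  # the Custom Connection URL from step 6
    ["YOURHANAUSER", "YOURHANAPASSWORD"],
    "/path/to/ngdbc.jar",                # the jar copied in step 1
)
try:
    cur = conn.cursor()
    cur.execute("SELECT * FROM DUMMY")   # HANA's one-row system table
    print(cur.fetchall())                # [('X',)] means the connection works
finally:
    conn.close()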


Are you an expert in knowing what happened yesterday, today?



I have always felt that BW falls short of delivering the true power of analytics it was meant to provide. Yes, you do have some basic data mining algorithms available in BW 7.3 and earlier under the APD tool, and even with BW 7.4 it is still lacking the serious predictive algorithm power any hardcore data junkie would need to take it seriously.


Regardless of version, BW has always had some basic and frequently used functions which at times can be useful in helping to address requirements without having to download huge amounts of data into an Excel sheet or run third-party analytic tools to perform the same type of analysis. Trust that SAP has done a great job in ensuring the complex mathematical formulas behind these functions have been implemented correctly.



However, as any respectable BW consultant would know, standard functionality can only take you so far, and there will be a time when you will be challenged to provide algorithms which are not available, and it is never a good feeling to leave your business users in the dark with no possible outcome or solution. It makes it worse when they start downloading enterprise data into Excel and prove you wrong with basic Excel add-ons. As with everything SAP, the good guys at Walldorf did provide us an additional option to write our own procedures if none of the delivered functions is what you are looking for. But hey, how many of us out there can easily translate this into SQL Script?


This article is not about the ongoing battle between you and your users because, as everyone knows, it is about taking them on a “journey” – I personally find this a cliché, but it does embody the collaboration that needs to happen to reach the end goal. I rather like the quote from Sonny from the movie The Best Exotic Marigold Hotel: Everything will be all right in the end... if it's not all right then it's not yet the end.


If you are running your BW server on HANA, you have a hidden gem concealed underneath that investment your organisation has made. Some believe that data is the new oil (European Consumer Commissioner Meglena Kuneva), and to mine this new resource, SAP has developed a nifty tool called the Application Function Modeller (AFM). It is an Eclipse-based modelling tool and has been available since SPS6. As SAP’s partner we continue to see improvements in the form of stability and additions of algorithms with each new release to help us better understand our data. In my opinion, customers are finally able to benefit from a true single source of information without having to run third-party analytic tools that source data from BW by replicating InfoCube structures into their environment.


The AFM tool to a certain degree does away with the need to perform SQL scripting, but of course I am not suggesting that you entirely blindside yourself from being able to interpret basic SQL commands – it is, after all, a database server with some exceptions to the norm. It just means that you are able to refocus your energy on exploring and tweaking the minefield of information that is available, by using the correct algorithm and method to answer a specific business question. Imagine this: late Friday evening you are dragged into a meeting with folks from the marketing and operations departments, and you are asked to predict the outcome of a new product launch from the information that has been gathered. One approach to this is to use a decision tree to anticipate the market response so that your organisation can react appropriately. But sadly, from a BW implementation point of view, this does not materialise often enough, and worse still, the team manning the day-to-day health and operation of the BW environment does not have any input into conversations such as these. What usually ends up happening is that an experienced user from the marketing department demands a huge amount of data from the reporting server and starts to perform data analysis in Excel or whatever third-party tool is available. There is nothing wrong with that, and I think they should own it – it is what they have been trained for, it is what makes them good at what they do. It is their bread and butter.

 

 

If you are not running your applications on HANA, fine, no contention here but if you are, I am certain the AFM tool will shine. Gone are the days of pesky external connections, constant nit-picking between IT and stakeholders when a simple structure has changed, away with long waiting period for what seems so trivial – data dump, if only they knew.  The list goes on and on and you can fill in with your own frustration here from whichever side of the fence you happen to be on.


Even if you are not into the predictive space, or are unmoved by the hype that data mining is already upon us, or it is a space that you are not willing to jump into, just having a graphical interface to answer business concerns puts the analytic power back into the users' hands. This tool makes playing with data fun because it is so simple to use, given that you have gone through the standard documentation. Apart from that, the performance is there, the tool works and the results are real.


I guess my parting thought on this matter is that BW has never been terrific at performing statistical calculations or running predictive algorithms without loss of sleep, where the motivation that keeps you going is your inner drive to do it at all costs, with an icy Red Bull by your lonely side. I would like to think that SAP has come to recognise this shortfall over the years; it became evident when SAP released their Predictive Analytics solution in 2011, and they continue to strengthen their market position with the acquisition of KXEN, who, in their own space, is a market leader at what they do. With constant revisions to the Predictive Analysis Library native to HANA, you can be positively confident that SAP will continue to make progress in this space. While BW excels in many other aspects of a data warehousing tool, mining data is not its core strength, and I find it comforting that there are other alternative offerings by SAP to address this gap.

 


SAP HANA Idea incubator - HANA as SAP CC database


SAP CC as rating and charging system in telco industry requires top performance elements on every level of IT infrastructure.

When a system rates hundreds of thousands or even millions of transactions per day, every performance improvement means a lot.

I believe HANA, as an in-memory database, would be that kind of improvement.

 

Originally, I posted the idea in the Idea Incubator, but it was rejected since it is an idea for an SAP product improvement; the Idea Incubator team directed me to Idea Place: Enterprise Home

I posted my idea there.

If you like it and agree with me, support my idea by voting on it: HANA as SAP CC database : View Idea

 

[edit] Running SAP CC on HANA would be the best showcase of HANA's value as an in-memory database.

 

 

Best regards,

Mario

SAP HANA & Twitter - Find your leads


In recent times I have personally experienced a lot of change in the way we search for or read reviews on a product, a company, a movie, buying/selling real estate, or finding job vacancies. Somehow we are happy to depend more and more on Twitter for this information. But extracting/searching for information from tweets is not very handy and is very time consuming, considering that the number of tweets grew from 4,500 per day in 2007 to 5,000 per second in 2014.

In this blog, I try to get the Twitter handles of head hunters tweeting about a specific job vacancy and to respond to their exact tweets.


Summary:

  • Posting tweets to SAP HANA on a daily basis using a Python script.
  • Filtering/converting the tweet data into structured data using a few SQL queries & fuzzy search.
  • Storing the data in full-text index tables so that I can differentiate the data.
  • Using XSODATA services on the index tables in SAP Lumira for visualization.
  • A Python script to reply to specific tweets of specific users.

 

Initially I wanted to build an application with selection parameters to search/filter the tweet data and to reply to the tweets using XSJS. But I had to settle for Python due to trust store issues.

Let’s do it step-by-step.

 

1. Creating tables and services in HANA:


I created a few tables using the SQL console and a .xsodata service.


Sample Code:

create column table "MOHAS97"."amohas97.session.data::fuzz"
(
  IDNO nvarchar(60) primary key,
  CNTR nvarchar(60),
  THLR nvarchar(60),
  CRTD nvarchar(60),
  TTXT nvarchar(260) fuzzy search index on
);


XSODATA:

service namespace "amohas97.services" {
  "MOHAS97"."amohas97.session.data::fuzz" as "tweet";
  "MOHAS97"."amohas97.session.data::twitter" as "twitter";
  "MOHAS97"."$TA_TWTIDX" as "twitlds";
  "MOHAS97"."amohas97.session.data::twitfnl" as "twitfnl";
  "MOHAS97"."$TA_TWTFNL" as "twitfnlidx";
  "MOHAS97"."amohas97.session.data::twiturl" as "twiturl";
  "MOHAS97"."amohas97.session.data::twitloc" as "twitloc";
}

 

2. Posting the tweet data from twitter:


The Twitter API has limitations: it can't fetch data older than a week. So I have executed the Python script at regular/irregular intervals.


Install Twython for Python and get your Twitter keys from dev.twitter.com


The following is the working code to fetch the data from Twitter and post it to SAP HANA. I have used the words "SAP ABAP"; the API will fetch all tweets which contain both "SAP" and "ABAP".


 

#! /usr/bin/python

import requests
import json
import csv
import sys
import xlrd
import os
#import urllib.request as urllib2
import codecs
from twython import Twython, TwythonError
import time
from datetime import datetime
from datetime import timedelta
import socket  # import the socket module

appURL = 'http://va..........your server addd ..:8008/amohas97/session/services/pytxso.xsodata'
auth = 'your hana id', 'your password'

s = requests.Session()
s.headers.update({'Connection': 'keep-alive'})
headers = {"Content-type": 'application/json;charset=utf-8'}
r = s.get(url=appURL, headers=headers, auth=auth)
url = appURL + "/tweet"

# Requires Authentication as of Twitter API v1.1
twitter = Twython('8Jep7jyAstr8W8wxMekC3', 'ywvEJKc4TRnZcDHiHBP4jZmYH73DCEgf7UnLrlwprUwh7l', '273814468-7MUccHp07UiPvpL5o6ktIZkjdZg7YXjMTGH', 'iFMgreouGh6Hl18eGX3r99U3IjjaqXdMxp8B4yUN')

keywords = ['sap abap']
count = 0

for page in range(0, 1):
    # count=35: read 35 tweets per request; the max is 100
    search = twitter.search(q=keywords, count=35, include_retweets=False, timeout=1500)
    tweets = search['statuses']
    for tweet in tweets:
        count += 1
        ts = datetime.strptime(tweet['created_at'], '%a %b %d %H:%M:%S +0000 %Y')
        ts1 = str(ts)
        data = '{"IDNO": " ' + tweet['id_str'] + ' ", "CNTR": " ' + str(count) + ' ", "THLR": " ' + tweet['user']['screen_name'] + ' ",  "CRTD": " ' + ts1[0:10] + ' ", "TTXT": " ' + str(tweet['text'].encode('utf-8')) + ' "}'
        r = s.post(url, data=data, headers=headers)
    print(count)
    # last is the ID of the last tweet; we are going to use it in the next step
    last = tweet['id_str']

# We need to pass a value for max_id, otherwise the request will fetch the same
# (or the latest) tweets again. Maintaining max_id helps the request start from
# the last fetched tweet. I have looped it 10 times and faced issues with higher
# loop counts, so I used this second loop to fetch more tweets.
for page in range(0, 10):
    search2 = twitter.search(q=keywords, count=35, include_retweets=False, max_id=last, timeout=1500)
    tweets2 = search2['statuses']
    for tweet in tweets2:
        count += 1
        ts = datetime.strptime(tweet['created_at'], '%a %b %d %H:%M:%S +0000 %Y')
        ts1 = str(ts)
        data = '{"IDNO": " ' + tweet['id_str'] + ' ", "CNTR": " ' + str(count) + ' ", "THLR": " ' + tweet['user']['screen_name'] + ' ",  "CRTD": " ' + ts1[0:10] + ' ", "TTXT": " ' + str(tweet['text'].encode('utf-8')) + ' "}'
        r = s.post(url, data=data, headers=headers)
    print(count)
    last = tweet['id_str']

 

 

Table Screenshot:

I have collected 6000+ tweets.

fuzz table.jpg

 

3. Filtering the data:

 

Let's assume we are more interested in finding a job in ABAP in the CRM module, or in ABAP HCM, or any other area.

I have executed the below SQL in the SQL console to save the tweet data which has ABAP CRM mentioned in the tweets into another table.

 

 

INSERT INTO "MOHAS97"."amohas97.session.data::twitter" select * from "MOHAS97"."amohas97.session.data::fuzz"

  where contains("TTXT", 'ABAP CRM', fuzzy(0.9, 'ts=compare, excessTokenWeight=0.1, decomposewords=2' ))

  order by 1 desc;

 

The fuzzy score is 0.9.

excessTokenWeight = 0.1 -> any words in the string/tweet other than ABAP CRM are largely ignored during the select.

decomposewords = 2, as in looking for two separate words.

 

I have got 161 records

 

twitter table.jpg

 

4. Creating FULLTEXT index for the table with Configuration 'EXTRACTION_CORE_VOICEOFCUSTOMER'

 

CREATE FULLTEXT INDEX TWTIDX ON "MOHAS97"."amohas97.session.data::twitter" ("TTXT")

CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER'

TOKEN SEPARATORS '\/;,.:-_()[]<>!?*@+{}="&'

TEXT ANALYSIS ON;

 

The configuration 'EXTRACTION_CORE_VOICEOFCUSTOMER' helps us classify the strings into different types:

whenever a "#" is mentioned, it is considered "SOCIAL", and an HTTP link is considered a "URL".

twitindex1.jpg

 

When I browsed this data, I realized people had used "#" in front of the city/location, so I am removing "#" from the table. The "type" will then show as "LOCALITY".
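
One way to perform that removal is a simple SQL UPDATE with REPLACE. Below is a minimal sketch that runs it from Python via the hdbcli driver (host and credentials are placeholders; the same statements can equally be run directly in the SQL console):

# A sketch of the '#' removal, run from Python via the hdbcli driver
# (host and credentials are placeholders).
from hdbcli import dbapi

conn = dbapi.connect(address="your.hana.host", port=30015,
                     user="MOHAS97", password="your_password")
cur = conn.cursor()
# Strip '#' from the tweet text so VOC can tag cities as LOCALITY
cur.execute('UPDATE "MOHAS97"."amohas97.session.data::twitter"'
            ' SET "TTXT" = REPLACE("TTXT", \'#\', \'\')')
conn.commit()
# Quick check of how the token types are distributed afterwards
cur.execute('SELECT TA_TYPE, COUNT(*) FROM "MOHAS97"."$TA_TWTIDX"'
            ' GROUP BY TA_TYPE')
print(cur.fetchall())
conn.close()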

twitindex2.jpg

Using the above index table, I further created three more tables. I will use these tables in Python, SAP Lumira and Excel.

 

Table 1:  Using XSODATA in SAP Lumira

I want to compare different cities in terms of the job market.

 

INSERT  INTO "MOHAS97"."amohas97.session.data::twitloc"

  SELECT idno,TA_TOKEN

              FROM "MOHAS97"."$TA_TWTFNL"

            WHERE Ta_type = 'LOCALITY'

            group by idno, ta_token

            order by ta_token;

 

I will use XSODATA to view the table data in SAP Lumira.

 

l1.jpg

l2.jpg l3.jpg l4.jpg l5.jpg

 

I have created a geography hierarchy.

Looks like Pune has more job offers, or maybe its Twitter users are more active.

 

Table 2:  Using XSODATA in Excel

Let's get all the URL links mentioned in the tweets into Excel.

 

  INSERT  INTO "MOHAS97"."amohas97.session.data::twiturl"

  SELECT TA_TOKEN, max(idno)

              FROM "MOHAS97"."$TA_TWTFNL"

            WHERE Ta_type = 'URI/URL'

            group by ta_token

            order by ta_token;

 


Open the Excel file and consume the OData service as follows. All the URLs are fetched into a column.

e1.jpg

e2.jpg
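
If you'd rather pull the same feed programmatically instead of through Excel, a small sketch with the Python requests library could look like this (server address and credentials are placeholders, and it assumes the URL column of the twiturl table keeps the TA_TOKEN name from the source table):

# Fetching the same OData feed with Python instead of Excel (a sketch;
# server address, credentials and the TA_TOKEN column name are assumptions).
import requests

base = "http://your.hana.host:8008/amohas97/session/services/pytxso.xsodata"
resp = requests.get(base + "/twiturl/?$format=json",
                    auth=("your hana id", "your password"))
for row in resp.json()["d"]["results"]:
    print(row["TA_TOKEN"])  # one job URL per row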

 

 

Table 3:  Reply to Tweets -Twitter Bot


I want to reply to each and every tweet after filtering on my own conditions. It is again going to be tough to find the shortlisted tweets and respond to all of them.

So I am going to use this third table to reply to all the exact tweets at once.

Now I only have 117 records. I am going to use this data in Python to respond to their tweets.

 

INSERT INTO "MOHAS97"."amohas97.session.data::twitfnl"

            SELECT *

            FROM "MOHAS97"."amohas97.session.data::twitter" AS s

            WHERE EXISTS

                        (SELECT *

                        FROM "MOHAS97"."$TA_TWTIDX" as p

                        WHERE p.idno = s.idno

                          and ( p.Ta_type = 'LOCALITY'

        OR p.TA_TYPE = 'ADDRESS2') );

 

twitfnl.jpg

 

Python Code:

 

#! /usr/bin/python

import requests
import json
import csv
import sys
import xlrd
import os
import urllib.request as urllib2
import codecs
from twython import Twython, TwythonError
import time
#import datetime
from datetime import datetime
from datetime import timedelta
import socket  # import the socket module

appURL = 'http://v.... your server add .....:8008/amohas97/session/services/pytxso.xsodata'
auth = 'your hana id', 'your password'

s = requests.Session()
s.headers.update({'Connection': 'keep-alive'})

url = appURL + "/twitfnl/?$format=json"
#url = appURL + "/twitfnl"

r = s.get(url, auth=auth)
data = json.loads(r.text)
twitdata = data['d']['results']

# Requires Authentication for Twitter API v1.1
twitter = Twython('8Jep7jyA8wxMekC3', 'ywvEJKc4T3DCEgf7UnLrlwprUwh7l', '273814468-7MUccHpYXjMTGH', 'iFMgreouGXdMxp8B4yUN')

k = 0
for i in twitdata:
    #print("Twitter Handler: " + twitdata[k]['THLR'] + " --- " + "Twitter ID: " + twitdata[k]['IDNO'])
    tweet = '@' + twitdata[k]['THLR'].strip() + ' ' + 'I am not interested - Share more details to refer others'
    tid = twitdata[k]['IDNO']
    k = k + 1
    # Posting the reply tweet
    twitter.update_status(status=tweet, in_reply_to_status_id=tid)
    print(tweet, tid)

 

Let's check my tweets on Twitter.

Out of 6000+ tweets I was able to choose and reply to 117 tweets. I have the job URLs and know which city has more jobs.

And I know the right head hunters.

 

t1.jpg t2.jpg

Alpaca - Unit tests over Hana


How was Alpaca born?

 

It all started when we were asked to move some code from Java into a DB stored procedure.

Usually when writing code in Java we use TDD (Test Driven Development methodology) and write unit tests together with the development process.

Doing this in Java is easy, and we use JUnit libraries for it.

However, there is no tool or framework that allows me to write unit tests on the HANA DB, and there is no way to work TDD while developing stored procedures and functions in HANA.

Of course I could run JUnit tests that connect to HANA, or write some XS code that checks HANA, but I wanted to test my code within the language itself, without other layers.


Inno Jam


At SAP Labs Israel we have a contest called "InnoJam": each employee in the lab can work for 3 days on an idea he or his friends think of.
After the 3 days, every team presents a demo (it must be a demo) in only 6 minutes that shows their idea working.

It was my first InnoJam; the atmosphere was great, and pizza and beer were served until night.

I chose the name Alpaca for my idea, and 3 other employees from my team joined too.



alpacateam.jpg

(from left to right: Sapir Golan, Ido Mosseri (me), Rony Lahav, Avi Klein)



Alpaca Target

 

We had 3 missions for Alpaca:

  1. Testing and working TDD within HANA
  2. Inserting the testing into the quality process within Jenkins
  3. Having a nice Alpaca report via mail

 

Alpaca Idea

 

The main idea was to create a framework that provides a list of assert DB functions to help us test our HANA code.

By running the assert functions we get a report record that contains the fields: tested object (name of the stored procedure/function), test name (for example: "1+1=4"), test status (pass or fail) and message.

So, a user who wants to work TDD can start by creating the tests inside a dedicated test stored procedure, executing the tests, failing, and then fixing the tested SP.

It will be clearer after looking at the example.


Example

 

We want to add 'multi' functionality to the SP CALC (the SP CALC takes 2 items and an action, and returns the result of the action on the items).

The current situation is that we have the actions plus and minus:

 

create procedure "CALC" (in item1 INTEGER, in item2 INTEGER,
                         in action varchar(256), out result INTEGER)
AS
BEGIN
  if action = 'plus' then
    result := item1 + item2;
  end if;
  if action = 'minus' then
    result := item1 - item2;
  end if;
END;


Now, let's have a look at the developer's test SP. We have an Alpaca test that checks the result of running the CALC SP with 1, 3 and 'plus'.


create procedure "TEST_CALC" (out result TEST_RESULT)
AS
  sp_result INTEGER;
BEGIN
  call "CALC"(1, 3, 'plus', sp_result);
  call "ASSERT_TRUE" ('CALC', 'test 1+3=4', sp_result, 4, ?);
  result = select * from "TEST_RESULTS" where tested_object = 'CALC';
END;

now, by calling :

 

 

CALL "TEST_CALC"(?);

 

 

we will get the output :


image1.JPG




Now, let's start the TDD...

First we write the test, so we add to TEST_CALC the lines:

 

call "CALC"(2,3,'multi',sp_result)   ;
call "ASSERT_TRUE" ('CALC','test 2*3=6',sp_result,6,?);

Lets run the test :

 

CALL "TEST_CALC"(?);

 

Ohhhh, we get an exception. But hey, this is what was expected, since the multi action is not implemented yet...

image2.JPG

 

 

OK, let's develop the multi action; we will add to the CALC SP the lines:

 

if action = 'multi' then
 result := item1 * item2;
end if;

now, lets run it again


 

CALL "TEST_CALC"(?);

 

 

image3.JPG

Hurray!!

 

Let's add a test that should fail by adding these lines to TEST_CALC:

 

call "CALC"(2,5,'multi',sp_result)   ;
call "ASSERT_TRUE" ('CALC','test 2*5=9',sp_result,9,?);

image4.JPG

 


In the same way we implemented the ASSERT_TRUE function, we also implemented ASSERT_EXIST, which checks whether a record exists in a table.

Example:

 

CALL ASSERT_EXIST ('ADD_PURCHASE',                         -- SP name
  'Check new record in PURCHASES table',                   -- test description
  'PURCHASES',                                             -- table name
  ' customer_name=''IDO'' and product_name = ''BIKE'' ',   -- where clause
  ?);


Appendix


In 3 days we did all the development, including adding suites (that contain several test SPs), connecting the test suites to Jenkins, and sending a report by email.

 

image5.JPG

(example of email generated by Alpaca)
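
As a rough illustration of the Jenkins hook, a build step could execute a test SP and fail the build on any failing assert. This is only a sketch using the hdbcli Python driver, with placeholder credentials, and it assumes the TEST_RESULT table parameter comes back as a result set with the four fields described above:

# Sketch of a Jenkins build step: run the test SP and fail the build if
# any assert failed. Assumes placeholder credentials and that the rows are
# (tested_object, test_name, status, message) as described above.
import sys
from hdbcli import dbapi

conn = dbapi.connect(address="your.hana.host", port=30015,
                     user="your_user", password="your_password")
cur = conn.cursor()
cur.execute('CALL "TEST_CALC"(?)')  # the table out-parameter is returned as a result set
results = cur.fetchall()
conn.close()

for tested_object, test_name, status, message in results:
    print(tested_object, test_name, status, message)

# A non-zero exit code marks the Jenkins build as failed
if any(row[2] != 'pass' for row in results):
    sys.exit(1)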




The Alpaca project is not ready for customer use, since we need to implement more functionality in the framework.

However, it can give a huge benefit to HANA developers.


BTW


We didn't win the contest; however, we had a lot of fun implementing this idea.







SAP HANA Idea Incubator - SCN Search categorization


Hi Friends,

I would first like to thank SCN for providing me this opportunity to express my ideas.

I have an idea on how to make search in SCN more user-friendly and quicker.

When a user searches for a question within the SCN network, the search results could be categorized as:

  • A - Marked Answered (Green)

  • H - Marked Helpful (Yellow)

  • N - Not Answered (Red)

  • P - Partially Answered (Blue)


Image is uploaded as below to make the idea more understandable.

Capture.JPG

SAP has data for all the queries posted on its network, and also has data on whether a query is marked as answered or not. If both sets of data can be combined and shown when a user searches for a query, the user will be able to save time by looking at the Answered category first and then moving on to the next likely helpful category.

My Idea link : SCN Search categorization : View Idea

 

Feedback is appreciated.

 

Thanks and Regards,

Anil Supraj

SAP HANA Trial Account Trouble


Hi,

 

I am trying to access the SAP HANA instance I have previously configured in "SAP HANA Cloud Platform Cockpit".

SAP HANA Studio's Administration Console is giving me the following error message:

 

eclipse.buildId=4.3.2.M20140221-1700
java.version=1.7.0_40
java.vendor=Oracle Corporation
BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=en_US
Framework arguments:  -product org.eclipse.epp.package.standard.product
Command-line arguments:  -os win32 -ws win32 -arch x86_64 -product org.eclipse.epp.package.standard.product

Error
Tue Sep 23 16:11:32 CEST 2014
com.sap.core.tunnelcommands.framework.executor.DefaultResponseConsumingStrategy  - content: User 'Pxxxxxxxxx' does not have permission 'readDbInformation' in account pxxxxxxxxtrial'

 

Does anyone know what this readDbInformation permission is all about? Where would I change it?

 

Best regards,
Chris


Calling all Developers Interested in Software Development on SAP HANA!


It’s less than two weeks until the follow-up to openSAP’s first course, Introduction to Software Development on SAP HANA, begins.

 

The long awaited follow-up course will focus on major topics around SAP HANA native development.

If you’re a developer, then you’ll be interested in the many advances in the programming model and tooling that came with SAP HANA SPS6, SPS7, and SPS8. The course will go beyond the basics of the programming model, discussing deeper, real-world patterns and anti-patterns, as well as looking at the architecture of read-only, analytic, and transactional applications, and at providing an interface with transactional systems.

 

The course will also feature SAP HANA studio as well as several new Web-based tools (for example, the Web-based Development Workbench, SAP HANA Lifecycle Management, and the SAP HANA XS Administration Tool). It will also look at the extended capabilities of SAP HANA, which take it beyond a database to a complete application platform.

 

If you enjoyed and benefited from the first course, then Next Steps in Software Development on SAP HANA is for you! Enroll today for free and advance your SAP HANA development skills!

 

 

openSAP is SAP’s Open Online Course provider. You don’t need to travel or take weeks out of the office to attend training. With openSAP, you can complete the course content at your convenience – anywhere, any time and on any device. openSAP is open to anyone interested in learning about SAP’s innovative products and solutions. Registration, learning content, final exam and Record of Achievement are provided free of charge. Find out more at openSAP.

My Experience - SAP HANA Certification ( C_HANAIMP141 Application Associate (Edition 2014) - SAP HANA)


I would like to share my experience of preparing for and clearing the HANA certification

C_HANAIMP141 Application Associate (Edition 2014) - SAP HANA.


A brief background about me: I have been an SAP ABAP consultant for more than 10 years, recently involved in an SAP HANA implementation on CRM for a media company, which moved my interest towards in-memory database technology, and I became much more interested in SAP HANA as a platform. I had a thorough preparation of about 2-3 months and finally cleared the exam, though I was initially quite skeptical about finalizing the exam date. Thorough preparation is needed to be confident of clearing the certification. (Please be sure of your exam date, because of the cost of the exam if you are paying for it yourself.)



Link for certification topics  : https://training.sap.com/shop/certification/c_hanaimp141-sap-certified-application-associate-edition-2014---sap-hana-g/

 

Examination sections (the number of questions is approximately what I received during my exam; it might differ):

 

1. Administration of data models: Administer data models in SAP HANA, including setting information validation rules, managing schemas, and the importing/exporting and transporting of data models. (approx. 6 questions)

2. Advanced data modeling: Apply advanced data modeling techniques, including currency conversion, variables and input parameters, constraint filters, decision tables. (approx. 8 questions)

3. Data modeling - analytic views: Implement data models with SAP HANA using analytic views, and best-practice optimization scenarios. (approx. 6 questions)

4. Data modeling - attribute views: Implement data models with SAP HANA using attribute views, and best-practice optimization scenarios. (approx. 4 questions)

5. Data modeling - calculation views: Implement data models with SAP HANA using calculation views, and best-practice optimization scenarios (both graphical and scripted calculation views, CE functions that replicate the graphical calculation nodes, calculation view with union vs. calculation view with union as constant). (approx. 8 questions)

6. Data modeling - SQL Script: Apply SQL Script to enhance the data models in SAP HANA using AFL, CE functions, and ANSI SQL. (approx. 6 questions)

7. Data provisioning: Describe possible scenarios and tools for replicating and loading data into SAP HANA from different data sources (e.g. SAP Landscape Transformation (SLT), SAP Data Services, or Direct Extractor Connection (DXC)). (approx. 16 questions)

8. Deployment scenarios of SAP HANA: Describe the deployment scenarios for SAP HANA and evaluate appropriate system configurations. (approx. 4 questions)

9. Optimization of data models and reporting: Best practices and optimization techniques, general principles on performance. (approx. 8 questions)

10. SAP HANA Live & Rapid Deployment Solutions for SAP HANA: Describe the value of HANA and identify scenarios for SAP-delivered content for SAP HANA, such as SAP HANA Live and Rapid Deployment Solutions. (approx. 4 questions)

11. Reporting: Provide advice on reporting strategies, and implement appropriate reporting solutions with SAP HANA. Build reports using various tools, for example Microsoft Excel or SAP BusinessObjects BI tools. (approx. 6 questions)

12. Security and authorization: Describe the authorization concept of SAP HANA and implement a security model using analytic privileges, SQL privileges, pre-defined roles and schemas. Perform basic security and authorization troubleshooting. (approx. 4 questions)

 

Study resources: HA100 (SP07), HA300 (SP07), HA350 (SP07), and the developer guides at SAP HANA Platform – SAP Help Portal Page

 

Other Study Resources

 

1. SAP HANA Academy - YouTube video tutorials, which give you a better understanding of the topics and step-by-step guides to practice.

2. Websites for SAP HANA: SAP HANA and In-Memory Computing, SAP HANA Developer Center and saphana.com.

3. Implementing SAP HANA by Jonathan Haun, Chris Hickman, Don Loden and Roy Wells – SAP PRESS.

4. SAP HANA: An Introduction by Bjarne Berg and Penny Silvia – SAP PRESS (this contains SP08 topics as well).


Areas of concentration: Data provisioning, data modeling and reporting.


Practical knowledge: If you don't have any implementation experience, it is advisable to practice the exercises and follow the YouTube videos for a basic step-by-step procedure. The YouTube videos are based on SP05, so there might be slight differences in the screens and options.

Data modeling concepts: Create the various types of views (attribute, analytic, calculation), input parameters, variables, restrictions using filters, constraint filters, decision tables, currency conversion, hierarchies, and expose views to Excel.


Data provisioning concepts: Please practice the various methodologies of data provisioning (SLT, Data Services, DXC, uploading through flat files, etc.).

SLT -> configurations, simple transformations, adding lines of code and ABAP includes for transformations, changing and adding columns during replication scenarios.

Data Services -> configurations, creating a data store, data flows, the Query transform, template tables, complex transforms, replication scenarios. Try to replicate data into HANA from an extractor as well (just to be confident about the process).


Reporting: There are various reporting tools available for various platforms and scenarios: Crystal Reports, Web Intelligence, Dashboards, Explorer, Analysis edition for OLAP, Analysis for MS Office, and SAP Lumira for visualization. Do take a look at how each scenario can be created (HA100).


Final words: A lot of practice and studying is required if you don't have any implementation experience. In the study materials, each and every slide can be important; you never know how the questions might come, so do give them a second reading if time permits. About 60% of the questions come from the study materials and developer guides, and 40% come from your practical and implementation experience. Please do follow the SCN HANA Developer Center for a more detailed understanding of any issues you might encounter. SAP certification questions are mostly tricky, so please answer carefully and read the questions and options twice: you can easily spot 2 of the answers for a multiple-choice question, but the third is usually tricky, so be careful in those circumstances. Other topics to consider as well are deployment solutions and security and authorizations.


Hope this blog provides you some insight into the SAP HANA certification (2014 edition).

All the best to those preparing for and appearing for the certification exam.


Developers in Houston: Join us for a hands-on Developer Day on SAP HANA (Oct. 28)


I-Developer Day-Houston-300dpi.jpg

The SAP HANA Developer Day is a great opportunity for you to learn and get your hands dirty with SAP HANA, SAP’s column-based in-memory platform for big data and real-time apps, with live support from the experts. During the event, SAP experts will show you how the platform works and how to get started. You’ll get hands-on coding experience developing on top of SAP HANA by exploring the building blocks needed to create apps, including:

 

  • Creating tables and views using Core Data Services
  • Modeling HANA views
  • Creating SQLScript stored procedures
  • Leveraging services such as OData and server-side JavaScript
  • Building UIs using SAPUI5

 

This is a bring-your-own-laptop event that will take place on Tuesday, October 28th at Schlumberger (1200 Enclave Parkway, No. 6090, 6th floor, Houston, TX 77077). Our leading expert will be developer and product manager Thomas Jung. For more details and for an overview of the agenda, click here.

 

This event is free of charge but space is limited so REGISTER NOW!

 

We look forward to seeing you there!

Real-time sentiment rating of movies on SAP HANA (part 1) - create schema and tables


Intro

Hi everyone, welcome to the series "Real-time sentiment rating of movies". In this series of blogs I want to share with you how to build an SAP HANA native application named "Real-time sentiment rating of movies", step by step and in detail. Of course, I will also share my project on GitHub when I finish it. The goal of this smart application is to analyze the sentiment rating of new release movies from social media in real time. The basic approach to building this app consists of the following three steps.

 

1. Crawl metadata of new release movies as well as social media data and insert into SAP HANA in real-time

2. Use the text analysis feature of SAP HANA to analyze the social sentiment

3. Build some fancy UIs to expose the results of the sentiment analysis

 

Motivation

Actually, what I want to share with you in this series is the second version of this smart app. If you are interested in the first version, you can have a look at Real-time sentiment rating of movies on SAP HANA One. You may ask why I want to build a second version and what the difference between the two versions is. So, I'll answer this question first. I've already explained some reasons in Use XSJS outbound connectivity to search tweets. The first version was built with SAP HANA SPS05, and at that time the XS engine did not support some advanced features such as Core Data Services (CDS, since SPS06), Outbound Connectivity (since SPS06) and Job Scheduling (since SPS07), etc. Without these awesome features I could only use some external tools/methods to finish some parts of the app; e.g., I used Twitter4J to crawl tweets, which means I used Java in the first version. With the rapid development of XS, I am now able to build the smart app with pure XS, which means I can build a pure SAP HANA native application. That's the major motivation for rebuilding the smart app and sharing it with you. The second reason is that I'm not satisfied with the UI of the first version; I'd like to build a fancier UI, and the app should be able to run on mobile devices. Besides, I also fixed some bugs from the first version, which I'll show you later in the series.

 

I can't wait to share with you. Are you ready? Now let's start!

 

Prerequisites

1. A running SAP HANA system, at least SPS07, since Job Scheduling was introduced with SPS07. I am using SAP HANA SPS08 Rev. 80.

2. A developer account for Rotten Tomatoes API - Welcome to the Rotten Tomatoes API. If you don't have one, register here

3. A developer account for Twitter Developers. If you have a Twitter account, just log in. If you don't have one, sign up here

 

Basics about XS

In the series "Real-time sentiment rating of movies", I won't explain the basics of XS, for example how to create an XS project, how to commit/activate your design-time objects, and some basic concepts. If you are new to SAP HANA XS, you can first have a look at the following blogs/materials/references.

 

SAP HANA Extended Application Services

SAP HANA SPS6 - Various New Developer Features

SAP HANA SPS07 - Various New Developer Features

http://help.sap.com/hana/SAP_HANA_Developer_Guide_en.pdf

JSDoc: Index

 

Find APIs

Before we start to develop the smart app, we need to do some research and some tests first. The most important thing is to figure out which APIs we can call to get the metadata of new release movies and the social media data, e.g. tweets. I'll still use Rotten Tomatoes API - Welcome to the Rotten Tomatoes API and Twitter Developers as the data sources, which I used in the first version of this app. Now let's have a look at which APIs we need in our app.

 

Metadata of new release movies

Among Rotten Tomatoes API - API Overview, we can find that we can get the new release movies of the current week via Rotten Tomatoes API - Opening Movies. There are some parameters you can configure, e.g., limiting the number of movies to return. Let's give it a shot. The URL is http://api.rottentomatoes.com/api/public/v1.0/lists/movies/opening.json?apikey=[your_api_key] You can find your API key at http://developer.rottentomatoes.com/apps/mykeys if you've already signed in.

 

1.PNG

 

It works! However, we need some details of the movie metadata which are not included in this API call, e.g., the director, studio and genres of the movie. Don't worry! There is another API we can use, Rotten Tomatoes API - Movie Info. With this API, we can get all information about a certain movie as follows.

 

2.PNG
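
To try the two endpoints outside the browser, a small Python sketch with the requests library might look like this (the API key is a placeholder):

# A sketch of the two Rotten Tomatoes calls with the requests library
# (the API key is a placeholder).
import requests

APIKEY = "your_api_key"
BASE = "http://api.rottentomatoes.com/api/public/v1.0"

# Opening movies of the current week
opening = requests.get(BASE + "/lists/movies/opening.json",
                       params={"apikey": APIKEY, "limit": 50, "country": "us"}).json()
for m in opening["movies"]:
    # Movie Info: the detail record with director, studio, genres, ...
    info = requests.get(BASE + "/movies/%s.json" % m["id"],
                        params={"apikey": APIKEY}).json()
    print(info["title"], info.get("studio"), info.get("genres"))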

 

Tweets

Now let's take a look at REST APIs | Twitter Developers. Since we will search tweets about new release movies and do the sentiment analysis in SAP HANA, we need to use GET search / tweets | Twitter Developers. I've already written a blog about this part, so please refer to steps 1 and 2 in Use XSJS outbound connectivity to search tweets. Make sure you can successfully get the result with Postman - REST Client as follows. If you want to know more about the search API, you can also have a look at The Search API | Twitter Developers

 

3.PNG

 

OK. In short, we need the following three APIs in our app: Rotten Tomatoes Opening Movies, Rotten Tomatoes Movie Info, and Twitter GET search/tweets.

 

Create schema and tables

Now we can start to develop our smart app with SAP HANA XS. First of all, we need to create an XS project, then add .xsapp and .xsaccess. Since we want to build a pure SAP HANA native application, we can use the .hdbschema and .hdbdd artifacts to create our schema and tables.

 

MOVIE_RATING.hdbschema

schema_name = "MOVIE_RATING";

 

movieRating.hdbdd

namespace movieRating.data;

@Schema: 'MOVIE_RATING'
context movieRating {
  type SString : String(20);
  type MString : String(200);
  type LString : String(2000);

  @Catalog.tableType : #COLUMN
  Entity Movies {
    key id : Integer;
    title : MString;
    year : Integer;
    mpaa_rating : SString;
    runtime : SString;
    release_date : LocalDate;
    synopsis : LString;
    poster : MString;
    studio : MString;
    hashtag : MString;
    timestamp : UTCTimestamp;
    since_id : SString;
  };

  @Catalog.tableType : #COLUMN
  @nokey
  Entity Genres {
    movie_id : Integer not null;
    genre : MString not null;
  };

  @Catalog.tableType : #COLUMN
  @nokey
  Entity AbridgedCast {
    movie_id : Integer not null;
    cast : MString not null;
  };

  @Catalog.tableType : #COLUMN
  @nokey
  Entity AbridgedDirectors {
    movie_id : Integer not null;
    director : MString not null;
  };

  @Catalog.tableType : #COLUMN
  Entity Tweets {
    key id : SString;
    created_at : UTCDateTime;
    text : MString;
    source : MString;
    user_screen_name : SString;
    user_profile_image_url : MString;
    longitude : Decimal(20, 17);
    latitude : Decimal(20, 17);
    movie_id : Integer;
    timestamp : UTCTimestamp;
  };
};

 

You can find five entities as follows, which means that after activation there will be five corresponding runtime tables under the schema "MOVIE_RATING".

  • Movies
  • Genres
  • AbridgedCast
  • AbridgedDirectors
  • Tweets

 

I'll explain the five entities one by one.

 

Movies

- We can get "id", "title", "year", "mpaa_rating", "runtime", "release_date", "synopsis", "poster", "studio" from Rotten Tomatoes API - Movie Info directly.

- For "hashtag", it's an improvement in the second version. We can generate a hashtag for each movie with the following three advantages.

 

1. Hashtag means a topic/keyword in Twitter. See Twitter Help Center | Using hashtags on Twitter It's common to "hashtag" movie titles in tweets. So, we can use the hashtags of movies to search tweets instead of movie titles in plain text.

 

2. Since we will use SAP HANA to do the sentiment analysis of tweets, we need to insert tweets into SAP HANA. However, sometimes the movie title itself contains sentiment, which we need to avoid; e.g., if the movie title is "I love you", SAP HANA will detect a strong positive sentiment in every tweet about this movie. But if we use the hashtag, SAP HANA will consider it a topic instead of a potential sentiment.

 

3. Another case is that sometimes the movie title is a common word/phrase which is widely used. A tweet may contain this word/phrase even though the user is not talking about movies. For instance, if the movie title is "go to work", it is obviously a common phrase; usually people will tweet "go to work" directly instead of #gotowork.

 

- For "timestamp", we use it to record when we crawl this movie.

- For "since_id", in order to only search new tweets of each movie, we will use it as a parameter to search tweets. See GET search / tweets | Twitter Developers

 

Genres, AbridgedCast, AbridgedDirectors

Since movies and genres/cast/directors have an n:m relationship, we can store this info in additional mapping tables. For simplicity, we do not store details about each actor/actress/director, so we do not create tables for these entities. If you do create tables for them, you can use associations in CDS. See Create an Association in CDS - SAP HANA Developer Guide - SAP Library

 

Tweets

First of all, you can find the JSON format of the tweet object returned from Tweets | Twitter Developers. As you can see, we just store the info we need in our smart app instead of all the information of a tweet object.

 

- For "id", we use "id_str" instead of "id". You can find the reason from this thread Google Groups

- For "created_at", "text", "source", "user_screen_name", "user_profile_image_url", we can get them directly from GET search / tweets | Twitter Developers

- For "longitude" and "latitude", we can get the geo info from the "coordinates" field instead of the "geo" field. See Google Groups. Since currently CDS does not support geospatial data types officially, we just use "longitude" and "latitude" instead.

- For "movie_id", we need to record which movie this tweet mentioned.

- For "timestamp", we use it to record when we crawl this tweet.

 

After the activation, you will find five tables created under the schema "MOVIE_RATING" at run-time. However, you cannot yet do select/insert/update/delete operations on these five tables, since both the design-time and the run-time objects are owned by the technical user _SYS_REPO.

 

4.PNG

 

Next steps

Till now, we've created several tables for our smart app. In the next blog, we will first create a few roles and grant them to some users, so that they can do select/insert/update/delete operations on these tables. Then we will use the Outbound Connectivity and Job Scheduling features to crawl the metadata of new release movies from the Rotten Tomatoes API and tweets from the Twitter API in real time!

 

Hope you enjoyed reading my blog.

SAP HANA SQL Options


SQL Script in SAP HANA SQL Console.

 

Like in SQL Server, we have many such features in SAP HANA as well. In the SAP HANA SQL console, write a SQL statement and right-click; you will see many options like those below.

HANA1.png

 

The menu is segregated into sections. Common features are Save, Open, Clear, etc.

HANA6.png

 

Important and useful features are Execute, Explain Plan, Visualize Plan & Format.

 

I am explaining a few of the features here.

 

Format:

 

Write a SQL script however you wish and click the Format option; it will be aligned to the standard layout.

hana3.png

 

Explain Plan

 

Without executing the query, we can see the query plan. It displays a lot of information, including the row count of the output, etc.

HANA5.png

 

hana4.png
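
The same plan can also be produced with the EXPLAIN PLAN SQL statement; below is a minimal sketch run from Python via the hdbcli driver (host and credentials are placeholders), assuming the plan rows land in the EXPLAIN_PLAN_TABLE view as documented for HANA:

# A sketch of EXPLAIN PLAN via the hdbcli driver (placeholder credentials).
from hdbcli import dbapi

conn = dbapi.connect(address="your.hana.host", port=30015,
                     user="your_user", password="your_password")
cur = conn.cursor()
# Generate the plan without executing the query itself
cur.execute("EXPLAIN PLAN SET STATEMENT_NAME = 'demo' FOR SELECT * FROM DUMMY")
# Read the generated plan rows back
cur.execute("SELECT OPERATOR_NAME, OPERATOR_DETAILS FROM EXPLAIN_PLAN_TABLE"
            " WHERE STATEMENT_NAME = 'demo'")
for row in cur.fetchall():
    print(row)
conn.close()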

 

Preferences


hana2.png

Real-time sentiment rating of movies on SAP HANA (part 2) - data preparation


Intro

In the previous blog Real-time sentiment rating of movies on SAP HANA (part 1) - create schema and tables, we determined the APIs which we'll use in our app and created some tables like movies and tweets with CDS. In this blog, we will prepare the data. In the first version of the smart app, I crawled the metadata of new release movies only once, which means there were only about 20 movies in total. Lacking the Outbound Connectivity and Job Scheduling features, I could only use Java multithreading to crawl tweets continuously. With the introduction of Outbound Connectivity in SPS06 and Job Scheduling in SPS07, I can now use SAP HANA XS to prepare the data. We will discuss this in this blog.

 

Create roles

At the end of Real-time sentiment rating of movies on SAP HANA (part 1) - create schema and tables, we saw that the tables were created successfully. However, they are owned by _SYS_REPO instead of the user you use to add the SAP HANA system and log in. So, first of all, we need to create a few roles and grant them to some users. We can create two roles as follows. The user role has the SELECT privilege on our design-time schema, which means this role can only select data from the schema, while the admin role extends the user role and has additional INSERT/DELETE/UPDATE privileges on the schema, which means the admin role can do select/insert/update/delete on it.

 

Notice: we use a design-time schema reference instead of the catalog schema in our role definitions. You're recommended to do so, since we are building a pure SAP HANA native app.

 

User.hdbrole

role movieRating.roles::User {
  schema movieRating.data:MOVIE_RATING.hdbschema: SELECT;
}

 

Admin.hdbrole

role movieRating.roles::Admin
  extends role movieRating.roles::User {
  schema movieRating.data:MOVIE_RATING.hdbschema: INSERT, DELETE, UPDATE;
}

 

After the activation, we can use the following SQL to grant the activated roles. In this example, we grant the admin role to a user who will crawl data from the APIs and insert it into SAP HANA.

CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('movieRating.roles::Admin', '<USERNAME>');

 

Create HTTP/HTTPS destinations

Since we want to use Outbound Connectivity in SAP HANA XS, the first thing is to create HTTP/HTTPS destinations. We will call APIs from Rotten Tomatoes API and Twitter API, so we need to create two destinations, one for Rotten Tomatoes API, the other for Twitter API.

 

rottenTomatoesApi.xshttpdest

description = "rotten tomatoes api";
host = "api.rottentomatoes.com";
port = 80;
pathPrefix = "/api/public/v1.0";
useProxy = true;
proxyHost = "proxy.pal.sap.corp";
proxyPort = 8080;
authType = none;
useSSL = false;
timeout = 0;

 

twitterApi.xshttpdest

description = "twitter api";
host = "api.twitter.com";
port = 443;
pathPrefix = "/1.1";
useProxy = true;
proxyHost = "proxy.pal.sap.corp";
proxyPort = 8080;
authType = none;
useSSL = true;
timeout = 0;

 

As we can call the Rotten Tomatoes API via HTTP, you don't need to configure anything for it. However, the Twitter API can only be called via HTTPS, which is somewhat complex to configure and will take some time. Please make sure you've finished Use XSJS outbound connectivity to search tweets before you go forward. For twitterApi.xshttpdest, we then need to configure the trust store as shown in the red box below.

 

5.PNG

 

Create XSJS to crawl data

Now we can code XSJS to crawl data. We can create two XSJS files. One is for getting metadata of new release movies, the other is for searching tweets.

 

searchMovies.xsjs

function hashtag(title) {
    return "#" + title.split(":")[0].replace(/\W/g, "");
}

function searchMovies() {
    var baseURL = "/lists/movies/opening.json?limit=50&country=us";
    var apikey = "<your_api_key>";
    var destination = $.net.http.readDestination("movieRating.services", "rottenTomatoesApi");
    var client = new $.net.http.Client();
    var request = new $.net.http.Request($.net.http.GET, baseURL + "&apikey=" + apikey);
    var response = client.request(request, destination).getResponse();
    var movies = JSON.parse(response.body.asString()).movies;
    if (movies) {
        var movie;
        var conn = $.db.getConnection();
        var pstmtMovies = conn.prepareStatement('INSERT INTO "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("id", "title", "year", "mpaa_rating", "runtime", "release_date", "synopsis", "poster", "studio", "hashtag", "timestamp", "since_id") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)');
        var pstmtGenres = conn.prepareStatement('INSERT INTO "MOVIE_RATING"."movieRating.data::movieRating.Genres" ("movie_id", "genre") VALUES (?, ?)');
        var pstmtAbridgedCast = conn.prepareStatement('INSERT INTO "MOVIE_RATING"."movieRating.data::movieRating.AbridgedCast" ("movie_id", "cast") VALUES (?, ?)');
        var pstmtAbridgedDirectors = conn.prepareStatement('INSERT INTO "MOVIE_RATING"."movieRating.data::movieRating.AbridgedDirectors" ("movie_id", "director") VALUES (?, ?)');
        for (var i in movies) {
            request = new $.net.http.Request($.net.http.GET, "/movies/" + movies[i].id + ".json?apikey=" + apikey);
            response = client.request(request, destination).getResponse();
            movie = JSON.parse(response.body.asString());
            // Movie
            pstmtMovies.setInteger(1, movie.id);
            pstmtMovies.setString(2, movie.title);
            pstmtMovies.setInteger(3, movie.year, 10);
            pstmtMovies.setString(4, movie.mpaa_rating);
            pstmtMovies.setString(5, movie.runtime + "");
            pstmtMovies.setDate(6, movie.release_dates.theater, "YYYY-MM-DD");
            pstmtMovies.setString(7, movie.synopsis);
            pstmtMovies.setString(8, movie.posters.thumbnail);
            pstmtMovies.setString(9, movie.studio === undefined ? "" : movie.studio);
            pstmtMovies.setString(10, hashtag(movie.title));
            pstmtMovies.setTimestamp(11, new Date());
            pstmtMovies.setString(12, '0');
            pstmtMovies.execute();
            // Genres
            if (movie.genres) {
                for (var j in movie.genres) {
                    pstmtGenres.setInteger(1, movie.id);
                    pstmtGenres.setString(2, movie.genres[j]);
                    pstmtGenres.execute();
                }
            }
            // AbridgedCast
            if (movie.abridged_cast) {
                for (var k in movie.abridged_cast) {
                    pstmtAbridgedCast.setInteger(1, movie.id);
                    pstmtAbridgedCast.setString(2, movie.abridged_cast[k].name);
                    pstmtAbridgedCast.execute();
                }
            }
            // AbridgedDirectors
            if (movie.abridged_directors) {
                for (var l in movie.abridged_directors) {
                    pstmtAbridgedDirectors.setInteger(1, movie.id);
                    pstmtAbridgedDirectors.setString(2, movie.abridged_directors[l].name);
                    pstmtAbridgedDirectors.execute();
                }
            }
        }
        pstmtMovies.close();
        pstmtGenres.close();
        pstmtAbridgedCast.close();
        pstmtAbridgedDirectors.close();
        conn.commit();
        conn.close();
    }
}

 

I don't plan to explain the whole code, since you can find the XSJS reference from JSDoc: Index. I just want to mention several key points.

 

1. We first call Rotten Tomatoes API - Opening Movies to get the IDs of all new release movies of the current week, then we use Rotten Tomatoes API - Movie Info to get the detailed info of each movie and insert it into SAP HANA.

 

2. The hashtag() function: there are two steps. First, we remove the subtitle; for example, for the new release movie named "My Little Pony: Equestria Girls - Rainbow Rocks", we just use "My Little Pony" as the movie title. Second, we keep only A-Za-z0-9_. Otherwise, some movie titles would be too long, or there would be marks in the movie title, both of which are bad for searching tweets.

 

3. For Rotten Tomatoes API - Opening Movies, you cannot get more than 50 new release movies, and the default value of "limit" is 16, which is too few for us. So we set "limit" to 50.

 

4. We just use the release date in theaters and we change the data type from string to date.

 

5. since_id is 0 by default.

 

searchTweets.xsjs

function searchTweets() {
    var baseURL = "/search/tweets.json?lang=en&result_type=recent&count=100";
    var token = "<your_bearer_token>";
    var destination = $.net.http.readDestination("movieRating.services", "twitterApi");
    var client = new $.net.http.Client();
    var request, response, result, tweets, max_id_str;
    var conn = $.db.getConnection();
    var pstmtSelectMovies = conn.prepareStatement('SELECT "id", "hashtag", "since_id" FROM "MOVIE_RATING"."movieRating.data::movieRating.Movies" WHERE DAYS_BETWEEN("release_date", CURRENT_DATE) BETWEEN 0 AND 6 ORDER BY "release_date" DESC');
    var pstmtUpdateMovies = conn.prepareStatement('UPDATE "MOVIE_RATING"."movieRating.data::movieRating.Movies" SET "since_id" = ? WHERE "id" = ?');
    var pstmtTweets = conn.prepareStatement('INSERT INTO "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("id", "created_at", "text", "source", "user_screen_name", "user_profile_image_url", "longitude", "latitude", "movie_id", "timestamp") VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)');
    var rs = pstmtSelectMovies.executeQuery();
    while (rs.next()) {
        try {
            request = new $.net.http.Request($.net.http.GET, baseURL + "&q=" + rs.getString(2) + "&since_id=" + rs.getString(3));
            request.headers.set('Authorization', token);
            response = client.request(request, destination).getResponse();
            result = JSON.parse(response.body.asString());
            tweets = result.statuses;
            max_id_str = result.search_metadata.max_id_str;
            if (tweets.length === 1) {
                // A single tweet cannot be inserted as a batch
                pstmtTweets.setString(1, tweets[0].id_str);
                pstmtTweets.setTimestamp(2, tweets[0].created_at.slice(4, 20) + tweets[0].created_at.slice(-4), "MON DD HH24:MI:SS YYYY");
                pstmtTweets.setString(3, tweets[0].text);
                pstmtTweets.setString(4, tweets[0].source);
                pstmtTweets.setString(5, tweets[0].user.screen_name);
                pstmtTweets.setString(6, tweets[0].user.profile_image_url);
                if (tweets[0].coordinates === null) {
                    pstmtTweets.setNull(7);
                    pstmtTweets.setNull(8);
                } else {
                    pstmtTweets.setDecimal(7, tweets[0].coordinates.coordinates[0]);
                    pstmtTweets.setDecimal(8, tweets[0].coordinates.coordinates[1]);
                }
                pstmtTweets.setInteger(9, rs.getInteger(1));
                pstmtTweets.setTimestamp(10, new Date());
                pstmtTweets.execute();
            } else if (tweets.length > 1) {
                pstmtTweets.setBatchSize(tweets.length);
                for (var i in tweets) {
                    pstmtTweets.setString(1, tweets[i].id_str);
                    pstmtTweets.setTimestamp(2, tweets[i].created_at.slice(4, 20) + tweets[i].created_at.slice(-4), "MON DD HH24:MI:SS YYYY");
                    pstmtTweets.setString(3, tweets[i].text);
                    pstmtTweets.setString(4, tweets[i].source);
                    pstmtTweets.setString(5, tweets[i].user.screen_name);
                    pstmtTweets.setString(6, tweets[i].user.profile_image_url);
                    if (tweets[i].coordinates === null) {
                        pstmtTweets.setNull(7);
                        pstmtTweets.setNull(8);
                    } else {
                        pstmtTweets.setDecimal(7, tweets[i].coordinates.coordinates[0]);
                        pstmtTweets.setDecimal(8, tweets[i].coordinates.coordinates[1]);
                    }
                    pstmtTweets.setInteger(9, rs.getInteger(1));
                    pstmtTweets.setTimestamp(10, new Date());
                    pstmtTweets.addBatch();
                }
                pstmtTweets.executeBatch();
            }
            // Update "since_id" of the movie
            pstmtUpdateMovies.setString(1, max_id_str);
            pstmtUpdateMovies.setInteger(2, rs.getInteger(1));
            pstmtUpdateMovies.executeUpdate();
        } catch (e) {
        }
    }
    conn.commit();
    rs.close();
    pstmtSelectMovies.close();
    pstmtUpdateMovies.close();
    pstmtTweets.close();
    conn.close();
}


As with searchMovies.xsjs, I will just explain some key points.

 

1. Regarding the following SQL, the first thing we want to do in searchTweets.xsjs is to get the new release movies, because we will search tweets based on the movies' hashtags. Since there were only about 20 movies in the first version of this app, I could always search tweets about all movies. However, in the second version of the smart app we plan to get new release movies continuously, i.e., the new releases of each week, so we will accumulate a huge number of movies in our movie table. It is impossible to crawl tweets about all of them because of API Rate Limits | Twitter Developers. Imagine there are about 30 new release movies each week in the US; after one year there will be 1500+ movies in the movie table, and we would no longer be able to crawl tweets in real time! So my current solution is to search tweets only about movies that were released less than one week ago. For example, if a movie is released on 2014-09-30, the app will crawl tweets about this movie from 2014-09-30 to 2014-10-06.

 

SELECT "id", "hashtag", "since_id" FROM "MOVIE_RATING"."movieRating.data::movieRating.Movies" WHERE DAYS_BETWEEN("release_date", CURRENT_DATE) BETWEEN 0 AND 6 ORDER BY "release_date" DESC

 

2. We use since_id as one of the parameters, because we do not want old results which we have already crawled. We keep a since_id for each movie, so after we insert tweets into SAP HANA, we update the since_id of the movie. The next time we want to crawl tweets about this movie, we can use the updated since_id directly.
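For illustration, the resulting request for a single movie looks roughly like this (the hashtag and since_id values are invented):

GET /search/tweets.json?lang=en&result_type=recent&count=100&q=%23Annabelle&since_id=519823843954692096
Authorization: Bearer <your_bearer_token>

The host part comes from the twitterApi HTTP destination referenced in the code.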

 

3. Parameters "lang", "result_type", "count"

- For "lang=en", we just search tweets in English for simplicity. Currently SAP HANA supports sentiment analysis the following five languages. See Voice of the Customer Content - SAP HANA Text Analysis Language Reference Guide - SAP Library

  • English
  • German
  • French
  • Spanish
  • Simplified Chinese

 

- For "result_type=recent", since we want to search tweets in real-time, we just need recent tweets. You can find other options from GET search / tweets | Twitter Developers

 

- For "count=100", the maximum count of each return is 100, so we set to maximum.

 

4. We use a batch insert for the tweets whenever possible; if we get only one tweet, we cannot use a batch. You can find the logic in lines 27 - 67.

 

5. We change the data type of "created_at" from string to timestamp.

 

Notice: Before going to the job scheduling step, it is highly recommended that you activate all objects now and call searchMovies.xsjs and searchTweets.xsjs manually. If everything is OK, please jump to the job scheduling part. Since I will use searchMovies.xsjs and searchTweets.xsjs in the job scheduling, I wrapped them in two functions. If you call them manually, please remove the function() {} wrapper, that is, the first and the last line, as sketched below.
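To make that concrete, here is a minimal sketch of the two variants, using searchTweets.xsjs as the example (the body is abbreviated):

// variant for job scheduling: the xsjob "action" calls the function by name
function searchTweets() {
    // ... the logic shown above ...
}

// variant for a manual call in the browser: no wrapper, the body runs directly
// ... the logic shown above ...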

 

Create job scheduling

Now you should be able to search new release movies and tweets manually. Since SAP HANA XS now supports the job scheduling feature, why not use this awesome feature?

 

searchMovies.xsjob

{    "description": "Search movies",    "action": "movieRating.services:searchMovies.xsjs::searchMovies",    "schedules": [       {          "description": "00:00 every Tuesday (UTC)",          "xscron": "* * * tue 0 0 0"       }    ]
}

 

Since the new release movies of the current week do not usually change, we can just search for new release movies once per week. Here we call the "searchMovies" function at 00:00 every Tuesday. You may ask: why this specific time? I noticed that most movies are released on Friday. The reason is obvious: we can go to the theaters at the weekend. See When Is Opening Your Film on Wednesday a Good Idea? - Film.com. So, Tuesday is early enough.

 

Notice: The time in an .xsjob is UTC and you cannot change it to another timezone.

 

searchTweets.xsjob

{    "description": "Search tweets",    "action": "movieRating.services:searchTweets.xsjs::searchTweets",    "schedules": [       {          "description": "every 2 minutes",          "xscron": "* * * * * */2 0"       }    ]
}

 

The above is the job schedule we will use to search tweets. Due to the API Rate Limits | Twitter Developers, a two-minute interval is currently a good choice, which means the app will crawl tweets about the current range of new release movies every two minutes.
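For reference, an xscron expression has seven fields: year, month, day, day of the week, hour, minute, second. The two schedules used in this blog therefore read:

* * * tue 0 0 0     -> any year, month and day, on Tuesdays, at 00:00:00 (UTC)
* * * * * */2 0     -> every minute divisible by 2, at second 0, i.e. every two minutes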

 

After the activation, do not forget to activate both job schedules in the SAP HANA XS Administration Tool as follows. If you are not familiar with that, please have a look at Scheduling XS Jobs - SAP HANA Developer Guide - SAP Library.

 

6.PNG

 

7.PNG

 

Next steps

So far we've finished the data preparation, which means we are now searching new release movies and related tweets and inserting them into SAP HANA in real time! You may have crawled results similar to the following. With the metadata of new release movies and huge amounts of tweets, we can do the sentiment analysis in the next blog.

 

Movies

8.PNG

 

Tweets

9.PNG

 

Hope you enjoyed reading my blog.

openSAP Update: SAP HANA Developer Edition v1.8.80


With the upcoming new openSAP courses we've updated our latest HANA Developer Edition to include content specific to those courses, so if you get your own edition now, you'll be ready for them!

 

You'll be able to find this new version in the SAP Cloud Appliance Library for both the Amazon EC2 landscape and the Microsoft Azure platform.

 

To get your own system please follow this link.


How 'count' aggregation sometimes behaves differently in SAP HANA graphical views vs traditional SQL


This blog is to explain a common problem which occurs when we try to add an alphanumeric column as a measure (with 'COUNT' aggregation) in graphical views.

 

Problem Description


For example, if we have two views - VIEW_1 and VIEW_2, each having 2 alphanumeric columns "A" & "B".

And if we are trying to get a count of the distinct values of column "B", grouped by the unique values of column "A", we would normally build a graphical view similar to the one shown below.

intiial_flow.PNG

 

View_1.PNG
view_2.PNG

In this case let's assume that we have only two unique values ('X' & 'Y') for column "A", across VIEW_1 & VIEW_2.


From an SQL perspective, we would expect HANA to perform the following query


SELECT A, COUNT(B) FROM

(

     SELECT A, B FROM VIEW_1

     UNION

     SELECT A, B FROM VIEW_2

)

GROUP by A;

 

But the graphical view data preview will always give the following result, whatever values are present in the views.

 

DATAPREVIEW.PNG

 

This result is definitely wrong.

 

Reason

 

The HANA optimizer, whenever possible, tries to push down the aggregation/filters to a lower-level node to reduce the number of rows transferred across levels/nodes.

 

So, in our case, HANA optimizer tries to push the 'COUNT' aggregation down.

 

flow.png

And the property setting "ALWAYS AGGREGATE RESULT", if set to 'TRUE', always enforces a final aggregation in the semantics node.

This wouldn't be a problem for aggregation types like SUM, MAX and MIN.

 

Ex:  SUM( SUM(A,B,F), SUM(C,D) ) is the same as SUM(A,B,C,D,F)  or

        MAX( MAX(A,B,F), MAX(C,D) ) is the same as MAX(A,B,C,D,F)  or

        MIN( MIN(A,B,F), MIN(C,D) ) is the same as MIN(A,B,C,D,F)

 

BUT COUNT( COUNT(A,B,F), COUNT(C,D) ) => COUNT(3, 2) => 2, which is definitely wrong vs. COUNT(A,B,C,D,F) => 5.
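The effect is easy to reproduce in plain SQL; here is a minimal sketch (table name and values are invented for illustration):

CREATE COLUMN TABLE "TEST"."LETTERS" ("A" VARCHAR(1), "B" VARCHAR(1));
INSERT INTO "TEST"."LETTERS" VALUES ('X', 'A');
INSERT INTO "TEST"."LETTERS" VALUES ('X', 'B');
INSERT INTO "TEST"."LETTERS" VALUES ('X', 'F');
INSERT INTO "TEST"."LETTERS" VALUES ('Y', 'C');
INSERT INTO "TEST"."LETTERS" VALUES ('Y', 'D');

-- one aggregation level: X -> 3, Y -> 2 (correct)
SELECT "A", COUNT("B") FROM "TEST"."LETTERS" GROUP BY "A";

-- two aggregation levels, as in the generated query below: X -> 1, Y -> 1 (wrong)
SELECT "A", COUNT("B") FROM
(
     SELECT "A", COUNT("B") AS "B" FROM "TEST"."LETTERS" GROUP BY "A"
)
GROUP BY "A";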

 

pre_aggre_true.PNG

 

So what actually gets executed in the graphical view is the following query -

 

SELECT A, COUNT(B) AS B FROM

(

     SELECT A, COUNT(B) AS B FROM

     (

          SELECT A, B FROM VIEW_1

          UNION

          SELECT A, B FROM VIEW_2

     )

     GROUP BY A

)

GROUP BY A;

 

Resolution:

 

I guess the resolution is evident by now.

 

For COUNT aggregation on alphanumeric columns, switch "ALWAYS AGGREGATE RESULT" to 'FALSE' to get accurate results,

 

always_aggre.PNG

 

or, even simpler, use the COUNTER feature provided by HANA.
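In plain SQL, the intended semantics correspond to a distinct count. A sketch against the example views:

SELECT A, COUNT(DISTINCT B) FROM
(
     SELECT A, B FROM VIEW_1
     UNION ALL
     SELECT A, B FROM VIEW_2
)
GROUP BY A;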

 

Regards

Ajay

openSAP: Next Steps in Software Development on SAP HANA - Week 1


Today is the launch day for the new openSAP course: Next Steps in Software Development on SAP HANA. In this course we will look at more advanced features in SAP HANA native development as well as new features which have been introduced in SPS07 and SPS08, with an occasional look ahead to SPS09. However, my favorite part of doing this course is that it goes well beyond the basics. As is necessary with any new platform, for the last two years or so we have been very focused on introducing the basic concepts of the development environment, which rarely left much opportunity to dig deeper into the programming model or explore real-world problems. Let's just say that I was a little tired of teaching "Hello World" exercises and was looking forward to the kinds of topics we got to cover in this new course.

 

For example, in the first course we explained XSJS and the usage of JavaScript, but didn't go into much detail about the JavaScript language itself; instead we had to point to the plenty of JavaScript resources available elsewhere. Now we revisit the topic with the benefit of being able to cover JavaScript language constructs that benefit the XSJS developer. Another example is OData services. In the first course we spent all of our time on this topic introducing the basics and simple entities with mostly read-only operations. In this course we not only get into update/create operations but also explore batch requests and association links - two frequently requested topics. We also go outside the traditional core topics of HANA native development to explore some "extended" topics like Fiori, Text Analysis, Geospatial, etc.

 

In this first week, we spend just a little time on a general recap of the overall architecture. We then move fairly quickly to the various tools which will be used in the course: the SAP Web-based Development Workbench, HANA Studio, XS Admin, and the SAP HANA Application Lifecycle Manager. Not only do we introduce all of these tools in the first week, but for those already familiar with them, we also highlight some of the recent advances in the tooling in SPS08. All of this will help prepare you for the coming weeks, when we will use these tools to create a complete application using the HANA native development model.

W3C Semantic Web Standards with HANA - RDF/SPARQL Support


Currently I'm involved in a project exploring semantic technologies for providing situation- and context-awareness services for future human-computer interfaces. We decided to use a graph-based approach for storing the user's situation and to exploit Linked Data [1] principles to collect related information from different sources to derive recommendations for user assistance. Unfortunately, SAP does not have an RDF triple store or SPARQL engine in its portfolio.


With HANA, SAP provides a fast and scalable data layer based on innovative column-based, in-memory storage and query engines. So the question was whether this new technology can be used to implement W3C Semantic Web [2] technologies, such as an RDF store and SPARQL [3], efficiently. Many vendors use relational databases for their SPARQL implementations or provide the possibility to use relational databases as a storage backend. So I spent some time on the problem of using HANA as an RDF triple store and implementing a SPARQL endpoint for querying graph data in HANA. What came out of this activity is a proof-of-concept implementation of SPARQL 1.1 in pure JavaScript (node.js [4]), which translates SPARQL statements into HANA SQL and SQLScript functions and executes these queries on a HANA instance. Although concrete performance measurements and comparisons have not yet been done, the results are very promising and we use the system for our project now.
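To give an idea of the translation, here is a simplified sketch, not the actual SQL generated by the proof of concept: assuming a plain triple table TRIPLES with columns S, P and O, each triple pattern of a basic graph pattern becomes a self-join.

-- SPARQL: SELECT ?name WHERE { ?po rdf:type po:PurchaseOrder .
--                              ?po po:partner ?partner .
--                              ?partner po:name ?name }
SELECT t3."O" AS "name"
FROM "TRIPLES" t1
JOIN "TRIPLES" t2 ON t2."S" = t1."S"
JOIN "TRIPLES" t3 ON t3."S" = t2."O"
WHERE t1."P" = 'rdf:type' AND t1."O" = 'po:PurchaseOrder'
  AND t2."P" = 'po:partner'
  AND t3."P" = 'po:name';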

 

Besides the performance aspect we discovered a few other benefits which come with HANA. For example, it is possible to define HANA SQL views to directly access the enterprise data within SPARQL queries and thus provide a graph view on enterprise data; no replication of data into a special triple store is necessary. Furthermore, together with the Virtual Data Model or CDS, these views can also be created automatically.

Screenshot from 2014-10-09 17:35:18.png

Sample query of a business object "PurchaseOrder"

 

The full-text search and text analysis features of HANA can be integrated into the SPARQL processor. For example, we implemented a built-in RDF property which uses the HANA contains [5] predicate for fuzzy search.
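As an illustration of the underlying SQL (the table and column names are invented for this example), the CONTAINS predicate supports fuzzy matching like this:

SELECT "S", "O" FROM "RDF"."LITERALS"
WHERE CONTAINS("O", 'Dinagat', FUZZY(0.8));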

 

Finally, another interesting possibility comes with HANA virtual tables, which refer to tables in different (remote) HANA databases. This feature makes it possible to implement graphs spanning multiple databases and to execute federated queries.
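For illustration, creating such a virtual table via smart data access looks roughly like this (the remote source "REMOTE_HANA" and the triple table names are assumptions):

CREATE VIRTUAL TABLE "RDF"."TRIPLES_REMOTE" AT "REMOTE_HANA"."<NULL>"."RDF"."TRIPLES";

A federated SPARQL query can then join local and remote triples as if they were in one graph.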


We are now using and improving the HANA SPARQL extension in the context of specific internal projects. The semanticweb.com website lists a number of industry verticals [6]. So I'm wondering if there is a demand for RDF/SPARQL support for HANA inside the SAP community.

 

 

[1] Linked data - Wikipedia, the free encyclopedia, Data - W3C, http://linkeddata.org/

[2] Semantic Web - Wikipedia, the free encyclopedia

[3] SPARQL - Wikipedia, the free encyclopedia and SPARQL 1.1 Overview

[4] node.js

[5] The CONTAINS() Predicate - SAP HANA Fuzzy Search Reference - SAP Library

[6] http://semanticweb.com/catagory/industry-verticals

openSAP: Next Steps in Software Development on SAP HANA – my thoughts after week 1


This week the course started and I hopped on board on Wednesday. As someone who also took the first course and didn't do too much in between, I was curious how well I would be able to pick things up and how much had changed since then.

Now I did the exam of the first week and have some initial thoughts.

One of the pleasant surprises was to see that there are a lot of things that are easier. Last time we spent a lot of time creating text files in all sorts of places. The .xsaccess, the .xsapp and the .xsodata files were all created manually (yes, you could copy them if you wanted).

 

Now in the first run there are a lot of wizards that take away much of the burden of creating those files. The risk is of course that you won't get past the 'wizard developer' stage, but creating those files manually got stale fast after the first couple, so I am happy with that. You can do a lot of stuff now using the Web-based Workbench; it seems like something that will be expanded in the future. Is it going to replace HANA Studio entirely? The example of the airport came up (making some fast changes on your tablet at the airport), but I think one airport session could set you up with weeks' worth of bugs, as you had to do it so hastily. Just wondering how you would hold off the stewardess who insists that you close the tablet immediately while you're waiting for the last object to activate ;-)

 

That particular problem seems to be addressed nicely with the change manager: comparing versions, seeing what was changed, going back to previous versions, where-used lists for items on the server - all things that will help nicely to keep the quality in your system up.

 

In the architecture overview it was clear that either there are a lot of new services, or I forgot some since the first course. It is good to see how SAPUI5 has taken a big place in the development. As someone who has already done quite a few things with SDKs, this is familiar territory.

 

I am curious to see how to work with the XSJS outbound HTTP service. I cannot remember doing anything with that in the previous course.

 

Finally, I am looking forward to the exercises in the last week. One of the best ways of learning is trying some challenging things while you're trying to create something. Last time I 'invented' some scenarios just to be able to take the HANA developer edition 'car' for a ride.

 

By the way, was anyone else impressed with the patience of both presenters, each looking silently into the camera while the other presented an entire segment?

How to create your geoJSON model of the world and get it into HANA


Intro

Did you know HANA has shipped with a geospatial engine since SPS07 (What's New? SAP HANA SPS 07 Geospatial Pr... | SAP HANA & What's new in SAP HANA SP07)? Have you thought about the power you have when master, real-time and geospatial data all come together in one engine? Scenarios like tracking the spread of infections in real time (healthcare), decision support systems for urban planning (crime, commute, recreation areas, ...), targeted customer marketing, effective sales delivery and, last but not least, connected cars are just some of those coming to my mind immediately.

So you really should start thinking about all the possibilities you have got right now!

 

When we started to implement a scenario for the Solheim Cup, the first question to be solved was: how do we model the world and get the model into HANA?

This post is going to answer this question and give you some handy code you can easily deploy into your cloud or on-premise HANA instance.

 

Prerequisites

Entering the geospatial world, you will be facing concepts which are not too familiar to most of us. Of course we all have used services like Google Maps and different navigation systems, but do you already know about all the conventions and standards, like clockwise orientation or geoJSON?

 

As we are looking at the geospatial engine in HANA, I assume you have some basic experience with HANA itself.

 

OK, so let's get started with the easy prerequisite:

 

Technology

You need access to a HANA box. This may be an on-premise instance, your HANA factory cloud account, a HANA cloud trial instance or anything else running HANA with a revision >= 70. If you do not have anything at all, I strongly recommend using the free HANA cloud trial instance.

 

Other than that, you need the usual developer rights plus rights for the schema you want to persist the geospatial data in.

 

I am using the HANA factory cloud and my account has got the following roles:

 

20141010_132954.png

 

out of which the most important ones are:

  • sap.hana.xs.ide.roles::Developer
  • sap.hana.xs.debugger::Debugger
  • sap.hana.xs.ide.roles::CatalogDeveloper
  • sap.hana.xs.ide.roles::EditorDeveloper

 

I usually tend to give my user everything containing *xs*.

If you have already installed HANA Studio or the HANA cloud platform tools, that's good and you can use them as well. But actually you won't need them here.

 

Knowledge

If you are new to the geospatial area, you should spend some time getting familiar with the topic, though it is not strictly necessary for getting the contents of this post running. Nevertheless, I recommend spending some time here, as there are some really special things about it:

 

  • Longitude & latitude versus Cartesian coordinates
  • Radians versus degrees
  • Spatial reference systems { 0 || WGS84-4326 || WGS84-1000004326 }
  • Clockwise versus counter-clockwise orientation
  • Data formats like geoJSON, KML, shape files, ...

 

As a good start I recommend reading the SAP HANA Spatial Reference guide.


geoJSON

The most important thing right now is: you have to understand what geoJSON is. If you are already familiar with JSON, that's easy. If not, you should read:

 

 

On the official page (GeoJSON) there is a nice and crisp definition:

 

"

GeoJSON is a format for encoding a variety of geographic data structures.

{"type":"Feature",

  "geometry":{

          "type":"Point",

          "coordinates":[125.6,10.1]},

          "properties":{"name":"Dinagat Islands"}

   }

}

GeoJSON supports the following geometry types: Point, LineString, Polygon, MultiPoint, MultiLineString, and MultiPolygon. Lists of geometries are represented by a GeometryCollection. Geometries with additional properties are Feature objects. And lists of features are represented by a FeatureCollection.

".. source: http://geojson.org

 

Nothing to add here. Please see the GeoJSON Specification for more details.

 

Installation

After you have deployed the code you will have a converter from geoJSON to SQL inserts, which can be copied and executed in a SQL console. The tool is capable of creating a destination table for you and allows you to specify the schema and table name. The created table has a very simple structure:

(ID type int, GEO type ST_GEOMETRY)

 

It is meant as a starting point for you and is therefore kept simple on purpose. The converter looks like this in its initial state...

20141010_140005.png

 

...and like this after pasting some geoJSON and requesting the transformation:

20141010_140556.png

 

 

 

Deployment

Assuming you have got the necessary rights, the easiest way to deploy the tool is via the Web IDE.

 

 

Get the code

You will find three of the four required files in the attachment. The .xsapp file unfortunately has to be created by yourself, and you also need to remove the .txt.zip ending from all of the other ones. Sorry, this is an SCN 'feature'.

 

After copying the three files to one folder, please create an empty file named '.xsapp' and remove the '.txt.zip' ending from all the other files. Your folder should now contain:

 

  • index.html 
    • converts the JSON input to SQL inserts
    • contains the front end logic
  • logic.xsjs
    • checks for DB consistency
    • handles create DB object request
  • .xsaccess
    • XS configuration artifact
      • how to authenticate and who is allowed to run this app
  • .xsapp
    • XS configuration artifact
      • this is an XS application

 

The coding is pretty straightforward, so I do not think it makes a lot of sense to go into details here. If you have questions or remarks regarding the coding, please do not hesitate to ask.

 

Deploy the code

We are going to use the Web IDE / development workbench to get the job done.

All you have to do is:

 

  • Open one of the following URLs in you browser:
    • http://<yourHostName>:80<yourInstanceNumber>/sap/hana/xs/ide/editor/
    • https://<yourHostName>:43<yourInstanceNumber>/sap/hana/xs/ide/editor/


  • Create a package and give it a name (here: 'converterTest')

               20141010_143713.png

  •     select the package in the tree

                    20141010_144005.png

  • select all the files using your file browser (Windows Explorer, Nautilus, Dolphin, ...)
  • drag and drop them to the 'multi-file drop zone'
    • in the console you should see some log messages:

 

14:37:25 >> Package converterTest created successfully.
14:42:52 >> File .xsapp uploaded successfully.
14:42:53 >> File .xsaccess uploaded successfully.
14:42:54 >> File logic.xsjs uploaded successfully.
14:42:55 >> File index.html uploaded successfully.

 

 

  • unfolding the created package reveals:

                    20141010_144821.png

 

And that's it. The converter has been installed and you can go ahead and use it.

 

Using the converter

Run it

To run the converter, you just go to one of the following URLs:

  • http://<yourHostName>:80<yourInstanceNumber>/<yourPackageName>/
  • https://<yourHostName>:43<yourInstanceNumber>/<yourPackageName>/

 

As you already might be in the editor, the easiest way to achieve this is:

  1. select the index.html beneath your package
  2. push F8 OR click the green run arrow in the icon menu on the left side

If you want to bookmark the URL, I would recommend removing everything after index.html.

 

Prepare data structures

You have to ensure that the sequence and the table for the data import exist. This can be done either via the tool itself or manually. Using the tool, you just provide the desired names and hit 'Create these DB objects':

20141010_151334.png

 

If you prefer to do it manually, open a SQL console and execute the following (assuming your schema name is GEO and the table name is geoTable):

 

CREATE SCHEMA GEO;
CREATE COLUMN TABLE "GEO"."geoTable" ("ID" INTEGER CS_INT, "GEO" ST_GEOMETRY(4326) CS_GEOMETRY) UNLOAD PRIORITY 5  AUTO MERGE;
CREATE SEQUENCE "GEO"."geoSequence";

Of course you need the necessary authorizations on the corresponding schema...
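If you hit authorization errors here, the required grants could look like this (a sketch; "MYUSER" is a placeholder for the user running the tool):

GRANT SELECT, INSERT ON SCHEMA "GEO" TO MYUSER;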

 

 

Use it

In order to use the tool, we have to get some geoJSON. There are plenty of possibilities out there; personally I like http://geojson.io best, but this is up to you...

Using geojson.io you need to:

  • go to http://geojson.io
  • search for your spot (e.g. 'Yosemite national park')
  • select one of the tools (point, linestring, polygon)
  • model whatever you want to use later on
    • watch out: polygons have to be closed in the end

 

  • finally you will come up with your modeled geoJSON on the right side of your window

                    20141010_145932.png

 

 

In order to convert this geoJSON to SQL inserts, you now have to copy the JSON into the XS application:

20141010_150222.png

 

Now hit 'Transform to WKT' in the converter tool (WKT stands for Well-Known Text, a standard notation for geometries).

If the table and sequence exist (see the subchapter above) and have the right format (ID type INT, GEO type ST_GEOMETRY), you will get the inserts within a new div inside the window:

20141010_150601.png
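The generated statements should look roughly like this (a sketch; the exact output of the tool may differ in detail):

INSERT INTO "GEO"."geoTable" ("ID", "GEO")
VALUES ("GEO"."geoSequence".NEXTVAL, NEW ST_POINT('POINT (125.6 10.1)', 4326));

INSERT INTO "GEO"."geoTable" ("ID", "GEO")
VALUES ("GEO"."geoSequence".NEXTVAL, ST_GeomFromText('POLYGON ((0 0, 4 0, 4 4, 0 0))', 4326));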

 

So ... that was easy, wasn't it?

 

What's next?

Now that you have some geo data available in your database, you can work with it using standard SQL statements. This way you can filter for certain records, e.g. cars within a certain distance (ST_DISTANCE function) or check whether a golf ball hit the green (ST_WITHIN). There are a lot of functions available right now, and there will be a lot of new ones with SPS09.

Please check the SAP HANA Spatial Reference guide for a list of all functions and a deep dive into the topic.
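Two quick sketches of such queries (names and values are invented; depending on the spatial reference system, some predicates may require the planar SRS 1000004326 instead of 4326):

-- all records within 1000 meters of a reference point
SELECT "ID" FROM "GEO"."geoTable"
WHERE "GEO".ST_Distance(NEW ST_POINT('POINT (8.64 49.29)', 4326), 'meter') < 1000;

-- all records lying within the geometry stored under ID 42 (e.g. the green)
SELECT t."ID" FROM "GEO"."geoTable" t, "GEO"."geoTable" g
WHERE g."ID" = 42 AND t."GEO".ST_Within(g."GEO") = 1;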

 

Conclusion

I hope you got the central idea of how to get started in the geospatial area and understand the big potential it offers.

I am pretty sure we are going to see very interesting innovations coming around that corner.

The tool itself can be enhanced in many ways (e.g. allow tables which are structured differently, execute the SQL code via the front end, handle multi-geometries [not available at geojson.io]). As the converter was supposed to be a starting point, I rather kept it straightforward and simple. I hope you like it this way!

 

If you are interested in further geospatial topics, you are more than welcome to join me in Las Vegas (on Wednesday or Thursday) or my colleague Frank Albrecht in Berlin at our SAP TechEd && d-code session 'Mapping the World with SAP HANA Geospatial Engine (DEV103)'.

 

 

 

Stay tuned for more

Kai-Christoph
