Channel: SCN : Blog List - SAP HANA Developer Center

SAP HANA Developer Center Updates: New Landing Page, New Tutorials


The SAP HANA Developer Center has a new landing page full of new content for you. Check it out at developers.sap.com/hana.


SAP HANA Landing Page2.png


The new homepage offers you a quick and easy way to access the latest developer info on SAP HANA, sign up for your free developer edition and get started building your first app.


You’ll find information about how SAP HANA works including technical aspects, core features and developer tools. 

You’ll also get an overview of the different options available for getting started: you can sign up for your free developer edition via SAP HANA Cloud Platform (you get a free instance), or via AWS or Microsoft Azure.


In addition, you’ll find step-by-step tutorials to help you build your first app. The tutorials cover everything from setting up your developer environment to building your first app, accessing data, and more.


The page also includes links to resources and tools, the community, other related documentation, education and training, certification, etc.


So, take a look and bookmark the page: developers.sap.com/hana.


$.hdb vs $.db Interface - Performance/Problems


Hi folks,

 

I want to share my experience concerning the two xsjs-engine database connection implementations:

  • $.hdb (since SPS 9)
  • $.db

 

The Story:

 

A few days ago I used the new HDB interface of the XSJS engine to process and convert a result set in an XSJS service. The problematic part of this service is the size of the result set. I am not very happy with the purpose of the service, but we need this kind of service.

 

The result set contains about 200.000 rows.

 

After setting up everything and running multiple tests with small result sets (< 10.000 rows), everything worked fine with the new $.hdb implementation. But requesting the first real-sized set caused heavy trouble on the machine (all xsjs connections) and the request never terminated.

 

As a result I found myself implementing a very basic XSJS service to get all files in the HANA repository (because by default there are more than 40.000 elements in it). I duplicated the service to get one $.db and one $.hdb implementation with almost the same logic.

 

The Test:

 

HDB - Implementation

 

// >= SPS 9 - HDB connection
var conn = $.hdb.getConnection();
// columns to select
var keys = [
    "PACKAGE_ID", "OBJECT_NAME", "OBJECT_SUFFIX", "VERSION_ID", "ACTIVATED_AT", "ACTIVATED_BY",
    "EDIT", "FORMAT_VERSION", "DELIVERY_UNIT", "DU_VERSION", "DU_VENDOR"
];
// query
var stmt = conn.executeQuery('SELECT ' + keys.join(", ") + ' FROM "_SYS_REPO"."ACTIVE_OBJECT"');
var result = stmt.getIterator();
// build the result list row by row
var aList = [];
while (result.next()) {
    var row = result.value();
    aList.push({
        "package": row.PACKAGE_ID,
        "name": row.OBJECT_NAME,
        "suffix": row.OBJECT_SUFFIX,
        "version": row.VERSION_ID,
        "activated": row.ACTIVATED_AT,
        "activatedBy": row.ACTIVATED_BY,
        "edit": row.EDIT,
        "fversion": row.FORMAT_VERSION,
        "du": row.DELIVERY_UNIT,
        "duVersion": row.DU_VERSION,
        "duVendor": row.DU_VENDOR
    });
}
conn.close();
$.response.status = $.net.http.OK;
$.response.contentType = "application/json";
$.response.headers.set("Content-Disposition", "attachment; filename=HDBbench.json");
$.response.setBody(JSON.stringify(aList));

DB - Implementation

 

// < SPS 9 - DB connection
var conn = $.db.getConnection();
// columns to select
var keys = [
    "PACKAGE_ID", "OBJECT_NAME", "OBJECT_SUFFIX", "VERSION_ID", "ACTIVATED_AT", "ACTIVATED_BY",
    "EDIT", "FORMAT_VERSION", "DELIVERY_UNIT", "DU_VERSION", "DU_VENDOR"
];
// query
var stmt = conn.prepareStatement('SELECT ' + keys.join(", ") + ' FROM "_SYS_REPO"."ACTIVE_OBJECT"');
var result = stmt.executeQuery();
// build the result list row by row, reading each column by index
var aList = [];
var i = 1;
while (result.next()) {
    i = 1;
    aList.push({
        "package": result.getNString(i++),
        "name": result.getNString(i++),
        "suffix": result.getNString(i++),
        "version": result.getInteger(i++),
        "activated": result.getSeconddate(i++),
        "activatedBy": result.getNString(i++),
        "edit": result.getInteger(i++),
        "fversion": result.getNString(i++),
        "du": result.getNString(i++),
        "duVersion": result.getNString(i++),
        "duVendor": result.getNString(i++)
    });
}
result.close();
stmt.close();
conn.close();
$.response.status = $.net.http.OK;
$.response.contentType = "application/json";
$.response.headers.set("Content-Disposition", "attachment; filename=DBbench.json");
$.response.setBody(JSON.stringify(aList));

 

The Result:

 

  1. Requesting the DB implementation: the file download for all 43.000 rows starts within 1500 ms.
  2. Requesting the HDB implementation: requesting all rows leads to an error, so I trimmed the result set by adding a TOP to the SELECT statement.
    • TOP  1.000 : done in 168ms
    • TOP  2.000 : done in 144ms
    • TOP  5.000 : done in 297ms
    • TOP 10.000 : done in 664ms
    • TOP 15.000 : done in 1350ms
    • TOP 20.000 : done in 1770ms
    • TOP 30.000 : done in 3000ms
    • TOP 40.000 : The request is pending for minutes (~5 min) and then responds with a 503. The session of the logged-in user expires.

 

In summary: the new $.hdb implementation performs worse than the old one, and there is a threshold in $.hdb beyond which it causes significant problems on the system.

 

I appreciate every comment on that topic.

 

Best,

Mathias

XS Project Not Showing After SAP HANA Tools Installation


XS Project Not appearing in Eclipse


I encountered this problem and tried a lot of things, but one basic step solved it. Before getting to that step, you need to follow the steps below properly.


To install SAP HANA Tools, proceed as follows:


  1. Get an installation of Eclipse Luna (recommended) or Eclipse Kepler.
  2. In Eclipse, choose in the menu bar Help > Install New Software...
  3. For Eclipse Luna (4.4), add the URL https://tools.hana.ondemand.com/luna.
    For Eclipse Kepler (4.3), add the URL https://tools.hana.ondemand.com/kepler.
  4. Press Enter to display the available features.
  5. Select the desired features and choose Next.
  6. On the next wizard page, you get an overview of the features to be installed. Choose Next.
  7. Confirm the license agreements and choose Finish to start the installation.


After the installation, open Eclipse and follow the steps below to check whether XS Project appears:


  1. In Eclipse, go to Window -- Open Perspective -- Other (see Fig 1).

1.jpg

 

Fig 1

  2. Select the SAP HANA Development perspective.
  3. There will now be three tabs, as shown below.

 

2.jpg

 

  4. Go to the Project Explorer view and, in the top left corner, click File -- New -- Project as shown below.

 

3.jpg

 

  5. In the New Project wizard, select SAP HANA -- Application Development -- XS Project as shown below.

 

4.jpg

 

If XS Project still does not appear, follow the steps below:


  1. Exit Eclipse.
  2. Find the path where your Eclipse is installed, for example D:\SAP HANA\eclipse.
  3. Open a command prompt.
  4. Switch to the Eclipse folder and run eclipse -clean.
  5. Eclipse will then open automatically, and you should see XS Project under Application Development.

"No buffer space available" when running ping


Hi there,

 

I've recently installed the latest SLES version provided by SAP for Business One (at the moment SLES 11 PL 3), which works just fine. After some weeks of continuous operation, I discovered some packet loss when running ping, with the result message "No buffer space available".

 

It turned out that the allocated memory was hitting its maximum. The solution was to increase the value in the file

 

/proc/sys/net/core/wmem_max

 

Then restart the network interface for the change to take effect.

 

Hope this was useful.

 

Regards,

 

Alejandro Fonseca

Twitter: @MarioAFC

Insurance Claims triangle - A jab at SQLScripting


The Insurance Claims triangle/Loss triangle/Run off triangle

 

Before we delve into the prediction, the scripting and all the interesting stuff, let's understand what the claims loss triangle really is. An insurance claims triangle is a way of reporting claims as they develop over a period of time. It is quite typical that claims get registered in a particular year and the payments are paid out over several years. So it becomes important to know how claims are distributed and paid out. An insurance claims triangle does just that. Those who are familiar with the Solvency II norms set by EIOPA will know the claims triangle report: it is mandated and is part of the Quantitative Reporting Templates (QRTs).

 

 

fig1.png

 

Fig : 1 - The claims triangle

 

In figure 1, the rows signify the year of claim registration and the columns the development year. Consider that we are in the year 2013 and are looking at the claims triangle. The first row focuses on the claims registered in the year 2005. The first column of the first row (header 0) gives you the claim amount paid out by the insurance company in that same year. The second column gives you the claim amount paid out in the next year (2006). This goes on until the previous year of reporting, i.e. 2012. The second row does the same thing, but for the claims registered in the year 2006. Logically, as each row is incremented, the number of columns is smaller by one. This gives the report its triangular shape and hence its catchy name. The claims triangle can be of two types - incremental or cumulative. Incremental means each column holds the amount paid at that specific intersection of registration year and payment year. The cumulative variant, on the other hand, contains the cumulative claims paid out as of that intersection point.

 

The prediction below is based on the cumulative model of the claims triangle. We base our logic on a set of records stored at the cumulative level. I have uploaded the input data as a CSV in the blog to save you time.

 

The Prediction

 

The interesting part is to fill the second triangle of the rectangle (if you will). Typically R is used to do this work, and that would of course be a much easier and more reliable way to do it. If you are interested in following the R way, I would suggest viewing the videos presented on the SAP Academy channel - https://youtu.be/wogBQ8Rixwc . It was out of sheer curiosity that I planned an SQLScript-based implementation of the loss triangle. Let's try to understand the algorithm first.

 

As an insurance company it is useful to know what you will have to pay out as claims in the years to come. It helps the insurance company maintain financial reserves for future liabilities and reduce the risk of insolvency. There are quite a few statistical models used to predict the future numbers, but the most accepted one is the Chain Ladder algorithm presented by T. Mack.

 

Well, let's see the math behind the prediction. I have to candidly accept that my math is not too refined, so I would rather explain it in words. The algorithm itself has two parts to it - building the CLM estimator and the prediction itself.

 

Phase 1 : Derivation of the CLM(Chain ladder method) estimator

 

The first phase would be to determine the multiplication factors for each column which would later be used for the prediction. 

fig2.jpg

Fig : 2 - CLM Estimator derivation

 

 

The above figure shows the CLM estimator of each column. Basically, the math is a rather simple division of adjacent columns over an equal number of cells. The CLM estimator for column 3 is derived by dividing the cumulative values of column 3 by those of column 2, excluding the last cell of column 2. The same exercise is repeated over all adjacent pairs of columns to build the estimators.
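
In formula terms (standard chain-ladder notation, added here only for reference: C_{i,k} is the cumulative amount of registration year i at development year k, and n is the number of observed registration years):

$$\hat{f}_k = \frac{\sum_{i=1}^{n-k} C_{i,k}}{\sum_{i=1}^{n-k} C_{i,k-1}}, \qquad k = 1, \dots, n-1$$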

 

Phase 2 : Predicting the values

 

The prediction is a recursive exercise that is done one diagonal row at a time. Each diagonal row signifies claim payments for one particular future year. Looking again at figure 1, the first empty diagonal row would hold the predicted values that would be paid out in the year 2013 for the claims registered across different years. The next diagonal row would be for 2014 and so on.

 

fig3.jpg


Fig : 3 - Prediction

 

Each predicted value is calculated as the product of the CLM estimator of the target column and the amount in the predecessor column of the same row. Once an entire diagonal row is calculated, the next diagonal row is calculated the same way, but based on the previously predicted diagonal row. The whole process is repeated until the entire rectangle is complete.
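
In the same notation, each missing cell is then filled recursively from its left-hand neighbour, using the historical value wherever the previous cell is still observed:

$$\hat{C}_{i,k} = \hat{C}_{i,k-1} \cdot \hat{f}_k$$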

 

 

 

The SQL Scripting

 

Now to get to the meat of this blog. I made a major assumption in the example shown here: I assume the cumulative values for the run-off triangle are available in a table. The reason is that data for claims and payments could live in a single table or in multiple tables, depending on how the insurance data model is implemented. An SQL view would have to be written to build the cumulative values, and the whole SQL script shown here can then be pointed at it. For simplicity I just use a single table here.

 

The whole implementation is on a script based calculation view.

 

Semantics

 

fig4.jpg

 

Fig : 4 - Calculation view semantics

 

As you see above, the calculation view exposes five fields:

  • Claim_year - Year of claim registration
  • Pymt_year - Year of payment(cumulative)
  • Dev_year - Claim development year
  • Predict - A flag to distinguish predicted and historical values
  • Amount - Cumulative amount

Script -> Variable declarations

 

fig5.jpg

Fig : 5 - Variable declaration

 

Above is just a bunch of variables that would be used in the calculations below. I use an array of real type to store the CLM estimators.
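
Since the figure is only a screenshot, here is a minimal sketch of what such a declaration block could look like (the variable names are illustrative and may differ from the ones in the figure):

DECLARE arr_clm REAL ARRAY;          -- CLM estimators, one per development step
DECLARE v_min_year INTEGER;          -- first claim registration year
DECLARE v_max_year INTEGER;          -- last observed payment year
DECLARE v_diff INTEGER;              -- number of development steps to predict
DECLARE v_num DECIMAL(17,2);         -- numerator of the current estimator
DECLARE v_den DECIMAL(17,2);         -- denominator of the current estimator
DECLARE i INTEGER := 1;              -- loop counter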

 

 

Script -> Variable definitions

 

fig6.jpg

Fig : 6 - Variable definition

 

What you see above is the construction of three variables: the minimum year, the maximum year for the calculation, and their difference. The next component builds a table variable t_claim_table based on pre-calculated cumulative claim amounts stored in the CLAIMS_PAID table. This part of the code can be modified to match the underlying data model and calculation requirements. For example, if you want to run the claims triangle as of the current date, the max value could be selected as select year(current_date) from dummy, and the min could come from an input parameter or from the table itself, as done here. For simplicity of my simulation, I have hard-coded the max and obtained the min from the table itself. The select query on CLAIMS_PAID could also be changed based on the data model used. Let's assume we got over this hurdle of building the input data.
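
As a rough sketch (assuming CLAIMS_PAID has columns CLAIM_YEAR, PYMT_YEAR and a cumulative AMOUNT - the column names are illustrative), this step could look like:

SELECT MIN(claim_year) INTO v_min_year FROM claims_paid;
v_max_year := 2012;                                   -- hard-coded for the simulation
v_diff := :v_max_year - :v_min_year;
-- cumulative history that the predicted cells will be appended to
t_claim_table = SELECT claim_year, pymt_year, amount FROM claims_paid;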

 

Script -> Building the CLM estimator

 

fig7.jpg

Fig : 7 - CLM Estimator

 

 

To understand the math behind the CLM estimator, I recommend reading the topic "The Prediction" above. I use a while loop to iterate over adjacent columns, build the sums, and in the outer query divide them to arrive at the CLM estimator. The value is then saved into an array. The iteration runs from 0 up to the maximum number of years the run-off triangle covers. For our example, looking at figure 1, this would be 2012 - 2005 = 7. So we can safely assume the while loop runs 7 times to calculate the 7 CLM estimator values seen in figure 2. The variable 'i' helps in selecting the correct column. At the end of the while loop, all 7 CLM estimator values are in the array.
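
In outline - a simplified sketch rather than the exact code in the figure, reusing the illustrative variables from above and treating pymt_year - claim_year as the development year:

WHILE :i <= :v_diff DO
    -- cumulative amounts of development column i, only for claim years
    -- that already have an observed value in that column
    SELECT SUM(amount) INTO v_num FROM :t_claim_table
     WHERE pymt_year - claim_year = :i
       AND claim_year <= :v_max_year - :i;
    -- the same claim years, one development column earlier
    SELECT SUM(amount) INTO v_den FROM :t_claim_table
     WHERE pymt_year - claim_year = :i - 1
       AND claim_year <= :v_max_year - :i;
    arr_clm[:i] := :v_num / :v_den;
    i := :i + 1;
END WHILE;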

 

 

Script -> Predicting the values

 

fig8.jpg

Fig : 8 - The prediction

 

To understand the math behind the prediction done here, I recommend reading the topic "The Prediction" above. There are two nested for loops that do the work. The inner loop calculates each cell within one diagonal row at a time; the outer loop runs as many times as there are diagonal rows, until the rectangle is filled. The three variables 'i', 'j' and 'h' control the calculation of each value. The CLM estimator is obtained from the array filled in the previous step. I use a UNION to append records to the existing historical claims. This way, once a diagonal row has been predicted, I can use those values to build the next diagonal row. At the end of the loops, the table variable t_claim_table holds the historic as well as the predicted values, filling up the rectangle.
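
Again only a sketch of the approach rather than the code in the figure (the loop bounds and the additional scalar variables v_pymt_year, v_start, v_dev, v_prev and v_new are illustrative and assumed to be declared like the ones above); it predicts one future payment year per outer pass and appends each new cell with a UNION so that the next diagonal can build on it:

FOR j IN 1 .. :v_diff DO                          -- one future payment year (diagonal) per pass
    v_pymt_year := :v_max_year + :j;
    v_start := :v_min_year + :j;                  -- first claim year still open on this diagonal
    FOR h IN :v_start .. :v_max_year DO           -- every claim year on this diagonal
        v_dev := :v_pymt_year - :h;               -- development year of the new cell
        -- previous cumulative value of the same claim year (historic or already predicted)
        SELECT amount INTO v_prev FROM :t_claim_table
         WHERE claim_year = :h AND pymt_year = :v_pymt_year - 1;
        v_new := :v_prev * :arr_clm[:v_dev];
        -- append the predicted cell to the running result set
        t_claim_table = SELECT claim_year, pymt_year, amount FROM :t_claim_table
                        UNION ALL
                        SELECT :h AS claim_year, :v_pymt_year AS pymt_year, :v_new AS amount
                          FROM dummy;
    END FOR;
END FOR;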

 

 

Script -> Finally the output

 

fig9.jpg

Fig : 9 - Output

 

The var_out variable is finally filled to be displayed as output. The case statement checks whether it is a predicted or a historic value and is later used for applying a filter in the report.
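
Roughly, under the same illustrative column names, this closing step could look like:

var_out = SELECT claim_year,
                 pymt_year,
                 pymt_year - claim_year AS dev_year,
                 CASE WHEN pymt_year > :v_max_year THEN 1 ELSE 0 END AS predict,
                 amount
            FROM :t_claim_table;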

 

Visualization - SAP Lumira Reports

 

Putting all the moving pieces together, Lumira is the perfect tool to show the output. I used a cross-tab report to demonstrate the triangular layout. The development year is along the columns and the claim registration year is along the rows. Additionally a filter lets you make the report even more interactive.

fig10.jpg

 

Fig : 10 - SAP Lumira report showing loss triangle with only historical values

 

 

fig11.jpg

 

Fig : 11 - SAP Lumira report showing loss triangle with the predicted values

 

I am quite keen to hear your feedback and suggestions on whether there is a better way to script this (without, of course, using the shortcut of calling R).

SAP TechEd (#SAPtd) Strategy Talk: SAP's Platform-as-a-Service Strategy


In this Strategy Talk, recorded at SAP TechEd Bangalore 2015, Ashok Munirathinam, Director PaaS APJ, speaks about how SAP intends to focus the SAP HANA Cloud Platform for customers, partners, and developers to build new applications, extend on-premise applications, or extend cloud applications. In this session you can get an understanding of the platform today and the direction SAP is headed, as well as key partnerships and use cases for the platform and the future capabilities being developed. You will also understand the value and simplicity of cloud extensibility, and how to engage with SAP in a simple way.


Internet of Things Foosball - Part 2


...continuing from Internet of Things Foosball - Part 1

 

We got a team of four undergraduates who applied for this project. They immediately recognized that this final project was not an ordinary one: it would mean a great deal to the people in the company, and it would also challenge their technical skills to the highest level.

 

They would have to become familiar with a Raspberry Pi and its sensors, build up knowledge of SAP HANA and its endless possibilities, and evaluate whether SAPUI5 was suitable for the web app. As much as their enthusiasm drove them to get started on both hardware and software, they first had to work through some learning material.

 

Once all members of the team had gone through the basics (i.e. blogs and tutorials from Thomas Jung), it was time for business. At this point they understood the role of each component better and finalized the architectural view of the whole project. The architectural mockup looked like this.

Foosball_architect.png

 

They soon finalized the solution on the Raspberry Pi. It had a simple role: capture the goal events and send an HTTP POST via an API to SAP HANA. The APIs were created with the SAP HANA XS application framework.

The API was built upon a simple DB model.

foosbal_dbmodel.png

 

Although this model was quite simple, it was enough for our original specifications. Now it was time to let the imagination run and work out what kind of statistical data we wanted to derive from the data created with this model. We decided on the following:

  • Most wins
  • Most played games
  • Most scored/conceded goals
  • Highest/Lowest win percentage
  • Quickest/Slowest goals
  • Winning/Losing streak
  • Greatest comeback
  • Wall of Shame
  • Best partner, Worst partner. Easiest opponent, Toughest opponent etc.

At the beginning it was hard to believe that all of this would be possible with such a simple data model and the power of SAP HANA.

 

However, this list of statistics did not show which player was ranked highest. The team therefore implemented an ELO score algorithm, which was a brilliant idea. Not only did they implement a static ELO score, but also a historical ELO view.

 

As time passed and the delivery date got closer, the team decided not to use SAPUI5. Instead they built the UI using AngularJS, as they were more familiar with it. This decision was also driven by the requirement to use WebSockets for immediate score updates.

The landing page would show an ongoing game and the routes to the player/team selection, the statistics view, etc.

landingpage.png

 

It is an understatement to say that the final outcome was much more than we had hoped for.

It can be seen in this video.


Syscompare - a tool for comparing repos across HANA instances



compare-icon.png

 

A couple of weeks ago I was moving code from one HANA instance to another, trying to keep them in sync. I thought there might be a better alternative for comparing the contents of the repos across my systems to ensure that the files matched. After doing some digging and not finding a solution, I decided to write a small tool to do just this, called Syscompare. It is an open-source HANA app which uses the new File API to compare the files on each system and display the differences.

 

You can read more about the application here, and find the files for the HANA app on GitHub.

 

 

Features:

 

- Compare repos across HANA instances

- Display file differences in the application

- Highlights missing files on each instance

 

Usage:

 

- Set up the two .xshttpdest files

- Specify the source and target systems (using the xshttpdest file names)

- Specify the repo to compare

 

Once the processing is complete the app will show a summary of differences:

 

Screen Shot 2015-06-16 at 10.18.20 PM.png

 

Screen Shot 2015-06-16 at 10.20.12 PM.png

 

Screen Shot 2015-06-16 at 10.20.51 PM.png

 

 

 

You can check out the GitHub source code here: paschmann/Syscompare · GitHub

 

If you prefer to download the Delivery Unit - please click here and provide your email address (this way I can keep you up to date with changes): metric² | Real-time operational intelligence for SAP HANA

 

Interested in contributing to the project? Please feel free to fork or submit a pull request.

Get Your SAP HANA Idea Incubator Badge Today!


UPDATE - SAP Idea Incubator is transforming into a new program, a new blog will be posted to explain the new process shortly. Please check back later. Thank you all for your participation.


SAP Idea Incubator is a crowdsourcing program that brings together customers with fresh ideas about how to use their data – and innovators who can quickly build prototypes of solutions that reflect these ideas. The result? Fast, cost efficient problem solving.

 

Join SAP Idea Incubator today!

 

How to participate?


Do you already have an idea about how to better utilize your data? Send us a brief description of it in our submission form.

Do you see an idea that interests you? You can share it in your circles, discuss your thoughts in the discussion board, or submit a proposal.

 

What are the benefits?


For idea submitter

  1. Get global data scientists and developers to work on your idea
  2. Leverage the crowd-sourcing benefits with confidence in data security, on SAP's cutting-edge technology
  3. Get working prototypes to prove feasibility of your idea

 

For innovators who submit prototypes for ideas

  1. Build personal skills, knowledge and expertise; get hands-on with real customer data and solve real business problems
  2. Get a chance to participate in the HANA Distinguished Engineer program and gain exposure to a broader business community
  3. Monetary and/or other sponsorship for the winner of every idea

 

Plus, now you can get badges!

 

Level 1: I am an SAP HANA Idea Incubator Fan

HANAIdeaIncubator_fan_75.png

  • “Bookmark” this blog (click on the Bookmark function on the upper right hand side of this blog)

Level 2: I am a Contributor to SAP HANA Idea Incubator

HANAIdeaIncubator_contributor_75.png

  • Submit your idea here
  • and Post a blog on SCN describing your idea or proposal

Level 3: I am a Winner at SAP HANA Idea Incubator

HANAIdeaIncubator_winner_75.png

  • Be selected as a winner for the proposal submission. Congratulations!

 

Some related links

 

Please let us know in the “Comment” section if you have any questions. We can’t wait to see your participation!

 

Special thanks to the SCN Gamification team (Jason Cao, Jodi Fleischman, Audrey Stevenson, and Laure Cetin) for helping to create a series of special SAP HANA Idea Incubator missions and badges and making this happen!

Dynamics of HANA Projects


HANA Projects – What is so special?

 

What is all the buzz about managing and running a HANA project? Come on, HANA is just another product in line with SAP's ERP product lines and innovations. We have tons of information and loads of examples around us on how to run an ERP project. SAP ERP is not new to the market, and to date there have been innumerable implementations of various types, categories and sizes.

 

It's just a revolutionary database technology, HANA, at its core, bringing unmatched computation speed. Lift and shift an existing implementation from conventional databases to the latest SAP application running on HANA and boom, it runs at blazing speed. Even going to extremes like S/4HANA, it's more or less the same story, with the added advantage of converging multiple separate applications like CRM, Simple Finance, etc. as plugins into a single application and database platform.

 

Everything said so far reflects exactly the areas of ignorance that create issues and at times crash HANA projects.

 

Remember, HANA is not only about speeding things up using a highly capable memory-to-CPU architecture; it is moreover about:

a. Changing the way organizational processes are first simplified and then reflected in design strategies

b. Embracing new-age architecture models such as Data Vault and Layered Scalable Architecture Plus, to name a few

c. Understanding the sizing aspects correctly, to avoid oversizing or undersizing and the resulting impact on performance

 

 

In short, the conventional way of running ERP projects, especially Business Warehouse projects, which used to be mostly data-driven and primarily technical implementations, does not suit the mood and meaning of a HANA implementation.

 

It is not only the speed of the calculation engines that pumps in value; other factors like the actual usability of the solution, proper pivoting of the data-level architecture, an aligned architecture, and an understanding of how HANA actually works at the engine level all help in a proper implementation and can decide success or failure.

 

 

There are broadly two important factor areas:

 

1. Non-technical aspects, such as the execution plan, proper resourcing, and integration of core business requirements into deliverables.

2. Technical aspects, such as proper sizing, selection of a packaged or tailored appliance, architecture, a solution that balances CPU versus memory consumption, and a wise choice of data provisioning.

 

 

Where is the world heading to?

 

The current scenario that businesses are heading into holds the key to how HANA projects should be handled.

 

There was a time when it was enough to have an ERP solution connecting all departments and being able to generate reports to manage an organization and, to some extent, help manage operations.

 

With time we gained more speed, covered more aspects, took on more complexity, and went on to include planning, CRM and SRM on top of the core modules.

 

The expertise required to handle this was unpacking bundled solutions or weaving customized solutions over time, recursively built on top of each other. Projects were run to configure and implement new reporting requirements, standard or custom. It was more of a two-dimensional project requirement running between these two aspects.

 

Optimization efforts were geared towards:

a. Demonstrating expertise in building similar solutions for various customers, and optimizing by using tools to automate frequently done tasks

b. Creating templates to implement what almost everyone requires, with options for customization

c. Reducing FTEs and time by using models like the factory model

d. Delivering one report for every business user, at times out of temptation rather than actual requirement

 

 

From the customer perspective too, optimization, gaining the most out of the investment, and the execution of projects were limited to individual departments.

 

Report development in data warehouses was limited to smaller data sets and based on aggregation, due to performance limitations.

 

Where we are heading now makes a huge impact on the usability of these solutions, especially now that means like HANA-based solutions are available.

 

We have overburdened systems running expensive ETLs and aggregations over huge data volumes, with, according to some research, as little as 1% of it being valuable information.

 

Where we are heading is the need for powerful, real-time decision capabilities. We need not only information derived from the systems in our network of OLTP boxes; data flowing in from various other real-time sources is just as important. Organizations are growing in size, which translates into more complex systems. Planning on even day-old data is considered old-fashioned or risky. Understanding customer sentiment within minutes is a necessity, and predicting customer mood well in advance is core strategy. The definition of "what business an organization is actually in" has also changed with time: there has been a shift from being a pure product company to serving the customer through the product. A car manufacturer can no longer focus merely on launching cars on technology gains alone; it has to maintain, understand and focus on catering to the needs of existing customers through superlative communication.

 

 

Coming back to our discussion about the differentiating factors in HANA project management: it is this scenario, with customers going for these implementations, that has a wider implication on the dynamics of HANA implementations.

 

A HANA implementation cannot be just a technical, department-wise, report-generating solution. It is a fusion of the art of business understanding and a finer, technologically supported initiative.

 

 

So what is different here – Fundamental Difference.

 

The best-trained and best-run SAP implementation partners used to believe that if you collect all aspects of data in the OLTP systems and dump them all, in smaller subsets, into the warehouse, you have a successful implementation. So was the view of the client's CIO and CTO offices as well.

 

There are many failed HANA implementations, and the reason is that they were executed with the mindset mentioned above.

 

The fundamental differences are:

a. HANA implementations should aim at the greater goal of higher business usability and enhanced simplicity in terms of design - it is a strategic organizational goal

b. HANA is no magic wand. It returns a good ROI if used properly; otherwise it does not.

SAP recommends a project execution approach that increases the quality value proposition to the customer with a decreased cost impact. The decreased cost also comes with a factor of decreased execution time.

 

Prevalent Challenges

 

There are some prevalent challenges to keep in mind while executing, or even at the initial stages of bidding for, a HANA project. There are always some concerns and a cloud of doubt around any new technology, and at times the difference in understanding between the marketing and sales teams and the delivery team is also to blame.

 

Some of the common prevalent challenges are:

a. HANA is hot; getting the right skills is hotter

b. Whether to go with a pre-packaged HANA appliance black box, or to utilize existing hardware and go for a tailor-made HANA server

c. Cost of investment for HANA is high

d. The choice of how much to run on premise versus in the cloud, and the reasoning behind it

e. Will this actually help in reducing my TCO, and what are the tangible benefits in terms of ROI?

f. What about existing Hardware

g. The customer's belief that lift and shift is enough

h. Reports run faster - what else is the benefit?

 

 

Some common mistakes

 

There are some common mistakes when we talk about HANA projects:

 

a. As Vishal Sikka put it, the beauty of HANA is: "you run a process in 100 seconds using one core, you run it on 100 cores and it runs in 1 second". This also teaches a lesson about sizing. Note that if 100 concurrent users run a report with a runtime of 2 seconds, with 1000 users each of them will run for 10 seconds. So concurrency should be factored in as one parameter, along with the expected complexity and runtime of reports.

 

b. The HANA engine does not create persistent cubes; it generates metadata to pick up data at runtime. If CPU utilization is very low compared to memory utilization, that is also a design error and might end up ruining the sizing.

 

c. Too much real-time data reporting and unnecessary big data combinations. Not considering the strategic utility of big data at times ends up in bottlenecked, memory-overflowing systems with literally no meaningful information flowing out of them.

 

d. Scrap is not good for a home or for society, and the same goes for HANA implementations: failing to shed unnecessary flab (read: unnecessary processes and reports) before moving to HANA

 

e. As-Is migrations to HANA

 

f. Wasting efforts on things which were never used

 

g. Long-duration planning and execution projects. In an ever-changing market, it is sometimes too late for a client: after spending a little too much time, it finds that the strategic and competitive edge of the new solution is already lost.

 

 

Best practices

 

 

There are some best practices to be followed, and not following them may result in a HANA project failing to varying degrees.

 

 

A. The focus of HANA implementations is normally not only to speed up functionalities and reports. It is to achieve the higher strategic goal of utilizing more of the data for generating time-critical responses and monitoring, for faster decision making. This also requires generating less flab, in the sense of keeping, as far as possible, only what is required. All of this also needs to be executed and delivered in a shorter period of time for better ROI and business value. This translates into a project methodology with 200% more direct business involvement, and into moulding the architecture around it. A shift in mindset and project execution is required, from a data-centric technical execution to a business-facing method.

 

B. As-is and lift-and-shift are technical possibilities for a non-disruptive migration of existing developments to the new HANA environment. These are purely technical approaches and most of the time do not result in a well-optimized migration. SAP also recommends a lift and shift only after removing as much clutter from the system as possible. After lifting and shifting, HANA-optimized design methodologies have to be followed to get the benefit out of HANA. This results in additional implementation cost and time as well.

 

C. Trying to sell the customer just a lift and shift is as bad as spreading bad words about yourself in the market.

 

D. Continuing from point A: involve business and, if possible, field sales / market staff SPOCs in workshops to understand strategic and customer-to-customer key areas. At times, business sitting in the corporate office can give a 100% understanding of processes and their utility, but customer-to-customer key areas are best understood by field staff, and at times critical analytic inputs are disclosed there.

 

E. As HANA implementations also carry a high-level organizational strategic value proposition and are viewed under a microscope from every level of the client organization, make sure to understand the greater overall expectation of the client. E.g. if a telecommunications client is executing a project to deliver reports on network data, do not just go with standard reports to present that data, but understand the business value expected out of it. While you are delivering solutions like statistics showing the number of calls per region, the mix of caller types active per region, the fraction of local and STD calls, etc., the client might simply be interested in seeing the call drop-out rate quickly, to plan their client retention. The value proposition to the client is shaken by excessive timelines and cost when focused solutions are not delivered.

 

F. Sizing should be done carefully. HANA stores data in a compressed format. It stays compressed and is decompressed, very quickly, only if required and only in the CPU cache. So the overall footprint of the data is reduced significantly, by up to around 40%; with Simple Finance it has gone down even more. But at the same time, sizing should be enough to support resource distribution between multiple concurrent users.

 

G. Proper architectural guidelines should be implemented, e.g. LSA++ to reduce the layers in the warehouse and get the most out of HANA's store-once, use-many-times benefits.

 

H. In SAP BW on HANA implementations, the active/inactive data settings are mostly not utilized, even though they make it possible to actively flush data from memory to disk during a bottleneck. All of the hot data in the system should occupy at most 50% of the memory.

 

 

Other Challenges

 

There are some other challenges in running HANA projects as well. For first-timers, the Basis team at times is not ready for HANA, and that becomes a bit of an issue. A heterogeneous environment with multiple SAP and non-SAP systems can also be a challenge. Choosing the correct data extraction solution, transforming data before bringing it into HANA, and the de-normalization requirements of the data are important challenges.

 

A very important aspect, which moreover questions the knowledge level and maturity of project teams, is suggesting or planning what goes to the cloud and what remains on premise. There are customers who want to make use of both worlds, but this should be decided based on the line of business, the department, and the type of service provided by the organization.

 

At times customers fail to understand, are not consulted well, or due to financial constraints ignore the value of an archiving strategy and near-line storage (NLS). With huge volumes of incoming data, archiving and NLS play a strategic role in data management, especially with retail, pharma and telecommunications customers. The data accumulates in no time, performance is impacted, and the new system loses its sheen and value.

 

At times, due to negligent planning, disruption of systems when moving to production, system outages, and the time taken to fix them are also reasons why a HANA project, like any other project, fails. HANA being new, with less expertise available, it at times sits in a higher risk area here.

 

Creating a Centre of Excellence (COE) in the service provider organization, and optionally in the client organization, is important. Without one, primarily at the service provider, knowledge remains scattered, improperly documented and at times stays with individuals, which hampers organizational maturity. The focus on quality control and metrics is also mostly missing. Leveraging COEs helps build up more of these capabilities and reach maturity, with additional focus on tool and methodology build-up.

 

Remember that the customer's expectations of a HANA project are always extremely high; it is always a high-focus project. Also, the customer's target benefit is a strategic organizational upgrade, not only a technical upgrade. This makes it extremely important to create implementation designs only with a deep understanding of the business processes and the bigger goal expected.

 

On the other hand, HANA being an altogether new kind of database and application, a sound understanding of the technology and bringing the right skills on board are extremely important.

 

At the organizational level, building up COEs and developing competencies not only helps in proper project execution, but also helps build quality-focused, mature strategic expertise.

New SQLScript Features in SAP HANA 1.0 SPS 10


Enhancements to SQLScript Editor & Debugger in the SAP Web-Based Development Workbench

 

In SPS 10, the SQLScript editor and debugger in the SAP Web-based Development Workbench have been enhanced in several ways to help developers be more productive. We introduced the editor in SPS 9 with basic keyword hints, but in SPS 10 we've expanded this to include code snippets and semantic code completion, very similar to what we introduced in the SAP HANA Studio in SPS 9. Basically, if you want to construct a SELECT statement, you simply type the word SELECT and hit CTRL+SPACE. You will then get a list of possible code snippets to choose from.

1.png

Select the snippet you wish to insert and hit ENTER; the code snippet is inserted into the procedure. You can then adjust it as needed.


2.png


Another feature that we’ve added to the SQLScript editor in the web-based development workbench is semantic code completion.  For example, if you need to call a procedure, you can simply type the word CALL and hit CTRL+SPACE, and you will get a drop down list of procedures. Simply double click on the object you want to insert.  This is context sensitive, so it works quite well in other statements as well.

3.png

 

With SPS 9, we introduced the ability to debug procedures within the web-based development workbench, but only from the catalog. As of SPS 10, you can also debug design-time artifacts (.hdbprocedure files). You simply open the .hdbprocedure file and set your breakpoints. You can then right-click and choose "Invoke Procedure" to run it from the SQL console. The debugging pane is shown and execution stops at your breakpoint. You can then, of course, single-step through the code and evaluate values.


4.png

 

 

Commit/Rollback


One of the many stored procedure language features that a developer expects in any database is the concept of COMMIT and ROLLBACK. Up until now we did not support COMMIT/ROLLBACK in SQLScript. As of SPS 10, we support the use of COMMIT/ROLLBACK within procedures only, not in scalar or table user-defined functions (UDFs). The COMMIT statement commits the current transaction and all changes before the COMMIT statement. The ROLLBACK statement rolls back the current transaction and undoes all changes since the last COMMIT. The transaction boundary is not tied to the procedure block, so if nested procedures contain COMMIT/ROLLBACK, then all statements in the top-level procedure are affected. For those who have used dynamic SQL in the past to get around the fact that we did not support COMMIT/ROLLBACK natively in SQLScript, we recommend that you replace all occurrences with the native statements because they are more secure. For more information, please see the section on Commit & Rollback in the SQLScript Reference Guide.
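
As a quick illustration (the table LOG_TAB and its columns are made up for this sketch):

CREATE PROCEDURE demo_commit_rollback( IN im_id INTEGER )
LANGUAGE SQLSCRIPT AS
BEGIN
  INSERT INTO log_tab VALUES (:im_id, 'STEP 1');
  COMMIT;                      -- everything up to this point is now persisted
  INSERT INTO log_tab VALUES (:im_id, 'STEP 2');
  ROLLBACK;                    -- undoes only the second insert
END;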


Header Only Procedures/Functions


We’ve also introduced the concept of “Header Only” procedures and functions in SPS 10. This addresses a problem when creating procedures/functions that depend on one another: you can’t create the one procedure/function before the other. Basically, this allows you to create procedures/functions with minimum metadata first, using the HEADER ONLY extension. You can then go back and inject the body of the procedure/function by using the ALTER PROCEDURE statement. The CREATE PROCEDURE AS HEADER ONLY and ALTER PROCEDURE statements are only used in the SQL console, not in design-time artifacts. Below is a sample of the basic syntax; for more information, please see the section on Procedure & Function Headers in the SQLScript Reference Guide.


CREATE PROCEDURE test_procedure_header( in im_var integer,

                                out ex_var integer ) as header only;

 

ALTER PROCEDURE test_procedure_header( in im_var integer,

                                out ex_var integer )

LANGUAGE SQLSCRIPT

SQL SECURITY INVOKER

READS SQL DATA AS

BEGIN

   ex_var = im_var;

END;

 

SQL Inlining Hints


The SQLScript compiler combines statements in order to optimize code. SQL inlining hints allow you to explicitly enforce or block the inlining of SQL statements within SQLScript. Depending on the scenario, execution performance could be improved by either enforcing or blocking inlining. We can use the syntax WITH HINT(NO_INLINE) or WITH HINT(INLINE). For more information, please see the section on Hints: NO_INLINE & INLINE in the SQLScript Reference Guide.
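
For instance, something along these lines (the PRODUCTS table is just a placeholder) keeps the first statement from being merged into the second:

lt_products = SELECT product_id, category FROM products WITH HINT(NO_INLINE);
SELECT category, COUNT(*) AS cnt FROM :lt_products GROUP BY category;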

 

Multiple Outputs from Scalar UDFs


In SPS 8, we released the ability to call scalar functions in an assignment statement. But there was a limitation which only allowed you to return one output parameter per call.  In SPS 10, you can now retrieve multiple output parameters from a single call.

 

The following function output_random_number has two return parameters, called ex_rand1 and ex_rand2.

 

CREATE FUNCTION output_random_number( )

        RETURNS ex_rand1 integer,

                 ex_rand2 integer

    LANGUAGE SQLSCRIPT

    SQL SECURITY INVOKER AS

BEGIN

ex_rand1 = ROUND(TO_DECIMAL(1 + (999999-1)*RAND()),2);

ex_rand2 = ROUND(TO_DECIMAL(1 + (999999-1)*RAND()),2);

END;

 

In this procedure, we will call the function and retrieve both return parameters in one call.

 

CREATE PROCEDURE test_scalar_function(

          OUT ex_x integer, OUT ex_y integer)

  LANGUAGE SQLSCRIPT

  READS SQL DATA AS

BEGIN

    (ex_x,ex_y) = output_random_number( );

END;

 

 

You can also, retrieve both values separately with two different calls, referencing the name of the return parameter.

 

CREATE PROCEDURE test_scalar_function(

         OUT ex_x integer, OUT ex_y integer)

  LANGUAGE SQLSCRIPT

  READS SQL DATA AS

BEGIN

    ex_x = output_random_number( ).ex_rand1;

    ex_y = output_random_number( ).ex_rand2;

END;

 

Table Type for Table Variable Declarations


In SPS 9, we introduced the ability to declare a table variable using the DECLARE statement. At that point, you could only define the structure explicitly inline and could not reference a table type from the catalog or from the repository. In SPS 10, you can now do so. In the example below, LT_TAB is declared referencing a table type in a CDS (.hdbdd) file.


CREATE PROCEDURE get_products( )
    LANGUAGE SQLSCRIPT
    SQL SECURITY INVOKER
    DEFAULT SCHEMA SAP_HANA_EPM_NEXT
    READS SQL DATA AS
BEGIN

  declare lt_tab "sap.hana.democontent.epmNext.data::MD.Products";
  lt_tab = select * from "sap.hana.democontent.epmNext.data::MD.Products";
  select * from :lt_tab;

END;

 

Anonymous Blocks


Finally, the last feature I would like to introduce is the concept of Anonymous Blocks.  This allows the developer to quickly write and execute SQLScript code in the SQL Console without having to create a stored procedure.  This is very useful for trying out small chunks of code during development.  You can execute DML statements which contain imperative and declarative logic. Again, there is no lifecycle handling (no CREATE/DROP statements) and no catalog object.  You also cannot use parameters or container-specific properties such as language or security mode.  The syntax is very simple: you use the keyword DO followed by a BEGIN/END block, put your SQLScript code inside the block, and execute it.  For more information, please see the section on Anonymous Blocks in the SQLScript Reference Guide.


5.png
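As a minimal sketch of that syntax (the table ORDERS is again just a made-up example), you could paste something like this directly into the SQL Console:

DO
BEGIN
  DECLARE lv_count INTEGER;
  -- mix of imperative logic and a query, with no catalog object being created
  SELECT COUNT(*) INTO lv_count FROM orders;
  SELECT :lv_count AS order_count FROM dummy;
END;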

Anonymous Authentication on a HCP XS Application


Recently I found myself needing to expose one of my HCP (HANA Cloud Platform) applications to the outside world without any authentication. While this is probably not the most common scenario, it can still happen, and it raises a whole load of questions: how do you actually expose the UI in a freely accessible way, and how do you give limited access to your data?

 

So here we go - this scenario is where I have a split app on HCP (not the trial version) with data residing in my HANA server.

 

Step 1 - Roles & Privileges

 

We need a standard .hdbrole and .analyticprivilege file. The first should be of the standard form, giving perhaps "SELECT" access to a schema or set of tables. It should also include your analytic privilege (which contains any attribute, analytic or calculation views).

 

 

Figure 1


Sample .hdbrole file giving access to a schema and including an analytic privilege


* Note that normally I would never give UPDATE/INSERT/DELETE privileges to an anonymous user unless I had a good reason.

Screen Shot 2015-06-30 at 15.50.18.png

Figure 2


Sample .analyticprivilege file giving access to an Analytic view I created

Screen Shot 2015-06-30 at 15.50.43.png

 

 

Step 2 - Create basic restricted user

 

In order to be certain that the connecting user only has access to what we intend, create a new user and assign only the following permissions (a sample SQL sketch follows the list):

  • Assign the role created in step 1 to the user
  • Assign "SELECT" access to the schema "_SYS_BIC"
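For reference, the user setup can be scripted in the SQL console roughly as follows; DEMO_ANON is the user name used later in this blog, while the password and the role/package names are placeholders you would replace with your own:

-- minimal user for the anonymous connection (password is a placeholder)
CREATE RESTRICTED USER DEMO_ANON PASSWORD "Initial1234";
-- grant the design-time role from step 1 (package and role names are placeholders)
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('XYZ.ABC::anonRole', 'DEMO_ANON');
-- allow reading the column views generated for activated models
GRANT SELECT ON SCHEMA "_SYS_BIC" TO DEMO_ANON;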

 

Step 3 - Create a SQL connection for your app

 

Now we need to create an XS SQL connection configuration (.xssqlcc) file, which will be the object we use to connect our anonymous user to our project. This file simply contains one line, which is a description of the connection configuration.

 

Figure 3


Sample .xssqlcc file contents simply giving a description of the SQL connection configuration.

Screen Shot 2015-06-30 at 15.51.12.png

 

 

Step 4 - Assign your restricted user to the SQL connection

 

Activation of the XSSQLCC file from step 3 creates an entry in the system table "SQL_CONNECTIONS" in the schema "_SYS_XS". Performing a select on that table where the "NAME" field is equal to your XSSQLCC file name will retrieve that entry, i.e. if your project is called "ABC", it is in the top-level package "XYZ", and your .xssqlcc file is called myConfig.xssqlcc, then your name search will be for "XYZ.ABC::myConfig".
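That verification query looks roughly like this (using the example name from above):

-- check that activation created the connection entry
SELECT NAME, USERNAME
  FROM "_SYS_XS"."SQL_CONNECTIONS"
 WHERE NAME = 'XYZ.ABC::myConfig';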


Once you have verified the entry is in the table you can see that the field called "USERNAME" defaults to blank. This is where we need to specify our restricted user. Do this by running the command as follows using a standard SQL console on the HANA server:


Figure 4


SQL statement to update the SQL Configuration of your app to run as your restricted user.

Screen Shot 2015-06-30 at 16.11.47.png


In this case my restricted user is called DEMO_ANON.
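In plain SQL, that update boils down to something like the following sketch, using the example names from above (replace them with your own connection name and user):

-- run the anonymous connection under the restricted user
UPDATE "_SYS_XS"."SQL_CONNECTIONS"
   SET USERNAME = 'DEMO_ANON'
 WHERE NAME = 'XYZ.ABC::myConfig';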

 

 

Step 5 - Make your app use the SQL connection for all access attempts

Finally, we now set up our app to use this connection for anybody who attempts to connect to the app. In the .xsaccess file we update our authentication methods to null and set our anonymous_connection to use our XSSQLCC connection.

 

Figure 5


Updated .xsaccess file to use anonymous authentication via our XSSQLCC file.

Screen Shot 2015-06-30 at 16.13.36.png

 

Once all this is complete you should be good to go for anonymous authentication to your XS application. Some of this configuration is also available via SAP-provided configuration apps (such as the XS admin console at /sap/hana/xs/admin on your server); however, this is the workflow that works for me :-)

 

Any questions/comments please feel free to shout

New Development Tools Features in SAP HANA 1.0 SPS 10


With the recent release of HANA SPS 10, it's time once again to take a quick look at the highlights of some of the new features. This blog will focus on the new development tools features in HANA SPS 10. I will say up front that the amount and scope of additions in SPS 10 for the developer topic isn't as large as what we saw in SPS 09. That isn't to say we aren't investing; in fact, we have some really big things in store for the future, and it just so happens that most of our development teams were already working on SPS 11 and beyond. Therefore you will mostly see catch-up features and usability improvements in SPS 10 for the development topic area.

 

SAP HANA Web-based Development Workbench

 

Calculation View Editor

The first area I want to touch on is the calculation view editor. The calculation view editor was first introduced to the SAP HANA Web-based Development Workbench in SPS 09, but it wasn't feature complete.  In SPS 10, we've spent considerable effort rounding out all the missing features. I won't go into details of all the new modeler features here, as that topic is covered separately by other colleagues.  However, I still wanted to point out that you should now be able to create and maintain most any calculation view from the web tooling, making complete end-to-end development in the SAP HANA Web-based Development Workbench a possibility.

 

Auto Save

One of the architectural differences between a local client tool and a web-based one is fundamentally how they react when they get disconnected from the server or encounter some other unforeseen technical issue. In the SAP HANA Studio, a disconnect or crash usually still meant that your coding was safe, since it's first persisted on your local file system. However, IDEs in the web browser need to take other measures to ensure your content isn't lost. With SPS 10, we introduce the option to auto save your editor content in the local browser cache.

autosave.png

This is a configurable option which isn't enabled by default, since some organizations may have security concerns with the fact that the content is stored unencrypted in the browser cache. However, if you enable this option and the browser crashes, you accidentally close the browser tab, or you lose connection with the server, your edited content isn't lost. Instead, the "local version" is visible in the version management tool and can be compared to the active server version or restored over the top of it.

 

AutoSave2.png

 

GitHub Integration

Another major new feature for the SAP HANA Web-based Development Workbench in SPS 10, is GitHub integration. Although you can't replace the local repository with Git or GitHub (yet), this functionality does allow you to commit or fetch entire package structures from the public GitHub repository.

 

GitHub.png

It's easy to use because it's so nicely integrated into the SAP HANA Web-based Development Workbench.  Just choose the parent package from the Content hierarchy and then choose Synchronize with GitHub.  You can then choose the GitHub repository and branch you either want to commit to or fetch from. Personally, I've already used this feature to share a few of the demo/educational projects which we use for the openSAP courses.  You can also do version management from the SAP HANA Web-based Development Workbench between your local versions and the version of the object in GitHub (the GitHub version is the one with the G prefix):

 

GitHub2.png

Quick Fix

Most developers have a love/hate relationship with ESLint and other suggestions and warnings. While we like the idea that these suggestions improve our code, we don't like the little red flags hanging around telling us that we have yet more work to do.  This is where the new quick fix option in the SAP HANA Web-based Development Workbench is so nice. You can select multiple lines in a JavaScript file and choose quick fix. The system will then apply the fixes it thinks are necessary to remove the ESLint markers. For many small, stylistic warnings, this can be a great way to clean up your code in one fast action.

QuickFix.png

 

JSDoc

JSDoc is a standard for formatting comments within JavaScript which can be used to generate documentation. It is how we generate the JavaScript API documentation found on help.sap.com. Now we integrate the generation of JSDoc directly into the SAP HANA Web-based Development Workbench.

It works for XSJS, XSJSLIB, and client side JS files. The JavaScript editor has a new option to help with the generation of JSDoc compliant function comments. There is also an option to generate a JSDoc HTML file for all the files within a package.

 

jsdoc.png

SQLScript Editor and Debugger

There are several enhancements to the SQLScript Editor and Debugger in the SAP HANA Web-based Development Workbench in SPS 10. You can now set breakpoints and debug from the editor without having to switch to the catalog tool. We also get full semantic code completion in the SQLScript Editor. For more details on these enhancements, please have a look at Rich Heilman's SQLScript SPS 10 blog: New SQLScript Features in SAP HANA 1.0 SPS 10

 

Data Preview

The data preview tool in the SAP HANA Web-based Development Workbench has a couple of new usability features.  First, there is the option to allow for the editing or creation of data directly from the data preview. This probably isn't a tool that you would want to give to end users to maintain business data, but for developers and admins it is a great new way to quickly enter test data or correct an emergency problem.

data_preview1.png

 

The data preview also introduces advanced filtering options to put it closer to the content preview features of the SAP HANA Studio.

data_preview2.png

 

SAP HANA Studio

As has been apparent for a few Support Package Stacks, most of our investment has been going into the web tooling and not the SAP HANA Studio.  SPS 10 is no exception, but still we see a few usability improvements in the area of the Repository browser tab.


We wanted to streamline the start up process, so every system connection automatically shows up in the Repository browser.  In order to edit files, you no longer have to create a local repository workspace. In SPS 10 you just start editing and you will be prompted to create the local workspace.

 

We also bring over the folder groupings for systems from the Systems tab.

SYSTEM_folders.png

We've also added new options for filtering, grouping, and searching files from the Repository browser.

RepositoryBrowser.png

New Core Data Services Features in SAP HANA 1.0 SPS 10


Core data services (CDS) is an infrastructure for defining and consuming semantically rich data models in SAP HANA. Using a data definition language (DDL), a query language (QL), and an expression language (EL), CDS is envisioned to encompass write operations, transaction semantics, constraints, and more.

 

A first step toward this ultimate vision for CDS was the introduction of the hdbdd development object in SPS 06. This new development object utilized the data definition language of CDS to define tables and structures. It can therefore be considered an alternative to hdbtable and hdbstructure.

 

In SPS 10 we continue to develop CDS with a focus on expanding the SQL feature coverage and improving complex join operations on views.

 

SQL Functions

 

In SPS 10, CDS is expanded to support almost all of the HANA SQL Functions. This greatly expands the kinds of functionality that you can build into views by formatting, calculating, or otherwise manipulating data with these functions. The following functions are the only ones not yet supported:

  • Fulltext functions
  • Window functions
  • the functions GROUPING, GROUPING_ID, and MAP in the Miscellaneous Functions section

 

Geo Spatial Types and Functions

 

In SPS 09, CDS first offered support for the usage of the Geo Spatial types in entity definitions. In SPS 10 we expand this support for Geo Spatial in CDS with the addition of GIS functions. This example shows how you can use the function ST_DISTANCE to calculate the distance between two geometry values. Specifically in this example we are taking the address of a business partner which is stored in the database and calculating the distance between it and Building 3 on the SAP Walldorf campus.

 

define view BPAddrExt as select from MD.BusinessPartner {
    PARTNERID,
    ADDRESSES.STREET || ', ' || ADDRESSES.CITY as FULLADDRESS,
    round( ADDRESSES.POINT.ST_DISTANCE(
               NEW ST_POINT(8.644072, 49.292910), 'meter')/1000, 1) as distFromWDF03
};
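If you want to get a feel for the function outside of CDS first, a rough check in the SQL console could look like the following; the points are ad-hoc literals in the default planar spatial reference system, so the number is only illustrative and not a true geodesic distance:

-- quick syntax check of ST_DISTANCE on two ad-hoc points
SELECT NEW ST_POINT(8.644072, 49.292910).ST_DISTANCE(
       NEW ST_POINT(8.650000, 49.300000), 'meter') AS dist
  FROM dummy;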

Foreign Keys of Managed Associations in Other Associations

In the past, using a managed association in a "circular" relationship (where the key of an entity is used in the association to another entity, which in turn uses its key back to the parent) would simply have resulted in an activation error. In SPS 10, the compiler now recognizes such relationships. When it sees that the referenced field is actually part of the base entity, and thus can be obtained without following the association, it allows activation and doesn't generate any additional columns in the underlying database tables.

 

The following is a common example of just such a Header/Item relationship:

entity Header {
  key id : Integer;
  toItems : Association[*] to Item on toItems.head.id = id;
};
entity Item {
  key id : Integer;
  head : Association[1] to Header { id };
};

Unlike a normal managed association, no additional column is generated for the association in the underlying database table. So in this case it acts very much like an unmanaged association.

header.png

 

Filter Conditions

Another new feature in SPS 10 is the addition of filter conditions. When following an association, it is now possible to apply a filter condition which is mixed into the ON-condition of the resulting JOIN. This adds more power and flexibility to the views you can build via CDS, while also following the idea of CDS to make the definition more human readable and maintainable than the corresponding pure SQL functionality.

 

In this first example we apply a simple, single filter on LIFECYCLESTATUS to the Business Partner -> Sales Order join.

 

 

view BPOrdersView as select from BusinessPartner {
  PARTNERID,
  orders[LIFECYCLESTATUS='N'].SALESORDERID as orderId
};

The resulting generated view is:

view1.png

Associations with filters are never combined.  Therefore, in order to tell the compiler that there actually is only one association, you have to use the new prefix notation. In this example we want the LIFECYCLESTATUS filter to apply to both the SALESORDERID and GROSSAMOUNT retrieval via the association.

 

view BPOrders2View as select from BusinessPartner {
  PARTNERID,
  orders[LIFECYCLESTATUS='N'].{ SALESORDERID as orderId,
                                GROSSAMOUNT  as grossAmt }
};

The resulting generated view is:

view2.png

But we also see that, by using the prefix notation, such filters can be nested. This example expands on the earlier one. It still filters business partners who only have orders with LIFECYCLESTATUS = 'N', but now also only selects those who have ITEMS with a NETAMOUNT greater than 200.

 

view BPOrders3View as select from BusinessPartner {
  PARTNERID,
  orders[LIFECYCLESTATUS='N'].{ SALESORDERID as orderId,
                                GROSSAMOUNT  as grossAmt,
                                ITEMS[NETAMOUNT>200].{ PRODUCT.PRODUCTID,
                                                       NETAMOUNT }
                              }
};

The resulting generated view is:

view3.png

 

Series

The final new feature in CDS to discuss today is series data. Series data allows the measuring of data over time, where the measurements are commonly equidistant; it allows you to detect and forecast trends in the data. You can read more about the general functionality of series data in SAP HANA here: http://help.sap.com/hana/SAP_HANA_Series_Data_Developer_Guide_en.pdf

 

The major addition from the CDS side is that you can define Series data within CDS entities.  Here is a small example of the use of the series keyword:

entity MySeriesEntity {
  key setId : Integer;
  key t : UTCTimestamp;
  value : Decimal(10,4);
  series (
    series key (setId)
    period for series (t)
    equidistant increment by interval 0.1 second
  )
};


Getting started with XSJS – Challenges, Learnings, Impressions


The following blog entry reflects first experiences made with XSJS in SPS 9. XSJS is the server-side JavaScript that is used to create powerful services in the HANA backend. In the use cases shown below, the focus will be on database communication, service calls via AJAX, and some useful hints for beginners. The service will be consumed by an OpenUI5 application.

For this tutorial I’ll be using the Web-based Development Workbench v1.91.1 of HDB. The payload of the requests will be delivered in the JSON format. You can find a more formal introduction on Software Development on SAP HANA at https://open.sap.com/courses/hana3/

 

First steps

 

Once you have created a database model and inserted some data, with an OData service for instance (see the following links for help on that):

https://www.youtube.com/watch?v=c41anxrDleg (useful introduction to OData create/update/delete requests by Thomas Jung)

http://scn.sap.com/community/developer-center/front-end/blog/2014/11/08/odata-service-sapui5-app-on-sap-hana-cloud (tutorial on how to create an OData UI5 application by Ranjit Rao)

you may want to do something like creating an email out of the modified data, or manipulate the data in some way that OData won't provide for. That's when XSJS becomes useful. Let's say we have a button in our app that triggers an XSJS call which inserts the data provided with the service call into our database. Based on that, it will request some other data to create a mail with data-specific content.

The first thing you will have to do is create a new XSJS file by adding the .xsjs suffix to a new file. This will do the trick so that it's interpreted as a server-side JavaScript file.

 

Calling a service from the UI5 controller


Our model's data will be sent in the JSON format. A local "model.json" file stores all the data, including the specific object we want to send (in this case a repository object which has attributes like a name, Descriptions, LicenseType, and a creation Date). The object can be easily accessed with the model we are using, so all we need to do is create an AJAX request which looks as follows:

pp3.png

The “$” identifier starts the jQuery command. An AJAX call gives us the opportunity to call any service we want with more settings available than you’ll ever need (See the following link for the jQuery.ajax() documentation: http://api.jquery.com/jquery.ajax ).

 

All you’ll need to know for the beginning is that you need the URL of the service which ends with “.xsjs”, the data to be delivered and the contentType being “application/json” to make sure it transmits the data in the right manner. The data is accessed through the JSONModel which links to the “localModel.json”. It’s then stringified with a predefined JSON method. If you need the application to do something after the request has finished successfully, you can add a callback-method “.done(function(odata){ //what shall be done }))” and there is also one for error-handling.

Now that you know how to call the service, let's have a look at what it actually does:

 

Creating the service logic


Since it's basically just JavaScript we are going to write, there's not much to say about any specific syntax. Of course, it makes sense to wrap a lot of our coding into functions that we just invoke afterwards.

The first function will get us the data of the body that we sent with the request and call a HDB procedure which will insert the new repository into the database.

pp1.png

Again, the XSJS $ API gives us some nice features. The documentation of XSJS contains all the useful classes and their methods which you'll probably need. Keep in mind that two different versions of the API exist

(http://help.sap.com/hana/SAP_HANA_XS_JavaScript_API_Reference_en/$.hdb.html

http://help.sap.com/hana/sap_hana_xs_javascript_reference_en/$.db.html ).

As the second API is outdated and lacks some useful classes and methods which the new one ($.hdb) provides, you should probably go for the latest one.

The first line initializes the function just as you know it from JavaScript. After that, the body of our request is taken as a string and parsed to a JSON object via JSON.parse($.request.body.asString()). The next line gets us a connection to the database. After that, a procedure call is created which will insert the new object into the database. The procedure itself is not a part of this blog. Pay attention to the syntax of the schema and procedure description, because it's easy to get confused at the beginning. The question marks at the end are the input parameters, which will be filled with our JSON data.

Unfortunately, it's not possible to hand a complete JSON object to a procedure as a row and get single values as output at the same time with the old API; this might not have been implemented so far. As a workaround, splitting the JSON object and giving the procedure multiple inputs with simple data types did the trick. After the call is executed, it's possible to fetch the output parameters (in this case an integer). Next, the procedure call is closed and the changes are committed on the connection. The connection itself is not closed yet, because there is still some work left for it to do. The getMailData() function selects all the values connected to the repository object by calling prepared select statements, which are also covered in the documentation.

pp2.png

The sendMail() function, which is invoked after the mail data has been collected, has several JSON objects as input parameters and creates a new mail. Fortunately, it is fairly easy to create a mail in XSJS. We just need to create a mail from the template and fill in the settings. An interesting security gap here is the possibility to enter any address for the sender; the received mail will look like it has been created by that person. The neat thing is that the content of the mail is made up of "parts". As we want to create a pretty HTML mail, we'll use the content type "text/html". After that, the mail's first part's text is filled with all the data we want to be shown in the mail. You can also improve the look of the mail by using inline CSS. Last of all, the mail is sent via mail.send(). The result of the mail looks something like the following:

pp4.png

 

Issues and Conclusion

 

XSJS services are easy to use if you know how to code JavaScript, and the functions regarding the DB connection become clear very fast. You just have to keep in mind that for simple use cases OData services might be more efficient, because you don't need to define the service logic for them. If you need to modify data in some way before it reaches the database level, XSJS might be very useful to you, because it gives you all the possibilities of JavaScript to modify JSON objects and arrays and to invoke functions. Furthermore, it lets you send mails and helps you keep as much logic in the backend as possible, so you do not have to worry about API keys or credentials within frontend controllers. Dealing with authentication (which many applications need) is a lot easier with server-side XSJS than within the frontend.

An issue I faced was the lack of an option to include multiple HTML parts in one mail. The mail would not be rendered correctly, and there was no workaround except for creating one big HTML mail part. The procedure which creates the new repository entry had to be modified a lot in order to work correctly: the procedure call in XSJS didn't allow passing a complete row as an input parameter, whereas via OData this was always the case. The documentation is still pretty helpful, even if it is short and needs to grow and include more classes in the future.

How was your first experience with XSJS? Which problems did you face? Feel free to express your thoughts in the comments section!

Using SDI for KF Model to Account Model and Column Generation


Converting a Key Figure Model to an Account Model is a common enough use case and has been the subject of much discussion in the HANA spaces, with people employing various techniques on the Graphical CV and SQLScript side.

 

In SPS09 SAP introduced this newfangled thing called Smart Data Integration that seems to show some promise of making this easier on users, and that's what I'll explore in this blog. Please have a look at this excellent series of blog posts on this new feature.

 

Scenario

 

Those of you who are familiar with the COEJ table model and/or the KF-to-Account-Model concept would probably like to just skim this section. For others, COEJ is the table that stores budget data (CO Object line items) and this is what it looks like*

 

COEJ.png

 

The green fields are the Client, Controlling Area and Document Number and Item. The red fields WKGxxx are the fields that contain the amounts for each period. So in the above example 68.17 is the amount for period 001 in the year 1995, and 221.56 is the amount for period 002 for the same year and so on. This style of modeling is known as the Key Figure Model.

 

This isn't the only way to model such data though - there exists an alternative way called the Account Model. I'll describe it below.

 

Our scenario is to transform and transpose this table, as below. Essentially we'll generate one row for each period. In each row, the green fields (which are not period-specific) will be replicated. The WKGxxx fields (which contain period-specific values) will be transposed into each row of the output.

 

We'll need to tell which amount is for which period, and for this we'll also generate a PERIOD column. In addition we'll concatenate the Year and Period to generate a "FISCPER" field. This field basically contains the Fiscal Year and Period in a YYYYPPP format, exactly as BW does it. This can be useful in some kinds of analysis.

 

This output is what's known as an Account Model. Our objective here is to convert from the KF model into the account model as shown below. To keep the diagram small, I've not shown all fields.

COEJ2.png
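Just for orientation, this is roughly what the same transposition would look like if you hand-wrote it in SQL for the first two periods only (simplified column list; MANDT, KOKRS, BELNR, GJAHR and the WKGxxx amounts are standard COEJ fields):

-- hand-written unpivot of two periods, only to illustrate what the
-- SDI Unpivot node will generate for all periods automatically
SELECT MANDT, KOKRS, BELNR, GJAHR, 1 AS PERIOD, WKG001 AS AMOUNT FROM COEJ
UNION ALL
SELECT MANDT, KOKRS, BELNR, GJAHR, 2 AS PERIOD, WKG002 AS AMOUNT FROM COEJ;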

 

 

Setting up the Flowgraph

 

Here we'll look at how this can be done in SDI. I'm going to assume you already have COEJ in your HANA system, either replicated from an ECC box or copied.

 

First off, create a project in the Developer Perspective and share it as usual. Next, create a Flowgraph model by right-clicking the project --> New --> Other. Select Flowgraph and give a name for your new flowgraph. In the resulting dialog box, make sure to select "Flowgraph for Activation as Task Plan". This is very important, as the Stored Procedure option for some reason doesn't allow the transpose logic. Here I've named my Flowgraph "TRANSPOSATOR", because the Terminator movie is out and this was the first name that came to mind

COEJ3.png



Click Finish and a blank flow graph screen will come up.

 

Now the first step is to add our COEJ table as a source for our flowgraph. Do this by grabbing the table from its schema and dragging it into the flowgraph. The system will ask whether this should be a data source or data sink. We obviously want this to be our data source, so select that.

 

COEJ4.png

 

Unpivot! Unpivot! Unpivot!

 

Hopefully we'll have more luck than Ross Geller here.

 

Look at the Palette on the right side. From the Palette, select the Unpivot transformation from the Data Provisioning folder and drag it into the output. Our COEJ data should act as the input to UNPIVOT, so connect the DATA node of COEJ to the INPUT of UNPIVOT.

COEJ6.png

After doing that, select the UNPIVOT node above. Now we need to set a lot of properties for our UNPIVOT, to tell it what fields are to be transposed. Open the Properties view and go to the General tab. The inputs in the below screen are color-coded like the fields in the Scenario section above for comparison.

COEJ7.png

 

Who moved my FISCPER?

 

Astute readers might have noted that all the fields from the Scenario section have been added to the UNPIVOT properties, except FISCPER. This is because values for the FISCPER need to be generated - however the Unpivot transform allows us to generate only one field, which is the field called PERIOD above.

 

We'll generate FISCPER using another transform called the Filter transform. This was the only transform I could get that would generate an extra field. So grab the Filter transform from the General tab in the Palette. The Filter transform also has an input and output node, just like Unpivot. Now the FISCPER field should be added to the output of the UNPIVOT, so connect the output of UNPIVOT to the INPUT of the FILTER. Then click on the filter step itself and go into its properties. In the properties, go to the Output tab. Add the FISCPER by clicking the Add button and enter the data type and length

COEJ8.png

 

So now we have the FISCPER, but we haven't populated it yet. Recall that the plan was to populate it in the YYYYPPP format, for example 1995001 for the first period of 1995. So basically the logic would be to take the Fiscal Year (field: GJAHR) and concatenate it with the PERIOD field.

 

There's just one problem: that would give us 19951, 19952 etc as PERIOD is an Integer. To handle that, we'll pad the PERIOD with zeroes on the left. The function lpad( PERIOD, 3, '0') will put zeroes on the left side until it reaches a total length of 3, so a number like "1" will be padded with two zeroes on the left to become "001".
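You can sanity-check that expression in a SQL console before wiring it into the flowgraph, for example:

-- verify the year/period concatenation logic on dummy values
SELECT CONCAT('1995', LPAD('1', 3, '0')) AS FISCPER FROM dummy;  -- returns '1995001'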

 

Which is a great story, where do we actually do this stuff? The answer is: in the Mappings tab of the Filter's properties view. In there, you'll find the FISCPER field sitting by itself on the target side rather unhappily, while all the other fields have nice mappings. Let's fix that. Select FISCPER, then click on the "Edit Expression" button.

COEJ9.png

 

In the Expression Editor, enter the following formula: Concat("INPUT_2"."GJAHR",LPAD("INPUT_2"."PERIOD",3,'0')). You'll notice this is just a glorified version of the formula we derived above. Instead of manually typing the field names, you can drag them from the left side as well. In fact I'd recommend doing that if the name of the input node isn't INPUT_2. Click OK and we're done with the filter.

 

In fact we're almost done with the entire scenario. Notice that even though this is a Filter transform, we didn't actually do any filtering. Looks like this node is more like a Projection than it is a Filter.

 

All right, let's finish this off.

 

 

Output

 

We want the output to go into a new table, so let's configure that by dragging a Data Sink (Template Table) into the flowgraph. Obviously, the output of the filter step should go into this node. Now go into the properties of the newly created Data Sink.

 

Not much to do here, just enter a table name in the Catalog Object field. The SDI job will create this table, so it should not already exist in the system. The table will be created in the Authoring Schema, which here is _SYS_BIC. Leave the rest of the fields as-is.

COEJA.png

 

And that's all. Activate the flow graph and it will get created in the system.

 

The Proof of the Pudding

 

So now how do we verify that this is working? Click on the Execute button in the SDI window. It will fire up an SQL console and begin the task for filling the result table.

 

You can also fire a select statement to pull from the results table.
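Something along these lines, assuming you named the template table COEJ_ACCOUNT (the name is hypothetical; use whatever you entered in the Catalog Object field):

-- pull a few transposed rows from the generated result table in the authoring schema
SELECT TOP 100 * FROM "_SYS_BIC"."COEJ_ACCOUNT";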

 

COEJC.png

 

As you can see in the results, we have the amounts in a single column and the PERIOD/FISCPER fields tell us which period the amount is for.

 

 

Caveats

 

One caveat about this example is that it is insert-only. That is to say, if you run the task again, it will generate the result and insert it into the result table. It will do so repeatedly, which means you could end up with duplicate entries in the result table if you keep re-running it. I haven't found a way around that, but if anybody has an idea, please share in the comments section.

 

Also, please do let me know if there are easier/better ways to achieve this using SDI.

 

* I have depicted a much simplified schema of the COEJ table with far fewer fields to illustrate the concept.

A SHORT TOUR AROUND TEXT ANALYSIS


This is a blog on some of the options that are available in text analysis: short descriptions with example code.

 

Text Analysis is the process of analyzing unstructured text, extracting relevant information and then transforming that information into structured information that can be leveraged in different ways.

Full Text Indexing:

When dealing with a small number of documents, it is possible for the full-text search engine to directly scan the contents of the documents with each query, a strategy called "serial scanning." This is what some rudimentary tools, such as grep, do when searching. However, when the number of documents to search is potentially large, the problem of full-text search is often divided into two tasks: indexing and searching. The indexing stage will scan the text of all the documents and build a list of search terms (often called an index). In the search stage, when performing a specific query, only the index is referenced, rather than the text of the original documents. The indexer will make an entry in the index for each term or word found in a document, and possibly note its relative position within the document. Conceptually, full-text indexes support searching on columns in the same way that indexes support searching through books.
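As a small illustration of the search stage: once a full-text index exists on a column, you can query it with the CONTAINS predicate instead of scanning the text yourself (the table DOCS and its columns are made up for this example):

-- uses the full-text index on DOCS.CONTENT rather than a serial scan
SELECT ID, CONTENT
  FROM DOCS
 WHERE CONTAINS(CONTENT, 'keyboard');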


CREATING A FULL TEXT INDEX:

 

CREATE FULLTEXT INDEX "nameofindex" ON <SCHEMA_NAME>."<table>"("<column>")
TEXT ANALYSIS ON
CONFIGURATION {'EXTRACTION_CORE_VOICEOFCUSTOMER' || 'EXTRACTION_CORE' || 'LINGANALYSIS_BASIC' || 'LINGANALYSIS_STEMS' || 'LINGANALYSIS_FULL'};

 



Configurations:


There are five predefined configurations, grouped into three categories.

 

LINGUISTIC ANALYSIS CONFIGURATIONS:


Linguistic analysis

  • Segmentation–the separation of input text into its elements
  • Stemming–the identification of word stems, or dictionary forms
  • Tagging–the labeling of words' parts of speech (POS)

 

  1. LINGANALYSIS_BASIC : Segmentation (tokenization)
  2. LINGANALYSIS_STEMS : BASIC + Stemming (identifying words in dictionary based form --  like Work is the stem of working/worked/works )
  3. LINGANALYSIS_FULL : BASIC + STEM + TAGGING ( Labeling of words POS – Verb, noun,etc).

 

EXTRACTION CONFIGURATIONS:  ( to extract Entities & FACTS )

  1. EXTRACTION_CORE :  Only extracts Entity types (like name , organization , city, language, etc).
  2. EXTRACTION_CORE_VOICEOFCUSTOMER : extracts Entity types and relationship between them (sentiment analysis)

 

CUSTOM CONFIGURATIONS:

  1. Creating a custom configuration from an existing configuration (.hdbtextconfig):

        Under Repositories -> SAP -> HANA -> TA -> Config we can find all 5 config files (linguistic analysis & extraction).

 

   We can:

  1. include / exclude analyzers,
  2. increase / decrease the sample text for analyzing the language,
  3. turn ON / OFF POS tagging, stemming and tokenizing,
  4. enable custom dictionaries (should always be set to TRUE).

Activate the text config file and use the file in the CONFIGURATION part of the index.

 

CREATE FULLTEXT INDEX "nameofindex" On <SCHEMA_NAME>."<table>"("<column>")
TEXT ANALYSIS ON
CONFIGURATION 'sap.hana.ta.config::<CUSTOMCONFIG>'

 


EXAMPLES:


ENTITY EXTRACTION (SENTIMENT ANALYSIS):

DROP TABLE LANGUAGE_DETECT;

Create column table language_detect (ID smallint Primary Key, content nvarchar(50), lang varchar(2));

 

insert into language_detect values(2,'JOHN LOVES TO PLAY FOOTBALL','EN');

 

DROP FULLTEXT INDEX LANG_INDEX;

CREATE FULLTEXT INDEX LANG_INDEX ON language_detect(CONTENT)

LANGUAGE COLUMN LANG

CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER'

TEXT ANALYSIS ON;

 

SELECT TOP 1000 * FROM "$TA_LANG_INDEX";

 

 

LINGUISTIC ANALYSIS :

 

DROP FULLTEXT INDEX LANG_INDEX;

CREATE FULLTEXT INDEX LANG_INDEX ON language_detect(CONTENT)

LANGUAGE COLUMN LANG

CONFIGURATION 'LINGANALYSIS_FULL'

TEXT ANALYSIS ON;

 

SELECT TOP 1000 * FROM "$TA_LANG_INDEX";

 

 

  LANGUAGE DETECTION PARAMETER:


select * from SYS.M_TEXT_ANALYSIS_LANGUAGES -- TO FIND THE LIST OF SUPPORTED LANGUAGES.



Insert the following texts into a table for testing the detection:


German : Ich mag Musik

English: I love Music

Chinese (traditional) : 我愛音樂


 

insert into language_detect (ID, content) values(1,'我愛音樂');

insert into language_detect (ID, content) values(2,'I LOVE MUSIC');

insert into language_detect (ID, content) values(3,'Ich mag Musik');

 

Create the index without specifying a LANGUAGE COLUMN, using the LANGUAGE DETECTION option instead:

 

CREATE FULLTEXT INDEX LANG_INDEX ON language_detect(CONTENT)

TEXT ANALYSIS ON

LANGUAGE DETECTION('ZH','EN') -- Analyse and display the results only for the specified two languages

CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER';

 

SELECT TOP 1000 * FROM "$TA_LANG_INDEX"

 

If no LANGUAGE DETECTION parameter is specified, then only the words which are in English (the default language) are considered.


LANGUAGE COLUMN:


You can speed up the analysis by skipping language detection and instead storing the language code explicitly in a separate column.

 

 

DROP TABLE LANGUAGE_DETECT;

 

Create column table language_detect (ID smallint Primary Key, content nvarchar(50), lang varchar(2));

 

insert into language_detect values(2,'I LOVE MUSIC','EN');

 

insert into language_detect values(3,'Ich mag Musik','DE');

 

DROP FULLTEXT INDEX LANG_INDEX;

 

CREATE FULLTEXT INDEX LANG_INDEX ON language_detect(CONTENT)

LANGUAGE COLUMN LANG -- Specified the language column as LANG (to fetch the language code from this column)

CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER'

TEXT ANALYSIS ON;


SELECT TOP 1000 * FROM "$TA_LANG_INDEX";


MIME TYPES:

Used to specify the MIME type (the format of the content) to improve performance.


Supported mime types for text analysis:

 

SELECT * FROM SYS.M_TEXT_ANALYSIS_MIME_TYPES


 

DROP TABLE mime_types;

Create column table mime_types (ID smallint Primary Key, content BLOB);

--NOTE: BLOB will take up more memory and hence no point in using BLOB to just load plain text files.

 

--Insert the data through any program (Java/Python,etc)

 

 

DROP FULLTEXT INDEX MIME_INDEX;

CREATE FULLTEXT INDEX MIME_INDEX ON mime_types(CONTENT)

CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER'

TEXT ANALYSIS ON

MIME TYPE 'application/pdf'; -- Specify the type of file to be loaded to the BLOB column

 

SELECT TOP 1000 * FROM "$TA_MIME_INDEX";


 

TOKEN SEPARATORS:


Specify only the separators that are needed to split the words in a set of text.

 

DROP TABLE TOKEN_SEPARATOR;

Create column table TOKEN_SEPARATOR (ID smallint Primary Key, content nvarchar(200));

 

DROP FULLTEXT INDEX TOKEN_INDEX;

CREATE FULLTEXT INDEX TOKEN_INDEX ON TOKEN_SEPARATOR(CONTENT)

TEXT ANALYSIS ON

TOKEN SEPARATORS '/\:;"''[]_';

 

insert into TOKEN_SEPARATOR VALUES (1,'fellow-student writes semi-final'); -- Treats '-' as a part of the string and doesn't tokenize.

 

SELECT TOP 1000 * FROM "$TA_TOKEN_INDEX";


 

SYNCHRONIZATION:

 

       We can set the time/document count for synchronizing the index table with the contents.


DROP TABLE SYNC;

Create column table SYNC (ID smallint Primary Key, content nvarchar(200));

 

DROP FULLTEXT INDEX SYNC_INDEX;

CREATE FULLTEXT INDEX SYNC_INDEX ON SYNC(CONTENT)

TEXT ANALYSIS ON

ASYNC FLUSH EVERY 1 MINUTES; --SYNCHRONIZES ONCE EVERY MINUTE (Only once every minute, index table entries are refreshed)

 

DROP FULLTEXT INDEX SYNC_INDEX;

CREATE FULLTEXT INDEX SYNC_INDEX ON SYNC(CONTENT)

TEXT ANALYSIS ON

ASYNC FLUSH AFTER 5 DOCUMENTS; -- SYNCHRONIZES ONCE AFTER EVERY 5 RECORDS
-- (index table entries are refreshed once for every 5 records inserted into the content table)

 

ALTER FULLTEXT INDEX SYNC_INDEX SUSPEND QUEUE; -- SUSPEND THE ANALYSIS (Index table entries are not refreshed)

 

ALTER FULLTEXT INDEX SYNC_INDEX ACTIVATE QUEUE; -- ACTIVATE THE ANALYSIS (Index table entries are refreshed)

 

CUSTOM DICTIONARY:

 

 

Step 1: Create the list of words in a file in the project explorer (.hdbtextdict)
Step 2: Add the file to any of the ".hdbtextconfig" files.
Step2: Add the file to any of the “.hdbtextconfig” file.

 

.hdbtextdict:

       This file should be in XML format.

 

<?xml version="1.0" encoding="UTF-8"?>
<dictionary xmlns="http://sap.com/ta/4.0">
       <entity_category name="IPL CRICKET TEAM">
              <entity_name standard_form="Chennai Super Kings">
                     <variant name="CSK" type="ABBREV" />
                     <variant name="C.S.K" type="ABBREV" />
                     <variant name="super kings" />
                     <variant_generation type="standard" language="english" />
              </entity_name>
       </entity_category>
</dictionary>

 

.hdbtextconfig :

 

Create a custom configuration file (.hdbtextconfig) having any configuration as the base.

 

We will find the following at the bottom of any configuration file:

 

-- create a .hdbtextconfig file ("iplteamconfig.hdbtextconfig") by editing the following in the existing configuration files.

 

<property name="Dictionaries" type="string-list">
       <string-list-value>sap.hana.ta.config::iplteam.hdbtextdict</string-list-value>
</property>

 

Use this in the Configuration while defining the INDEX creation:

 

--Use this in the Configuration while defining the INDEX creation:

DROP FULLTEXT INDEX DICT_INDEX;

CREATE FULLTEXT INDEX DICT_INDEX ON DICT(CONTENT)

CONFIGURATION 'sap.hana.ta.config::iplteam' -- the custom configuration file name
TEXT ANALYSIS ON;


REQUEST EXTRACTION:

 

 

 

General requests: general requests by customers for enhancements, improvements, etc.

Contact requests: contact details given by the customer, like "contact me at …" or "call me at …".

 

Create column table prof_test (ID smallint Primary Key, content nvarchar(300));

INSERT INTO prof_test VALUES (2,' CALL ME AT 8884484855 ');

INSERT INTO prof_test VALUES (10,' An additional key would be good on this keyboard');

INSERT INTO prof_test VALUES (3,'I want your customer care to contact me as soon as possible');

 

 

DROP FULLTEXT INDEX prof_index;

CREATE FULLTEXT INDEX prof_index ON prof_test(CONTENT)

CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER'

TEXT ANALYSIS ON;

 

SELECT * FROM "$TA_PROF_INDEX";

 

 



EMOTICONS EXTRACTION:


 

DROP TABLE prof_test;

Create column table prof_test (ID smallint Primary Key, content nvarchar(300));

INSERT INTO prof_test VALUES (2,' I CLEARED MY CERTIFICATION :-D ');

INSERT INTO prof_test VALUES (10,' THE PARTY WAS GOOD :)');

INSERT INTO prof_test VALUES (3,'I HATE TRAFFIC :(((');

 

 

DROP FULLTEXT INDEX prof_index;

CREATE FULLTEXT INDEX prof_index ON prof_test(CONTENT)

CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER'

TEXT ANALYSIS ON;

 

SELECT * FROM "$TA_PROF_INDEX";

 

Exposing HANA Calc Views via OData to Fiori Tiles


Having been through the process of exposing a HANA calculation view as an OData service for a Fiori tile to consume, I noticed there are many examples that show some pieces of this process, but I haven't seen it documented anywhere fully, so I have attempted to capture it below.

 

1. Create a Calc View to return a single row as is required by the Fiori tiles.

2. Add additional parameters supported by the Fiori Dynamic Tiles

3. Surface Calc View as an OData service

4. Create Fiori Catalogue for the Fiori dynamic tile to reside within

5. Add dynamic tile to Fiori Launchpad

 

1. Calculation View

The creation of the Calc View is the piece that will be most familiar. It needs to return a single row to the Fiori tile, so you may need to perform some aggregation and filtering so that you return the required information.

 

In this example the Calc View also contains the other Fiori Dynamic Tile Parameters although only the "number" output is mandatory, all others are optional.

CV-Fiori.png

2. Fiori Dynamic Parameters

The required OData structure and parameters are well documented in this HCP link.  SAP HANA Cloud Portal Documentation

 

 

3. OData Services


3.1 HANA XS Project

The OData service resides within an XS Project, so you should create that first, ensuring that you have the .xsapp and .xsaccess files; with later revisions of HANA these should be created automatically when you create the XS Project.

Follow the excellent step-by-step instructions here for creating XS apps, from step 3 through to 6:

SAP HANA Cloud Platform

 

3.2 HANA OData Service

The OData service is very simple in this example; Werner wrote a great blog on building HANA OData services:

REST your Models on SAP HANA XS

 

The OData service below accesses the FIORI_LATEST_VALUE calculation view and exposes it as LatestValue.

service {
  "TimeSeries::FIORI_LATEST_VALUE" as "LatestValue"
  keys generate local "ID"
  aggregates always;
}

You can check the OData service easily by launching it from HANA Studio with Run As -> XS Service.

Screen Shot 2015-07-28 at 21.38.10.png

 

This will launch the XML definition of the OData (xsodata) service that has been created.

Screen Shot 2015-07-28 at 21.42.40.png

To check the metadata within the OData Service look at the URL in this format

http://ukhana.mo.sap.corp:8001/Ian-Fiori/LatestValue.xsodata/$metadata

 

Screen Shot 2015-07-29 at 15.47.32.png

 

Once the service is created, the important thing is that it needs to return data in JSON format, as this is what Fiori expects; the $format=json parameter does this nicely, as below. I'm also using a Chrome extension to format the JSON response.

 

http://ukhana.mo.sap.corp:8001/Ian-Fiori/LatestValue.xsodata/LatestValue/?$select=subtitle,number&$format=json

 

Screen Shot 2015-07-29 at 15.51.25.png

 

Creating a new Fiori Catalogue in HANA Studio for your custom tiles to reside within

Screen Shot 2015-07-29 at 15.54.29.png

 

 

Screen Shot 2015-07-28 at 17.15.31.png

 

Using the tile templates makes it easy to create the dynamic tiles

 

Screen Shot 2015-07-28 at 17.18.08.png

 

Paste the OData URL from above into the Service URL field, ensuring you have the $select=number&$format=json parameters.

Screen Shot 2015-07-29 at 16.02.59.png

 

The dynamic tile can then be added to the appropriate Fiori Launchpad through the standard Fiori interface.

Screen Shot 2015-07-29 at 16.06.27.png

 

 

There seems to be an issue with "Error Failure - Unable to load groups" with HANA Revision 101 and accessing some Fiori Launchpad links.

Screen Shot 2015-07-29 at 16.07.55.png

This link errors for me

http://ukhana.mo.sap.corp:8001/sap/hana/uis/clients/ushell-app/shells/fiori/FioriLaunchpad.html

 

Whereas this link works fine:

http://ukhana.mo.sap.corp:8001/sap/hana/uis/clients/ushell-app/shells/fiori/FioriLaunchpad.html?siteId=sap|hana|admin|cockpit|app|cockpit

 

I hope this helps you if you are creating similar Fiori tiles.

Stay up to date with Software Development on SAP HANA SPS 09


Earlier this year, openSAP hosted Software Development on SAP HANA (Delta SPS 09), a follow up course to the popular courses, Introduction to Software Development on SAP HANA and Next Steps in Software Development on SAP HANA. Over 100,000 people have learned to develop native applications on SAP HANA through openSAP since 2013. With SAP HANA SPS 09, released in November 2014, many development tools and languages used when performing SAP HANA development have been added and extended. Due to popular demand, we’re happy to announce that we are reopening Software Development on SAP HANA (Delta SPS 09), starting September 15.

 

The purpose of this course is to enable developers to get up to date with the new development features on SPS 09 without disruption. New development features include:

  • New XSJS Database Interface
  • New Core XSJS APIs
openSAP_hana3_Web.jpg
  • New XSODATA Features
  • SQLScript
  • XS Admin Tools
  • SAP HANA Test Tools
  • Core Data Services
  • XSDS (XS Data Services)
  • SAP HANA REST API
  • SAP River
  • SAP HANA Web-based Development Workbench
  • SAP HANA Studio

 

The course will focus on the new and improved features that are available with SPS 09. It is recommended that participants have already taken the original Software Development on SAP HANA courses prior to taking part in this course. The course will be presented by Thomas Jung and Rich Heilman once again. If you haven't previously completed the Software Development on SAP HANA courses but would like to get started, it's not too late to sign up and take the courses in self-paced mode. Sign up today for free!

Introduction to Software Development on SAP HANA

Next Steps in Software Development on SAP HANA

 

Registration is now open for Software Development on SAP HANA (Delta SPS 09), starting September 15 and running for three weeks.

 

Other courses now open for enrollment on openSAP

Experience SAP Cloud for Customer

Sustainability and Business Innovation (Repeat)
