I was working with sample data that was geographically distributed. When I tried to create a geographic hierarchy, some places were not found and some only had a close match.
I suggest that Lumira should let us pick an area from the map or enter the details manually. Visualizations with "not found" data will not be complete. I posted my idea on ideas.sap.com here.
Please find below a geographic visualization where I had data related to Europe and the West Indies, but Lumira didn't find a match for these places.
So another week has gone by, and this week the course was set up quite differently from the first. Where the first week was very theory oriented, explaining all the architectures, this week had a large hands-on part. I spent two nights doing all the practical exercises.
Basically in the first week you get the explanation of where all the tools in the shed are and what they are for.
I'm more the kind of guy that just stumbles into the shed, doesn't even turn the light on, and starts rummaging and trying until I get a grasp of things. And what happened to Richie? Oh well, I guess he will do other parts.
(Very) roughly, you could say the course this week was divided into two parts. The first was working with the .hdbdd file, where you insert all the definitions of your database artifacts: you define your types and structures, and using those types you can create tables and views.
In your .hdbdd file you use CDS syntax, something that you can also use in HANA/River and ABAP. That's nice, as the things I learn now are also usable in a wider area. The thing is, though, that as an acronym .hdbdd leaves something to be desired. I was impressed that Thomas was able to say it without stumbling over 95% of the time. Behind my desk at home I tried it, but my percentage was way lower.
As someone with a lot of BW experience, I was thinking about those types and structures and how you might be able to use them as a sort of limited InfoObjects. First, though, I would like the export/import functionality for tables to be implemented.
Maybe I should explain that some more. When you update types or structures, all tables that use that type/structure will also be updated. This can lead to activation errors if the table holds data. Thomas said that automatic export/import functionality is being worked on, so that when you activate your table the data is exported, the update takes place, and then the data gets imported again.
That sounds really handy, although you might want to be careful when changing your types and structures: a much-used type (business key, anyone?) that resides in large tables could trigger an avalanche of runtime actions.
In general though I like CDS, but I would like it to be SQL complete.
For the models, we had an impressive example with lots of calculation views (graphical and scripted), a decision table and analytic/attribute views. In the scripted calculation view we used the procedure that was generated when the decision table was created.
I was wondering, though, whether you could insert overlapping conditions in the decision table and whether you could generate multiple outcomes for one record. Does anybody know this, or should I just try it out? I suppose it is the latter.
In my notes I started this segment bravely by stating I would type everything and not cut & paste things, but due to the size of the exercises I had to tone that down a bit.
Additionally I had to revisit earlier parts I skipped as I had to create some roles and authorize the system user so the record create & test app could run.
All in all a good week, a lot more time was spent actually working in the HANA system.
Next week we will go into more advanced SQLScript. Let's see how that continues from last year's course.
I have written HTML/JavaScript code that returns LinkedIn profile details in JSON format. Now I want to add that data to my table. Should I convert it into XSJS? If so, can you guide me through the steps? Should I write an xshttpdest? Sample code would be really helpful.
My Code:
<html><head><title>LinkedIn JavaScript API Hello World</title><!-- 1. Include the LinkedIn JavaScript API and define a onLoad callback function --><script type="text/javascript" src="https://platform.linkedin.com/in.js">/* api_key: 75eq444tllkike onLoad: onLinkedInLoad authorize: true credentials_cookie: true credentials_cookie_crc: true
*/</script><script type="text/javascript">
function onLinkedInLoad() { IN.Event.on(IN, "auth", onLinkedInAuth);
}
function onLinkedInLogout() { setConnections({}, {total:0});
}
function onLinkedInAuth() { IN.API.Connections("me") .fields("firstName", "lastName", "industry","summary","headline","specialties","positions","interests","educations") .result(displayConnections) .error(displayConnectionsErrors);
}
function displayConnections(connections) { var connectionsDiv = document.getElementById("connections"); var members = connections.values; // The list of members you are connected to /*for (var member in members) { connectionsDiv.innerHTML += "<p>" + members[member].firstName + " " + members[member].lastName+ " works in the " + members[member].industry + " industry";}*/ document.write(JSON.stringify(members));
}
function displayConnectionsErrors(error) { /* do nothing */ }</script></head><body><!-- 3. Displays a button to let the viewer authenticate --><script type="IN/Login"></script><!-- 4. Placeholder for the greeting --><div id="profiles"></div><div id="connections"></div></body></html>
There are times when a particular procedure, say X, needs to be executed only as part of a workflow, or else it may cause unwanted behavior. This is particularly true when the procedure deals with DDLs and DMLs, as the workflow might involve a chain of checkpoints with the output of one checkpoint feeding into the next checkpoint as input; the schema structure is at stake. Such procedures need to have restricted access.

A typical application will have a roles and privileges framework which restricts or grants certain users the ability to create, drop or execute DB artifacts (procedures, tables, views, etc.). So, for the users who do have the privilege to create/modify DB artifacts, it is always possible to create multiple wrappers around the procedure, X in our case, and execute the wrappers, or, in another case, to execute X directly, outside the workflow.

For DB artifacts, I therefore prefer to think of two groups of users in our case: one is the server, the other the client. While the server has the privileges to create/modify/delete/execute the procedures, the client can only execute the DB artifacts. Restricting the client therefore becomes simple: we just give the client(s) EXECUTE access, and only EXECUTE access, on the schema containing X.

But this by itself is not enough. Let's say the relevant user in question is UserA and the schema is called RUNTIME_SCHEMA. UserA might still log on to HANA studio and run the procedure from the backend, outside the workflow, as he has the EXECUTE privilege mentioned above. UserA cannot make any DDL/DML changes to any of the DB artifacts in RUNTIME_SCHEMA unless he does so from within SQLScript calls to procedures in RUNTIME_SCHEMA. UserA has all the rights on his own schema, UserA: he can create procedures in his schema that fire DDLs/DMLs against the artifacts in RUNTIME_SCHEMA, but those procedures won't execute, because the user does not have the appropriate DML privileges outside the scope of the EXECUTE privilege on RUNTIME_SCHEMA. Therefore the user is completely locked out from making any changes to the artifact X. So we are good with UserA, and the same holds for the other client(s).

If we talk about the server user, or the user group that created the procedure X in the first place, he is the one user (or group) who can also modify the procedure. Let's assume he (or the group) modifies it for legitimate reasons as part of the workflow. But this user group can also control the possible point(s) of entry into procedure X, if it wants to, as part of the workflow strategy. It is even possible to limit the points of entry to none if the procedure is supposed to be a "non-wrappable" procedure (i.e. the procedure can't have a parent), so to speak.
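To make the setup above a bit more tangible, here is a minimal sketch of the grant that the server-side owner of RUNTIME_SCHEMA would issue for UserA; the exact statements in a real system would of course depend on your roles and privileges framework:

-- issued by the server-side user that owns RUNTIME_SCHEMA (sketch)
-- the client may execute the procedures in the schema, but gets no DDL/DML privileges
GRANT EXECUTE ON SCHEMA "RUNTIME_SCHEMA" TO USERA;
-- no CREATE ANY, ALTER, DROP, INSERT, UPDATE or DELETE is granted on RUNTIME_SCHEMA,
-- so any direct DDL/DML issued by UserA against its artifacts will fail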
Use Case:
1. To ensure that the called procedure is only executed when called from a specific caller or caller group and not from other procedures, typically when you want a single point (or limited points) of entry into the called procedure. This usually happens when creating a service model that implements design patterns at DB level, particularly factory methods, where knowing the caller of the procedure can be handy. Furthermore, the called procedure might behave like a decision table, traversing different paths based on the specific caller from the caller group. One might say the called procedure can behave as a router.
Example Pseudocode
create procedure schema_name.Inner as
begin
  // find the caller
  ...
  if caller is 'Outer1' then
    execute model1
  elseif caller is 'Outer2' then
    execute model2
  else
    // callers not authorized to run this procedure: log a 'No authorization' message and return
  end if;
end;
2. Another use case that comes to mind is logging the caller procedure. One might have multiple points of entry into the called procedure in the same workflow at different points in time. Logging the name of the caller along with other metadata, such as the checkpoint or the timestamp, can help in troubleshooting, which would otherwise require scrutinizing all the possible callers one by one and can be time consuming. The alternative of manually passing the procedure name as a parameter to the called procedure should, in my opinion, not be the way to go, as it requires a lot of code updates the moment the procedure name changes.
Premise
The premise this works on is that recursion is not supported in HANA and that, at a particular point in time, within a DB transaction belonging to a workflow with a single connection ID, a particular procedure can be called from only one other procedure (otherwise it would become a cyclic dependency). This stays true until the called procedure has finished executing, after which another procedure may call the same procedure.
Implementation
create procedure INNER(in inp nvarchar(32),out status nvarchar(5000)) as
lv_var nvarchar(5000):='{ call "MYSCHEMA"."INNER" (?, ?) }';
lv_statement nvarchar(256):='';
begin
select statement_string into lv_statement from "SYS"."M_PREPARED_STATEMENTS" where statement_status='ACTIVE' and connection_id=current_connection and statement_id= (select parent_statement_id from "SYS"."M_PREPARED_STATEMENTS" where statement_status='ACTIVE' and cast(statement_string as nvarchar(5000))=:lv_var);
if :lv_statement = 'call MYSCHEMA.OUTER' then
    status := :lv_statement||' Success::Procedure called:'||ucase(substr_after(:lv_statement,'call '));
else
    status := :lv_statement||' Failure::Procedure called:'||ucase(substr_after(:lv_statement,'call '));
end if;
end;
create procedure OUTER as
stat nvarchar(5000):='Hello';
begin
call MYSCHEMA.INNER('TEST',:stat);
select :stat from dummy;
end;
call MYSCHEMA.OUTER;
In the above sample code, the called procedure 'INNER' checks if the call is coming from the procedure 'OUTER'.
Sample Use Case:
Here, I have created a small utility that re-creates a DB artifact with the latest definition in the middle of a workflow, as a small part of that workflow. Creating a DB artifact from within a procedure involves checking whether it already exists, dropping it if it does, and then creating it again with the latest definition. I want the re-creation of the DB artifact to run only as part of this workflow and not outside it. So I create a verification utility called myschema.check_caller, which takes the called procedure as the first parameter and the expected caller procedure as the second parameter; if the verification succeeds, the workflow is executed, otherwise it returns.
--check caller
drop procedure MYSCHEMA.check_caller;
create procedure MYSCHEMA.check_caller(in source nvarchar(5000),in target nvarchar(256)
,out status nvarchar(5000)
) as
i int:=0;
j int:=0;
lv_statement nvarchar(256):='';
begin
select statement_string
into lv_statement
from "SYS"."M_PREPARED_STATEMENTS" where statement_status='ACTIVE' and connection_id=current_connection and statement_id= (select parent_statement_id from "SYS"."M_PREPARED_STATEMENTS" where statement_status='ACTIVE' and cast(statement_string as nvarchar(5000))=:source);
if ucase(substr_before(substr_after(:lv_statement,'call '),'(')) = ucase(:target) then
    status := :lv_statement||' Success::Procedure called:'||ucase(substr_after(:lv_statement,'call '));
else
    status := :lv_statement||' Failure::Procedure called:'||ucase(substr_after(:lv_statement,'call '));
end if;
end;
--check if DB artifact exists
drop procedure MYSCHEMA.is_exist;
create procedure MYSCHEMA.is_exist(in artifact_schema nvarchar(32),in artifact_name nvarchar(32),in artifact_Type nvarchar(32)
,out is_exist int
) as
begin
if :artifact_type='USER' then
select count(*) into is_exist from "SYS"."USERS" where user_name=:artifact_name;
elseif :artifact_type='ROLE' then
select count(*) into is_exist from "SYS"."ROLES" where role_name=:artifact_name;
else select count(*) into is_exist from "SYS"."OBJECTS" where schema_name=:artifact_schema and object_name=:artifact_name and object_type = :artifact_Type;
end if;
end;
--drop DB artifact
drop procedure MYSCHEMA.drop_artifact;
create procedure MYSCHEMA.drop_artifact(in artifact_schema nvarchar(32),in artifact_name nvarchar(32),in artifact_Type nvarchar(32)) as begin
exec 'drop '||:artifact_type||' '||:artifact_schema||'.'||:artifact_name;
end;
--create DB artifact
drop procedure MYSCHEMA.create_artifact;
create procedure MYSCHEMA.create_artifact(
in artifact_schema nvarchar(32),
in artifact_name nvarchar(32),
in artifact_Type nvarchar(32),--TABLE/PROCEDURE/VIEW
in artifact_definition nvarchar(5000),--for procedures/tables/views
in persistence nvarchar(1), --GLOBAL TEMPORARY/'' for tables only
in original_artifact nvarchar(256))-- for CREATE-LIKE VARIANT of Table
as
result nvarchar(5000);
begin
call check_caller('{ call "MYSCHEMA"."CREATE_ARTIFACT" (?, ?, ?, ?, ?, ?) }','MYSCHEMA.RECREATE_ARTIFACT'
,result
);
result := locate(:result,'Success');
if :result != 0 then
--rest of the code to create the DB artifact
else
--return without execution
end if;
end;
--wrapper: Master Procedure
drop procedure myschema.recreate_artifact;
create procedure myschema.recreate_artifact(
in artifact_schema nvarchar(32),
in artifact_name nvarchar(32),
in artifact_Type nvarchar(32),--TABLE/PROCEDURE/VIEW
in artifact_definition nvarchar(5000),--for procedures/tables/views
in persistence nvarchar(1), --GLOBAL TEMPORARY/'' for tables only
in original_artifact nvarchar(256)
) as
result nvarchar(5000);
begin
  call MYSCHEMA.is_exist(:artifact_schema,:artifact_name,:artifact_Type,result);
  if :result != 0 then
    call MYSCHEMA.drop_artifact(:artifact_schema,:artifact_name,:artifact_Type);
  end if;
  call MYSCHEMA.create_artifact('MYSCHEMA','TEST_TAB','TABLE','','','');--can only be executed from recreate_artifact
end;
call MYSCHEMA.RECREATE_ARTIFACT('MYSCHEMA','TEST_TAB','TABLE','','','');
Request your thoughts on the use of such a utility.
With the advent of HANA SPS 08 circa Q2 2014, SAP finally released the HANA data provider for Microsoft ADO.NET. Admittedly, this is not trouser-stirring news, but I am pleased to know that there is now an effective way for the .NET Framework to communicate efficiently with SAP HANA. In the past you could still do that, but it was through the ODBC driver; there is nothing wrong with that if all you want to do is connect to SAP HANA and perform some simple SQL statements, but we all know SAP HANA is more than just an in-memory database.
This article will tease out some of the key concepts and features introduced by SPS 08 in the client interfaces for ADO.NET. Data access and manipulation will be the theme here, using C# and SAP's new .NET API, Sap.Data.Hana. If you are wondering what ADO.NET is, it is Microsoft's data access technology for the .NET Framework, used with programming languages such as C# or VB.NET.
You might ask why it is important for SAP to release this new API when ODBC seems sufficient. Given that the standard ODBC connection to SAP HANA is still a valid way to bridge the connection to the database, or any database for that matter, we need to understand the core purpose of an ODBC driver. Microsoft's primary goal when it released the initial ODBC driver was to provide a vendor-neutral way of accessing stored data on any machine. As you might have guessed, the host of ODBC drivers that Microsoft currently supports spans databases such as Oracle, Informix, Teradata and SAP HANA, just to name a few. Building an ODBC driver to be as generic as possible can come at a cost: lost performance, unleveraged features, and missing functionality and best practices specific to a given database.
Although there is no official information from SAP detailing why a developer should use the new API, one can only infer that SAP has put in a lot of hard work to ensure that your .NET code communicates efficiently with the HANA database. For example, if your team were to deliver a .NET solution specifically on Microsoft SQL Server, they would certainly use the System.Data.SqlClient.SqlConnection object; on Oracle they would most likely end up using the ODP.NET driver; and with SAP HANA it would be this new API. Although the .NET Framework is able to connect to SAP HANA through the ODBC driver, SAP reckons that your application will take full advantage of all the underlying work built specially for the .NET Framework to play nicely with SAP HANA. There will be cases where you find yourself with no vendor-delivered driver, and then the ODBC driver is your only option to exchange information with the database from the .NET Framework, e.g. when developing applications on Hadoop Hive.
Getting Started
To start using the new ADO.NET API delivered in SPS 08, you need to have the latest SAP HANA Client Developer Edition, version 1.00.80, installed on your machine; alternatively, check the default installation path under C:\Program Files\sap\hdbclient\ado.net. If you have it installed, you should see two folders called v3.5 and v4.5, and inside v4.5 you should find Sap.Data.Hana.v4.5.dll.
Building an enterprise .NET application on SAP HANA will undoubtedly require you to perform one of the four basic actions of creating, reading, updating or deleting data, and in this section of the article the aim is to demonstrate the basic database calls to interact with and manipulate the data that resides in SAP HANA.
Under the .NET Solution Explorer, expand the References node and ensure that Sap.Data.Hana.v4.5 has been included in your .NET solution; if not, you can add a new reference under the Extensions path. Once you have that, standard C# code follows, starting with the namespace: using Sap.Data.Hana;
The bare minimum required to establish a connection is a connection string that includes the server address, user name and password. Other parameters such as connection lifetime, database, pooling, etc. are optional but worth exploring if you want to maintain a healthy level of database activity.
Once a database connection has been established, it is straightforward to acquire the data that you require from any table or schema. In this example, I have issued a simple SELECT statement with the intention of publishing the data to a .NET DataGridView. The class demonstrated here is HanaDataReader, used to return a read-only result set and output it to a Windows form.
Inserting records into the database is as easy as querying for them, but not without additional effort to format and prepare the data. The example provided here stores the information from five text fields using standard SQL INSERT statements executed from C#.
Upon loading the application, it connects to SAP HANA, loads 150 rows of customer records from the database and the intention here is to create a new piece of information and write it back to the table.
When the Save New button is clicked, the .NET Framework runs through this simple piece of code to perform two sets of activities. Firstly, it performs a SQL INSERT statement to store all the values captured from the text boxes by running the ExecuteNonQuery() method; secondly, it rereads the information by running the ExecuteReader() method, which repopulates the DataGridView with the new record.
The only information that the ExecuteNonQuery() method returns is the number of rows affected by the specific action and this method is useful for running SQL statements such as INSERT, UPDATE or DELETE.
My guess is that stored procedures have always been off limits for SAP applications sitting on the NetWeaver stack because of the many database vendors out there; the effort to cater for each vendor's way of creating and calling a procedure would add up to an unnecessary amount of overhead. What SAP has given us instead is the lovely function module builder that we have all come to know and love over the years. Like any high-performing, respectable RDBMS on the market, SAP HANA includes stored procedures, and using them with .NET is just as simple as the rest.
In this next example, the goal is to change the record for customer ID 150. The customer name Samuel Johnson has been misspelled and needs to be corrected. Upon clicking the Update Changes button, the .NET Framework calls a stored procedure in SAP HANA, passing in the customer ID, and the record is updated using a standard SQL UPDATE command embedded within the stored procedure. A stored procedure can contain logic as simple as accepting fields and processing them, or as complex as performing specific validations and returning an error if the conditions are not met.
CREATE PROCEDURE prcUpdateCustomers(IN ID INT, IN CUSTOMER NVARCHAR(60)) AS
BEGIN
UPDATE customers SET customer = :CUSTOMER WHERE ID = :ID;
END;
The only change needed to execute a stored procedure is to set the HANA command type to StoredProcedure and pass in the actual name of the stored procedure.
In brief, and without including needless detail on the vast set of .NET sample code which SAP has documented for the wider community, this article is just an example of some of the basic database communication that you can work with using Sap.Data.Hana. With the new SPS 08 client interface, customers who rely heavily on the .NET Framework to address business problems now have full SAP support when working with SAP HANA and the ability to fully utilise all features of the product.
This week we took a deeper dive into actual coding and advanced language features. First we revisited the architecture; the point was reiterated that data-intensive logic is moved to the database. This week's episodes were presented by Rich Heilman.
But this week is all about coding, so we immediately move into the .hdbprocedure files and the SQLScript editor. Luckily the amount of help in the editor has increased: syntax highlighting, code hints and syntax checking are now all there.
Rich made the point that type definitions have now moved to CDS, whereas before you would declare them here. Additionally, you can now define where the runtime artifact will live.
Within the procedure we get to hear about declarative logic and imperative logic. The first is the workhorse where the parallelization takes place; the second is more for looping, decisions and data manipulation. Then arrays are mentioned and how you use them: with UNNEST you can take the values and move them into tables. A bit confusing is that apparently the first index number is 1. As JavaScript starts at 0, you might have to switch back and forth in your thinking as you develop applications. Finally, a look at dynamic filtering, a nice little way to add filters dynamically.
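As a small illustration of the 1-based indexing and UNNEST (the procedure and column names below are made up for the example, not taken from the course exercises):

create procedure demo_unnest (out result table (id integer, label nvarchar(20)))
language sqlscript reads sql data as
begin
  declare ids integer array;
  declare labels nvarchar(20) array;
  ids[1] := 10;                -- SQLScript arrays start at index 1, not 0
  labels[1] := 'first';
  ids[2] := 20;
  labels[2] := 'second';
  -- UNNEST turns the arrays into a table with one row per index
  result = unnest(:ids, :labels) as ("ID", "LABEL");
end;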
Triggers are usable. In a current project we just had a discussion about how we want to log who changed what; these triggers may be the answer to that. You can set up triggers to perform actions on table updates. Additionally, you can respond to events, but this is called exception handling: you can perform some actions or re-signal to the user that he has done something our beloved procedure doesn't like. How dare he ruin our application!
By now I start noticing that we are looking at a lot of live demos of procedures that do what Rich just said.
Further into the scripting we get to user-defined functions. These functions come in two flavors: table or scalar. A table function can take a lot of input parameters and will return you a… table.
Scalar functions, by contrast, can return multiple scalar values. The only thing that still puzzles me a bit is when to use an hdbprocedure and when to use user-defined functions; I can define inputs and outputs in the former, so I am probably still missing something here. If anyone can enlighten me, please do. Otherwise I'll just post a question on the forum.
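To make the two flavors concrete, here is a minimal sketch (the table and column names are invented for the example): a table function returns a result set, while a scalar function can return several scalar values at once.

-- table function: takes parameters and returns a table
create function tf_large_orders (min_amount decimal(15,2))
returns table (order_id integer, amount decimal(15,2))
language sqlscript as
begin
  return select order_id, amount
         from orders                      -- hypothetical table
         where amount >= :min_amount;
end;

-- scalar function: returns two values in one call
create function sf_gross_and_tax (net decimal(15,2))
returns gross decimal(15,2), tax decimal(15,2)
language sqlscript as
begin
  tax   := :net * 0.19;
  gross := :net + :tax;
end;

You would then consume them as select * from tf_large_orders(100) and select sf_gross_and_tax(100).gross from dummy, respectively.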
The AFL is actually quite well known to me, as I have already used some of the predictive functions to simulate a worldwide market. A shame, though, that you still need to supply all the tables as parameter tables. The Business Function Library is a collection of business functions that you can use primarily for finance, such as depreciation.
In the late stages of this week we come to the plan visualizer. At first I wasn't too keen on this thing, as I thought that HANA itself should manage how it does all the work. But after the example my opinion changed. This is a great map to see which functions all contribute to the final result. With the simulation of the market I mentioned earlier, I actually had over 30 views and procedures that were interconnected, some containing Cartesian relations (deliberately ;-) ), and I overdid this so much that one query took over 30 seconds in HANA. I was quite proud of that actually, although for an audience it can be a hard sell: "Look, we made HANA slow!"
A tool like this would probably have helped to do some tweaking as you are able to find the needle in your function haystack that is responsible.
The last theme of the week is ABAP, or in full, ABAP Managed Database Procedures. You are now able to write SQLScript in the ABAP layer, and it was shown how SQLScript called from ABAP can perform much faster than badly written (according to Rich) ABAP.
Actually, I was wondering about ABAP on HANA in general. What is the future? I can imagine that in the future you would fit all the specific SAP methods into the AFL and use SQLScript, JavaScript, Fiori, River, etc. for your custom programming. So where does ABAP actually fit in here? Does it do anything specific that keeps the need for an extra language alive for a long time? Or is it there to enable people to hold on to their code?
This week was all about code. In addition to showing how things are done, it raised new questions. I guess it is supposed to be this way, as the course is a starting point to get going.
Poznań was on the map for a good reason: the Big Data Smarter conference was organized that day at the University of Economics. SAP has a long-standing University Alliances relationship with this university through cooperation with Professor Witold Abramowicz and his team at the Department of Information Systems. It is worth mentioning that two SAP Mentors from Poland, Marek Kowalkiewicz and Michal Krawczyk, both graduated from the University of Economics in Poznań too.
The conference started with introductory sessions: the first discussing the history and context of Big Data, presented by Prof. Abramowicz, and the second presenting the inner workings of the SAP HANA platform, delivered by Karol Dworak.
Prof. Abramowicz reminded the audience that "Big Data" may be a new buzzword, but it is not a new challenge: volume, variety and veracity of data have been around whether we were talking about 2 MB in the 1980s or 2 PB of data in the 2010s.
Then Prof. Abramowicz's team presented applications they had been building with SAP HANA and other technologies. An impressive spectrum of Big Data use cases was covered:
Analysis of demand and supply on the energy market
Sentiment analysis on the consumer market
Management of micronetworks of energy prosumers
Applying graph processing and visualization to analysis of social networks
Threat detection in marine traffic based on AIS data
Optimization of public procurement
Monitoring (illegal) content in deep web
The University of Economics in Poznań and SAP will continue their partnership through the announced Smart and Big Data Science Research & Innovation Center.
I won't explain the details of text analysis in SAP HANA. If you are interested in SAP HANA text analysis, you can have a look at SAP HANA Platform – SAP Help Portal Page, "SAP HANA Search and Text Analysis References" part.
Regarding our scenario, we want to do sentiment analysis on tweets, especially on the "text" field. Now let's do it! It's not difficult to implement the sentiment analysis part, since what we need is just a full text index. Unfortunately, we currently cannot define a full text index in SAP HANA XS, so we have to run the following SQL statements manually.
CREATE FULLTEXT INDEX "MOVIE_RATING"."TWEETS_I" ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("text") CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER' ASYNC LANGUAGE DETECTION ('EN') TEXT ANALYSIS ON;
ALTER TABLE "MOVIE_RATING"."$TA_TWEETS_I" ADD CONSTRAINT TWEETS_FK FOREIGN KEY("id") REFERENCES "MOVIE_RATING"."movieRating.data::movieRating.Tweets"("id") ON DELETE CASCADE;
Here is the explanation.
- For the first SQL statement, we create a full text index "TWEETS_I" on the "text" field of the tweets table. In addition, we use the Voice of Customer configuration, which detects the sentiment of the "text" field, and since we only search tweets in English, we can limit the language detection to English only. After you run this SQL statement successfully, a new table named "$TA_TWEETS_I" is created which holds the analysis results. For the syntax of "CREATE FULLTEXT INDEX", please refer to CREATE FULLTEXT INDEX - SAP HANA SQL and System Views Reference - SAP Library
- The second SQL statement is optional. We just define a foreign key on "id" in the automatically generated table "$TA_TWEETS_I" which references the "id" column of our tweets table. This way, if we delete a tuple in the tweets table, the corresponding text analysis records will also be deleted automatically.
Now we can see what's going on in the "$TA_TWEETS_I" table. You may find that, with the capability of text analysis, SAP HANA is able to detect the property of the extracted token (in the "TA_TOKEN" column), e.g., SOCIAL_MEDIA/TOPIC_TWITTER, DAY, DATE, URI/URL, PRODUCT, Sentiment, etc. (in the "TA_TYPE" column). Here I highlighted some records in blue boxes and red boxes respectively. For example, there was a movie titled "Justice is Mind"; since we searched tweets with the hashtag "#JusticeisMind" for this movie, you can find lots of "#justiceismind" values in the "TA_TOKEN" field, which are marked in blue boxes. As our smart app focuses on sentiment analysis, I selected some records in red boxes, e.g., "great" is tagged as "StrongPositiveSentiment" and "Nice" is tagged as "WeakPositiveSentiment". Everything seems reasonable.
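If you'd rather query the sentiment annotations than scroll through the table, a simple statement along these lines does the job, using the table and column names from above:

-- list only the tokens that were tagged with a sentiment
select "TA_TOKEN", "TA_TYPE", "id"
from "MOVIE_RATING"."$TA_TWEETS_I"
where "TA_TYPE" in ('StrongPositiveSentiment', 'WeakPositiveSentiment', 'NeutralSentiment',
                    'WeakNegativeSentiment', 'StrongNegativeSentiment')
order by "id";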
Now let's create some information views which can be easily consumed in the UI part.
1. AT_TWEETS.attributeview
Step 1: Data Foundation
This attribute view focuses on sentiments together with tweet and movie info, so we use the three tables "$TA_TWEETS_I", "Tweets" and "Movies" as the data foundation. The sub-steps are as follows.
Join "$TA_TWEETS_I" and "Tweets" on the ID of tweet;
Join "Tweets" and "Movies" on the ID of movie.
Filter "TA_TYPE" with only five values "StrongPositiveSentiment", "WeakPositiveSentiment", "NeutralSentiment", "WeakNegativeSentiment" and "StrongNegativeSentiment", i.e., we only show tweets with sentiment and we ignore tweets without sentiment.
Add necessary columns to the output and create some calculated columns based on the output.
Compared with the above attribute view, this calculation view focuses more on the newly released movies themselves. Given a release date range, we want to get several attributes and calculated measures of the movies in this range, e.g., the number of mentions and the rating score.
Step 1: We use SQLScript instead of a graphical view. Here is what we want to get:
id: movieID
title: movie title
poster: the URL of movie poster in profile
release_date: the release date of movie
mention: # of sentiments detected related with this movie
rating: the rating score
rating = (# of strong positive sentiment * 10 + # of weak positive sentiment * 8 + # of neutral sentiment * 6 + # of weak negative sentiment * 4 + # of strong negative sentiment * 2) / # of total sentiments
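A rough SQL sketch of this calculation is shown below; the join between the tweets and the movies table is assumed to go over a movie ID column on the tweets table, which is an assumption about the data model and not the exact script used in the calculation view:

select m."id"     as id,
       m."title"  as title,
       count(*)   as mention,
       sum(case ta."TA_TYPE"
             when 'StrongPositiveSentiment' then 10
             when 'WeakPositiveSentiment'   then 8
             when 'NeutralSentiment'        then 6
             when 'WeakNegativeSentiment'   then 4
             when 'StrongNegativeSentiment' then 2
             else 0
           end) * 1.0 / count(*)            as rating
from "MOVIE_RATING"."$TA_TWEETS_I" ta
  join "MOVIE_RATING"."movieRating.data::movieRating.Tweets" t
    on t."id" = ta."id"
  join "MOVIE_RATING"."movieRating.data::movieRating.Movies" m
    on m."id" = t."movie_id"                -- assumed foreign key column on the tweets table
where ta."TA_TYPE" in ('StrongPositiveSentiment', 'WeakPositiveSentiment', 'NeutralSentiment',
                       'WeakNegativeSentiment', 'StrongNegativeSentiment')
group by m."id", m."title";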
So far we can automatically analyze the sentiment of the real-time tweets which we collected in Real-time sentiment rating of movies on SAP HANA (part 2) - data preparation, and we've built two information views, one for sentiments and the other for movies. In the next two blogs we'll have a look at the UI part and consume the information views built in this blog.
Hope you enjoyed reading my blog.
We all start out doing things the wrong way, and then wonder how we get ourselves back out of it.
Over the last couple of weeks, I've been trying to move us to a better development method by building objects in HANA using the SAP HANA Development perspective. Unfortunately, we had already gone and built a whole series of tables using SQL. I know, bad boys and so on, but we're trying to get back on the right track.
So we've finally started building our new tables using hdbtable files (and hdbdd files), but how can we reverse engineer the tables we've already built?
The answer lay in the catalog tables.
What started as a seemingly painful problem ended up as a fairly simple query that I wrote and fine-tuned in a couple of hours in between other things.
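The attached script isn't reproduced here, but to give an idea of the approach, a minimal sketch that generates hdbtable-style column definitions from the catalog view SYS.TABLE_COLUMNS could look roughly like this (type and length handling deliberately simplified, and the schema/table names are placeholders):

select '{name = "' || column_name || '"; sqlType = ' || data_type_name
       || case when data_type_name in ('VARCHAR', 'NVARCHAR')
               then '; length = ' || to_varchar(length) else '' end
       || '; nullable = ' || case is_nullable when 'TRUE' then 'true' else 'false' end
       || ';},'                              as hdbtable_column
from "SYS"."TABLE_COLUMNS"
where schema_name = 'MYSCHEMA'               -- placeholder: your schema
  and table_name  = 'MYTABLE'                -- placeholder: the table to reverse engineer
order by position;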
Once I have this, it's just a matter of creating a new hdbtable file, completing the schema name using the demo lines and pasting in the results from the above script. I've attached a copy of the script file, and an actual file from our landscape.
Recently I came across something while modeling a HANA information view that I want to share with everyone.
Case:
Dynamic Filter for projection with current Fiscal Year and Fiscal Period.
Method:
In order to filter dynamically, I decided to use an input parameter. There are different parameter types that we can use for this; I decided to use the "Derived from Table" type, as shown below:
When filtering the column, it displays a list of values from the same HANA information view and we can select the date. Now, as the requirement is the current fiscal year and period, how do we filter the column so that the current date is selected dynamically in the "Filter Value" section?
Solution:
Like everyone, I expected it to understand an expression, so I tried the following and it worked. I used the expression
I was aware that I could use an expression in the "Default Value" section (as explained in the SAP HANA Developer Guide) but didn't expect it to work in "Filter Value". I was able to filter the projection using this input parameter. To test the filtering, I removed the input parameter and the projection displayed all the years rather than just the current year. When I get an explanation for this behavior, I will share it here. For now, you can try different expressions.
SAP developers and employees from the SAP User Experience team are giving you the opportunity to try out and provide feedback on the new SAP® HANA Modeler based on the SAP Web Integrated Development Environment by participating in a usability test.
The test examines the new cloud-based development environment that allows you to create complex analytic models right inside your browser. Experience new ways of user interaction to work with the HANA Modeler in the WebIDE and create Analytic Models (Calculation Views & Analytic Privileges) with SAP’s cutting-edge HTML5 technologies.
During the test you will evaluate the ease-of-use, usability and functional completeness of the HANA Modeler in WebIDE.
Targeted roles are Consultants or dedicated data modelers who create, manage and deploy analytic models on HANA as well as application/content developers responsible for building analytical models on HANA for further consumption in HANA Native Applications or in reporting tools.
If you are interested in discovering the latest developments in this area, please sign up for this topic:
As a tester, you can actively work on the prototypes or products during the test. At the end of the test, you will have an opportunity to share your feedback on the prototype. Please note that it is the tool that is being tested, not you, to further enhance the user experience of SAP's products.
Testing will start on Tuesday, November 11th, 2014, after the keynote.
Access to the full topic catalog and registration instructions for the usability testing can be found here: https://usability.sap.com/
See you in Berlin!
For the SAP HANA Modeling and Product Management Team, Christoph Morgen
This week we looked at OData and ways to consume the data. The OData protocol allows you to create, read, update and delete data via a URL.
The OData protocol is a powerful part, as it provides an endpoint that allows access to the SAP HANA database. This access can come from an xsjs or html file on the HANA server, but it can also come from somewhere else.
In SAP HANA you have functions to do aggregations and you can define associations; associations can be picked up later by SAPUI5 components for added functionality (look for the purchase order header and item sample).
And typically for SAP, there are also the OData4SAP annotations. Typical, in that when you enable this extension it delivers the label, the unit and the text (but not the currency).
Something I don't remember from the initial HANA course, but which has apparently been available since SPS 06, are SQLScript extensions. These extensions are procedures that are triggered by the OData service. The input is basically the data that is passed from the OData service, i.e. the record to be created or the update of a record.
What you can do in such an exit is validation, but if you want you can get creative and think of other things, such as an online ticket sales procedure that checks the timestamp of the order to determine which place in the virtual queue the prospective buyer has for the final ticket of that important concert of <insert your favorite music here>.
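As a sketch of what such an exit can look like (the table types, column names and the layout of the error table below are illustrative assumptions, not taken from the course material), the procedure receives the new record(s) from the OData service and fills an error table when a check fails:

create procedure "MYSCHEMA"."order_create_check" (
    in  new_rows table (ordernumber nvarchar(10), quantity integer),   -- data passed in by the OData service (assumed columns)
    out error    table (http_status_code integer, error_message nvarchar(100), detail nvarchar(100)))
language sqlscript as
begin
  -- reject records with a non-positive quantity
  error = select 400                          as http_status_code,
                 'Quantity must be positive'  as error_message,
                 ordernumber                  as detail
          from :new_rows
          where quantity <= 0;
end;

The exit is then hooked to the corresponding modification event in the .xsodata service definition.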
This is an improvement over the earlier version, as I remember that previously you had either a straight OData service with nothing else or an xsjs service.
Another part of this week was batch requests and deep inserts. The latter was interesting, as it provides a way to maintain relationships between two tables. For example, if you have purchase order headers and items, the item table needs the foreign key order number to keep the relation to the header table.
The deep insert functionality creates an ID placeholder that is replaced by the actual order number in the end. It's a shame that SAPUI5 doesn't support it yet; the example was a concatenation of string variables to manually create an OData request.
Exercises:
This week we had to type A LOT for the exercises. (I know you can cut & paste, but I like to type at least part of it in, to better digest the lessons.) Especially interesting was using SAPUI5 to create a table from the OData source. It takes a little getting used to, as the entire setup of SAPUI5 is that you declare variables based on objects.
You set properties in a JSON object:
var myButtonproperties = {text:"Hello World!", tooltip:"Hello Tooltip!"};
Then you create a new instance using those properties:
var myButton = new sap.ui.commons.Button (myButtonproperties);
With HANA all around (speaking about the HANA database, not HANA XS or HANA Cloud), I as an ABAP developer expect to need to leverage more database features. This consequently means coding more SQL, and that implies more headaches for me: my brain is more used to imperative commands than to declarative expressions.
But technology does not care about my brain, so I have to get it used to this type of programming. Luckily, I don't need a HANA DB in order to train myself, so I started to write a short series about how to use SQL (oh wait, it's now called "code pushdown"). That makes it easier to sell and might trick my brain into thinking this is not veeeery old stuff which I should have known for a long time, but funky new stuff which is worth investing time in.
This first post is about the very basics of joins, particularly LEFT OUTER JOINs in combination with restrictions.
Business background
In our application, the database schema is quite well normalized (don't ask me which normal form it is, but it should be the third or higher; I was never eager to differentiate further). However, to the user the data is presented in a denormalized form. While transactionally the dependent data is retrieved via an association, we also need to enable finding instances where the dependent data matches some criteria. At the same time, the non-existence of this dependent data must not prevent finding instances when not searching on attributes of the dependent data.
Implementation alternatives
Reading the requirement, the first thing that comes to every ABAPer's mind is a database view. Go to SE11, model your relations, use Open SQL, and that's it. Unluckily, not.
A DDIC DB view always joins tables with an INNER JOIN. This means that the right-hand-side table (RHST) needs to provide matching entries for each instance of the left-hand-side table as a prerequisite for the joined data to exist. This makes DDIC DB views unusable for all those selections where data with a minimum cardinality of 0 shall be searchable. Thus, specifying the join as LEFT OUTER at runtime seems to be the only alternative. However, as the Open SQL exception tells us, placing a restriction on the RHST via Open SQL is also not possible. Both limitations, as far as I know, will be obsolete when using Core Data Services (CDS). But as this is only available from 7.40 upwards, it can only be the subject of another post.
Until then, in our project we were forced to use native SQL based on LEFT OUTER JOINs in order to fulfill those requirements.
The INITIAL issue
After we had implemented a generic query class executing the native SQL, we felt quite comfortable having found a nice solution, until the first tickets appeared in our bug tracking tool. As we're using a Floorplan Manager based UI, our users were enjoying the "new" search options; particularly the "is initial" and the "is not initial" options were attracting a lot of affection: "Oh, this makes it so easy for me to find a … where no purchasing requisition has been created yet." Of course! "But it does not work!" The reason was that the custom table containing the purchasing requisition number is dependent on the table containing the instances to be searched for. With dependent_table.purch_req = '', the desired instances for which the dependent data does not exist were not returned. When explaining the issue and solution in my team, I felt that not every ABAPer is aware of that behavior.
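The underlying SQL behavior can be shown with a small sketch in plain SQL (table and column names invented, not the actual report):

-- instances without a purchasing requisition: the outer join fills the right-hand
-- side columns with NULL, not with an initial value, so the filter must test for NULL
select h.instance_id
from   header_tab h
  left outer join dependent_tab d
    on d.instance_id = h.instance_id
where  d.purch_req is null;

-- a filter like "where d.purch_req = ''" returns no rows for missing dependent data,
-- which is exactly the behavior the users reported as "it does not work"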
Thus, I wrote a short report executing multiple selects in order to explain the different result sets.
Have a look at the following screenshot picturing the content of the joined tables and the different SELECT statements issued. Do you know all the results?
You can find a screenshot of the report, including results and explanations, separately below. I did not want to spoil your curiosity by presenting the results directly.
Did you expect all the results? There was one that surprised me. And to my even bigger surprise, it behaves differently on NW 7.31 and NW 7.40; maybe I should open a message for that one…
You can find the report attached, including the report source, and alternatively a SAPlink nugget containing the source tables if you're too lazy to create the two tables yourself.
Outlook
There is one massive limitation with respect to performance in this approach of using LEFT OUTER JOINs. Therefore, in a subsequent post, I may introduce the advantages of subqueries (once I have understood them myself well enough to feel comfortable writing about them…).
Got feedback?
I'd love to read your comments on this topic and on how you prepare yourself for the obviously upcoming reincarnation of SQL; sorry, of code pushdown.
In the above three blogs we've already finished the data preparation and sentiment analysis; in addition, we've created two information views which will be consumed in the following two blogs. Now it's time for the UI. In this blog, let's first take a look at SAP HANA info access and build our first smart application based on this super cool feature!
"The SAP HANA info access HTTP service wraps search SQL queries and exposes them through an HTTP interface. The service operates on one SAP HANA attribute view at a time and makes use of the indexed and analysed texts joined in that view."
First of all, you can find the above definition and the architecture diagram of SAP HANA info access in SAP HANA Info Access Architecture - SAP HANA Developer Guide - SAP Library. As you can see, there are four layers: attribute views and the Info Access HTTP Service on the SAP HANA side, and SINA and the Info Access Dev Toolkit for HTML5 on the UI side. With SAP HANA info access, you do not need to care about the server-side search logic and can just focus on the UI. So, why not use this awesome feature to build our sentiment app? In our smart app, we'll just use the attribute view which is already defined in Real-time sentiment rating of movies on SAP HANA (part 3) - text analysis and modeling and the Info Access Dev Toolkit for HTML5 at the top layer, for simplicity. I'll show you how simple but powerful it is in this blog.
There is still something you need to notice. Make sure you've finished the following before you start your journey with SAP HANA info access.
Import the delivery unit "HCO_INA_SERVICE.tgz" from your SAP HANA server, since this DU is not included in the default installed DU list, i.e., when you install SAP HANA server, it is not automatically installed like the SAPUI5 DU.
Download the delivery unit "HCO_INA_UITOOLKIT.tgz" from SAP Software Download Center to your desktop/laptop and import the DU from the client side.
Assign the role "sap.bc.ina.service.v2.userRole::INA_USER" to the user who will use SAP HANA info access.
In addition, I highly recommend playing with the SAP HANA info access demo in SHINE (section 9.10) first, since you can get a basic idea of the look and feel of SAP HANA info access from this demo. Visit http://<myServer>:<XS_Port>/sap/hana/democontent/epm/fulltextsearch/ui/search.html after you import SHINE and generate the data. Besides, you can also have a look at its source code, as our smart app is based on it.
Since we've already prepared our attribute view "AT_TWEETS.attributeview" in Real-time sentiment rating of movies on SAP HANA (part 3) - text analysis and modeling, what we need to do in this blog is more related to front-end stuff such as HTML, CSS and JavaScript. Now let me show you the most important file, search.html, which we will visit directly in our browser. If you've read the source code of the SINA demo in SHINE, you'll be familiar with it. Here I just want to mention some key points, from top to bottom. (Apologies if the code below looks a bit ugly; syntax highlighting does not seem to be working on SCN today.)
1. You need to include the following script when you use the info access development toolkit.
4. We have 6 work area items (position from 0 to 5) in our facet pane, while there are only 3 in the SINA demo of SHINE. Besides, we have one work area item (position 6) for the result list in our content pane.
5. We have 17 work area items with the chart widget in our facet repository; the following six, which seem the most important, are displayed by default:
SENTIMENT BY MOVIE
SENTIMENT
TWEET SOURCE
TITLE
TOKEN
STUDIO
6. Except for SENTIMENT BY MOVIE, each work area item with a chart widget lets you switch between a bar chart, a pie chart and a tag cloud for your convenience.
7. We have one work area item with the result list widget, which shows the details of each MENTION; the result list is sorted by "created_at" in descending order, i.e., you will see the most recent mention first. Similar to the SINA demo in SHINE, we also define three separate result templates, as follows.
tweet.html: Defines the template of the result in our content pane.
tweet_small.html: Defines the template of the result in our facet pane.
tweet_detail.html: Defines the template of the result in the overlay container.
Regarding the second key point in the last section, we'll now discuss creating full text indexes. First, you may find the following code snippet at the beginning of the HTML body, which I have not mentioned yet.
So what's that? It displays a search filter at the top of the web application. Moreover, it is the heart and key feature of SAP HANA info access, so I'm going to show it in this separate section. Because we want to enable searching on the dimensions used in our chart widgets instead of the attributes used in our result list, we set "data-useresponseattributes" to false and "data-usedimensions" to true. As we all know, we can use a full text index to search in SAP HANA. However, SAP HANA isn't so smart that it creates the full text indexes for you automatically; currently we need to create them ourselves. That's why we need createFulltextIndex.js and createFulltextIndex.xsjs in our project. Since we just use createFulltextIndex.js to call createFulltextIndex.xsjs, we only need to change the URL in createFulltextIndex.js compared to SHINE. Here, let's just have a look at the server-side logic.
createFulltextIndex.xsjs
(function (){
'use strict';
//initialize variables
var conn = null,
body = '',
prepStat = null;
//initial database setup, create fulltext indizes
try {
//get connection
conn = $.db.getConnection();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX TWEET_TA_TOKEN_I ON "MOVIE_RATING"."$TA_TWEETS_I" ("TA_TOKEN") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX TWEET_TA_TYPE_I ON "MOVIE_RATING"."$TA_TWEETS_I" ("TA_TYPE") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX TWEET_SOURCE_STR_I ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("source_str") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX TWEET_CREATED_AT_YEAR_I ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("created_at_year") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX TWEET_CREATED_AT_MONTH_I ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("created_at_month") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX TWEET_CREATED_AT_DAY_I ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("created_at_day") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX TWEET_CREATED_AT_HOUR_I ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("created_at_hour") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX TWEET_CREATED_AT_WEEK_I ON "MOVIE_RATING"."movieRating.data::movieRating.Tweets" ("created_at_week") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX MOVIE_TITLE_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("title") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX MOVIE_MPAA_RATING_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("mpaa_rating") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX MOVIE_YEAR_STR_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("year_str") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX MOVIE_STUDIO_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("studio") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX MOVIE_RELEASE_DATE_YEAR_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("release_date_year") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX MOVIE_RELEASE_DATE_MONTH_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("release_date_month") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX MOVIE_RELEASE_DATE_DAY_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("release_date_day") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
prepStat = conn.prepareStatement('CREATE FULLTEXT INDEX MOVIE_RELEASE_DATE_WEEK_I ON "MOVIE_RATING"."movieRating.data::movieRating.Movies" ("release_date_week") SYNC SEARCH ONLY OFF');
prepStat.execute();
prepStat.close();
// --- commit changes and close connection
conn.commit();
conn.close();
body = 'fulltext index created';
$.response.status = $.net.http.OK;
}
catch (e){
// 289: index already exists, thats ok
if(e.code && e.code !== 289){
body = e.message;
$.response.status = $.net.http.BAD_REQUEST;
} else {
body = 'fulltext index already exists';
$.response.status = $.net.http.OK;
}
}
$.response.contentType = 'text/plain';
$.response.setBody(body);
}());
As you can see in the above XSJS file, we create 16 full text indexes for the dimensions in our chart widgets (all except SENTIMENT BY MOVIE) if these full text indexes do not exist; otherwise, we handle the exception. Now you may ask, "Can I run these SQL statements manually in the SQL console?" Absolutely you can. But considering DU transport, it's better to use this dynamic approach.
Now it's time to show the look and feel! You can find four parts in the following screenshot.
Search filter in red box
Facet pane in green box with six default chart widgets
Content pane in blue box with result list
Facet repository in yellow box with 17 chart widgets; you can drag and drop them to the facet pane.
As you can see, since we filter the movie title to "The Judge", everything, including the chart widgets and the result list, displays only the results for this movie.
Now let's click one mention and the following overlay container pops up. It shows the detailed tweet info and the mentioned movie info. From strong positive sentiment to strong negative sentiment we use green to red, similar to a traffic light. In addition, the sentiment token is highlighted in the tweet text. Remember, in Real-time sentiment rating of movies on SAP HANA (part 2) - data preparation we insert tweets into SAP HANA in real time, and the movie sentiment is continuously analyzed by SAP HANA. So with this smart app we can analyze movie sentiment in real time! For instance, you can analyze the sentiment and number of mentions of a specific movie, or you can even analyze all movies newly released in the current week. Exciting?
Still not enough? Don't worry. I've recorded a video for you. In the following video I'll show you how to use this smart app to analyze the movie sentiment based on SINA.
Next steps
In this blog, we've jumped into SAP HANA info access and built our first movie smart app on top of it. Thanks to the simplicity of SAP HANA info access, we were able to finish the app in a very short time. All we needed was an attribute view and some front-end work in HTML, CSS and JavaScript. We did less but got more. In the next blog, we will have a look at sap.m (a package in SAPUI5) and build our movie app for mobile devices. See you soon.
Hope you enjoyed reading my blog and playing with the smart app.
This document describes a solution for a specific requirement (mentioned below) that I had while working on an SAP HANA project for a client.
REQUIREMENT:
I had a requirement to export a complete schema (SCHEMA X) from one system (development, BD2) and import it into another schema (SCHEMA Y) in another system (quality, BQ2), so that all tables available in the source schema also become available in the target schema.
SCHEMA X already exists in the target system and contains the same catalog objects as in the source system.
Therefore, when SCHEMA X is exported from the source and imported into the target system, HANA raises an error that the catalog objects (in my case, tables) already exist in the target system (as the exported file contains references to SCHEMA X and not to SCHEMA Y).
PROCESS:
To meet this requirement, simply exporting and importing the catalog objects will not work, as the source schema name is different from the target schema name.
There is also a SCHEMA MAPPING option in the HANA Modeler perspective, but that is suited to modeling views; the method I used is a general approach for moving catalog objects from one schema to another.
Source schema: ARJUN (Development)
Target schema: SRIVATSAN (Quality)
All the tables in the ARJUN schema of the development system have to be exported and imported into the SRIVATSAN schema of the quality system.
Both schemas exist in the quality system.
The ARJUN schema in the quality system already contains the same tables as the development system, so all occurrences of ARJUN have to be renamed to SRIVATSAN in the table creation code after the EXPORT.
After that, the required tables are imported into the SRIVATSAN schema.
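As an aside, a similar flow can be sketched purely with the SQL EXPORT and IMPORT statements. The export path below is only a placeholder, the export directory has to be copied to the target host, and the RENAME SCHEMA import option (check your revision's SQL reference for the exact IMPORT options it supports) takes care of the renaming without editing the generated table creation code:
-- on the development system: export the whole ARJUN schema to the server file system
EXPORT "ARJUN"."*" AS BINARY INTO '/tmp/arjun_export' WITH REPLACE;
-- copy /tmp/arjun_export to the quality host, then on the quality system:
IMPORT "ARJUN"."*" AS BINARY FROM '/tmp/arjun_export' WITH REPLACE RENAME SCHEMA ARJUN TO SRIVATSAN;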
Conclusion:
This approach is a general method for copying tables from one system to another (across different schemas). It might also be achievable through SCHEMA MAPPING, which I have not tried in practice.
(Special thanks to Ashish Sahu for blog content and review, and to Eric Du and Travis Price for the technical papers.)
The NoSQL market has grown rapidly to support the vast amount of data generated by today’s world of intense, always-on environments.
DataStax, built on Cassandra, a high-performance, Apache open-source database technology, spans private data centers and the cloud in a fully distributed way to power this landscape.
It is one of the leading NoSQL vendors providing fault-tolerant distributed databases to 200+ customers including Netflix, Adobe, HP, eBay and Twitter. Additionally, DataStax supplies OpsCenter for visual management and monitoring, along with expert support, training, and consulting services for Hadoop, Cassandra, and Solr. SAP and DataStax are working together to bring their joint capabilities to the enterprise.
The integration of the two technologies is delivered for two scenarios:
1. SAP HANA SPS09 supports DataStax Enterprise 4.5
SAP's in-memory SAP HANA platform is integrated with DataStax to enable applications and interactive analysis that combine corporate application and operational data with Internet of Things, messaging, or streaming data stored in DataStax.
This integration is delivered via SAP HANA smart data access (SDA), which makes it possible to access data in DataStax remotely without moving it into SAP HANA. SAP HANA handles the data like a local virtual table, performing automatic data type conversions and push-down processing.
The combined capability means that customers can use SAP HANA's "speed of thought" analytics on data arriving in DataStax at millions of transactions per second. It also simplifies the landscape by processing data where it resides instead of performing costly and time-consuming data movement.
SDA already supports many different data sources, such as SAP IQ, SAP ASE, Spark, Teradata, Apache Hadoop, Oracle and SQL Server, providing a comprehensive selection for customers' needs.
SAP HANA is a high-performance in-memory columnar database which performs OLTP and OLAP on the same platform.
It provides in-database native support for advanced analytics such as predictive, text analysis, spatial and graph processing. The platform is the leading choice of 4,000+ customers and 1,700+ startups.
DataStax Enterprise (DSE), built on Apache Cassandra™, delivers what Internet Enterprises need to compete in today's high-speed, always-on data economy. DataStax Enterprise is the leading distributed database choice for online applications that require fast performance with no downtime.
DataStax Enterprise 4.5 encapsulates a distributed peer-to-peer architecture in which all nodes are equal, ensuring cluster resilience to an arbitrary number of node or data center failures.
Architecturally, SAP HANA's smart data access component connects to DataStax Analytics with Spark/Shark through the ODBC driver. The resulting data set is treated as a remote table in HANA and can be used as input for all of HANA's advanced analytics functionality, including modeling views, SQLScript procedures, predictive and text analysis, and so on.
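To make this a little more concrete, here is a minimal sketch of the HANA-side setup. It is illustrative only: the remote source name, property file, DSN, credentials, schema and the movie_events table are placeholders, and the exact configuration depends on how the Spark/Shark ODBC driver is installed on the HANA host.
-- register the DataStax/Spark SQL endpoint as a remote source via the ODBC adapter
CREATE REMOTE SOURCE "DSE_SPARK" ADAPTER "odbc"
CONFIGURATION FILE 'property_dse.ini'
CONFIGURATION 'DSN=DSE_SPARK_DSN'
WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=dse_user;password=secret';
-- expose a remote table (a hypothetical movie_events table) as a virtual table in HANA
CREATE VIRTUAL TABLE "MYSCHEMA"."V_MOVIE_EVENTS"
AT "DSE_SPARK"."<NULL>"."default"."movie_events";
-- from here on it behaves like a local table, e.g. in views, SQLScript or plain SELECTs
SELECT TOP 100 * FROM "MYSCHEMA"."V_MOVIE_EVENTS";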
The joint capabilities of SAP HANA and DataStax are as follows:
Always On
Zero downtime. Ever.
100% availability for mission-critical applications
Performance
High throughput
Low latency
Advanced Analytics
Intuitive interfaces make it easy to explore transactional data in real time
No Complex Data Migration or ETL
Transactional data is immediately available to SAP Applications
Integrate with SAP Ecosystem
Use existing SAP workflows and BI tools
2. SAP's Agile BI Platform, SAP Lumira, supports DataStax
We have also completed the integration with SAP Lumira so that customers can view DataStax data in Lumira data visualizations and storyboards.
The DataStax Enterprise Sample Connector Template for SAP Lumira makes it easy to integrate DataStax data with numerous other data sources, allowing business users to analyze data and get insights in real time. You can download the DataStax Connector Template for SAP Lumira from the SAP HANA Marketplace under this link:
Trying to get my teeth into MVC coding and JavaScript, I started looking at readily available packages like jQuery, Node.js and Cloud9 (and briefly spring.io).
But what would all this work get me, other than knowledge I cannot apply and some short-lived fun? I am an SAP HANA consultant: MVC is good, but SAPUI5 with MVC is better.
My first stop after searching was Amazon AWS, where you can rent SAP HANA One cheaply. It is all very easy to set up, and instructions for SAP HANA Studio on Windows are provided here, but it was a bit more difficult for my Mac. After a long search on SCN, where people wrote that a Mac version of SAP HANA Studio had been made available but was removed from the SAP download portal shortly afterwards, I found some help on an external link.
So I installed Eclipse (Luna), which requires the Java SDK on Mac OS X, and downloaded all the required plugins for my SAP HANA cloud development.
Further into the process I wanted to run my instance on AWS, which looked really cheap at $3 an hour (11/11/2014) for a c3.8xlarge EC2 instance type ...
Free at last! Or at least a good trial run before spending the money.
After successfully registering with my existing S-User and accepting the terms and conditions, I found that I could also connect my local SAP HANA Studio installation to the instance in the cloud.
First tip: do not use the default "Landscape host" entry of "hana.ondemand.com"; use "https://hanatrial.ondemand.com", as correctly pointed out in the instructions.
But there seem to be some differences between the files you can see in the browser, i.e. the templates (above), and the local installation (below); the samples are missing:
Anyhow, I wanted to run the samples through the browser.
Clicking on the run drop-down shows some different run modes, including iOS and Android (a neat feature), although I couldn't detect any difference for the very simple HelloWorld sample:
And now I have to write some code. "To be continued ..." Next time I will also add some information on data import using my local SAP HANA Studio. Where is my schema again?
The meetup was scheduled from 18:00 to 21:30, but we did not finish the session until 22:00 because many people were very interested in the topic, discussed the study roadmap with Ray, and wanted to learn more about SAP HANA and big data.
Feedback from our participants:
“Big data is a hot topic at the moment, and yesterday Mr. Wu gave us an amazing session on it.
He gave us a landscape of big data, with different stories and projects from his own experience. Big data has had an impact on retail, the oil industry and e-commerce, and he also shared his opinion on how big data will impact our work and life. One story that left a deep impression on me is that machine learning will change translation, and translators will have to rethink their career paths.
From his experience, Mr. Wu introduced the six characteristics of big data and the science behind it.
Another interesting part was the comparison between prediction with big data and the traditional Chinese methodology, the "I Ching".
Thanks a lot to Mr. Wu for sharing how to learn big data as a beginner.
It was a good session for me to learn about big data, and I look forward to the next amazing one.
Also, thanks to the organizer for the delicious pizza.” – Zhang, Yaguang / SAP / Business One
How are you? Aren't you hungry? Don't you sleep? Well, have you ever thought of reversing requests? It's probably a stupid idea, like a million others, but we often focus too much on a question without considering that the answers lie around it.
When a manager asks for something, is he really thinking about that thing? Or does he not realize that the real answer lies around it?
Thanks to its performance, SAP HANA could be used to create automated inverted queries when analyzing data. Someone asks for billing data? Why not also give him the costs of the department that deals with billing? Someone asks how many defective products have been produced? Why not also give, immediately, the number produced perfectly and in how much time?
A scan engine could be built that processes the words in the request, searches the pre-existing queries related to them, and compacts the results into an easy-to-read report.
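As a purely illustrative sketch of that idea (the SAVED_QUERIES table, its columns and the schema name are hypothetical), HANA's fuzzy full-text search could be used to find the pre-existing queries related to the words of a request, assuming a full-text index exists on the description column:
-- find saved queries whose description loosely matches the words of the incoming request
SELECT QUERY_ID, QUERY_NAME, QUERY_SQL, SCORE() AS RELEVANCE
FROM "MYSCHEMA"."SAVED_QUERIES"
WHERE CONTAINS(DESCRIPTION, 'billing department costs', FUZZY(0.7))
ORDER BY RELEVANCE DESC;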
Someone could say, "But if someone wants something he can ask for it, so why all that mess?" I think this is, first of all, a behavioral problem that should be solved. Managers should always read everything, even if it is repeated, even if they think they are smart and remember everything.
It often happens that you solve one thing and break another. So it is important that analysts never lose a global vision of the data.