Channel: SCN : Blog List - SAP HANA Developer Center

No instance displayed on SAP Business One version for HANA


Hello All,

 

Welcome to my first post in this blog, although not my first piece of writing for SCN. I plan to write a series of posts covering issues, and their solutions, encountered while installing SAP Business One version for HANA. After that, I will continue to collaborate with posts about software development for HANA and configuration of SAP Business One version for HANA. Coders are welcome to join my discussions.

 

If you are reading this, it's likely that you already know the huge capabilities of SAP HANA and its power to analyse massive volumes of data within fractions of a second. As of today, it is possible to use that powerful platform under SAP products such as SAP Business One.

 

I imagine that some of you have come across issues during the installation of the SAP Business One server and client components for the HANA version. It is important to keep in mind that SAP HANA provides client interfaces for connecting applications to the HANA system. A common mistake while installing the SAP Business One client (the one that comes within the B1_SHF folder) is to forget to install the HANA client first. Whenever this happens, the SAP Business One login user interface will look like this:

 

no_server.png

 

No HANA instance will appear in the list and, therefore, no company databases will be loaded. If this ever happens, don't panic! Simply go to the HANA installation media and copy the HDB_CLIENT_WINDOWS folder to the Windows machine. Finally, run the hdbsetup.exe file, which will install the HANA client. Restart SAP Business One and the instance and databases will appear in the login window.

 

I hope this helps all of you who are struggling to get SAP Business One to work on its HANA version.

 

Regards,

 

Alejandro Fonseca

Twitter: @MarioAFC


Developing for SAP HANA has never been so easy – with the cloud and ABAP


Are you interested in developing for SAP HANA with ABAP or in the Cloud? Then join openSAP’s upcoming courses and learn all you need to know free of charge and at a time that suits you!

 

ABAP Development for SAP HANA starts September 25 and is a new course by openSAP enabling you to understand the important concepts in ABAP development for SAP HANA, how to detect and analyze the performance of your ABAP coding, and which features AS ABAP provides for database-oriented programming.

 

Next Steps in SAP HANA Cloud Platform (Repeat) is a follow-up to the course Introduction to SAP HANA Cloud Platform and will begin on September 10. This course will bring you to the next level about how to use SAP HANA Cloud Platform. You'll learn how you can use the platform to develop and manage SAP HANA native apps and HTML5 apps, as well as how to apply advanced security features, develop widgets on SAP HANA Cloud Portal, and much more.

 

Registration, learning content and final exam are provided free of charge. Both courses will offer you the opportunity to get hands-on practice with system environments which will incur a small fee. More details will be provided at the start of both courses. You can register for both courses today!

 

About openSAP

For new users to openSAP: courses are delivered completely online and the content is released on a weekly basis. You can access the content through video lectures, hand-outs and self-tests, which are available to you at any time; you don't need to log on at a specific time to gain the knowledge. If you have questions or would like to discuss the topic based on your experiences, you can join the openSAP discussion forum and collaborate with peers and SAP experts.

 

To keep you motivated, you can earn a Record of Achievement through weekly assignments and a final exam. You have to submit these elements before the weekly deadline to help encourage you to complete the content in a timely manner and avoid procrastination.

 

Search our available courses here. Are you interested in learning about SAP Fiori, the new user experience strategy at SAP?

 

 

Follow us on Twitter

Find us on Facebook

Join us on SCN

Download our iPad app
(Android users can download course materials using Google Chrome)

Sybase ASE To HANA


Hi,

This is my first blog, so if I don't articulate properly, please pardon me.

Recently I worked on a project that involved migrating a few tables and stored procedures from a Sybase ASE database to a HANA database.

ASE adheres to the T-SQL dialect, while HANA adheres to the ANSI SQL standard, so the migration involved a lot of changes in the HANA stored procedures. I would like to highlight a few of the most important ones:

1) A cursor with a parameter nested within another cursor is not accepted, so as a workaround I changed the code as below:

Cursor :C_PARTY_DETAILS

Original:

H1.jpg

Changed:

h2.jpg

2) Cursor variables do not require the ":" prefix.

Original:

h3.jpg

Changed:

h4.jpg

3) The system does not allow accessing the cursor in any concatenation operation.

Original:

h5.png

Changed:

h6.png

4) Functions have to be used in SELECT statements; their results cannot be assigned directly to a variable.

h7.png
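The original and adjusted statements were attached as screenshots; as a rough, hedged illustration (the function and variable names below are made up), the kind of rewrite looks like this:

  -- ASE / T-SQL style: assign the function result directly to a variable
  -- SELECT @lv_total = dbo.calc_total(@party_id)

  -- HANA SQLScript style: call the function inside a SELECT ... INTO ... FROM DUMMY
  SELECT calc_total(:lv_party_id) INTO lv_total FROM DUMMY;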

5) Null handling inside arithmetic functions is not necessary.

6) Domains have to be changed to local data types.

7) Mixed-case (lower and upper case) identifiers have to be handled with double quotes (").

h8.png

Here the column name unitized_flag is maintained in lower case in the table itself.
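Since the screenshot is not available, here is a small hedged example of the quoting (the table name is made up): HANA folds unquoted identifiers to upper case, so a lower-case column has to be referenced in double quotes.

  -- Without quotes, HANA would look for UNITIZED_FLAG and not find the column.
  SELECT "unitized_flag"
    FROM party_details        -- hypothetical table
   WHERE "unitized_flag" = 'Y';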

8) In control-flow statements like IF, range comparisons (BETWEEN, IN) are not allowed.

h9.png

h10.png
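As an illustrative sketch of this rewrite (variable names are invented and assumed to be declared earlier in the procedure body), the BETWEEN/IN conditions are expanded into plain comparisons:

  -- Instead of: IF :lv_qty BETWEEN 1 AND 10 THEN ... / IF :lv_code IN (1, 2, 3) THEN ...
  IF :lv_qty >= 1 AND :lv_qty <= 10 THEN
      lv_flag := 1;
  END IF;

  IF :lv_code = 1 OR :lv_code = 2 OR :lv_code = 3 THEN
      lv_flag := 1;
  END IF;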

9) Error handling is done via an exit handler.

h11.png
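A minimal sketch of such a handler (the procedure and table names are hypothetical); the exit handler replaces ASE-style @@error checks and is declared at the top of the SQLScript procedure body:

  CREATE PROCEDURE update_party_flag LANGUAGE SQLSCRIPT AS
  BEGIN
      -- On any SQL error, return the error code and message and leave the procedure.
      DECLARE EXIT HANDLER FOR SQLEXCEPTION
          SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;

      UPDATE party_details SET "unitized_flag" = 'Y' WHERE party_id = 1;
  END;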

10) In dynamic SQL constructs, "||" has to be used instead of "+".

h12.png

h13.png
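As a hedged sketch of this change (the variable and table names are made up), the "+" concatenation from ASE becomes "||" when a dynamic statement is assembled:

  -- ASE: SET @sql_stmt = 'SELECT * FROM ' + @table_name
  lv_sql := 'SELECT * FROM ' || :lv_table_name;
  EXEC :lv_sql;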

11) UPDATE from multiple tables has to be rewritten.

h14.png

12) ISNULL has to be changed to IFNULL.

13) The CONVERT function has to be changed to CAST.
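A small combined sketch for points 12 and 13 (the table and column names are invented):

  -- ASE:  SELECT ISNULL(discount_pct, 0), CONVERT(VARCHAR(20), order_id) FROM orders
  -- HANA: ISNULL becomes IFNULL, CONVERT becomes CAST
  SELECT IFNULL(discount_pct, 0)        AS discount_pct,
         CAST(order_id AS VARCHAR(20))  AS order_id_text
    FROM orders;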

14) Date functions have to be adjusted.

h15.png
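The screenshot showed the concrete functions used in the project; as a general, hedged illustration, typical ASE date expressions map to standard HANA SQL date functions like these:

  -- ASE: GETDATE(), DATEADD(dd, 7, due_date), DATEDIFF(dd, start_date, end_date)
  SELECT CURRENT_TIMESTAMP,
         ADD_DAYS(CURRENT_DATE, 7),
         DAYS_BETWEEN(TO_DATE('2014-01-01'), CURRENT_DATE)
    FROM DUMMY;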

15) RECOMPILE should be changed to ALTER PROCEDURE ... RECOMPILE.

16) CASE statements within a procedure body are not allowed.


h16.png

This CASE has to be taken outside and converted to IF.

h17.png
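As a hedged sketch of that conversion (the status values and variables are made up; CASE remains fine as an expression inside a SELECT), the control-flow CASE becomes an IF/ELSEIF block in the procedure body:

  IF :lv_status = 'O' THEN
      lv_status_text := 'Open';
  ELSEIF :lv_status = 'C' THEN
      lv_status_text := 'Closed';
  ELSE
      lv_status_text := 'Unknown';
  END IF;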

17) IDENTITY in DDL has to be changed to synonyms.

18) Calling a procedure with NULL is not allowed; it has to be replaced with calling with the string "NULL".

I hope this blog is useful for some data migration projects.

SLES root directory growing indefinitely


Hi All,

 

As SAP HANA developers, we must have at least a basic understanding of SUSE Linux Enterprise Server (SLES) and get to know its system folders and files. So, if you fear Linux systems, let me tell you that there is nothing to be afraid of, and you are still in time to start browsing around.

 

This week I discovered unusual disk consumption in the SLES root user's home directory (~/) on a server where a HANA instance was running. It had about 49GB of free space, and the day after, it suddenly came down to zero. It took me a bit of time to find out the reason: Linux graphical interfaces do not always work correctly, and they generate error messages as they fail. Those messages are produced by the X Window System and kept in the log file ~/.xsession-errors. There was that file and a second one named ~/.xsession-errors.old, and about 48GB of disk space were being used by these 2 files.

 

The temporary solution for this is to run the following commands to delete both files:

 

rm ~/.xsession-errors
rm ~/.xsession-errors.old

 

I say temporary because SLES will create the log file again, so we have to delete the files from time to time. Not very good, eh?

 

If any of you has ever found a way to stop the X session logging, or a way to trick SUSE into thinking that the messages are being saved, please let me know!

 

Regards,

 

Alejandro Fonseca

Twitter: @MarioAFC

Changing the IP address SAP B1 Analytics Platform points to


Dear SAP B1H/B1A Consultants,

 

In this post I intend to share an experience that I had with the Analytics Platform of SAP Business One version for HANA: after the system was set up and running, there was a need to change the server IP address; once that was done, the Analytics Platform suddenly stopped working.

 

As you might know, when the Analytics Platform is installed (the same applies to the License Manager and the Extreme App Framework), it is configured to point to a hostname or server IP address where the HANA instance is hosted. This configuration can be managed in the System Landscape Directory (SLD) under the Services tab, where it is possible to access the Analytics Platform Administration Console to perform operations such as database initialization.

 

The three applications (License Manager, Analytics Platform and Extreme App Framework) are meant to be updated and even deleted from the Services tab on the SLD. However, none of those functions are available (as per SAP Business One 9.0 version for HANA PL 11) for the Analytics Platform, as shown in the following image:

 

img_001.png

 

In this case the Analytics Platform is correctly set up and pointing to the server IP address xxx.xxx.xxx.xxx. If the server happens to expose (change to) a different IP address, this configuration must be updated on each application, so that they end up pointing to the right IP address (and thus to the right server). Since it's not possible to "Edit" or "Delete" the Analytics Platform entry from the table, the important question to answer is: how do you change the IP address the Analytics Platform points to? Especially if the customer is in front of you and wants his system back on track immediately!

 

The answer is simple, although not specified in the SAP B1H/B1A documentation: there is an Analytics Platform config file in /opt/sap/SAPBusinessOne/AnalyticsPlatform/conf/ named AnalyticsService.conf. Open it and change the address value:

 

{"SLDInfo":{"protocol":"https","address":"xxx.xxx.xxx.xxx","port":"40000"}}

 

Save and close the file, and then restart the SLD service. All the analytics capabilities should be back and SAP B1H/B1A clients should operate as usual. You can also check the Services tab in the SLD to see the change in the IP address of the Analytics Platform entry.

 

Hope this helps.

 

Regards,

 

Alejandro Fonseca

Twitter: @MarioAFC

Random Ramblings from a Developer - What is Big Data?


Some time back, I started a bit of a blogging journey with this post.  As I could have predicted at the time, it has taken me much longer to find space in my schedule to carry on with the initiative and get another part published, but here we go...


BusinessReportAndGrowthGraph.jpg

What is Big Data?

At the same time, this is both a difficult & complex question along with a very straightforward one.  I'm inclined to suggest a better question should be something like "Why is Big Data suddenly such a Big Thing?"  Of course, as many readers of this post will be all too familiar, our industry is very clever at creating "The Next Big Thing®" that just happens to help sell the latest version of some platform, solution or system...


Etymology of Big Data

Ok, I admit - I only used this as a sub-title to get a nice big word like etymology into one of my 'blog posts...  Seriously, for those who don't know what this is please read this Wikipedia entry.


I wanted to try and get an understanding of where and when the term Big Data first crashed into our world - I have a personal recollection but wanted to get a wider view.  A quick Google (just what did we do before Google?) yields a vast number of thoughts along these lines.  One of my favourites was this post, partly for the main content and (as usual with the internet) partly for the comments.  Interestingly, the author's rough stab at Big Data hitting mainstream around 2012 isn't too far from my thoughts (2011 is in my head for some reason).  I recall it reaching my consciousness around that time also, mainly thanks to SAP's HANA announcements and the increasing momentum the appliance was getting.  It is also relevant to note that we are talking about our current understanding and use of the term Big Data here, and also to recognise that it has been used in a few other ways prior to this point.


The important point here, though, is that we are talking about the history of the term Big Data, not that of the data itself.  Or put another way - we have been generating and collecting vast quantities of data for many years prior to every man and his dog rushing to get Big Data or Data Scientist on their CVs...  What changed?  Why are we suddenly using the term "Big Data" so much, in so many (often) vague ways?


How do you define Big?

Let's break the term down and think just about the first word for a bit.  I think the word "Big" can be the most misleading aspect of this whole subject.  Having said that, I'm not sure I can think of a suitable alternative.  We often hear size isn't everything and I believe this relates to Big Data more than many will have you believe.  As usual it comes down to perspective and how you want to measure and compare.  As someone quite famous once said, it is all relative.  We live in an age where data is generated through so many channels, at such an alarming rate, that we probably don't know what is happening with it all.  Conversely, we also don't know what other data we could or should be generating and therefore capturing.  Once we've generated and captured all of this data, what are we going to do with it?  What happens if we haven't captured any data that we can actually use?  Ultimately, one of the intended benefits of our current obsession with Big Data and Data Scientists is how they enable us to actually focus down on a specific, highly tailored sub-set of the overall picture - our slice of the pie as it were.  If we don't have the ingredients for our pie, we will never get a slice of it...


EthanJewett Unknown Unknowns.png

Known Knowns

I spotted an interesting exchange on Twitter not too long ago where Ethan Jewett was (I think!) trying to make a point about how we capture data.  I didn't manage to track the whole exchange (some observers might have suggested Ethan was having a drunken conversation with himself!) however I did take away some sense of agreement with this tweet.  It really piqued my interest in the whole Big Data thing (as well as helping me to finally make a bit more effort to complete this post).


All of this got me thinking, and it reminded me of an aspect of Quantum Mechanics that I thought was quite appropriate to our current Big Data world and especially Ethan's comments.  I was idly wondering about how we cannot measure or capture all data and, in fact, choosing to measure one aspect of a system could lead us to miss other, important measurements that we actually do need and would find useful.  I'm officially naming this "Jewett's Data Uncertainty Principle".

 

My Slice of the Pie

The challenge for all of our 'new' data scientists is how they take all of the data and information available at their fingertips and turn it into something useful.  Just how do we capitalise on the sheer volume of information combined with processing power at our disposal?  At a UKISUG Conference a year or two ago a colleague was speaking to a senior customer representative, who had asked what HANA could do for them - the answer was "what do you want it to do for you?"  I have seen lots of Twitter traffic in recent weeks following a similar vein, where SAP users are struggling to understand what the actual use-cases for HANA can be.  That suggests they don't understand what Big Data is and more importantly, what it can offer.


This is one of the key challenges with the current state of Big Data, IMHO.  We've reached a brave new world where almost anyone can access almost endless amounts of data; they can generate almost endless amounts of data; and then anyone can consume and mash all of this data up into all sorts of random results.  What is the point and where is the value in all of this data?  How do enterprises get value out of this data wrangling?  Are we creating roles for data scientists that are somewhat self-serving?


As a rather pointless example, I discovered LinkedIn InMaps recently and duly generated my network map...  Wow, doesn't it look impressive with all of my connections there on one screen?

MyLinkedInMap.jpg

The problem is though, what does it do?  What's the point?  What value does it create or add?  This is effectively my slice of the much larger LinkedIn data pie but it doesn't really serve much purpose.  To make it useful, it needs something else added, some extra context.  As soon as you start talking about context in relation to data and information, things start getting interesting fast...


It's all about Context

I'm pretty sure Vishal Sikka said something along these lines last year some time.  No doubt I have it as a favourite tweet, SCN bookmark, saved to My Pocket...  Ok, I know I've got it hidden somewhere anyway.  The point is, often just one element of data on its own is near meaningless but add another element, another dimension and suddenly it becomes valuable and of use.


As a real world example, here in the UK on our motorway network we have overhead gantry signs that display useful information.  Often, on a journey you will see a message such as "To junction 18 - 22 minutes" with the idea that you can then gauge roughly how well the traffic is moving.  However, there is a problem with this.  You are only getting one dimension or measurement.  It's like a scalar value - it means something but isn't easy to interpret in isolation.  Now, on some of the overhead signs we have, there is more space and instead you get "To junction 18 - 25 miles, 22 minutes".  This extra dimension, which turns our data into a vector type value, suddenly enables a better interpretation of the information represented.  In your head you can do a rough calculation to determine if the traffic is running at or below the speed limit (70mph in the UK - I base my calculations on 60mph though, which is a mile per minute): 25 miles in 22 minutes is a little faster than a mile per minute, so the traffic is moving close to the limit.  Now that is useful!


The above example is a clear showcase of how bringing more than one source of data (a constant distance between sign and junction) together with a dynamic source of data (current motorway speed) delivers a compound piece of information that is useful to someone.  Let's extrapolate this example out a bit though into what might happen in future...  What if the sat-nav systems in our cars could tap into this real-time data and perform calculations and decisions accordingly?  Would we see journey times being much more accurately estimated?  If we added in another dimension, such as weather or local events, which we know will impact traffic, then we suddenly have a multi-dimensional source to base decisions on.  We are already seeing this sort of technology appearing - I should be taking delivery of a new Audi A6 in a couple of weeks.  Nothing out of the ordinary but it has an 8-speed automatic gearbox and on-line integration with Google maps - this combination allows the car to look ahead and determine if it is worth changing gear.  So, if you are approaching a T-junction in 4th gear, it won't bother changing up to 5th as it knows you will be slowing again soon and hence it is more economical to hold the current gear ratio.  It might not make a massive difference but consider if each and every single car on the roads was able to do similar things and more by using multi-dimensional decisions?


Commercial Examples

I now regularly attend a JavaScript MeetUp in my hometown of Liverpool.  One of the last sessions was about D3.js and it led me to this website - Sea Level Research. This is another example of how bringing multiple sources of data together and applying some rules and logic can deliver tangible business benefit.  I suspect it could potentially deliver environmental benefits over the long term too.


This area of using multiple sources of data, often from completely unrelated areas, is how I see the Big Data movement moving forward and no doubt how those who have always been close to it have always understood it.  It requires a bit of a stretch in how you understand the word Big though, as you don't necessarily end up with vast volumes of data but instead maybe vast sources of small, finite information.


SAP Users need to re-think how they are approaching their use of Big Data and indeed HANA.  If it is deployed to simply speed up BI, they have missed the point.  Whilst having your dunning run completed in minutes rather than days is great, where is the value add?  I'm not aware of anyone in the SAP world who is sat staring at their SAP system waiting for a dunning run to complete...  However, I suspect if a financial controller could begin to predict and take proactive, mitigating decisions early in the dunning process with customers based on multiple sources of information, some people will start getting excited.


The Answer?

Finally we get to the end and no doubt you wonder what I think Big Data is?  Well, I don't imagine it would generate as much interest or excitement if it was called "Multi-Source, Multi-Dimensional, Intelligent, Decision-Making Data" would it?

 

 

Image sources

Image: BusinessReportAndGrowthGraph.jpg | Author: cuteimage | Link: freedigitalphotos.net

A Programming Model for Business Applications (1): Assumptions, Building Blocks, and Example App


When you start with HANA Cloud Platform, you can find a vast amount of documentation, tutorials, and online trainings. This information makes it easy to get your first application up and running. However, during our last months of developing on HCP, we found that there is a gap between the examples and a real-life application. This is, of course, no wonder: when you start to build a real-life application, you have to think about your programming model and application architecture. In this blog series, I would like to present a programming model that fits business applications under the assumptions listed below.

 

The programming model that I discuss in this blog is "implementation-agnostic". But to make it more tangible and discuss it based on a concrete example, I implement the model using the SAP HANA native development approach, with HANA CDS for defining the data models and HANA XS (XS OData and XS JavaScript) for implementing the server side. For the UI side, I implement a small SAP UI5 mobile UI.

 

The programming model described in this document is based on the following application characteristics that apply for enterprise business applications.

 

      General assumptions:

  1. UIs / external web services are not one-to-one images of the data stored in the database. The data model stored in the
    database may serve as the basis for multiple UIs / external web services. As a consequence, a data transformation from the database tables to the UI and vice versa has to be performed.
  2. Security (restriction of data elements and data records) and consistency enforcement must be implemented on the server, not only on the UI client.
  3. (a) Application logic is performed on the server (not in the UI client). (b) Application logic can be written in SQL (“code pushdown”) or in a JavaScript. (c) Some application logic may also run redundantly on the UI client to provide early feedback to the user.
  4. Multiple entities can be changed within a roundtrip. As a consequence, entities provide a common transaction (phase) model. Application logic must be executed as a “bundle” when multiple instances are changed.

    Performance assumptions
  5. Roundtrips between UI client and server are an expensive resource (~ 100 ms). As a consequence, data are written/read
    with one roundtrip.
  6. Reads/writes between the database and the programming language container are expensive (~ 1 ms) in comparison to a read/write within the programming language (<< 1 microsecond).

    Assumptions on state and concurrency
  7. Stateless communication between frontend (browser) and server: The services provided to clients are stateless, in a sense that the runtime maintains no application server state. If an application needs to store state, for example an intermediate result of a multi-step interaction,
    the state information needs to be stored in the database in a draft version or on the client.
  8. The database performs COMMITTED READs (only committed data are read, except from the transaction that actually changes the data, in contrast to a DIRTY READ).
  9. (a) During the transaction, locks are kept on database level. (b) Concurrency handling is (optionally) implemented using a delta detection mechanism (eTags). (Locking on application level is not required)

 

 

Assumptions 1 to 4 state that application logic has to be performed on the server side. This
application logic consists of:

  • Access control
    • for elements: restrict access to elements from the entity (example: expose columns for a public address book from employee/address entities)
    • for instances: restrict instances from an entity (objects that I am allowed to see)
  • Control of public/private elements, decoupling, layering
  • Calculated elements – elements that are not persisted in the database but calculated at request
  • Format transformation (date/time, alphanum, …) and conversions (currency, quantity, time/calendar, …)
  • Data joins and unions
    • Merge data from different entities, in particular read information that is managed in master or configuration data entities,
    • Including data filtering of language-dependent texts, filtering of data that is relevant for a particular date/time, a particular role, specialization/generalization, etc.
  • Responsibility determination (objects I am responsible for in a particular role, for example “my”, “myTeam”, …)
  • Application Logic: Application logic can be classified as
    • Property logic:  Property logic provides information whether data can be updated, are mandatory, etc. depending on the lifecycle of an object.
    • Validation logic:  Validations implement check logic that evaluates the data consistency. Validation logic can add messages (errors, warning, info, success) to a message service (Example: do not save an order w/o valid account information). In addition, they can return an indication that the
      object is in a critical state and that the transaction cannot be saved (in order to prevent loss of data, violate legal or business constraints, etc.)
    • Determination logic: HANA technology calls for implementing element calculations as "on the fly" calculations of non-persisted "view" elements with database means (SQL expressions, "code pushdown") wherever possible; a sketch follows this list. However, the advent of HANA technology does not influence legal or business requirements, so there are still use cases where element calculations have to be persisted in the database.
      Examples are:
      • The element calculation is only a default value and can be overwritten by the user, so the result must be persisted. (Example: New account: based on the input of the address and other properties, the sales territory, sales organization, and responsible is determined. These data can be overwritten by a user.)
      • The result of the calculation is based on data that may change over time, in other words, the calculation is time dependent, and the element must represent the result of calculation at this point in time and therefore be persisted.
      • Performance considerations: Calculations are time-consuming and so they are done once during change of the business object instead of repeating them every time the entity is read.
    • Action logic: Action logic is called from a UI, external service or background process and typically starts a particular operation or process step.
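To make the "code pushdown" option mentioned under determination logic above more concrete, here is a minimal sketch of a calculated, non-persisted element implemented as a plain database view; the table and column names are only illustrative and are not the names generated by CDS:

  -- The item net amount is computed on the fly from persisted columns at read time.
  CREATE VIEW sales_order_item_read AS
      SELECT item_id,
             list_price_amount,
             discount_percent,
             list_price_amount * (1 - discount_percent / 100) AS net_amount
        FROM sales_order_item;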

 

 

 

F1.png

Figure 1: Service Adaptation Layer and Business Object Layer

 

 

 

As mentioned before, I will show an example implementation using the SAP HANA native development approach with HANA CDS for defining the data models and HANA XS (XS OData and XS JavaScript) for implementing the server side. For the UI side, I implement a simple SAP UI5 mobile UI (split app).

 

So, on the server side, the boxes in Figure 1 correspond with the following artifacts:

Box in Figure 1 | Implementation in XS
Protocol Handler (e.g. OData) | XS OData Service
Service Adaptation: Read service | Database View (CDS)
Service Adaptation: CUD Service | JavaScript Library (exit implementation of the XS OData Service)
BO: BO Read | Database View (CDS)
BO: Table | Database Table (CDS)
BO: CUD Services | JavaScript Library
BO: Business Logic | JavaScript Library / SQL procedure

 

 

At runtime the service provider simply interprets the client request (for example the filter, order clause, etc.), reads the data from the database view and converts the result into the protocol format (for example OData). In Figure 1, the boxes labeled with “Read service” and “BO Read” are implemented as database views.

 

For a write request, the data transformation has to be implemented. In Figure 1, the boxes labeled with “CUD services” and “CUD / Action API” and “Business Logic” are implemented in a programming language (for example JavaScript), boxes “Business Logic (SQL)” are implemented with database means (for example SQL Script).

f2.png

Figure 2: Entity model of a Sales Application

 

Figure 2 shows the entity model of a sales application. It is simplified in comparison to the models that are used in the SAP CRM or SAP Business ByDesign, but it contains examples for real life “complexity”, such as “realistic” calculated fields and time dependency (here for a customer’s address). We will use this example for a discussion of the assumptions and building blocks introduced earlier and for implementation in the following posts.

 

The (simplified) entity model of the Business Partner is shown in the middle of the figure. It consists of a business partner business object with 3 entities: the Business Partner "main" or "root" entity, a Customer entity that stores customer-specific data, and an Address Information entity that holds the references to addresses (time-, type- and role-dependent). We ignore the fact that the business partner's name and other common elements may change over time and model them in the Business Partner "header" entity. The sub-entities Customer and Address Information show patterns that can be repeated for other business partner roles, for example Employee, Supplier, Contact, etc., and other business partner details, for example time-dependent Name and Identification, Bank Details, Business Partner Relationship, and so on.

 

The Address business object stores an address as a re-usable entity. An address can be used (referenced) from multiple business documents. This, however, means that an address cannot be changed once it was used in a business document. Instead a new address is created when a user changes an address on the UI. Let us have a look at the following use cases and their representation in the entity model:

  1. In the Customer master data UI, a user can enter various addresses for a customer, for example a (default) mail address, a ship-to and a bill-to address. These addresses are represented by three instances of the Address entity and three instances in the Address Information entity
  2. In the Customer master data UI, a user can change an address for a specific date in the future (relocation of the customer). This is represented by two instances of the Address entity and two instances in the Address Information entity with adjacent validity periods.
  3. In a Sales Order UI, the address information is normally automatically determined from the Customer master data, and persisted in the Party entity of the Sales Order business object. However a user may change the delivery address for a particular sales order, because the products shall be sent to the vacation address of the customer. This is represented by a new instance of the Address entity, the reference to this address is stored in the sales order Party entity.

In the simplified model, I have modelled the address as a flat structure. In reality, many address components (for example phone numbers) may have multiple records, so you would model them as separate entities. The postal address may be stored in multiple script codes (Latin, Chinese, …) to support global business.

The Sales Territory business object stores the assignment of customers to sales territories, which are used to manage access and responsibilities for sales operations. In our example application, a sales rep should be allowed to manage only sales orders for customers in territories he/she is assigned to.

The Sales Order business object consists of three entities: the Sales Order (header information), the Item entity and the Party entity. I think that header and item are self-explanatory to everybody who has ever bought something. The Party is introduced as an entity to store all involved parties (for example account, ship-to party, sales representative) in a normalized way. The Party entity allows additional parties to be added flexibly, for example a service performer when it comes to selling services.
Based on the entity model, we will discuss two UIs in the next blog series:

  1. A very simple special-purpose UI that allows the creation of a new private prospect (for example as a mobile UI as a part of a fair application) (Wireframe, Figure 3).
  2. A UI for a sales representative for managing sales orders of “his/her” accounts and territories (SAP UI5 application, screenshot see Figure 4)

 

f3.png

Figure 3

 

f41.png

Figure 4

 

The next blog in this series shows the implementation of the entity model using HANA CDS (in process).

A Programming Model for Business Applications (2): Implementing the Entity Model with CDS


With this post I want to continue the blog series "A Programming Model for Business Applications". If you haven't read the first part of the series, I recommend that you go back to A Programming Model for Business Applications (1): Assumptions, Building Blocks, and Example App and read it first.

 

In this part, I will discuss the implementation of the entity model introduced in Figure 2 of part 1 with HANA Core Data Services (HANA CDS).

Note: In HANA SPS8 the implementation suffers from a couple of limitations in HANA CDS, in particular:

  • Missing support for references between CDS files
  • Missing support for “unmanaged associations” (= ability to define an association on existing elements using “on”)
  • Missing support for “Boolean” data type
  • Missing support for calculated elements in entities

According to my knowledge, these limitations will be removed in SPS9, so it is worth checking the SPS9 documentation as soon as it is available!

 

I implement the model in one CDS file (due to a restriction in HANA SPS8) in four sub-contexts. So, here is the stub of my CDS file:

 

namespace XYZ;

@Schema:'XYZ'

context bo {

 

 

  context common {

   (...)

  };

 

  context address {

    (...)

  };// end of context address

  context businesspartner {

   (...)

};// end of context businesspartner

  context salesterritorymgmt {

   (...)

  };// end of context salesterritorymgmt

  context salesorder {

   (...)

};// end of context salesorder

 

}; // end of context bo

 

 

 

Context Common

In the common context, I implement common data types.

Code lists are implemented as entities, for example:

  entity Currency {

    key Code          :String(5000);

    DescriptionText   :String(5000);

  };

 

Code contains the code representation, for example "EN" for English, and DescriptionText contains the description in the default language.

 

A code list table may not only contain the code and the default description, but also additional elements (attributes), for example:

  entity IncotermsClassification {

    key Code                      :String(5000);

  DescriptionText               :String(5000);

  LocationIsMandatoryIndicator  :bo.common.Indicator;// workaround for "Boolean"

      //            IncotermsClassificationCode - Examples

      //                   CFR    Cost and freight

      //                   EXW    Ex works

      //                   FCA    Free carrier

     };

 

As CDS does not yet support "Boolean" as a native data type, I have defined an Indicator data type; I use this data type whenever a Boolean is required:

type Indicator :Integer;// WORKAROUND: Boolean is not supported

Structured re-use data types are implemented as follows:

  type Amount {

    Currency :association[0..1]to common.Currency;

    Value       :DecimalFloat;

    };

  type Incoterms {

    Classification          :associationto bo.common.IncotermsClassification;

    TransferLocationName    :String(5000);

  };

 

 

 

Sales Order

 

The header (or main, root) entity of the Sales Order business object is implemented as follows:

 

  entity SimpleOrder {

        key ID_                       :Integer64;

    SystemAdministrativeData      :bo.common.SystemAdministrativeData;       

    ID                            :String(35);    

    Name                          :String(256);

        //@EndUserText : { label: 'Posting Date'}

    DateTime                      :UTCDateTime;

    Currency                      :association[0..1]to bo.common.Currency;

    FulfilmentBlockingReason      :association[0..1]to bo.common.FulfilmentBlockingReason;

    DeliveryPriorityCode          :association[0..1]to bo.common.Priority;

    Incoterms                     :bo.common.Incoterms;

    RequestedFulfilmentDateTime   :UTCDateTime;

    PaymentFormCode               :association[0..1]to bo.common.PaymentForm;

    Status                        :bo.simpleorder.Status;

  };

 

 

Discussion:

1) For each BO entity, we introduce a technical key of type big integer (Integer64 in CDS), which is named ID_ (with a dangling "_") and which should not be mixed up with the semantic key, which is in this case represented by the element "ID". Exposing a unified simple key makes the work much easier in the
business logic and the UI implementation, instead of dealing with the "semantic" key, which may differ from entity to entity (for example, many entities have semantic keys that consist of multiple elements).

 

2) Code lists are represented by associations to code list entities. Please note that the resulting database field has the name FulfilmentBlockingReason.Code.

 

3) CDS does not allow (in SPS8) defining inline structured elements (so-called Anonymous structure types) within an entity, for example:

 

entity SimpleOrder {

  (...)

  Status {

   LifeCycleStatus :association[0..1]to salesorder.LifeCycleStatus;

   InvoiceProcessingStatus :association[0..1]to ref1.salesorder.InvoiceProcessingStatus;

  };
};

 

As a workaround, I had to define all structured data types using a type statement and refer to them in the entity.

 

4) CDS does not allow (in SPS8) defining calculated fields directly in the entity. We will come back to this point in part 3 of this series.

 

 

The Item entity is modelled as follows:

  entity Item {

      key ID_                     :Integer64;

   Parent_ID_                  :Integer64;

   SystemAdministrativeData    :bo.common.SystemAdministrativeData;

   Product                     :association[0..1]to bo.product.Product;

   Quantity                    :common.Quantity;

   RequestedFulfilmentDateTime :UTCDateTime;

   ListPriceAmount             :bo.common.Amount;

   DiscountPercentValue        :Decimal(5,2);

  };

 

 

Discussion

The relation to the Sales Order header entity is stored in the element Parent_ID_ (I use here again a dangling “_” to mark this element as “technical”). CDS does not know the construct of a “composition association”, which would be the optimal way of defining the relation between header and item. So I decided to
introduce the Parent_ID_ element. The associations between header and item could be modelled as so-called unmanaged associations in the following way:

 

In the Sales Order entity:

Item :association[0..*]to salesorder.Item on Item.Parent_ID_ = ID_;

In the Item entity:

_Parent :association[0..1]to salesorder.SalesOrder on _Parent.ID_ = Parent_ID_;

 

Unfortunately unmanaged associations are not yet available in CDS in HANA SPS8.

 

The Party is the third entity of the Sales Order business object:

  entity Party {

      key ID_                    :Integer64;

   Parent_ID_                 :Integer64;

   MainIndicator              :bo.common.Indicator;

   PartyRole                  :association[0..1]to bo.common.PartyRole;

      PartyID_                   :Integer64;

   Party                      :association[0..1]to bo.businesspartner.BusinessPartner;

   AddressReference           :association[0..1]to bo.address.Address;

  };

 

Discussion:

1) Associations from Sales Order to Party and vice versa should be added as unmanaged associations (as discussed for Item).

2) Unmanaged associations should also be added to allow direct navigation to the Customer, Employee, and other business partner roles:

      Customer                  :association[0..1]to bo.businesspartner.Customer on Customer.ID_ = PartyID_;

 

 

 

Address

 

In the address context, we implement the Address entity. This entity shall store addresses for re-use in different business objects.

  entity Address {

    key ID_                       :Integer64;

    AddressType                   :Integer;// TODO code

    PreferredCommunicationMediumType    :association[0..1]to bo.common.CommunicationMediumType;

    GeographicalLocation          :address.GeographicalLocation;        

             // TODO-> [0..*] sub nodes

    DefaultName                   :Name;

    DefaultPostalAddress          :PostalAddress;

    DefaultFacsimile              :Telephone;            

    DefaultConventionalPhone      :Telephone;

    DefaultMobilePhone            :Telephone;

    DefaultEmailURI               :String(5000);    

    DefaultWebURI                 :String(5000);

    DefaultWebCode                :String(1);// TODO Homepage, Facebook, Twitter, LinkedIn, Google+

    DefaultInstantMessagingAccountID    :String(5000);

    DefaultInstantMessagingCode   :String(5000);// TODO: Skype, WhatsApp

    Workplace                     :address.Workplace;

  };

 

 

Some of the Elements are structured elements, which are defined in the address context, for example:

  type PostalAddress {

   ScriptCode                :String(1);//TODO C = Chinese, I = Latin, K = Japanese, ...

   DeliveryServiceTypeCode   :String(1);//TODO   Street= '1'; POBox = '2'; Company = '3';

   Country                   :association[0..1]to bo.common.Country;

   Region                    :association[0..1]to bo.common.Region {Code};        

   CountyName                :String(40);

   CityName                  :String(40);

   DistrictName              :String(40);

   PostalCode                :String(10);              

   Street                    :StreetPostalAddress;

   POBox                     :POBoxPostalAddress;

  };

 

Business Partner

 

Let’s continue with the Business Partner. For our simplified model we implement the following entities:

  • Business Partner: We ignore the fact that the business partner's name and other common elements may change over time and model them in the BusinessPartner "header" entity.
  • Address Information: Sub-Entity with cardinality [0..*] that holds the time- and role-dependent address information of a business partner
  • Customer: customer-specific data in the business partner
  • Employee, similar to the Customer, not shown

 

  entity BusinessPartner {     

      key ID_                       :Integer64;

   ID                            :String(20);

   CategoryCode                  :businesspartner.CategoryCode;

   Person                        :businesspartner.Person;

   Organization                  :businesspartner.Organization;

   UserID                        :String(255);   

   Status                        :association[0..1]to businesspartner.LifeCycleStatus;

  };

 

Discussion

1) Associations to sub-entities should be implemented as discussed for the Sales Order:

Customer :association[0..1] to businesspartner.Customer on Customer.Parent_ID_ = ID_;

AddressInformation :association[0..*] to businesspartner.AddressInformation on AddressInformation.Parent_ID_ = ID_;

 

2) A filtered association that points to the current default address should be modelled as an unmanaged, filtered association. Filtered associations are also not yet supported by CDS:

CurrentDefaultAddressInformation1 :association[0..*] to businesspartner.AddressInformation on CurrentDefaultAddressInformation1.Parent_ID_ = ID_ where ValidityPeriod.StartDate < now() and ValidityPeriod.EndDate > now() and DefaultIndicator = 1;

 

The Customer entity is implemented as:

  entity Customer {

      key ID_                       :Integer64;

   Parent_ID_                    :Integer64;

   Industry                      :association[0..1]to bo.common.Industry;

   ProspectIndicator             :bo.common.Indicator;

   BlockingReasons               :businesspartner.BlockingReasons;

  };

 

The Address Information entity is implemented as:

 

  entity AddressInformation{

      key ID_                :Integer64;

   Parent_ID_             :Integer64;

   ValidityPeriod         :businesspartner.ValidityPeriod;

   AddressType            :association[0..1]to bo.common.AddressType;      

   DefaultIndicator       :bo.common.Indicator;

   AddressUsage           :association[0..1]to bo.common.AddressUsage;

      //  Examples:

      //    BILL_TO      Bill-to party

   //    SHIP_TO      Deliv. address

      //    EMPP Employee private address

   Address                :association[0..1]to bo.address.Address;

  };

 

Sales Territory

 

The Sales Territory is implemented as follows:

 

  entity SalesTerritory {

      key ID_                   :Integer64;

   Name                      :String(256);

};

 

  entity Customer {

      key ID_                   :Integer64;

   Parent_ID_                :Integer64;

   TerritoryAssignmentManualOverrideAllowedIndicator   :bo.common.Indicator;

   Customer                  :association[0..1]to bo.businesspartner.Customer;

  };

 

Discussion

As mentioned in the previous discussions, unmanaged associations should be added for navigation from parent to child and vice versa.

 

 

The last entity that we implement for our example application is used to store the access rights (or in other words, the restrictions) for a user:

 

  entity RoleAssigmentRestriction {

        key ID_                  :Integer64;

    UserID                   :String(255);

    AccessContextCode        :String(5000);

    Object_ID                :Integer64;

  }; 

 

A discussion for this entity will follow in the next part of this series.

 

In the next blog of this series, I will discuss my implementation of the read services (calculated fields, access restriction and OData service definitions).


SAP HANA Idea Incubator - Reducing Defective Products in Textile Production with SAP HANA


Hi All.

I came across this scenario and thought of sharing it.

My idea is to reduce defective products in textile production with an SAP HANA solution.

Textile production demands attentive, high-quality work, and faulty products drive up costs. To keep defects and their cost to a minimum, a technical and software infrastructure with mobile integration is needed, and at this point an SAP HANA solution can make the whole process fast and integrated.

This has been proposed in the SAP Idea Incubator, here.

 

Regards,


Cem Ates

Partner and ISV Workshop: Build and Certify an Application for SAP HANA (November 2014)


Workshop.gif

Are you looking to gain an in-depth understanding of the SAP HANA Platform?  Do you plan to build, integrate, and certify an application with SAP HANA?


SAP-HANA.gif

The SAP Integration and Certification Center (SAP ICC) will be offering partners and ISVs an introductory 4-day workshop (November 10-13, 2014) to facilitate a general understanding of SAP HANA.  After each training module, partners and ISVs will reinforce their skills via a series of hands-on exercises to demonstrate their knowledge of the various components for the SAP HANA Platform.  The SAP HANA Enablement Workshop will outline the underlying knowledge needed to allow for the development, integration, and certification of an application with SAP HANA.

 

By attending this enablement workshop, you'll be able to:

  • Understand the end-to-end native application development for SAP HANA
  • Reinforce knowledge with skill-builder modules via hands-on exercises
  • Understand the certification roadmap and process for application certification
  • Leverage a 30% discount for application certification to enable Go-to-Market
  • Engage with product experts and certification team via Q&A sessions

 

Registrations Fees and Deadlines:

 

Due to the popularity of this enablement workshop, seating will be limited and registration will be on a first-come, first-served basis.  If you would like to make a group booking, please submit separate registrations for each individual of your organization.

 

 

INDIVIDUAL REGISTRATION
Registration Type | Dates | Fees | Registration
Early Bird | Before October 10, 2014 | $2,000.00 USD | Sign-up here!
Regular | Before November 10, 2014 | $3,000.00 USD | Sign-up here!

GROUP REGISTRATION - THREE OR MORE GET A DISCOUNT
Registration Type | Dates | Fees | Registration
Early Bird | Before October 10, 2014 | $1,500.00 USD | Sign-up here!
Regular | Before November 10, 2014 | $2,500.00 USD | Sign-up here!

 

Event Logistics and Agenda:

 

 

Dates: Monday, November 10, 2014 to Thursday, November 13, 2014
Time: 9:00 AM to 5:00 PM (Pacific)
Location: SAP Labs, 3410 Hillview Avenue, Palo Alto, CA 94304, Building 2 - Baltic Room

 

The agenda for this enablement workshop will highlight some of the following topics:

  • Introduction to SAP HANA Development Platform
  • SAP HANA Application Development Tools: SAP HANA Studio and Eclipse
  • Introduction to SQL Basics and Debugging
  • Introduction to SAP HANA Native Development
  • Introduction to Data Modeling with SAP HANA
  • How to Certify your Application with SAP HANA

 

Take advantage of this opportunity to plan, build, and explore the various certification programs for SAP HANA and leverage a 30% discount when submitting an application for certification with the SAP HANA Platform!  For any questions or inquiries relating to this enablement workshop, please contact icc-info@sap.com.

Git HANA - A free, open-source Github client for SAP HANA


Git-HANA-Screenshot.jpg


Over the last few months, working on the metric² open source project, I have been frequently updating the GitHub repo. As a heavy XS Web IDE user, this entailed exporting or copying the contents of the files from the package into my local GitHub repository for the project and subsequently committing the files from there. Since there is a small disconnect between the source (my HANA packages) and the destination (GitHub), I often like to see which changes are due to be committed, the differences between the files, or just compare the files between the two systems.


Being overly dedicated to building solutions to some of my workflow challenges (see here, here and here), I created yet another small HANA native app called Git <> HANA. The application allows you to compare files between your local HANA package and your (or any other) GitHub repo, and it also lets you commit files directly from the UI to GitHub, and vice-versa. If a file does not exist, it will create it for you (on either side). There are a couple of other cool features which you can read about below, or watch the little video I created.

 

If you are a web IDE user it's quick and convenient to use, and I am convinced it will make your HANA + GitHub integration easier (I am also hoping we will see more open source native HANA apps on GitHub as a result!)

 

 

Features of Git <> HANA

 

- Compare files between HANA and Github

- Compare inline or side by side

- Commit files from HANA to GitHub

- Commit/activate files from GitHub to HANA

- Repo/branch selection

- Native HANA application

- Easy HANA package installation

- Open source

- handles .xs* (e.g. .xsaccess, .xsapp) files (which your file system probably does not like!)

- Image comparison

- File browsing can be done via the GitHub repo or your HANA package

 

You can download the app package here (newsletter sign up requested so I can keep you up to date with the app) or check out the source files here.

 

If you think this would be helpful or would like to see any other features, or would like to contribute to the source ... EXCELLENT, please just let me know

 

Screenshots


  

 

Use the HANA Package Browser or GitHub repository as a reference.

 

 

Push files from HANA to GitHub or from GitHub to your local HANA package.

 

 

 

Compare Files side by side, or inline

 

 

 

 

Package Install Instructions

 

- Download the package

- Open Lifecycle manager (http://<HANA_SERVER>:PORT/sap/hana/xs/lm/)

- Click on Import/Export menu

- Click Import from File

- Browse to the downloaded file

- Edit the index.html file and specify your github username/password (or leave it blank and enter these details using the settings)

The Journey of an Algorithm : Look-A-Like Customers with Social Patterns


As explained in this blog on omni-channel consumer engagement, the focus is increasingly on solutions that can solve unique but complex use cases to cater to the expectations of marketing analysts or e-commerce applications, which is to make sense of the sheer volume of data streaming in from various channels (think 3Vs: Volume / Variety / Velocity).

 

This is one area where HANA can be used innovatively to solve hitherto unknown problem domains. In this blog, I try to trace through one such problem that we solved based on the above-mentioned blog: how we expanded the use case, how different approaches were experimented with, and how a unique algorithm was invented on HANA to meet the requirements.


The Origin


The problem originated from a simple question: what if an e-commerce site were to recommend to the user what people with similar "interests/Likes" have already bought? This looked like an easy problem to solve, but when we look at it realistically from a social media point of view, there can be hundreds of users for each similar Like, each having bought a completely different product, so we may end up recommending the entire product catalog to this user!


Then, after some more research, we realized that the behavior of the user can be matched more closely if we combine more than one Like together and see which users overlap this combination the most; i.e., people with likes similar to Wildlife, National Geography and Camera might have more things in common when it comes to choosing a lens for the camera than people matched on individual Likes. This solves two problems: identifying a behaviorally similar set of people, and reducing the choice of recommendations in a rational way.


It finally resulted in the problem statement: "If the current user x has y Likes (Lx1..Lxy), and there are other users in the same domain with their own Likes (L11..Lmn), what are the most overlapped groups of Likes, considering both the number of Likes in each group and the number of people following that group."


The following example gives an illustration: have a look at the current user's Likes, and how many other users in the target group share how many similar Likes.

 

b1.png

 

The problem


Out of the problem domain mentioned above, the key challenge to be solved is: "How can we find out, out of the whole space, the (topmost) combinations that are common?" Of course, in order to do that, we may need to compare the current user against each and every combination of every other user, which would be combinatorially taxing. So, we needed to look for a quicker and more performant approach.


Also, depending on the specific use case, some would give more weight to the number of Likes in the group, and some to the number of users per group, so we extended the problem to support both metrics. Say there are 5 users having 10 Likes in common with the current user: that would indicate a small number of users but a strong similarity (we call this Match). Or, there could be 100 users having 3 similar Likes, which indicates a huge trend with a limited similarity (we call this Support). It simply depends on the use case and the Likes involved, and hence we need to provide both metrics in sorted order, so that an analyst can make a better choice.


The following diagram explains the entire workflow of the use case we want to solve, from the bottom, where we have the raw data, to the top, where we have the product recommendations:

 

b2.png


Apriori Modified


We started looking into the Apriori algorithm, as it solves a similar problem in the domain of market basket analysis: individual baskets are recorded as users buy them, and the algorithm can predict the probability of the products likely to be bought.
Say we record the following purchases with Apriori:
- {P1, P2, P3}
- {P2, P3}
- {P1, P2}


Now we can ask for the probability of someone buying {P2} if they have already bought {P1}, and calculate how much support and confidence we have in that.
In this example,
for {P1} -> {P2}: Support is 2, Confidence is 100%,
i.e., people always buy P2 if they buy P1, based on 2 occurrences in the record set.


But Apriori has no concept of an input set as required by our problem (i.e., take the Likes of the current user and compare the rest of the sets against them for matches). If we could somehow force {OctoberFest, Ferrari, Photography, Nikon} onto the LHS, Apriori would automatically spit out the necessary support combinations on the RHS, which would solve our primary problem of bubbling up all the combinations. But that cannot happen, as at least one value out of that set has to be on the RHS (see the Normal method in the diagram below)!

 

The trick we used to make that happen is to add a dummy Like called “Target”, which ensures we get all 4 Likes on the LHS. From here, Apriori takes over and actually gives us all the support combinations. In fact, it gives all combinations, but we can set a pre-defined RHS filter, which is “Target”. It works, but with a major disadvantage: the algorithm still has to run through all combinations, far beyond what is necessary for our purpose. We were even considering extending Apriori itself at the code level until we struck upon a different path altogether.

 

b4.png

 

 

The Final Solution


As we were brainstorming for a better solution that is economical and performant, we thought: why not solve the problem from a DB/SQL point of view rather than as a programmatic algorithm? That anyway suits us fine for showcasing the capabilities of HANA and its number-crunching abilities.


So we based the solution on the premise of giving each Like a unique identity, which in turn gives each combination a unique identity; then we simply count them. In the above example, let us map the input Likes to L1, L2, L3 and L4; the resulting calculations would look like below. With this, we can calculate the direct combinations quite simply and quickly.

 

 

b5.png

Though this aggregation gives us the direct combinations of Likes, the “hidden” combinations still need to be found. For example, if we have
{L1, L2, L3} = 10 occurrences
{L1, L2, L4} = 20 occurrences,
then we have to deduce that {L1, L2} = 30 occurrences. We overcame this problem using L-script within HANA with a binary comparison method.
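
To give a feel for that binary comparison idea, here is a minimal JavaScript sketch (illustrative only; the actual implementation is an L-script procedure inside HANA, and all names and data below are made up). Each Like gets one bit, a user's shared Likes become a bitmask, direct combinations are counted per mask, and the support of a smaller "hidden" combination is derived from all masks that contain it.

// Illustrative sketch only -- not the actual HANA L-script.
// Map the current user's Likes to bits: L1=1, L2=2, L3=4, L4=8.
var likeBit = { L1: 1, L2: 2, L3: 4, L4: 8 };

// Shared Likes per other user (already intersected with the current user's Likes).
var userSharedLikes = [
  ["L1", "L2", "L3"],
  ["L1", "L2", "L4"],
  ["L1", "L2", "L4"]
];

// 1. Count the direct combinations: one counter per distinct bitmask.
var direct = {};
userSharedLikes.forEach(function (likes) {
  var mask = likes.reduce(function (m, l) { return m | likeBit[l]; }, 0);
  direct[mask] = (direct[mask] || 0) + 1;
});

// 2. Derive a "hidden" combination: a smaller mask is contained in a larger one
//    if (small & large) === small, so its support is the sum over all containing masks.
function support(mask) {
  var total = 0;
  Object.keys(direct).forEach(function (key) {
    var m = parseInt(key, 10);
    if ((mask & m) === mask) { total += direct[key]; }
  });
  return total;
}

support(likeBit.L1 | likeBit.L2);  // 3 in this toy data set ({L1,L2,L3} once + {L1,L2,L4} twice)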


Performance and Summary


The key advantage of this approach is performance, especially when it is run on a flat data model without joins. In our experiments on a typical development box, 1 million Like transactions could be processed in under a second to calculate the matches and supports (with the L-script for hidden matches obviously taking most of the time).


Once we have these combinations, it is only a matter of fetching the right products used by each combination and parameterizing the query so it can be tuned for a high support and/or a high match.


So the key takeaway, at least for us, is that we can take seemingly simple problems and extend their scope in such a way that the problem itself becomes as innovative as the solution that HANA makes possible.


Your thoughts are welcome, especially around additional use cases in this space, both functional and technical, that could enrich the marketing domain.

Dependency Graph


This blog deals with the dependency graph of an object (or objects), which is required to maintain the list of objects in order of their dependency on a base object.

Creating DB artifacts frequently involves using other DB artifacts e.g.

  • Procedures may use tables, views, column views, other procedures
  • Column views use underlying column store tables/other column views/procedures

Thus we essentially establish a directed acyclic graph of objects and their dependencies. At any point in time, it can be useful to establish a hierarchy for any object in this graph in order to check its dependents at various levels. Establishing such a hierarchy means traversing from this node, through its dependents, up to a node with in-degree one. Since this is a directed acyclic graph, there might be more than one hierarchy for a particular object, i.e., more than one node with in-degree one. In such a case, instead of maintaining a hierarchy, we can choose to maintain an ordered list of objects based on their dependencies, thereby essentially reducing the problem to ensuring that a child object appears before its parent object when we traverse from the base object up to the "grandest" parent, i.e., the top-level node(s) with in-degree one.

 

Why is it necessary

1. Typically, in an environment such as HANA's, we try to push most of the processing logic into the backend, which means there are a lot of DB objects to take care of. In a development environment with more than just a few developers, we frequently have the same DB object(s) being used by multiple developers, e.g. a table T being used by two procedures, each belonging to a different developer, John and Jim, and these procedures in turn being used to establish a complex graph of dependencies. John casually makes a change to the DB object without caring about its repercussions. It might invalidate the entire chain of objects created by Jim, say, if John drops a column from the shared table that is used by Jim's procedures. Jim might have multiple such DB objects using the changed table, all of which may now have been invalidated, so many that it might not be possible for Jim to remember each of them by heart; much worse if the change to the table has had a ripple effect across the whole circuit of dependencies established by this base object. Now, Jim has two problems:

  • remembering the list of objects that are using the changed table, e.g. procedures X, Y and Z might each be using table T independently
  • the order in which these objects are used, e.g. Procedure A might call Procedure B, which might call Procedure C, which uses table T; to make things more complex, table T might also be used by Procedure B directly

Now, John has a bit of homework to do to avoid making Jim's life miserable again. Before making such a change, a utility like the dependency graph could have helped him establish the dependency graph for table T, and he could then have taken a call on whether to go ahead with that change.

Again, all is not lost for Jim either, if he can establish the graph for this object and find out which object(s) need to be reworked.
2. Also, a lot of applications divide their consumption into design time and runtime. The runtime DB artifacts are created during activation, using the metamodel of the runtime that is stored as metadata. If changes to the runtime have been proposed in the design time since the last activation, the HANA system will not be able to tell you until the proposals are actually activated. However, the metamodel does have a way to tell whether a modification/enhancement has been made post activation. So a change log is required to maintain the list of objects that have been changed, as well as the rest of the objects that have been rendered logically invalid because of the dependencies between them. The added advantage is that we do not have to regenerate the entire runtime; we can deal only with the (directly/indirectly) affected objects.

 

 

Proposed solution:

Consider the following graph of object dependencies:

Key features from this graph

1. It is an acyclic directed graph

2. Leaf nodes are always tables

3. Non-leaf nodes can also be the base objects that trigger the chain of invalidation

 

Problem

Build the dependency graph for the table P.

 

Assumption(s):

1. The leaf nodes are tables; tables are the lowest-level database artifacts that, when changed, invalidate the graph of objects

2. Cyclic dependencies are not considered, since the SAP HANA database does not support recursion/cyclic dependencies

3. Dynamic SQL is not taken care of

4. In terms of graph theory, there can be at most one base object in a particular path to the topmost node. That base object is the first vertex of that path.


Key idea(s)

1. A child object should get activated before its dependent(s) at any level in the graph.

2. We use Breadth First Search to prepare the relationship data between the nodes in the graph. We do not use Depth First Search, as it requires recursion.



Process:

1. Build parent-child relationship data between the nodes in the graph:

Parent-Child List

Node   Parent Node   Level
P      N             1
P      Z             1
N      I             2
N      J             2
Z      V             2
I      F             3
I      J             3
J      Y             3
J      A             3
V      U             3
F      C             4
Y      Z             4
Y      X             4
Y      W             4
A      -             4
U      -             4
C      A             5
Z      V             5
X      V             5
W      V             5
W      U             5

2. Prepare the first list of processed nodes from the above structure; we call this List A. We push the base object into the list and mark it Processed. For the rest of the list, we propose the following approach. To decide whether a node is processed or not, we look at the Parent Node column in the Parent-Child List prepared in step 1. For each Parent Node in the list above, we check whether it has any child objects that have not yet been processed, that is, that have not already been included in this list of processed nodes (List A). If there are such nodes, we include those unprocessed child nodes first and mark them as Unprocessed, and then include the current parent node and mark it Processed.

Let's see how this goes.

     i. we push P into the list and mark it Processed. So, List A looks like:

Node   Status (Processed or not)
P      Processed

      ii. we take the first Parent Node from the Parent-Child List, that is N, and check whether it is already included in List A. If not, we look for its children. We find that P is the only child of N, and P has already been included. So we push N into the list and mark it Processed. So, List A looks like:

Node   Status (Processed or not)
P      Processed
N      Processed

     iii. we take the second Parent Node from the Parent-Child List, that is Z, and check whether it is already included in List A. If not, we look for its children. We find that P and Y are the children of Z. P has already been included, but Y has not. So we push Y into the list first and mark it UnProcessed, and then push Z and mark it Processed. So, List A looks like:

Node   Status (Processed or not)
P      Processed
N      Processed
Y      UnProcessed
Z      Processed

.. we continue this approach until we have iterated through all the elements in the Parent Node column of the Parent-Child List. At the end, this is how List A looks:

List A

Node   Status (Processed or not)
P      Processed
N      Processed
Y      UnProcessed
Z      Processed
I      Processed
J      Processed
X      UnProcessed
W      UnProcessed
V      Processed
F      Processed
C      UnProcessed
A      Processed
U      Processed

3. Now, from List A, we keep deriving subsequent lists, checking only the unprocessed nodes along the same lines, until all the nodes are processed.

List A Version 1

Node   Status (Processed or not)
P      Processed
N      Processed
Y      UnProcessed
Z      Processed
I      Processed
J      Processed
X      UnProcessed
W      UnProcessed
V      Processed
F      Processed
C      UnProcessed
A      Processed
U      Processed

List A Version 2

Node   Status (Processed or not)
P      Processed
N      Processed
J      UnProcessed
Y      Processed
Z      Processed
I      Processed
X      Processed
W      Processed
V      Processed
F      Processed
C      Processed
A      Processed
U      Processed

List A Version 3

Node   Status (Processed or not)
P      Processed
N      Processed
J      Processed
Y      Processed
Z      Processed
I      Processed
X      Processed
W      Processed
V      Processed
F      Processed
C      Processed
A      Processed
U      Processed
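
To make the list derivation above more concrete, here is a small JavaScript sketch of the same "child appears before its parent" idea (illustrative only; the actual implementation further below is SQLScript, and the tiny graph used here, a table T used by procedure P1 and view V1, with procedure P2 calling P1, is made up for the example).

// Illustrative sketch of the ordering idea; not the SQLScript implementation below.
var parentChild = [            // [node, dependent object] pairs, like the Parent-Child List
  ["T", "P1"], ["T", "V1"], ["P1", "P2"]
];

function childrenOf(parent) {
  return parentChild
    .filter(function (pc) { return pc[1] === parent; })
    .map(function (pc) { return pc[0]; });
}

// One expansion pass: before an unprocessed node is added, any of its children
// not yet in the list are added first and marked unprocessed.
function expandOnce(input) {
  var out = [];
  var inOut = function (n) { return out.some(function (e) { return e.node === n; }); };
  input.forEach(function (e) {
    if (inOut(e.node)) { return; }
    if (!e.processed) {
      childrenOf(e.node).forEach(function (c) {
        if (!inOut(c)) { out.push({ node: c, processed: false }); }
      });
    }
    out.push({ node: e.node, processed: true });
  });
  return out;
}

// Seed the list with the changed base object plus all dependents, then keep
// expanding until nothing is left unprocessed (the "List A Version n" iterations).
var list = [{ node: "T", processed: true }];
parentChild.forEach(function (pc) {
  if (!list.some(function (e) { return e.node === pc[1]; })) {
    list.push({ node: pc[1], processed: false });
  }
});
while (list.some(function (e) { return !e.processed; })) {
  list = expandOnce(list);
}
// list order: T, P1, V1, P2 -- every object appears after the objects it depends on.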


Limitations:

  • Dynamic SQL is not covered - naturally, the dependencies of parent objects on objects used in dynamic SQL queries are not captured in the metadata views of the SYS schema; to capture such dependencies, methods like parsing the SQL would have to be resorted to, which in turn calls into question the very premise of dynamic SQL.
  • Deletion of base objects - base objects that have been deleted no longer appear in the metadata views of HANA, which makes the detection of the invalidated objects impossible. We might need a custom table to address such a scenario.

 

Technical Implementation

The technical implementation involves querying the metadata views of the SYS schema; the main view involved is OBJECT_DEPENDENCIES, as used in the code below.

Code Piece

set schema chlog;
drop table relation;
create global temporary table relation(node nvarchar(32), parent_node nvarchar(32), level int, object_no int, is_processed nvarchar(32));
drop table changelog_tab;
create column table changelog_tab(object nvarchar(32));
drop table gt_chlog1;
drop table gt_chlog2;
create  global temporary table gt_chlog1(node nvarchar(32),is_processed int,order_no int);
create  global temporary table gt_chlog2(node nvarchar(32),is_processed int,order_no int);
insert into changelog_tab values('P');
delete from relation;
drop table chlog.attr_view_relation;
create table chlog.attr_view_relation(an_view nvarchar(256),at_view nvarchar(256));
insert into chlog.attr_view_relation values('chlog/N','chlog/O');
insert into chlog.attr_view_relation values('chlog/E','chlog/G');
drop view object_dependency;
create view object_dependency as select * from object_dependencies where dependency_type=1 and base_object_type in ('PROCEDURE','VIEW','TABLE') and dependent_object_type in ('PROCEDURE','VIEW','TABLE') and dependent_object_name not like '%/hier/%'
union
select '_SYS_BIC',right_table,'VIEW','_SYS_BIC',entity_name,'VIEW',1 from chlog.attr_view_relation ;
drop procedure check_log;
create PROCEDURE check_log ( )  LANGUAGE SQLSCRIPT  SQL SECURITY INVOKER  DEFAULT SCHEMA chlog
-- READS SQL DATA
AS
/*****************************  Write your procedure logic
*****************************/
i int;
lv_object nvarchar(32);
lv_base_object_name nvarchar(32);
flag int;
lv_no_of_unprocessed int;
lv_objects_left_at_level int;
arr_parent nvarchar(32) array;
arr_node nvarchar(32) array;
arr_status int array;
arr_order_no int array;
lv_cnt int;
lv_cnt1 int;
current_node nvarchar(32);
current_order_no int;
current_status int;
cnt_unprocessed int;
lv_max_order_no int;
lv_max_at_level int:=0;
lv_is_exist int := 0;
lv_any_more int :=0;
begin
DECLARE EXIT HANDLER FOR SQLEXCEPTION SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
select top 1 object into lv_object from changelog_tab;
i:=0;
lv_base_object_name:=lv_object;
flag:= 1;
truncate table relation;
insert into relation select base_object_name node, replace(dependent_object_name,'/proc','') parent_node, i+1 level,:lv_max_at_level + row_number() over () object_no ,null is_processed  from chlog.object_dependency where base_object_type in ('PROCEDURE','VIEW','TABLE') and dependent_object_type in ('PROCEDURE','VIEW','TABLE') and dependent_object_name not like '%/hier/%' and base_object_name in (select object from changelog_tab) and dependency_Type = 1;
i:=1;
while flag =1 do
  --get maximum object_no for the current level
  select case when max(object_no) is null then 0 else max(object_no) end into lv_max_at_level from relation where level= i+1;
  lv_is_exist:=0;
  --get 1st level dependents
  select count(*) into lv_is_exist from chlog.object_dependency where base_object_type in ('PROCEDURE','VIEW','TABLE') and dependent_object_type in ('PROCEDURE','VIEW','TABLE') and dependent_object_name not like '%/hier/%' and base_object_name=lv_base_object_name and dependency_Type = 1;
  if lv_is_exist = 0 then
    --if no such dependents exist then this is the root of the hierarchy, push a dummy record for this base object with null as the parent object
    insert into relation values(lv_base_object_name, null, i+1 ,:lv_max_at_level + 1,'X');
  else
    --if dependents exist enter the first level dependents
    insert into relation select base_object_name node, replace(dependent_object_name,'/proc','') parent_node, i+1 level,:lv_max_at_level + row_number() over () object_no ,null is_processed from chlog.object_dependency where base_object_type in ('PROCEDURE','VIEW','TABLE') and dependent_object_type in ('PROCEDURE','VIEW','TABLE') and dependent_object_name not like '%/hier/%' and base_object_name=lv_base_object_name and dependency_Type = 1;
  end if;
  --mark this node, present as parent node in other rows of the current snapshot, processed
  update relation set is_processed= 'X' where parent_node = :lv_base_object_name;
  --routine run to mark those parent nodes that have been newly added after their older instances have already been processed and had therefore been marked processed for that snapshot
  lt_parent = select parent_node from relation where is_processed='X';
  update relation set is_processed='X' where parent_node in (select parent_node from :lt_parent);
  --check for any objects left unprocessed at the current level
  select count(*) into lv_objects_left_at_level from relation where parent_node is not null and level= :i and is_processed is null;
  if lv_objects_left_at_level > 0 then
    --if any are left, then select the next unprocessed object at the current level
    select top 1 parent_node into lv_base_object_name from relation where level= :i and is_processed is null and parent_node not in (select distinct node from relation a where a.node is not null) and parent_node is not null order by object_no;
    flag:=1;
  else
    --if not, then check if this is the last level:
    -- if this is the last level then exit
    -- if this is not the last level then increase the level counter i and start processing this new level in the next while loop iteration
    i:=:i+1;
    select count(*) into lv_any_more from relation where parent_node not in (select node from relation a where a.node is not null) and parent_node is not null;
    if lv_any_more >0 then --for last level
      select top 1 parent_node into lv_base_object_name from relation where level= :i and is_processed is null and parent_node not in (select distinct node from relation a where a.node is not null) and parent_node is not null order by object_no;
      flag := 1;
    else
      flag:= 0;
    end if;
  end if;
end while;
select * from relation;
truncate table gt_chlog1;
truncate table gt_chlog2;
--preparing the 1st list of processed/unprocessed nodes (objects)
select count(*) into lv_cnt from relation where parent_node is not null;
lt_mid= select * from relation where parent_node is not null order by level,object_no;
arr_parent:=array_agg(:lt_mid.parent_node);
insert into gt_chlog1 select object,1,row_number() over () from changelog_tab;
--insert into gt_chlog1 values(:lv_object,1,1);
for i in 1..:lv_cnt do
  current_node:= :arr_parent[:i];
  --check if the current node is already mentioned in the processed/unprocessed list; if yes, proceed to the next node (iteration)
  select count(*) into lv_cnt1 from gt_chlog1 where node=:current_node;
  if :lv_cnt1 =1 then
    continue;
  end if;
  select max(order_no) into lv_max_order_no from gt_chlog1;
  insert into gt_chlog1 select node,0 is_processed,lv_max_order_no + row_number() over () order_no from relation where parent_node=:current_node and node not in (select node from gt_chlog1);
  select max(order_no) into lv_max_order_no from gt_chlog1;
  insert into gt_chlog1 values(current_node,1,lv_max_order_no + 1 );
end for;
select * from gt_chlog1;
--keep iterating till all the nodes are processed and are in the order where child comes before the parent
select count(*) into cnt_unprocessed from gt_chlog1 where is_processed= 0;
select count(*) into lv_cnt from gt_chlog1;
while cnt_unprocessed != 0 do
  for i in 1..:lv_cnt do
    select node,is_processed,order_no into current_node,current_status,current_order_no from gt_chlog1 where order_no = :i;
    select count(*) into lv_is_exist from gt_chlog2 where node=current_node;
    if lv_is_exist = 0 then
      select case when max(order_no) is null then 0 else max(order_no) end into lv_max_order_no from gt_chlog2;
      if :current_status = 1 then
        insert into gt_chlog2 values(:current_node,:current_status,lv_max_order_no+1);
      else
        insert into gt_chlog2 select node,0 is_processed,lv_max_order_no + row_number() over () order_no from relation where parent_node=:current_node and node not in (select node from gt_chlog2);
        select max(order_no) into lv_max_order_no from gt_chlog2;
        insert into gt_chlog2 values(:current_node,1,lv_max_order_no+1);
      end if;
    end if;
  end for;
  select count(*) into cnt_unprocessed from gt_chlog2 where is_processed= 0;
  select * from gt_chlog2;
  if cnt_unprocessed !=0 then
    truncate table gt_chlog1;
    insert into gt_chlog1 select * from gt_chlog2;
    truncate table gt_chlog2;
  end if;
end while;
end while;
END;
call chlog.check_log;

Output:

output1.png

We can have multiple objects pushed into changelog_tab, and the procedure will give us the correct order in which the objects depend on each other. Try the following input and run the procedure:

 

insert into changelog_tab values('P');
insert into changelog_tab values('T');
insert into changelog_tab values('K');
insert into changelog_tab values('D');
call chlog.check_log;

 

Output:

output2.png

 

Thank You.

 

-Sheel Pancholi   

metric² for iPhone and SAP HANA



metric² for iPhone lets you monitor your SAP HANA instances from your phone showing you alerts, core resources and important metrics. Wherever you are.

 

As mentioned in my GitHANA article, developing on the metric² open source project has really provided some interesting use cases for me around SAP HANA. While it might not be as critical as an ERP, BW or custom solution, the metric² demo system is used fairly regularly by people wanting to test drive the functionality. I recently had some server troubles, and my HANA instance was down without me knowing. This prompted me to develop a small mobile app to monitor my instance and ensure it was available and running optimally. This is when metric² for iPhone was conceived and I started developing the free app.

 

 

 

 

The app is currently available for iPhone, and I have an iPad version getting ready to be submitted to the App Store. From a technical perspective, the app uses a small XS file called mobileapi.xsjs, which needs to be put in a package on your XSEngine instance to serve up the data to the app. You can specify how often you would like the data to be refreshed, and you can register multiple systems that you need to monitor. (I have included my demo HANA instance as an example within the app so you can try it out.)

 

 

http://metric2.com/img/alerts2.png

 

 

The app is perfect for anyone running a HANA instance, be it dev, test or production. It provides a really easy way to view the status of your system from anywhere using your iPhone. The app also downloads updates in the background and will notify you if any high alerts are raised on the selected system; this is perfect for any sysadmin/DBA who wants to anticipate critical outages and be ready for the support calls.

 

A few features of the app

 

- View CPU, Disk, Memory consumption

- View open alerts

- Insights into your HANA instance quickly and from anywhere

- Add multiple HANA instances for monitoring

- Clean and simple UI for basic admin functions

- Push notifications for high alerts while the app is running in the background

 

Click here to find the GitHub project (of the mobileapi.xsjs file) and click here to check out the product page. This includes install instructions.

 

 

Technical Details

 

Building a native iOS app (read: Objective-C or Swift) that is integrated with SAP HANA is not terribly challenging, and you really have two options for pulling or pushing data: via an XSJS file (as in this app) or via an xsOData interface. Both have their pros and cons but are fundamentally very similar. Below is a snippet of some of the code from my xsjs file; it looks and acts very similar to a regular AJAX call from a native XS app.

 

One of the biggest challenges for production users, as with any intranet-based resource, will probably be gaining access to the URL (mobileapi.xsjs) from outside the corporate network; this will probably require a network admin to grant you access or to configure (or reuse) a reverse proxy or firewall.

 

 

Screen Shot 2014-08-22 at 1.08.56 PM.png    

 

Screen Shot 2014-08-22 at 1.12.40 PM.png
XCode iOS Pull Data Code
Screen Shot 2014-08-22 at 12.52.08 PM.png
SAP HANA XSJS Code serving data to the iOS app
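
Since the screenshots above are not copy-and-pasteable, here is a rough sketch of what an XSJS endpoint of this kind could look like. It is only an illustration, not the actual mobileapi.xsjs: the monitoring view, the selected columns and the JSON field names are assumptions made for the example.

// Illustrative sketch of an XSJS service returning monitoring data as JSON.
// Not the actual mobileapi.xsjs; columns and field names are example choices.
function getSystemStatus() {
    var conn = $.db.getConnection();
    try {
        // M_HOST_RESOURCE_UTILIZATION is a standard HANA monitoring view;
        // TO_VARCHAR keeps the type handling in this sketch simple.
        var pstmt = conn.prepareStatement(
            'SELECT HOST, TO_VARCHAR(USED_PHYSICAL_MEMORY), TO_VARCHAR(TOTAL_CPU_USER_TIME) ' +
            'FROM SYS.M_HOST_RESOURCE_UTILIZATION');
        var rs = pstmt.executeQuery();
        var hosts = [];
        while (rs.next()) {
            hosts.push({
                host: rs.getString(1),
                usedPhysicalMemory: rs.getString(2),
                totalCpuUserTime: rs.getString(3)
            });
        }
        rs.close();
        pstmt.close();
        $.response.status = $.net.http.OK;
        $.response.contentType = 'application/json';
        $.response.setBody(JSON.stringify({ hosts: hosts }));
    } finally {
        conn.close();
    }
}

getSystemStatus();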

JavaScript resources XSEngine


Cool Stuff

http://d3js.org

Nice example: http://www.nytimes.com/interactive/2012/10/15/us/politics/swing-history.html

 

Screen Shot 2014-08-23 at 16.32.45.png

 

http://jsfiddle.net

JS playground to test CSS, HTML & JavaScript

 

Screen Shot 2014-08-23 at 17.31.28.png

 

Does your website pass the browser test?

http://www.browserstack.com

 

Screen Shot 2014-08-23 at 17.32.38.png

 

JavaScript charts

http://dygraphs.com/index.html

Simple charts - open source not fancy

 

http://www.highcharts.com/products/highcharts

Free for non-commercial

 

http://www.koolchart.com

Mixture of charts

 

http://www.amcharts.com/javascript-charts/

Maps and charts

 

http://www.zingchart.com/js-charts

Charts, maps, word cloud


http://www.jqwidgets.com/jquery-widgets-demo/

Free for non-commercial

Mixture of simple UI controls


http://www.jscharts.com/examples

Simple charts - free to use with watermark


http://www.flotcharts.org

Simple


http://www.fusioncharts.com/explore/charts/

Charts and Dashboards


http://gojs.net/latest/samples/index.html

Lots of different tree structures, mind maps, flow charts, org charts etc

 

https://google-developers.appspot.com/chart/interactive/docs/gallery

Google has a go at it

Screen Shot 2014-08-23 at 17.39.45.png

 

 

http://canvasjs.com/html5-javascript-column-chart/

Charts (including candlestick and bubble)

 

More than just charts, whole dashboards

http://wijmo.com/widgets/

 

http://js.devexpress.com/WebDevelopment/Charts/

Charts, maps, Gauges

 

http://www.anychart.com/products/anychart7/gallery/

Clean & simple (Oracle partner)

 

 

JS Frameworks

http://jquery.com

Most well known JS library (MIT license)

(Testing Framework: http://qunitjs.com )

 

Screen Shot 2014-08-23 at 17.38.02.png

 

https://docs.angularjs.org/misc/faq

Angular JS (Google)

Nice: bidirectional data binding


http://knockoutjs.com/index.html

Open source (MIT license)

 

https://saucelabs.com/javascript/

Another tool to Test your Javascript


Use XSJS outbound connectivity to search tweets


Intro

XSJS outbound connectivity was introduced in SAP HANA SPS06. It is a really cool feature of SAP HANA XS: with it, we can make HTTP/HTTPS requests directly from our SAP HANA native app. I love social media, and this feature makes it possible to connect social media APIs (e.g. Twitter, Facebook) with SAP HANA and do some interesting analysis. In this blog I want to share with you how to use XSJS outbound connectivity to search tweets.

 

 

Motivation

Last year I had a mini-fellowship in Palo Alto, and it was my first time in the US. One weekend I wanted to watch a movie, but it was hard for me to pick one. So I decided to use SAP HANA to build a smart app that would tell me the rating of movies. I made it and selected the movie with the highest rating. The main idea is to first crawl tweets and insert them into SAP HANA, then use the native text analysis to calculate a sentiment rating. However, when I was building this app, SAP HANA was still on SPS05. Without XSJS outbound connectivity, I had to use Twitter4J to connect to the Twitter API. If only I could have used XSJS outbound connectivity at that time! Never mind. It's time to rebuild the smart app, and replacing the Twitter4J part with XSJS outbound connectivity is my first step.

 

Now let's do it.

 

 

Prerequisites

1. A running SAP HANA system, at least SPS06. I am using SAP HANA SPS08 Rev. 80.

2. A Twitter account. If you don't have one, sign up here.

 

 

Steps

1. Learn how to communicate with Twitter API

In this step, we need to find out which APIs will be used and how to communicate with the Twitter API (the authentication & authorization stuff). First, you can find all REST APIs here. We want to search tweets, so we will use GET search/tweets | Twitter Developers. It is very clear, and you can find the URL, parameters and an example request.


But how can we call this API? You can find the answer in this doc. Since we just need to search tweets, we can use app-only authentication. There you can find a very detailed example with three steps. That's exactly what we need. One thing needs to be mentioned (you can also find it here): "As with all API v1.1 methods, HTTPS is always required."

 

 

2. Use Postman to simulate calling the API

Since we now know how to communicate with the Twitter API, we can first test it with Postman. I like to test before I really build something. There are also a lot of other tools similar to Postman; please use your favorite. The steps are described clearly in app-only authentication, so I will not describe them again. Here I just summarize my steps with some screenshots.

 

a. Encode API key and secret.

First you need to create an app if you don't have one. You can find <API key> and <API secret> under the "API Keys" tab in your app. I have already regenerated my API key and secret, so the key and secret in the following pic no longer work.

 

1.PNG

 

Then encode <API key>:<API secret> in Base64 format. For example, you can use Base64 Decode and Encode - Online.

 

2.PNG

 

 

b. Obtain a bearer token

You need to sign out of your Twitter account in Chrome; otherwise you will not get the bearer token. Instead you will get this message: "403 Forbidden: The server understood the request, but is refusing to fulfill it." I have already invalidated the bearer token in the following pic.

 

3.PNG

 

 

c. Test GET search/tweets | Twitter Developers with the bearer token

We managed to search tweets with the hashtag #SAPHANA and got the results. To keep it simple, we just use the parameter "q", which stands for query.

 

4.PNG

 

 

3. Set up your SAP HANA to use HTTPS

So far, we have successfully called one Twitter API using Postman. So why not use XSJS outbound connectivity? Let's start! As with all API v1.1 methods, HTTPS is always required, so the first thing we need to do is set up our SAP HANA to use HTTPS, which is not configured by default. You can follow this detailed blog by Kai-Christoph to finish this step. When you have finished it, you should be able to do the following; otherwise you have not completed this step.

 

a. Visit https://<hostname or IP>:43<instance number>/sap/hana/xs/admin/ successfully

b. When you switch to the "Trust Manager" tab, there is no error like "No valid SAP crypto configuration"

 

 

4. Create trust store of Twitter API

In this step, we need to create a trust store for the Twitter API. Again, you can follow this detailed blog by Kai-Christoph to finish this step. There is only one thing you need to change: visit https://api.twitter.com/ instead of https://api.github.com/ as used in the blog.

 

5.PNG

 

 

5. Use XSJS outbound connectivity to search tweets

We finally come to this step. Since we have prepared everything in the previous steps, this step is easy: we just need to do the following. I did it in SAP HANA Studio; of course you can also do it in the Web IDE. Here is my project hierarchy. It is very simple.

 

6.PNG

 

a. Create a XS project

b. Create .xsapp, .xsaccess and "services" folder

c. Create twitterApi.xshttpdest, edit, save and activate

 

description = "twitter api";
host = "api.twitter.com";
port = 443;
pathPrefix = "/1.1";
useProxy = true;
proxyHost = "proxy.pal.sap.corp";
proxyPort = 8080;
authType = none;
useSSL = true;
timeout = 0;

 

d. Edit the trust store in the HTTP destination (in the red box) and save

 

7.PNG

 

e. Create search.xsjs and edit it. From Application-only authentication | Twitter Developers, we learn: "Note that one bearer token is valid for an application at a time. Issuing another request with the same credentials to /oauth2/token will return the same token until it is invalidated." So we do not need to obtain the bearer token each time, which means we can use the bearer token directly in our code. I have already invalidated the bearer token in the following code.

 

var destination = $.net.http.readDestination("searchTweets.services", "twitterApi");
var client = new $.net.http.Client();
var request = new $.net.http.Request($.net.http.GET, "/search/tweets.json?q=%23SAPHANA");
request.headers.set('Authorization', 'Bearer AAAAAAAAAAAAAAAAAAAAADdMZgAAAAAADme2QQk3csQXnGCeepM7Swvf6PI%3DJZNlbYr3YkcDsS0xCgeRgmzJW5Cjk8cvI4ESXECzVKTYI3bNw5');
var response = client.request(request, destination).getResponse();
$.response.status = response.status;
$.response.contentType = response.contentType;
$.response.setBody(response.body.asString());
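
If you prefer not to paste the bearer token into the code, the /oauth2/token step can also be done with XSJS outbound connectivity. The following is only a sketch under a few assumptions: it assumes a second HTTP destination "twitterAuth" that points to api.twitter.com without the /1.1 pathPrefix (the token endpoint lives at /oauth2/token), and it assumes $.util.codec.encodeBase64 for the Base64 step (you can also paste the value you produced in step 2a instead).

// Sketch only: obtain the bearer token programmatically instead of hard-coding it.
// Assumes an additional destination "twitterAuth" without the "/1.1" pathPrefix.
var authDest = $.net.http.readDestination("searchTweets.services", "twitterAuth");
var authClient = new $.net.http.Client();

// Base64-encode "<API key>:<API secret>" (replace the placeholders with your own values).
var credentials = $.util.codec.encodeBase64("<API key>" + ":" + "<API secret>");

var tokenRequest = new $.net.http.Request($.net.http.POST, "/oauth2/token");
tokenRequest.headers.set("Authorization", "Basic " + credentials);
tokenRequest.headers.set("Content-Type", "application/x-www-form-urlencoded;charset=UTF-8");
tokenRequest.setBody("grant_type=client_credentials");

var tokenResponse = authClient.request(tokenRequest, authDest).getResponse();
var bearerToken = JSON.parse(tokenResponse.body.asString()).access_token;
// bearerToken can now be used in the Authorization header of the search request above.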

 

f. Save and activate all files

 

6. Test it

Now let's test our XSJS outbound connectivity. Called successfully! There is a tweet whose text contains "The only limitation is our imagination!"

 

8.PNG

Conclusion

So far, we have successfully used XSJS outbound connectivity to search tweets. However, we have neither inserted the tweets into SAP HANA nor done any analysis yet. I'll do this in my smart app and share it with you later. In addition, there are a lot of other Twitter APIs that you can call with XSJS outbound connectivity.

 

Hope you enjoyed reading my blog.

A Programming Model for Business Applications (3): Calculated Elements and Properties


This post continues the blog series A Programming Model for Business Applications. If you haven't read the first parts in the series, I recommend that you read the first part A Programming Model for Business Applications (1): Assumptions, Building Blocks, and Example App and the second part A Programming Model for Business Applications (2): Implementing the Entity Model with CDS first.

 

In this part, I want to discuss the implementation of the business object read service and the properties on the server side.

 

 

Business Object Read Service

 

Let us start with the BO-specific views for calculated elements. Obviously, I would like to define calculated elements that belong to the BO in the entity definition itself, for example the following amount calculation:

 

NetAmount = {
  Currency = ListPriceAmount.Currency,
  Value    = (ListPriceAmount.Value * Quantity.Value) * (100 - DiscountPercentValue) / 100
}

 

Unfortunately, in SPS8, CDS does not support the definition of calculated elements in an entity, so I have implemented calculated elements as separate views, postfixed with $C.

 

The first listing shows the view for the Item entity with two examples of amount calculations:

  view Item$C as select from bo.SalesOrder.Item {

    ID_,

    Parent_ID_,

    ListPriceAmount.Currency.Code     as "NetAmount.Currency.Code",

       (ListPriceAmount.Value * Quantity.Value)*(100- DiscountPercentValue) / 100 as "NetAmount.Value",

    ListPriceAmount.Currency.Code     as "GrossAmount.Currency.Code",

    NetAmount.Value + TaxAmount.Value as "GrossAmount.Value"

    };

 

The next listing shows the view for the Sales Order entity with a summation over the items. Here, unfortunately, a second limitation of CDS hits us: it is not yet possible to define (unmanaged) associations between views. As I want to use calculated elements of the Item$C view in the SalesOrder$C view, I cannot refer to them via an association; instead, I have to define a join between SalesOrder and Item$C, which makes the select statement more complex and is not supported by CDS in SPS8. As a consequence, I implemented the SalesOrder$C view as an hdbview artifact using the following select clause:

 

select
  SO.ID_,
  sum(I."NetAmount.Value") as "NetAmount.Value",
  SO."Currency.Code"       as "NetAmount.Currency.Code"
from "<schema>"."<path>::bo.SalesOrder.SalesOrder" as SO
join "<schema>"."<path>::bo.SalesOrder.Item$C" as I on SO.ID_ = I."Parent_ID_"
group by SO.ID_, SO."Currency.Code"

 

The last listing shows the (simplified) calculation of the formatted name for a business partner as part of the BusinessPartner$C view.

case when (CategoryCode = '1') then
    case when (not(Person.AcademicTitle.Code = '')) then
        concat(Person.AcademicTitle.TitleText, concat(' ', concat(concat(Person.GivenName, ' '), Person.FamilyName)))
    else
        concat(concat(Person.GivenName, ' '), Person.FamilyName)
    end
else
    case when (not(Organization.SecondLineName = '')) then
        concat(Organization.FirstLineName, concat(' ', Organization.SecondLineName))
    else
        Organization.FirstLineName
    end
end as FormattedName

 

As a result the following formatted names may be returned: “SuccessFactors A SAP Company” (organization with two name lines), “Thomas Schneider” (no academic title), “Dr. Thomas Schneider” (academic title before the name), or “Thomas Schneider, PhD” (academic title after the name, not implemented).

This little example already shows that even a simple calculation like a formatted name can be quite complex (the complete code that considers all possible combinations is even longer). But it gives us a good argument that this type of calculation should be done centrally, as business object logic, and should not be repeated in the service implementation or the UI.

 

Properties

Property logic provides information on whether data can be updated, is mandatory, and so on, depending on the lifecycle of an object. Examples of properties are:

  • Disabled: an element or entity is disabled
  • Read-only: an element or entity is read-only. If the entity is read-only, all elements are read-only.
  • Mandatory: the element must have an input value

Properties can be static, dependent on some business configuration or master data, or dependent on the status of the business object.

 

Examples:

  • Static: the Last Changed DateTime element is read-only, it is always set by the system.
  • Dependent on the master data: in a sales order, the properties depend strongly on the product you are selling: In case you are selling physical materials, or services, or projects, various elements (for example the Incoterms elements) may be disabled (= not relevant for the product), read-only (= prefilled by the system), or mandatory.
  • Dependent on the status: Sales orders in status “Completed” cannot be changed, they are read-only.

The UI may optionally request the properties as part of the read request and set the properties of the respective UI controls accordingly. Property checks also run as part of the CUD request to check whether the input is valid at a certain point in time (I will discuss this in a later blog).

For the implementation of properties, I am using an additional view with the name postfix $P, for example SalesOrder$P.

 

Discussion:

  • Properties are view fields of type Boolean (workaround: Integer), named, for example “Incoterms.Classification.IsMandatory”
  • Properties are optional, in other words, not all properties are modelled in the properties view 
  • A Null value is equivalent to False.

 

Properties are a very powerful concept, but you can introduce them step by step. In the following blogs I will come back to the properties view and discuss how it is consumed in the read and the write scenarios.

The next blog in this series shows the implementation of the service adaptation read service, the OData service, and the corresponding UI implementation: A Programming Model for Business Applications (4): Service Adaptation and UI (Read Service).

SAP HANA Studio on Mac OS


I hope you'll find this blog useful :-)

 

I was looking for a while for a way to run SAP HANA Studio on my Mac without any emulators. The solution is simple:

 

Short:

  1. Download Eclipse Kepler 4.3 - 64Bit (eclipse-standard-kepler-SR2-macosx-cocoa-x86_64)
  2. Change the Eclipse Compiler & JRE Settings to Java 1.6!  It will not work with 1.7 or any OpenJDK version.
  3. Open Eclipse->Help->Install New Software->Add:

 

 

Screenshots:

 

JRE & Compiler Settings

jre.jpg

compiler.jpg

 

Software Installation

software.jpg

 

Regards, Tobi

SAP HANA Idea Incubator - To reduce defective products in textile production solution SAP HANA


Hi All.

I came across this scenario, thought of sharing.

My idea is to reduce defective products in textile production with an SAP HANA solution.

Textile production requires attentive, high-quality manufacturing, and faulty products drive up costs. To reduce defects at minimal cost, the right technical and software infrastructure, including mobile integration, needs to be built, and this is where an SAP HANA solution can make the whole process fast and integrated.

This has been proposed in the SAP Idea Incubator, here.

 

Regards,


Cem Ates

Error message when starting SAP B1H


All,

 

I've recently installed a new SAP Business One database on HANA. When trying to access it from the ERP, I got this error message: "It was not possible to establish the connection to the SAP HANA server".

 

As usual, no information regarding that particular message was found on the net... However, the solution was pretty simple: it turns out that B1 Analytics needs to be initialised before running SAP B1H, so that analytics works properly.

 

Simple steps to initialise Analytics in the SLD:

 

  1. Select the Analytics Platform option.
  2. Go to Companies.
  3. Choose the company that you would like to initialise.
  4. Click on the Initialise button.
  5. That's all. The company should now appear as "Initialized" in the list:

 

img.png

 

Hope this helps.

 

Regards,

 

Alejandro Fonseca

Twitter: @MarioAFC
