Platform Core
EVENTS
The Event System is a framework provided by the service layer that allows you to send and receive events
within SAP Hybris.
• One software component acts as a source and publishes an event that is received by registered listeners
• Event listeners are objects that are notified of events and perform business logic corresponding to the
event that occurred
• Events can be published locally or across cluster nodes
• SAP Hybris Event System is based on the Spring event system
To create your own event, you need to extend the class de.hybris.platform.servicelayer.event.events.AbstractEvent
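A minimal sketch of a custom event and its listener (the HybhubEvent/HybhubListener names match the console message quoted below; the message payload is illustrative):

```java
import de.hybris.platform.servicelayer.event.events.AbstractEvent;
import de.hybris.platform.servicelayer.event.impl.AbstractEventListener;
import org.apache.log4j.Logger;

// Custom event carrying a simple message payload
class HybhubEvent extends AbstractEvent
{
    private final String message;

    HybhubEvent(final String message)
    {
        this.message = message;
    }

    String getMessage()
    {
        return message;
    }
}

// Listener notified whenever a HybhubEvent is published
class HybhubListener extends AbstractEventListener<HybhubEvent>
{
    private static final Logger LOG = Logger.getLogger(HybhubListener.class);

    @Override
    protected void onEvent(final HybhubEvent event)
    {
        LOG.info("Received event(Hybhub Event) : " + event.getMessage());
    }
}
```

The listener has to be registered as a Spring bean; the event can then be published through the EventService, e.g. eventService.publishEvent(new HybhubEvent("Hybhub Event : Event published from Groovy console !")).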
You should be able to read the following message on the SAP Hybris console: INFO [hybrisHTTP8]
[HybhubListener] Received event(Hybhub Event) : Hybhub Event : Event published from Groovy console !
The only flaw here is that everything is executed synchronously by the same thread! To process events in an
asynchronous way you can:
• make your event cluster aware by implementing de.hybris.platform.servicelayer.event.ClusterAwareEvent;
when you implement this interface, your events are always processed asynchronously, even if they are
received on the same node where they were published.
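A sketch of a cluster-aware event, assuming a Hybris version where ClusterAwareEvent exposes publish(int, int) (newer versions use canPublish(PublishEventContext) instead):

```java
import de.hybris.platform.servicelayer.event.ClusterAwareEvent;
import de.hybris.platform.servicelayer.event.events.AbstractEvent;

class HybhubClusterEvent extends AbstractEvent implements ClusterAwareEvent
{
    @Override
    public boolean publish(final int sourceNodeId, final int targetNodeId)
    {
        // Returning true delivers the event to every node, including the source node;
        // restrict the target nodes here if needed.
        return true;
    }
}
```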
IMPEX
• SAP Hybris needs to exchange a lot of data with external systems, and this needs to be easily configurable
without having to create complex SQL queries. Impex offers a way to easily store and exchange data, meaning
import and export it; it is an out-of-the-box CSV-based import framework.
• Hybris provides an extension called “impex” inside the platform/ext folder which helps in inserting,
updating, deleting, and exporting data.
• Impex import always means data flows into the Hybris system.
• Impex export always means data flows out of the Hybris system.
Impex Header:
• Attributes if needed
– virtual ([virtual=true,default=value]) needs to have a default modifier as well
• Header modifiers are configured with the item type, for example: INSERT MyType[headerModifier=something];…
– batchmode ([batchmode=true]), used with an update or remove, allows modifying more than one
item that matches the query
– cacheUnique ([cacheUnique=true])
– processor ([processor=de.hybris.platform.impex.jalo.imp.DefaultImportProcessor])
– impex.legacy.mode ([impex.legacy.mode=true])
INSERT_UPDATE
category;code[unique=true];name[lang=de];name[lang=en];$supercategories
;$thumbnail;description[lang=de];order
You can also specify a subtype of the header item type on the value line, for example:
The header item type can be abstract, but the value line item type obviously can’t.
Setting an attribute value for atomic types is straightforward: a string attribute, for example, can be directly entered
within the value (see the example with the User item type above). For reference attributes, however, Impex expects you to enter
the primary key of the referenced item. Since it is not possible to know an item’s primary key before it is created,
this is not the right way to create relations. To insert relations between items efficiently and without external
dependencies, you need to look them up based on their own attributes.
To localize attributes, you need to use a modifier within the header attributes, like [lang=en] to localize an
attribute in English:
en is defined by the Language items and corresponds to the unique ISO code of the language;
have a look at the internationalization tab under the HMC or HAC.
Value Lines:
Comment:
A commented line starts with a hash # and is completely ignored during import:
Macro:
Impex files can easily be huge; it is not uncommon to see Impex import files of more than a thousand lines! If you
need to set one common attribute for each of the value lines, macros come in handy. A macro
definition starts with a $ sign:
Distributed Impex:
SAP Hybris V6 introduced the new concept of distributed Impex to share an Impex import between nodes.
It speeds up execution by splitting the work into batches that are executed on different nodes.
This doesn’t need to be configured, since it is the new default Impex engine. Importing data using the new
distributed mode consists of three steps:
• prepare then split
• single import execution
• finishing
A Cell Decorator is used to obtain a specific cell of a value line after parsing but before translating it. We can
then decorate the cell value based on business logic. Decorating a cell value means modifying the cell value based
on some business requirement.
Requirement
Assume we are updating a customer record through impex.
We need to check which country the customer belongs to, using the customer id provided
in the impex.
If the customer belongs to the US, then append _US to the customer id before saving.
If the customer belongs to Canada, then append _CA to the customer id before saving.
Step 2: Create a class which implements CSVCellDecorator and override the decorate method with our custom
logic.
In the above class, we have written logic to return the customer id with “_US” appended for US
customers and “_CA” appended for Canada customers, and to return the customer id without appending
anything if the customer is from neither the US nor Canada.
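A sketch of such a decorator (the country lookup is a hypothetical helper, not a real Hybris API):

```java
import java.util.Map;

import de.hybris.platform.util.CSVCellDecorator;

public class CustomerIdCellDecorator implements CSVCellDecorator
{
    @Override
    public String decorate(final int position, final Map<Integer, String> srcLine)
    {
        // The raw cell value at the decorated header position
        final String customerId = srcLine.get(Integer.valueOf(position));
        final String country = resolveCountry(customerId); // hypothetical lookup
        if ("US".equals(country))
        {
            return customerId + "_US";
        }
        if ("CA".equals(country))
        {
            return customerId + "_CA";
        }
        return customerId;
    }

    private String resolveCountry(final String customerId)
    {
        // Hypothetical: derive the customer's country from the id; implementation omitted.
        return null;
    }
}
```

The decorator is attached to a header attribute with the cellDecorator modifier, e.g. uid[unique=true, cellDecorator=org.training.CustomerIdCellDecorator] (package name illustrative).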
Step 3: Run the impex and check the updated customer record in HMC/BackOffice.
Impex Translator
The valueExpr is the parsed cell value (with decorators already applied, if any); toItem is the resolved item.
Implement the translation logic for the export case here: the given object has to be translated to a String that has
to be returned.
Step 2:
Create a class which extends AbstractValueTranslator and overrides
the importValue and exportValue methods with our custom logic.
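A sketch of such a translator, validating that the imported price is not negative (class name and error message are illustrative):

```java
import de.hybris.platform.impex.jalo.translators.AbstractValueTranslator;
import de.hybris.platform.jalo.Item;
import de.hybris.platform.jalo.JaloInvalidParameterException;

public class PriceValueTranslator extends AbstractValueTranslator
{
    @Override
    public Object importValue(final String valueExpr, final Item toItem)
    {
        // valueExpr is the parsed cell value; reject negative prices
        final Double price = Double.valueOf(valueExpr);
        if (price.doubleValue() < 0)
        {
            throw new JaloInvalidParameterException("Price must be >= 0 but was " + valueExpr, 0);
        }
        return price;
    }

    @Override
    public String exportValue(final Object value)
    {
        // Export case: translate the attribute value back to a String
        return value == null ? "" : value.toString();
    }
}
```

The translator is attached to a header attribute with the translator modifier, e.g. price[translator=org.training.PriceValueTranslator] (package name illustrative).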
Step 3:
Run the impex with a price value greater than or equal to zero; it will run successfully, otherwise it will throw an
error.
Note:
We can use the item type as the only unique value in the header if we want to update all instances of that type.
We can specify a unique value in the header other than the item type if we want to update only the instances
matching that unique value rather than all instances.
How do we escape special meaning of these characters and insert those characters as part of data?
Example 1:
Now when we run this impex, the value for syncJob(code) will be stored as:
sync electronics-ukContentCatalog:Staged->Online
This is because we are using path-delimiter=!.
If we didn’t use this path-delimiter, we would have got an error as below:
Example 2:
Now when we run this impex, Hybris searches for a catalog with id exactly “myStore:catalog” in the table.
If we don’t use the path-delimiter, then we will get an error as below:
Example :
If we are updating the names of 1000 products, we will obviously specify the product code as unique for each value
line and provide the new name to update. Assume that out of the 1000 products, 1 product is not present in the DB.
This example then leads to the above exception and stops further execution.
Look at the impex below, which has an invalid product in the 3rd value line:
The solution to the above problem is to ignore the records which do not exist in the DB and update the other
records, which are valid.
This can be done by adding the line below at the top of the impex:
Now rows which contain errors will be skipped and placed in a dump file.
SPRING CONTEXT
SAP Hybris is built heavily on Spring; it offers developers the flexibility to implement their work on top of the out-
of-the-box extensions. Spring offers the ability to contain beans within different contexts, and SAP Hybris provides
different application contexts:
• a global static application context shared by all tenants
– to configure it you need to add an extra property to your extension configuration:
extname.global-context=spring.xml
The order of loading Spring configuration is the same as in the build process, so to override a Spring bean
defined in extension A from extension B, add a dependency in extension B on extension A. You can configure
more than one Spring configuration file; simply separate them with a comma, for example: extname.application-
context=spring1.xml,spring2.xml.
To manually load a bean, it is recommended to use Registry.getApplicationContext() as this will first try to load it
from a web application context then from the core context and finally from the global context.
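For example (looking up the standard modelService bean; any bean id works the same way):

```java
import de.hybris.platform.core.Registry;
import de.hybris.platform.servicelayer.model.ModelService;

// Manually resolving a bean outside of normal Spring injection
final ModelService modelService =
        (ModelService) Registry.getApplicationContext().getBean("modelService");
```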
CRONJOBS
A Cronjob is an automated task performed at a certain time (every day at 2 am, for example) or at fixed intervals
(every hour, for example). It can be used for:
• data backups
• Catalog synchronization
• importing or exporting data
A Cronjob is made of :
• a job to do
• a trigger to start the job (not mandatory)
• a Cronjob item which links the job and the trigger and configures the environment the job will be performed in.
To create a new Cronjob, first you need to create a new item type which extends the Cronjob item type. We will create a
Cronjob that deactivates products that haven’t been updated since a given time:
Then we need to create the actual job that deactivates products based on the minLastUpdate attribute:
You need to register your class as a Spring Bean.
You need to update your system for SAP Hybris to find the new Job.
Don’t forget to switch the commit mode on! If you execute this more than once it will fail, since Cronjob codes
need to be unique!
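The job class could be sketched like this (ProductsDeactivationCronJobModel and the deactivation helper are illustrative names for the custom Cronjob item type and its logic):

```java
import de.hybris.platform.cronjob.enums.CronJobResult;
import de.hybris.platform.cronjob.enums.CronJobStatus;
import de.hybris.platform.servicelayer.cronjob.AbstractJobPerformable;
import de.hybris.platform.servicelayer.cronjob.PerformResult;

public class ProductsDeactivationJobPerformable
        extends AbstractJobPerformable<ProductsDeactivationCronJobModel>
{
    @Override
    public PerformResult perform(final ProductsDeactivationCronJobModel cronJob)
    {
        try
        {
            // Deactivate products not updated since the configured minLastUpdate
            deactivateProductsOlderThan(cronJob.getMinLastUpdate());
            return new PerformResult(CronJobResult.SUCCESS, CronJobStatus.FINISHED);
        }
        catch (final RuntimeException e)
        {
            return new PerformResult(CronJobResult.ERROR, CronJobStatus.ABORTED);
        }
    }

    private void deactivateProductsOlderThan(final java.util.Date minLastUpdate)
    {
        // Implementation omitted: query the products via FlexibleSearch and flip their status.
    }
}
```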
NOTE:
Cron Job:
• It’s an item type we define in the items.xml with all the attributes required for the Job to be executed. So,
we can say that the Cronjob provides the configuration details for the Job.
• Hybris provides out of the box (OOTB) the Cronjob item type, which already has many attributes defined
inside it, like code, active, status, logText, startTime, endTime etc.
• Check Cronjob itemtype under below file for more details
/platform/ext/processing/resources/processing-items.xml
• All these attributes are considered as a configuration for the Job.
• In case our Job needs some additional configuration parameters other than what is provided
with the Cronjob item type, then we should define a new Cronjob item type in the *-items.xml by extending
the Cronjob item type.
Job:
• Job defines the logic to be executed.
• We create a new class and extend AbstractJobPerformable class or
implement JobPerformable interface.
• Inside this newly created class, we provide the definition for the perform(CronJobModel
cronJobModel) method. What we define inside this method is called the business logic of the cron job.
• And this class which contains the business logic is called the Job.
Note: This class should be defined as a Spring bean in order to consider this as a Job.
Result:
o It is of type CronJobResult enum.
o It gives information about the last execution of the cron job, i.e. whether the cron job
ended with success or failure.
o Possible values for the same can be checked in the CronJobResult.java enum.
We can say that a cron job has completed successfully with the below combination of CronJobStatus and
CronJobResult:
CronJobStatus.FINISHED and CronJobResult.SUCCESS
So, a PerformResult is returned by the perform() method after the Job completes its business logic execution,
and it can then be used to determine the overall result and status of the Job.
Trigger:
• It defines the scheduling of a Job which helps to decide when the Job has to be executed.
• It defines scheduling using the attributes like seconds, minutes, hours etc. or it can define it
using Cron Expression.
• The Trigger is actually linked to the Job indirectly: a Trigger is always defined
for a Cronjob, and the Cronjob holds the actual Job to be executed.
• So, whenever we define a Trigger, we define it for the Cronjob, not for the Job.
The Cronjob in turn identifies the corresponding Job and makes sure it gets executed.
In simple words,
a Job defines what should be done, a Trigger says when it has to be done, and a Cronjob defines the
setup/configuration required for a Job.
Requirement:
We need to run a task which removes products whose price is not defined and which are X days
old in the system. So our business logic is to remove a product if its price is not defined and it is X
days old.
We defined a new cron job item type, ProductsRemovalCronjob, which extends the Cronjob item type,
so all the attributes of the Cronjob item type are also inherited by this new cron job item type.
Run ant all and refresh the platform folder to check that ProductsRemovalCronjobModel.java is generated.
Step 2: Create a class which acts as a Job by extending the AbstractJobPerformable class and override
the perform() method to contain our business logic.
At first, this is a class which does not have any business logic written for the Job to perform.
Then we iterate over the product list and check whether each product has a price row; if a
price row is not defined, we add the product to a list of products to delete.
Then, we remove all such products.
• After that, we call the SOLR full indexing to index only those products which have a price.
• Since we need to define the query to get all the products older than the specified number of days, we will
create a new DAO interface and implementation class as below:
CustomProductsDAO.java
hybris\hybris\bin\custom\training\trainingcore\src\org\training\core\dao\CustomProductsDAO.java
CustomProductsDAOImpl.java
hybris\hybris\bin\custom\training\trainingcore\src\org\training\core\dao\impl\CustomProductsDAOImpl.java
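A sketch of what the DAO implementation could look like (the query and method name are illustrative):

```java
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import de.hybris.platform.core.model.product.ProductModel;
import de.hybris.platform.servicelayer.search.FlexibleSearchQuery;
import de.hybris.platform.servicelayer.search.FlexibleSearchService;
import de.hybris.platform.servicelayer.search.SearchResult;

public class CustomProductsDAOImpl implements CustomProductsDAO
{
    private FlexibleSearchService flexibleSearchService;

    public List<ProductModel> findProductsOlderThan(final Date date)
    {
        // FlexibleSearch query for products created on or before the given date
        final String query = "SELECT {p.pk} FROM {Product AS p} WHERE {p.creationtime} <= ?date";
        final Map<String, Object> params = new HashMap<>();
        params.put("date", date);
        final SearchResult<ProductModel> result =
                flexibleSearchService.search(new FlexibleSearchQuery(query, params));
        return result.getResult();
    }

    // Setter injection matching the Spring bean definition in *-spring.xml
    public void setFlexibleSearchService(final FlexibleSearchService flexibleSearchService)
    {
        this.flexibleSearchService = flexibleSearchService;
    }
}
```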
Step 3: Define the above classes as spring beans in the *-spring.xml file as below
hybris\bin\custom\training\trainingcore\resources\trainingcore-spring.xml:
Step 5: Write an impex into the same file to create a Trigger instance as below:
Step 6: Do an Update System, selecting the extension where we defined our job.
Now the cron job will be executed at the scheduled time automatically.
Note:
After Updating the system, we can run the below query to check the ServiceLayerJob created for the Job
that we defined as a spring bean.
Step 7: Check in the HMC, before the Job runs, the products whose price is not defined and which are X days old.
Search for the same products after the Job runs successfully. We should not be able to see those products
anymore.
• Log in to HMC.
• Go to System -> Cronjobs.
• Type productsRemovalCronjob in Value Text Box for code; then click on search,
open productsRemovalCronjob.
• Go to the Time Schedule tab -> set the time (Trigger text box), then save it.
If we want to start it immediately, click Start Cronjob now; our job will then run and display a
popup box with the cron job result.
How do we execute the Cronjob through code rather than through a trigger?
We can also execute the cron job through code rather than running it through triggers, as below:
In the above code, we call the cron job through the CronJobService’s
performCronJob() method.
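A minimal sketch of that call (the cron job code is the one created above; cronJobService is assumed to be injected):

```java
import de.hybris.platform.cronjob.model.CronJobModel;
import de.hybris.platform.servicelayer.cronjob.CronJobService;

final CronJobModel cronJob = cronJobService.getCronJob("productsRemovalCronjob");
// second argument = true runs the job synchronously and waits for the result
cronJobService.performCronJob(cronJob, true);
```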
The TaskEngine keeps polling the tasks every X seconds, which we configure in the local.properties file
as below:
By default, the timer is set to 30 seconds, so if we want to change this value, we can define it with a new
value in the local.properties file as mentioned above.
Here X is 30 seconds, so every 30 seconds the timer task fires a DB query to check for any triggers to be
fired; if any trigger matches the current time or is overdue, then the trigger is fired immediately.
NOTE:
Since the timer task polls the DB every X seconds as we specify in the local.properties file, we should not
set its value too low unless it is really essential, otherwise unnecessary DB calls will be made.
So, this task polled by the TaskEngine will check whether the cron expression matches or has passed the
current time; if so, it will immediately execute the cron job.
Sometimes, when the jobs you are performing are time consuming, you need to be able to abort a running
Cronjob. By default, a Cronjob will run until it’s done, but you can implement your own logic to give it a
chance to abort itself. First you need to make the job aware of this new functionality; to do so, two
things are needed:
1. The isAbortable() method in the job performable class should be overridden to return true, or a
new property should be defined in the *-spring.xml file where we defined our Job’s bean definition.
2. You need to add an exit door within your perform method implementation, like this:
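A sketch of both pieces inside the job performable (the model type, product lookup and process step are illustrative):

```java
@Override
public boolean isAbortable()
{
    return true;
}

@Override
public PerformResult perform(final ProductsRemovalCronjobModel cronJob)
{
    for (final ProductModel product : findProductsToProcess()) // illustrative lookup
    {
        // Exit door: stop cleanly if an abort was requested for this cron job
        if (clearAbortRequestedIfNeeded(cronJob))
        {
            return new PerformResult(CronJobResult.UNKNOWN, CronJobStatus.ABORTED);
        }
        process(product); // illustrative business step
    }
    return new PerformResult(CronJobResult.SUCCESS, CronJobStatus.FINISHED);
}
```

clearAbortRequestedIfNeeded(...) is provided by AbstractJobPerformable; it returns true when an abort has been requested and clears the request at the same time.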
So, how do we set these attributes on the Cronjob so that we can access them while writing the logic of
the Cronjob?
Here, when we define the instance of the cron job, we also specify the session attributes like user,
currency and language.
The above session values can be used while writing the logic inside the perform() method of the Job.
Now the above Cronjob runs only on Node 3 of the clustered environment.
If we don’t set any node id for the cron job, then the cron job can be executed by any node within the
cluster, but only by one node at a time.
Configuration:
Node specific configuration can be done by using:
Nodes need to have a unique identifier. Prior to SAP Hybris V6 you had to manually configure an
id for each node:
This could make deployment more complex, as you needed to provision a unique identifier for
each node. Since SAP Hybris V6 you can activate auto-discovery mode, which will keep track of each
node within the database:
Cluster configuration is available under your platform’s project.properties file. If you want to
change the out-of-the-box configuration, remember to do it under your config folder’s local.properties
file.
Testing:
SAP Hybris uses JUnit to run unit tests and integration tests; tests are located within the testsrc and
web/testsrc folders. You can execute tests from your IDE or from ant.
There are other ant targets (manual tests, performance tests…); to see all available test targets, execute
“ant -p”.
Unit Tests:
Unit tests are simple tests that do not need any access to the SAP Hybris platform (database, services…). They
focus on testing the correct behavior of a single Java class. When needed, you can isolate the class under test by
using Mockito to mock anything you need (interfaces, POJOs…). Example of a unit test class generated by SAP
Hybris:
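A hypothetical unit test in that style (the service and collaborator names are made up; only the @UnitTest annotation and the Mockito usage follow the Hybris conventions):

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.when;

import de.hybris.bootstrap.annotations.UnitTest;
import org.junit.Before;
import org.junit.Test;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

@UnitTest
public class DefaultGreetingServiceTest
{
    @Mock
    private NameProvider nameProvider; // hypothetical collaborator, mocked with Mockito

    private DefaultGreetingService service; // hypothetical class under test

    @Before
    public void setUp()
    {
        MockitoAnnotations.initMocks(this);
        service = new DefaultGreetingService(nameProvider);
    }

    @Test
    public void shouldGreetByName()
    {
        when(nameProvider.getName()).thenReturn("Hybris");
        assertEquals("Hello Hybris", service.greet());
    }
}
```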
Other Tests:
• @DemoTest
• @PerformanceTest
• @ManualTest