The Struggles of Personalization in ADF

A Mike Heeren & Richard Olrichs co-production

ADF comes with personalization as an out-of-the-box feature. This means that when you configure personalization, users can persist the changes they make to the application across sessions and personalize their experience with the application. We have seen that this feature can also confuse some users, so it is not always wise to use it; it depends on your use case. However, when recently implementing personalization on an ADF 12.2.1.2 application, we ran into a couple of issues with persisting these personalizations to the MDS.

We felt that most of the blogs we came across while implementing these features share the joyful out-of-the-box configuration. Just select some checkboxes in the properties and you are done, ready to enjoy your beer and have your designers and product owners cheer for you.
Sometimes, however, real-life applications at customers do not match the out-of-the-box configuration, and things can become a little more tricky than you might expect.

Let's start at the top and go through some of the steps you will always need within your application for personalization to work: you need to have authentication and authorization set up. If you want to follow our struggles, we have posted a sample application at the end of this blog that shows both the problems and the solutions.

Getting started with customizations

The basic configuration of customization in ADF is pretty simple. We start with a simple ADF application (with authentication already configured) and open Project Properties > ADF View. Here we check Enable User Customizations and Across Sessions using MDS, as seen below:



When enabling user customizations, the following files are edited:
In the adf-config.xml file the following lines are added:

<persistent-change-manager>
  <persistent-change-manager-class>oracle.adf.view.rich.change.MDSDocumentChangeManager</persistent-change-manager-class>
</persistent-change-manager>


In the web.xml file the javax.faces.FACELETS_RESOURCE_RESOLVER context-param is changed from oracle.adfinternal.view.faces.facelets.rich.AdfFaceletsResourceResolver to oracle.adfinternal.view.faces.facelets.rich.MDSFaceletsResourceResolver, and the following blocks are added:

<servlet>
  <servlet-name>adflibResources</servlet-name>
  <servlet-class>oracle.adf.library.webapp.ResourceServlet</servlet-class>
</servlet>
...
<servlet-mapping>
  <servlet-name>adflibResources</servlet-name>
  <url-pattern>/adflib/*</url-pattern>
</servlet-mapping>
...
<filter>
  <filter-name>ADFLibraryFilter</filter-name>
  <filter-class>oracle.adf.library.webapp.LibraryFilter</filter-class>
</filter>
...
<filter-mapping>
  <filter-name>ADFLibraryFilter</filter-name>
  <url-pattern>/*</url-pattern>
  <dispatcher>FORWARD</dispatcher>
  <dispatcher>REQUEST</dispatcher>
</filter-mapping>
...
<context-param>
  <param-name>oracle.adf.jsp.provider.0</param-name>
  <param-value>oracle.mds.jsp.MDSJSPProviderHelper</param-value>
</context-param>

<context-param>
  <param-name>org.apache.myfaces.trinidad.CHANGE_PERSISTENCE</param-name>
  <param-value>oracle.adf.view.rich.change.FilteredPersistenceChangeManager</param-value>
</context-param>

Finally, a corresponding user customization block is also added to the project .jpr file.

After this we need to configure the adf-config.xml file and add the oracle.adf.share.config.UserCC customization class on the MDS tab. This is a default customization class that ships with ADF. You can also write your own, but that is more of a use case for customization than for personalization. In our case we'll just configure the application using the UserCC:
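For reference, the resulting cust-config section in adf-config.xml looks roughly like the sketch below; the match path "/" is the illustrative default, so the customization class applies to all metadata:

<adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
  <mds-config xmlns="http://xmlns.oracle.com/mds/config">
    <cust-config>
      <!-- Apply the UserCC customization class to all metadata. -->
      <match path="/">
        <customization-class name="oracle.adf.share.config.UserCC"/>
      </match>
    </cust-config>
  </mds-config>
</adf-mds-config>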



Now you need to configure the components that you want the end user to be able to personalize. This can be done in the View tab. Select the ADF Faces Components tag library, because we want to personalize the table and column components; these are default components from ADF Faces. By default all attributes will be persisted; you can uncheck the ones you do not wish to persist:



After the above steps are configured, you can give a fancy demo of your demo application, and everybody is happy. However, in our production application there were still a few issues to tackle, and we struggled with some of these.

Struggle 1: Task flows from libraries combined with a file-based MDS on a Windows machine.


Our application was not a simple MVC application; like many applications out there, we used ADF libraries to package task flows in library projects and included them in a bigger main application. When we turned on personalization for components in task flows coming from libraries instead of directly from the application, their changes were not correctly persisted to the file-based MDS on Windows machines.

When running the application via JDeveloper (in our example application by right-clicking default.jsf in the ViewController project and selecting Run), we see that the task flow that comes directly from the ViewController project (the table on the left of the screen) behaves as expected. However, when personalizing the table from the task flow that comes from the imported library (so from the ViewControllerLibrary project), we see the following warning in the log files.

    

We can verify that the preferences were persisted to the MDS for the left table, but not for the right table, by opening the application (with the same user) in another browser:



Unfortunately, this issue occurs when using task flows from libraries in combination with a file-based MDS on a Windows machine. WebLogic is not able to create a file path that contains !/ (which indicates that the resource is part of a library).

Luckily, in our case all other DTAP environments use a database MDS rather than a file-based MDS. With a database MDS we don't have the file path issue, so there we won't see this problem. So on all environments except the (local) Integrated WLS we don't see these warning logs, and both table personalizations are persisted. It might take you some time to realise this if you are reluctant to deploy to Dev or Test while personalization is not yet working on your local machine.

Struggle 2: Deploying the application as EAR instead of via JDeveloper


Deploying from JDeveloper as well as creating an EAR file both seem to work. However, there was still some struggle here as well. Personalization was working when deploying via JDeveloper, but when we built an EAR of the application via JDeveloper and deployed it manually to the Integrated WLS via the console, none of the settings were persisted to the MDS, and we saw the following warning in the log files:

     

By opening the application in different browsers again, we can also confirm that neither of the table customizations is persisted in the MDS now:



When personalization across sessions with the MDS is configured, JDeveloper always creates a metadata-store-usage in the adf-config file at deployment time. This configuration is named MAR_TargetRepos.

It does not matter that we have configured a different metadata-store-usage within the adf-config: if MAR_TargetRepos is not present while creating the EAR file, it will be added to the adf-config file. The only solution we found is to name our own metadata-store-usage to match this expected default; then nothing is overridden or added.
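A minimal sketch of what that section of our adf-config.xml then looks like. The metadata-path and partition-name values are illustrative; in our application the path points at a local directory, which is why the personalizations end up as files on disk:

<metadata-store-usages>
  <!-- Named MAR_TargetRepos on purpose, so JDeveloper does not add its own
       default usage with that name when the EAR is created. -->
  <metadata-store-usage id="MAR_TargetRepos" default-cust-store="true">
    <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
      <!-- Illustrative values; point these at your own local MDS directory. -->
      <property name="metadata-path" value="C:\Temp"/>
      <property name="partition-name" value="PersDef"/>
    </metadata-store>
  </metadata-store-usage>
</metadata-store-usages>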

Note that we did not set deploy-target to true, because if we do, our custom MAR_TargetRepos will be overridden with the default implementation again when building the EAR file. This default implementation does not contain the FileMetadataStore configuration:
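For comparison, the usage that JDeveloper generates by default is roughly this bare entry, without a nested metadata-store element (a sketch; the exact attributes may differ per version):

<metadata-store-usage id="MAR_TargetRepos" deploy-target="true" default-cust-store="true"/>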



Because we want to preserve our own MAR_TargetRepos, so that personalization also works when deploying the EAR instead of deploying via JDeveloper, we do not set deploy-target to true; the default (false) is used. In this case the FileMetadataStore configuration is preserved in the EAR file.

If we deploy the new EAR file, we see the same behaviour as when deploying via JDeveloper. Also, when we use the personalization functions on the screen, we see files being created in the PersDef folder under the directory pointed to by the %TEMP% environment variable.

This FileMetadataStore configuration is changed (back) to a DBMetadataStore configuration by an ANT build script before deploying to the various DTAP environments instead of the Integrated WLS environment. An example of such an ANT script can be found below.
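A minimal sketch of such a script, assuming the adf-config.xml inside the EAR is simply overwritten with a prepared database-based (DBMetadataStore) version; the paths and the prepared adf-config-db.xml file are illustrative, while the replace-persdef-repo target name and the ear.location property match the demo application:

<project name="AdfPersonalization" default="replace-persdef-repo" basedir=".">

  <!-- WLS_HOME must be set as a system environment variable
       (used further on for the actual deployment step, not shown here). -->
  <property environment="env"/>
  <!-- Replace this with the location of the EAR produced by JDeveloper. -->
  <property name="ear.location" value="deploy/AdfPersonalization.ear"/>

  <target name="replace-persdef-repo">
    <!-- Unpack the EAR so adf/META-INF/adf-config.xml can be modified. -->
    <unzip src="${ear.location}" dest="ear-exploded"/>
    <!-- Overwrite the file-based configuration with a prepared
         database-based (DBMetadataStore) adf-config.xml. -->
    <copy file="config/adf-config-db.xml"
          tofile="ear-exploded/adf/META-INF/adf-config.xml"
          overwrite="true"/>
    <!-- Repack the EAR with the adjusted configuration. -->
    <zip destfile="${ear.location}" basedir="ear-exploded"/>
    <delete dir="ear-exploded"/>
  </target>
</project>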

Be sure that the WLS_HOME environment variable is set in the system environment variables, and that the ear.location property is replaced in the ANT file when you want to use the above example.

Struggle 3: Suddenly our application persists changes during the session.


We have configured the adf-config to persist only certain components and attributes across sessions. This works nicely, but suddenly all the other components also persist their state; not across sessions, but during the session.

It is possible that this is not what you want; it certainly was not what we expected or had in mind for our application, but there is not much we can do about it. It would have made more sense to turn this off for all components and only persist those that were explicitly configured to be persisted.

Luckily the ADF components have persist and dontPersist attributes. When reading the documentation on these attributes, it sounds exactly like what we need for our application! Before adding the dontPersist attribute to the hundreds of components we have in the application, we decided to test it on a couple. What we found out was very unpleasant: basically these attributes could be used for documentation purposes or for fun, but they did nothing concerning persistence.

We decided to create our own custom class to adjust the framework and get this working. To achieve this, the org.apache.myfaces.trinidad.CHANGE_PERSISTENCE context-param can be pointed to a custom class.
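In web.xml that boils down to swapping the value we saw earlier for our own class (shown further below):

<context-param>
  <param-name>org.apache.myfaces.trinidad.CHANGE_PERSISTENCE</param-name>
  <param-value>nl.whitehorses.personalization.changemanager.CustomChangeManager</param-value>
</context-param>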

At first we tried to extend the oracle.adf.view.rich.change.FilteredPersistenceChangeManager class, which is the class ADF uses by default. Unfortunately, this class is declared final. So we decided to create a class that extends org.apache.myfaces.trinidad.change.SessionChangeManager and uses a FilteredPersistenceChangeManager as an instance variable. This may not be the prettiest solution, but it serves our purpose:
package nl.whitehorses.personalization.changemanager;

import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;

import oracle.adf.view.rich.change.FilteredPersistenceChangeManager;

import org.apache.myfaces.trinidad.change.AttributeComponentChange;
import org.apache.myfaces.trinidad.change.ChangeManager;
import org.apache.myfaces.trinidad.change.ComponentChange;
import org.apache.myfaces.trinidad.change.DocumentChange;
import org.apache.myfaces.trinidad.change.SessionChangeManager;

public class CustomChangeManager extends SessionChangeManager {
 private final FilteredPersistenceChangeManager fpcmInstance = new FilteredPersistenceChangeManager();

 @Override
 public void addComponentChange(final FacesContext context, final UIComponent component, final ComponentChange change) {
     if (component == null || component.getAttributes() == null) {
         return;
     }
     final String[] persistArray = (String[]) component.getAttributes().get("persist");
     if (persistArray == null) {
         return;
     }
     for (final String persistVal : persistArray) {
         if (persistVal != null && change instanceof AttributeComponentChange && ("ALL".equals(persistVal) || ((AttributeComponentChange) change).getAttributeName().equals(persistVal))) {
             fpcmInstance.addComponentChange(context, component, change);
         }
     }     
 }

 @Override
 public void addDocumentChange(final FacesContext context, final UIComponent component, final DocumentChange change) {
     fpcmInstance.addDocumentChange(context, component, change);
 }

 @Override
 public boolean supportsDocumentPersistence(final FacesContext context) {
     return fpcmInstance.supportsDocumentPersistence(context);
 }

 @Override
 public ChangeManager.ChangeOutcome addDocumentChangeWithOutcome(final FacesContext context, final UIComponent component, final DocumentChange change) {
     return fpcmInstance.addDocumentChangeWithOutcome(context, component, change);
 }
}

As you can see, the logic for the persist attribute has been implemented in the addComponentChange method. The addComponentChange method of the FilteredPersistenceChangeManager instance will only be called when the component's persist attribute is set to ALL or contains the name of the changed attribute. Besides the addComponentChange method, all other (public) methods of the FilteredPersistenceChangeManager have been implemented to delegate to the instance variable as well.
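For completeness, the persist attribute is set on the component itself in the page or fragment. A hedged example, with illustrative ids, values and bindings, that persists only the width of a column:

<af:column id="c1" headerText="Name" width="120" persist="width">
  <af:outputText value="#{row.name}" id="ot1"/>
</af:column>
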
Conclusion

After some struggles and adjustments to the implementation and configuration of the application, we got personalization to work in our real-world application used by customers. We overcame the struggles, but it was not as easy as the blogs on the internet made us believe beforehand. We hope that sharing this experience might save you from some of the troubles we had.

To give you some more insight into the code and the struggles, we have created a (simple) demo application, AdfPersonalization, to reproduce the issues we had and which we used to solve them.

This application consists of the default Model and ViewController projects. Also, we added a ViewControllerLibrary project. This ViewControllerLibrary project is imported as a library by the ViewController project.

The AdfPersonalization application can be deployed to WebLogic in multiple ways:

  • Using the ‘Run’ button in JDeveloper.
  • Using Application > Deploy > … to IntegratedWebLogicServer in JDeveloper. This can be done to verify that struggle 2 is no longer an issue.
  • Using Application > Deploy > … to EAR, followed by running the replace-persdef-repo ANT target from the build.xml file. Afterwards the EAR can be deployed to a WebLogic server that uses a database-based MDS. This can be done to verify that struggle 1 is no longer an issue.
  • The source of this project can be downloaded via AdfPersonalization.zip.


Jarvis Pizzeria: Custom Reports & Dashboards

In our previous blog we showed the out-of-the-box functionality of Dashboards & Reporting. Next to the default dashboards, you can also create your own custom dashboards. However, before we can create custom dashboards, we need to set up some indicators within our application. In the composer we have to go to the Indicators tab.


There are three types of indicators that you can create here:
- A Dimension represents the grouping on the X axis.
- A Measure will be the value on the Y axis.
- An Attribute, in turn, will act as a filter.





You can define custom indicators and bind them to any data object available in your process. They will become available for reporting alongside the default system indicators. On the Indicators tab, we can create new indicators.





Let's start with some indicators on the Register Order process: we create a dimension on the PizzaName.



And a measure on the price.



And we create an attribute on CustomerCity.




For the indicators to be picked up, we need to publish and deploy our application again. After this we need to create some instances for the indicators to work on. We let the application run for a while to gather data.

After this, we open the Dashboard within the workspace and go to the Business Analytics tab.



Here we set up a report just like the system report, only now we can also choose from our own indicators. On the X axis we go for PizzaName and on the Y axis we select the sum of the price.



Resulting in the following graph.




Since we have set CustomerCity as an attribute, we can now filter the results based on the city. If we go to Add Filter and specify that CustomerCity equals Utrecht, we see the results just for the city of Utrecht.



Display Report:



In the toolbar there are three buttons: to reload the report, save it, or export the data.


If you export the data, a CSV file is downloaded to your local machine.





Jarvis Pizzeria: Business Reports & Dashboards

Within the workspace there are some out-of-the-box dashboards that you can use to monitor your instance. When you go to My Tasks and click Dashboard, the Health dashboard is shown by default.
There are a total of five default dashboards: Health, Open, Workload, Trend and Closed. You can use the filters on the right-hand side to filter the dashboard.




The Health dashboard gives you an instant, real-time view of the number of processes running on your instance at this very moment, differentiated between the various states:

  • Active: the process is currently in progress; tasks are being assigned or worked on.
  • Suspended: the process is currently suspended, waiting for some action to occur. This would be the case if the process is triggered by a message catch or other event.
  • Recoverable: the process has encountered errors and is in an incomplete state, waiting to be recovered manually.
To learn more about the default dashboards, visit the Process Cloud Documentation.

Next to the default out-of-the-box dashboards, it is also possible to create custom System Reports. For this we go to the second tab of the Dashboard, called Business Analytics. Here you can select your Data Source type, in this example Process, with the Jarvis_Pizzeria application. You can pick from several process-related data fields for both the X axis and the Y axis.



In this example we went for Process Instance Status grouped by Process Display Name on the X axis, and on the Y axis Process Open Time in Days with the Average function. This results in a graph that clearly represents our Test environment, since on production we serve pizza within minutes!





Jarvis Pizzeria: Aborting a Process

In this blog entry, we will take a look at the way a dynamic process deals with a process that ends with an exception. We will implement this in our “Register Order” process.

We modeled the process as depicted below:




Once saved, published and activated, we start a new dynamic process, of course through our famous form:


The expectation is that Register Order starts and that we can start executing the task. When we open the process and click the “Abort” button on the “Save in backend system” task, the process ends with an error event. See the screenshot below:



So what do we expect our Dynamic Process to do? Will it recognize the error and re-activate the “Register Order” activity? Let’s see.



Interesting to see is that the process is still running... Hmm... the “Error End” happens not to end the process, but instead just throws an error and ends up in fault recovery. How do we know it’s still running? By having selected the filter in the activities list:



Let's try to end the process with a different callback.



Running this case and ending the human task with an abort action, the process at runtime looks like the following:



We can indeed see that the process came to an end, but far more interesting is that the Dynamic Process rules consider the process to have ended happily and activated two new activities: Process Payment and Prepare Pizza.



So what have we learned? The Dynamic Process can’t handle two different callbacks. So if you want to model a scenario in which the process aborts and the case should act upon the abort, you should work with case data to indicate that the process did not end successfully. This also means that you should think about making processes repeatable, to ensure that an aborted process can be started again. So, look before you leap and happy modeling!

Disclaimer: please note that the human tasks we used in the above screenshots are of the “Submit” type, whereas you should use the “Accept-Reject” pattern.

Jarvis Pizzeria: Markers and Conditions

In this post we do a deep dive into the fundamentals of markers and conditions. But first one step back, what are markers and conditions and what do they have to do with each other?

For Dynamic Processes we recognize the following markers:

  • Repeatable: controls whether a stage, activity or milestone is repeatable.
  • Auto Complete: controls the completion of a stage instance.
  • Manually Activated: controls the activation of a stage or activity instance.
  • Required: controls whether a stage, activity or milestone is required.

Next we have the following conditions:

  • Activation: additional entry criteria for a stage.
  • Enablement: additional entry criteria for an activity.
  • Termination: additional exit criteria for a stage or activity.
  • Completion: additional exit criteria for a milestone.

Markers

Below is some additional information about these markers. This text comes directly from the Oracle documentation.

Repeatable

The behavior of the repetition relies on the presence of entry criteria. If there is no entry criterion defined, then the repetition rule is evaluated by default in the transition into the COMPLETED state. Otherwise the repetition rule is only evaluated, when an entry criterion is satisfied and the task/stage transitions away from the state AVAILABLE into the next state.

Repetition on completion

To repeat a task or stage when it gets completed a repetition rule must be defined and the task or stage must not have any entry criteria. Whenever a task or stage instance transitions into the COMPLETED state, the repetition rule is evaluated and if it evaluates to true a new instance of the task or stage is created. The new instance transitions into the AVAILABLE state.

Repetition triggered by entry criteria

A trigger for a repetition of a milestone, stage or task is a satisfied sentry, that is referenced as entry criterion. Whenever an entry criterion is satisfied, the repetition rule is evaluated and if it evaluates to true, a new instance of the milestone, stage or task is created. The new instance transitions into the AVAILABLE state. The previous instance, in case of a milestone instance, transitions in state COMPLETED and, in case of a stage or task instance, into the ACTIVE or ENABLED state (depending on the manual activation rule) because the entry criterion is satisfied.

Auto Complete

The attribute autoComplete controls the completion of a stage instance. The stage will be auto-completed if:

  • Auto-complete is true and there are no children in the ACTIVE state, and all required children are COMPLETED, DISABLED or TERMINATED.
  • Auto-complete is false and there are no children in the ACTIVE state and
    • all children are COMPLETED, DISABLED or TERMINATED, or
    • on manual completion, all required children are COMPLETED, DISABLED or TERMINATED.

Manually Activated

Whether the actual work of a task or stage can be performed depends on its entry criteria. Given that an entry criterion is fulfilled, there are two ways to activate a task:

  • By manual activation
  • By automatic activation

Manual activation is the default behavior in which it is required that a user manually activates a task. By specifying a manual activation rule, it is possible to omit this step or make it depend on case variable payload. With manual activation, a user can decide to activate a task or instead disable it. A task that is automatically activated must be carried out.
In terms of the task/stage lifecycle, manual activation corresponds to the transition from AVAILABLE to ENABLED when an entry criterion occurs, and from ENABLED to ACTIVE when the task is manually activated. In contrast, automatic activation corresponds to the direct transition from AVAILABLE to ACTIVE that fires immediately when an entry criterion occurs.

Required

A plan item may be required, meaning that it has to reach an end-like state before the containing stage can complete. Whether a plan item is required can be specified by a required rule.
This rule is evaluated when the milestone, stage or task is instantiated and transitions to the state AVAILABLE, and its result value of type boolean is maintained throughout the remainder of the process instance. If this rule evaluates to true, the element's parent stage instance must not transition to COMPLETED state unless the element is in the COMPLETED, TERMINATED or DISABLED state. For example, a task that has not yet been worked on, i.e., is in state ENABLED, prevents its containing stage to complete. If the rule is not present, then it is considered to be false.

Conditions

Below is some additional information about the conditions.

Enabled

A task or stage becomes enabled as soon as any of its entry criteria is fulfilled. If this is given when the task/stage becomes available, it immediately becomes enabled or, depending on its manual activation rule, active.

Terminated

The exit transition triggers when the task's/stage's exit criteria are met.

Our specification

The following image shows our current implementation.

Let's get into the details of this implementation.

A new instance of the Jarvis Pizzeria Dynamic Process is started by the Order Entry WebForm.

The Ordering stage has no entry condition, so it is activated immediately after the order has been placed. This stage only contains a mandatory structured process that is started automatically. This structured process registers the order in a back-end system and is completed almost immediately. The completion of the activity also completes the Ordering stage (the ‘Auto Complete’ marker is set and there is no additional ‘Termination’ condition set).

The Preparation stage has an activation condition, namely that the Ordering stage must be completed. That is the case, so the Preparation stage is also activated immediately. This also causes the structured process Prepare Pizza to become enabled. Because it has to be activated manually, it is available but not yet started.

And finally the Process Payment activity, which is not part of any stage, is also enabled.

The following image shows this situation

We have now come to the point where the ordering is completed and the order can be prepared. For this we have the structured process Prepare Pizza. For every pizza in the order, an instance must be started manually. Let's start an instance and see what happens.

We started the Prepare Pizza process, and at a certain moment it completed. This also completes the Prepared milestone, which in turn results in the Compose Order activity becoming available. The repeatable activity Prepare Pizza also becomes available again.


It seemed to be going so well, but this is not quite what we had in mind. It is not entirely logical that the Prepared milestone is set while the Prepare Pizza activity is still available. The milestone should only be set once all pizzas have been prepared.
We have to adjust this, but let's move on first. In order to complete the Preparation stage, we must indicate that no more pizzas need to be prepared (DISABLE the Prepare Pizza activity) and that the order can be composed (by executing the Compose Order activity).
After doing this we have the following situation:


The Delivered milestone also appears to have no condition. So once the Delivery stage is activated, the milestone is set, while the intention is that this only happens once the Deliver and Payment activities have been completed. So some more adjustments are needed.

We will implement the following adjustments:

  1. The Delivered milestone may only be set when both the Delivered activity and the Payment activity have been performed.
  2. The Prepared milestone may only be set if the Prepare Pizza activity is no longer available, but has been performed at least once.


Our implementation
1. For this change an entry condition for the Delivered milestone is needed. Initially, we decided that the milestone should be set as soon as the Delivered activity is completed and the Paid milestone is set. However, the condition can only be based on milestones and activities in its own stage; the Paid milestone is therefore not available. Because it is possible to base the condition on Dynamic Process data, a good alternative is available. Payment is made in the Payment process, so the condition can be provided with the check: amountPaid > 0.


2. This second condition is not that simple. The part that no Prepare Pizza activity may be available is easy: the activation criteria of the Prepared milestone must include ‘Prepare Pizza’ is DISABLED. But the other part is a bit harder. We said that the activity must be performed at least once, but actually the activity has to be performed exactly as often as the number of pizzas in the order. Data objects are needed for the total number of instances (pizzas) and the number of instances executed. Once the values in these data objects are equal, the criteria are met. This ensures that the Prepared milestone becomes available at the right moment.



But this does not prevent the Prepare Pizza activity from being started again (because it is still available), after which the values in the data objects no longer match. At this moment we do not have a good solution for this. This way it is also not possible to prepare pizzas simultaneously (in parallel). This is also an issue we had with structured processes (26 Third step in Implementing the Order Processing, Correlation). It is our expectation that Oracle will come up with a solution to be able to execute these activities in parallel.
The Prepare Pizza activity is a manually activated activity. This is done for a reason. Now we could say: because the pizza preparer starts the activity himself. But that is not the reason. The real reason is that the combination of automatic activation and repeatable is quite an unfortunate one: as soon as an instance has been completed, the next one is started immediately. Because of this, it can never be DISABLED, so we cannot meet the milestone condition.

After implementing these conditions we can rerun an instance and get it completed as expected.


Without going into the details of all the rather complex situations that can occur, we hope to have made clear with a few examples that the use of markers and conditions is not as simple as it seems. From a functional point of view, things can get complex fairly quickly.

Jarvis Pizzeria: At the PaaS Forum

The annual Partner PaaS Forum was held in Budapest, Hungary this year. The full Jarvis Pizzeria team attended the PaaS Forum, and as always Jürgen delivered a great program with lots of interesting content.
On Monday, the Community Day (introduced last year in Split), we got to take the stage and present about Jarvis Pizzeria and Dynamic Processes within the Oracle Integration Cloud. A great opportunity to show some of the highlights and content of our blog series.



The next day was full of presentations from Oracle Product Management, and at the end of the day the traditional awards were presented by Jürgen. We were both surprised and honoured to hear that we received the Partner Community Award for outstanding PCS contributions. A great recognition of which we are very proud.



Besides all the great content and hands-on labs during the next three days, it was very nice to have several good conversations and interactions with the PCS team that flew in from San Francisco. It was very nice to meet Eduardo & Nicolas and share our feedback on the product in real life.
It is motivating to see that something really happens with our feedback. Things that require more effort have been put on the backlog or are under development as we speak, but many of the quick wins have already been processed. The results of this are already visible in the application.

Overall a great week, as can be expected from the Partner events. In case you missed it, or you were there and want to learn more, do not forget to sign up for the Summercamps, 27 to 31 August 2018, in Lisbon.

Jarvis Pizzeria: The PCS Mobile app

PCS also comes with a mobile app for end users. It’s available for iOS and Android devices (search for ‘Oracle Process Mobile’ in the Play Store or the App Store). The app provides access to tasks in both connected and disconnected mode.

Compared to the browser features, there are a few differences. These are listed in the table below (this table comes directly from the Oracle documentation).



But let's just have a look at how the app works. We have installed the app on a 2017 iPad and a Samsung Galaxy S8 phone.
Note: we have already installed and configured the app.

When starting the app, the first thing you need to do is sign in. Nothing unusual so far. After that the ‘My Tasks’ list appears. On the S8 the filters are available under a button; on the iPad there is more space available, so the filters are directly available on the left side of the screen. Depending on whether you hold the iPad horizontally or vertically, the filter panel may or may not be opened immediately.

S8:


iPad:


From this screen it is very easy to sort and filter tasks.
And once you've found the desired task to approve, you can approve it directly without opening it: tick the task (checkbox or icon at the front of the task) and then click Approve. This way it is also possible to approve multiple tasks simultaneously.

S8:


iPad:


Note: From here on we will only show screenshots of the iPad, but all shown functionality is also available on the S8, even if it looks a bit different.

Behind each task is a hamburger menu; via this menu it is possible, for example, to delegate or reassign a task. When we click on a task, the task form opens just as we are accustomed to in the browser. All task functionality is available.



When we go back to the task list we also see a hamburger menu in the top left corner. Via this menu the other functionality of the app can be reached.



Let's look at the functionality that is interesting for the end user, namely starting applications and the process dashboard. The screenshots of the iPad show this. If you have experience with the browser version of PCS, this will be very recognizable.






According to the table of differences, it would not be possible to view the dashboard. But as the picture above shows, that is now possible on the iPad. This is not yet the case on the S8.

We can conclude that the mobile app is a very useful application for the end user.
On the S8 things seem to fall slightly off the screen, but we think this has to do with the rounded screen edges.


Jarvis Pizzeria: Fourth step in Implementing the Order Processing, Decision Model

In our previous blog we gave an overview of the various types of decisions that are available in a Decision Model. In this blog we show, by means of an example, that Decision Models can also be used for making complex decisions. We are going to make a decision model that determines the order of preparation of the pizzas in an order.
The order of preparation is determined by the baking time, the total preparation time of each pizza and the number of available ovens (one pizza per oven). Let's assume that we have an order for the following 9 pizzas:

  1. Small Margherita
  2. Large Margherita
  3. Small Pepperoni
  4. Medium Pepperoni
  5. Medium Pepperoni
  6. Large Pepperoni
  7. Small Quattro Stagioni
  8. Medium Quattro Stagioni
  9. Large Quattro Stagioni

Expected outcome

The pizzas with the longest baking time are prepared first. When pizzas have the same baking time, the total preparation time is also taken into account to determine the order. So to determine the sort order, we first need to determine the baking time and total preparation time for each pizza.
Because not all pizzas can be prepared at the same time, pizzas that are not in the oven will have a waiting time. Once we have established the order, we can also determine the waiting time per pizza. We explain this with the help of the figure below.



This image shows the desired output of the decision model when using 2 ovens. The result is sorted based on the longest baking time and total preparation time. As soon as a pizza is baked, the next one is picked up: as soon as the first one is finished, the third pizza is put in the oven, and as soon as the second one is finished, the fourth one is put in the oven. Then, when the third is ready, we go on with the fifth, and so on. Based on the sorting, however, the second has a shorter time than the first one. The order of the first two pizzas is therefore reversed. For the same reason, the order of the first 3 must be reversed for 3 ovens and the order of the first 4 for 4 ovens.
It may be difficult to see, but the waiting time for the different pizzas is determined as follows:
  • The first two can go into the oven immediately; they have no waiting time.
  • The third waits for the first.
  • The fourth waits for the second.
  • The fifth waits for the first and third.
  • The sixth waits for the second and fourth.
  • The seventh waits for the first, third and fifth.
  • The eighth waits for the second, fourth and sixth.
  • The ninth, finally, waits for the first, third, fifth and seventh.
And for completeness, when using 3 ovens we expect the following outcome:
And with 4 ovens:
For now we are determining the order of the pizzas within a single order; using this logic across multiple orders to determine which pizzas to prepare first is not supported yet.

Steps from Input to Output

Now let's see how to get from the input to the required output. We have the following steps:

  1. calculate baking time and total preparation time
  2. sort order by time
  3. apply sort order correction
  4. calculate waiting time

1. Calculate baking time and total preparation time

1.1 Pizza Level

We start by calculating the baking time. The baking time depends on the pizza type and size. For this we made a reusable function with a decision table in it.

For the duration another custom function, ‘getPreparationTime’, is used. This function determines how long it will take to prepare the pizza bottom or to add the filling.


Without getting into all the details, this function shows the usage of some default functions like ‘list’, ‘index of’, ‘number’ and ‘string’. It returns 2 for a small pizza, 3 for a medium and 4 for a large pizza. When the size is none of these, the function returns 0.

After getting the baking time the total duration can be determined. For this we have created a simple expression function.

1.2 Order Level

Now that we have defined the reusable functions to calculate the baking time and the total duration we need to apply these functions on the list of pizzas in the order.

Again we have defined a custom function. The function contains an expression with a for loop: for every pizza in the order, the custom function updatePizzaDetails is called. In this call we also see that the previously defined functions for determining the times are used.

The ‘updatePizzaDetails’ function is a simple setter: a context with expressions.
Now we have also defined the function to perform the time calculations on the list. Calling this function is the last step for this part.
After calling this function we have the list that can be sorted.

2. Sort the order

For this, we will use the standard 'sort' operation. The sorting criteria that we use here are defined in a separate function.
The sorting criteria are in the Sorter.desc function.
Here we see, as mentioned earlier, that there is sorting on the baking time and on the total preparation time.

3. Sort order correction

At the expected output we indicated that the order at the beginning of the list must be adjusted depending on the number of ovens. So first we have to define how many ovens there are. This can easily be done by defining a 'constant' for the number of available ovens.
By changing this value, you determine which of the above outputs the decision model gives. Next we defined a custom function with an if-then-else decision to correct the sort order.
Again we see the use of a number of standard functions, in this case all list-related functions. In the 'then' part the first x pizzas (depending on the number of ovens) are reversed. In the 'else' part the list remains unchanged. Normally, the list could simply be returned here, but this currently does not work properly; it results in a type casting error. As a work-around the sublist function can be used, which in our situation also selects the entire list. Finally we have to apply this function to the order/list.

4. Calculate waiting time

As the fourth and last step, the waiting time has to be calculated. Earlier in this blog we already indicated how the waiting time is determined; we have incorporated this into a decision table, shown below. The point of attention in this table is the hit policy used: C (Collect), which means that the output consists of a list of all matching rules.


And of course we apply this function.

Expected outcome summarized

We have summarized these 4 steps in a context.
As the outcome of the overall decision model we are only interested in the result of the fourth step. To complete the order, the customer data is also added. The final output of the complete decision model then becomes:

This output is available in the PCS process that is using the Decision Model.