However, due to the success of the Jarvis Pizzeria, new equipment was purchased. Two brand new ovens were installed, enabling the Jarvis enterprise to prepare three pizzas simultaneously... awesome!
To make optimal use of our equipment we should configure the process to prepare pizzas in parallel; see the screenshot below:
Those of you who have worked with multiple asynchronous calls from the same process (or even BPEL, back in the day) will remember the need to correlate each invocation with its corresponding callback.
Let's first try this by setting up message-based correlation. That is, the calling process initiates a correlation with a unique key, which is sent to the called process. The called process returns this unique key, so the calling process knows which callback it has to wait for.
In the properties of the “Start of Preparation” activity we have the option to initialize the correlation:
Via the data association we make sure that the key contains a unique value; for now we have used the pizza name (which we make sure is unique when sending the SOAP request).
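To make the idea of a unique correlation key concrete, here is a minimal sketch (not PCS code; `correlation_key` is a hypothetical helper) of how a caller could guarantee uniqueness even when the same pizza is ordered twice, by appending a random suffix to the pizza name:

```python
import uuid

def correlation_key(pizza_name: str) -> str:
    """Build a correlation key that stays unique even when the same pizza
    is ordered more than once, by appending a random UUID suffix.
    Illustrative only; in PCS the key value is supplied via data association."""
    return f"{pizza_name}-{uuid.uuid4().hex}"

key1 = correlation_key("salami")
key2 = correlation_key("salami")
print(key1 != key2)  # two orders for the same pizza still get distinct keys
```

In the blog scenario we instead kept the plain pizza name as the key and ensured uniqueness on the client side when sending the SOAP request.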
When testing this process we try to prepare two pizzas, salami and pepperoni, which should result in two instances of our subprocess. Looking at the screenshot below, we indeed see that two instances are spawned at the same time, but also that the second one ran into an error:
Hmm, a correlation violation. Were the correlation keys not unique? Yes, they were! But we did initialize the same correlation key more than once! So what actually happens when setting up a correlation, assuming correlation sets work the same as in BPM Suite? When the runtime engine comes across a correlation, it creates an MD5 hash from the values in the correlation keys, and it is this hash value that is used to correlate the message on the callback operation.
So the correlation violation was caused by initializing the same correlation key twice. How was this solved back in the day when using BPM Suite? The trick we could use in BPM Suite was so-called scoped conversations: instead of using the default conversation, the send and receive tasks in an embedded subprocess each use this advanced conversation.
So where does this leave us? In PCS we haven't seen the option to use Advanced Conversations (yet), so you have to find a way to work around this if you want to use send and receive tasks in a multi-instance parallel subprocess. Alternatively, you can stick with the sequential option and do without correlation.
The addition of dynamic processes might bring additional features or improvements to processes, meaning that in the near future this correlation issue might be solved and advanced conversations, or something similar, added. We will keep you up to date, stay tuned!