Batch jobs are a major part of D365FO and are a great way to asynchronously process large amounts of data.
If we are writing a process that runs multiple batch jobs simultaneously on the same data, there are potential problems that we should look out for.
In this article, we will cover how to solve the problem of deadlocks and so-called lost records by utilizing just three fields and a couple of system methods.
What are deadlocks?
In D365FO, as in any concurrent program, there is potential for deadlocks.
A deadlock is a situation where different units of code that are executing on the same data are all waiting for each other, without any one of them being able to proceed. This leads to an indefinite waiting state.
The same situation can arise in D365FO when multiple batch jobs are working on the same data.
For example, if there are two batch jobs that want to work on some data, they both randomly take data from the database and start working on it:
Batch job 1 will start working on record 1, and batch job 2 will start working on record 2.
Then batch job 1 will try to move on to record 2, and batch job 2 to record 1, but neither can proceed because each of those records is still locked by the other batch job.
Since both batch jobs are waiting for each other to finish, they will never proceed, thus leading to a deadlock.
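As an illustration only, the two jobs are effectively doing something like the following sketch (the MyDataTable table and its findByRecId() helper are hypothetical). Note that in practice the database will usually detect such a cycle after a while and roll one transaction back as the deadlock victim, which still makes that batch job fail:

```x++
// Sketch of the deadlock scenario, not production code.
// MyDataTable and findByRecId() are hypothetical.
MyDataTable myDataTable;
RecId       recId1, recId2; // ids of the two contested records

// Batch job 1:
ttsbegin;
myDataTable = MyDataTable::findByRecId(recId1, true); // locks record 1
// ... works on record 1 ...
myDataTable = MyDataTable::findByRecId(recId2, true); // blocks: record 2 is locked by batch job 2
ttscommit;

// Batch job 2, running at the same time:
ttsbegin;
myDataTable = MyDataTable::findByRecId(recId2, true); // locks record 2
// ... works on record 2 ...
myDataTable = MyDataTable::findByRecId(recId1, true); // blocks: record 1 is locked by batch job 1
ttscommit;
```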
How to prevent deadlocks in D365FO?
Now that we understand the problem, how do we avoid it? Fortunately, the solution is quite straightforward.
The way to avoid deadlocks is to specify which data is being processed by each batch job (i.e., to reserve data for each batch job) so that they don’t interfere with one another.
We can achieve this by adding a ProcessorId field that will be unique for each batch job process.
Let’s look at an example of how this can be implemented in the code for the Docentric Alert Summary Email Distributor batch job, which implements logic similar to the base SysEmailDistributor class.
First, we need to add a GUID (Globally Unique Identifier) field to the table used by the batch jobs; this is where we will store the GUID value identifying the batch job that reserved each record.
We can also use an EDT (Extended Data Type) of type GUID; for this example, we will be using the SysEmailProcessorId EDT.
To populate this field, we create a new GUID with the newGuid() method and assign it to our field each time the batch job runs:
```x++
SysEmailProcessorId thisProcessorId = newGuid();
```
To select only records that are not currently being processed, it is also useful to have an enum field that defines each record's current status.
This is why we created the DocAlertSummaryEmailStatus enum, which includes the states: Created, InProcessing, and FullyProcessed.
We should select only those records which are newly created and assign the ProcessorId to them. Simplified, the pre-processing code would look like this:
```x++
protected void preProcess()
{
    DocAlertSummaryEmailStatusTable docAlertSummaryEmailStatusTable;
    SysEmailProcessorId thisProcessorId = newGuid();

    // Reserve every newly created record for this batch job run.
    while select docAlertSummaryEmailStatusTable
        where docAlertSummaryEmailStatusTable.Status == DocAlertSummaryEmailStatus::Created
    {
        ttsbegin;

        // Mark the fetched buffer for update and stamp it with our processor id.
        docAlertSummaryEmailStatusTable.selectForUpdate(true);
        docAlertSummaryEmailStatusTable.Status = DocAlertSummaryEmailStatus::InProcessing;
        docAlertSummaryEmailStatusTable.ProcessorId = thisProcessorId;
        docAlertSummaryEmailStatusTable.update();

        ttscommit;
    }
}
```
Now that we have reserved the data, we can simply fetch all the records that we need to process, like so:
```x++
protected void process(SysEmailProcessorId _processorId)
{
    DocAlertSummaryEmailStatusTable docAlertSummaryEmailStatusTable;

    // Fetch only the records reserved for this batch job.
    while select docAlertSummaryEmailStatusTable
        where docAlertSummaryEmailStatusTable.ProcessorId == _processorId
           && docAlertSummaryEmailStatusTable.Status == DocAlertSummaryEmailStatus::InProcessing
    {
        // Process logic omitted for brevity.
    }
}
```
At the end of this process, records that are finished are changed to the FullyProcessed state.
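The examples above omit this step; a minimal sketch of such a post-processing pass, following the same pattern as the methods above (the postProcess method name and shape are illustrative, not the actual Docentric implementation), could look like this:

```x++
// Hypothetical post-processing sketch: mark this processor's records as done.
protected void postProcess(SysEmailProcessorId _processorId)
{
    DocAlertSummaryEmailStatusTable docAlertSummaryEmailStatusTable;

    ttsbegin;

    // Flip all records reserved by this processor to FullyProcessed.
    while select forupdate docAlertSummaryEmailStatusTable
        where docAlertSummaryEmailStatusTable.ProcessorId == _processorId
           && docAlertSummaryEmailStatusTable.Status == DocAlertSummaryEmailStatus::InProcessing
    {
        docAlertSummaryEmailStatusTable.Status = DocAlertSummaryEmailStatus::FullyProcessed;
        docAlertSummaryEmailStatusTable.update();
    }

    ttscommit;
}
```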
With this reservation in place, even if two batch jobs run at the same time, each one touches only the records stamped with its own ProcessorId, thus successfully preventing a deadlock situation!
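To tie the pieces together, the batch task could be wired up along these lines. This is a sketch only: it assumes preProcess() is adapted to return the GUID it generates (in the version above it is a local variable), and it uses the hypothetical postProcess() sketched earlier:

```x++
// Hypothetical wiring of the steps; assumes preProcess() returns
// the processor id it generated.
public void run()
{
    SysEmailProcessorId thisProcessorId = this.preProcess(); // reserve Created records for this job
    this.process(thisProcessorId);     // work only on our reserved records
    this.postProcess(thisProcessorId); // mark them FullyProcessed
}
```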
What are lost records?
Let’s say that while executing, one of the batch jobs crashes for some reason and thus doesn’t fully process the records it was working on.
This would mean that the status of the records it was processing at the time of the crash remains stuck on InProcessing.
New batch jobs will never pick up these records: preProcess() reserves only records in the Created state, and process() touches only records carrying its own ProcessorId, which the crashed records will never have.
These records are now effectively lost records.
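A quick, hypothetical diagnostic using the same table and enum as above makes the problem visible:

```x++
// Hypothetical diagnostic: count records stuck in InProcessing. If their
// owning sessions have crashed, no future batch job will ever match their
// ProcessorId, so without a cleanup this count would never go down.
DocAlertSummaryEmailStatusTable docAlertSummaryEmailStatusTable;

select count(RecId) from docAlertSummaryEmailStatusTable
    where docAlertSummaryEmailStatusTable.Status == DocAlertSummaryEmailStatus::InProcessing;

// With count(RecId), the aggregate is returned in the buffer's RecId field.
info(strFmt("%1 record(s) currently marked InProcessing", docAlertSummaryEmailStatusTable.RecId));
```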
Preventing lost records
To prevent records from being lost due to crashes, we need a way to reset the now invalid InProcessing statuses back to Created.
For this, we need to implement a cleanup() method that resets the statuses before any records are marked for processing by the batch jobs, allowing the records to be picked up again.
But how do we identify which records need resetting? This is where a new field called SessionId comes in. We can add this field similarly to the GUID field, but this time it is a simple integer field.
We can also use an EDT here, this time of type integer; in this case, we will be using the default SessionId EDT.
To determine whether the session that reserved a record is still active, we also introduce the SessionLoginDateTime field, which is of type UtcDateTime. For this, we can use an EDT of type UtcDateTime; in this case, the default SysSessionLoginDateTime EDT. This second field is needed because session IDs can be reused over time: only the combination of SessionId and the session's login date/time identifies the reserving session unambiguously.
To solve our problem of lost records with these new fields, we now have to add session data to each record before processing. The pre-processing code will now look like this:
```x++
protected void preProcess()
{
    DocAlertSummaryEmailStatusTable docAlertSummaryEmailStatusTable;

    // First, release any records stranded by crashed sessions.
    this.cleanup();

    SysEmailProcessorId thisProcessorId = newGuid();
    xSession xSession = new xSession(sessionId());

    while select docAlertSummaryEmailStatusTable
        where docAlertSummaryEmailStatusTable.Status == DocAlertSummaryEmailStatus::Created
    {
        ttsbegin;

        docAlertSummaryEmailStatusTable.selectForUpdate(true);
        docAlertSummaryEmailStatusTable.Status = DocAlertSummaryEmailStatus::InProcessing;
        docAlertSummaryEmailStatusTable.ProcessorId = thisProcessorId;
        // Stamp the record with the current session, so that a later
        // cleanup() can tell whether the reserving session is still alive.
        docAlertSummaryEmailStatusTable.SessionId = xSession.sessionId();
        docAlertSummaryEmailStatusTable.SessionLoginDateTime = xSession.loginDateTime();
        docAlertSummaryEmailStatusTable.update();

        ttscommit;
    }
}
```
The cleanup() method mentioned above is where we will reset the records lost in crashed sessions, as shown in the example below:
```x++
protected void cleanup()
{
    DocAlertSummaryEmailStatusTable docAlertSummaryEmailStatusTable;

    while select docAlertSummaryEmailStatusTable
        where docAlertSummaryEmailStatusTable.Status == DocAlertSummaryEmailStatus::InProcessing
    {
        // If the session that reserved the record is no longer active,
        // the record was stranded by a crash and can safely be reset.
        if (!isSessionActive(docAlertSummaryEmailStatusTable.SessionId, docAlertSummaryEmailStatusTable.SessionLoginDateTime, false)
            || !isSessionActive(docAlertSummaryEmailStatusTable.SessionId, docAlertSummaryEmailStatusTable.SessionLoginDateTime, true))
        {
            ttsbegin;

            docAlertSummaryEmailStatusTable.selectForUpdate(true);
            docAlertSummaryEmailStatusTable.Status = DocAlertSummaryEmailStatus::Created;
            docAlertSummaryEmailStatusTable.update();

            ttscommit;
        }
    }
}
```
Here we see how we utilize the global isSessionActive() method to check whether the session associated with a record is still active; if it is not, the record is no longer being processed by any batch job and can safely be reset.
Thus, by using the SessionId and SessionLoginDateTime fields, we have ensured that we won't lose any records due to crashes!