Is Race Condition a myth in Salesforce?

Since the introduction of multi-threading, there has been a drastic change in the way we code: multiple threads now run in parallel, so many tasks can be performed at the same time.

Salesforce too adhered to this concept and, taking a step forward in its endeavor, provided a multi-threaded environment.

But everything has its pros and cons, and so does parallelism. Though it completely changed the way programs are executed, it also brought with it a new concurrency bug: what we call a race condition.

When does it actually occur?

A race condition occurs when two threads operate on the same data without proper synchronization and their operations interleave with each other, so the final outcome depends on the order in which they run.

To keep itself from falling prey to this predator, Salesforce introduced locks, as they are one of the best practices for avoiding race conditions.

But folks, locks have been associated with the ever-famous deadlock issue since their inception. To keep deadlocks from occurring, Salesforce offers some hands-on approaches.

In the optimistic approach, a transaction waits up to 10 seconds to obtain the lock; if the lock is not granted to the transaction within that period, an exception stating that the lock could not be obtained is thrown. In the pessimistic approach, a transaction needs the lock immediately, failing which an exception is thrown right away.

But deadlock may still occur in Salesforce in the case of a faulty design, i.e. it is totally dependent on the developer's design.

Now let's have a scenario-based discussion on whether race conditions exist in Salesforce or not.

Consider a situation where you load a humongous amount of data using Workbench. Each batch of that humongous chunk fires a trigger that affects a field on another object. The trigger is written in such a way that it involves a large processing time, and the same field on that other object is affected by every batch. So, can the field be updated in a sequential manner while the batches run concurrently? It seems difficult, but there are some approaches you can try (a sketch of such a trigger is shown below):

(All of the approaches described below are explained with the above scenario in mind.)
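To make the scenario concrete, here is a minimal sketch of such a trigger. The object and field names (Order__c, Summary__c, Total_Orders__c) and the assumption that a single shared summary record exists are purely illustrative, not part of the original scenario.

    // Hypothetical trigger: every batch of loaded Order__c records updates the same
    // field (Total_Orders__c) on one shared Summary__c record.
    trigger OrderRollup on Order__c (after insert) {
        // Assumes exactly one shared summary record that every batch touches.
        Summary__c summary = [SELECT Id, Total_Orders__c FROM Summary__c LIMIT 1];

        // Heavy processing would happen here, so the transaction takes a long time
        // between reading the field and writing it back.
        Decimal current = summary.Total_Orders__c;
        if (current == null) {
            current = 0;
        }
        summary.Total_Orders__c = current + Trigger.new.size();

        // Two batches running in parallel can both read the old value and overwrite
        // each other's update: the classic lost-update race condition.
        update summary;
    }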

Using Future Methods

Future methods are asynchronous methods that run in the background, and we can use them to obtain parallelism. If the job is small and dicey, you may get results suggesting that the tasks are running sequentially (it seems like a single-threaded environment), but unfortunately that is misleading: as soon as the jobs assigned to these methods become heavier, you will see unexpected results. If you do get the expected results in this scenario, it is a day on which you can think of yourself as having been born with a silver spoon in your mouth.
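Here is a minimal sketch of the future-method version of the same update, reusing the hypothetical Summary__c object and Total_Orders__c field from above. Each call runs in its own background transaction, so nothing guarantees the order in which these updates land.

    public with sharing class SummaryFutureUpdater {
        // Future methods accept only primitive parameters (or collections of them),
        // so we pass the record Id and the increment rather than the sObject itself.
        @future
        public static void incrementTotal(Id summaryId, Integer increment) {
            Summary__c summary = [SELECT Id, Total_Orders__c FROM Summary__c WHERE Id = :summaryId];
            Decimal current = summary.Total_Orders__c;
            if (current == null) {
                current = 0;
            }
            summary.Total_Orders__c = current + increment;
            // Parallel future calls can interleave here and overwrite each other's result.
            update summary;
        }
    }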

Using Queueable Interface

Aren't Queueable methods asynchronous too? If yes, then won't they show the same characteristics as future methods? If so, why are we talking about them at all?

These are the questions that arise in the heat of the moment, and the one point that makes the Queueable interface part of this discussion is that it provides a great advantage over future methods: chaining.

One queueable job can be chained to another queueable job by enqueuing the next job from the execute() method of the running job. Not only that, a queueable job can enqueue itself. This provides a recursive approach (see the sketch below).
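A minimal sketch of chaining with the Queueable interface, again using the hypothetical Summary__c object: the running job does its work and then enqueues the next job (or itself) from execute(). The class name and its payload are assumptions for illustration.

    public with sharing class SummaryQueueableJob implements Queueable {
        private Id summaryId;
        private Integer increment;

        public SummaryQueueableJob(Id summaryId, Integer increment) {
            this.summaryId = summaryId;
            this.increment = increment;
        }

        public void execute(QueueableContext context) {
            Summary__c summary = [SELECT Id, Total_Orders__c FROM Summary__c WHERE Id = :summaryId];
            Decimal current = summary.Total_Orders__c;
            if (current == null) {
                current = 0;
            }
            summary.Total_Orders__c = current + increment;
            update summary;

            // Chaining: enqueue the next job (or this same job again) from execute().
            // The chained job runs later, in its own transaction.
            // System.enqueueJob(new SummaryQueueableJob(summaryId, increment));
        }
    }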

Every enqueued job is processed in a new transaction, and multiple enqueued jobs run in parallel. As a result, one enqueued job is not able to determine whether the other enqueued jobs have completed or not.

But today everything has a workaround, and the workarounds to this problem are described below.

Note:

The first thought that strikes everyone's mind is to use a static Boolean variable to obtain mutual exclusion, just like a semaphore, to maintain data integrity. If one job is being processed within the semaphore block, then all the other jobs trying to act on the field enqueue themselves again until the block, or critical section, is free, i.e. no other job is acting on it.

But this only seems satisfying: a static variable can only persist values for a single transaction, and it is static only within the scope of the request, not throughout the organization. So this is not the approach we are looking for (a sketch of the idea, and why it fails, is shown below).
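A sketch of the naive static-flag idea, included only to show why it cannot work: the flag below lives for the duration of a single transaction, so two parallel jobs each see their own copy of it. The class and method names are hypothetical.

    public with sharing class NaiveSemaphore {
        // "Static" in Apex means static per transaction/request, NOT per org.
        // Every parallel job starts with its own copy of isLocked = false.
        private static Boolean isLocked = false;

        public static Boolean tryEnter() {
            if (isLocked) {
                return false;   // would signal the caller to re-enqueue itself
            }
            isLocked = true;    // only "locks" within this one transaction
            return true;
        }

        public static void release() {
            isLocked = false;
        }
    }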

Using Custom Settings as Flags or Locks

As a list custom setting contains static data that can be reused and accessed easily without repeated queries on the database, it can be considered for building a locking environment. This kind of environment can be provided by creating a field in the list custom setting that acts as the lock, just as the static variable did, but this time it is static throughout the organization. This approach may or may not work, as there can be conditions where the field has been read and used by one or more jobs while, during the same period, it is updated by some other job, which leads to the possibility of completely losing the control we wanted to obtain.
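A sketch of the custom-setting lock, assuming a hypothetical list custom setting Job_Lock__c with a checkbox field Is_Locked__c and a record named 'SummaryLock'. The check-then-update gap called out in the comments is exactly where this approach can break down.

    public with sharing class CustomSettingLock {
        // Returns true if this job managed to claim the lock.
        public static Boolean tryAcquire() {
            Job_Lock__c lockRecord = Job_Lock__c.getValues('SummaryLock');
            if (lockRecord == null || lockRecord.Is_Locked__c) {
                return false;
            }
            // Gap: another job can read Is_Locked__c = false right here, before our
            // update commits, and then both jobs believe they hold the lock.
            lockRecord.Is_Locked__c = true;
            update lockRecord;
            return true;
        }

        public static void release() {
            Job_Lock__c lockRecord = Job_Lock__c.getValues('SummaryLock');
            if (lockRecord != null) {
                lockRecord.Is_Locked__c = false;
                update lockRecord;
            }
        }
    }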

Using FOR UPDATE to Obtain Locks

It is quite shocking that all the approaches talk about locks, yet none of them has considered the FOR UPDATE lock provided by Salesforce itself. That is because FOR UPDATE does not, on its own, provide a failure-safe environment: it raises DML and query exceptions for us to work with but does not handle them for us. Using FOR UPDATE together with the Queueable interface can serve as the solution we are looking for: the Queueable interface gives us recursiveness, so every time an exception is raised by FOR UPDATE, it can be handled by enqueuing the same job again.
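A sketch of combining FOR UPDATE with a Queueable job, using the hypothetical Summary__c object again: the row lock is requested inside execute(), and if the lock cannot be obtained the resulting QueryException is caught and the job re-enqueues itself to try again later.

    public with sharing class LockedSummaryJob implements Queueable {
        private Id summaryId;
        private Integer increment;

        public LockedSummaryJob(Id summaryId, Integer increment) {
            this.summaryId = summaryId;
            this.increment = increment;
        }

        public void execute(QueueableContext context) {
            try {
                // FOR UPDATE locks the row; if the lock cannot be obtained,
                // a QueryException is thrown for us to handle.
                Summary__c summary = [
                    SELECT Id, Total_Orders__c
                    FROM Summary__c
                    WHERE Id = :summaryId
                    FOR UPDATE
                ];
                Decimal current = summary.Total_Orders__c;
                if (current == null) {
                    current = 0;
                }
                summary.Total_Orders__c = current + increment;
                update summary;
            } catch (QueryException e) {
                // Could not get the lock: re-enqueue this same job and try again later.
                System.enqueueJob(new LockedSummaryJob(summaryId, increment));
            }
        }
    }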

Using a Queue Data Structure

A queue works in a FIFO (first in, first out) manner. The result of every job can be pushed (enqueued) into the queue, and a schedulable job can then be used to dequeue and process it at some later point in time.

A persistent queue can be designed and developed through a custom object (with fields as per need). In this custom object, a record holding the result of a job is created as soon as the job completes (enqueuing). A job can then be scheduled to process the results within the custom object (dequeuing).
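A sketch of this persistent queue, assuming a hypothetical custom object Job_Result__c with fields Payload__c and Processed__c. Jobs insert a record as they finish (enqueue); a Schedulable picks the records up in creation order and processes them one by one (dequeue).

    public with sharing class JobResultQueue implements Schedulable {
        // Enqueue: called by a job when it finishes, storing its result as a record.
        public static void push(String payload) {
            insert new Job_Result__c(Payload__c = payload, Processed__c = false);
        }

        // Dequeue: runs on a schedule, processing results in FIFO (CreatedDate) order.
        public void execute(SchedulableContext context) {
            List<Job_Result__c> pending = [
                SELECT Id, Payload__c
                FROM Job_Result__c
                WHERE Processed__c = false
                ORDER BY CreatedDate ASC
                LIMIT 200
            ];
            for (Job_Result__c item : pending) {
                // Apply each result to the shared field here, one record at a time.
                item.Processed__c = true;
            }
            update pending;
        }
    }

The dequeue step could then be scheduled, for example at the top of every hour, with System.schedule('Drain job result queue', '0 0 * * * ?', new JobResultQueue());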

Note: A queue data structure could also be built using the Apex List collection class, but then persistence comes into play, and we have already discussed that keeping anything static won't help.

Conclusion

The race condition is not a myth in Salesforce. It can occur in some situations, depending on the jobs that are on board, but there are a few ways and approaches that can be tried to avoid it. For deadlocks, efficient design is the key for developers, as most of the deadlock-avoidance tricks can be applied during the design phase itself.

Vipul Goyal

Working as a Salesforce Engineer on AI and ML projects at Mirketa.
