Serializable != Synchronized, especially not with Oracle

Recently I wrote some database code and tried to convince myself that it was thread-safe. I realized I’d made a basic mistake about the serializable isolation level, and thought it was worth a quick blog entry to post the explanation, in case anyone else runs into this situation. The problem can be represented by this simplified example:

[Figure: Transaction Example]

Both transactions are trying to process withdrawals of $100. In either case, if the balance is sufficient, a status message is sent to a downstream system and the balance is reduced. Otherwise the transaction is aborted. If these transactions are run in the serializable isolation level, can they be executed concurrently without fear of overdrawing the account, sending the status message more than once, or creating any other problems? Actually, no.
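Since the figure may not reproduce well here, the logic of each transaction can be sketched in Java. This is a reconstruction from the description above; the static fields and sendStatusMessage are stand-ins for the real database row and downstream system:

```java
// Sketch of each transaction's logic, reconstructed from the prose above.
// The static fields and sendStatusMessage are stand-ins for the real
// database row and the downstream system.
public class WithdrawalSketch {
    static int balance = 100;     // the account row: initial balance $100
    static int messagesSent = 0;  // messages sent to the downstream system

    static void sendStatusMessage() { messagesSent++; }

    // Returns true if the withdrawal committed, false if it was aborted.
    static boolean withdraw(int amount) {
        if (balance >= amount) {  // SELECT balance FROM account WHERE id = ...
            sendStatusMessage();  // side effect: notify the downstream system
            balance -= amount;    // UPDATE account SET balance = ...
            return true;          // COMMIT
        }
        return false;             // abort the transaction
    }

    public static void main(String[] args) {
        // Run serially: only the first withdrawal succeeds,
        // and only one message is sent.
        System.out.println(withdraw(100));  // true
        System.out.println(withdraw(100));  // false
        System.out.println(messagesSent);   // 1
    }
}
```

Run serially, the invariant holds: one success, one message. The question is whether serializable isolation preserves it when the two transactions run concurrently.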

A cursory read of the documentation on database isolation levels might lead you to believe that serializable means that transactions act as though they are being run serially, i.e. as if they had been synchronized (using Java terminology). But that would be wrong. Databases are only required to honor commitments about serializability from the perspective of the data they store. So in the example above, the database’s only responsibility is to ensure that the balance doesn’t go negative.

Oracle read the fine print carefully and came up with a creative way to improve performance. It lets both transactions proceed full steam ahead, but only permits the first update statement to succeed. The second update will fail with an error:

ORA-08177: Can't serialize access for this transaction.

Your application has to check for this error and retry the whole transaction until it succeeds. It’s really quite smart. Essentially, Oracle is optimistically running the transactions as quickly as possible, hoping they would have been serializable. If that turns out to be wrong, Oracle tries to forget the whole thing happened by aborting the transaction. The result is higher concurrency and better performance than the more obvious locking based design, and it faithfully preserves the serializability rule.
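The retry can be wrapped in a small loop. Here is a minimal sketch, assuming the transaction body is packaged as a Supplier; in real JDBC code you would catch java.sql.SQLException and check getErrorCode() == 8177 instead of the stand-in exception used here:

```java
import java.util.function.Supplier;

// Retry loop for Oracle's ORA-08177. In real JDBC code you would catch
// java.sql.SQLException and check getErrorCode() == 8177; here a plain
// unchecked exception stands in so the sketch is self-contained.
public class SerializableRetry {
    static class SerializationFailure extends RuntimeException {}

    static <T> T runWithRetry(Supplier<T> transaction, int maxAttempts) {
        for (int attempt = 1; ; attempt++) {
            try {
                return transaction.get(); // begin, run, and commit the transaction
            } catch (SerializationFailure e) {
                if (attempt >= maxAttempts) throw e; // give up eventually
                // ORA-08177: Oracle rolled everything back; just run it again
            }
        }
    }

    public static void main(String[] args) {
        int[] calls = {0};
        int result = runWithRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new SerializationFailure();
            return 42;                    // succeeds on the third attempt
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

One design point: the whole transaction body, including any reads, has to be inside the loop, because Oracle discards everything the failed attempt did.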

The only problem is that in our example, the status message will get sent too many times. Oracle doesn’t care about the serializability of your non-database application code. In the example above, only one of the transactions should succeed, because the initial balance is $100. But both of them will send the message. The moral here is that you can’t use the serializable database isolation level to manage concurrency in your application code.

So what can you do? You could serialize the whole application code sequence (i.e. wrap it all in a synchronized block) but that would be needlessly throwing away performance. You could create a thread pool that guarantees that requests for the same ID will get serviced by the same thread, but that would be needlessly complicated. The simplest solution (which will work even if you run multiple instances of your entire application) is to have the database lock the row during the transaction, so that if any other transactions touch that row they will block until the running transaction completes.

Oracle allows you to do this by adding the clause FOR UPDATE to your select command, i.e.

select balance from account where id = 123 for update

That’s nice and simple: it allows you to run your transaction at the usual, uncomplicated READ_COMMITTED isolation level, allows concurrent processing of different database rows without any possibility of race conditions, and doesn’t require any synchronization in your application code. Unfortunately it’s not portable. It works with Oracle and Sybase, but the rules for MS SQL Server are different.
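Conceptually, FOR UPDATE gives each selected row an exclusive lock that is held until commit or rollback. As a rough in-memory analogy (a sketch only; the value of FOR UPDATE is that the database does this for you, across connections, processes, and machines):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// In-memory analogy of SELECT ... FOR UPDATE: each row id gets a lock
// that is held for the duration of the transaction, so a second
// transaction touching the same row blocks until the first commits.
public class RowLocks {
    private final Map<Long, ReentrantLock> locks = new ConcurrentHashMap<>();

    public void lockRow(long id) {   // like: select ... where id = ? for update
        locks.computeIfAbsent(id, k -> new ReentrantLock()).lock();
    }

    public void unlockRow(long id) { // like: commit / rollback
        locks.get(id).unlock();
    }

    public static void main(String[] args) throws InterruptedException {
        RowLocks locks = new RowLocks();
        locks.lockRow(123);                 // transaction 1 selects ... for update
        Thread tx2 = new Thread(() -> {
            locks.lockRow(123);             // transaction 2 blocks here ...
            locks.unlockRow(123);
        });
        tx2.start();
        tx2.join(200);
        System.out.println(tx2.isAlive());  // true: tx2 is still waiting
        locks.unlockRow(123);               // transaction 1 commits
        tx2.join();                         // ... and tx2 proceeds
    }
}
```

Note that transactions on different row ids take different locks and never block each other, which is where the concurrency win over a single synchronized block comes from.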

5 Comments

  1. Daniel Chait | July 18, 2009 at 9:17 pm

    Nice post Joe.

    One thought – in this example, you could just send the message after you have actually withdrawn the $100 from the account. Re-ordering your steps so that the data is committed before sending the message would be the more sensible approach here anyway.

  2. Joe | July 19, 2009 at 9:45 am

    Thanks, that’s a good suggestion. It works for this example – as long as the code correctly retries upon transaction failure – and in any case where you can postpone all side effects (e.g. sending the message) until after the database updates. Although that’s not always possible, e.g. if the details of the database update depend on a response from the side-effecting operation.

  3. vaandu | July 21, 2009 at 9:48 am

    It is very useful; I was just searching for the difference between serializable and FOR UPDATE. Thanks a lot, Joe.

  4. choy | July 22, 2009 at 3:34 am

    Couldn’t the “send message” be included in the same transaction as the database update? If you were using Oracle AQ, I’m quite sure it would be.

    But assuming you’re using some external system and the cost of two-phase commit (JTA, Enterprise Services) is too much, the best-effort approach Daniel suggested, with the message sent afterwards, is not bad. It just has to be understood that the QoS for the message is lower than for the update.

    Oracle’s approach to serializability is a nice option. It’s not too useful for a not-so-concurrently-updated account row in only one table, but perhaps useful for account transfers, when you want to avoid locking both accounts.

  5. Omri Ben Shitrit | February 11, 2010 at 10:59 am

    Hi Joe,

    A nice link on Wikipedia that’s relevant:

    http://en.wikipedia.org/wiki/Software_transactional_memory

    Omri
