Use DB locks when simultaneous work may break DB constraints

This is part of the Semicolon&Sons Code Diary - consisting of lessons learned on the job. You're in the databases category.

Last Updated: 2022-12-01

A product has many notes files, of which one is elevated to the position of "best notes file". A product should always have a best notes file (this was enforced with DB constraints), so deleting the current best notes file should elevate another notes file into that position.
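For concreteness, a constraint along these lines could enforce the invariant. This is a hedged sketch with hypothetical table and column names, not my actual migration:

```ruby
# Hypothetical migration: products.best_notes_file_id must be present
# and must point at an existing notes_files row.
class RequireBestNotesFileOnProducts < ActiveRecord::Migration[7.0]
  def change
    add_reference :products, :best_notes_file,
                  null: false,
                  foreign_key: { to_table: :notes_files }
  end
end
```

With both NOT NULL and the foreign key in place, the database rejects any update that points a product at a missing or deleted notes file.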

To that end I had this code within my service for deleting a notes file:

def reassign_best_notes_file(notes_file)
  product = notes_file.product

  # Choose any other notes file (one with samples) as the replacement
  new_best_notes_file = product.notes_files.where.not(id: notes_file.id)
    .with_samples.first
  product.update!(best_notes_file: new_best_notes_file)
end

I woke up one morning to a bunch of errors. The issue was that a user had queued up a bunch of deletions, and my queue system was processing them in parallel. In the window between choosing a new best notes file and updating the product, the newly chosen notes file was itself deleted by another process, so the DB constraint failed.
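That interleaving can be replayed deterministically outside the database. The sketch below uses plain Ruby stand-ins (hypothetical data, not my real ActiveRecord models) to show the check-then-act gap:

```ruby
# Deterministic replay of the race with in-memory stand-ins.
require "set"

notes_files = Set.new([1, 2]) # file 1 is the current best
best = 1

# Process A deletes file 1 and first picks a replacement...
replacement = (notes_files - [1]).first # => 2

# ...but before A writes its choice, process B deletes file 2.
notes_files.delete(2)

# A now finishes: "best" points at a deleted file — the in-memory
# equivalent of the failed DB constraint.
notes_files.delete(1)
best = replacement

puts notes_files.include?(best) # prints false: the invariant is broken
```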

The fixed code used a lock to prevent any other parallel process from modifying that product record out from under my feet. Under the hood, ActiveRecord's with_lock opens a transaction, reloads the record with SELECT ... FOR UPDATE, and yields; any other process taking the same lock blocks until the transaction commits. Note this only works because every deletion goes through this locked section — a writer that skips the lock would still race.

def reassign_best_notes_file(notes_file)
  product = notes_file.product

  product.with_lock do
    new_best_notes_file = product.notes_files.where.not(id: notes_file.id)
      .with_samples.first
    product.update!(best_notes_file: new_best_notes_file)
  end
end
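The fix can be sanity-checked with a plain-Ruby analogy (a hypothetical Product stand-in, not the real model): a Mutex serializes the whole check-then-act section, just as with_lock serializes it at the row level.

```ruby
require "set"

# In-memory analogy: the Mutex plays the role of the row lock.
class Product
  attr_reader :best_notes_file

  def initialize(notes_file_ids)
    @notes_files = Set.new(notes_file_ids)
    @best_notes_file = notes_file_ids.first
    @lock = Mutex.new
  end

  # Delete a notes file and, if it was the best, elevate another.
  # The entire check-then-act sequence runs under the lock, so no
  # other thread can delete the chosen replacement mid-flight.
  def delete_notes_file(id)
    @lock.synchronize do
      @notes_files.delete(id)
      @best_notes_file = @notes_files.first if @best_notes_file == id
    end
  end
end

product = Product.new([1, 2, 3, 4])
[1, 2].map { |id| Thread.new { product.delete_notes_file(id) } }.each(&:join)
# Whatever the interleaving, best now points at a surviving file.
```

The analogy is loose — a Mutex only protects threads in one process, whereas the row lock coordinates across all database clients — but the shape of the fix is the same: the read and the write must happen atomically.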