
Don’t Overuse DEADLOCK_PRIORITY

I hope everyone had a relaxing and safe Christmas!

By default, SQL Server checks for deadlocks every 5 seconds.  If a deadlock is found, SQL Server decides which task is the least expensive to end, terminates it, and rolls it back.  The task that gets rolled back is called the deadlock victim.
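
The session chosen as the victim receives error 1205, with a message along the lines of (the Process ID here is just an example):

Transaction (Process ID 52) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.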

If you’re running into deadlock issues and know that “Task A” should always be rolled back instead of “Task B”, you can use a setting called DEADLOCK_PRIORITY.  Instead of focusing on the expense of each query when deciding which task to end, SQL Server will end the task with the lower deadlock priority.

DEADLOCK_PRIORITY can be set to LOW, NORMAL, HIGH, or an integer between -10 and 10.  LOW is the equivalent of setting the priority to -5, NORMAL the equivalent of 0, and HIGH the equivalent of 5.

To set DEADLOCK_PRIORITY to HIGH, you would enter:
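
-- Applies for the remainder of the current session
SET DEADLOCK_PRIORITY HIGH;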

To set it to -10, you would enter:
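
SET DEADLOCK_PRIORITY -10;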

I would recommend only using LOW or HIGH, and only as a quick fix until the root cause of the deadlocks can be resolved.  I know, fixing the root cause is easier said than done.  But if you’re starting to use specific numbers between -10 and 10, chances are the root problem is never going to get fixed and DEADLOCK_PRIORITY will become a permanent workaround.

I hate to imagine a database with stored procedures all using various DEADLOCK_PRIORITY values.  This stored procedure is -6.  This one is 7.  This one is 3.  What if everyone starts setting their priority to 10?  At that point the setting is useless when a situation comes up where DEADLOCK_PRIORITY could actually be helpful.  It should be a temporary workaround, not a crutch.

Work on tuning the queries, or perhaps do some index maintenance.  Consider turning on RCSI (Read Committed Snapshot Isolation) to take advantage of row versioning instead of locking.  But be careful not to let DEADLOCK_PRIORITY become a habit.
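
If you want to see what enabling RCSI looks like, it’s a single database-level change.  The database name below is just a placeholder, and the WITH ROLLBACK IMMEDIATE clause rolls back any open transactions so the change can take effect right away:

-- YourDatabase is a placeholder name
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;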
