
Re: update query fails as table gets locked by select query (sql blocking issue)

From: Shashank Singh <shas...@gmail.com>
Fri, 2 May 2008 02:59:36 -0700
 Hey Steve,

Thanks a ton for helping me out with this unusual problem. Please find some
further inputs inline:
 On Thu, May 1, 2008 at 6:24 PM, Steve Howard (PFE) <stho...@microsoft.com>
wrote:

>   This doesn't sound unusual, but check the transaction isolation level
> anyway. One part of your problem description makes me think you MIGHT not
> be using Read Committed.
>
> When you are doing a select with read committed transaction isolation
> level, SQL takes a shared lock on the row (if the optimizer deems row
> level locking appropriate). If it needs to select another record (which
> sounds like the case in your query), it then takes a shared lock on the
> second row, and AFTER this lock is acquired, it releases the shared lock
> on the first row. If you are using Repeatable Read, then the shared lock
> will be held until the end of your transaction. So you have this going on
> in your connection where you are doing the select.

I didn't set any transaction isolation level in my queries, so it should be
using the default one, i.e. Read Committed. Anyway, I tried to set Read
Committed explicitly; still no luck :(

> You then open a new connection. This connection has a new SPID which is
> not associated in any way with the SPID where your select is being
> handled. You issue an update. SQL first issues an update lock to evaluate
> a row to see if it is what needs to be updated. An update lock is
> compatible with a shared lock, so the evaluation can be successfully
> made. When SQL determines that this row needs to be modified, it then
> converts the update lock to an exclusive lock. However, a shared lock is
> not compatible with an exclusive lock, so as long as your first
> connection maintains a shared lock, the update statement will be blocked,
> and cannot complete the update. If you check sp_who2 on your SQL server
> while you are in this state, you will see the second SPID blocked by the
> first SPID.

Yes, you are right: the second SPID (update query) is being blocked by the
first SPID (select query).

> As for the threshold, here is where I am not sure you are using Read
> Committed:
>
> When SQL begins a query, the optimizer should estimate the optimal
> locking granularity. Most times, this is row lock, or page lock. However,
> when a threshold of 5000 locks for a single object is reached (or a
> certain percentage of available lock memory is used), SQL will attempt to
> escalate the lock. Locks are ALWAYS escalated to table locks. The reason
> I suspect that you might not be using Read Committed is because the locks
> should roll off as I described in the second paragraph if you are using
> read committed. If the locks roll off, then your select should not
> escalate its shared locks to a table lock. If you are using read
> committed, then I would expect to see escalation if a large result set is
> returned. I would also expect to see escalation on the update if a large
> number of rows are updated.

If I execute sp_lock, the select query holds a shared lock and the update
query holds an exclusive lock. Just to tell you: if my select query returns
18 rows, there is no blocking issue, but if the select query returns more
than 18 rows, the blocking issue comes into the picture. I feel the problem
is related to the duration of the shared lock held on the table which needs
to be updated, because if I use retry logic in my code for the update query
execution, then after some retries the update succeeds for record counts
above the threshold too. It suggests that after some time the shared lock
due to the select statement goes away. This way we can also say that the
select query is running at the Read Committed isolation level; otherwise
the shared lock should persist till the end of the SELECT transaction,
which it does not. But what makes the difference between processing 18 and
19 records? Ideally, if it works fine for 18 records, it should also work
fine for 19 records. What do you say?

> When you used a "nolock" hint on your select query, you told SQL to
> behave like "read uncommitted." This means the select will not take any
> shared locks, and will not respect any locks from any other SPIDs. In
> this case, you do not see the blocking issue, but you run the risk of
> reading transactionally inconsistent data. This is not normally
> recommended, especially if you are using that data for logic in your
> update.

Yes, I cannot use the "nolock" hint, as I don't want dirty reads.

> What might be a better solution is if your select either retrieves all
> the data first, and you finish that statement, then use the data you
> retrieved into a Perl data structure to issue your updates; or, if there
> is not enough memory on the client side, then select the data into a temp
> table and then use the data from the temp table to issue your updates. If
> it is possible to issue a single update, maybe with a where clause that
> updates the rows you are currently selecting (thus reducing the process
> to one statement), that would be the best solution. You may use an update
> based on a join to accomplish the last one. These are just a couple of
> suggestions without really knowing what you are trying to accomplish, but
> see if there is a way to do one of these, as it will get past the
> blocking you are seeing.

I agree that if I implement your suggestions above, this problem will be
resolved. But the thing is, this application has been running for the last
2-3 years without issues (maybe because the select query never crossed the
threshold number of records before). Moreover, I am very keen to know the
root cause of this blocking issue, as I have spent a lot of time
troubleshooting it.
    Please let me know if you need any more inputs to troubleshoot the root
cause. If required, I can also share the complete Perl script.

Thanks again for helping me out.

--Shashank




> *From:* perl...@listserv.ActiveState.com
> [mailto:perl...@listserv.ActiveState.com] *On Behalf Of* Shashank Singh
> *Sent:* Thursday, May 01, 2008 5:39 PM
> *To:* Perl...@listserv.ActiveState.com
> *Subject:* update query fails as table gets locked by select query (sql
> blocking issue)
>
> I have a perl script which updates some records in SQL Server 2000. The
> queries are like below:
>
> * A select query
>   This select query selects records that need to be updated.
> * An update query
>   This update query updates a bit column (from 0 to 1) for the IDs
>   retrieved in the above select query.
>
> The pseudo code is like below:
>
>     Open SQL Connection for select;
>     Execute SELECT query;
>     WHILE (all IDs not processed)
>     {
>         Open SQL Connection for update;
>         Execute UPDATE query;
>         Close SQL Connection for update;
>     }
>     Close SQL Connection for select;
>
> The problem I am facing is: the update query cannot update the records
> because the table is being locked by the first select query. The weird
> thing I could see is that when the select query returns more records than
> some threshold size, the update starts failing. For example, if the
> select returns X Kbytes of records, the update runs fine, but if the
> select returns more than X Kbytes, the update fails as the table gets
> locked by the select query.
>
> If I add "with nolock" to the select query, the tool runs fine in every
> scenario.
>
> Is it an issue of hash mapping of select recordsets in memory?
>
> Please help me out.
>
> Snapshot of perl script is below:
>
> Thanks

Recent Messages in this Thread
Shashank Singh May 02, 2008 12:39 am
Steve Howard (PFE) May 02, 2008 01:24 am
Shashank Singh May 02, 2008 09:59 am
Steve Howard (PFE) May 02, 2008 01:16 pm
Steve Howard (PFE) May 02, 2008 01:33 am