sql server - SQL Locking causing timeouts


We have a long-running transaction (with nested transactions) that runs for about 2 minutes and does a heap of inserts, updates, and selects in that time. At the same time, users still need to be able to use the system, and some of the tables they use are the same ones touched by the batch program.

In the profiler trace below we are getting a block on an unrelated table. The interesting thing is that the table in the query being blocked is not used at all by the second (blocking) task.

Here is the profiler output. The blocked process (CustomerOrderEntry) is a quick one, while the blocking process is the longer-running one.

We are using MSDTC with MS SQL Server and ADO.NET.

<blocked-process-report monitorloop="546369">
 <blocked-process>
  <process id="process9fbc51c28" taskpriority="0" logused="10464" waitresource="page: 40:1:182902 " waittime="20839" ownerid="232311233" transactionname="user_transaction" lasttranstarted="2015-05-14t11:52:46.997" xdes="0x12838563b0" lockmode="s" schedulerid="1" kpid="7508" status="suspended" spid="118" sbid="2" ecid="0" priority="0" trancount="1" lastbatchstarted="2015-05-14t11:52:47.023" lastbatchcompleted="2015-05-14t11:52:47.023" lastattention="1900-01-01t00:00:00.023" clientapp="customerorderentry.exe" hostname="qit-ts" hostpid="116616" loginname="qit\tgower.admin" isolationlevel="read committed (2)" xactid="232311233" currentdb="40" locktimeout="4294967295" clientoption1="671088672" clientoption2="128056">
   <executionstack>
    <frame line="1" stmtstart="52" stmtend="512" sqlhandle="0x02000000946ebc1dc9d4bf7eefd3826e6a744d9b92931d1a0000000000000000000000000000000000000000"/>
    <frame line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"/>
   </executionstack>
   <inputbuf>
(@ordernumber varchar(20))select id, ordernumber, line, equipment, site, quantity, enginehours, gpslongitude, gpslatitude, offroad, route, eta, etd from customerorderequipment where (ordernumber = @ordernumber) order by equipment
   </inputbuf>
  </process>
 </blocked-process>
 <blocking-process>
  <process status="suspended" waitresource="40:1:2670699" waittime="49" spid="95" sbid="2" ecid="0" priority="0" trancount="2" lastbatchstarted="2015-05-14t11:53:07.817" lastbatchcompleted="2015-05-14t11:53:07.817" lastattention="1900-01-01t00:00:00.817" clientapp="ezyserveimport.exe" hostname="qit-ts" hostpid="25480" loginname="qit\tgower.admin" isolationlevel="read committed (2)" xactid="232308068" currentdb="40" locktimeout="4294967295" clientoption1="671088672" clientoption2="128056">
   <executionstack>
    <frame line="1" stmtstart="1050" stmtend="2498" sqlhandle="0x0200000050f50e11ba633c312db084db39d3e5c7ff5b8a2d0000000000000000000000000000000000000000"/>
    <frame line="1" sqlhandle="0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"/>
   </executionstack>
   <inputbuf>
(@id uniqueidentifier,@account varchar(12),@date datetime,@details varchar(39),@amount decimal(18,2),@quantity decimal(18,4),@source int,@postingbatch uniqueidentifier,@basposition varchar(3),@basposted bit,@baspostedperioddate date,@site int,@financed bit,@financeddate date,@financedbatch varchar(8000),@bankrecreference varchar(8000),@bankrectype int,@bankrecpaymedium varchar(8000),@excludefromfinancing bit,@notes varchar(8000),@reconciled bit,@reconcileddate datetime,@fuelsubsidyclaimed bit,@fuelsubsidyclaimdate date)insert into [ledgertransactions] ([id], [account], [date], [details], [amount], [quantity], [source], [postingbatch], [basposition], [basposted], [baspostedperioddate], [site], [financed], [financeddate], [financedbatch], [bankrecreference], [bankrectype], [bankrecpaymedium], [excludefromfinancing], [notes], [reconciled], [reconcileddate], [fuelsubsidyclaimed], [fuelsubsidyclaimdate]) values (@id, @account, @date, @details, @amount, @quantity, @source, @postingbatch, @basposition, @basposted,
   </inputbuf>
  </process>
 </blocking-process>
</blocked-process-report>
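(For background: SQL Server only raises blocked-process reports like the one above once a blocked process threshold has been configured; the 5 seconds here is just an illustrative value.)

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold', 5;  -- seconds before a report is raised
RECONFIGURE;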

By the sound of it, breaking the massive transaction into more manageable chunks is an option. You could use a cursor or a while loop to update/insert/select around 1000 parameters at a time and run each chunk in its own transaction (a sketch follows below). That way other transactions can read the data in the breaks between chunks. Of course, this assumes it is not imperative that all of these updates are read together. The other option is to change the isolation level used by the querying side, depending on what kind of data concurrency you need (see the second sketch after the note below).
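A minimal sketch of that chunked pattern, assuming the import rows are first loaded into a hypothetical staging table #staging with columns abbreviated, and that the target table (name borrowed from the trace) has no triggers or foreign keys, which the OUTPUT INTO clause requires:

DECLARE @batchsize int = 500;  -- tune to your "sweet spot"

WHILE EXISTS (SELECT 1 FROM #staging)
BEGIN
    BEGIN TRANSACTION;

    -- Move one batch from the staging table into the live table.
    DELETE TOP (@batchsize)
    FROM #staging
    OUTPUT deleted.id, deleted.account, deleted.amount
        INTO dbo.ledgertransactions (id, account, amount);

    COMMIT TRANSACTION;  -- locks released here, so readers get a window
END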

(By 1000 parameters I mean that if one column is being updated, that's 1000 rows per transaction; if two columns are being updated, 500 rows per transaction. On our systems 1000 parameters has been the "sweet spot" for speed.)
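As for the isolation-level option, one common route (an assumption on my part; whether versioned reads are acceptable depends on your data) is to let readers use row versioning instead of shared locks, so the CustomerOrderEntry query no longer queues behind the import's locks. YourDatabase is a placeholder:

-- Database-wide: READ COMMITTED readers see the last committed row version
-- instead of blocking on writers' locks.
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Or opt in per session with snapshot isolation:
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
DECLARE @ordernumber varchar(20) = 'ORD-0001';  -- illustrative value
SELECT id, ordernumber, line, equipment
FROM customerorderequipment
WHERE ordernumber = @ordernumber
ORDER BY equipment;  -- reads a consistent snapshot, no longer blocks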

