Wednesday, March 21, 2012
How to reduce timeouts in SQL Server?
We have designed a system which processes millions of transactions a year. There are a couple of tables in the system which are heavily used. These two tables get about 30,000 rows inserted into them every day on average. The inserts are done through stored procedures after performing some complex logic. There are also a few scheduled jobs that run every night which access these two tables. In addition, there are a few reports in the web system that access these tables for ad-hoc reporting. In the last few months the number of transactions has increased quite a bit, and the system has started giving us problems such as a lot of timeouts and a few deadlocks every day.
I am thinking that if I can somehow replicate these tables every 6 hours or so and then use the new copies for all reporting purposes, my timeouts might decrease. Could you suggest how I could go about replicating these two tables efficiently, and whether this would help my situation?
Any other suggestions on how I could improve my system would be
appreciated.
Thanks,
Karan
Ouch... tough question without being able to see your database. Here are some thoughts:
1. Make sure indexes are being optimized and maintained on a regular basis (read up on DBCC SHOWCONTIG and DBCC INDEXDEFRAG; there is a sketch right after this list).
2. Review your critical transactions and make sure they are not causing excessive blocking and deadlocks. This is genuinely hard; you'll have to read up on troubleshooting deadlocks (the trace flags in the sketch below help capture the details).
3. Yes, your ad-hoc reporting should be moved out to another database. You can use DTS, replication, or log shipping for this; we use log shipping to satisfy this need. (A lightweight copy-job alternative is sketched at the end of this post.)
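For #1 and #2, here is a rough sketch of the kind of commands I mean. The database, table, and index names are only placeholders (your post doesn't give the real ones), so adapt them to your schema:

-- Hedged sketch: YourDatabase, HotTable1/HotTable2 and their PK names are assumptions.
USE YourDatabase
GO

-- Check fragmentation on the two heavily used tables (SQL Server 2000 syntax).
DBCC SHOWCONTIG ('HotTable1') WITH ALL_INDEXES
DBCC SHOWCONTIG ('HotTable2') WITH ALL_INDEXES

-- If Scan Density drops well below ~80%, defragment online; INDEXDEFRAG does not
-- hold the long blocking locks that DBCC DBREINDEX does.
DBCC INDEXDEFRAG (YourDatabase, 'HotTable1', 'PK_HotTable1')
DBCC INDEXDEFRAG (YourDatabase, 'HotTable2', 'PK_HotTable2')

-- Refresh statistics afterwards so the optimizer sees the current data distribution.
UPDATE STATISTICS HotTable1
UPDATE STATISTICS HotTable2

-- For #2, these trace flags write deadlock detail to the SQL Server error log:
DBCC TRACEON (1204, -1)      -- SQL Server 2000
-- DBCC TRACEON (1222, -1)   -- SQL Server 2005 gives richer output

Run the SHOWCONTIG/INDEXDEFRAG part from a nightly or weekly SQL Agent job rather than by hand.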
You're likely going to need to spend some serious hours pinpointing the problem here. My guess is that option #1 above will buy you the biggest performance gains.
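On #3 and your six-hour refresh idea: if full replication or log shipping feels like overkill, a SQL Agent job that appends new rows into a separate reporting database every few hours can get the ad-hoc reports off your OLTP tables. This is only a sketch; the database, table, and key column names below are made up, since I don't know your schema:

-- Hedged sketch: ReportingDB, dbo.Claims and ClaimID are placeholder names.
-- Schedule this from a SQL Agent job every 6 hours.
DECLARE @LastCopiedID int

-- High-water mark of what the reporting copy already holds.
SELECT @LastCopiedID = ISNULL(MAX(ClaimID), 0)
FROM ReportingDB.dbo.Claims

-- Append only the rows inserted since the last run, so the OLTP table is read
-- briefly once per schedule instead of being hit by every ad-hoc report.
INSERT INTO ReportingDB.dbo.Claims (ClaimID, CreatedDate /* , other columns */)
SELECT ClaimID, CreatedDate /* , other columns */
FROM dbo.Claims WITH (NOLOCK)    -- dirty reads are usually acceptable for a reporting copy
WHERE ClaimID > @LastCopiedID

Point the web reports at ReportingDB and the report readers stop competing with your inserts.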
Cheers,
Greg Jackson
PDX, Oregon
Articles for reference:
http://www.microsoft.com/technet/community/chats/trans/sql/sql1023.mspx
http://vyaskn.tripod.com/watch_your_timeouts.htm
"Karan" wrote:
> Hi,
> We have designed a system which processes millions of transactions a
> year. There are a couple of tables in the system which are heavily
> used. These 2 tables gets about 30,000 rows inserted into them
> everyday at an average. The inserts are done through stored procedures
> after perfomring some complex logic. There are also a few scheduled
> jobs that run every night which accesses these 2 tables. In addition,
> there are a few reports that run in the web system which acces these
> table for ad-hoc reporting. In the last few months, the number of
> transactions have increased quite a bit and system started giving us
> problems such as a lot of timeouts and a few deadlockks everyday.
> I am thinking that if I can somehow replicate these tables every 6
> hours or so and then use the new tables for all reporting purposes, my
> timeouts might decrease. Could you suggest on how I could go about
> replicating these 2 tables in an efficient way? and whether this would
> help my situation?
> Any other suggestions on how I could improve my system would be
> appreciated.
> Thanks,
> Karan
I performed DBCC SHOWCONTIG on the affected tables and the stats look OK to me. I am pasting them below:
DBCC SHOWCONTIG scanning 'claims' table...
Table: 'table_name' (322100188); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned........................: 202265
- Extents Scanned.......................: 25349
- Extent Switches.......................: 27598
- Avg. Pages per Extent..................: 8.0
- Scan Density [Best Count:Actual Count]......: 91.61% [25284:27599]
- Logical Scan Fragmentation ..............: 0.64%
- Extent Scan Fragmentation ...............: 10.38%
- Avg. Bytes Free per Page................: 672.6
- Avg. Page Density (full)................: 91.69%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
Please let me know if you see anything I could improve in these stats. All other indexes on these tables have similar stats as well.
Thanks for your help.
Karan
Looks good to me. The red flag would be a scan density below 80%, so you look fine in that regard.
Greg Jackson
PDX, Oregon
Wednesday, March 7, 2012
How to read blocks of rows from database tables
Is there any option/query to read a block of rows at one time and then query again for the next page? In MySQL, for example, the SELECT statement has a LIMIT clause.
Please let me know.
Database: SQL Server 2000/2005
Thanks in advance,
Laxmilal
Whenever you use a SELECT statement, you must use a WHERE condition to limit the rows.
E.g.:
SELECT col1, col2, ... FROM TableName WHERE someid BETWEEN 1 AND 1000
Madhu
Your 'Machine not responding' symptom is most likely due to waiting for millions of rows to come back from the server.
You really need to limit the quantity of data that you are requesting from SQL Server. Without WHERE clause criteria that enforce limits, you are unnecessarily wasting bandwidth (and time), as more data than you need for the operation at hand is transported to the client application.
With SQL 2005, you may wish to explore using the new ROW_NUMBER() function to assist in retrieving specific 'blocks' of rows.
Referring to Books Online, Topic: ROW_NUMBER()
Example:
USE AdventureWorks;
GO
WITH OrderedOrders AS
(
    SELECT SalesOrderID, OrderDate,
           ROW_NUMBER() OVER (ORDER BY OrderDate) AS 'RowNumber'
    FROM Sales.SalesOrderHeader
)
SELECT *
FROM OrderedOrders
WHERE RowNumber BETWEEN 50 AND 60;

(You should be able to adapt this concept for your needs.)
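If you are still on SQL Server 2000, which has no ROW_NUMBER(), a common workaround is nested TOP queries over a unique, ordered key. A rough sketch; dbo.Orders and OrderID are placeholder names, not objects from the original question:

-- Return rows 41-60 (page 3 with a page size of 20), ordered by OrderID.
SELECT *
FROM (
    -- Step 2: of those first 60 rows, keep the last 20 by reversing the sort.
    SELECT TOP 20 *
    FROM (
        -- Step 1: take the first page_size * page_number = 60 rows in key order.
        SELECT TOP 60 *
        FROM dbo.Orders
        ORDER BY OrderID ASC
    ) AS FirstRows
    ORDER BY OrderID DESC
) AS PageRows
ORDER BY OrderID ASC    -- Step 3: flip the page back into ascending order

The literal values 20 and 60 have to be built into the query (or into dynamic SQL), since SQL Server 2000 does not accept a variable directly in TOP.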