Editorials

High Volume Transaction SQL Tips – Indexes

Today I want to wrap up the discussion on high-performance OLTP databases with the topic of indexing. The design of the schema is important. The syntax used is significant. But even if both of those things are handled appropriately, if your indexes are off, your performance may still lag.

Here is one way I have seen it happen. You put together a great design. You build all of your tables, and even use set logic for your inserts. Everything works fine for a few weeks, months, or maybe more than a year, depending on the load. Then one day, things start acting weird. A few transactions are getting dropped, and you don't know why. If you have some logging of your errors, you might find you are getting deadlock or lock timeout errors. What happened? Your regular load didn't necessarily change. But the size of the tables you are processing is no longer the same, and the plan the optimizer generates for the work has probably changed as well.

This is often a case where the indexes a table needed when the system was first deployed change over time. A table scan or clustered index scan may have been quite performant when the table had a few thousand rows. Now that you have tens of thousands of records, or maybe millions, what it takes to fulfill your query is quite different.

Two things may have happened. First, you may already have a good index defined on your table. However, the index may be fragmented, or the statistics may be stale, no longer accurately representing the actual data. If that is the case, the query plan the optimizer generates may not be efficient. It may be escalating locks to page or table locks. It may be using inefficient join methods. It's amazing what updating statistics and re-indexing will do for the performance of a database if they haven't been done frequently.
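As a minimal sketch of that kind of maintenance in SQL Server, something like the following checks fragmentation and then refreshes an index and its statistics. The table and index names (`dbo.Orders`, `IX_Orders_CustomerId`) are placeholders, not anything from a real schema:

```sql
-- Check fragmentation for the indexes on a table (dbo.Orders is a placeholder)
SELECT i.name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id;

-- Rebuild a heavily fragmented index (this also rebuilds its statistics)
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;

-- Or refresh the table's statistics alone, with a full scan for accuracy
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```

A common rule of thumb is to reorganize at moderate fragmentation and rebuild only when fragmentation is high, since a rebuild is the heavier operation.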

The second thing that may have happened is that you did not anticipate a necessary index on your table. As the table grew, the need for that index increased, and performance began to degrade, much faster than it would with an existing index that simply isn't maintained. You can use tools to monitor your queries and recommend indexes. In SQL Server there are DMVs that provide guidance on needed indexes. You can also review the query plans for the code that is failing to locate pain points. Often, the query plan itself will suggest indexes that should improve performance.
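For example, the missing-index DMVs can be joined like this to surface the suggestions the optimizer has recorded since the last restart. This is only a starting point; the suggestions should be reviewed before creating anything:

```sql
-- Missing-index suggestions, roughly ordered by potential benefit
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g
  ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats s
  ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;
```

Keep in mind these DMVs are cleared on restart and do not account for the write cost of the indexes they suggest.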

To summarize the tips for optimizing your transaction processing on high volume OLTP databases, the key things to do are:

  • Prefer INSERTs over UPDATEs and DELETEs.
  • Process sets, if it fits your application model, by staging changes.
  • Use set-based SQL statements to process the staged data.
  • Appropriately create and maintain your indexes and statistics.
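Putting the staging tips together, a sketch of the pattern might look like this. All table and column names here (`dbo.OrderStaging`, `dbo.Orders`, `Status`) are hypothetical placeholders:

```sql
-- During the transaction: a cheap insert into a narrow staging table
INSERT INTO dbo.OrderStaging (OrderId, Status, UpdatedAt)
VALUES (@OrderId, @Status, SYSDATETIME());

-- Later, apply the staged rows in one set-based statement
-- instead of updating the main table row by row
UPDATE o
SET o.Status = s.Status
FROM dbo.Orders o
JOIN dbo.OrderStaging s
  ON s.OrderId = o.OrderId;

-- Clear the staging table once the batch is applied
TRUNCATE TABLE dbo.OrderStaging;
```

The hot path only ever inserts, so it takes minimal locks; the heavier update work is deferred to a single batched statement.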

Cheers,

Ben