Editorials

Basics: Your performance bottleneck may not be what you think it is

I’ve been running into an interesting trend while working on different systems, looking for performance issues and finding things that need to be addressed.  Typically it all starts with a client who is confused about the performance they’re seeing after an upgrade that was supposed to fix everything automagically, but doesn’t seem to have done so once completed.

There are many, many ways out there to see what’s happening with your system and to find out what query is killing it.  And by “it” I mean processing time and performance.  (Here’s a great place to start.)  So people run off and query their systems, see the behemoth queries that are battering their system’s performance, and then tune, tune and tune some more, tweaking those queries until they run just so…
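
On SQL Server, one common place to pull that information from is the sys.dm_exec_query_stats DMV.  Here’s a minimal sketch of the kind of “show me the behemoths” query people usually start with, ordered by total CPU time since the plan was cached.  Treat it as a rough first look, not a definitive measure, since the numbers reset when plans fall out of cache or the instance restarts:

-- Top cached statements by total CPU time
SELECT TOP (20)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;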

But the systems still aren’t operating up to snuff.  We’ve seen this particularly when moving a system from development to test to production and then out to a high-performance installation.

The issue isn’t that the tuning wasn’t done correctly on the long-running query, though.  It’s that tuning still needs to be done on the follow-on queries: making sure you have supporting indexes, smart T-SQL and a look at other tools that can speed those queries along their way.  What’s happening now that we don’t have the big, huge queries to worry about is that the smaller, more tightly wound queries need attention, because they’re called a bazillion times and their inefficiencies are amplified at volume.
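
On the supporting-index side, SQL Server’s missing-index DMVs are one quick way to see whether those follow-on queries have anything to lean on.  A sketch; treat the output as suggestions to review against your workload, not a to-do list:

-- Missing-index suggestions, roughly weighted by how often and how much they'd help
SELECT
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_total_user_cost,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_total_user_cost * migs.avg_user_impact DESC;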

So, as you roll the system through its paces headed to production, and then out to that high-volume environment, those smaller queries that run over and over again (think searches and transactional updates, that type of thing) and seem so short and sweet can become a real challenge to the performance of your system.
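
Those are exactly the queries that hide when the list is sorted by total or single-execution cost.  Re-sorting the same plan-cache stats by execution count is one simple way to surface them; a sketch:

-- Short, frequent statements: modest per-call cost, huge call counts
-- (text returned here is the whole batch; see the earlier query for per-statement text)
SELECT TOP (20)
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    qt.text AS batch_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY qs.execution_count DESC;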

As you use these tools and queries that pull back performance information, it’s important to look beyond the top 5 rows and pay attention to things that occur in batches, that occur under load, or that perhaps only occur in certain situations but hit hard when they do.  These can be some of the harder things to track down.  Getting the information on where to look is pretty straightforward; the key is remembering to move beyond “HOLY COW! Look at that top query!  It’s killing us!” and on to the other queries in the list that are leading the pack on resource use.
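
One way to catch that “death by a thousand cuts” pattern on SQL Server is to aggregate the plan-cache stats by query_hash, so a family of near-identical ad hoc statements shows up as one line instead of being scattered well below the top 5.  A sketch:

-- Families of similar statements that are individually cheap but collectively heavy
SELECT TOP (20)
    qs.query_hash,
    SUM(qs.execution_count) AS total_executions,
    SUM(qs.total_worker_time) / 1000 AS total_cpu_ms,
    SUM(qs.total_logical_reads) AS total_logical_reads
FROM sys.dm_exec_query_stats AS qs
GROUP BY qs.query_hash
ORDER BY SUM(qs.total_worker_time) DESC;
-- Then pull an example statement for an interesting hash:
-- SELECT TOP (1) st.text FROM sys.dm_exec_query_stats qs
--     CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st WHERE qs.query_hash = 0x...;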

One other suggestion: if you’re using tools that like to add indexes or that type of thing “automatically” after your OK, make sure you understand your underlying infrastructure.  You may not need those things, they may be redundant, or they may already be provided for by your cloud provider’s implementation.  Get to know how those implementation tools are put into play in your environment.
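
One way to sanity-check what those tools have left behind on SQL Server is sys.dm_db_index_usage_stats, which shows whether an index is actually being read or is just adding write overhead.  A sketch for the current database; keep in mind the counters reset on restart, so judge them against a meaningful uptime window:

-- Nonclustered indexes that are written to but rarely (or never) read
SELECT
    OBJECT_NAME(i.object_id) AS table_name,
    i.name AS index_name,
    us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
    ON us.object_id = i.object_id
   AND us.index_id = i.index_id
   AND us.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY ISNULL(us.user_seeks, 0) + ISNULL(us.user_scans, 0) + ISNULL(us.user_lookups, 0);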

One (!) other idea: make sure you check your system from time to time to see if things are changing.  Many times, when a system is deployed, it’s used *exactly* as designed and people are running down the paths of execution like you’d expect them to.  Then they use it for a while.  And they adapt (those sneaky users!), and as they do, usage patterns and data patterns can change and may offer an opportunity to tune your systems a bit more.
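
If you’re on SQL Server 2016 or later with Query Store enabled, the runtime stats it keeps per time interval make that kind of periodic check fairly painless.  A minimal sketch showing where the time has gone over the last week; compare the results month to month and you’ll see those usage patterns drift:

-- Where the workload has spent its time over the last 7 days, per query
-- (join through sys.query_store_query and sys.query_store_query_text for the statement text)
SELECT TOP (20)
    p.query_id,
    SUM(rs.count_executions) AS executions,
    SUM(rs.count_executions * rs.avg_duration) / 1000.0 AS total_duration_ms
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_runtime_stats_interval AS rsi
    ON rsi.runtime_stats_interval_id = rs.runtime_stats_interval_id
JOIN sys.query_store_plan AS p
    ON p.plan_id = rs.plan_id
WHERE rsi.start_time >= DATEADD(DAY, -7, SYSUTCDATETIME())
GROUP BY p.query_id
ORDER BY total_duration_ms DESC;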