
Important Lessons/Reminders for SQL Server Performance

The post yesterday about moving one of our SQL Servers to RDS and a dedicated instance (read more here) was a(nother) reminder that sometimes you have to take a step back and start with the easy stuff before assuming you need complex modifications to your system to get the results you want.

What do I mean by that?  Well, we’re likely all constantly fighting to have SQL Server live on a dedicated instance/server and working to give it RAM and disk that aren’t competing with other resources.  These are about as basic as it gets when designing systems and trying to get as much performance as possible out of them.  So, it really shouldn’t have come as any big surprise when performance improved.
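If you want to sanity-check those basics quickly, a couple of DMV queries go a long way.  This is only a rough sketch (it assumes you have VIEW SERVER STATE permission; on RDS, permissions and availability can differ, so treat it as a starting point):

    -- Physical memory the OS reports, and whether it is under pressure
    SELECT total_physical_memory_kb / 1024 AS total_physical_mb,
           available_physical_memory_kb / 1024 AS available_physical_mb,
           system_memory_state_desc
    FROM sys.dm_os_sys_memory;

    -- Average I/O latency per database file since startup; high numbers hint at disk contention
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0)   AS avg_read_latency_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id = vfs.file_id;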

I think sometimes we get so caught up in the pressure of the moment, or in the complexity of the solutions, tweaking and tuning and working down in the trenches to get our systems configured correctly, that we forget the basics.  If not “we,” then certainly me.  I tend to think that a system that’s been running just fine, thank you, for whatever period of time will always be OK.  Never mind that we’ve added these processes and these other things that contend for resources.  It can’t be a silly server configuration issue, it MUST be something else.

So we dive into code and connections and performance tuning and all of those things that are needed to address the issue.  We put the software in place.  We build out solutions that monitor differently.  We analyze the execution plans and queries and application code that is using the SQL Server.
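That deeper analysis usually starts in the plan cache.  As a rough sketch (again assuming VIEW SERVER STATE permission, and keeping in mind that these numbers reset whenever a plan leaves the cache), the most CPU-hungry statements can be pulled with something like this:

    -- Top 10 cached statements by total CPU time; a quick look, not a full workload analysis
    SELECT TOP (10)
           qs.total_worker_time / 1000 AS total_cpu_ms,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset END
                       - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;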

These are healthy things to do from time to time, sure.  But as a triage effort, they’re very expensive.  They’re expensive because the risk of any change made in the heat of the moment can be quite high.  You’re tweaking and changing things without necessarily having properly vetted those changes.  You can quickly and easily end up with many, many variables changing at once and get slapped by too many moving parts, or parts that are now just flat-out moving incorrectly.

Those types of changes are expensive as well because you could be looking at code changes.  It could be stored procedures, other server-side code, or application code.  But if you’re dancing around trying to solve an issue in real time and don’t have the time to test fully and make sure things are just right, you could end up going through the whole cycle of changing the code, deploying it, and then unwinding those changes by rolling back to prior releases.  Expensive indeed.

When you’re chasing issues like this, make sure you start with the basics.  Look at your server configurations and see what’s changed.  Know the key essentials for understanding what could impact that server and what needs to be checked out.  Start there.  It may only take a minute to find out that nothing has changed and the issues run deeper.  That’s great.  But just don’t ignore the basics.
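Checking the configuration can be as simple as pulling the current instance-level settings and diffing them against a known-good baseline.  A minimal sketch (sys.dm_os_sys_info’s physical_memory_kb column assumes SQL Server 2012 or later; where you keep the baseline is up to you):

    -- Current instance-level settings; diff this output against a saved baseline to spot changes
    SELECT name, value, value_in_use, is_dynamic, is_advanced
    FROM sys.configurations
    ORDER BY name;

    -- A few server-level facts worth re-checking when "nothing has changed"
    SELECT SERVERPROPERTY('Edition')        AS edition,
           SERVERPROPERTY('ProductVersion') AS product_version,
           cpu_count,
           physical_memory_kb / 1024 AS physical_memory_mb
    FROM sys.dm_os_sys_info;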

What’s that saying?  Paraphrasing: “Always assume that the simplest answer is correct.”  It might not be the issue this time, but it’s the best place to start before working forward through the troubleshooting process.