Friday, June 29, 2007

More Performance Tuning

I was occupied with some performance tuning this week on our .NET client/server solution; here are some of the conclusions I have drawn.

One of the bottlenecks turned out to be caused by some missing database indices. This again reminds me that I have hardly ever experienced a case where there were too many indices, but plenty of cases where there were too few. Foreign keys AND where-criteria attributes are primary candidates for indices, and I'd want a really good reason before leaving out an index on those columns. I know some folks will disagree and point out the performance penalty for maintaining indices, and that database size will grow. While I agree those implications exist, they are negligible in comparison to queries that run a hundred times faster with an index in the right place.
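
To make that a bit more concrete, here is a minimal sketch of the kind of indices I mean - one on a foreign key, one on a frequently searched column. The table and column names are made up for illustration only:

create index IX_OrderItem_OrderId on OrderItem (OrderId)   -- foreign key column
create index IX_Customer_LastName on Customer (LastName)   -- frequent where-criteria column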

Also, the reasoning that relatively small tables don't need indices at all simply does not hold up in reality. While an index seek might not be much faster than a scan on a small number of rows, missing an index on a column that is being searched for often also implies that there are no statistics attached to this column (unless statistics are created manually or the server auto-creates them via AUTO_CREATE_STATISTICS - and guess how many developers verify either), hence the query optimizer might be wrong about the data distribution, which can lead to ill-performing query plans. Also, the optimizer might make different decisions simply because it considers the fact that there is no index on certain columns.
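
And if one really decides against an index on such a column, statistics can at least be created by hand. Again, the names are invented just to show the statements; DBCC SHOW_STATISTICS then reveals what the optimizer assumes about the distribution:

create statistics ST_Customer_Region on Customer (Region)
dbcc show_statistics ('Customer', 'ST_Customer_Region')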

There are several ways to find out about missing indices: Running the Tuning Advisor, checking execution plans for anomalies, and of course some excellent tool scripts from SqlServerCentral and from Microsoft.
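
At least on SqlServer 2005, a lot of this information can also be pulled straight from the missing-index DMVs; a deliberately simplified query might look like this:

select d.statement as table_name, d.equality_columns, d.inequality_columns,
       d.included_columns, s.user_seeks, s.avg_user_impact
from sys.dm_db_missing_index_details d
join sys.dm_db_missing_index_groups g on g.index_handle = d.index_handle
join sys.dm_db_missing_index_group_stats s on s.group_handle = g.index_group_handle
order by s.user_seeks * s.avg_user_impact desc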

A query construct that turns out to be problematic at times under SqlServer looks something like this:

where (table.attribute = @param or @param is null)

While these expressions help to keep SQL statements simple when applying optional parameters, they can confuse the query optimizer. Think about it - with @param holding a value, the optimizer has a restricting criterion at hand, but with @param being null it doesn't. When the optimizer re-uses an execution plan previously compiled for the same statement, trouble is around the corner. This is especially true for stored procedures. Invoking stored procedures "with recompile" usually solves this issue. But the same may happen within client-generated SQL code as well. Options then are forcing an explicit execution plan for the query (tedious, plus if you still have to support SqlServer 2000 you are out of luck), using join hints, or finally re-phrasing the query (in one case we moved some criteria from the where-clause to the join-clause, which led the optimizer back onto the right path again - additional criteria might help, too).
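
To sketch the stored procedure variant - procedure, table and parameter names are invented here, but the pattern is exactly the one described above:

create procedure GetOrders (@customerId int = null)
with recompile   -- compile a fresh plan per call instead of re-using one built for a different parameter value
as
select * from Orders
where (CustomerId = @customerId or @customerId is null)

For client-generated statements on SqlServer 2005, appending option (recompile) to the query has a similar effect.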

By the way, I can only emphasize the importance of mass data tests and of database profiling. I don't want my customers to be the ones who find out about my performance problems, and I want to know what is being shoveled back and forth between application and database at any time.
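
Apart from running the Profiler, a quick way to spot the most expensive statements on SqlServer 2005 is the query-stats DMV - again a deliberately simplified query:

select top 20 qs.execution_count, qs.total_worker_time, qs.total_logical_reads, st.text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
order by qs.total_worker_time desc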

I was also investigating another performance problem that appeared when reloading a certain set of records with plenty of child data. Convinced that I just had to profile for the slowest SQL, I was surprised to find that all statements performed well. In reality, the time got lost within the UI, with too much refreshing going on (the culprit being some over-eagerly implemented toolbar wrapping code which re-instantiated a complete Infragistics Toolbar Manager - external code, not ours).

I enjoy runtime tuning. A clear goal, pursued through all obstacles, and at the end watching the same application run several times faster - those are the best moments for a developer.