T-SQL Tuesday #150: My First “Why am I even here?” Job

Thank you, Kenneth Fisher, for hosting the May edition of T-SQL Tuesday. Kenneth asked us to write about our first technical job.

As I mentioned in my previous T-SQL Tuesday post here, I was a pharmacy graduate who loved technology. When I decided to change my career to IT, I applied for a master’s degree in computer technology and eventually made it to graduation.

After I completed my master’s degree, I felt like I could easily do any job related to computers. I was wrong! I didn’t know at the time that a degree alone doesn’t matter.

I struggled at first to find a DBA job, as everyone was looking for experienced candidates who fit their requirements. I went through many interviews and failed. Then one day, after interviewing for a junior DBA role, I got an offer for a DBA position at a company (which I am not naming here). I was on cloud nine.

Everything is cool, right? No, not the way I thought it would be.

On my first day at the company, they gave me an ID card with my happy face on it, and under it, my job title: “.NET Developer”.

Wait, what?? A developer?

Well, I was in the same shock as you are and didn’t really know what to say. I tried contacting management to find out why I was placed in a completely different position when they had interviewed me for a DBA role. I did not receive any useful information from them. Now you know why I am not naming the company 🙂

I was angry in the beginning but thought I could learn and survive. You may ask, how can you survive when you know nothing about that technology? Remember, it was my first tech job ever. I was scared and worried about whether I would find another job soon if I resigned. I tried to understand and learn the code, but it was very hard to catch up. I knew that job was not for me, and I always wanted to work as a SQL DBA.

I am not an accidental DBA but a planned one who ended up in an accidental developer position 😛

I kept looking for a DBA job while still working in that developer position, and fortunately, it did not take long before I was placed as a database administrator at another company.

Since then, I have been a SQL DBA who loves working with SQL Server and databases.

As a woman in tech, I have faced challenges in my career, but I never thought of moving away from IT. It’s what I always loved, and that was the main reason I changed careers from pharmacist to SQL DBA.

From the core of my heart, I have always believed that “everything happens for a reason”. This mindset helped me get through the initial challenges I faced moving my career into IT.

All those past experiences have taught me a lot and helped me get to where I am today as a woman in tech.

Looking back, I know I made the right decision 🙂

Image source: https://in.pinterest.com/pin/514817801127582057/

Thanks so much for reading!

T-SQL Tuesday #149: Best SQL advice I would give to my younger self!

Thank you camilahenrique for hosting the April edition of T-SQL Tuesday. You can find the invitation here.

I give a big thank you to my younger self for changing my career from pharmacist to SQL DBA. I have loved computers since my childhood. You may ask, why did you choose pharmacy then? It’s a long story! I was just an innocent kid fulfilling my father’s dreams. By the time I realized pharmacy was not for me, I had already graduated as a pharmacist. I came to the United States to finally pursue my dreams and change my career. I enrolled as a master’s student in computer technology at one of the great universities in the United States. That was a major life-changing moment for me, since I did not have much computer knowledge before. One of my core subjects was SQL. That’s when I was introduced to SQL and SQL Server. I can tell you with my whole heart that at that moment I knew the decision I made was right. Though it was very tough in the beginning to catch up with the other students, I learned the T-SQL basics, and from there my computer journey started. I was stressed out in many situations and started slow. I thought I would never make it to graduation. No matter how much I learned, I made mistakes and was never perfect. I spent late hours learning the basics so I would at least pass the exams. My master’s degree was a roller coaster ride, but I finally made it to graduation 🙂

If I could give one piece of advice to my younger self, I would say that learning is a continuous process and we are all perfectly imperfect. Failure happens only when you stop being willing to learn. There will never be a perfect moment to start learning anything. No matter what you learned or what degree you pursued in the past, it doesn’t have to define your future goals. Anyone can learn anything if they have the heart and the will to work hard at it.

Learning the basics of T-SQL is really important before moving on to complex T-SQL coding. I would like to share my favorite YouTube channel, which helped me when I started my T-SQL journey. If you are starting to learn T-SQL, follow this series from Kudvenkat.

I am looking forward to reading all the posts for this T-SQL Tuesday and learning from the SQL family!

Thanks for reading!

T-SQL Tuesday #147 – Upgrade Strategies

Thanks to Steve Jones for hosting the February edition of T-SQL Tuesday! This month’s T-SQL Tuesday is all about database upgrades.

When I started my first job as a DBA seven years ago, my project was to migrate several SQL Server instances, all of which were running SQL Server 2000. In my first SQL class at school, I had started learning on SQL Server 2012, so it was a shock to be working on SQL 2000 databases (I was not familiar with SQL Server 2000 at all), especially in my first DBA job.

My first project was to migrate approximately two hundred and fifty SQL Server 2000 instances to SQL Server 2012/2016. It took us a couple of years to successfully migrate all of them.

I would like to answer the below questions from Steve Jones for this T-SQL Tuesday:

Why do we wait to upgrade?

Fear! Fear of breaking things or hurting performance, I believe, especially when you are making a big jump between SQL Server versions. Old legacy applications may or may not support newer SQL Server environments, and rebuilding an application can take years, involve more developers, and cost more. Many companies are still running older versions of SQL Server even after the end of support. Some companies are fine staying on legacy systems as long as their application works.

Strategies for testing an upgrade

When I migrated SQL Server 2000 databases, a lot of work had to be done before we could even plan the testing. There was deprecated T-SQL syntax, such as the old-style outer joins, that was no longer supported from SQL Server 2008 onward. Legacy DTS packages had to be converted or rebuilt to be compatible with the newer version of SQL Server. Using the Upgrade Advisor and the Migration Assistant tool, it is easy to capture the incompatible objects, which then need to be modified ahead of time. If developers change any of those objects in the meantime, capturing those changes is also important. Testing can be done using backup and restore, and you can skip SQL Server 2005 when upgrading a database from SQL 2000 to SQL 2008. Running the script of corrected objects against a SQL 2000 test database is recommended, so the Upgrade Advisor can be run against it again to confirm there are no remaining compatibility issues. Once the database is on SQL 2008, there are far fewer deprecated T-SQL changes, so further upgrades are easier.
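
As a hedged illustration (the table and column names below are hypothetical), this is the kind of rewrite the old-style outer joins needed before the upgrade:

/* Deprecated non-ANSI outer join syntax: allowed in SQL 2000,
   rejected from compatibility level 90 onward */
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o, dbo.Customers AS c
WHERE o.CustomerID *= c.CustomerID;

/* ANSI-standard rewrite that upgrades cleanly */
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
LEFT JOIN dbo.Customers AS c
	ON o.CustomerID = c.CustomerID;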

Testing the SSIS packages after they have been migrated from legacy DTS packages is important. Once the database is upgraded in the test environments, rigorous testing for code breakage, along with application testing, is essential.

Smoke tests or other ways to verify the upgrade worked

I believe this depends on the database and the type of application. When you have databases with the same schema on several servers in similar environments, testing one database upgrade can give you confidence that the other upgrades will work, though there are always caveats.

Moving to the cloud to avoid upgrades

Many companies are moving to cloud infrastructure because there is less maintenance compared with on-prem environments and because it can lower their costs. Moving to the cloud is one option, but the reason is usually not only to avoid upgrades; it is also about improving performance and reducing costs.

Using compatibility levels to upgrade an instance but not a database

One option is to keep the database in an older compatibility mode even after upgrading to a newer version of SQL Server such as SQL Server 2019, although features that require the latest compatibility level then cannot be used. Some features, like Query Store, still work on older compatibility modes (down to SQL 2008) even after the upgrade. Although this option is available, it is not recommended to leave a database in an older compatibility mode for a long time.
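
As a quick sketch (the database name is a placeholder), keeping a database at an older compatibility level after the upgrade and moving it up later is a single statement each way:

/* Instance is upgraded, but keep the database behaving like SQL Server 2012 during validation */
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 110;
GO

/* Move to the SQL Server 2019 level once testing is complete */
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 150;
GO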

Checklists of things to use in planning

  1. Use tools like the Migration Assistant to identify the list of incompatible objects that need to be modified to make the code compatible with the latest versions of SQL Server.
  2. Capacity planning
  3. Gather the list of deprecated and unsupported features for the version of SQL Server you are moving to.
  4. Make sure the SSIS packages are tested properly.
  5. Gather the logins and associated permissions (I used sp_hexadecimal and sp_help_revlogin).
  6. Take a backup of the database before migration and keep a copy of the database on the source server for a quick rollback if needed.
  7. Test additional features such as replication if they are in use, and make sure the publisher and subscriber are on versions of SQL Server that can replicate to each other.
  8. Script out the linked servers.
  9. Capture a performance baseline if you would like to use Query Store. After upgrading the database to the new version of SQL Server, keep it at the same compatibility level it had prior to the upgrade; this helps capture any plan regressions after the upgrade so you can compare the plans and fix performance issues quickly. To learn more about capturing baselines using Query Store, please see my blog post here.
  10. The method you plan to use to migrate a database is important, and it depends on how much downtime is acceptable for your database and application. Some of the methods are backup and restore, detach and attach, or using full, differential, and transaction log backups for less downtime (a backup/restore sketch follows this list).
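
Here is a minimal backup/restore sketch for item 10; the database name, file share, and file paths are placeholders for illustration only:

/* On the source server */
BACKUP DATABASE [YourDatabase]
TO DISK = N'\\FileShare\Migration\YourDatabase_Full.bak'
WITH INIT, STATS = 10;

/* On the destination server: restore and relocate the files */
RESTORE DATABASE [YourDatabase]
FROM DISK = N'\\FileShare\Migration\YourDatabase_Full.bak'
WITH MOVE N'YourDatabase_Data' TO N'D:\SQLData\YourDatabase.mdf',
	MOVE N'YourDatabase_Log' TO N'E:\SQLLogs\YourDatabase_log.ldf',
	STATS = 10;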

These are some of the things to consider during planning. The list does not cover everything, but it gives an overview.

The time it takes to upgrade your environment

I believe the time to upgrade a database depends on how large the database is, how many dependent features need to be moved (replication, for example), whether the application needs any changes, and what method is used to restore the database on the destination (copying files to the destination server can take time if the database is huge).

What do you evaluate in making a decision to upgrade or not?

I believe it is important to check which new features your database and application will use after the upgrade. Testing is crucial for evaluating the SQL engine, database features, and performance benefits after the upgrade. The newer cardinality estimator generally provides better estimates, which helps the engine create a more optimal plan, and many other features have been added to SQL Server over the years that improve the performance of queries and workloads.

These are some of my thoughts about database upgrades. I am looking forward to reading and learning from the experiences of other SQL family members.

Thanks for reading!

Query Store Fundamentals: Did you know?


Have you enabled Query Store, or do you plan to enable this amazing feature on your databases? There are many points to remember for using Query Store effectively. I have gathered some of them as I learn about Query Store, and I hope the list below is informative for you as well.

  1. The Query Store feature cannot be disabled in Azure SQL Database single databases and elastic pools. In an on-prem environment, you need to enable it on each database you want to collect query information for.
  2. Use trace flag 7745 to skip writing Query Store data to disk before a SQL Server shutdown or during a failover (see the sketch after this list). The default behavior is to flush all collected Query Store data to disk before shutdown, which can slow down a restart or a failover. With TF 7745, any data collected since the last flush interval can be lost, but if you can afford to lose that data, restarts and failovers can be faster. SHUTDOWN WITH NOWAIT can also be used in place of TF 7745 for a shutdown.
  3. Use trace flag 7752 to avoid a synchronous load of Query Store while the database is starting up and coming online. With the default synchronous load, queries cannot execute until Query Store is fully recovered, which is not a good situation if your database is huge. With TF 7752, Query Store is loaded asynchronously after the database comes online. This behavior is the default in SQL Server 2019.
  4. For In-Memory OLTP, runtime statistics are collected only when you enable them using sys.sp_xtp_control_query_exec_stats (Transact-SQL). This setting needs to be re-enabled after each SQL Server restart, as it does not persist across restarts.
  5. Do you know what the circle, rectangle, and triangle mean in the Query Store reports? The size of the circle carries a message as well. A circle represents successful executions (the more executions, the bigger the circle), a rectangle represents a cancelled execution, and a triangle means an error occurred during execution. In the example below, I executed the stored procedure 20 times, so the circle is big, and I stopped the same query mid-execution to show a rectangle.
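
A minimal sketch of points 2 through 4 (these trace flags are global; to persist them across restarts, add -T7745 and -T7752 as startup parameters instead of using DBCC TRACEON):

/* Skip flushing Query Store data to disk on shutdown/failover (point 2)
   and load Query Store asynchronously at database startup (point 3) */
DBCC TRACEON (7745, -1);
DBCC TRACEON (7752, -1);

/* In-Memory OLTP runtime statistics collection (point 4);
   re-run this after every SQL Server restart */
EXEC sys.sp_xtp_control_query_exec_stats @new_collection_value = 1;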

6. If your workloads have many ad hoc queries, use the Optimize for Ad hoc Workloads option to save plan cache memory, as recommended by Microsoft. There is also a forced parameterization option that you can enable at the database level to parameterize ad hoc queries so they reuse plans; this can be an alternative to Optimize for Ad hoc Workloads. Randolph West and Brent Ozar have explained why one option may be better than the other and when to use each. Thanks to Randolph and Brent for the detailed explanations! Be sure to check their posts to learn more.
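
As a sketch of the two options (the database name is a placeholder; pick one option based on the posts linked above):

/* Instance level: Optimize for Ad hoc Workloads */
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

/* Database-level alternative: forced parameterization */
ALTER DATABASE [YourDatabase] SET PARAMETERIZATION FORCED;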

7. MAX_PLANS_PER_QUERY is set to 200 execution plans per query by default. You can change this setting based on your workload and requirements.

8. If MAX_STORAGE_SIZE_MB is reached, Query Store goes into read-only mode. Remember, the storage size is only checked at the flush intervals, so Query Store may briefly exceed the maximum limit before it switches to read-only mode. In read-only mode, Query Store can still read forced plan information and apply forced plans, but it cannot write any new information.

9. To avoid reaching the maximum storage size, make sure SIZE_BASED_CLEANUP_MODE is set to AUTO, which is the default. In AUTO mode, Query Store avoids going read-only by deleting the oldest data once it reaches about 90% of the maximum storage size limit, cleaning up until usage drops to about 80%.

10. The Query Store QUERY_CAPTURE_MODE setting of AUTO skips capturing ad hoc queries with insignificant resource consumption. This is the recommended mode and is the default in Azure SQL Database and SQL Server 2019.

11. Clearing the plan cache does not delete Query Store data; Query Store data persists across SQL Server restarts. The cleanup policy can be set using the STALE_QUERY_THRESHOLD_DAYS setting.
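
The settings mentioned in points 7 through 11 can all be adjusted with one ALTER DATABASE statement; here is a sketch with placeholder values (not recommendations), assuming a database called YourDatabase:

ALTER DATABASE [YourDatabase]
SET QUERY_STORE (
	MAX_PLANS_PER_QUERY = 200,                          /* point 7 */
	MAX_STORAGE_SIZE_MB = 1024,                         /* point 8 */
	SIZE_BASED_CLEANUP_MODE = AUTO,                      /* point 9 */
	QUERY_CAPTURE_MODE = AUTO,                           /* point 10 */
	CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30)   /* point 11 */
);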

12. Last but not least, there is a query that can delete ad hoc queries from Query Store, which keeps the Query Store space from filling up quickly. Follow this article to find that query.
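
As a hedged sketch (not the exact query from the linked article), single-use ad hoc queries can be removed one query_id at a time with sp_query_store_remove_query:

/* Remove single-use ad hoc queries (not tied to a stored procedure or function) */
DECLARE @query_id int;

DECLARE adhoc_cursor CURSOR FAST_FORWARD FOR
	SELECT q.query_id
	FROM sys.query_store_query AS q
	INNER JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
	INNER JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
	WHERE q.object_id = 0               /* ad hoc, not part of an object */
		AND q.is_internal_query = 0
	GROUP BY q.query_id
	HAVING SUM(rs.count_executions) < 2;   /* executed only once */

OPEN adhoc_cursor;
FETCH NEXT FROM adhoc_cursor INTO @query_id;

WHILE @@FETCH_STATUS = 0
BEGIN
	/* removes the query, its plans, and its runtime stats from Query Store */
	EXEC sp_query_store_remove_query @query_id;
	FETCH NEXT FROM adhoc_cursor INTO @query_id;
END

CLOSE adhoc_cursor;
DEALLOCATE adhoc_cursor;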

Thanks for reading!

Resources:

  1. https://docs.microsoft.com/sql/relational-databases/performance/best-practice-with-the-query-store
  2. https://bornsql.ca/blog/dont-optimize-for-ad-hoc-workloads-as-a-best-practice/
  3. https://www.brentozar.com/archive/2018/03/why-multiple-plans-for-one-query-are-bad/
  4. https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15
  5. https://docs.microsoft.com/en-us/sql/relational-databases/performance/using-the-query-store-with-in-memory-oltp?view=sql-server-ver15
  6. https://docs.microsoft.com/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store
  7. https://docs.microsoft.com/sql/relational-databases/performance/best-practice-with-the-query-store

Query Store Fundamentals: Query Store Hints


Query Store hints are another amazing feature, available in preview in Azure SQL Database, managed instances, elastic pools, and Hyperscale databases. Query Store hints are just like the hints you use in your queries, but the difference is that you do not have to change the query code to apply them; instead, you use the purpose-built stored procedure sys.sp_query_store_set_hints. Cool, right?

Query Store hints are very useful when you need to force the optimizer to use certain hints without changing the underlying query. The feature targets specific queries in specific circumstances. Examples that can take advantage of it include parameter-sensitive queries where a recompile is needed to create an optimal plan on each execution, setting a MAXDOP value, and adjusting the memory grant size for bulk operations (source: Microsoft Docs).

You can also choose hints like forcing the legacy cardinality estimation for queries that do not perform well on newer versions of SQL Server, or running a particular query under a different compatibility level than the database compatibility level.
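
As a hedged sketch of those two scenarios (the query_id value 42 is a placeholder, and you should confirm these hints are on the supported list mentioned below):

/* Force the legacy cardinality estimator for one captured query */
EXEC sys.sp_query_store_set_hints
	@query_id = 42,
	@value = N'OPTION (USE HINT(''FORCE_LEGACY_CARDINALITY_ESTIMATION''))';
GO

/* Run one captured query under a different compatibility level than the database */
EXEC sys.sp_query_store_set_hints
	@query_id = 42,
	@value = N'OPTION (USE HINT(''QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_110''))';
GO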

For the complete list of supported and unsupported hints, please see here.

Now, let’s see an example of a stored procedure where we can apply a Query Store hint. This stored procedure is parameter sensitive, and we need a recompile to make sure it generates an optimal plan for each execution. I am using the AdventureWorks sample on Azure SQL Database for this demo, where Query Store is already enabled by default. Let’s create the stored procedure.

IF EXISTS (
		SELECT *
		FROM sys.objects
		WHERE type = 'P'
			AND name = 'Salesinformation'
		)
	DROP PROCEDURE dbo.Salesinformation
GO

CREATE PROCEDURE dbo.Salesinformation @productID [int]
AS
BEGIN
	SELECT [SalesOrderID]
		,[ProductID]
		,[OrderQty]
	FROM [Sales].[SalesOrderDetailEnlarged]
	WHERE [ProductID] = @productID;
END;

Let me clear out the Query Store data and plan cache before running the test queries.

ALTER DATABASE [AdventureWorks]

SET QUERY_STORE CLEAR;

/* Clean the procedure cache */
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE

Execute the stored procedure with two different parameters (the first parameter returns around 30K rows and the second returns 380 rows).

/* Executing the sp resulting in many rows with this parameter */
EXEC dbo.Salesinformation 870

/* This parameter returns only a few hundred rows but reuses the same execution plan */
EXEC dbo.Salesinformation 942

If you look at the execution plans for both executions, the parameter 942 returns only 380 rows but still does a clustered index scan, which is not optimal in this scenario.

To fix this, we would usually use RECOMPILE as a runtime hint or in the stored procedure header. With Query Store hints, all we need is the query id and the hint we want to apply. This is done in two steps. First, grab the query id by querying the Query Store catalog views and filtering on the query text with the LIKE operator.

/* Find the query ID associated with the query
Source: https://docs.microsoft.com/en-us/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current
*/
SELECT query_sql_text
	,q.query_id
FROM sys.query_store_query_text qt
INNER JOIN sys.query_store_query q ON qt.query_text_id = q.query_text_id
WHERE query_sql_text LIKE N'%@productID int%'
	AND query_sql_text NOT LIKE N'%query_store%';
GO

The query_id is 1 for this stored procedure. Now we can use the stored procedure sp_query_store_set_hints, passing the query_id and the hint (RECOMPILE in this case) to fix the problem.

/* Set the Query hint by using the sp sp_query_store_set_hints
Source: https://docs.microsoft.com/en-us/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current
*/
EXEC sp_query_store_set_hints @query_id = 1
	,@value = N'OPTION(RECOMPILE)';
GO

Now, execute the stored procedure again with the same parameters and see if the recompile occurred.

/* Execute the sp again with two parameters */
EXEC dbo.Salesinformation 870

EXEC dbo.Salesinformation 942

The stored procedure is recompiled now that the query hint has been applied, and you can see the difference in the execution plans. For the parameter value returning 380 records, it recompiled and produced a plan using an index seek and a key lookup, which is optimal for that execution.

To check for the query hints that are currently enabled, run the below query using the catalog views

/* Check for the Query hints that are enabled 
Source: https://docs.microsoft.com/en-us/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current*/
SELECT query_hint_id
	,query_id
	,query_hint_text
	,last_query_hint_failure_reason
	,last_query_hint_failure_reason_desc
	,query_hint_failure_count
	,source
	,source_desc
FROM sys.query_store_query_hints;
GO

The query_hint_id is shown as 1. As you update the hints on this query_id, this number increases. The query_hint_text always shows the currently active hint; the history of previous hints is not displayed.

If you would like to limit the parallelism used in the execution plan, you can set a MAXDOP 1 hint. Update the hint to MAXDOP by running the stored procedure below, then run the test procedure again with the same parameters to see the hint being applied.


/* MAXDOP HINT */
EXEC sp_query_store_set_hints @query_id = 1
	,@value = N'OPTION(MAXDOP 1)';
GO

/* Clean the procedure cache */
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE

EXEC dbo.Salesinformation 870

EXEC dbo.Salesinformation 942

If you look at the execution plans, the hint has been updated to MAXDOP: the “NonParallelPlanReason” is shown as MaxDOPSetToOne, and the RECOMPILE hint that was previously applied is no longer active. If you check the list of active Query Store hints using the catalog views, only the updated MAXDOP hint is shown, and the query_hint_id has been updated to 2.

We can also apply multiple Query Store hints to the same query, as shown below, and execute the stored procedure again with the two parameters.


/* Multiple query hints */
EXEC sp_query_store_set_hints @query_id = 1
	,@value = N'OPTION(RECOMPILE,MAXDOP 1)';
GO

/* Clean the procedure cache */
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE

EXEC dbo.Salesinformation 870

EXEC dbo.Salesinformation 942

If you look at the plan and its details, both the RECOMPILE and MAXDOP hints have been applied together. If you check the status of the query hints, this time the query_hint_id will be 3.

To remove the hints that are active on a query_id, use sp_query_store_clear_hints, providing the query_id whose Query Store hints you want to remove.

/* Remove the Query hints from Query Store */
EXEC sp_query_store_clear_hints @query_id = 1;
GO

To make sure the Query Store hints are disabled on the query_id, recheck by running the below query:

/* Recheck for the Query hints to make sure they are removed */
SELECT query_hint_id
	,query_id
	,query_hint_text
	,last_query_hint_failure_reason
	,last_query_hint_failure_reason_desc
	,query_hint_failure_count
	,source
	,source_desc
FROM sys.query_store_query_hints;
GO

In this blog post, we have only tested a few of the supported Query Store hints. There are many others, so do not forget to read the full blog post from Microsoft here for details on which hints are and are not supported.

Summary:

In this post, we have seen what Query Store hints are and tried a couple of examples to see how this preview feature works. Query Store hints can be very useful if you need one of the supported hints but cannot edit the query text, for example with third-party applications, or if you have upgraded to a newer version of SQL Server but need the older cardinality estimation or an older compatibility level for one particular query without changing the database-level compatibility.

Thanks for reading!

Resources:

  1. https://docs.microsoft.com/en-us/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current
  2. https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-query-store-set-hints-transact-sql?view=azuresqldb-current

T-SQL Tuesday #146 Preconceived Notions

Thanks to Andy Yun for hosting this month’s T-SQL Tuesday. Andy’s challenge is to share something we learned that changed a preconceived opinion.

I have learned many things throughout my career as a DBA, but in this post I would like to share something interesting I learned in recent months from Pam Lahoud about tempdb.

I always thought that changing the schema of a temp table or adding indexes after the temp table is created doesn’t matter, but it’s actually not a good idea. Yes, you heard that right. If you alter a temp table or add indexes to it after creation inside a stored procedure, the cached temp table is marked for deletion and its metadata cannot be reused. It is better to create the index along with the temp table using the inline index syntax, as in the sketch below.
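
A minimal sketch of the difference (the table and procedure names are hypothetical):

CREATE PROCEDURE dbo.UsesTempTable
AS
BEGIN
	/* Preferred: define the index inline when the temp table is created (SQL Server 2014+),
	   so the cached temp table metadata can be reused */
	CREATE TABLE #Orders
	(
		OrderID int NOT NULL,
		OrderDate datetime2 NOT NULL,
		INDEX IX_Orders_OrderDate (OrderDate)
	);

	/* Avoid: adding the index after creation marks the cached temp table for deletion */
	/* CREATE INDEX IX_Orders_OrderDate ON #Orders (OrderDate); */

	INSERT INTO #Orders (OrderID, OrderDate)
	VALUES (1, SYSDATETIME());
END;
GO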

Please read the full article here, written by Pam Lahoud, to learn in detail how this best practice helps reduce tempdb metadata contention. Pam also covered other best practices in a detailed, in-depth presentation on tempdb internals at the EightKB conference; here is the link to it.

Also, do not forget to read this wonderful article by Erik Darling on indexing temp tables. Erik explains in detail when you can create indexes on temp tables using the inline index syntax, when it is useful, and the cases where it is not an option.

I am looking forward to reading and learning from other SQL community members about what their preconceived notions are.

Thanks for reading!

Query Store Fundamentals: How does automatic plan correction work?

Image source: Pexels

How wonderful would it be if SQL Server had a way to automatically tune our queries based on our workloads? Amazing, right?

Thanks to Microsoft for introducing the automatic tuning feature in SQL Server 2017, which is also available in Azure SQL Database. Automatic tuning has two features: automatic plan correction and automatic index correction (source: Microsoft).

So, what is this automatic option, and how does it work?

Once you enable automatic plan correction, the feature continuously monitors parameter-sensitive queries that cause performance problems, watches for changes in their execution plans, and always looks for the last good plan or a better, newly created plan. For example, suppose automatic plan correction has forced a query plan based on the ongoing workload, and a new plan is later created for some reason (forced plan failures, index rebuilds, statistics updates, and so on). If the existing forced plan is no longer the best plan compared with the newly generated one, the feature will unforce the existing forced plan and force the new plan. If, on the other hand, the newly generated plan is not better than the existing forced plan, it keeps the existing plan forced and continues to monitor for regressions.

Why do you need Automatic plan correction?

For a production DBA, monitoring the generated plans and manually comparing them to force the best one can be difficult. The SQL Server engine chooses an optimal plan for our workloads, but if a parameter-sensitive query keeps generating many execution plans, enabling this feature saves a lot of time and energy that you can spend on other tasks. You can also find out why the feature has forced or unforced a plan by querying sys.dm_db_tuning_recommendations. It provides the recommendation and corrective action to fix the problem, along with other information such as the reason for the plan change, the time the regression was detected, and the script to force the better plan. According to Microsoft, there should be enough regression between the plans before a recommendation is implemented, for example when the estimated CPU gain is more than 10 seconds, or when the forced plan causes more errors than the newly generated plan (source: Microsoft).

To enable the automatic plan correction:

ALTER DATABASE <yourDatabase>
SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON ); 

Let us use an Azure SQL Database for the demo to show how automatic plan correction works. Once automatic plan correction is turned on, let us make sure the settings are updated and ready to go.

/* Checking the tuning options if the option is enabled */
SELECT *
FROM sys.database_automatic_tuning_options

Create a stored procedure, then clear out the Query Store and the plan cache before starting the workload.

/* Creating the Stored procedure */
IF EXISTS (
		SELECT *
		FROM sys.objects
		WHERE type = 'P'
			AND name = 'Salesinformation'
		)
	DROP PROCEDURE dbo.Salesinformation
GO

CREATE PROCEDURE dbo.Salesinformation @productID [int]
AS
BEGIN
	SELECT [SalesOrderID]
		,[ProductID]
		,[OrderQty]
	FROM [Sales].[SalesOrderDetailEnlarged]
	WHERE [ProductID] = @productID;
END;

/* Clearing the Query store */
ALTER DATABASE [AdventureWorks]

SET QUERY_STORE CLEAR;

/* Clean the procedure cache */
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE

Create the first workload and check the execution plan in Query Store. The parameter below returns only 385 records.

/* Creating the first workload */
EXEC dbo.Salesinformation 942 
GO 50

Clearing out the procedure cache

/* Clean the procedure cache */
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE

Now, create the regression. This parameter returns 237K records, which causes the regression because the query is parameter sensitive.

/* Creating Regression */
EXEC dbo.Salesinformation 707 
GO 10

Run the stored procedure with the first parameter again (parameter value 942) and check Query Store again.

/* Run the workload again */
EXEC dbo.Salesinformation 942 
GO 30

If you observe, plan id 1 is automatically forced by the automatic plan correction feature. Amazing, right? By querying sys.dm_db_tuning_recommendations, you can check the reason for the plan change and other information.

/* Check for the Automatic plan correction in the Query Store */
/* Check for the tuning recommendations */
SELECT *
FROM sys.dm_db_tuning_recommendations
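
If you prefer something more readable than SELECT *, the details column is JSON and can be unpacked; here is a sketch based on the Microsoft docs sample (the JSON paths are as documented there):

SELECT reason,
	score,
	script = JSON_VALUE(details, '$.implementationDetails.script'),
	planForceDetails.*
FROM sys.dm_db_tuning_recommendations
CROSS APPLY OPENJSON(details, '$.planForceDetails')
	WITH (
		query_id int '$.queryId',
		regressed_plan_id int '$.regressedPlanId',
		recommended_plan_id int '$.recommendedPlanId'
	) AS planForceDetails;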

Continuously monitoring for regressed plans and query performance and forcing the last known good plan is one of the best features Microsoft offers.

There are certain limitations to the plan forcing. Please check them here.

Summary:

Manually monitoring forced plans is a tedious task. As we know, better plans can be generated for queries with forced plans later down the road, and in those scenarios it would be really tough to monitor each forced plan, unforce it, and force the newly generated optimal plan ourselves. The automatic plan correction feature takes on that burden by automatically unforcing and forcing plans based on the regressions it detects, ensuring the last known good plan is forced automatically.

Thanks for reading!

Resources:

  1. https://docs.microsoft.com/en-us/sql/relational-databases/automatic-tuning/automatic-tuning?view=sql-server-ver15
  2. https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-query-store-plan-transact-sql?view=sql-server-ver15#plan-forcing-limitations

Query Store Fundamentals: Why did my plan forcing fail?

One of the advantages of using Query Store is the ability to force a plan. Once you force a plan, there may still be reasons why the optimizer creates a new execution plan instead of using the forced one. When that happens, a failure reason is stored in Query Store. When the forced plan cannot be used, the optimizer goes through the regular compilation and optimization phases to create a new plan, but the query itself does not fail; it executes normally with the newly created plan.

But what causes these failures, and how can you find the reason behind them?

Altering or dropping objects used inside the forced query plan can cause forced plan failures.

Let’s see this with examples.

Let’s create a stored procedure, execute it, and look at the actual execution plan:

IF EXISTS (
		SELECT *
		FROM sys.objects
		WHERE type = 'P'
			AND name = 'StockInfo'
		)
	DROP PROCEDURE dbo.StockInfo
GO

CREATE PROCEDURE dbo.StockInfo @Quantity [int]
AS
BEGIN
	SELECT [fo].[Order Key]
		,[si].[Lead Time Days]
		,[fo].[Quantity]
	FROM [Fact].[Order] AS [fo]
	INNER JOIN [Dimension].[Stock Item] AS [si] ON [fo].[Stock Item Key] = [si].[Stock Item Key]
	WHERE [fo].[Quantity] = @Quantity;
END;

USE WideWorldImportersDW
GO

EXEC dbo.StockInfo 360;

To work through this example, let’s create an index and see whether it gets used.

USE [WideWorldImportersDW]
GO

CREATE NONCLUSTERED INDEX OderQuantity ON [Fact].[Order] ([Quantity]) INCLUDE (
	[Order Key]
	,[Stock Item Key]
	)
GO

Rerun the stored procedure and check if the index is being used

USE WideWorldImportersDW
GO

EXEC dbo.StockInfo 360;

Since the new plan uses the index, let’s force this plan so that we can later modify the index and see how forced plan failures are caused.
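
I forced the plan through the Query Store reports in SSMS, but the same can be done in T-SQL; here is a sketch where the query_id and plan_id values are from my run and will differ in yours:

/* Find the query_id and plan_id for the StockInfo query */
SELECT q.query_id, p.plan_id, qt.query_sql_text
FROM sys.query_store_query AS q
INNER JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
INNER JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
WHERE qt.query_sql_text LIKE N'%Lead Time Days%';

/* Force the plan that uses the new index */
EXEC sp_query_store_force_plan @query_id = 1, @plan_id = 1;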

Now that I have forced the plan that uses the index, I will go ahead and drop the index and execute the stored procedure again.

USE [WideWorldImportersDW]
GO

DROP INDEX OderQuantity ON [Fact].[Order]

USE WideWorldImportersDW
GO

EXEC dbo.StockInfo 360;

Notice that the query itself does not fail, but a new plan is generated even though there is a forced plan. If you look at the newly generated plan, it no longer uses the index.

Let’s look at the reason for this failure in the Queries with Forced Plans report in the GUI. The failure reason is NO_INDEX because we dropped the index used by the forced plan; the index referenced in the forced plan no longer exists.

Once you recreate the exact index that was dropped, the query uses the forced plan again.

USE [WideWorldImportersDW]
GO

CREATE NONCLUSTERED INDEX OderQuantity ON [Fact].[Order] ([Quantity]) INCLUDE (
	[Order Key]
	,[Stock Item Key]
	)
GO

/* Rerun the sp */
USE WideWorldImportersDW
GO

EXEC dbo.StockInfo 360;

Altering the index used in the forced plan can also cause a failure. I will add a column to the index, rerun the stored procedure, and check the plan again.

USE [WideWorldImportersDW]
GO

CREATE INDEX OderQuantity ON [Fact].[Order] (
	[Quantity]
	,[Order Key]
	)
	WITH (DROP_EXISTING = ON);

/* Rerun the sp */
USE WideWorldImportersDW
GO

EXEC dbo.StockInfo 360;
GO

Plan 2 is being used.

Open the Queries with Forced Plans report and check the failure reason. NO_PLAN is the reason, as there is no forced plan associated with the new index definition. Remember, only the last failure reason and the last force failure count are shown here; the history of failure reasons is not. We can capture that history using Extended Events, as shown at the end of this post.

If you have queries with forced plans whose execution plans reference objects by three-part names (database.schema.object), renaming the database will also cause failures (source: Microsoft).

Note that only the latest failure reason is shown, whether you use the Queries with Forced Plans report in the GUI or sys.query_store_plan.

Code Source: https://docs.microsoft.com/en-us/sql/relational-databases/performance/best-practice-with-the-query-store?view=sql-server-ver15
USE [WideWorldImportersDW];
GO

SELECT p.plan_id, p.query_id, q.object_id as containing_object_id,
    force_failure_count, last_force_failure_reason_desc
FROM sys.query_store_plan AS p
JOIN sys.query_store_query AS q on p.query_id = q.query_id
WHERE is_forced_plan = 1;

To capture the history of failure reasons, you can use the extended event “query_store_plan_forcing_failed“. Let’s create the Extended Events session:

CREATE EVENT SESSION [Querystoreforcedplanfailures] ON SERVER 

ADD EVENT qds.query_store_plan_forcing_failed
ADD TARGET package0.event_file(SET filename=N'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\qserror.xel'),
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE=OFF)
GO
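
The session is created with STARTUP_STATE=OFF, so it has to be started before it captures anything; a one-line start (and the matching stop) looks like this:

/* Start capturing plan forcing failures */
ALTER EVENT SESSION [Querystoreforcedplanfailures] ON SERVER STATE = START;
GO

/* Stop the session when you are done */
-- ALTER EVENT SESSION [Querystoreforcedplanfailures] ON SERVER STATE = STOP;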

I enabled the Extended Events session and captured information about when Query Store plan forcing failed and the reason for each failure. I dropped the index “OderQuantity” first, which produced one failure, and later changed the index definition by adding a column. Both failures were captured along with the error number, plan_id, and query_id.

You can also be notified when a forced plan failure occurs by using the extended event “qds.query_store_notify_force_failure_failed“.

Summary:

In this blog post, we have seen why plan forcing fails and how to view the failures using the GUI, sys.query_store_plan, and Extended Events. Forcing a plan does not mean it will always be used; the optimizer can still create a new plan through the regular optimization process. Altering or dropping objects referenced by the forced execution plan, or renaming a database that has active forced plans, can cause these failures.

Thanks for reading!

Resources:

https://docs.microsoft.com/en-us/sql/relational-databases/performance/best-practice-with-the-query-store?view=sql-server-ver15

T-SQL Tuesday #145: Pandemic, Presentations and Perspectives!

Thanks to Xavier Morera for hosting the December edition of T-SQL Tuesday!


I have never experienced what an in-person conference feels like. I started my speaking career during the pandemic with a virtual event, New Stars of Data, back in 2020.

I see different perspectives on virtual and in-person events. To be very honest, I love virtual events for many reasons. They have given me and other new speakers a chance to speak at conferences worldwide without having to travel.

As I have never experienced an in-person event before, I am really looking forward to attending one in the near future.

For this T-SQL Tuesday, Xavier asked a few questions for us to answer.

  1. Which is your favorite conference and why?
  2. Which is the best venue that you have visited for a tech conference?
  3. Who is the best presenter that you have ever listened to?
  4. Which location would you like for your next event to take place and why Costa Rica?

Which is your favorite conference and why?

My favorite conference will always be the one that gave me the stage and a chance as a new speaker: New Stars of Data.

I cannot stop with one conference 🙂 My next favorite conference is DataWeekender, which is also an online data platform event. PASS Data Community Summit 2021 and Data Platform Virtual Summit 2021 are equally favorites of mine, along with New Stars of Data and DataWeekender. If I listed all of my favorite conferences, the list would definitely be long. All of the above conferences encourage diversity and give new speakers a chance to share the same stage as experienced speakers. New Stars of Data, organized by the DataGrillen team, is especially dedicated to new speakers, and their support for new speakers is just incredible. The DataGrillen team also organized Dativerse, specially dedicated to speakers from underrepresented communities. They also started Dataminutes, a conference of ten-minute lightning talks, and now the New Stars of Data Speaker Improvement Library. The DataGrillen team continuously comes up with new ideas to support the community in every way possible.

The DataWeekender team has a style that I love. I just love the way they organize and advertise the event in the most entertaining way possible, with an amazing logo. They encourage and support new speakers from diverse backgrounds.

I am thankful to PASS Summit 2021 and DPS 2021 for encouraging new speakers from diverse backgrounds to submit to their conferences, giving them incredible support and a chance to share the stage with very well-known speakers. For many new speakers, it is a dream to speak at big events. Because these conferences were virtual, many professionals used this wonderful opportunity to challenge themselves and take the first step in their speaking journey. If you count the number of new speakers from the SQL community over the past two years, you will be truly amazed. I believe we need to keep online conferences alive even as in-person conferences return. This helps people who cannot travel to other places or countries for various reasons, such as family situations, young kids, or visa waits, but are still interested in presenting and learning from other speakers.

Which is the best venue that you have visited for a tech conference?

I have never attended an in-person event before 😛

Who is the best presenter that you have ever listened to?

That’s a really tough question. Every speaker is special in their own way. There is one speaker who actually inspired me to start thinking about speaking: Brent Ozar. His valuable suggestions in one of his podcast interviews helped me consider becoming a speaker. I cannot thank Brent enough for his continuous support of the community. Kudos to you, Brent.

Which location would you like for your next event to take place and why Costa Rica?

If my first in-person event were in Costa Rica, how amazing would that be? I mean, look at the picture below from Tortuguero (‘Land of Turtles’). Who would not love to spend their time watching the sunrise and sunset here?

The picturesque beaches of Tortuguero, Costa Rica during sunset
Source: https://www.nomadicmatt.com/travel-blogs/my-favorite-destinations-in-costa-rica/

Thanks again, Xavier, for hosting this month’s T-SQL Tuesday!

Query Store Fundamentals: Capture Performance Baselines

Prior to SQL Server 2016, it was difficult to figure out the reason for plan changes because the procedure cache only stores the latest plans and no plan history. When a query is sent to SQL Server, the optimizer can choose an existing plan from the plan cache or create a different plan for reasons such as changes in cardinality estimation, index rebuilds, or statistics updates. The new plan may or may not be optimal. Using the Query Store feature, plan regressions can be identified easily.

Image: https://www.pexels.com/

The Query Store feature was especially useful to me when I upgraded several databases to newer versions of SQL Server. Starting with the new cardinality estimator released in SQL Server 2014, the optimizer changes are tied to the compatibility level of the database. Query Store helps capture baselines during database upgrades by collecting plan information for all the queries as they behaved prior to the upgrade. This is possible by keeping the database in the same older compatibility mode after the upgrade. Once all the queries have executed and their plan information has been collected under the old compatibility mode, update the database compatibility level to the newer version of SQL Server.

This helps us identify any plan changes after the database is moved to the new compatibility mode. New versions of SQL Server generally create better execution plans with the new cardinality estimation, but some queries may work better with the older estimates. If you see plan regressions after moving to the latest compatibility level, you can simply force the last good plan, generated under the old compatibility level, from Query Store. This saves a lot of time and frustration in identifying why queries became slow after the database upgrade.

With the new cardinality estimator, most queries run faster than before, but there can be cases where previously well-behaved queries slow down after the upgrade. This use case is especially for those queries.

Let’s see an example of capturing a baseline using the Query Store feature for database upgrades.

In this scenario, we are upgrading a database from SQL Server 2012 to SQL Server 2016.

After upgrading the database to SQL Server 2016, keep the database compatibility mode the same as SQL Server 2012.

/* restoring the database WideWorldImportersDW from 
https://github.com/Microsoft/sql-server-samples/releases/download/wide-world-importers-v1.0/WideWorldImportersDW-Full.bak
*/

/* restore database WideWorldImportersDW
from
disk='C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\WideWorldImportersDW-Full.bak'
with move
'WWI_Primary' to 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\WideWorldImportersDW.mdf',
move 'WWI_UserData' to 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\WideWorldImportersDW.ndf',
move 'WWI_Log' to 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\WideWorldImportersDW_log.ldf',
move 'WWIDW_InMemory_Data_1' to 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\WWIDW_InMemory_Data_1'

restore filelistonly from disk=
'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\WideWorldImportersDW-Full.bak'
*/

/* free the cache */

dbcc freeproccache

/* enable the Query store on the database */

ALTER DATABASE WideWorldImportersDW
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);

/* set query store options 
Source code: https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-ver15
*/
ALTER DATABASE WideWorldImportersDW
SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),
    DATA_FLUSH_INTERVAL_SECONDS = 3000,
    MAX_STORAGE_SIZE_MB = 500,
    INTERVAL_LENGTH_MINUTES = 15,
    SIZE_BASED_CLEANUP_MODE = AUTO,
    QUERY_CAPTURE_MODE = AUTO,
    MAX_PLANS_PER_QUERY = 1000,
    WAIT_STATS_CAPTURE_MODE = ON
);

/* change the compatibility level of the database to 110 */

Use Master
Go
ALTER DATABASE WideWorldImportersDW SET COMPATIBILITY_LEVEL = 110;

/*  
Query from the Microsoft docs 
https://docs.microsoft.com/en-us/sql/relational-databases/performance/joins?view=sql-server-ver15#adaptive
Placed the Query to run as a stored procedure
*/

USE WideWorldImportersDW
GO
IF EXISTS (SELECT * FROM sys.objects WHERE type = 'P' AND name = 'StockInfo')
DROP PROCEDURE dbo.StockInfo
GO
CREATE PROCEDURE dbo.StockInfo
@Quantity [int]
AS
BEGIN
SELECT [fo].[Order Key], [si].[Lead Time Days], [fo].[Quantity]
FROM [Fact].[Order] AS [fo]
INNER JOIN [Dimension].[Stock Item] AS [si]
       ON [fo].[Stock Item Key] = [si].[Stock Item Key]
WHERE [fo].[Quantity] = @Quantity;
END;

/*clear the Query store */

ALTER DATABASE WideWorldImportersDW SET QUERY_STORE CLEAR;

/*clear the cache */

dbcc freeproccache

/*Execute the stored procedure */

USE WideWorldImportersDW
GO
EXEC dbo.StockInfo 360;

/* change the compatibility level of the database to 130 */

Use Master
Go
ALTER DATABASE WideWorldImportersDW SET COMPATIBILITY_LEVEL = 130;


/* rerun the stored procedure */

USE WideWorldImportersDW
GO
EXEC dbo.StockInfo 360;

Below is a screenshot of the execution plans. Plan id 2 is the plan generated while the database was still in SQL Server 2012 compatibility mode. After moving the database to SQL Server 2016 compatibility mode, plan id 3 is created. When we hover the mouse to see the total duration, plan 2 was performing better than plan 3; this query performed better with the older compatibility mode. As we now have the baseline information captured by Query Store, we can go ahead and force plan id 2, which is the last good plan for that query (a sketch follows below). Capturing the baseline is really helpful for those specific queries that don’t perform well with the latest cardinality estimation.
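
As a sketch (the query_id and plan_id values are from my run and will differ in yours), forcing the pre-upgrade plan is one call once you have confirmed the ids:

/* Confirm the query_id and plan ids collected for the StockInfo query */
SELECT q.query_id, p.plan_id, p.is_forced_plan
FROM sys.query_store_query AS q
INNER JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
INNER JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
WHERE qt.query_sql_text LIKE N'%Lead Time Days%';

/* Force plan id 2, the last good plan captured under compatibility level 110 */
EXEC sp_query_store_force_plan @query_id = 1, @plan_id = 2;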

Summary:

In this post, I have explained a usage scenario where Query Store was very helpful to me for capturing performance baselines during database upgrades.

There are several other use cases where Query Store is very helpful, such as identifying the top resource-consuming queries, quickly fixing performance issues caused by parameter-sensitive queries, automatic plan correction (available in SQL Server 2017 and above), which monitors regressions continuously and forces or unforces the last good plan based on plan comparisons, and Query Store hints, which are in preview in Azure SQL.

The main advantage of the Query Store feature is being able to quickly fix performance issues without changing the underlying code. This can be very helpful in scenarios where tuning the query code itself would take time.

In the next post, we will learn how automatic plan correction works.

Resources:

https://docs.microsoft.com/en-us/sql/relational-databases/performance/query-store-usage-scenarios?view=sql-server-ver15