
T-SQL Tuesday #153: My Life changing event!

Thanks to Kevin Kline for hosting this month's T-SQL Tuesday! Kevin wanted us to share about an event that changed our lives for the better.

Becoming a speaker has changed my life. I decided to become a speaker during a dark time in my life. Speaking has become therapy for me, and it has helped me get through the rough patches.

I am always thankful to the conference “New Stars of Data” for giving me the opportunity and the platform to speak for the first time. My journey as a speaker began there.

I have presented at many events since. DataWeekender was the second conference that gave me the opportunity to speak, and it is one of my favorite conferences as well.

As I spoke more, I regained part of my confidence. It is a continuous process. I took that step and decided to speak at the lowest point in my life, and I can sincerely say that New Stars of Data was a life-changing event for me.

To anyone who wants to become a speaker but is held back by a lack of confidence: please rethink. Confidence only comes with practice.

I can say that very sincerely because I spoke at several events afterward purely to build my confidence. The more I presented, the more comfortable I became. Though I have only presented virtually to date, I will carry that same confidence when I present in person.

Speaking of in-person events, I need to write a bit about the PASS Data Community Summit 2022. I am honored to have been selected as a speaker at this year's summit, and I chose to present in person.

I am very nervous about presenting in person for the first time ever, and at the biggest data platform summit in North America no less. I believe nervousness is a good thing when you are presenting, because it forces you to prepare more and keeps you laser focused.

Just as New Stars of Data gave me a life as a speaker, PASS Data Community Summit 2022 will definitely take me to the next level, as I will be presenting in person for the first time.

Attending and presenting at conferences like SQLBits was one of the best experiences. Attending other events like EightKB is a great opportunity to learn about SQL Server internals.

There are other conferences I have attended where I learned from other speakers.

During my journey as a speaker, I have met new people along the way, and a few of them became good friends for life!

I cherish all the moments…

I am very thankful to each and every organizer of the conferences and user groups, and to every person who gave me opportunities and supported me at each step of my journey as a speaker.

I would love to read other SQL Family members' posts about this topic and learn how attending or speaking at different conferences changed their lives.

Thank you for reading!

T-SQL Tuesday #152 – Rants

Thanks to Deborah for hosting July's T-SQL Tuesday!

I would like to share a rant about a situation I faced working on database server migrations a couple of years ago.

In the past few years, I have done several database and server migrations. As you know, there are many things that need to be taken care of from the testing phase until the actual migration.

It is not simply the database that we migrate; all the related objects need to be migrated as well: agent jobs, SSIS packages, linked servers, and, most importantly, the data source information in the connections, not only in the database-related objects but also in the related applications.

If the connection strings of all the applications connecting to the database being migrated use the server name and database name directly, updating them after the database moves to a different server gets tough and tedious. It is always advisable to use a DNS alias in the connection strings instead of the IP address or the server name.

This applies to the report server data sources as well.

I know it is hard to make the change, but if the change is good, it is worth the effort.

During several database server migrations, I have seen developers who felt more comfortable changing their application code to the new server name instead of taking the advice to use a DNS alias. I know the amount of work that goes into finding every object referencing the server name. Missing even one of those locations can absolutely break the code and create a real mess.

Many times, I have seen SSIS packages or linked server connections fail after a database migration because they were not updated. This problem can be solved simply by using a DNS alias instead of the server name: once you migrate the database, you only update the DNS entry to point to the new server.
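As a minimal sketch of the idea, here is a linked server defined against a hypothetical DNS alias (sql-sales.contoso.com) rather than a physical server name; after a migration, only the DNS record changes and this definition stays untouched:

/* Point the linked server at the DNS alias, not the physical server */
EXEC sp_addlinkedserver
    @server = N'SALES_LINK',
    @srvproduct = N'',
    @provider = N'SQLNCLI',  -- provider depends on what is installed
    @datasrc = N'sql-sales.contoso.com';  -- DNS alias, not the server name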

I know many companies already follow the standard of using DNS aliases, but there are still others out there that don't want to make the change and instead fix the problem when applications break.

I believe what matters is not only fixing a broken application somehow, but how effectively and simply you can avoid breaking the applications in the first place.

I am very interested in reading other SQL Family members' posts on this topic!

Thanks for reading!

Quick Track: Beginner’s Guide to Azure SQL- Learn to deploy

Okay, so you would like to deploy your databases in Azure! Great. Your resource limits are based on the deployment you choose. Managed Instance and Azure SQL Database resource limits depend on the service tier, compute tier, and the number of vCores you choose.

Before migrating databases to Azure, it is crucial to know the deployment model (IaaS/PaaS), the deployment method (Azure portal, Azure CLI, PowerShell, SSMS), the region you wish to deploy to, the service tier, compute tier, purchasing model, hardware, and the size of your databases (Source).

The resource limits you get depend directly on the deployment option and the service tier you choose: compute resource limits like memory; storage limits like the max size of your data and log files; how long backups are retained; IOPS; the tempdb size limit; and so on (Source).

There are a couple of things we need to decide when deploying an Azure SQL database: the region to deploy to, the resource group the database will live in, the logical server name you will use to connect from SSMS, the SQL admin login credentials, the purchasing model, and the service tier.

Creating an Azure SQL database through the portal is the basic method of creating a single database. Remember, you cannot restore a native backup into an Azure SQL database; you restore from a bacpac file instead. In a Managed Instance, you can natively restore a database from a URL.
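As a hedged sketch of that Managed Instance option (the storage account, container, and backup file are hypothetical, and a credential for the container must already exist on the instance):

/* Native restore into a Managed Instance from a backup in blob storage */
RESTORE DATABASE [SalesDB]
FROM URL = 'https://mystorage.blob.core.windows.net/backups/SalesDB.bak';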

Any resource that you create requires a subscription, resource group, database name, logical server name, and credentials.

While configuring, you will have the option to choose whether the database should be created as part of an elastic pool (databases in the pool share compute resources).

When you click on "Configure compute and storage", you can choose the service tier and compute tier.

If you choose General Purpose, you will have two compute tier options: "Provisioned" and "Serverless".

Choose the hardware configuration.

Scale up or down to adjust the number of vCores and the data max size.

Backup storage redundancy has three options. To find the differences among them, see here.

Networking tab:

Network connectivity:

No access: no one will have access.

Public endpoint: quick to connect, but not secure.

Private endpoint: the most secure way to connect, by creating a private IP address that you connect to from your virtual network.

Firewall rules:

Allow Azure services and resources to access this server: any resource in Azure can connect to your server. This is not a secure option; try to avoid choosing it.

Add current client IP address: enabling this adds your client IP address so it can connect to the server.

Under the Security tab, you will see Azure Defender for SQL. This is not a free service; it costs $15 per server per month.

In the additional settings, choose how you want the database to be created: "None" creates just the empty database, "Backup" restores the database from a backup file, and "Sample" creates a sample AdventureWorks database. Select the collation while deploying the database; for an Azure SQL database, you cannot change the collation later.

Choose a maintenance window from the options provided.

Under the Tags tab, give the resource a tag name and value. This helps identify resources later.

Click on "Review + create", then on "Create" at the bottom of the window. This will create the new Azure SQL database.

After the deployment is complete, you can copy the server name, open SSMS, and connect to the server using the admin login and password credentials.


Things to remember:

  1. You can choose the size and the number of database files for a Managed Instance. You cannot choose the storage type in MI or Azure SQL Database, and for Azure SQL Database you also cannot choose the number of database files.
  2. The transaction log file for an Azure SQL database is 30% of the data max size.
  3. Compatibility level can be set for both MI and Azure SQL Database.
  4. Query Store is turned on for all Azure SQL databases.
  5. The recovery model of an Azure SQL database is always FULL (points 3-5 can be verified with the quick check after this list).
  6. You cannot stop and start a Managed Instance or an Azure SQL database.
  7. Proxy and redirect are the two ways we can connect to an Azure SQL database. A proxy connection is like a friend who, on every visit to your apartment, needs to check in with the security guard (known as the gateway here) before reaching your apartment (the database). This is the default setting for connections from outside Azure.
  8. A redirect connection is like your friend getting an additional key as an authorized person for the apartment by checking in with the security guard (the gateway) once; from that point on, connections can reach the apartment (the database) directly, with no additional hops in between. Redirect is the default connection type within Azure. To use redirect connections, ports 11000-11999 need to be open.
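A minimal sanity-check sketch for points 3-5, run while connected to the newly deployed database (the database name is hypothetical):

/* Recovery model and compatibility level */
SELECT name, compatibility_level, recovery_model_desc
FROM sys.databases
WHERE name = N'MyAzureDb';

/* Query Store state (expected to be ON by default) */
SELECT actual_state_desc, desired_state_desc
FROM sys.database_query_store_options;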

We are going to learn about the fundamentals of security in Azure in the next blog post. Thanks for reading!

References:

  1. Learn Live – Azure SQL Fundamentals: Deploy and Configure Servers, Instances, and Databases (Ep.2)
  2. Deploy and configure servers, instances, and databases for Azure SQL

T-SQL Tuesday 151: T-SQL Coding Standards I wish I knew Earlier!

Thanks to Mala for hosting this month's T-SQL Tuesday! Mala wanted us to write about some of our T-SQL coding standards.

Some of the coding standards I mention here are obvious, but I know how often I skip even the simple ones. I believe every simple thing we do every day matters!

Here are a few quick but effective coding standards I wish I knew earlier:

1. Get your T-SQL code formatting faster than before

We all love to write code that we and others can read more easily later, but I know formatting is time consuming, especially when you are trying to focus on writing effective code that runs fast. My mindset while typing code used to be: throw the formatting out of the window, let me just type this and get the results I need. I know many of you have the same mindset. If you are typing just a few lines of code, that's totally fine, but a huge stored procedure with complex logic needs formatting, not only to make it easier for others to read later but to help yourself along the way. Formatting doesn't have to be complex with free online tools like poorsql. This tool was introduced to me by one of my favorite SQL community champions, Randolph West. Thank you, my friend! You saved me many hours. There are other similar free tools like sqlformat.

2. Error and Transaction Handling: Saved many Jobs out there

Don't wait until you break something to figure out there was a way to avoid it in the first place. That was me at the beginning of my career; I learned it the hard way. These were my facial expressions at my computer as I wrote something that looked like code.

Source: Pexels

From capturing and handling errors to handling transactions the right way, knowing how to handle errors and transactions is crucial when troubleshooting and modifying data. Learn about error handling here and transaction handling here.
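As a minimal sketch of the pattern (the table and values are hypothetical):

BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE dbo.Orders
    SET Status = 'Shipped'
    WHERE OrderID = 42;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    /* Undo the partial work, then rethrow the original error */
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;
END CATCH;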

3. Are you inserting a large number of rows in a scheduled SQL Agent job? This is for you

This is simple but effective. Add an "update statistics" step as the next step in the agent job, covering the tables you just imported data into. With updated stats, the optimizer can choose the right operators in your execution plans.
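A hedged sketch of that extra job step (the table name is hypothetical; FULLSCAN is optional and depends on table size):

/* Refresh statistics on the table the previous job step loaded */
UPDATE STATISTICS dbo.DailySales WITH FULLSCAN;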

4. Start using SQLCMD mode, you will thank me later

Have you ever faced a situation where you accidentally deleted rows in production instead of in the development environment? If yes, then keep reading: start using SQLCMD mode. After you enable this mode, begin your query with

:CONNECT ServerName
USE DatabaseName
GO
-- Your query goes here

Make it a coding standard; this will help you remember where you are running the code, no matter what your query editor is connected to. If you need to run the same code on a different server, you just change the server name on the first line. At the beginning, I felt that adding this line was kind of a pain, but after I got used to it, it became one of the best standards. As I execute any code these days, my eyes automatically roll to the first line, looking for the server name in the :CONNECT statement. This becomes even more helpful when you want to document the code you run on different environments (DEV/TEST/QA/PROD). Learn about SQLCMD here.

5. Consolidate the number of Indexes: You could have duplicates

If you regularly implement indexes from the recommendations shown on execution plans, make sure to check whether you already have an index with all the columns in the recommendation except one. Try to fit that additional column into the existing index if necessary, but do not just run the recommended index script blindly; you may already have a near-identical index. Regularly check for duplicate indexes, which cost you a lot of resources to maintain.
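A hedged sketch (SQL Server 2017+ for STRING_AGG; the table name is hypothetical) that lists each index's key columns side by side so near-duplicates stand out:

/* Key columns per index, in key order, for one table */
SELECT  OBJECT_NAME(i.object_id) AS table_name,
        i.name AS index_name,
        STRING_AGG(c.name, ', ') WITHIN GROUP (ORDER BY ic.key_ordinal) AS key_columns
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
    ON ic.object_id = i.object_id
    AND ic.index_id = i.index_id
JOIN sys.columns AS c
    ON c.object_id = ic.object_id
    AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.Orders')  -- hypothetical table
    AND ic.is_included_column = 0
GROUP BY i.object_id, i.name
ORDER BY key_columns;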

6. Checkout the datatypes of variables used in your queries

The datatypes of the variables you use in your WHERE clauses should match the datatypes of the columns you are referencing in your tables. If they are not the same, the optimizer has to go through an implicit conversion as an additional step. This can hurt performance and prevent the right indexes from being used, making the query take more resources to execute, and it can be harder to find out later why the query is running slow.
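A small illustration of the problem, assuming a hypothetical dbo.Orders table where CustomerCode is varchar(10):

/* Mismatched: the nvarchar variable forces an implicit CONVERT on the
   varchar column, which can turn an index seek into a scan */
DECLARE @code nvarchar(10) = N'AB123';
SELECT OrderID
FROM dbo.Orders
WHERE CustomerCode = @code;

/* Matched: same datatype as the column, no conversion needed */
DECLARE @code2 varchar(10) = 'AB123';
SELECT OrderID
FROM dbo.Orders
WHERE CustomerCode = @code2;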

These are some of the T-SQL coding standards I use on a regular basis. I am looking forward to reading and learning from other SQL Family members' T-SQL Tuesday posts on their standards!

Thanks for reading!

Quick Track: Beginner’s Guide to Azure SQL

Source: Image

When I started learning Azure a couple of months ago, I was confused about where to start and overwhelmed by the amount of information I not only had to learn but also remember.

You can treat this series as my consolidated learnings put together in a simple way: a quick guide to understanding the concepts if you are a beginner, or to revising concepts you already learned to make your foundations stronger.

Let's start with a why question: why should you even start learning Azure, or any cloud platform?

Azure is one of the cloud platforms on the market. It provides services you can use to host your applications and build your databases, virtual machines, storage, analytics, and many other services without having to buy or manage any hardware. Because Azure supports both infrastructure and platform as a service, you can build, configure, and run your databases and related services quickly, with easy deployments through the Azure portal, the command line interface, or PowerShell.

Deployment options

Source: Microsoft

IaaS: Infrastructure as a Service

  • SQL Server on Azure VM
    • SQL Server hosted on Azure Infrastructure
    • Automated backups and patches
    • Extended security updates for older versions of SQL Server (e.g., SQL Server 2008)
    • You have access to the operating system and can configure instance-level features of SQL Server
    • Easy install through the Azure portal using images (various combinations of Windows and SQL Server versions packaged together)

PaaS: Platform as a Service

  • SQL Managed Instance
    • The OS is taken care of by Microsoft
    • You will not have access to the OS
    • Use and configure all features of a SQL Server instance
    • Useful for lift-and-shift scenarios where applications don't have to change their code to move to this service
  • Azure SQL Database
    • You just get the database
    • Database-level features can be configured
    • OS and instance capabilities are taken care of by Microsoft
    • Very useful for modern cloud applications
    • Very quick to deploy

For the full list of feature comparison between Azure SQL Database and SQL managed instance, read the Microsoft doc here.

Unlike SQL Server on an Azure VM, Azure SQL Database and SQL Managed Instance are versionless: the database engine and OS are upgraded and patched behind the scenes by Microsoft.

Purchasing models:

There are two purchasing models, DTU and vCore.

In the DTU (Database Transaction Unit) model, you choose the number of DTUs, and the compute and storage resources come bundled together. The vCore model has the advantage of supporting Azure Hybrid Benefit and reserved capacity if you can pay for resources up to three years ahead of time. With the vCore model, it is also easier to translate on-prem workloads to Azure, since you can independently choose compute, storage, and other resources like memory.

  • DTU: compute and storage bundled together; easy and simple; not flexible; no Azure Hybrid Benefit; available for Azure SQL Database only.
  • vCore: compute and storage chosen individually; customize and control; flexible; Azure Hybrid Benefit available; available for both MI and Azure SQL Database.

The cost of the vCore model depends on the service tier, compute, hardware, storage, and backup storage you choose.

Note: you can convert a DTU-model database to the vCore model. Learn more here.

Service Tiers

There are three service tiers for Azure SQL Database and Managed Instance: General Purpose, Business Critical, and Hyperscale. Under the General Purpose tier there are two compute tiers, which we will cover in a coming section. For the main differences among the service tiers, see the comparison below.

  • General Purpose: general workloads; up to 80 vCores; remote storage, 5 GB – 4 TB; backups kept 1 to 35 days (7 by default); 1 replica, no read scale-out, zone-redundant HA (preview), no local cache; In-Memory OLTP not supported; 500 IOPS per vCore with a 7,000 IOPS max.
  • Business Critical: low-latency workloads; up to 80 vCores; local SSD storage, 5 GB – 4 TB; backups kept 1 to 35 days (7 by default); 3 replicas, 1 read scale-out, zone-redundant HA, full local storage; In-Memory OLTP supported; 5,000 IOPS with a 200,000 IOPS max.
  • Hyperscale: OLTP and large databases; up to 80 vCores; local SSD cache, up to 100 TB; backups kept 1 to 35 days (7 by default); multiple replicas, up to 4 read scale-out, zone-redundant HA (preview), partial local cache; In-Memory OLTP not supported; multi-tiered architecture, effective IOPS depend on the workload.
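Once a database is deployed, a quick way to confirm which tier it landed on from T-SQL, run while connected to that database (DATABASEPROPERTYEX is a built-in function):

SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS edition,
    DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;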

Compute Tiers

Under the General Purpose service tier with the vCore model, there are two compute tiers: Provisioned and Serverless.

  • Provisioned: predictable workloads; manual scaling; no pause; pay for the fixed resources you choose; per-hour billing.
  • Serverless: unpredictable workloads; automatic scaling; the database can pause; pay only while the database is in use; per-second billing.

For example, you will want the provisioned compute tier if your database has users connected most of the time, with consistently heavy use of compute resources. On the other hand, if your database is used intermittently, with idle windows when no users connect, the serverless compute tier is a better fit.

With the provisioned tier, you choose the maximum amount of resources ahead of time and scale manually, and you pay even if you are not using the resources in full. With serverless, scaling is automatic: you choose the minimum and maximum compute resources, and automatic scaling happens only between those limits.

The advantage of serverless is that the database can pause during the periods when you expect no users to be connected. Once the database goes into pause mode, you pay only for storage and not for compute. If you use the serverless option and the database does not auto-pause, you are charged for the minimum resources while no users are connected, and billed per second for the resources you use during activity. (Source)

For the provisioned compute tier, billing is on a per-hour basis, whereas for the serverless compute tier, billing is on a per-second basis.

Summary:

In this quick track of Azure SQL basics, we covered an overview of the deployment options, purchasing models, service tiers, and compute tiers, and how they differ from each other. In the next blog posts, we will learn more about what Azure SQL has to offer.

Thanks for reading!

References:

  1. https://docs.microsoft.com/en-us/azure/azure-sql/database/purchasing-models?view=azuresql
  2. https://docs.microsoft.com/en-us/learn/modules/azure-sql-intro/
  3. https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tier-hyperscale
  4. https://docs.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview
  5. https://www.youtube.com/watch?v=wcRb5RHriUc

T-SQL Tuesday #150: My First “Why am I even here?” Job

Thank you, Kenneth Fisher, for hosting the May edition of T-SQL Tuesday. Kenneth wanted us to write about our first technical job.

As I mentioned in my previous T-SQL Tuesday post here, I was a pharmacy graduate who loved technology. When I decided to change my career to IT, I applied for a master's degree in computer technology and finally made it to graduation.

After I completed my master's degree, I felt like I could easily do any job related to computers. I was wrong! I didn't know at the time that degrees don't matter.

I had my initial struggles finding a DBA job, as everyone was looking for experienced candidates who fit their requirements. I went through many interviews and failed. Then one day, I got an offer for a DBA position at a company (which I am not naming here) after interviewing for a junior DBA role. I was on cloud nine.

Everything was cool, right? No, not quite as I thought it would be.

On my first day at the company, they gave me an ID card with my happy face on it and, under it, my job title: ".NET Developer".

Wait, what?? A developer?

Well, I was in the same shock as you and didn't really know what to say. I tried contacting management to find out why I was placed in a completely different position when they had interviewed me for a DBA role. I did not receive any useful information from them. Now you know why I am not naming the company 🙂

I was angry in the beginning but thought I could learn and survive. You may ask, how can you survive when you know nothing about that technology? Remember, it was my first tech job ever. I was scared and worried about whether I would get another job soon if I resigned. I tried understanding and learning the code, but it was very hard to catch up. I knew that job was not for me; I always wanted a job as a SQL DBA.

I am not an accidental DBA but a planned one who ended up in an accidental developer position 😛

I kept looking for a DBA job while still working in that developer position, and fortunately, it did not take long before I was placed as a database administrator at another company.

Since then, I have been a SQL DBA who loves working with SQL Server and databases.

As a woman in tech, I have faced challenges in my career, but I never thought of moving away from IT. It is what I always loved, and it was the main reason I changed my career from pharmacist to SQL DBA.

From the core of my heart, I have always believed that everything happens for a reason. This mindset helped me get through the initial challenges I faced moving my career into IT.

All my past experiences have taught me and helped me get to where I am today as a woman in tech.

Looking back, I know I made the right decision 🙂

Image Source: https://in.pinterest.com/pin/514817801127582057/

Thanks so much for reading!

T-SQL Tuesday #149: Best SQL advice I would give to my younger self!

Thank you, camilahenrique, for hosting the April edition of T-SQL Tuesday. You can find the invitation here.

I give a big thank you to my younger self for changing my career from pharmacist to SQL DBA. I have loved computers since my childhood. You may ask, why did you choose to do pharmacy then? It's a long story! I was just an innocent kid fulfilling my father's dreams, and by the time I realized pharmacy was not for me, I had graduated as a pharmacist.

I came to the United States to finally pursue my dreams and change my career. I joined as a master's student in computer technology at one of the great universities in the United States. That was a major life-changing moment for me, since I did not have much computer knowledge before. One of my core subjects was SQL. That's when I was introduced to SQL and SQL Server, and I can tell you with my whole heart that in that moment I knew the decision I had made was right.

Though it was very tough in the beginning to catch up with the speed of the other students, I learned the T-SQL basics, and from there my computer journey started. I was stressed out in many situations and started slow. I thought I would never make it to graduation. No matter how much I learned, I made mistakes and was never perfect. I spent late hours learning the basics so I would at least pass the exams. My master's degree was a roller coaster ride, but I finally made it to graduation 🙂

If I could give one piece of advice to my younger self, I would say: learning is a continuous process, and we are all perfectly imperfect. Failure happens only when you stop being willing to learn. There will never be a perfect moment to start learning anything. No matter what you learned in the past or what degree you pursued, that doesn't have to define your future goals. Anyone can learn anything if we have the heart, and the will to work hard at it.

Learning the basics of T-SQL is really important before moving on to complex T-SQL coding skills. I would like to share my favorite YouTube channel that helped me when I started my T-SQL journey: if you are starting to learn T-SQL, follow this series from Kudvenkat.

I am looking forward to reading all the posts for this T-SQL Tuesday and learning from the SQL Family!

Thanks for reading!

T-SQL Tuesday #147 – Upgrade Strategies

Thanks to Steve Jones for hosting the February edition of T-SQL Tuesday! This month's T-SQL Tuesday is all about database upgrades.

When I started my first job as a DBA seven years ago, my project was to migrate several SQL Servers, and all of them were running SQL Server 2000. In my first SQL class at school, I had started my learning with SQL Server 2012, so it was a shock to work on SQL 2000 databases (I was not familiar with SQL Server 2000 yet), especially in my first job as a DBA.

My first project was to migrate approximately two hundred and fifty SQL Server 2000 servers to SQL Server 2012/2016. It took us a couple of years to successfully migrate all of them.

I would like to answer the below questions from Steve Jones for this T-SQL Tuesday:

Why do we wait to upgrade?

Fear! Fear of breaking things or impacting performance, I believe, especially when you are making a big jump between SQL Server versions. Old legacy applications may or may not support newer SQL Server environments, and rebuilding an application can take years, involve more developers, and cost more. I believe many companies are still using older versions of SQL Server even after the end of support; some companies are fine staying on legacy systems as long as their applications work.

Strategies for testing an upgrade

When I migrated SQL Server 2000 databases, there was a lot of work that needed to be done before even planning the testing. There was deprecated T-SQL syntax, such as the old-style outer joins that were no longer supported in SQL Server 2008. Legacy DTS packages needed to be converted or rebuilt to be compatible with the newer version of SQL Server. Using the Upgrade Advisor and Migration Assistant tools, it is easy to capture the incompatible objects; these objects need to be modified ahead of time to make them compatible. If developers modify any of those objects in the meantime, capturing the changes to the modified objects is important. Testing can be done using backup and restore. You can skip SQL Server 2005 when upgrading a database from SQL 2000 to SQL 2008. Running the compatibility-fix script on a SQL 2000 test database is recommended, so the Upgrade Advisor can be run over it again to confirm there are no more compatibility issues. Once the database is on SQL 2008, there are far fewer deprecated T-SQL changes, so further upgrades are easier.
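As an illustration of that deprecated syntax (the tables are hypothetical), the old non-ANSI outer join is shown commented out because it no longer parses on newer versions:

/* Old-style outer join, removed after compatibility level 80:
SELECT o.OrderID, c.Name
FROM dbo.Orders o, dbo.Customers c
WHERE o.CustomerID *= c.CustomerID;
*/

/* ANSI equivalent that upgrades cleanly */
SELECT o.OrderID, c.Name
FROM dbo.Orders AS o
LEFT JOIN dbo.Customers AS c
    ON o.CustomerID = c.CustomerID;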

Testing the SSIS packages migrated from legacy DTS packages is important. Once the database is upgraded in the test environment, rigorous testing for code breakage and application testing are important.

Smoke tests or other ways to verify the upgrade worked

I believe this depends on the database and the type of application. When you have databases with the same schema on several servers in similar environments, testing one database upgrade can give you confidence in the other upgrades, though there are always caveats in these cases as well.

Moving to the cloud to avoid upgrades

Many companies are moving to cloud infrastructure because there is less maintenance compared with on-prem environments and to lower their costs. I believe moving to the cloud is one of the options, and the reason may not only be avoiding upgrades but also improving performance and lowering costs.

Using compatibility levels to upgrade an instance by not a database

Keeping a database in an older compatibility mode is an option even after upgrading to the latest version of SQL Server, like SQL Server 2019, but the features enabled only for the latest compatibility level cannot be used. Some features, like Query Store, can still be used in older compatibility modes (back to the SQL 2008 level) after upgrading to newer versions of SQL Server. Though this option exists, it is not recommended to leave a database in an older compatibility mode for a long time.
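A small sketch of this approach (the database name and levels are hypothetical):

/* Keep the upgraded database at its old behavior until testing is done */
ALTER DATABASE [SalesDB] SET COMPATIBILITY_LEVEL = 100;  -- SQL Server 2008 behavior

/* Later, after validating performance (e.g., with Query Store baselines) */
ALTER DATABASE [SalesDB] SET COMPATIBILITY_LEVEL = 150;  -- SQL Server 2019 behavior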

Checklists of things to use in planning

  1. Using tools like the Migration Assistant helps identify the list of incompatible objects that need to be modified to make the code compatible with the latest versions of SQL Server.
  2. Capacity planning
  3. Gathering the list of deprecated and unsupported features for the version of SQL Server you are moving to is important.
  4. Making sure the SSIS packages are tested properly.
  5. Gathering the logins and associated permissions (I used sp_hexadecimal and sp_help_revlogin; see the usage sketch after this list).
  6. Making sure to take a backup of the database before the migration and to keep a copy of the database on the source server for a quick rollback if needed.
  7. Additional testing is needed if features like replication are being used; make sure the publisher and subscriber are on versions of SQL Server that can replicate with each other.
  8. Making sure to script out the linked servers.
  9. Capturing a performance baseline is an option to choose if you would like to use Query Store. After upgrading the database to the new version of SQL Server, place the database in the same compatibility level it had before the upgrade. This helps capture any plan regressions caused by the upgrade, so you can easily compare the regressed plans and fix performance issues quickly. To learn more about capturing baselines using Query Store, please see my blog post here.
  10. The method you use to migrate a database is important, and it depends on how much downtime is acceptable for your database and application. Some of the methods are backup and restore, detach and attach, or using full, differential, and transaction log backups for less downtime.
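For point 5, a usage sketch, assuming the sp_hexadecimal and sp_help_revlogin procedures from the Microsoft KB article have already been created on the source server:

/* Prints a script of CREATE LOGIN statements that preserve the original
   SIDs and password hashes, ready to run on the destination server */
EXEC sp_help_revlogin;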

These are a couple of things to consider during planning. This list does not contain all the planning information but gives an overview.

The time it takes to upgrade your environment

I believe the time to upgrade depends on how large the database is, how many dependent features need to be moved (for example, replication), whether the application needs any changes, and what method is used to restore the database on the destination (copying files to the destination server can take time if the database is huge).

What do you evaluate in making a decision to upgrade or not?

I believe checking which new features your database and application will use after the upgrade is important. Testing is a crucial part of evaluating the SQL engine, database features, and performance benefits after the upgrade. The newer cardinality estimator provides better estimates, which can help the SQL engine create an optimal plan, and many other features have been added to SQL Server over the years to improve the performance of queries and workloads.

These are some of my thoughts about database upgrades. I would like to read about and learn from the experiences of other SQL Family members.

Thanks for reading!

Query Store Fundamentals: Did you know?

Source: Image

Did you enable Query Store, or do you have a plan to enable this amazing feature on your databases? There are many points to remember for using Query Store effectively. I have gathered some of them as I learn about Query Store; I hope the list below is informative to you as well.

  1. The Query Store feature cannot be disabled in Azure SQL Database single databases and elastic pools. In an on-prem environment, you need to enable it on each database you would like to collect query information for (a sketch of enabling it follows this list).
  2. Use TF 7745 to avoid writing Query Store data to disk before a SQL Server shutdown or during a failover. The default behavior of SQL Server is to write the collected Query Store data to disk before shutting down, and flushing all of that data can slow down a restart or a failover. With TF 7745, the data collected since the last flush interval can be lost; if you can afford to lose that much data, the SQL Server restart or failover can take less time. SHUTDOWN WITH NOWAIT can also be used in place of TF 7745 during a SQL Server shutdown.
  3. Use TF 7752 to avoid a synchronous load of the Query Store while the database is starting up and coming online. The regular synchronous behavior can take time to recover the Query Store in full, and until the Query Store is completely recovered, queries are halted and cannot execute; this is not a good situation if your database is huge. With TF 7752, the Query Store loads asynchronously after the database comes online. This trace flag is enabled by default in SQL Server 2019.
  4. For In-Memory OLTP, runtime stats are collected only when you enable them using sys.sp_xtp_control_query_exec_stats (Transact-SQL). This needs to be enabled after each server restart, as the setting does not persist across SQL Server restarts.
  5. Did you know what the circle, square, and triangle shapes mean in Query Store? The size of the circle carries a message too. A circle represents a successful execution (the more executions, the bigger the circle), a square represents a cancelled execution, and a triangle means an error occurred during execution. For the example below, I executed the stored procedure 20 times, so the circle you see is big, and I stopped the same query mid-execution to show a square.
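A minimal sketch of enabling Query Store on an on-prem database, as mentioned in point 1 (the database name and setting values are hypothetical starting points):

ALTER DATABASE [SalesDB]
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,
    MAX_STORAGE_SIZE_MB = 1024,
    QUERY_CAPTURE_MODE = AUTO,
    SIZE_BASED_CLEANUP_MODE = AUTO);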

6. If your workloads have many ad-hoc queries, use the "optimize for ad hoc workloads" option to save cache memory, as recommended by Microsoft. There is also a forced parameterization option you can enable at the database level to parameterize ad-hoc queries so they reuse plans; this can be an alternative to "optimize for ad hoc workloads". Randolph West and Brent Ozar have explained why one option can be better than the other and when to use each. Thanks to Randolph and Brent for your detailed explanations! Check these posts for sure to know more.

7. "Max plans per query" is set to 200 execution plans per query by default. You can change this setting based on your workload and requirements.

8. If MAX_STORAGE_SIZE_MB is reached, the Query Store goes into read-only mode. Remember, the storage size is only checked at the flush intervals, so the Query Store switches to read-only mode when the maximum limit is found to be reached at one of those checks. In read-only mode, the Query Store can still read forced-plan information and apply forced plans, but it cannot write any new information.
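A hedged check of whether a Query Store is close to its cap or already read-only:

SELECT actual_state_desc,
    readonly_reason,
    current_storage_size_mb,
    max_storage_size_mb
FROM sys.database_query_store_options;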

9. To avoid reaching the maximum storage size, make sure the "size-based cleanup mode" is set to AUTO, which is the default. In AUTO mode, the Query Store avoids going read-only by deleting the oldest data once it reaches 90% of its maximum storage size limit, clearing down to 80% of the space.

10. The QUERY_CAPTURE_MODE setting AUTO skips ad-hoc queries with insignificant resource consumption. This is the recommended mode, and it is the default in Azure SQL Database and SQL Server 2019.

11. Deleting the plan cache doesn't delete Query Store data; Query Store data persists across SQL Server restarts. The cleanup policy can be set using the STALE_QUERY_THRESHOLD_DAYS setting.

12. Last but not least, there is a query that can delete ad-hoc queries from the Query Store, which keeps the Query Store space from filling up quickly. Follow this article to find the query that deletes ad-hoc queries.

Thanks for reading!

Resources:

  1. https://docs.microsoft.com/sql/relational-databases/performance/best-practice-with-the-query-store
  2. https://bornsql.ca/blog/dont-optimize-for-ad-hoc-workloads-as-a-best-practice/
  3. https://www.brentozar.com/archive/2018/03/why-multiple-plans-for-one-query-are-bad/
  4. https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15
  5. https://docs.microsoft.com/en-us/sql/relational-databases/performance/using-the-query-store-with-in-memory-oltp?view=sql-server-ver15
  6. https://docs.microsoft.com/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store

Query Store Fundamentals: Query Store Hints

Source: Image

Query Store hints are another amazing feature, available in preview in Azure SQL Database, managed instances, elastic pools, and Hyperscale databases as well. Query Store hints are just like the hints you use in your queries, but the difference is that you do not have to change the query code to apply them; you use a stored procedure designed specifically for the Query Store: sys.sp_query_store_set_hints. Cool, right?

Query Store hints are very useful when you need to force the optimizer to use certain hints without changing the underlying query. This feature is meant for specific queries in particular circumstances. Some examples that can take advantage of this feature are parameter-sensitive queries where a recompile is needed to create an optimal plan on each execution, setting the MAXDOP value, and setting the memory grant size for bulk operations. (Source: Microsoft Docs)

You also have the choice of hints like forcing legacy cardinality estimation for queries not performing well on newer versions of SQL Server, or using a different compatibility level for a particular query than the database's compatibility level.

For the complete list of hints that are supported and not supported, please see it here.

Now, let's see an example of a stored procedure where we can apply Query Store hints. This stored procedure has a sensitive parameter, and we need a recompile hint to make sure it generates an optimal plan for each execution. I am using the AdventureWorks sample on Azure SQL Database for this demo; Query Store is already enabled on this database by default. Let's create the stored procedure.

IF EXISTS (
		SELECT *
		FROM sys.objects
		WHERE type = 'P'
			AND name = 'Salesinformation'
		)
	DROP PROCEDURE dbo.Salesinformation
GO

CREATE PROCEDURE dbo.Salesinformation @productID [int]
AS
BEGIN
	SELECT [SalesOrderID]
		,[ProductID]
		,[OrderQty]
	FROM [Sales].[SalesOrderDetailEnlarged]
	WHERE [ProductID] = @productID;
END;

Let me clear out the Query Store data and plan cache before running the test queries.

ALTER DATABASE [AdventureWorks] SET QUERY_STORE CLEAR;

/* Clean the procedure cache */
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;

Execute the stored procedure with two different parameters (the first parameter returns around 30K rows and the second parameter returns 380 rows).

/* Executing the sp resulting in many rows with this parameter */
EXEC dbo.Salesinformation 870

/* This parameter results in far fewer rows but reuses the same execution plan */
EXEC dbo.Salesinformation 942

If you observe the execution plans for both executions, the parameter 942 returns only 380 rows, but the plan does a clustered index scan, which is not optimal in this scenario.

Usually, to fix this, we would use RECOMPILE as a runtime hint or in the stored procedure header. With Query Store hints, all we need is the query id and the hint we want to apply. This is done in two steps. First, grab the query id by querying the system catalog views and filtering on the query text using the LIKE operator.

/* Find the query ID associated with the query
Source: https://docs.microsoft.com/en-us/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current
*/
SELECT query_sql_text
	,q.query_id
FROM sys.query_store_query_text qt
INNER JOIN sys.query_store_query q ON qt.query_text_id = q.query_text_id
WHERE query_sql_text LIKE N'%@productID int%'
	AND query_sql_text NOT LIKE N'%query_store%';
GO

The query_id is 1 for this stored procedure. Now we can use the stored procedure sp_query_store_set_hints, passing the query_id and the hint (RECOMPILE in this case) to fix the problem.

/* Set the Query hint by using the sp sp_query_store_set_hints
Source: https://docs.microsoft.com/en-us/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current
*/
EXEC sp_query_store_set_hints @query_id = 1
	,@value = N'OPTION(RECOMPILE)';
GO

Now, execute the stored procedure again with the same parameters and see if the recompile occurred.

/* Execute the sp again with two parameters */
EXEC dbo.Salesinformation 870

EXEC dbo.Salesinformation 942

The stored procedure is recompiled as the query hint is applied, and you can see the difference in the execution plans. For the parameter value returning 380 records, it recompiled to create an execution plan using an index seek and a key lookup, which is optimal for this execution.

To check which query hints are currently enabled, run the below query against the catalog views.

/* Check for the Query hints that are enabled 
Source: https://docs.microsoft.com/en-us/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current*/
SELECT query_hint_id
	,query_id
	,query_hint_text
	,last_query_hint_failure_reason
	,last_query_hint_failure_reason_desc
	,query_hint_failure_count
	,source
	,source_desc
FROM sys.query_store_query_hints;
GO

The query_hint_id is shown as 1. As you update the hints on this query_id, this number increases. The query_hint_text always shows the currently active hint; the history of previous hints is not displayed.

Suppose you would like to use the MAXDOP hint set to 1 to limit the parallelism used in the execution plan. Update the hint to MAXDOP by running the stored procedure below, then run the stored procedure again with the same parameters to see the hint being applied.


/* MAXDOP HINT */
EXEC sp_query_store_set_hints @query_id = 1
	,@value = N'OPTION(MAXDOP 1)';
GO

/* Clean the procedure cache */
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE

EXEC dbo.Salesinformation 870

EXEC dbo.Salesinformation 942

If you observe the execution plans, the hint has been updated to MAXDOP: the "NonParallelPlanReason" is shown as MAXDOPSetToOne, and the recompile hint that was previously applied is no longer active. If you check the active Query Store hints using the system catalog views, only the updated MAXDOP hint is shown, and the query_hint_id is now 2.

We can apply multiple Query Store hints to the same query, as below, and execute the stored procedure again with the two parameters.


/* Multiple query hints */
EXEC sp_query_store_set_hints @query_id = 1
	,@value = N'OPTION(RECOMPILE,MAXDOP 1)';
GO

/* Clean the procedure cache */
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE

EXEC dbo.Salesinformation 870

EXEC dbo.Salesinformation 942

If you observe the plan and its details, both the RECOMPILE and MAXDOP hints have been applied together. If you check the status of the query hints, this time the query_hint_id will be 3.

To remove the hints that are active on a query, use sp_query_store_clear_hints, providing the query_id whose Query Store hints you want to remove.

/* Remove the Query hints from Query Store */
EXEC sp_query_store_clear_hints @query_id = 1;
GO

To make sure the Query Store hints are removed from the query_id, recheck by running the below query:

/* Recheck the Query Store hints to make sure they are disabled */
SELECT query_hint_id
	,query_id
	,query_hint_text
	,last_query_hint_failure_reason
	,last_query_hint_failure_reason_desc
	,query_hint_failure_count
	,source
	,source_desc
FROM sys.query_store_query_hints;
GO

In this blog post, we have only tested some of the supported Query Store hints. There are many others; do not forget to read the entire post from Microsoft here for the details of which hints are and are not supported.

Summary:

In this post, we saw what Query Store hints are and tried a couple of examples to see how this preview feature works. Query Store hints can be very useful if you need any of the supported hints but are unable to edit the query text for some reason (third-party applications, for example), or if you have upgraded the database to a newer SQL Server but need the older cardinality estimator for one particular poorly performing query, or if you want an older compatibility level for a particular query without changing the database-level compatibility.

Thanks for reading!

Resources:

  1. https://docs.microsoft.com/en-us/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current
  2. https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-query-store-set-hints-transact-sql?view=azuresqldb-current