T-SQL Tuesday 177: Managing Database Code

Thank you, Mala, for hosting this August's T-SQL Tuesday. Mala asked us to write about database version control and the tools we use to manage our database code. Please find the original invite here.

Earlier in my career as a DBA, database code was managed with comments in the code recording who changed what, when, and why. It was tough to manage the code this way because many developers forgot to add these comments properly. This is not a reliable way to maintain code.

Then we decided to use source control. Version control tracks changes to your code over time as versions. If you ever need to roll back, you have a point-in-time snapshot of each version, making it easy to retrieve a previous version and roll back. Version control also helps multiple developers collaborate on the same project. There are several tools on the market; you can find a list here.

We used Redgate SQL Source Control with an Azure DevOps Git repository. Redgate SQL Source Control is a plugin for SSMS that connects your databases to source control systems such as Git, TFS, etc. You can install the Redgate SQL Source Control tool from here.

We created a project in Azure DevOps and selected Git as the version control system. We then initialized the main branch and cloned it to a local folder using Visual Studio. Next, we connected the SQL database to source control by pointing it at the local folder, which links the database to source control. The initial commit pushes the database objects into source control.

We can also use Azure DevOps to create CI/CD pipelines that push changes to each environment before committing the code to production. To find out what a CI/CD pipeline is, please read this Microsoft article.

I have only briefly described the database code solution we used in this blog post, but this is a wide topic to learn.

To learn about implementing continuous integration with Azure DevOps, check the Microsoft Learn series here.

I am looking forward to reading the experiences of other SQL Community members regarding their journey with database source control.

Thank you for reading!

May 2024 Power BI Report Server update – Fix “We couldn’t connect to the Analysis Services Server”

We recently updated our Power BI Report Server with the May 2024 update and found that we were unable to open any reports using DirectQuery. Paginated reports were all working fine, and reports using import mode were not affected either. We did not have this issue in Power BI Desktop, only in the web portal.

Below is the error message shown when opening reports that use DirectQuery.

I checked with my community friends first while searching online to see whether others had the same issue. James Reeves, one of my community friends, pointed me to the resolution steps by sharing this Medium article. Thank you, my friend.

As per my company's standard procedure of contacting Microsoft support for any issues, I opened a ticket with them to get resolution steps, and they helped me resolve the issue. I promised them that I would write a blog post about how we resolved it, so anyone hitting the same error has an idea of the resolution steps.

When you check the change log for Power BI Report Server, the release notes for Version 1.20.8910.25479 (build 15.0.1115.165), released May 28, 2024, mention a security change made in this update: an environment variable and a system variable need to be added on the Power BI Report Server machine.

Steps

Connect to your Power BI Report Server machine. In the search bar at the bottom, search for Settings > System > About > Advanced system settings > Environment Variables.

Click New under Environment Variables and create a new variable named PBI_SQL_TRUSTED_SERVERS. For the variable value (usually the value shown in the data source of your DirectQuery report), give the FQDN (example: mysvr.microsoft.com), or server names separated by commas (example: contososql, contososql2), or the machine name with a * at the end if you want to include all the SQL Server instances on the machine (example: contososql*, which includes contososqlinstance1, contososqlinstance2, and so on). Click OK.

Repeat the same step by creating the same variable with the same value under System Variables too. Click OK.

Restart the Power BI Report Server, then try to open the report again; you should now be able to open it.

Also, do not forget to read the limitations of connecting to a SQL Server database from Power Query Online here.

Thank you for reading!

T-SQL Tuesday #174: My Favorite Job Interview Question

I loved this question asked by Kevin Feasel for this month's T-SQL Tuesday, so here I am writing this post before the day ends. Please find the invitation here.

First, I would like to talk about my amazing interview process as a whole. It was multiple layers of interviews: starting with HR, then my manager and IT Director, interviews with all the DBA team members in sets of two per interview, and cultural interviews with multiple teams focusing on Diversity, Equity, and Inclusion.

By looking at the list, are you already stressed? I felt the same when HR emailed me this list of scheduled interviews, but I can say this is one of the best interview processes I have seen in my entire career. I had the chance to talk to each of the people I would potentially work with if I were chosen for the position, from my team members all the way up to my Director. I had never before seen an interview with a Director for a database administration position. This speaks to the culture of the company and is a clear example of how valuable each potential employee is to them.

I really appreciate all the companies that take real care in interviewing the best candidates for the position.

Coming to the best interview question, from the same company, in the interview with the DBAs: they asked me how I manage my work and community work at the same time, and how I find time for all the community work on top of my job. They asked me if I rest and take time for myself. To be very sincere, I became emotional when they asked me this question. They asked about my emotional well-being. I was fully prepared technically and ready to answer technical questions, and hearing this from them (experts in the database administration field) melted my heart. I was not expecting it at all. We also had technical discussions, and it was a great interview with each of the DBA team members.

At the end of this interview (which was the final round), I made sure to let the interviewers know how wonderful the interview process was, and I thanked them for giving me the best time ever. I also let them know that this interview process was the best of my entire career. To be very sincere, I was not making up words to impress the interviewers. I had enough experience to find another job if I was not selected for the role, but I sincerely wanted to thank the entire team and the company for giving me the best experience ever.

Best of the Best, I am currently working here ❤

Thanks to Kevin for asking us to write on this topic. Really appreciate it.

Looking forward to reading the posts from other SQL Family members.

Thanks for reading!

T-SQL Tuesday – The Last Issue I Solved

Thanks to Brent Ozar for hosting this month's T-SQL Tuesday. Please find the original post here.

Brent asked us to share the last issue we resolved. I recently resolved a replication issue in a transactional replication setup that spans 4 servers. Let's call them servers A, B, C, and D: A replicates to B, B replicates to C, and C replicates to D. The setup is complicated. As a regular process, fresh data from Server A needs to be imported into Server B tables that are involved in replication. It is not possible to drop those tables, as they are involved in replication from Server B to Server C. To make the import possible, the publications have to be dropped on Server B and Server C before importing fresh data from Server A to Server B. Once the tables are recreated on Server B, the publications are recreated.

Because all these servers are linked through transactional replication on the same tables, with publications created on each server, it gets complicated to drop and recreate the publications on Server B and Server C. We automated the process by creating SQL Agent jobs that drop and recreate the publications, and using triggers to kick off the jobs one by one.
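The drop-and-recreate cycle that those SQL Agent jobs automate looks roughly like the sketch below. The publication, article, and table names are placeholders, and a real setup needs the full subscription and article options, so treat this as an outline rather than a runnable script:

```sql
-- On Server B (and similarly on Server C), before the fresh import:
-- drop the subscriptions and the publication so the tables can be reloaded.
EXEC sp_dropsubscription
    @publication = N'PubB',      -- placeholder publication name
    @article     = N'all',
    @subscriber  = N'all';
EXEC sp_droppublication @publication = N'PubB';

-- ...import the fresh data from Server A into the Server B tables...

-- Recreate the publication and its articles once the tables exist again.
EXEC sp_addpublication
    @publication = N'PubB',
    @status      = N'active';
EXEC sp_addarticle
    @publication   = N'PubB',
    @article       = N'OrdersTable',   -- placeholder table name
    @source_object = N'OrdersTable';
```

Each of these steps becomes one step in a SQL Agent job, which is what makes a single failed step (like the locked agent account below) cascade into the downstream jobs.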

I know we could use other SQL Server technologies to accomplish the same goal, but the requirements and limitations from vendors made us stick with this plan.

So, the setup was complete. Soon after, I saw one of the recreate-publication jobs fail because the replication agent account was locked, and the downstream jobs all failed because they were interlinked. Though the resolution is as simple as unlocking the account, it is hard to figure out that this is the reason replication is broken, especially when these jobs fail across multiple servers.

I unlocked the replication agent account and reran the job to success, then manually executed each of the failed jobs to success on the rest of the servers. I am currently figuring out why the account was locked in the first place.

I know it is challenging to maintain replication, and it gets even tougher with complicated replication setups across multiple servers.

This is the last issue I fixed, last Saturday night at 1 am, LOL. Glad the issue is resolved.

Though the setup is tough and things seem complicated, I was able to successfully migrate the servers from older versions to newer versions without breaking the replication setup required between the servers.

I am looking forward to reading the other SQL family posts on this month’s T-SQL Tuesday.

Thank you for reading!

SQL Server 2022: Query Store Advancements

If you are not very familiar with the Query Store feature, please check my series of blog posts on Query Store here.

In this post, you are going to learn about the Query Store advancements released in SQL Server 2022. This is the first blog post in the series; you can expect more posts in the coming weeks on how Query Store is used by the Intelligent Query Processing features released in SQL Server 2022.

Advancements in SQL Server 2022:

  1. Query Store enabled by default – For any new database you create on SQL Server 2022, the Query Store feature is enabled by default. If you migrate a database from a lower version to SQL Server 2022, Query Store is not enabled by default; you need to enable it manually if it was not already enabled on the lower version. While choosing the configuration options, make sure you capture only the queries you need and not insignificant queries you do not care about. To capture the queries you need and save space for Query Store data, you can customize the capture policy based on total compile CPU time, total execution CPU time, and execution count. Example of a custom policy (source):
ALTER DATABASE [QueryStoreDB]
SET QUERY_STORE = ON
(
    OPERATION_MODE = READ_WRITE,
    CLEANUP_POLICY = ( STALE_QUERY_THRESHOLD_DAYS = 90 ),
    DATA_FLUSH_INTERVAL_SECONDS = 900,
    MAX_STORAGE_SIZE_MB = 1000,
    INTERVAL_LENGTH_MINUTES = 60,
    SIZE_BASED_CLEANUP_MODE = AUTO,
    QUERY_CAPTURE_MODE = CUSTOM,
    QUERY_CAPTURE_POLICY = (
        STALE_CAPTURE_POLICY_THRESHOLD = 24 HOURS,
        EXECUTION_COUNT = 30,
        TOTAL_COMPILE_CPU_TIME_MS = 1000,
        TOTAL_EXECUTION_CPU_TIME_MS = 100
    )
);

2. Query Store on secondary replicas – In SQL Server 2022, Query Store can also gather information from the secondary replicas and send it to the primary replica through the same channel that sends availability group data to the secondaries. Once the data is captured and sent to the primary replica, it is hardened to the Query Store schema in the primary database. Query Store for secondary replicas is available as a preview; it isn't supported for use in production environments.

You need to enable trace flag 12606 before you enable Query Store for secondary replicas.
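Assuming you enable the trace flag globally, that step looks like this (to have it persist across restarts, add -T12606 as a startup parameter in SQL Server Configuration Manager):

```sql
-- Enable trace flag 12606 for all sessions (-1 = global scope)
DBCC TRACEON (12606, -1);

-- Verify the trace flag is active
DBCC TRACESTATUS (12606);
```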

There are a few steps to follow when enabling Query Store on a secondary replica:

  • The primary database should have Query Store enabled and in read-write mode before you enable Query Store on secondary replicas.
  • Since the secondary replicas are all in read-only mode, you enable Query Store for the secondaries by connecting to the primary and running the query below on each database you would like to enable. This enables Query Store for all the secondary databases.
ALTER DATABASE [Database_Name]
FOR SECONDARY SET QUERY_STORE = ON ( OPERATION_MODE = READ_WRITE );
GO
  • To find more information about the replicas, you can query sys.query_store_replicas, which shows the replica_group_id, role_type, and replica_name.
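For example, a quick look at the replica metadata could be (run on the primary, with the column list as documented for the catalog view):

```sql
-- List the replicas for which Query Store is capturing data
SELECT replica_group_id, replica_name, role_type
FROM sys.query_store_replicas;
```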

3. Query Store hints: This feature has been available in Azure SQL Database and Azure SQL Managed Instance, and it is now introduced in SQL Server 2022. When you have a query whose performance would improve with a query hint, you can use Query Store hints. They are used just like regular query hints, but the important difference is that you do not have to change the underlying code of your query. You use the system stored procedure designed for Query Store hints, sys.sp_query_store_set_hints.

You can use Query Store hints in scenarios such as forcing a query to recompile, setting MAXDOP, setting the memory grant size for your queries, setting the compatibility level at the query level, or forcing a specific kind of join, such as a hash join in place of a nested loop join.
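A minimal sketch of the workflow looks like this; the table name, search text, and query_id 42 below are placeholders you would replace with values from your own Query Store:

```sql
-- 1. Find the query_id of the statement you want to hint
SELECT q.query_id, qt.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON q.query_text_id = qt.query_text_id
WHERE qt.query_sql_text LIKE N'%Sales.Orders%';   -- placeholder search

-- 2. Attach a hint to that query without touching the application code
EXEC sys.sp_query_store_set_hints
    @query_id    = 42,                            -- placeholder query_id
    @query_hints = N'OPTION (RECOMPILE, MAXDOP 1)';

-- 3. Review or remove the hint later
SELECT query_id, query_hint_text FROM sys.query_store_query_hints;
EXEC sys.sp_query_store_clear_hints @query_id = 42;
```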

To see an example of how to implement Query Store hints, see my blog post here.

Some things to remember while using the Query Store hints:

  • If you are using the forced parameterization option at the database level, remember that the 'RECOMPILE' query hint will be ignored; any other query hints will still be applied and the query will execute. When this happens, SQL Server issues warning 12461.
  • If contradictory hints are used, the conflicting hint is simply ignored without any execution issues; the query will run fine.
  • If you already have hints at the statement level or plan guides on your queries, they will be ignored; Query Store hints always override them.
  • Prior to SQL Server 2022, Query Store only captured data on the primary replica, and Query Store hints were only applied to the primary replica. In SQL Server 2022, once the secondary replicas are enabled to capture Query Store data, Query Store hints are replica-aware for the secondary replicas.

4. Intelligent Query Processing features in SQL Server 2022 – these use Query Store data captured over time as feedback, and use that feedback to improve the performance of your queries. All the features highlighted in blue below were released in SQL Server 2022. To use them, you just have to set the database compatibility level to 160.

The only IQP feature released in SQL Server 2022 that doesn't require Query Store to be enabled is 'Parameter Sensitive Plan Optimization', also known as PSP Optimization.
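Turning these features on is a one-line change; the database name below is a placeholder:

```sql
-- Most IQP features in SQL Server 2022 light up at compatibility level 160
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 160;

-- Confirm the current compatibility level
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'MyDatabase';
```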

Source: Microsoft Docs

In the coming series of blog posts, I would like to go into detail on each of these IQP features, with queries explaining how they help optimize the performance of your queries and how Query Store supports these IQP features.

Thank you for reading!

References

  1. Intelligent query processing in SQL databases
  2. Query Store hints
  3. What’s new in SQL Server 2022

T-SQL Tuesday 170 – Steering Future Projects with Past Experiences

Thank you, Reitse, for hosting the January 2024 edition of T-SQL Tuesday. This is my first blog post of 2024. Reitse asked us to write about lessons learned from old projects. Please find the original invite here.

I worked on a project years ago where the requirement was to migrate an old version of a database (SQL Server 2000) to SQL Server 2012/2016. One of the requirements was to use replication to copy the data over to another server. This was an OLTP database with heavy transactions per second. As part of the planning, I recommended using Always On availability groups instead of replication. My manager at the time (at my previous company) was a person who trusted and supported male DBAs more than female DBAs, regardless of expertise. I had to respect the decision to use replication over availability groups for read-only reporting purposes. I didn't even get the chance to test the scenarios, compare the results of the two technologies, and show the results in a test environment.

Once the project was completed and in production, the problems with replication began: intermittent network failures, issues with the settings in the files generated by the snapshots, and other typical replication issues, including heavy lag in transferring the data. If you have worked on replication before, you surely know that fixing replication issues is no joke. We later removed replication for good and replaced it with Always On availability groups. This not only improved overall performance but also required less maintenance.

Though management later understood that they should have taken a different approach, the cost was the time and resources spent on the project.

Another project was about the tools we wanted to purchase for data governance. Depending on one tool is not a good approach when you are still deciding which tool to purchase for your needs. We spent months on a proof of concept for one tool, only to decide it didn't serve our needs. This tied up DBA and other team resources for months. I believe understanding the requirements from the company's perspective should be the first step in the process. Then compare the tools from a wider perspective to see whether they can serve all the requirements we are looking for. Filtering this way gives us a shortlist of tools we can try and test before deciding further.

These are the two experiences I would like to share in this post. I am looking to learn more on this topic from others as well.

Thanks for reading!

Year 2023 in Rearview: Thrills, Spills, and Life Skills

Hello Dear readers!

This will be the last post for this year – writing this a few hours before the end of the year! I will keep this short and not boring.

Would like to thank you all for staying with me along my learning Journey. You are the main reason why I still continue to write these posts. Thank you and Thank you!!

Source: Pexels

As I look back at the year 2023, the main thing I can say is that it went by super fast. I still remember the day I wrote my 2022 year-end review. Time flies so fast, and we get older quickly.

I would like to reflect on this year in terms of my professional growth and my personal growth.

Professional

  1. Community Speaking and Other Engagements

Personal

  • I was interested in learning philosophy for a very long time. I started educating myself on Stoic philosophy, beginning with Marcus Aurelius' Meditations. I also started listening to Robert Greene audiobooks on Audible, beginning with The 48 Laws of Power. Recently, I fell in love with J. Krishnamurti's philosophy; this is my all-time favorite, I can say, because I am finding answers to some complex questions about life, and I truly believe Krishnamurti has answers to my questions. I am really enjoying his hour-long lectures.

I am looking forward to continuing to publish blog posts, do community work including speaking engagements, and mentor in the year 2024.

To all you wonderful people who read all the above lines and reached this section of the post: these are a few of the things I learned about life this year, and I would like to share some of them here.

  • Live this life as much as you can – cause you don’t have any other lives to jump into. No matter how much you feel this life sucks, you are traveling on your own road (your life). Stop looking at the other roads cause you don’t know where they turn and how far the other roads lead. Those roads have their own bumps, speed breakers, and dead ends. Focusing on our road is the only way.
  • If you are trying to solve any particular issue in your life, try to examine it and do research on the issue. Fear disappears once you understand what the issue is. You can find books, and videos talking about the issue in detail. As you do the research, you will find you are not the only one having the same issue – which in itself solves half of your worries. Along the way, you might find a community of people trying to solve the same problem or achieve the same life goal.
  • Do not bother about what others think of you. Don’t take other people’s negativity towards you personally. They do not know you, your dreams, your ambitions, and the work you do while they are not watching you. Have empathy towards them. They may be going through a lot in their lives and have many insecurities in their lives.
  • No matter how big a problem you are facing, many people may be going through the exact same problem at the same time as you. We are a species that has lived on earth for 200,000 years. Problems repeat themselves, but because we are so focused on our own lives, we feel we are the only ones facing the issue.
  • See your problems from a broader perspective; most of the time, you will end up realizing you are giving them more attention than they need.
  • At the end of our lives, none of these problems matter.

Thank you all for reading!

Happy New Year 2024!

T-SQL Tuesday #169 – Hey You Wonderful, Patient Souls!

Thank you my dear Friend Kay Sauter for hosting the December month of T-SQL Tuesday. Here is the invitation. I believe this is the perfect way of saying thank you and Goodbye to the year 2023.

Let’s take a moment here. Seriously, stop scrolling for a second. Can we all collectively pat ourselves on the back? I mean, seriously, YOU all deserve a medal, or at least a virtual hug, for hanging around and being the absolute best bunch of readers of this blog!

Out of all the wonderful things around the world to watch, out of all the endless distractions we have in this social media world today, from cute little kitten videos all the way to your favorite actor's movies, the possibilities for entertainment are endless. Still, you decided to spend your time chilling here reading all my ramblings. Seriously, I am speechless! Thanks for your commitment and patience with me.

Thanks for even taking in my emotional side of things. I am an emotional person, and that part of my personality shows up in some of my blog posts, especially when I sit down at night and put my thoughts in here. My thoughts are unfiltered. You have braved seeing me from that angle as well. I mean, my drama! Thanks for embracing the unpredictable side of me.

So, THANK YOU!

Thank you for being my best virtual friends, and for spending your time and attention! Thank you for being so patient with me as I compose and do the posts.

You are all my real MVPs, and these little bi(y)tes of dbanuggets wouldn’t be the same without you!

So, my awesome and amazing readers! Keep being yourself! Never ever let anyone dim your light! Cause, you are born to stand out!

Embrace this life, keep being fabulous, be curious, enjoy the little things, cry, scream, and laugh through life. There is nothing wrong with it. After all, we have just one life, and we want to be authentic. Show your emotions instead of suppressing them. Tell the people in your life you love them. Do not worry about making mistakes; there is nothing bad about being wrong. Keep being YOU!!

Keep that Smile, it may brighten someone’s day!

SQL Server 2022: Enable Azure AD Authentication for SQL Server on VM

To enable SQL Server 2022 on a virtual machine to use Azure AD authentication, we need the following:

Managed identity and its permissions to query Microsoft Graph: When you enable Azure AD on SQL Server 2022, create an Azure AD login, and that user tries to log in, the managed identity is used to query Microsoft Graph. That is why we need to give this identity permissions to query Microsoft Graph. The scope of this managed identity is the resource level; here, the resource is the virtual machine.

To set up the Azure AD authentication for SQL Server on VM, we can do this by using System-assigned managed identity or User-assigned managed identity.

A system-assigned managed identity is easy to manage and configure, as the identity is deleted when the VM is deleted; it is valid only for the life of the VM. A user-assigned managed identity, on the other hand, can be used at scale across many SQL Servers on virtual machines.

In this post, we are learning how to enable Azure AD authentication for SQL Server on a VM. We need SQL Server 2022 installed on the Azure VM, and the Windows SQL Server VM must be registered with the SQL IaaS Agent extension.

Permissions:

The managed identity needs the following permissions to connect SQL Server on the VM to Microsoft Graph:

User.Read.All, GroupMember.Read.All, and Application.Read.All.

The managed identity must also have Active Directory reader permissions. Search for Microsoft Entra ID.

Under the default directory, click on Roles and administrators.

Make sure you have Azure AD Global Administrator permissions before granting the identity the reader permissions to Azure AD.

If you scroll to the right, you can see the ellipsis button. Click on it to see the description of the role.

Under Directory Readers, click on Assignments and then click on Add assignments.

From the list, choose the VM name if you want to use the system-assigned managed identity.

Click Add at the bottom of the page.

You will see the screen below. Make sure the application (VM name) is listed under Select members. This applies if you chose the system-assigned managed identity. Click Next.

As you click Next, you will see the screen below. Make sure the assignment type is Active and check the box for Permanently assigned. At the bottom of the screen, you will see the Assign button. Click on it.

Under Directory Readers, you will see that the VM's system-assigned identity has been added.

If you want to use a user-assigned managed identity, choose that instead.

Enable the Azure AD authentication in the Azure VM

Under the SQL virtual machine resource, enable Azure AD authentication (make sure your VM is turned on and the SQL virtual machine resource is online, so that the Security Configuration option appears in the left blade under the Security tab). Here, I already enabled it earlier, so you see it shown as enabled.

Remember: Once you enable the Azure AD authentication, you cannot disable it.

If you want to enable Azure AD authentication, you will see the screenshot below. Click Enable and hit Apply at the bottom of the page.
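Once Azure AD authentication is enabled, you can create Azure AD logins and users inside SQL Server. A minimal sketch, where the principal and database names are placeholders for your own Azure AD accounts:

```sql
-- Create a login for an Azure AD user (placeholder principal name)
CREATE LOGIN [alice@contoso.com] FROM EXTERNAL PROVIDER;

-- Azure AD groups work the same way
CREATE LOGIN [SQLAdminsGroup] FROM EXTERNAL PROVIDER;

-- Grant database access through a user mapped to the login
USE [MyDatabase];
CREATE USER [alice@contoso.com] FOR LOGIN [alice@contoso.com];
```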

In this blog post, you have read about the process of enabling Azure AD authentication for SQL Server on an Azure VM. Only Windows Server 2022 VMs are supported for enabling Azure AD authentication with the SQL IaaS Agent extension.

Thank you for reading!

References:

Microsoft article: Enable Azure AD authentication for SQL Server on Azure VMs

T-SQL Tuesday #165 – That Job Description I read and Couldn’t Stop Laughing!

Please excuse me if this post is going to be a funny one, because yes, it is!

I would first like to thank my dear friend Josephine Bush for bringing up this great topic and asking us to write about what proper job titles and descriptions should look like in job postings.

I surely have a lot of thoughts on this topic. Two years ago, I was applying for database administrator positions everywhere, or if not everywhere, almost everywhere: LinkedIn, Indeed, Monster, CareerBuilder, etc. As I looked at the job titles and descriptions at the time, my blood boiled at first at the requirements (more on this soon). I was already stressed about finding a job soon, and on top of that, unreasonable job descriptions caused me more stress. After a couple of days, that stress turned into a stress reliever. Yes, you read that right. In the beginning, I was mad looking at job descriptions for a SQL database administrator role demanding high expertise in C++, Java, PHP, Python, etc.; but later, reading many of these job descriptions made me laugh and helped me filter out the companies I could skip applying to.

The other funny thing I observed is a description saying the company is looking for a senior-level position while listing the salary as 15 dollars per hour. These kinds of job postings also list certifications as required or preferred.

To take this to another level, I would like to show an example posting from one of the best companies (I don't want to name the company here) back in 2020, for the position of “Cloud Native Infrastructure Engineer”, requiring 12+ years of Kubernetes experience, when in fact Kubernetes was released in 2014, just 6 years earlier. Source

I believe many companies rely on people who do not have much experience with the technology to write these job openings and their descriptions. Because of this, great candidates who could be a great fit for the position may not even finish reading the description. This can ruin the reputation of the company.

One more hilarious experience of mine –

My first-ever job had the title “Database Administrator”, but when I arrived at the company on my first day, I received a badge with my name, picture, and the title “Database Developer”.

Hope you enjoyed reading this post, and my request to anyone actively looking for a job: ignore these types of job descriptions!

I am curious to read all other posts on this topic for this month of T-SQL Tuesday!

Thanks for reading!