T-SQL Tuesday #190 – Learning a Technical Skill

The September 2025 edition of T-SQL Tuesday is hosted by Todd Kleinhans, who asked us to write about “Mastering a Technical Skill”. Thank you, Todd, for hosting this month’s T-SQL Tuesday.

I would like to write about my recent experience learning Microsoft Fabric. I have been a Database Administrator my entire career and recently started learning Fabric by signing up for Amit Chandak’s Microsoft Fabric Bootcamp.

I really appreciate Amit doing this for the community. The bootcamp is completely free and is taught in both English and Hindi (one of India’s official languages). You can also access all the videos on Amit’s YouTube channel here.

Since I registered for this bootcamp, I have had the chance to watch a couple of sessions about Fabric and am looking forward to catching up on the rest of the ongoing sessions. It is always good to go deeper into a technology you are already familiar with. I have been working with Power BI and Fabric for quite some time, but mostly on the administrative side of things. I believe listening to experts through community-led bootcamps is an excellent way to learn a new or existing technical skill and get good at it.

There is always something new to learn in fast-moving technology, and resources like these bootcamps are a great way to learn from experts. Beyond bootcamps, many free online conferences run throughout the year, and taking advantage of them is a great way to pick up new technologies.

By the way, I am one of the co-organizers of two upcoming free online conferences: Future Data Driven Summit (September 24th) and DataBash 2025 (September 27th). If you are interested in learning new technologies or want to dive deeper into topics you already know, I highly suggest registering for these conferences. To learn more about the topics and speakers, please visit the conference websites.

I am happy to write this T-SQL Tuesday post, and thanks to Todd for the invitation!

Thank you for reading!

T-SQL Tuesday #181 – SQL Database in Microsoft Fabric

Thanks to Kevin Chant for inviting us to write this month’s T-SQL Tuesday. This month is special, as Kevin mentioned, because of the Festive Tech Calendar, which I have been speaking at for a couple of years now. Every day in December, a new recording or blog post is released for you to view. If you are not following their YouTube channel yet, you should subscribe to get a wealth of information on the latest and greatest features in the Microsoft space.

As Kevin invited us to write about our most exciting feature, I would love to write about the SQL Database in Fabric.

Note: This is a new feature that was announced at Microsoft Ignite 2024 in November.

As per Microsoft docs,

“SQL database in Microsoft Fabric is a developer-friendly transactional database, based on Azure SQL Database, that allows you to easily create your operational database in Fabric. A SQL database in Fabric uses the same SQL Database Engine as Azure SQL Database.”

As you can read, this is a transactional database that can be created in Fabric and replicated to the data lake (OneLake) for analytical workloads. Another main goal is to help you build AI apps faster using SQL databases in Fabric. The data is replicated in near real time and converted to Parquet, an analytics-ready format. The database can be shared with other users without giving them access to the workspace; granting access to the database automatically gives them access to the SQL analytics endpoint and the associated default semantic model. You can use the SQL database in Fabric for data engineering and data science purposes. The other cool thing is that you can use the built-in Git source control integration to manage your SQL database.

As you know, Microsoft Fabric is a software as a service (SaaS) platform that combines data engineering, data science, and data warehousing into a unified analytics solution for enterprises. All of these services within Fabric can access the data from the SQL database in Fabric through the data lake for analytical purposes.

This feature is currently in public preview. You can test the SQL database in Fabric for free for 60 days. You do need to have a Fabric capacity.

Make sure to enable SQL database in Fabric in the Admin portal tenant settings. For more details, you can follow this Microsoft doc.

You can query the SQL database in Fabric using the query editor, SQL Server Management Studio (SSMS), sqlcmd, the bcp utility, and GitHub Copilot.
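Since a SQL database in Fabric uses the same SQL Database Engine as Azure SQL Database, standard T-SQL works as you would expect. Below is a minimal sketch, using a hypothetical table and sample data of my own, that you could run from the query editor or SSMS once connected:

/* Minimal sketch: create and query a table in a SQL database in Fabric.
   The table name and data are hypothetical. */
CREATE TABLE dbo.Orders
(
    OrderID      int IDENTITY(1,1) PRIMARY KEY,
    CustomerName nvarchar(100) NOT NULL,
    OrderDate    datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);

INSERT INTO dbo.Orders (CustomerName)
VALUES (N'Contoso'), (N'Fabrikam');

SELECT OrderID, CustomerName, OrderDate
FROM dbo.Orders;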

As this is a new feature, there are resources available on the Microsoft Reactor YouTube channel, which has been releasing a series of videos over the past few days.

You can find the first video here.

For more information, do not forget to read these blogs from Anna Hoffman and Slindsay.

Resources:

Microsoft docs

Thank you for reading!

T-SQL Tuesday #178 Invitation – Recent Technical Issue You Resolved

I am honored to invite you all to the September Month of T-SQL Tuesday blog party!

If you are new here and want to be part of the blog party every month, learn all about T-SQL Tuesday here.

Have you had any recent technical problems at work that you were able to fix? You might have tried hard for days to figure out the answer to a technical issue, only for a minor modification to finally resolve it. Or perhaps the error message you saw pointed in a completely different direction from the solution you eventually adopted. Please blog for me about any problem, no matter how big or small, that you have encountered lately. I’d like to see all kinds of issues you’ve faced and how you fixed them.

I’ll share my latest experience here.

The DEV and UAT migrations for the SSRS migration project I was recently working on went well, but when we opened the web portal URL, we ran into an HTTP error. In this environment, the Report Server service and the Report Server databases are housed on separate servers. The servers were set up correctly, SSRS service delegation was established, and the Report Server service account had the appropriate rights to the Report Server databases. It took days of working with a member of the server team to resolve the problem: we had missed creating an SPN for the Report Server service using the server name. The problem was fixed by adding an SPN for the service account using HTTP and the server name. We also had to change the authentication type in the configuration file from RSWindowsNTLM to RSWindowsNegotiate.

Until this problem was resolved, we saw strange errors from an application running the reports, and testing the data sources returned the login failure message “Login failed for user ‘NT AUTHORITY\ANONYMOUS LOGON'”.

This article really helped us pinpoint the issue.

Kindly submit your post by Tuesday, September 10th, and leave a comment below. Also, share it on your social media platforms such as LinkedIn and Twitter with the hashtag #tsql2sday.

I’m excited to read posts from many SQL Family members.

T-SQL Tuesday 177: Managing Database Code

Thank you, Mala, for hosting T-SQL Tuesday for the month of August. Mala asked us to write about database version control and the tools we use to manage our database code. Please find the original invite here.

Earlier in my career as a DBA, database code was managed with comments in the code noting who changed what, when, and why. It was tough to manage the code since many developers forgot to add these comments properly, and this is not a reliable way to maintain code.

Then we decided to use source control. Version control tracks the changes in your code over time as versions. If you need to roll back to a previous version, you have a point-in-time snapshot in place, so retrieving that version and rolling back is easy. Version control also helps multiple developers collaborate on the same project. There are several tools on the market; you can find a list here.

We used Redgate SQL Source Control with an Azure DevOps Git repository. Redgate SQL Source Control is a plugin for SSMS that connects your databases to source control systems such as Git, TFS, and others. You can install the Redgate SQL Source Control tool from here.

We created a project in Azure DevOps and selected Git as the version control system. We then initialized the main branch and cloned it to a local folder using Visual Studio. Next, we connected the SQL database to source control by pointing it at that local folder, which linked the database to the repository. The initial commit pushed the database objects into source control.

We can also use Azure DevOps to create CI/CD pipelines that push changes through each environment before the code is deployed to production. To find out what a CI/CD pipeline is, please read this Microsoft article.

I have described the database code solution we used only briefly in this blog post, but this is a broad topic to learn.

To learn about implementing continuous integration with Azure DevOps, check out the Microsoft Learn series here.

I am looking forward to reading the experiences of other SQL Community members regarding their journey with database source control.

Thank you for reading!

T-SQL Tuesday #174: My Favorite Job Interview Question

I loved the question Kevin Feasel asked for this month’s T-SQL Tuesday, so here I am writing this post before the day ends. Please find the invitation here.

First, I would like to talk about the entire amazing interview process as a whole. It was multiple layers of interviews: starting with HR, then my manager and the IT Director, interviews with all the DBA team members in sets of two per interview, and culture-add interviews with multiple teams focusing on Diversity, Equity, and Inclusion.

Looking at that list, are you already stressed? I felt the same when HR sent me the email with this list of scheduled interviews, but I can say it is one of the best interview processes I have seen in my entire career. I had a chance to talk to everyone I would potentially work with if I were chosen for the position, from my team members all the way up to my Director. I have never seen an interview with a Director for a database administration position before. This speaks to the culture of the company and is a clear example of how much each potential employee is valued.

I really appreciate all the companies who take real care in interviewing the best candidates for the position.

Coming to the best interview question, from the same company, during the interview with the DBAs: they asked me how I manage my work and community work at the same time, and how I find time for all the community work outside my job. They asked me if I rest and take time for myself. To be very sincere, I became emotional when they asked me this question. They asked about my emotional well-being. I was fully prepared technically and ready to answer technical questions, and hearing this from them (experts in the database administration field) melted my heart. I was not expecting it at all. We also had technical discussions, and it was a great interview with each of the DBA team members.

At the end of this interview (which was the final round), I made sure to let the interviewers know how wonderful the interview process was, and I thanked them for giving me the best interview experience ever. I also let them know that this process was the best of my entire career. To be very sincere, I was not just saying this to impress the interviewers; I had enough experience to find another job if I was not selected for the role, but I sincerely wanted to thank the entire team and the company for the experience.

Best of all, I am currently working here ❤

Thanks to Kevin for asking us to write on this topic. Really appreciate it.

Looking forward to reading the posts from other SQL Family members.

Thanks for reading!

T-SQL Tuesday – The Last Issue I Solved

Thanks to Brent Ozar for hosting this month of T-SQL Tuesday. Please find the original post here.

Brent asked us to share the last issue we resolved. I recently resolved a replication issue. I had set up transactional replication between four servers; let’s call them Servers A, B, C, and D. Server A replicates to Server B, which replicates to Server C, which in turn replicates to Server D. The setup is complicated. As a regular process, fresh data needs to be imported from Server A into the Server B tables that are involved in replication. The tables cannot simply be dropped, because they are part of the publication replicating from Server B to Server C. To make the import possible, the publications on Server B and Server C have to be dropped before importing the fresh data from Server A to Server B. Once the tables are recreated on Server B, the publications are recreated.

Because all these servers are linked through transactional replication on the same tables, with publications created on each of them, dropping and recreating the publications on Server B and Server C gets complicated. We automated the process by creating SQL Agent jobs to drop and recreate the publications and by using triggers to kick off the jobs one after another.
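For illustration, here is a simplified sketch of the kind of T-SQL such drop-and-recreate jobs run. The database, publication, article, and server names are hypothetical, and a real job would also handle snapshot generation, article properties, and subscriber reinitialization:

/* Simplified sketch of dropping and recreating a transactional publication.
   Names are hypothetical; run in the published database on the Publisher (Server B here). */
USE SalesDB;
GO
-- Drop existing subscriptions and the publication before the fresh import
EXEC sp_dropsubscription @publication = N'SalesPub', @article = N'all', @subscriber = N'all';
EXEC sp_droppublication  @publication = N'SalesPub';
GO
-- Recreate the publication once the tables are reloaded
EXEC sp_addpublication @publication = N'SalesPub', @status = N'active';
EXEC sp_addarticle
     @publication   = N'SalesPub',
     @article       = N'Orders',
     @source_owner  = N'dbo',
     @source_object = N'Orders';
EXEC sp_addsubscription
     @publication       = N'SalesPub',
     @subscriber        = N'ServerC',
     @destination_db    = N'SalesDB',
     @subscription_type = N'push';
GO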

I know we could use other SQL Server technologies to accomplish the same goal, but the requirements and limitations from the vendors made us stick with this plan.

So, the setup was complete. Soon after, one of the recreate-publication jobs failed because the replication agent account was locked, and the downstream jobs all failed since they are interlinked. Although the resolution is as simple as unlocking the account, it is hard to figure out that this is the reason replication is broken, especially when the jobs fail across multiple servers.
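When a chain of interlinked jobs like this breaks, a quick way to see which jobs failed on each server is to query the SQL Agent history in msdb. This is only a sketch (the job name filter is hypothetical), not the exact query we used:

/* List recent failed SQL Agent jobs (run on each server involved).
   run_status = 0 means failed; step_id = 0 is the job outcome row. */
SELECT  j.name AS job_name,
        h.run_date,
        h.run_time,
        h.message
FROM    msdb.dbo.sysjobs AS j
JOIN    msdb.dbo.sysjobhistory AS h
        ON h.job_id = j.job_id
WHERE   h.run_status = 0
  AND   h.step_id = 0
  AND   j.name LIKE N'%publication%'   -- hypothetical naming pattern
ORDER BY h.run_date DESC, h.run_time DESC;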

I enabled the locked replication agent account and reran the job successfully. I then manually executed each of the failed jobs on the rest of the servers until they succeeded. I am currently figuring out why the account was locked in the first place.

I know it is challenging to maintain replication, but it gets even tougher with complicated replication setups across multiple servers.

This is the last issue I fixed, last Saturday night at 1 AM, LOL. Glad the issue is resolved.

Though the setup is tough and things seem complicated, I was able to successfully migrate the servers from older versions to newer versions without breaking the replication setup required between the multiple servers.

I am looking forward to reading the other SQL family posts on this month’s T-SQL Tuesday.

Thank you for reading!

T-SQL Tuesday 170 – Steering Future Projects with Past Experiences

Thank you, Reitse, for hosting the January 2024 edition of T-SQL Tuesday. This is my first blog post of 2024. Reitse asked us to write about lessons learned from old projects. Please find the original invite here.

Years ago, I worked on a project that required migrating an old database (SQL Server 2000) to SQL Server 2012/2016. One of the requirements was to replicate the data over to another server. This was an OLTP database with a heavy transaction rate. As part of the planning, I recommended using Always On Availability Groups instead of replication. My manager at the time (at my previous company) was a person who trusted and supported male DBAs more than female DBAs, regardless of their expertise. I had to respect the decision to use replication instead of availability groups for the read-only reporting purposes. I did not even get the chance to test the scenarios, compare the results of the two technologies, and show the results in a test environment.

Once the project was completed and in production, the problems with replication began. There were intermittent network failures, issues with the settings in the files generated by the snapshots, and other typical replication issues, along with a heavy lag in transferring the data. If you have worked with replication before, you surely know that fixing replication issues is no joke. We later had to remove replication for good and replace it with Always On Availability Groups. This not only improved overall performance but also required less maintenance.

Though management later understood that they should have taken a different approach, the cost was the time and resources spent on the project.

The other project was about the tools we wanted to purchase for data governance. Depending on a single tool is not a good idea when you are still in the process of deciding which tool to purchase for your needs. We spent months on a proof of concept with one tool only to decide it did not serve our needs, which consumed a lot of DBA and other team resources. I believe understanding the requirements from the company’s perspective should be the first step in the process. Then compare the tools from a wider perspective to see whether they can serve all the requirements you are looking for. Filtering this way gives you a shortlist of tools you can actually try and test before deciding.

These are the two experiences I would like to share in this post. I am looking to learn more on this topic from others as well.

Thanks for reading!

T-SQL Tuesday #169 – Hey You Wonderful, Patient Souls!

Thank you, my dear friend Kay Sauter, for hosting the December edition of T-SQL Tuesday. Here is the invitation. I believe this is the perfect way of saying thank you and goodbye to the year 2023.

Let’s take a moment here. Seriously, stop scrolling for a second. Can we all collectively pat ourselves on the back? I mean, seriously, YOU all deserve a medal, or at least a virtual hug, for hanging around and being the absolute best bunch of readers of this blog!

Out of all the wonderful things around the world to watch, out of all the endless distractions we have in this social media world today, from cute little kitten videos to your favorite actor’s movies, the possibilities for entertainment are endless. Still, you decided to spend your time chilling here reading my ramblings. Seriously, I am speechless! Thanks for your commitment and patience with me.

Thanks for even taking in my emotional side of things. I am an emotional person, and a part of that side of my personality has shown up in some of my blog posts, especially when I sit down at night and put my thoughts in here. My thoughts are unfiltered. You have braved seeing me from that angle as well. I mean, my drama! Thanks for embracing the unpredictable side of me.

So, THANK YOU!

Thank you for being my best virtual friends, and for spending your time and attention! Thank you for being so patient with me as I compose and do the posts.

You are all my real MVPs, and these little bi(y)tes of dbanuggets wouldn’t be the same without you!

So, my awesome and amazing readers! Keep being yourself! Never ever let anyone dim your light! Cause, you are born to stand out!

Embrace this life, keep being fabulous, be curious, enjoy the little things in life, cry, scream, and laugh through life. There is nothing wrong with any of it. After all, we have just one life, and we want to be authentic. Show your emotions instead of suppressing them. Tell the people in your life that you love them. Do not worry about making mistakes; there is nothing bad about being wrong. Keep being YOU!!

Keep that Smile, it may brighten someone’s day!

T-SQL Tuesday #165 – That Job Description I read and Couldn’t Stop Laughing!

Please excuse me if this post turns out to be a funny one, because yes, it is!

I would like to first thank my dear friend Josephine Bush for bringing up this great topic and asking us to write about what proper job titles and descriptions for job postings should look like.

I certainly have a lot of thoughts on this topic. Two years ago, I was applying for Database Administrator positions everywhere, or at least almost everywhere: LinkedIn, Indeed, Monster, CareerBuilder, and so on. As I looked at the job titles and descriptions at the time, my blood boiled at the requirements. More about this soon. I was already stressed about finding a job soon, and on top of that, unreasonable job descriptions caused me even more stress. After a couple of days, that stress turned into a stress reliever. Yes, you read that right. In the beginning, I was furious reading a job description for a SQL Database Administrator role that demanded high expertise in languages like C++, Java, PHP, and Python; later, reading many of these kinds of job descriptions made me laugh and helped me filter out the companies I could skip applying to.

The other funny thing I observed is a description stating that the company is looking for a senior-level candidate while listing the salary as 15 dollars per hour. These kinds of job postings also list certifications as required or preferred.

To take this to another level, here is an example of a posting from one of the best-known companies (I don’t want to name the company here) back in 2020 for the position of “Cloud Native Infrastructure Engineer”, requiring 12+ years of Kubernetes experience, when in fact Kubernetes was released in 2014, only six years earlier. Source

I believe many companies rely on people without much experience in the technology to write these job openings and their descriptions. Because of this, great candidates who could be a great fit for the position will not even consider reading the entire description. This can ruin the reputation of the company.

One more hilarious experience of mine –

My first-ever job had the title “Database Administrator”, but on my first day at the company I received a badge with my name, my picture, and the title “Database Developer”.

I hope you enjoyed reading this post, and my request to anyone actively looking for a job: ignore these types of job descriptions!

I am curious to read all other posts on this topic for this month of T-SQL Tuesday!

Thanks for reading!

T-SQL Tuesday #164: Code that made me feel happy

This month’s T-SQL Tuesday is hosted by Erik Darling, who asked us to write about code that made us feel a way.

I would like to mention Query Store hints and why I really like them. If you have a parameter-sensitive query in a stored procedure and you need a hint (for example, a RECOMPILE hint) to fix the issue quickly without changing the actual code, using Query Store hints is the best option. You can also apply other useful hints, such as setting MAXDOP, the compatibility level, and so on. For the list of supported and unsupported hints, look here.

Remember: This is a last-resort option for when you cannot change the code. It is always best to refactor the stored procedure.

It is very easy to use. You just need to collect two things: the query ID of the query and the hint you would like to apply. There is a system stored procedure you can use to apply Query Store hints to your queries.

To find the query_id of your query, run the code below, changing the query text in the LIKE predicate:

/* Find the query ID associated with the query. Source */
SELECT qt.query_sql_text,
       q.query_id
FROM sys.query_store_query_text AS qt
INNER JOIN sys.query_store_query AS q
    ON qt.query_text_id = q.query_text_id
WHERE qt.query_sql_text LIKE N'%query text%'
  AND qt.query_sql_text NOT LIKE N'%query_store%';
GO

Use the query_id to run the stored procedure below with the hint you would like to use:

EXEC sys.sp_query_store_set_hints @query_id = 1, @query_hints = N'OPTION(USE HINT(''RECOMPILE''))';
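If you later need to remove the hint, there is a companion procedure, sys.sp_query_store_clear_hints. A minimal example, assuming the same query_id as above:

/* Remove the Query Store hint when it is no longer needed */
EXEC sys.sp_query_store_clear_hints @query_id = 1;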

Query Store can also capture ad-hoc workloads, which can fill up your Query Store very quickly if a lot of ad-hoc queries come from your applications. If you can parameterize these queries, configuring PARAMETERIZATION = FORCED on the database can be an option; read more about forced parameterization here. If you can’t parameterize those ad-hoc queries, you can set the Optimize for Ad hoc Workloads server option to save plan cache memory on queries that execute only once. If you do not want to capture these kinds of queries in the Query Store, set QUERY_CAPTURE_MODE to AUTO.
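Here is a minimal sketch of those options as T-SQL; YourDatabase is a placeholder, and you should test each setting before applying it in production:

/* Sketch of the ad-hoc workload options discussed above (YourDatabase is a placeholder) */
-- Parameterize ad-hoc queries at the database level
ALTER DATABASE YourDatabase SET PARAMETERIZATION FORCED;

-- Save plan cache memory for single-use queries (server-level option)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Let Query Store skip insignificant ad-hoc queries
ALTER DATABASE YourDatabase SET QUERY_STORE (QUERY_CAPTURE_MODE = AUTO);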

Remember:

  1. If you enable forced parameterization and use the RECOMPILE query hint at the same time, the engine will ignore the RECOMPILE hint and proceed with any other hints specified. If you are using Azure SQL Database, you will see error code 12461 when the RECOMPILE query hint is ignored. Source
  2. Query Store only stores the hints that are currently active; it does not keep a history of hints that were once active (a quick way to list the currently active hints is sketched after this list). To capture that history, you can use Extended Events. I have written a blog post here on exactly how to set this up so you can get the history of the hints.
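As mentioned in the second point above, here is a quick sketch for listing the hints that are currently active, using the Query Store catalog view:

/* List the Query Store hints that are currently active */
SELECT qh.query_id,
       qh.query_hint_text,
       qh.source_desc
FROM sys.query_store_query_hints AS qh;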

I am looking forward to reading other SQL Family members’ posts for this month’s T-SQL Tuesday, hosted by Erik Darling!

Thanks for reading!