Think you are not ready yet? Do it anyway! Do NOT self-reject!

This blog post is about speaking at New Stars of Data. As the title suggests, this post is for professionals who wish to become speakers but whose self-doubt is holding them back from taking that first step.

I totally get that feeling. I was in the same situation two years ago. Self-doubt was not just creeping in from time to time; it was part of who I was back then. I was going through depression due to the harassment I faced at my previous company for years in a row. I had no self-esteem and had lost my self-confidence bit by bit as each year of harassment passed. While I was in the darkest dungeon of my life, I finally decided to stand up for myself. I knew no one would come to rescue me, and it was me who needed to walk toward the light out of that darkness. I knew it would take time, but it was not impossible. But how? Where would I get the strength and energy to move a step forward when I had every reason to feel like a victim of abuse and harassment and stay right where I was? I was tired of blaming my fate for the situations I was in, and I remembered that I can always make a choice in any situation life throws at me.

I then decided I needed to take on a challenge that could build my self-confidence back.

In the process of searching for a challenge, I opened a Twitter account only to follow Brent Ozar and his new blog post news. I saw a tweet which said something like, “If you are looking to speak at an event for the first time, New Stars of Data is the conference you can look into.” When I first read this tweet, I never thought I would submit to this conference. For the next couple of days, I couldn’t sleep properly as this tweet kept popping up in my head. Those thoughts were screaming at me: “What if this is the challenge you are looking for? Speaking can get your self-confidence back, think about it! If you miss this chance, you may not get another.” These thoughts persisted for a couple of days. I knew in my heart that I was not ready and never would be. I just couldn’t relax and wait for the perfect time to start doing the things that were right for me to do.

I decided to take the first step in faith and submitted my abstract for New Stars of Data 1. I then prayed to God from the bottom of my heart that my session would not be selected. I was that scared and had that much self-doubt. God went against my will and selected my session 🙂 It was a blessing in disguise. I knew I was putting myself in an uncomfortable situation only to test whether I could face the challenge. If I could, I knew it would only be for my own growth.

At the time of my abstract submission, I didn’t know that I would be supported by a mentor assigned by the New Stars of Data (NSOD) conference organizing team. Ben Weissman and William Durkin are the organizers of the NSOD conference. I was blessed to have Deborah Melkin as my mentor. Her smile almost always calmed my tense nerves. Deborah and I had weekly meetings to prepare my presentation, and she was very patient with me throughout the process. She helped me build my presentation, and Andy Yun also helped me give it a proper flow. Both Deborah and Andy helped me build the presentation, reviewed it, and held a dry run before the NSOD presentation. That rehearsal session really helped in fine-tuning the presentation.

On my big day at NSOD, Deborah was moderating my session. Thanks to Ben and William for going above and beyond for the comfort of the new speakers without us even asking for it. As Deborah was familiar to me and was my mentor, I was super comfortable delivering my presentation for the first time. My presentation went well, or at least not terribly, I believe, lol. After I completed my presentation, attendees were asked to provide feedback. Please look at the feedback provided by the attendees below. It was such a great honor to be able to present at NSOD and to receive such encouraging feedback.

This feedback helped me take another step and present this session, and other sessions, at different events and conferences within the United States and at international conferences across the globe.

Since then, I have spoken at different user groups, SQL Saturdays, and local and international conferences. I have presented more and more to get my self-confidence back. I believe it is a continuous process, and I am still in it. Healing from trauma takes time. I have presented at more than 200 events to date. I did it like I was obsessed with speaking, because the more I presented, the more inner peace I gained. The more peace I had in my heart, the more confident I felt. I have mentioned this at many events but would like to mention it here again.

“At the darkest moments in my life, I have chosen Speaking as a way out. Speaking has become a therapy to me. It will be a therapy to me for the rest of my life!” -Deepthi Goguri

To all the professionals who are reading this and are interested in becoming a speaker for the first time, I highly suggest you submit your sessions to New Stars of Data 5. NSOD has changed my life, and I am sure it will change your life for good as well.

Though you think you are not ready yet, believe me, you will never be ready until you take the step and put things into action. Remember, experience will never come if you only wish and dream of a better future. Experience comes only with action. Go ahead and take the first step in faith. Do NOT self-reject.

After reading this blog post, I am confident that you will be motivated enough to submit your sessions. With the same confidence, I would like to wish you all the best and good luck on your submissions!

Thanks so much for reading!

T-SQL Tuesday #156 – Checklist for Better Production Code

Thanks to Tom Zika for hosting this month's T-SQL Tuesday! Tom asked us to share our tips for perfect production code.

Below are some of the tips I would like to share for writing better, production-ready code:

1. Production-ready code is always readable, with detailed comments about what changes are being made to the existing code. Detailed documentation is important, as this code is going to stay around for years. Why, when, and by whom the code was changed needs to be documented. I have seen situations where these obvious details were missed in production code. The code is here to stay; the developer who wrote it may or may not stay at the same company.
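For example, a change-history header at the top of the object answers the why/when/who questions right inside the code. This is just a minimal sketch; the procedure name, dates, and authors are made up:

/* ============================================================
   Procedure : dbo.usp_GetCustomerOrders  (hypothetical name)
   Purpose   : Returns open orders for a given customer.
   Change history:
   Date        Author       Change
   ----------  -----------  ----------------------------------
   2022-09-01  D. Goguri    Initial version.
   2022-09-13  D. Goguri    Added @OrderStatus filter because
                            closed orders slowed the report.
   ============================================================ */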

2. For best practices on commenting, I highly suggest you read Brent Ozar's excellent post here.

3. Check the execution times based on the DEV and TEST data. I have seen scenarios where the data size in the DEV and TEST environments is not the same as the production data. This can cause execution times to differ between the lower environments and production. It is always suggested to test your code in lower environments that have the same amount of data as production.

4. Unit testing is important to find out ahead of time whether there will be any resource contention or concurrency issues before implementing the code in production. Please find the free unit testing tools here.
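As an illustration, with the free tSQLt framework a unit test is just a stored procedure inside a test class. This is a minimal sketch; dbo.usp_GetOrderTotal and its parameters are hypothetical:

EXEC tSQLt.NewTestClass 'testOrders';
GO
CREATE PROCEDURE testOrders.[test usp_GetOrderTotal returns 0 when there are no orders]
AS
BEGIN
    -- Replace the real table with an empty fake so the test is isolated
    EXEC tSQLt.FakeTable 'dbo.Orders';

    DECLARE @Actual MONEY;
    EXEC dbo.usp_GetOrderTotal @CustomerID = 42, @Total = @Actual OUTPUT;

    EXEC tSQLt.AssertEquals @Expected = 0, @Actual = @Actual;
END;
GO
EXEC tSQLt.Run 'testOrders';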

5. Handle errors in your code. You need procedures in place to handle and capture errors in case you encounter them, and rollback procedures should be in place. Please read the details here; it is an excellent article on transaction handling in SQL Server.
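A minimal sketch of the pattern (the table is hypothetical): wrap the modification in a transaction, roll back in the CATCH block, and re-raise the error so callers and Agent jobs still see the failure:

BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Undo the partial work, then surface the original error
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;
END CATCH;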

6. Tune your code before it goes to production. Check for indexes you can create to improve performance, but first check whether an existing index already covers those columns; otherwise you may end up creating duplicate indexes.

7. For better readability and style of your code, check these formatting tools here.

8. Give sensible alias names to the columns in your code.

9. Test all the use cases. Do you remember a situation where you deployed your code to production, it failed, and everyone blamed the testers and developers for not checking every use-case scenario? Thorough testing of all use cases is important for a successful production deployment.

10. Do not run code you find online directly in production. I know, I know, you will say "never!" But even for code from a legitimate website you always use, please read the entire script and test it in a test environment before even thinking of using the same sample code in production.

These are some of the things you can check to see if your code is ready for production deployment. I am looking forward to reading the posts from other SQL family members for this month's T-SQL Tuesday!

Thanks for reading!

T-SQL Tuesday 154 – New Performance Features in SQL Server 2022

Thanks to Glenn Berry for hosting this month's T-SQL Tuesday. Glenn wanted us to write about the new features we are excited about in SQL Server 2022. Please find the invitation here.

I am very excited about the performance capabilities in SQL Server 2022. When we talk about performance, you know how much the Query Store feature has helped in finding and resolving slow-performing queries since it was introduced in SQL Server 2016.

If you have already used the Query Store feature for your databases, you know it needs to be enabled at the database level. The primary use of this feature is to capture the execution plans and execution stats for your queries over time, so you can identify regressions between execution plans and use this information to fix the most troublesome queries easily.

Starting with SQL Server 2022, the Query Store feature is enabled by default when you create new databases. If you migrate databases from older SQL Servers to SQL Server 2022, you still need to enable this feature manually.
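Enabling it manually is a one-liner (the database name is a placeholder):

ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);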

In SQL Server 2019 and earlier versions, there is no Query Store support for read-only replicas in availability groups, meaning query data is only collected into Query Store on the primary replica and not on the readable secondary replicas. In SQL Server 2022, query data is also collected into Query Store on the secondary read-only replicas of availability groups. Query Store hints can also be used in SQL Server 2022.
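If I read the new syntax correctly (treat this as a sketch to verify against the documentation), Query Store on secondaries is opted into per database, from the primary, with a FOR SECONDARY clause:

-- Run on the primary replica of the availability group
ALTER DATABASE [YourDatabase] FOR SECONDARY SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);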

The other feature is query optimization for parameter-sensitive queries. In SQL Server 2019 and earlier versions, the moment a query executes for the first time, its plan is saved in the plan cache with the parameter value used for that first execution. The next time the same query runs with a different parameter value, the initial plan stored in the plan cache is reused. If the parameter used for the first execution is sensitive, the generated plan can contain resource-consuming operators that affect the next executions of the query with different parameters, because the same stored plan from the plan cache is reused.

In SQL Server 2022, multiple plans are generated based on the parameter-sensitive values and saved in the plan cache. Depending on the parameter value, the optimizer will sniff it and use the matching execution plan. This is a very useful feature, as it can solve most parameter-sensitive query issues: the optimizer chooses the right optimal plan for your queries based on the sensitivity of your parameter value.
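Parameter Sensitive Plan optimization is tied to the new database compatibility level 160 and, as I understand it, can be toggled per database; a sketch (the database name is a placeholder):

ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 160;  -- SQL Server 2022
-- PSP optimization is on by default at level 160; run in the database to toggle it:
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SENSITIVE_PLAN_OPTIMIZATION = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SENSITIVE_PLAN_OPTIMIZATION = ON;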

I have not had a chance to test the performance capabilities in SQL Server 2022 yet, but I am going to create a new demo for one of my upcoming sessions on the performance capabilities and Query Store enhancements in SQL Server 2022.

Thanks to Glenn Berry one more time for boosting our interest in trying out the new features in SQL Server 2022 and for sharing the knowledge with the world.

I am looking forward to reading this month's T-SQL Tuesday posts from other SQL family members and learning about their excitement over, and learnings from, the new features of SQL Server 2022.

Thanks for reading!

T-SQL Tuesday #153: My Life changing event!

Thanks to Kevin Kline for hosting this month's T-SQL Tuesday! Kevin wanted us to share the events that have changed our lives for the better.

Becoming a speaker has changed my life. I decided to become a speaker during the dark times in my life. Speaking has become a therapy to me. It has helped me get through the rough times in my life.

I am always thankful to the conference “New Stars of Data” for giving me the opportunity and the platform to speak for the first time as a speaker. My journey as a speaker began there.

I have presented at many events since. Dataweekender was the second conference that gave me the opportunity to speak. It is one of my favorite conferences as well.

As I spoke more, I gained part of my confidence back. It is a continuous process. I took the step and decided to speak at the lowest point in my life. I can sincerely say that New Stars of Data was a life-changing event for me.

To anyone who wants to become a speaker but whose lack of confidence is stopping them from taking their first steps: please rethink. Confidence only comes with practice.

I can say that very sincerely because I spoke at several events afterwards precisely to build my confidence. The more I presented, the more comfortable I became. Though I have only presented virtually to date, I will carry the same confidence I gained when I present in person.

When I think about in-person events, I need to write a bit about the PASS Data Community Summit 2022. I am honored to be selected as a speaker at this year's Summit, and I chose to present in person.

I am very nervous about presenting in person for the first time ever, and at the biggest data platform summit in North America at that. I believe nervousness is a good thing when you are presenting, because it forces you to prepare more and keeps you laser focused.

As New Stars of Data gave me a life as a speaker, PASS Data Community Summit 2022 will definitely take me to the next level as a speaker, as I will be presenting in person for the first time.

Attending and presenting at conferences like SQLBits was one of the best experiences. Attending events like EightKB is a great opportunity to learn about SQL Server internals.

I have attended other conferences as well and learned from the speakers there.

During my journey as a speaker, I have met new people along the way, and a few of them became good friends for life!

I cherish all the moments…

I am very thankful to each and every organizer of the conferences and user groups, and to every person who helped me along the way, for giving me the opportunity and supporting me at each step of my journey as a speaker.

I would love to read other SQL family members' posts on this topic and learn how attending or speaking at different conferences changed their lives.

Thank you for reading!

T-SQL Tuesday #152 – Rants

Thanks to Deborah for hosting the July edition of T-SQL Tuesday!

I would like to share a rant about a situation I faced working on database server migrations a couple of years ago.

In the past few years, I have done several database and server migrations. As you know, there are many things that need to be taken care of during the testing phase, right up until we do the actual migration.

It is not simply the database that we migrate; all the related objects need to be migrated as well: the Agent jobs, SSIS packages, and linked servers, and, most importantly, the data source connections need updating, not only in the database-related objects but also in the related applications.

If you are using the server name and database name in the connection strings of all the applications connecting to the database being migrated, this process gets tough and tedious, because every connection string must be updated once the database is migrated to a different server. It is always advised to use a database DNS alias in the connection strings instead of the IP address or the server name.

This applies to the report server data sources as well.

I know it is hard to make the change but if the change is good, it is worth the effort.

During several database server migrations, I have seen developers who feel more comfortable changing their application code to the new server name instead of taking the advice to use a database DNS alias. I know the amount of work that goes into finding all the objects referencing the server names. Missing any one of those locations can absolutely break the code and create a real mess.

I have experienced many cases where SSIS packages were not updated or linked server connections failed after the databases were migrated. This problem can be solved simply by using a database DNS alias instead of the server name: once you migrate the database, you only update the DNS entry to point to the new server environment.

I know many companies already follow the standard of using database DNS aliases, but there are still others out there who don't want to make the change and instead fix the problem when applications break.

I believe it is important not only to fix a broken application somehow, but to consider how effectively and simply you can avoid breaking the applications in the first place.

I am very interested in reading other SQL family members' posts on this topic!

Thanks for reading!

Quick Track: Beginner’s Guide to Azure SQL- Learn to deploy

Okay, so you would like to deploy your databases in Azure! Great. Your resource limits are based on the deployment you choose: Managed Instance and Azure SQL Database resource limits depend on the service tier, the compute tier, and the number of vCores you choose.

Before migrating databases to Azure, it is crucial to know which deployment model (IaaS/PaaS) and deployment method (Azure portal, Azure CLI, PowerShell, SSMS) you will use, the region you wish to deploy to, the service tier, compute tier, purchasing model, hardware, and the size of your databases (Source).

The resource limits you get depend directly on the deployment option and the service tier you choose: compute resource limits like memory, storage limits like the max size of your database and log files, how long backups are retained, IOPS, tempdb size limits, etc. (Source).

There are a couple of things we need to decide while deploying an Azure SQL database: the region where you would like to deploy, which resource group the database should go under, the logical server name used to connect to Azure SQL from SSMS, the SQL admin login credentials, the purchasing model, and the service tier.

Creating the Azure SQL database through the portal is the basic method of creating a single database. Remember, you cannot restore a native backup into an Azure SQL database; you restore using a BACPAC file instead. In Managed Instance, you can natively restore a database from a URL.
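As a sketch of the Managed Instance path (the storage account, container, and database names are placeholders, and a credential with a SAS token for the container must exist first):

RESTORE DATABASE [SalesDb]
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDb.bak';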

Any resource that you create requires the subscription, resource group, database name, logical server name, and credentials.

While configuring, you will have an option to choose whether the database should be created as part of an elastic pool (databases in the pool share their compute resources).

When you click on “Configure compute and storage”, you can choose the service and compute tier.

If you choose General Purpose, you will have the compute tier options “Provisioned” and “Serverless”.

Choose the hardware configuration.

Scale up or down to adjust the number of vCores and data max size.

Backup storage redundancy has three options. To find the differences among them, see here.

Networking tab:

Network connectivity:

No access: no one will have access

Public endpoint: quick to connect, but not secure.

Private endpoint: the most secure way to connect, by creating a private IP address to connect to from your virtual network.

Firewall rules:

Allow Azure services and resources to access this server: any resource in Azure can connect to your server. Not a secure option; try to avoid choosing it.

Add current client IP address: enabling this adds your client IP address so it can connect to the server.

Under the Security tab, you will see Azure Defender for SQL. This is not a free service; it costs $15 per server per month.

In the additional settings, choose how you want the database to be created: “None” creates just the empty database with no objects, “Backup” creates the database from a backup file, and “Sample” creates a sample AdventureWorks database. Select the collation while deploying the database; for an Azure SQL database, you cannot change the collation later.
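Because the collation cannot be changed later, it is worth setting it explicitly when the default is not what you need. A T-SQL sketch, run against the logical server's master database (names are placeholders):

CREATE DATABASE [SalesDb]
COLLATE SQL_Latin1_General_CP1_CS_AS;  -- fixed for the life of the database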

Choose the maintenance window from the options provided.

Under Tags, give the resource a tag name and a value. This will help you identify your resources later.

Click on Review + create, then click Create at the bottom of the window. This will create a new Azure SQL database.

After the deployment is complete, you will be able to copy the server name, open SSMS, and connect to the server using the admin login and password.

Connect to the server using the logical server admin credentials.

Things to remember:

  1. You can choose the size and the number of database files for a Managed Instance. You cannot choose the storage hardware in MI or Azure SQL Database. For Azure SQL Database, you cannot choose the number of database files.
  2. The T-log file for an Azure SQL database is 30% of the data max size.
  3. The compatibility level can be set for both MI and Azure SQL Database.
  4. Query Store is turned on for all Azure SQL databases.
  5. The recovery model for Azure SQL databases is always FULL.
  6. You cannot stop and start a Managed Instance or an Azure SQL database.
  7. Proxy and redirect are the two ways we can connect to an Azure SQL database. A proxy connection is like a friend who visits your apartment and always needs to check in with the security guard (the gateway here) before reaching your apartment (the database). This is the default setting for connections from outside Azure.
  8. A redirect connection is like your friend getting an additional key as an authorized person for the apartment by connecting to the security guard (the gateway) once; from that point on, connections can reach the apartment (the database) directly, with no additional hops in between. Redirect is the default connection type within Azure. To use redirect connections, ports 11000–11999 need to be opened.

We are going to learn about the fundamentals of security in Azure in the next blog post. Thanks for reading!

References:

  1. Learn Live – Azure SQL Fundamentals: Deploy and Configure Servers, Instances, and Databases (Ep.2)
  2. Deploy and configure servers, instances, and databases for Azure SQL

T-SQL Tuesday 151: T-SQL Coding Standards I wish I knew Earlier!

Thanks to Mala for hosting this month's T-SQL Tuesday! Mala wanted us to write about some of our T-SQL coding standards.

Some of the coding standards I mention here are obvious, but I know how many times I have skipped even the obvious, simple ones. I believe every bit of the simple things we do every day matters!

Here are a few quick but effective coding standards I wish I knew earlier:

1. Get your T-SQL code formatted faster than before

We all love code that we and others can read easily later, but I know formatting is time consuming, especially when you are trying to focus on writing effective code that runs fast. My mindset while typing code used to be: throw the formatting out of the window, let me just type this and get the results I need. I know many of you have the same mindset. If you are just typing a few lines of code, that's totally fine, but building a huge stored procedure with complex logic needs formatting, not only to make it easier for others to read later but to help yourself along the way. Formatting doesn't have to be complex with free online tools like poorsql. This tool was introduced to me by one of my favorite SQL community champions, Randolph West. Thank you, my friend! You saved me many hours. There are also other similar free tools like sqlformat.

2. Error and Transaction Handling: Saved many Jobs out there

Don’t wait until you break something to figure out that there was a way to avoid it in the first place. This was me at the beginning of my career; I learnt it the hard way. These were my facial expressions at my computer as I wrote something that only looked like code.


From handling and capturing errors to handling transactions the right way, knowing how to deal with errors and transactions is crucial when troubleshooting and modifying data. Learn about error handling here and transaction handling here.

3. Are you inserting a large number of rows as a scheduled SQL Agent job? This is for you

This one is simple but effective. Add an UPDATE STATISTICS step as the next step in the Agent job, for the tables you just imported the data into. With updated stats, the optimizer can choose the right operators in your execution plans.
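The extra job step can be as small as this (the staging table name is a placeholder; FULLSCAN is optional and costs more on very large tables):

-- Job step 2, right after the import step
UPDATE STATISTICS dbo.SalesStaging WITH FULLSCAN;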

4. Start using SQLCMD mode, you will thank me later

Did you ever face a situation where you accidentally deleted rows in production instead of deleting them in the development environment? If yes, then keep on reading: start using SQLCMD mode. After you enable this mode, begin your query with:

:CONNECT ServerName
USE DatabaseName
GO
-- Your query goes here

Make it a coding standard; it will help you remember where you are running the code, no matter what your query editor is connected to. If you need to run the same code on a different server, you just change the server name on the first line. In the beginning, I felt adding this line at the top of the code was kind of a pain, but once I got used to it, it became one of my best standards. As I execute any code these days, my eyes automatically roll up to the first line, looking for the server name in the :CONNECT statement. This becomes even more helpful when you want to document the code you run in different environments (DEV/TEST/QA/PROD). Learn about SQLCMD here.

5. Consolidate the number of indexes: you could have duplicates

If you regularly implement indexes from the recommendations shown on execution plans, make sure to check whether you already have an index with all the columns mentioned in the recommendation, except perhaps one column missing from the existing index. Try to fit that additional column into the existing index if necessary, but do not just go ahead and run the recommended index script as-is; you may already have a near-identical index. Regularly check for duplicate indexes, as they can cost you a lot of resources to maintain.
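A minimal sketch for spotting likely duplicates: list every index with its key columns in order, then compare indexes on the same table whose key lists start the same way (STRING_AGG needs SQL Server 2017 or later):

SELECT t.name AS table_name,
       i.name AS index_name,
       STRING_AGG(c.name, ', ') WITHIN GROUP (ORDER BY ic.key_ordinal) AS key_columns
FROM sys.indexes i
JOIN sys.tables t         ON t.object_id = i.object_id
JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns c        ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.type > 0                  -- skip heaps
  AND ic.is_included_column = 0   -- key columns only
GROUP BY t.name, i.name
ORDER BY t.name, key_columns;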

6. Check the datatypes of the variables used in your queries

The datatypes of the variables you use in your WHERE clauses should match the datatypes of the columns you are referencing in your tables. If they are not the same, the optimizer has to go through an implicit conversion as an additional step. This can hurt performance and may prevent the use of the right indexes. The query can take more resources to execute, and it may be harder to find out later why the query is running slow.
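A small sketch with a hypothetical dbo.Customers table whose CustomerCode column is VARCHAR(20): the NVARCHAR variable forces a CONVERT_IMPLICIT on the column and can turn an index seek into a scan, while the matching VARCHAR variable does not:

DECLARE @Code NVARCHAR(20) = N'C1001';
SELECT CustomerID FROM dbo.Customers
WHERE CustomerCode = @Code;     -- implicit conversion on the column: likely a scan

DECLARE @CodeOk VARCHAR(20) = 'C1001';
SELECT CustomerID FROM dbo.Customers
WHERE CustomerCode = @CodeOk;   -- datatypes match: an index seek is possible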

These are some of the T-SQL coding standards I use on a regular basis. I am looking forward to reading other SQL family members' T-SQL Tuesday posts and learning from their T-SQL standards!

Thanks for reading!

Quick Track: Beginner’s Guide to Azure SQL


As I started learning Azure a couple of months ago, I was confused in the beginning about where to start, and I was overwhelmed looking at the amount of information I not only had to learn but also remember.

You can treat this series as my consolidated learnings put together in a simple way: a quick guide to understanding the concepts if you are a beginner, or to revising the concepts you already learned to make your foundations stronger.

Let’s start with a “why” question: why should you even start learning Azure, or any cloud platform?

Azure is one of the cloud platforms available in the market. It provides services you can use to host your applications and build your databases, virtual machines, storage, analytics, and many other services without having to buy or manage any hardware. As Azure supports both infrastructure as a service and platform as a service, you can build, configure, and run your databases and related services quickly and with ease, deploying through the Azure portal, the command-line interface, or PowerShell.

Deployment options

Source: Microsoft

IaaS: Infrastructure as a Service

  • SQL Server on Azure VM
    • SQL Server hosted on Azure infrastructure
    • Automated backups and patching
    • Extended security updates for older versions of SQL Server (e.g., SQL Server 2008)
    • You have access to the operating system and can configure instance-level features of SQL Server
    • Easy install through the Azure portal using images (various version combinations of Windows and SQL Server packaged together)

PaaS: Platform as a Service

  • SQL Managed Instance
    • The OS is taken care of by Microsoft
    • You will not have access to the OS
    • You can use and configure all instance-level features of SQL Server
    • Useful for lift-and-shift scenarios, where applications don’t have to change their code in order to move to this service
  • Azure SQL Database
    • You just get the database
    • Database-level features can be configured
    • OS and instance capabilities are taken care of by Microsoft
    • Very useful for modern cloud applications
    • Very quick to deploy

For the full feature comparison between Azure SQL Database and SQL Managed Instance, read the Microsoft doc here.

Unlike SQL Server on Azure VM, Azure SQL Database and SQL Managed Instance are versionless: the databases and OS are upgraded and patched behind the scenes by Microsoft.

Purchasing models:

There are two purchasing models, DTU and vCore.

In the DTU (Database Transaction Unit) model, we choose the number of DTUs, and the resources (compute and storage) come bundled together. The vCore model has the advantage of the Azure Hybrid Benefit, and of reserved capacity if you can pay for the resources ahead of time, up to three years. With the vCore model, it is also easier to translate on-prem workloads to Azure, as you can independently choose compute, storage, and other resources like memory.

| DTU | vCore |
| --- | --- |
| Bundled together | Individual compute and storage |
| Easy and simple | Customize and control |
| Not flexible | Flexible |
| Azure Hybrid Benefit – No | Azure Hybrid Benefit – Yes |
| Available for Azure SQL Database only | Available for MI and Azure SQL Database |

The cost of the vCore model depends on the service tier, the compute, storage, and hardware you choose, and the backup storage you choose.

Note: you can convert from the DTU model to the vCore model. Learn more here.

Service Tiers

We have three service tiers for Azure SQL Database and Managed Instance: General Purpose, Business Critical, and Hyperscale. Under the General Purpose tier specifically, we have two compute tiers; we will learn about those in a coming section. For the main differences among the service tiers, see the table below.

| Service Tier | General Purpose | Business Critical | Hyperscale |
| --- | --- | --- | --- |
| Purpose | General workloads | Low-latency workloads | OLTP, large databases |
| Compute | 80 vCores | 80 vCores | 80 vCores |
| Storage type | Remote, 5 GB – 4 TB | Local SSD, 5 GB – 4 TB | Local SSD cache, up to 100 TB |
| Backups | 1 to 35 days (7 as default) | 1 to 35 days (7 as default) | 1 to 35 days (7 as default) |
| Availability | 1 replica, no Read Scale-out, zone-redundant HA (preview), no local cache | 3 replicas, 1 Read Scale-out, zone-redundant HA, full local storage | Multiple replicas, up to 4 Read Scale-out, zone-redundant HA (preview), partial local cache |
| In-memory | Not supported | Supported | Not supported |
| IOPS | 500 per vCore, max 7,000 | 5,000, max 200,000 | Multi-tiered architecture; effective IOPS depend on the workload |

Compute Tiers

Under the General Purpose service tier with the vCore model specifically, we have two compute tiers: Provisioned and Serverless.

| Provisioned | Serverless |
| --- | --- |
| Predictable workloads | Unpredictable workloads |
| Manual scaling | Automatic scaling |
| No pause | Can pause the database |
| Pay for the fixed resources you choose | Pay only while the database is in use |
| Per-hour billing | Per-second billing |

For example, you will need to choose the provisioned compute tier if your database has users connected most of the time, with steady usage of compute resources and predictable traffic. On the other hand, if you have unpredictable, intermittent workloads where you cannot anticipate when users connect to the database, you can use the serverless compute tier.

With provisioned, you choose the max amount of resources ahead of time, you scale manually, and you pay even if you are not using the resources in full. With serverless, scaling is automatic: you choose the min and max compute resources, and the automatic scaling happens only between those min and max limits.

The extra advantage of serverless is that the database can pause after a period of inactivity that you configure, when users are not connected to the database. Once the database goes into pause mode, you pay only for the storage and not for the compute. If you use the serverless option but do not enable auto-pause, then you are charged for the min amount of resources when users are not connected, and billed per second for the resources you use during activity. (Source)

For the provisioned compute tier, billing is on a per-hour basis, whereas for the serverless compute tier, billing is per second.

Summary:

In this quick track of Azure SQL basics, we went through an overview of the deployment options, purchasing models, service tiers, and compute tiers, and how they differ from each other. In the next blog posts, we are going to learn more about what Azure SQL has to offer.

Thanks for reading!

References:

  1. https://docs.microsoft.com/en-us/azure/azure-sql/database/purchasing-models?view=azuresql
  2. https://docs.microsoft.com/en-us/learn/modules/azure-sql-intro/
  3. https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tier-hyperscale
  4. https://docs.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview
  5. https://www.youtube.com/watch?v=wcRb5RHriUc

T-SQL Tuesday #150: My First “Why am I even here?” Job

Thank you, Kenneth Fisher, for hosting the May edition of T-SQL Tuesday. Kenneth wanted us to write about our first technical job.

As I mentioned in my previous T-SQL Tuesday post here, I was a pharmacy graduate who loved technology. When I decided to change my career into IT, I applied for a master's degree in computer technology and finally made it to graduation.

After I completed my master's degree, I felt like I could easily do any job related to computers. I was wrong! I didn't know at the time that “degrees don't matter.”

I had my initial struggles finding a DBA job, as everyone was looking for experienced candidates who fit their requirements. I attended many interviews and failed. Then one day, I got an offer for a DBA position at a company (I am not naming it here) after interviewing for a junior DBA role. I was on cloud nine.

Everything is cool, right? No, not as I thought it would be.

On my first day at the company, they gave me an ID card with my happy face on it, and under it, my job title: “.NET Developer”.

Wait, what?? A developer?

Well, I was in the same shock as you and didn't really know what to say. I tried contacting management to find out why I was placed in a completely different position when they had interviewed me for a DBA role. I did not receive any useful information from them. Now you know why I am not naming the company 🙂

I was angry in the beginning, but thought I could learn and survive. You may ask: how can you survive when you know nothing about that technology? Remember, it was my first tech job ever. I was scared and worried about whether I would get another job soon if I resigned. I tried understanding and learning the code, but it was very hard to catch up. I knew that job was not for me, and I always wanted a job as a SQL DBA.

I am not an accidental DBA but a planned one who ended up in an accidental developer position 😛

I kept looking for a DBA job while I was still working in this developer position, and fortunately, it did not take long before I was placed as a database administrator at another company.

Since then, I have been a SQL DBA who loves working with SQL Server and databases.

As a woman in tech, I have faced my share of challenges in my career, but I never thought of moving my career away from IT. It is what I always loved, and that was the main reason I changed my career from being a pharmacist to a SQL DBA.

From the core of my heart, I have always believed “everything happens for a reason.” This mindset helped me get through those initial challenges I faced moving my career into IT.

All my past experiences have taught me and helped me to be where I am today as a woman in tech.

If I turn back and see, I know I made the right decision 🙂

Image Source: https://in.pinterest.com/pin/514817801127582057/

Thanks so much for reading!

T-SQL Tuesday #149: Best SQL advice I would give to my younger self!

Thank you, camilahenrique, for hosting the April edition of T-SQL Tuesday. You can find the invitation here.

I give a big thank-you to my younger self for changing my career from pharmacist to SQL DBA. I have loved computers since my childhood. You may ask: why did you choose to do pharmacy, then? It's a long story! I was just an innocent kid fulfilling my father's dreams. By the time I realized pharmacy was not for me, I had already graduated as a pharmacist. I came to the United States to finally pursue my dreams and change my career. I joined as a master's student in computer technology at one of the great universities in the United States. That was a major life-changing moment for me, since I did not have much computer knowledge before. One of my core subjects was SQL. That's when I was introduced to SQL and SQL Server. I can tell you with my whole heart that at that moment I knew the decision I had made was right. Though it was very tough in the beginning to catch up with the speed of the other students, I learned the T-SQL basics, and from there my computer journey started. I was stressed out in many situations and started slow. I thought I would never make it to graduation. No matter how much I learned, I made mistakes and was never perfect. I spent late hours learning the basics so I would at least pass the exams. It was a roller-coaster ride through my master's degree, but I finally made it to graduation 🙂

If I could give one piece of advice to my younger self, I would say: learning is a continuous process, and we are all perfectly imperfect. Failure happens only when you stop being willing to learn. There will never be a perfect moment to start learning anything. No matter what you learned in the past or what degree you pursued, that doesn't have to define your future goals. Anyone can learn anything if they have the heart and the will to work hard at it.

Learning the basics of T-SQL is really important before moving on to complex T-SQL coding skills. I would like to share my favorite YouTube channel, which helped me when I started my T-SQL journey: if you are starting out with T-SQL, follow this series from Kudvenkat.

I am looking forward to reading all the posts for this T-SQL Tuesday and learning from the SQL family!

Thanks for reading!