
How to Dynamically Scale an Azure SQL Database to Get the Power You Need

Jared Kuehn  |  
Aug 24, 2017
 
We have had opportunities to work on Azure solutions on various occasions, including web applications, data warehousing scenarios, and IoT. Quite often these solutions need a database to store information, and in that case there may be an Azure SQL Database in the architecture.

As of now, the cost of running an Azure SQL Database is based on the number of Database Transaction Units (DTUs) allocated to the database. When determining how many DTUs to allocate for the solution, a major contributing factor is identifying what processing power is needed to handle the expected volume of requests.
 
At a baseline, you would likely plan your architecture around the highest expected utilization, giving your users the best performance possible. But more than likely, your solution will not always be utilized at that level. At times, there may be little to no traffic, leaving you paying for resources that aren't being used. It is challenging to find a single level of resources that is both cost-effective and fulfills all of your requirements.

 
For Azure SQL Databases, there is a solution to this utilization problem. Azure is intended to let organizations scale their architecture up or down as their needs change. Scaling up will cost more, but gives you more resources when you need them most. Scaling down at non-peak times can reduce your cost by reducing the available resources when you don't need as much.

You can scale between the performance levels within an edition, and you can also scale into other editions of Azure SQL Database. This can be done manually, which may make sense when you expect to make a significant adjustment and leave it in place for an extended period of time. The process can also be automated, giving you the opportunity to routinely match your solution's resources to its needs at any given time.
 
There are multiple approaches to programmatically scaling your Azure SQL Database, including T-SQL statements and PowerShell. To automate this process, you will need to use PowerShell for at least part of your solution. As of this writing, Azure SQL Database does not have a built-in equivalent of the on-premises SQL Server Agent, so a pure T-SQL solution is not yet possible. Instead, you can utilize Azure Automation, creating Runbooks that run your scaling code at specific dates and times.
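As a rough sketch of what such a scaling call looks like in PowerShell (the subscription ID, resource group, server, and database names below are placeholders; the AzureRM cmdlets shown are those current as of this writing):

```powershell
# Sign in and select the subscription that owns the database.
# In an Azure Automation Runbook you would authenticate with the
# Automation account's Run As connection instead of an interactive login.
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId "00000000-0000-0000-0000-000000000000"

# Scale the database up to the Standard S3 performance level (100 DTUs).
Set-AzureRmSqlDatabase `
    -ResourceGroupName "MyResourceGroup" `
    -ServerName "myserver" `
    -DatabaseName "MyAppDb" `
    -Edition "Standard" `
    -RequestedServiceObjectiveName "S3"
```

The rough T-SQL equivalent, run against the logical server, is `ALTER DATABASE [MyAppDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');` but, as noted above, T-SQL alone cannot schedule itself.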

When considering if dynamically scaling your database will benefit you, here are some things to think about:
 

Not all DTUs are created equal

Like most Azure services, Azure SQL Database is offered in multiple editions: Basic, Standard, Premium, and Premium RS. As you move up editions, you gain DTUs for your database. What you also gain is access to more efficient DTUs, especially when moving from Standard to Premium. Premium DTUs are built on a more recent version of the DTU structure and perform more efficiently than Standard DTUs. So aside from increasing the number of DTUs your environment has, note that you may get additional benefit from the more efficient DTUs at higher editions.
 

Scaling your database up/down isn’t as fast as it sounds

Running the statement to upgrade or downgrade your database takes a matter of seconds; the actual process of scaling your database takes much longer. Regardless of whether you are scaling up or down, it will take a few minutes at the very least, with the time increasing relative to the size of your database.
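Because the request returns almost immediately while the scale operation continues in the background, an automation script may want to poll until the new performance level is actually active. A minimal sketch, using the same placeholder names as before:

```powershell
# Poll until the database reports the requested service objective.
$target = "S3"
do {
    Start-Sleep -Seconds 30
    $db = Get-AzureRmSqlDatabase `
        -ResourceGroupName "MyResourceGroup" `
        -ServerName "myserver" `
        -DatabaseName "MyAppDb"
    Write-Output "Current objective: $($db.CurrentServiceObjectiveName)"
} while ($db.CurrentServiceObjectiveName -ne $target)
```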
 

There will be a small interruption of database services when the scaling is complete

Near the end of the scaling process, there is a span of a few seconds where database connectivity is lost; active transactions are stopped and report a failure. Be careful to perform the scaling only when you aren't running important processes against your database.
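If something must run near a scaling window, simple retry logic can ride out the brief disconnect. A sketch using `Invoke-Sqlcmd` from the SqlServer PowerShell module (the connection details, credentials, and query are placeholders):

```powershell
# Retry a query a few times to tolerate the momentary connection drop
# that occurs near the end of a scale operation.
$attempts = 0
while ($true) {
    try {
        $attempts++
        Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" `
            -Database "MyAppDb" `
            -Username "appuser" -Password "<password>" `
            -Query "SELECT COUNT(*) AS OrderCount FROM dbo.Orders"
        break   # success: stop retrying
    } catch {
        if ($attempts -ge 5) { throw }   # give up after 5 tries
        Start-Sleep -Seconds 15
    }
}
```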
 

Consider the size of your database if you attempt to scale your database into another edition

Each Azure SQL Database edition has a limit on the maximum allowed size of a database, and that maximum size can be set independently for each database. While your focus in scaling will likely be on the number of DTUs, you can also scale the maximum size of your database. Even if you do not wish to change the size limit, I still recommend specifying in your code what you want your database's maximum size to be. Functionally it changes nothing, but it prevents your code from breaking. If you don't specify this value, the scaling process will set the database to the maximum size available for the target edition. This isn't a problem when scaling up an edition, as the allowed database size naturally increases. The problem is when you want to scale down an edition. If the maximum size of your database is larger than what your target edition allows, and you do not specify an acceptable size value for that edition, your code will fail.
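Continuing the earlier sketch, this advice amounts to always passing an explicit maximum size, for example via the `-MaxSizeBytes` parameter of `Set-AzureRmSqlDatabase` (names and the size value are illustrative; check the current limits for your target edition):

```powershell
# Scale down from Premium to Standard S1, explicitly capping the
# database at 250 GB so the request is not rejected for exceeding
# the target edition's size limit.
Set-AzureRmSqlDatabase `
    -ResourceGroupName "MyResourceGroup" `
    -ServerName "myserver" `
    -DatabaseName "MyAppDb" `
    -Edition "Standard" `
    -RequestedServiceObjectiveName "S1" `
    -MaxSizeBytes 268435456000   # 250 GB
```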

By scaling up/down your Azure SQL DB and other Azure resources, you can save money while still getting the performance you need for your solution to succeed. Incorporating a dynamic scaling plan into your architecture will give you the best of both worlds.
 