Hyperscale is the new tier of Azure SQL, although with Hyperscale it’s becoming increasingly clear that Azure SQL is not a SKU of SQL Server so much as its own product. Hyperscale is the logical progression of Microsoft’s attempt to make SQL Server in the cloud as hands-off as possible. That’s a good thing, but as Microsoft automates more and more of the SQL Server management best practices, it gets harder to determine exactly what you’re supposed to do to accomplish common DBA tasks. And the battle-tested methods used for SQL Server at worst won’t work, and at best take hours or days longer than the largely undocumented equivalent Azure SQL operation.

For instance, we had a very large Azure SQL database - 2 TB - that needed to be moved in a couple of contexts:

  • into an Azure VNet for network security
  • into a different Azure subscription under a different tenant due to a merger

The first is simpler than it sounds, because Azure SQL is a managed service at the network level. You can move an existing Azure SQL database (of any tier) into a private network by creating a private endpoint for the hosting Azure SQL Server, and then turning off public access to the server. Boom - done.
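The two steps above can be sketched with the Azure CLI. This is a minimal sketch, not our exact commands; the resource group, server, VNet, and subnet names are all placeholders.

```shell
#!/bin/sh
# Sketch: move an Azure SQL logical server behind a private endpoint,
# then shut off public access. All resource names below are placeholders.

RG="rg-prod"
SQL_SERVER="sql-prod"

# Look up the resource ID of the logical server
SQL_ID=$(az sql server show \
  --resource-group "$RG" \
  --name "$SQL_SERVER" \
  --query id --output tsv)

# Create a private endpoint for the server in an existing VNet/subnet
az network private-endpoint create \
  --resource-group "$RG" \
  --name "pe-$SQL_SERVER" \
  --vnet-name "vnet-prod" \
  --subnet "snet-data" \
  --private-connection-resource-id "$SQL_ID" \
  --group-id sqlServer \
  --connection-name "pe-conn-$SQL_SERVER"

# Turn off public network access to the server
az sql server update \
  --resource-group "$RG" \
  --name "$SQL_SERVER" \
  --enable-public-network false
```

You’ll typically also want a private DNS zone (`privatelink.database.windows.net`) linked to the VNet so clients resolve the server name to the private IP, but that’s standard Private Link plumbing.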

In our case we also wanted to rename the server and database due to a new naming convention. Now, the only way to truly do that is with either a Copy operation in the Azure Portal or an Export to a BACPAC file and a re-import to a new Azure SQL server. The first is limited to databases under 200 GB and the second entails significant downtime, so pick your poison.

We decided that we could live without renaming the database itself as long as the server and the tags followed the naming convention. That made the procedure a matter of literally minutes: create a geo-replica of the primary database on a new Azure SQL server, then break the replication and turn off public network access on the replica (which was at that point a read-write standalone).
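The replicate-and-break procedure can be sketched with the Azure CLI. Again, a sketch with placeholder names for the resource groups, servers, and database, not our exact commands.

```shell
#!/bin/sh
# Sketch: "rename" a server by geo-replicating the database to a new server,
# then breaking the link. All resource names are placeholders.

OLD_RG="rg-old";  OLD_SERVER="sql-old"
NEW_RG="rg-new";  NEW_SERVER="sql-new"
DB="MyDatabase"   # geo-replicas must keep the primary's database name

# 1. Create a geo-replica of the primary on the new server
az sql db replica create \
  --resource-group "$OLD_RG" \
  --server "$OLD_SERVER" \
  --name "$DB" \
  --partner-resource-group "$NEW_RG" \
  --partner-server "$NEW_SERVER"

# 2. Break replication; the replica becomes a standalone read-write database
az sql db replica delete-link \
  --resource-group "$NEW_RG" \
  --server "$NEW_SERVER" \
  --name "$DB" \
  --partner-resource-group "$OLD_RG" \
  --partner-server "$OLD_SERVER"

# 3. Turn off public network access on the new server
az sql server update \
  --resource-group "$NEW_RG" \
  --name "$NEW_SERVER" \
  --enable-public-network false
```

Step 2 is run from the replica’s side, which promotes it rather than just dropping it; the old primary can then be decommissioned on its own schedule.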

Total time: fifteen minutes.

Azure Hyperscale databases do replication at the storage level, with page blob snapshots. This makes the replication lightning fast and independent of the size of the database. The limitations are that the replica database has to have the same name as the primary, and you can only have one geo-replica. That last one can be a bit of an issue if you already have a geo-replica for DR purposes, but given how fast they are to create, you could simply delete the geo-replica and recreate it as needed.

Creating a geo-replica across subscriptions and tenants is a more complicated matter and can only be done via T-SQL commands. The official Microsoft documentation covers this, though how long it takes is dependent on the size of the primary database. Once the replica has been created, terminating the replication creates a read-write standalone database in the target subscription and tenant. Of note is that to set up the replication, both primary and secondary have to be publicly accessible; once the replication is established they can be put back in their private networks, since ongoing replication doesn’t require public access (so says the documentation; I haven’t tested this).
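The documented T-SQL flow looks roughly like the following. This is a sketch of the steps, not a script to run blindly: the login name, password, SID, server, and database names are all placeholders, and each batch is run against a different server/database as noted in the comments.

```sql
-- On the TARGET server, in master: create a setup login and let it create databases
CREATE LOGIN geodrsetup WITH PASSWORD = '<strong-password>';
CREATE USER geodrsetup FOR LOGIN geodrsetup;
ALTER ROLE dbmanager ADD MEMBER geodrsetup;

-- On the PRIMARY server, in master: create the same login with the same SID
-- (read the SID from sys.sql_logins on the target server; 0x0106... is a placeholder)
CREATE LOGIN geodrsetup WITH PASSWORD = '<strong-password>', SID = 0x0106...;
CREATE USER geodrsetup FOR LOGIN geodrsetup;

-- In the PRIMARY database itself: grant the setup user ownership rights
CREATE USER geodrsetup FOR LOGIN geodrsetup;
ALTER ROLE db_owner ADD MEMBER geodrsetup;

-- Connected to the primary's master as geodrsetup: start seeding the geo-replica
ALTER DATABASE [MyDatabase] ADD SECONDARY ON SERVER [sql-target];

-- Once seeding completes: terminate the link, leaving the target read-write
ALTER DATABASE [MyDatabase] REMOVE SECONDARY ON SERVER [sql-target];
```

The matching-SID dance is what lets the same login be recognized on both servers across tenant boundaries; both servers’ firewalls also need to allow the machine you run this from.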