Accessing Resources via Private Endpoint in Azure Hub-and-Spoke Virtual Network with Basic SKU VPN Gateway

In this blog post, we’ll be:

  • configuring a virtual network topology in Azure in the “hub and spoke model”
  • deploying an example resource (a Key Vault) in our spoke network
    • restricting access to the Key Vault using a private endpoint connection so that it is only accessible inside the vnet
  • configuring a DNS forwarder running Debian + Unbound in the hub network for resolving the private DNS name of the Key Vault
  • configuring a Basic SKU Virtual Network Gateway
  • configuring a Windows client to connect to the Basic VPN Gateway in a point-to-site configuration so it has access to the Key Vault through the private endpoint (a quick verification check is sketched after the diagram below)

Diagram of the architecture: a Key Vault, KVNEHubAndSpokeTest, sits at the left, connected to a virtual network vnetHSTestDev (172.16.1.0/24). This is peered with vnetHSTestConnectivity (172.16.0.0/24), which contains the private DNS zones, a virtual machine VM-NE-ConnectivityDNS (172.16.0.4), and the Basic SKU virtual network gateway, vpng-HSTestConnectivity. On the right, a VPN client connects through the internet to the VPN gateway.
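Once everything is wired up, a quick sanity check from the VPN-connected client is to confirm that the Key Vault’s hostname resolves to a private address inside the spoke network rather than a public one. A minimal sketch in Python (the vault name matches this post’s example; substitute your own):

```python
import socket

# Over the point-to-site VPN, the Key Vault's public DNS name should
# resolve (via the Unbound forwarder) to the private endpoint's IP
# inside vnetHSTestDev (172.16.1.0/24), not to a public address.
addr = socket.gethostbyname("kvnehubandspoketest.vault.azure.net")
print(f"Key Vault resolved to {addr}")
assert addr.startswith("172.16.1."), f"Resolved to a public address: {addr}"
```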

Why?

A hub and spoke network with private endpoints for restricting access to various Azure PaaS resources is a fairly common architecture, but a few parts of it carry unnecessary costs: namely the managed Azure DNS Private Resolver and the non-Basic SKUs of the Virtual Network Gateway, such as VpnGw1.

The primary purpose of this post is to document how I’ve achieved this architecture using the Virtual Network Gateway Basic SKU, which saves ~£80/month over the VpnGw1 SKU. It also avoids the cost of the managed Azure DNS Private Resolver by using a lightweight VM as a DNS forwarder instead.

Create hub network

We’ll start by creating our “hub” network, called vnetHSTestConnectivity in my case.

Create virtual network screen. The virtual network name is vnetHSTestConnectivity

We’ll be using the 172.16.0.0/24 range for this network.
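If you’d rather script this step, here’s a minimal sketch using the azure-mgmt-network Python SDK. The subscription ID, resource group, location, and subnet layout are assumptions for illustration; note that the gateway we deploy later requires a subnet named exactly GatewaySubnet:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder subscription; authenticate however suits your environment
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.virtual_networks.begin_create_or_update(
    "rg-HSTest",  # hypothetical resource group
    "vnetHSTestConnectivity",
    {
        "location": "uksouth",
        "address_space": {"address_prefixes": ["172.16.0.0/24"]},
        "subnets": [
            # Default subnet; the DNS forwarder VM (172.16.0.4) will live here
            {"name": "default", "address_prefix": "172.16.0.0/25"},
            # Must be named exactly "GatewaySubnet" for the VPN gateway
            {"name": "GatewaySubnet", "address_prefix": "172.16.0.128/27"},
        ],
    },
).result()
```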

» Read the rest of this post…

“Could not import package. Warning SQL72012 / Error SQL72014” when importing a .bacpac from a blob

Azure SQL Database’s point-in-time restore and long-term retention are solid backup options, just as you’d expect of a PaaS service!

However, Microsoft’s documentation is abundantly clear that, at the time of writing, there is no support for immutable backups via this method.

"Configure backups as immutable" stated as "not supported" for Azure SQL in a table on Microsoft's documentation site

If you actually need immutable backup storage for Azure SQL Database, you’ll need a different approach.

The Export button within Azure SQL Database can be used to export a .bacpac file. If this is stored in a blob container with a locked immutability policy, you have a copy of your data that will be resilient even to a Global Administrator compromise.

Microsoft Azure Portal -- the Access policy page for a blob container, showing the immutable blob storage options
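As a sketch of how that container could be set up with the azure-mgmt-storage Python SDK rather than the portal blade above (all resource names are hypothetical, and locking is deliberately a separate step):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import ImmutabilityPolicy

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Time-based retention on the container that will hold the .bacpac exports
policy = client.blob_containers.create_or_update_immutability_policy(
    "rg-example", "stexamplebackups", "bacpacs",
    parameters=ImmutabilityPolicy(immutability_period_since_creation_in_days=365),
)

# Locking is irreversible: once locked, the retention period can only be extended
client.blob_containers.lock_immutability_policy(
    "rg-example", "stexamplebackups", "bacpacs", if_match=policy.etag,
)
```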

With regard to .bacpac exports, Microsoft helpfully reminds us that:

BACPACs are not intended to be used for backup and restore operations. Azure automatically creates backups for every user database. For details, see business continuity overview and Automated backups in Azure SQL Database or Automated backups in Azure SQL Managed Instance.

However, that leads me right back to the “immutability is not supported” point regarding the very backups being recommended here. It seems remarkable that “business continuity” is invoked in the context of backups that are very vulnerable in many BCP scenarios, given the world of ransomware we face today (and will face in the future!).

A .bacpac file held in immutable storage can be imported back into a new Azure SQL database to restore it, but it’s important to note Microsoft’s warning:

For an export to be transactionally consistent, you must ensure either that no write activity is occurring during the export, or that you’re exporting from a transactionally consistent copy of your database.

This is indeed critical. A copy can be made simply with the Copy button within Azure SQL Database. Once the copy is complete, press Export on it. You can delete the copied database once the export has finished.
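For reference, here’s a minimal sketch of that copy-then-export flow using the azure-mgmt-sql Python SDK; the server, database, and storage details are all placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import Database, ExportDatabaseDefinition

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# 1. Create a transactionally consistent copy of the live database
source_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-example"
    "/providers/Microsoft.Sql/servers/sql-example/databases/mydb"
)
client.databases.begin_create_or_update(
    "rg-example", "sql-example", "mydb-copy",
    Database(location="uksouth", create_mode="Copy", source_database_id=source_id),
).result()

# 2. Export the idle copy to the immutable container as a .bacpac
client.databases.begin_export(
    "rg-example", "sql-example", "mydb-copy",
    ExportDatabaseDefinition(
        storage_key_type="StorageAccessKey",
        storage_key="<storage-account-key>",
        storage_uri="https://stexamplebackups.blob.core.windows.net/bacpacs/mydb.bacpac",
        administrator_login="sqladmin",
        administrator_login_password="<password>",
    ),
).result()

# 3. Drop the copy once the export has completed
client.databases.begin_delete("rg-example", "sql-example", "mydb-copy").result()
```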

The truncated error message I received (and the reason for this blog post) when trying to import a .bacpac that was not transactionally consistent is as follows:

The ImportExport operation with Request Id failed due to 'Could not import package. Warning SQL72012: The object [data_0] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box. Warning SQL72012: The object [log] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box. Error SQL72014: Framework Mi'.

If you see this, you’ll need to export a copy of the database, as above, so that no transactions are occurring on that database copy for the duration of the export operation.