Let’s take the recommendations from the previous article and see how they can apply to a blatantly synthetic example organization I just made up.

OmniConsumer Products is a conglomerate that has grown through acquisition and has a number of divisions and products that require Azure resources. Each OCP project has unique requirements for Azure resources, but the CTO has decided that for consistency all OCP projects must adhere to a single resource naming convention.

OCP begins with the following projects:

  • The ED-209 Project: uses Azure IoT Central for real-time analytics of combat performance, and Azure SQL and Azure Storage Accounts for long-term storage of metrics
  • The Robocop Project: uses Azure Machine Learning to produce adaptive policing responses to emergency situations
  • The 6000SUX: uses Azure VDI with high-performance graphics cards to design the next-generation family automobile
  • Delta City: uses Azure App Services to host legacy and new APIs behind an Azure API Management gateway to provide municipal services to residents in the city of the future
  • Mediabreak: uses Azure Kubernetes Services, Azure Container Registries and Azure Media Services to provide streaming news as it happens

All of the projects use Azure Storage Accounts and Azure Key Vaults for common infrastructure tasks; many projects also use Windows and/or Linux Azure VMs.

Because OCP’s divisions are notoriously - often violently - competitive, the CTO decides to isolate each project in its own subscription. This makes billing and cost accounting easier, and also removes the need to include a project identifier in any resource names. Even though many Azure resources must have names that are globally unique across all of Azure, when interacting with Azure, cloud engineers must always set the scope to a specific subscription. As a result, resources with similar names in different subscriptions will never appear together without specific effort by an engineer. The CTO considers this acceptable.

Reviewing the list of different resources, the CTO notes that most resource names may be up to 63 characters long and consist of alphanumeric characters and hyphens, with the following exceptions:

  • Storage Accounts and Media Services are limited to 24 alphanumeric characters (no hyphens)
  • App Service Plans are limited to 40 characters
  • Container Registries are limited to 50 alphanumeric characters (no hyphens)
  • Key Vaults are limited to 24 characters
  • Windows VMs are limited to 15 characters

Further, the CTO is mindful of the issues with resource names longer than 32 characters.

The CTO makes the following decisions:

  • The default name template will be {resource type}-{workload}-{environment}-{region}-{instance}
  • Resource names must be <= 32 characters
  • {resource type} and {region} will use the Azure recommended values
  • For global resources, {region} will have a value of globl
  • {instance} will be a 4 character randomly generated alphanumeric string
  • Each project will define for themselves a list of values for {workload} and {environment}
  • {environment} must be no longer than 3 characters
  • All other metadata is to be stored in tags

With a typical 2-4 character resource type abbreviation, a 3-character environment, a region value like eastus, the 4-character instance and four hyphens, this leaves roughly 12 characters for the {workload} field.
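
To make this concrete, here is a minimal Python sketch of the default template, assuming hypothetical helper names (RESOURCE_ABBREVIATIONS, new_instance_id, build_name) and only a handful of the Azure-recommended type abbreviations:

```python
import secrets
import string

# Partial table of the Azure-recommended resource type abbreviations;
# only the types that appear in this article are shown.
RESOURCE_ABBREVIATIONS = {
    "api_management": "apim",
    "app_service": "app",
    "app_service_plan": "asp",
    "key_vault": "kv",
    "resource_group": "rg",
    "storage_account": "st",
}

ALPHANUMERIC = string.ascii_lowercase + string.digits


def new_instance_id(length: int = 4) -> str:
    """Generate the random 4-character alphanumeric {instance} suffix."""
    return "".join(secrets.choice(ALPHANUMERIC) for _ in range(length))


def build_name(resource_type: str, workload: str, environment: str,
               region: str, instance: str) -> str:
    """Apply the default {resource type}-{workload}-{environment}-{region}-{instance} template."""
    if len(environment) > 3:
        raise ValueError("{environment} must be no longer than 3 characters")
    name = "-".join([RESOURCE_ABBREVIATIONS[resource_type],
                     workload, environment, region, instance])
    if len(name) > 32:
        raise ValueError(f"{name!r} exceeds the 32-character limit")
    return name
```

For example, build_name("app_service", "park", "prd", "eastus", new_instance_id()) might return something like app-park-prd-eastus-x7q2.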

To handle resources with stricter name limitations, the CTO defines the following exceptions (a code sketch follows the list):

  • For Storage Accounts, Media Services, Container Registries and Key Vaults, the name template will be {resourcetype}{workload}{environment}{region}{instance}
    • this limits {workload} to 8 characters
    • although Container Registries can have 50 character names and Key Vaults can have hyphens, the CTO considers consistency a higher priority
  • For Windows VMs, the name template will be VM{workload}{environment}{instance}
    • this limits {workload} to 6 characters
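
Continuing the same sketch, the exception templates might look something like this; the type abbreviation is passed in directly (for example st or kv), and the length checks mirror the tightest limits named above:

```python
def build_compact_name(type_abbreviation: str, workload: str, environment: str,
                       region: str, instance: str) -> str:
    """Apply the hyphen-free {resourcetype}{workload}{environment}{region}{instance} template."""
    if len(workload) > 8:
        raise ValueError("{workload} must be at most 8 characters for compact names")
    name = f"{type_abbreviation}{workload}{environment}{region}{instance}"
    # 24 characters is the tightest limit among these types (Storage Accounts,
    # Media Services, Key Vaults); Container Registries allow 50 but share the
    # template for consistency.
    if len(name) > 24:
        raise ValueError(f"{name!r} exceeds the 24-character limit")
    return name


def build_vm_name(workload: str, environment: str, instance: str) -> str:
    """Apply the VM{workload}{environment}{instance} template for Windows VMs."""
    if len(workload) > 6:
        raise ValueError("{workload} must be at most 6 characters for Windows VMs")
    name = f"VM{workload}{environment}{instance}"
    if len(name) > 15:
        raise ValueError(f"{name!r} exceeds the 15-character limit")
    return name
```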

After consulting with the various divisions, the CTO determines that it is very likely that many resources will share a {workload} with a Storage Account, Key Vault or VM. For instance, an App Service may use a Key Vault to store secret settings via Key Vault reference, or a VM might mount a Storage Account as a network file share for persistent storage.

As a result, the CTO directs the divisions to restrict their {workload} field to 6 characters, using abbreviations as necessary. To make it easier for cloud operations staff to work with the resources, he also mandates the use of a description tag, which may contain a free-form text value describing in depth the purpose or role of a given resource.

Example: Delta City

The Delta City project consists of a city web site backed by APIs that allow residents to access city services. Delta City has the following APIs hosted in App Services:

  • Paying parking tickets
  • Applying for various licenses (pet, parade, ownership of military hardware)
  • City-sponsored events

Each App Service is hosted on its own App Service Plan, which autoscales to handle variable load. Since the APIs are legacy services built on different stacks with different contracts, an API Management gateway is used to provide authentication, request rate limiting and a unified REST contract to the front end. Backend persistent storage is provided by a single Azure SQL database. Some App Services have dedicated Storage Accounts and Key Vaults for large file uploads or secret storage.

The Delta City engineers define the following environments:

  • dev - for resources used as part of active development
  • igt - for integration testing
  • uat - for user acceptance or limited public beta testing
  • prd - the production hosting

Further, they define the following workloads:

  • common - for resources like the database or API gateway which serve the entire project
  • park - for the parking tickets API
  • licens - for the licensing API
  • events - for the events API
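
As an aside, these project-level values could be captured in a small, hypothetical configuration table and checked against the CTO’s length rules:

```python
# Hypothetical Delta City project configuration
DELTA_CITY = {
    "environments": ["dev", "igt", "uat", "prd"],
    "workloads": ["common", "park", "licens", "events"],
}

# The CTO's rules: {environment} <= 3 characters, {workload} <= 6 characters
assert all(len(env) <= 3 for env in DELTA_CITY["environments"])
assert all(len(workload) <= 6 for workload in DELTA_CITY["workloads"])
```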

The parking tickets API then consists of the following Azure resources:

  • apim-common-prd-eastus-ws3d - the API Gateway
  • rg-park-prd-eastus-jk98 - the resource group containing the parking ticket API
  • asp-park-prd-eastus-jk98 - the App Service Plan
  • app-park-prd-eastus-jk98 - the App Service
  • stparkprdeastusjk98 - the Storage Account for large file uploads
  • kvparkprdeastusjk98 - the Key Vault for secret app settings storage

Note that because the Delta City cloud engineers decide that the App Service, App Service Plan, Storage Account and Key Vault will all be created and destroyed as a unit, these resources are placed in the same resource group and share an {instance} value.
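
For completeness, here is a small self-contained sketch of how the shared values map onto the names above; in practice the jk98 value would come from the random {instance} generator.

```python
workload, environment, region, instance = "park", "prd", "eastus", "jk98"

resource_group   = f"rg-{workload}-{environment}-{region}-{instance}"   # rg-park-prd-eastus-jk98
app_service_plan = f"asp-{workload}-{environment}-{region}-{instance}"  # asp-park-prd-eastus-jk98
app_service      = f"app-{workload}-{environment}-{region}-{instance}"  # app-park-prd-eastus-jk98
storage_account  = f"st{workload}{environment}{region}{instance}"       # stparkprdeastusjk98
key_vault        = f"kv{workload}{environment}{region}{instance}"       # kvparkprdeastusjk98

# The API Management gateway is a project-wide ("common") resource, so it has
# its own workload and {instance} value: apim-common-prd-eastus-ws3d
```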