To say it’s a hot topic at the moment would be the understatement of the year!
One need only look at Microsoft’s announcements from Build and Inspire this year to understand the scale of their investment in AI. They’ve not only launched new AI-driven products such as the Azure OpenAI Service and Bing Chat Enterprise; they’ve also been baking AI into what feels like their entire product set through the likes of Microsoft 365 Copilot, Microsoft Sales Copilot, GitHub Copilot and more. Microsoft has even partnered with Meta (the owners of Facebook et al.) to invest in Llama 2, a large language model (LLM) akin to ChatGPT (albeit with different use-cases).
Given the plethora of Microsoft marketing campaigns, announcements, updates, and initiatives such as the Microsoft Learn AI Skills Challenge, it’s understandably easy for those of us working in the Microsoft space to fall down the rabbit hole of AI and lose sight of the bigger picture.
Consider OpenAI’s ChatGPT, for example. It’s become so well known that even my non-technical friends and family have asked me about it. Yet conversational AI such as ChatGPT, and other generative AI such as OpenAI’s DALL·E 2 and Codex, form only part of a much wider AI offering.
Consider the recently renamed Azure AI services offering from Microsoft. Whilst ChatGPT may be stealing the limelight, incredibly powerful AI-based solutions such as Cognitive Search and Speech offer a plethora of AI-based capabilities. Also, let’s not forget Azure Machine Learning, which rather confusingly seems to have been classified outside the Azure AI services grouping.
The point I’m trying to make is that AI, particularly in Azure, is way more than just OpenAI-based solutions like ChatGPT.
The bigger picture, however, does not stop there. What good is AI without the data that feeds it, or the infrastructure that hosts it? And what good is either of those without security, governance, connectivity, and so on?
AI in Azure is not something deployed in isolation. Not for anything beyond a dev/test scenario, anyway.
Consider the below Azure Speech Services solution:
In addition to the Azure Cognitive Services for both Language and Speech, there is data in the form of blob storage, processing in the form of Function Apps, and presentation in the form of a Web App and Power BI. The solution comprises several individual components working together.
Now, consider the security and governance aspects of this solution: Is each resource accessible from the internet, or behind a Private Endpoint? Can anyone in your organisation get read access to the data, or is it restricted to specific users or groups? What if multiple instances (and variations thereof) exist, such as production, development and testing environments?
This is where Landing Zones come in.
Applied at either an application level (such as in the above scenario) or at a platform level (covering your entire Azure tenant and subscriptions), an Azure Landing Zone can be used to define and maintain key design principles, such as security and governance.
Our sample solution could be deployed over multiple Azure subscriptions - one per environment. The subscription can be used as a boundary for both cost and access. For example, the production environment is deployed to its own subscription. Access to that subscription is granted via Role Based Access Control (RBAC) only to those users/groups that need it, and Azure Policy is used to audit or even enforce rules, such as the regional locations to which resources can be deployed. You'd apply the same concept to a development environment, for example, where you'd apply subscription-level budgets to prevent costs spiralling out of control, and use Azure Policy to prevent the deployment of highly expensive SKUs.
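As a flavour of that last point, here's a hedged Bicep sketch assigning the built-in "Allowed locations" policy at subscription scope. The assignment name and the list of allowed regions are illustrative assumptions; the policy definition GUID shown is the well-known built-in, but do verify it against your tenant before relying on it.

```bicep
// Sketch: assign the built-in "Allowed locations" policy at subscription scope.
// The allowed regions ('uksouth', 'ukwest') are illustrative assumptions.
targetScope = 'subscription'

resource allowedLocations 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
  name: 'allowed-locations'
  properties: {
    displayName: 'Restrict resource locations'
    // GUID of the built-in "Allowed locations" policy definition
    policyDefinitionId: tenantResourceId('Microsoft.Authorization/policyDefinitions', 'e56962a6-4747-49cd-b67b-bf8b01975c4c')
    parameters: {
      listOfAllowedLocations: {
        value: [ 'uksouth', 'ukwest' ]
      }
    }
  }
}
```

An equivalent assignment in the development subscription might instead target a "Not allowed resource types" or SKU-restriction policy to keep costs in check.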
You can scale this concept out even further, with centralised resources such as a firewall and VPN gateway providing a hub-and-spoke network topology, and so on.
You'd typically define your platform and application Landing Zones as infrastructure-as-code (IaC) such as Terraform or Bicep, maintain the code within source control such as GitHub or Azure Repos, and deploy them via pipelines such as GitHub Actions or Azure Pipelines. Doing so helps ensure consistency, compliance, and the application of best practice.
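To make that concrete, here's a minimal Bicep sketch of what a source-controlled application Landing Zone entry point might look like. The file layout, module names, parameters, and naming convention are all hypothetical - a real Landing Zone would be considerably larger.

```bicep
// main.bicep - hypothetical entry point for an application Landing Zone,
// kept in source control and deployed to the subscription via a pipeline.
targetScope = 'subscription'

param environment string = 'dev'
param location string = 'uksouth'

// One resource group per environment, created at subscription scope.
resource rg 'Microsoft.Resources/resourceGroups@2022-09-01' = {
  name: 'rg-speech-${environment}'
  location: location
}

// Each concern lives in its own reviewed, versioned module (paths assumed).
module network 'modules/network.bicep' = {
  scope: rg
  name: 'network'
  params: { location: location }
}

module speech 'modules/speech.bicep' = {
  scope: rg
  name: 'speech'
  params: { location: location }
}
```

Because the pipeline deploys the same code to every environment, differences between production and development come down to parameters, not hand-crafted changes.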
So how does this tie back to AI again?!
A couple of weeks ago, Microsoft released a reference architecture for an Azure OpenAI Landing Zone, seen below:
What you're looking at here appears to be little more than a slight extension of the Microsoft-provided reference architecture for an enterprise-scale Azure Landing Zone, pictured below:
Note that in Microsoft's AI Landing Zone example, they only reference OpenAI!
What Microsoft have shared with us is their best practice approach to integrating Azure AI resources within a wider context, such as an application or platform level Landing Zone. This includes the likes of Private Endpoints, Network Security Groups and Web Application Firewalls. They also address concepts such as load balancing, monitoring, and identity management.
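To give a flavour of one of those concepts, here's a hedged Bicep sketch of an Azure OpenAI account placed behind a Private Endpoint, with public network access disabled. The resource names, virtual network, subnet, and address layout are assumptions for illustration only.

```bicep
// Sketch: an Azure OpenAI account reachable only via a Private Endpoint.
// All names (oai-demo, vnet-hub, snet-private-endpoints) are illustrative.
param location string = 'uksouth'

resource openAi 'Microsoft.CognitiveServices/accounts@2023-05-01' = {
  name: 'oai-demo'
  location: location
  kind: 'OpenAI'
  sku: { name: 'S0' }
  properties: {
    publicNetworkAccess: 'Disabled' // no direct internet access
  }
}

resource privateEndpoint 'Microsoft.Network/privateEndpoints@2023-04-01' = {
  name: 'pe-oai-demo'
  location: location
  properties: {
    subnet: {
      // Assumes an existing hub vnet with a dedicated Private Endpoint subnet
      id: resourceId('Microsoft.Network/virtualNetworks/subnets', 'vnet-hub', 'snet-private-endpoints')
    }
    privateLinkServiceConnections: [
      {
        name: 'oai-connection'
        properties: {
          privateLinkServiceId: openAi.id
          groupIds: [ 'account' ]
        }
      }
    ]
  }
}
```

A complete implementation would also need private DNS zone configuration so that the service's FQDN resolves to the Private Endpoint from within the network.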
Building out this solution (for example via IaC and pipelines) is far more complicated than it looks, especially given the complexities of private networking in Azure. To me, though, this highlights the point I made earlier: AI in Azure doesn't exist in isolation.
Yes, it's new, shiny and very exciting. The possibilities are endless - and it's totally fine to get caught up in the hype! From an Azure perspective, however, implementing AI-based resources needs to be as carefully considered as any other resource, such as a SQL Database or Virtual Machine. The AI itself is only as good (performant, secure, scalable, resilient, etc.) as the infrastructure it's deployed on and the environment it's deployed into. Azure Landing Zones exist to help ensure just that.