My Journey Learning To Build AI Apps on Azure (March 2025 to Feb 2026)

I started by learning how RAG works end-to-end - indexing documents, vectorizing with embeddings, retrieving with hybrid search, and grounding LLM responses. Once I understood the mechanics, I leveled up to Semantic Kernel to introduce agent abstractions and plugin-based extensibility. From there, I explored Azure AI Foundry's hosted agents and prompt engineering patterns. Finally, I built a production multi-agent platform on AKS using the Microsoft Agent Framework SDK, routing five agents across three distinct backends — cloud APIs, on-cluster GPU inference via KAITO, and server-side RAG via KAITO RAGEngine. Each project was a building block toward understanding how enterprise AI applications are designed, orchestrated, and deployed at scale on AKS.

Intro to KAITO RAG Engine on Azure Kubernetes Service

The Kubernetes AI Toolchain Operator (KAITO) for Azure Kubernetes Service (AKS) includes a RAG engine that lets users query private documents through a hosted language model, such as Phi-4. It grounds AI responses by indexing documents and retrieving the most relevant content at query time. As a platform capability, it offers management control and the scalability to support many generative AI applications.
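As a rough sketch of what deploying the RAG engine involves, the manifest below shows the general shape of a KAITO RAGEngine resource. The API version, field names, instance type, and the inference URL pointing at a separate model workspace are assumptions based on the v1alpha1 CRD and may differ across KAITO releases:

```yaml
# Hedged sketch of a KAITO RAGEngine custom resource (field names assumed)
apiVersion: kaito.sh/v1alpha1
kind: RAGEngine
metadata:
  name: ragengine-example
spec:
  compute:
    instanceType: Standard_NC6s_v3        # GPU SKU for the embedding service
    labelSelector:
      matchLabels:
        apps: ragengine-example
  embedding:
    local:
      modelID: BAAI/bge-small-en-v1.5     # embedding model pulled from Hugging Face
  inferenceService:
    # URL of an existing KAITO inference workspace (e.g. one serving Phi-4)
    url: http://workspace-phi-4/v1/completions
```

The key design point is the separation of concerns: the RAGEngine handles indexing and retrieval, while generation is delegated to whatever inference service the `inferenceService.url` points at.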

Using Streamlit Chatbot UI with AKS KAITO Language Model Inferences

This blog post walks through setting up a chatbot UI with Streamlit in front of a language model inference service deployed on Azure Kubernetes Service (AKS). It covers testing the inference service with curl commands, implementing the Streamlit app, and configuring ingress rules for external access, highlighting how quickly Streamlit enables chatbot development.
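To make the curl-then-app workflow concrete, here is a minimal Python sketch of building the request body a chat UI would send to the in-cluster inference service. The service URL, model name, and OpenAI-style request shape are assumptions (KAITO's vLLM-based presets expose an OpenAI-compatible API, but verify the path for your deployment):

```python
import json

# Hypothetical in-cluster service URL for a KAITO workspace serving Phi-4;
# from outside the cluster you would go through the ingress instead.
INFERENCE_URL = "http://workspace-phi-4/v1/chat/completions"

def build_chat_payload(prompt: str, model: str = "phi-4", max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

if __name__ == "__main__":
    # The same payload can be sent with requests.post(INFERENCE_URL, json=payload)
    # from inside the cluster, or pasted into a curl -d '...' smoke test.
    payload = build_chat_payload("What is KAITO?")
    print(json.dumps(payload, indent=2))
```

A Streamlit app would call the same builder from inside `st.chat_input` handling, which is why testing the raw endpoint with curl first makes the UI step almost trivial.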

Running Open-Weight LLMs on AKS with KAITO: A Summary of Model Families

KAITO is an AI toolchain operator for deploying language models on Kubernetes. It supports several model families: DeepSeek for advanced reasoning, Falcon for custom fine-tuning, Llama for general-purpose assistance, Mistral for efficiency, Phi for cost-sensitive tasks, and Qwen for coding. Because these are open-weight models, they keep data private and remain customizable, making them suitable for enterprise workloads that require fine-tuning and governance.
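In practice, switching between these model families comes down to the preset name in a KAITO Workspace manifest. The sketch below shows the general shape; the API version, preset name, and GPU SKU are assumptions and should be checked against the preset list for your KAITO version:

```yaml
# Hedged sketch of a KAITO Workspace selecting a model family by preset name
apiVersion: kaito.sh/v1beta1
kind: Workspace
metadata:
  name: workspace-falcon-7b
resource:
  instanceType: Standard_NC12s_v3   # GPU SKU sized for the chosen model
  labelSelector:
    matchLabels:
      apps: falcon-7b
inference:
  preset:
    name: falcon-7b-instruct        # swap for a phi, mistral, llama, qwen, or deepseek preset
```

The operator provisions the GPU node pool and serves the model behind a cluster service, so changing families is largely a matter of picking a preset and a GPU SKU with enough memory for it.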

Exploring Azure Verified Module for Azure Kubernetes Service

I have been testing out the Azure Verified Module for Azure Kubernetes Service, which can be found in the Terraform Registry at https://registry.terraform.io/modules/Azure/avm-res-containerservice-managedcluster. The module came out in October 2024, so it's fairly new. It is suitable for enterprise-grade production environments, applies Microsoft best practices, and includes RBAC and comprehensive monitoring. Also it is supported by …

Continue reading Exploring Azure Verified Module for Azure Kubernetes Service
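For a sense of what consuming the module looks like, here is a minimal Terraform sketch. The version constraint, input names, and node pool shape are assumptions illustrating typical AVM conventions; the registry page linked above documents the actual required variables:

```hcl
# Hedged sketch of calling the AVM module for AKS; values are placeholders.
module "aks" {
  source  = "Azure/avm-res-containerservice-managedcluster/azurerm"
  version = "~> 0.1"   # pin to a real release from the registry

  name                = "aks-demo"
  location            = "eastus2"
  resource_group_name = "rg-aks-demo"

  default_node_pool = {
    name       = "system"
    vm_size    = "Standard_D4s_v5"
    node_count = 2
  }
}
```

The appeal of the AVM approach is that opinionated defaults (diagnostics, RBAC wiring, tagging) come from the module, so the calling configuration stays this small.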

Well Architected Framework With The Azure Verified Module For Azure Kubernetes

I came across the Azure Verified Module for Azure Kubernetes Service, and in its GitHub repo I found a Well-Architected Framework (WAF) Aligned example for deploying this Terraform module. So I asked myself, "What exactly makes this example of deploying AKS WAF Aligned?" Before I get into that, let me explain what WAF is. It …

Continue reading Well Architected Framework With The Azure Verified Module For Azure Kubernetes

Deep Dive Into Fine-Tuning An LM Using KAITO on AKS – Part 3: Deploying the FT Model

Now that I have fine-tuned a model in Part 2, the next step is to deploy the fine-tuned model into a new KAITO workspace. This blog post is part of a series.
Part 1: Intro and overview of the KAITO fine-tuning workspace YAML
Part 2: Executing the Kubernetes Training Job
Part 3: Deploying the Fine-Tuned Model
Part 4: Evaluating …

Continue reading Deep Dive Into Fine-Tuning An LM Using KAITO on AKS – Part 3: Deploying the FT Model
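To preview the deployment step, the sketch below shows the general idea of an inference workspace that layers the fine-tuned adapter on top of a base preset. All field names here (`adapters`, `source.image`, `strength`), the preset name, and the registry path are assumptions about the KAITO adapter mechanism and should be verified against the CRD for your version:

```yaml
# Hedged sketch: serving a fine-tuned adapter alongside its base preset
apiVersion: kaito.sh/v1beta1
kind: Workspace
metadata:
  name: workspace-phi-ft
resource:
  instanceType: Standard_NC12s_v3
  labelSelector:
    matchLabels:
      apps: phi-ft
inference:
  preset:
    name: phi-3-mini-4k-instruct          # base model the adapter was trained against
  adapters:
    - source:
        name: phi-3-adapter
        image: myregistry.azurecr.io/adapters/phi-3-ft:latest   # placeholder OCI image from Part 2
      strength: "1.0"
```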