My Journey Learning To Build AI Apps on Azure (March 2025 to Feb 2026)

I started by learning how RAG works end-to-end - indexing documents, vectorizing with embeddings, retrieving with hybrid search, and grounding LLM responses. Once I understood the mechanics, I leveled up to Semantic Kernel to introduce agent abstractions and plugin-based extensibility. From there, I explored Azure AI Foundry's hosted agents and prompt engineering patterns. Finally, I built a production multi-agent platform on AKS using the Microsoft Agent Framework SDK, routing five agents across three distinct backends — cloud APIs, on-cluster GPU inference via KAITO, and server-side RAG via KAITO RAGEngine. Each project was a building block toward understanding how enterprise AI applications are designed, orchestrated, and deployed at scale on AKS.

Intro to KAITO RAG Engine on Azure Kubernetes Service

The Kubernetes AI Toolchain Operator (KAITO) for Azure Kubernetes Service (AKS) includes a RAG engine that lets users interact with private documents through a hosted language model, such as Phi-4. By indexing documents and retrieving the relevant passages at query time, it grounds AI responses in your own data. As a managed AI platform capability, it provides the control and scalability needed to support many generative AI applications.
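The grounding flow the RAG engine performs can be sketched in plain Python: retrieve the chunks most relevant to the question, then prepend them to the prompt as context. This is a toy illustration with a keyword scorer standing in for real vector search; the actual KAITO RAGEngine does indexing, embedding, and retrieval server-side.

```python
# Minimal retrieval-augmented grounding sketch: score stored chunks
# against the question, then build a context-stuffed prompt. The naive
# keyword-overlap scorer below stands in for real embedding search.

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Rank chunks by keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = sorted(chunks, key=lambda c: -len(terms & set(c.lower().split())))
    return scored[:top_k]

def build_grounded_prompt(question: str, chunks: list[str]) -> str:
    """Prepend retrieved context so the model answers from the documents."""
    context = "\n".join(f"- {c}" for c in retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "KAITO provisions GPU nodes automatically for model workspaces.",
    "Phi-4 is a small language model suited to on-cluster inference.",
    "AKS supports node auto-provisioning for spot instances.",
]
prompt = build_grounded_prompt("Which model suits on-cluster inference?", docs)
print(prompt)
```

The same shape scales up once the keyword scorer is swapped for embedding similarity: retrieval narrows the corpus, and the prompt template constrains the model to the retrieved evidence.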

Using Streamlit Chatbot UI with AKS KAITO Language Model Inferences

This blog post walks through setting up a chatbot UI with Streamlit in front of a deployed language model inference service on Azure Kubernetes Service. It covers testing the inference service with curl, implementing the Streamlit app, and configuring ingress rules for external access, highlighting how quickly Streamlit enables chatbot development.
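The core of such a UI is assembling the chat request from the conversation state Streamlit keeps and posting it to the in-cluster service. A minimal sketch of that request builder is below; the `/v1/chat/completions` path, the OpenAI-style message schema, and the `phi-4` model name are assumptions for illustration — check your workspace's inference runtime for the exact API.

```python
# Sketch of the request body a Streamlit chat UI might send to the
# KAITO inference service. The OpenAI-style schema is an assumption;
# verify against the runtime your workspace actually deploys.
import json

def build_chat_payload(history: list[dict], user_msg: str,
                       model: str = "phi-4") -> dict:
    """Assemble a chat-completions request body from the UI's state."""
    messages = history + [{"role": "user", "content": user_msg}]
    return {"model": model, "messages": messages, "max_tokens": 256}

payload = build_chat_payload(
    [{"role": "assistant", "content": "Hi! Ask me anything."}],
    "What is KAITO?",
)
print(json.dumps(payload, indent=2))

# In the Streamlit app this payload would be POSTed to the service,
# e.g. requests.post(f"http://{service_host}/v1/chat/completions",
#                    json=payload), with the reply rendered via
# st.chat_message / st.chat_input in the usual Streamlit chat loop.
```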

Running Open-Weight LLMs on AKS with KAITO: A Summary of Model Families

KAITO is an AI toolchain operator designed for deploying language models in Kubernetes. It features various model families, including DeepSeek for advanced reasoning, Falcon for custom fine-tuning, Llama for general assistance, Mistral for efficiency, Phi for cost-sensitive tasks, and Qwen for programming. Open-weight models ensure privacy and customization options, making them suitable for enterprise workloads while allowing fine-tuning and governance.

Resolving Errors In Azure AI Search Indexer Against Blob Storage Account

When creating an indexer in Azure AI Search to read files such as JSON and PDFs, I encountered the following error: Operation:Web Api response status: 'Unauthorized', Web Api response details: '{"error":{"code":"PermissionDenied","message": "Principal does not have access to API/Operation."}}' Message:Could not execute skill because the Web Api request failed. Details:Web Api response status: 'Unauthorized', Web Api …

Continue reading Resolving Errors In Azure AI Search Indexer Against Blob Storage Account

Permissions with Azure AI Foundry: Safety And Security

As I was starting to try out the Azure AI Foundry Safety and Security feature, I was confronted with the error "Your account does not have access to this resource, please contact your resource owner to get access". So I went to the Management Center to check user permissions, and yet I have owner permissions at the …

Continue reading Permissions with Azure AI Foundry: Safety And Security

Deep Dive Into Fine-Tuning An LM Using KAITO on AKS – Part 4: Evaluation

In the previous article, Part 3, I showed how to deploy the fine-tuned model on Azure Kubernetes Service with the Kaito add-on. In this article, I will walk through a manual evaluation using a series of prompts taken from the fine-tuning dataset. This blog post is part of a series. Part 1: Intro and overview of the KAITO fine-tuning …

Continue reading Deep Dive Into Fine-Tuning An LM Using KAITO on AKS – Part 4: Evaluation
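The manual evaluation described above can be sketched as a small replay loop: send each prompt from the fine-tuning dataset to the deployed model and score how often the reply contains the expected answer. Here `query_model` is a hypothetical stand-in for the HTTP call to the Kaito workspace endpoint, with a canned reply so the sketch is self-contained.

```python
# Sketch of a manual evaluation loop: replay fine-tuning prompts
# against the model and score the replies. `query_model` is a
# placeholder for a real HTTP call to the deployed Kaito endpoint.

def query_model(prompt: str) -> str:
    """Stand-in for calling the fine-tuned model's inference service."""
    canned = {"What does KAITO stand for?": "Kubernetes AI Toolchain Operator"}
    return canned.get(prompt, "")

def evaluate(samples: list[tuple[str, str]]) -> float:
    """Fraction of prompts whose reply contains the expected answer."""
    hits = sum(expected.lower() in query_model(prompt).lower()
               for prompt, expected in samples)
    return hits / len(samples)

dataset = [("What does KAITO stand for?", "Toolchain Operator")]
print(f"accuracy: {evaluate(dataset):.2f}")
```

Substring matching is a deliberately crude metric; for real runs you would eyeball the transcripts as the post does, or swap in an exact-match or LLM-judged scorer.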

Deep Dive Into Fine-Tuning An LM Using KAITO on AKS – Part 3: Deploying the FT Model

Now that I have fine-tuned a model in Part 2, the next step is to deploy the fine-tuned model into a new Kaito workspace. This blog post is part of a series. Part 1: Intro and overview of the KAITO fine-tuning workspace YAML. Part 2: Executing the Kubernetes training job. Part 3: Deploying the fine-tuned model. Part 4: Evaluating …

Continue reading Deep Dive Into Fine-Tuning An LM Using KAITO on AKS – Part 3: Deploying the FT Model