Generative AI Professional Program
Build and Deploy Your Own AI Agent
| Delivery Method | Dates | Timing | Location | Registration Fees |
|---|---|---|---|---|
| Weekend program (in-person and live online) | October 4, 5, 11, 12, 18, 19, 25, 26, 2025 | Sat & Sun: 10:00 AM - 12:30 PM (GST) | Dubai Knowledge Park | 1500 USD |
Course Description
This program will enable you to build, customize, and deploy AI agents using modern open-source Large Language Models (LLMs) such as Mistral and LLaMA. You will learn how to integrate private data with Retrieval-Augmented Generation (RAG), extend LLMs into AI agents capable of real-world actions, and deploy applications with Django and Kubernetes.
By the end of this hands-on, project-based program, you will be able to run and customize state-of-the-art LLMs, build private chatbots powered by organizational data, and scale AI solutions on local GPUs or cloud environments. This practical, end-to-end course emphasizes real-world implementation, ensuring that you not only understand the concepts behind Generative AI but also leave with a fully functional AI system you can apply immediately in business or research. Upon successful completion, you will earn a certificate accredited by the Dubai Government.

Module 1 – Introduction to LLMs & Setup
- Overview of LLMs and today’s open-source landscape (Mistral, LLaMA, Falcon)
- Installing Python, PyTorch, Hugging Face libraries
- Running your first chatbot on an NVIDIA GPU
Module 2 – Prompt Engineering & Customization
- Understanding effective prompts for domain-specific assistants
- Prompt tuning vs. fine-tuning (when and why)
- Hands-on: Experimenting with different prompting strategies
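The prompting strategies compared in Module 2 can be sketched in a few lines. This is a minimal illustration, not course material: the HR-assistant task and the example question/answer pairs are hypothetical, chosen only to show how zero-shot and few-shot prompts differ in structure.

```python
# Minimal sketch of two prompting strategies (zero-shot vs. few-shot).
# The task and examples below are hypothetical, for illustration only.

def zero_shot_prompt(question: str) -> str:
    """Ask the model directly, with only an instruction."""
    return (
        "You are a helpful HR assistant. Answer concisely.\n\n"
        f"Question: {question}\nAnswer:"
    )

def few_shot_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    """Prepend worked examples so the model can imitate their format."""
    shots = "\n\n".join(f"Question: {q}\nAnswer: {a}" for q, a in examples)
    return (
        "You are a helpful HR assistant. Answer concisely.\n\n"
        f"{shots}\n\n"
        f"Question: {question}\nAnswer:"
    )

examples = [("How many vacation days do new hires get?", "22 days per year.")]
print(few_shot_prompt("Can unused days roll over?", examples))
```

Few-shot prompting often improves consistency of format and tone without any fine-tuning, which is one of the trade-offs this module examines.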
Module 3 – Data Collection & Preprocessing
- Collecting and cleaning organizational data (PDF, CSV, TXT)
- Chunking text for use in LLM pipelines
- Basics of tokenization and embeddings
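The chunking step above can be sketched as follows. This is a simplified, dependency-free version: real pipelines usually split on tokens rather than characters, but the sliding-window-with-overlap idea is the same.

```python
# Minimal sketch of fixed-size chunking with overlap (Module 3).
# Character counts stand in for token counts to keep the example simple.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so that sentences spanning a
    chunk boundary still appear whole in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

doc = "A" * 1200
print([len(c) for c in chunk_text(doc)])  # prints [500, 500, 300]
```

The overlap guards against losing context at chunk boundaries, a common source of poor retrieval quality later in the RAG pipeline.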
Module 4 – Working with Embeddings & Vector Databases
- Introduction to embeddings and vector search
- Storing embeddings in FAISS (a vector similarity-search library)
- Querying your private data with similarity search
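The similarity search that FAISS performs can be sketched with plain NumPy. The tiny hand-made vectors below are stand-ins for real embedding-model output; the cosine-similarity ranking is the same idea FAISS applies at scale (e.g. inner product over normalized vectors).

```python
import numpy as np

# Dependency-light sketch of vector similarity search (Module 4).
# Vectors here are toy stand-ins for real embedding-model output.

def top_k(query: np.ndarray, index: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k rows of `index` most similar to `query`
    by cosine similarity (descending order)."""
    index_norm = index / np.linalg.norm(index, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    scores = index_norm @ query_norm          # cosine similarity per row
    return np.argsort(scores)[::-1][:k]       # highest scores first

docs = np.array([
    [1.0, 0.0, 0.0],   # doc 0
    [0.9, 0.1, 0.0],   # doc 1: close in direction to doc 0
    [0.0, 0.0, 1.0],   # doc 2: unrelated
])
query = np.array([1.0, 0.0, 0.1])
print(top_k(query, docs))  # prints [0 1]
```

FAISS wraps the same operation in optimized index structures so it stays fast over millions of vectors rather than three.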
Module 5 – Building a RAG (Retrieval-Augmented Generation) Pipeline
- Combining LLM + FAISS for contextual answers
- Passing retrieved context into prompts
- Testing your first RAG-powered chatbot
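The retrieve-then-generate flow above can be sketched end to end. In this simplified version a toy word-overlap retriever stands in for FAISS, and the sketch stops at the assembled prompt rather than calling an actual LLM; both substitutions are for illustration only.

```python
# Minimal sketch of a RAG step (Module 5): retrieve context, build prompt.
# `retrieve` is a toy stand-in for embedding-based search such as FAISS.

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(question: str, chunks: list[str]) -> str:
    """Pass retrieved context into the prompt sent to the LLM."""
    context = "\n".join(f"- {c}" for c in retrieve(question, chunks))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

chunks = [
    "Refunds are processed within 14 days.",
    "Our office is in Dubai Knowledge Park.",
    "Support is available on weekdays from 9 to 5.",
]
print(build_rag_prompt("How long do refunds take?", chunks))
```

Grounding the prompt in retrieved chunks is what lets the chatbot answer from private organizational data instead of relying on what the base model memorized.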
Module 6 – From Chatbot to AI Agent
- What makes an AI agent different from a chatbot
- Building tools and APIs for agents
- Using LangChain to enable real-world actions (e.g., schedule queries)
Module 7 – Deployment with Django
- Building a simple web interface for your chatbot/agent
- Connecting backend inference to the UI
- Running locally and testing with users
Module 8 – Scaling & Production Readiness
- Deploying with Docker & Kubernetes
- Monitoring and securing AI deployments
- Future directions: multimodal models (text, images, and audio)
Target Audience
- Software Developers: Build and deploy AI-powered chatbots and agents.
- Data Scientists: Fine-tune and integrate LLMs into real-world workflows.
- Data Analysts: Enhance analysis and insights with AI-driven tools.
- AI Enthusiasts: Gain hands-on experience creating chatbots and agents.
- Professionals Curious About Generative AI: Learn to apply LLMs and agents in practice.
Prerequisites for this Course
- Basic Python programming skills, including writing scripts and managing packages.
- Familiarity with Python libraries like NumPy or Pandas (advantageous but not required).
- Foundational understanding of AI/ML concepts such as data preprocessing, model training, and evaluation.
Learning Objectives
- Run and customize state-of-the-art open-source LLMs such as Mistral and LLaMA.
- Integrate private data into chatbots using Retrieval-Augmented Generation (RAG).
- Develop AI agents capable of performing real-world tasks with LangChain.
- Build and deploy user-facing applications with Django.
- Scale and secure AI systems using Docker and Kubernetes.
- Understand future directions in Generative AI, including multimodal models.