Enterprise RAG and Multi-Agent Applications

4.9 (22 ratings) · 5 Weeks · Cohort-based Course

Build and Optimize Production-Grade RAG and LLM Applications: Master Advanced Techniques for Scalable, Secure, and Low-Latency AI Solutions

This course is popular: 8 people enrolled last week.

Previously at

Google
Stanford University
UCLA
University of Minnesota

Course overview

Go Beyond Basic Frameworks: Build and Deploy Production-Grade AI Solutions

Welcome to the most technically rigorous and hands-on Large Language Model (LLM) application course available today.


This isn't just another AI course: it's your gateway to mastering the art and science of deploying production-grade LLM solutions that stand out in the real world.


As part of Maven's Top-Rated Content, this course is designed for those who have already mastered the basics of RAG, cosine similarity, vector databases, and LLMs. We'll take you to the next level, focusing on practical aspects of packaging and deploying these models in real-world production environments.


For cohort members joining this intensive learning experience, here's what you get:


- 6 weeks of in-depth content

- Weekly office hours for personalized guidance

- Real-world projects and challenging assignments

- Guest lectures by leading AI professionals

- Continued support post-graduation

- Lifetime access to course materials



What You'll Master: Course Highlights


Agents: Forget CrewAI or AutoGen; build your own agents from scratch. Learn what it takes to make an agent from the ground up, and contribute to the open-source community.
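To give a flavor of what "from scratch" means here, the skeleton of a tool-calling agent is just a loop: the model decides on a tool, we run it, and feed the observation back. This is an illustrative sketch, not course code; `fake_llm` and the `tools` dict are toy stand-ins so it runs without an API key.

```python
# Minimal agent loop: the model picks a tool or answers; we execute tools
# and append observations until it answers or we hit the step limit.
def run_agent(llm, tools, question, max_steps=5):
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        decision = llm("\n".join(history))  # {"tool": ..., "args": ...} or {"answer": ...}
        if "answer" in decision:
            return decision["answer"]
        result = tools[decision["tool"]](*decision["args"])  # dispatch to chosen tool
        history.append(f"Observation: {result}")
    return None

# Toy stand-in for a real LLM: calls the calculator once, then answers.
def fake_llm(prompt):
    if "Observation" in prompt:
        return {"answer": prompt.rsplit("Observation: ", 1)[1]}
    return {"tool": "add", "args": [2, 3]}

tools = {"add": lambda a, b: a + b}
print(run_agent(fake_llm, tools, "What is 2 + 3?"))  # prints 5
```

A real agent replaces `fake_llm` with an LLM call that emits structured tool requests, but the control flow stays this simple.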


Advanced RAG Solutions: Dive into enterprise-level RAG architectures and learn how to build and implement semantic caching from scratch using GCP and Redis.
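The core idea of semantic caching can be sketched in a few lines: embed each query, and return a cached answer when a new query lands close enough to an old one. The sketch below uses an in-memory list and a toy bag-of-characters embedding purely for illustration; the course version swaps in a real embedding model and Redis vector search on GCP.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

class SemanticCache:
    """Return a cached answer when a new query embeds 'close enough' to an old one."""
    def __init__(self, embed, threshold=0.9):
        self.embed = embed          # function: query text -> vector
        self.threshold = threshold
        self.entries = []           # (embedding, answer); Redis vector index in production

    def get(self, query):
        q = self.embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            return best[1]          # cache hit: skip the expensive LLM call
        return None

    def put(self, query, answer):
        self.entries.append((self.embed(query), answer))

# Toy embedding (letter counts) so the sketch runs without a model.
def toy_embed(text):
    return [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz "]

cache = SemanticCache(toy_embed, threshold=0.95)
cache.put("what is semantic caching", "Reusing answers for similar queries.")
print(cache.get("what is semantic caching?"))  # near-duplicate query -> cache hit
```

The threshold is the key tuning knob: too low and unrelated queries get stale answers, too high and you lose the latency savings.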


LLM Hosting and Deployment: Gain insights into best practices for hosting Large Language Models (LLMs) in diverse production settings, creating inference endpoints, and deploying LLMs on serverless platforms.


Continual Pre-Training and Fine-Tuning: Explore advanced techniques for continual pre-training, fine-tuning LLMs, and mitigating catastrophic forgetting. Learn how to build a data pipeline for pre-training, apply causal language modeling, and leverage scaling laws.
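The data-pipeline side of causal language modeling boils down to one windowing step: slice a token stream into fixed-size blocks where the target is the input shifted left by one. A minimal sketch, with integer ids standing in for real tokenizer output:

```python
def make_causal_examples(token_ids, block_size):
    """Slice a token stream into (input, target) pairs for next-token
    prediction: targets are the inputs shifted one position to the left."""
    examples = []
    for i in range(0, len(token_ids) - block_size, block_size):
        chunk = token_ids[i : i + block_size + 1]   # one extra token for the shift
        examples.append((chunk[:-1], chunk[1:]))
    return examples

ids = list(range(10))  # stand-in for real tokenizer output
pairs = make_causal_examples(ids, block_size=4)
print(pairs[0])  # ([0, 1, 2, 3], [1, 2, 3, 4])
```

Real pipelines add shuffling, packing of short documents, and streaming from disk, but the input/target shift is the invariant.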


Model Merging and Mixture of Experts: Master techniques for merging multiple models to enhance their collective capabilities, including the Mixture of Experts (MoE) approach. Learn to use tools like mergekit for efficient model merging.
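The simplest merge mergekit supports, a linear merge, is just a weighted average of corresponding parameters across checkpoints. A toy sketch with flattened parameter lists standing in for real tensors:

```python
def linear_merge(state_dicts, weights):
    """Weighted average of parameters from several checkpoints: the idea
    behind mergekit's 'linear' merge method."""
    assert abs(sum(weights) - 1.0) < 1e-9, "merge weights should sum to 1"
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(w * sd[name][i] for w, sd in zip(weights, state_dicts))
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Two toy 'checkpoints', each with one flattened parameter vector.
a = {"layer.weight": [1.0, 2.0]}
b = {"layer.weight": [3.0, 6.0]}
print(linear_merge([a, b], [0.5, 0.5]))  # {'layer.weight': [2.0, 4.0]}
```

Methods like SLERP and TIES refine this averaging, and MoE goes further by keeping experts separate and routing between them at inference time.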


Quantization Methods: Discover techniques to reduce model size while maintaining performance, crucial for deployment in resource-constrained environments.
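At its core, quantization maps floating-point weights onto a small integer range plus a scale factor. A minimal sketch of symmetric int8 quantization (one scale per tensor; real schemes use per-channel scales and calibration):

```python
def quantize_int8(values):
    """Symmetric int8 quantization: map floats into [-127, 127] with one scale."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

weights = [0.4, -2.0, 1.2]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
print(q)  # [25, -127, 76]
```

Each weight now fits in one byte instead of four, at the cost of the small rounding error visible in `restored`.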


Inference Speed Optimization: Learn strategies to accelerate inference speeds for real-time language processing, ensuring efficient and responsive AI systems.


Responsible AI Implementation: Explore ethical AI development using guardrails like NeMo, Colang, and Llama Guard to ensure AI systems align with responsible AI principles.


Agentic RAG and Chunking Strategies: Implement advanced semantic chunking techniques and explore AI agent frameworks like AutoGen to enhance the capabilities of RAG systems.


DSPy and Knowledge Graphs: Learn to create and utilize knowledge graphs effectively, mastering DSPy as an alternative prompting approach for structured data handling and enhanced AI interaction.
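A knowledge graph, stripped to its essentials, is a store of (subject, relation, object) triples with pattern queries over them. This toy sketch (not DSPy code, which the course covers separately) shows the shape of the data structure:

```python
class KnowledgeGraph:
    """A tiny triple store: (subject, relation, object) facts with wildcard queries."""
    def __init__(self):
        self.triples = set()

    def add(self, s, r, o):
        self.triples.add((s, r, o))

    def query(self, s=None, r=None, o=None):
        """None acts as a wildcard, e.g. query(r='uses') -> all 'uses' facts."""
        return [
            t for t in self.triples
            if (s is None or t[0] == s)
            and (r is None or t[1] == r)
            and (o is None or t[2] == o)
        ]

kg = KnowledgeGraph()
kg.add("RAG", "uses", "vector database")
kg.add("RAG", "uses", "LLM")
kg.add("semantic cache", "stores", "embeddings")
print(kg.query(s="RAG", r="uses"))  # both facts about what RAG uses
```

Production graphs add typed entities, indexes, and multi-hop traversal, but retrieval over triples like these is what lets a RAG system answer relational questions that pure vector search misses.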


Throughout the course, we will analyze state-of-the-art AI products, reverse-engineering some of them in Python. As a bonus, you'll have access to experimental products being developed at Traversaal.ai, my startup, allowing you to stay at the forefront of advancements in the field.


Prerequisites: You should have coding experience building a RAG solution, an understanding of encoders and decoders, and some knowledge of cloud solutions and APIs.


If you feel the need for a more foundational course, consider checking out my other offering on LLMs: Building LLM Applications (https://maven.com/boring-bot/ml-system-design).


This course is for you if you are a:

01

Machine Learning Engineer exploring different techniques to scale LLM solutions

02

Researcher who would like to delve into various aspects of open-source LLMs

03

Software Engineer looking to learn how to integrate AI into their products

What you’ll get out of this course

Advanced AI Architectures

Understand and implement complex AI architectures, including enterprise-level RAG systems and agentic RAG strategies. You will also dive deep into the Mixture of Experts (MoE) technique and other model merging strategies to enhance the capabilities of your AI systems.

Practical Skills for Deployment

From building semantic caches using GCP and Redis to deploying LLMs on serverless platforms like Amazon Bedrock, you'll learn the practical skills to deploy and manage AI applications in real-world scenarios.

Fine-Tuning Expertise

Acquire advanced techniques for fine-tuning LLMs, enabling you to adapt these models to specific tasks or domains and enhance their performance in targeted applications.

Efficient Inference Processing

Explore strategies for optimizing inference speeds, ensuring that your language models perform efficiently in real-time scenarios, a crucial skill for deploying responsive and scalable applications.

Knowledge of Responsible AI

Understand the importance of ethical AI development and learn to implement guardrails using tools like NeMo, Colang, and Llama Guard to ensure your AI systems align with responsible AI principles.




This course includes

8 interactive live sessions

Lifetime access to course materials

13 in-depth lessons

Direct access to instructor

4 projects to apply learnings

Guided feedback & reflection

Private community of peers

Course certificate upon completion

Maven Satisfaction Guarantee

This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.

Course syllabus

  • Week 1 (Oct 5—Oct 6)

    Session 1: Sat, Oct 5, 4:00 PM - 6:00 PM UTC

    Module: Enterprise RAG Architecture & Semantic Caching

  • Week 2 (Oct 7—Oct 13)

    Office Hours: Tue, Oct 8, 7:00 PM - 7:30 PM UTC
    Session 2: Sat, Oct 12, 4:00 PM - 6:00 PM UTC

    Module: Optimizing and Deploying Large Language Models

  • Week 3 (Oct 14—Oct 20)

    Office Hours: Thu, Oct 17, 7:00 PM - 7:30 PM UTC
    Session 3: Sat, Oct 19, 4:00 PM - 6:00 PM UTC

    Module: Quantization, API Production and Guardrails

  • Week 4 (Oct 21—Oct 27)

    Office Hours: Wed, Oct 23, 7:00 PM - 7:30 PM UTC
    Session 4: Sat, Oct 26, 4:00 PM - 6:00 PM UTC

    Module: DSPy and Knowledge Graphs

  • Week 5 (Oct 28—Nov 3)

    Session 5: Wed, Oct 30, 2:00 AM - 4:00 AM UTC

    Modules: Model Merging Techniques; Agentic RAGs and Deployment Best Practices

  • Week 6 (Nov 4—Nov 9)

    Nothing scheduled for this week.

  • Post-Course

    Module: Demo Day


Meet your instructor

Hamza Farooq

I am the founder of Traversaal.ai, an LLM-based startup dedicated to creating scalable, customizable, and cost-efficient language model solutions for enterprises.


With over 15 years of experience in machine learning, my journey has spanned three continents and seven countries, covering a diverse range of industries such as tech, telecommunications, finance, and retail.


As a former Senior Research Manager at Google and Walmart Labs, I have led data science and machine learning teams, focusing on optimization, natural language processing, recommender systems, and time series forecasting.

I am also an adjunct professor at Stanford and UCLA, where I bridge the gap between academic theory and real-world AI applications.


Additionally, I frequently speak at conferences and conduct training sessions, sharing insights on large language models, deep learning, and cloud computing.


Join an upcoming cohort

Enterprise RAG and Multi-Agent Applications

Cohort 3

$800

Dates

Oct 5—Nov 9, 2024

Payment Deadline

Oct 5, 2024

Course schedule

4-6 hours per week

  • Sundays

    9:00 - 11:00am PT

    Virtual Class

  • Weekly projects

    2-3 hours per week

Work in teams to build solutions; this requires engagement with other team members.

Learning is better with cohorts

Learning is better with cohorts

Active hands-on learning

This course builds on live workshops and hands-on projects

Interactive and project-based

You’ll be interacting with other learners through breakout rooms and project teams

Learn with a cohort of peers

Join a community of like-minded people who want to learn and grow alongside you

Frequently Asked Questions

What happens if I can’t make a live session?

I work full-time, what is the expected time commitment?

What’s the refund policy?
