Building and Deploying RAG Systems using LLMs

Date: 4th November, 2023 | Time: 9:30 am - 5:30 pm | Location: Hyderabad
Level - Beginner

In today's AI world, Large Language Models (LLMs) are a phenomenal technology that helps us build many interesting applications. These models learn from a wide variety of sources such as Wikipedia, mailing lists, and more. But their real magic shows when they work with specific data from a particular business or field, which is often tucked away in separate systems, databases, or documents.

LlamaIndex steps in to make this easy. In this workshop, we’ll explore how to build and set up a question-answering system using LlamaIndex that can answer questions intelligently by pulling in information from different places. We start with the basics of Retrieval-Augmented Generation (RAG), an approach that grounds an LLM’s answers in retrieved data, and then dive into the building blocks of LlamaIndex. As we move forward, we’ll work through real examples, learn how to extend the system with other open-source tools, explore querying databases using natural language, manage important system information, and tune our setup for better performance.
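
One common way to query a database with plain text, which LlamaIndex supports, is to let the LLM translate the question into SQL. Here is a rough, illustrative sketch (not the workshop’s code) using the late-2023 LlamaIndex API; the table, data, and question are made up, an OpenAI API key is assumed, and import paths vary between versions:

```python
# Natural-language querying over a SQL database with LlamaIndex (pre-0.10 imports).
from sqlalchemy import create_engine, text
from llama_index import SQLDatabase
from llama_index.indices.struct_store.sql_query import NLSQLTableQueryEngine

# Toy in-memory database with a single hypothetical table.
engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE city_stats (city TEXT, population INTEGER)"))
    conn.execute(text("INSERT INTO city_stats VALUES ('Hyderabad', 10000000), ('Mumbai', 20000000)"))

# Wrap the database and let the LLM translate questions into SQL.
sql_database = SQLDatabase(engine, include_tables=["city_stats"])
query_engine = NLSQLTableQueryEngine(sql_database=sql_database, tables=["city_stats"])

response = query_engine.query("Which city has the larger population?")
print(response)
```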

Through a mix of theory lessons and hands-on projects, you’ll learn how to set up a question-answering system on Replit and fine-tune a RAG system using LlamaIndex. This workshop will give you the tools and knowledge to tackle the challenge of making LLMs work with specific data, helping you build advanced AI applications. By the end of this journey, you will not only understand LlamaIndex and RAG systems well but also see the wide range of possibilities they open up for getting the most out of LLMs in today’s applications.

Here is the detailed outline for each module:

Module 1: Grasping the RAG Paradigm and LlamaIndex Framework

  • Introduction to the Retrieval-Augmented Generation (RAG) paradigm
  • Importance and applications of RAG
  • Introduction to Large Language Models (LLMs)
  • Overview of the LlamaIndex Framework
  • Significance and use cases
  • Delving into LlamaIndex’s Components (a minimal code sketch follows this list)
    • Data Loaders (LlamaHub)
    • Indexing
    • Retriever and Response Synthesis
    • Query Engine/Chat Engine

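To make these components concrete before the workshop, here is a minimal sketch of the full pipeline using the late-2023 LlamaIndex API. This is illustrative rather than the workshop’s exact code; it assumes a local ./data folder of documents, an OpenAI API key in the environment, and a sample question:

```python
# Minimal RAG pipeline sketch with LlamaIndex (pre-0.10 import style).
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Data loader: read documents from a local folder (assumed path).
documents = SimpleDirectoryReader("./data").load_data()

# Indexing: chunk and embed the documents into a vector index.
index = VectorStoreIndex.from_documents(documents)

# Retriever + response synthesis, bundled into a query engine.
query_engine = index.as_query_engine()

# Ask a question grounded in the loaded documents (sample question).
response = query_engine.query("What topics does Module 1 cover?")
print(response)
```
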
Module 2: Delving into RAG Use Cases

  • QA and summarisation systems
  • Router Engine for routing queries
  • SubQuestion Query Engine for document comparisons (see the sketch after this list)

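As a preview of these two engines, here is a rough sketch using the late-2023 LlamaIndex API. The folder paths, tool names, and questions are hypothetical, and an OpenAI API key is assumed:

```python
# Router Engine + SubQuestion Query Engine sketch (pre-0.10 LlamaIndex imports).
from llama_index import SimpleDirectoryReader, SummaryIndex, VectorStoreIndex
from llama_index.query_engine import RouterQueryEngine, SubQuestionQueryEngine
from llama_index.selectors.llm_selectors import LLMSingleSelector
from llama_index.tools import QueryEngineTool

# Hypothetical document sets to compare.
docs_2022 = SimpleDirectoryReader("./reports/2022").load_data()
docs_2023 = SimpleDirectoryReader("./reports/2023").load_data()

# Router Engine: let the LLM pick between a QA tool and a summarisation tool.
qa_tool = QueryEngineTool.from_defaults(
    query_engine=VectorStoreIndex.from_documents(docs_2022).as_query_engine(),
    name="qa_2022",
    description="Answers specific questions about the 2022 report.",
)
summary_tool = QueryEngineTool.from_defaults(
    query_engine=SummaryIndex.from_documents(docs_2022).as_query_engine(),
    name="summary_2022",
    description="Summarises the 2022 report.",
)
router_engine = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=[qa_tool, summary_tool],
)
print(router_engine.query("Give me a short summary of the 2022 report."))

# SubQuestion Query Engine: split a comparison into per-document sub-questions.
compare_tools = [
    QueryEngineTool.from_defaults(
        query_engine=VectorStoreIndex.from_documents(docs).as_query_engine(),
        name=f"report_{year}",
        description=f"Answers questions about the {year} report.",
    )
    for year, docs in [("2022", docs_2022), ("2023", docs_2023)]
]
sub_question_engine = SubQuestionQueryEngine.from_defaults(query_engine_tools=compare_tools)
print(sub_question_engine.query("How do the 2022 and 2023 reports differ on revenue?"))
```
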
Note: These are tentative details and are subject to change.

Download the complete modules below.

Ravi Theja

Data Scientist II

Download Brochure
Book Passes