In today's AI world, Large Language Models (LLMs) are a phenomenal technology that helps us build many interesting applications. These models learn from a wide variety of sources, such as Wikipedia and mailing lists. But their real magic shows when they work with the specific data of a business or field, which is often tucked away in separate systems, databases, or documents.
LlamaIndex steps in to make this easy. In this workshop, we'll explore how to build and configure a question-answering system with LlamaIndex that can pull in information from many different sources. We start with the basics of Retrieval-Augmented Generation (RAG), a technique for grounding an LLM's answers in retrieved data, and then dig into the building blocks of LlamaIndex. From there, we'll work through real examples, learn how to extend the system with other open-source tools, explore querying databases with natural-language text, manage important system information, and fine-tune our setup for better performance.
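To make the RAG idea concrete before we touch LlamaIndex itself, here is a toy, library-free sketch of the loop: retrieve the document most relevant to a question, then pass it as context to a (stubbed) language model. The function names here are purely illustrative, not LlamaIndex API:

```python
def retrieve(question, documents):
    """Toy retriever: score documents by word overlap with the question
    and return the best match. Real systems use vector embeddings."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def generate(question, context):
    """Stand-in for an LLM call: a real system would prompt a model
    with the retrieved context prepended to the question."""
    return f"Context: {context} | Question: {question}"

docs = [
    "LlamaIndex connects LLMs to external data sources.",
    "Replit is an online IDE for building and hosting apps.",
]
context = retrieve("What does LlamaIndex do?", docs)
print(generate("What does LlamaIndex do?", context))
```

A production RAG pipeline replaces the keyword retriever with embedding-based vector search and the stub with a real model call, but the retrieve-then-generate shape stays the same.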
Through a mix of theory lessons and hands-on projects, you'll learn how to set up a question-answering system on Replit and fine-tune a RAG system using LlamaIndex. This workshop will give you the tools and knowledge to make LLMs work with specific data and to build advanced AI applications. By the end of this journey, you will not only understand LlamaIndex and RAG systems well, but also see the wide possibilities they open up for getting the most out of LLMs in today's applications.
Here is the detailed outline for each module:
Note: These are tentative details and are subject to change.
Download the complete modules below.