
Customize your generative AI applications to deliver relevant, accurate, and tailored responses

To equip foundation models (FMs) with up-to-date proprietary information, organizations use Retrieval Augmented Generation (RAG) to fetch data from company data sources and enrich the prompt with that data for more relevant and accurate responses. However, implementing RAG requires a specific skill set and time to configure connections to data sources, manage data ingestion workflows, and write custom code to manage the interactions between the FM and the data sources. In this session, we share how to simplify the process with Knowledge Bases for Amazon Bedrock. Learn how to give FMs and agents contextual information from your company’s private data sources for RAG to deliver more relevant, accurate, and customized responses. We also demonstrate how to automate the end-to-end RAG workflow, including ingestion, retrieval, prompt augmentation, and citations, eliminating the need to write custom code to integrate data sources and manage queries. We then explore advanced RAG techniques involving multiple data sources, including Amazon OpenSearch Service, Amazon Aurora Serverless, and container-based systems.
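As a rough illustration of the managed workflow described above, the sketch below uses the Amazon Bedrock Agent Runtime `RetrieveAndGenerate` API via boto3, which handles retrieval, prompt augmentation, and citations in a single call. The knowledge base ID, model ARN, and region shown are placeholder assumptions, not values from this session.

```python
def build_rag_request(query, kb_id, model_arn):
    """Build a RetrieveAndGenerate request payload. The service performs
    retrieval from the knowledge base, augments the prompt, and returns
    citations, so no custom orchestration code is needed."""
    return {
        "input": {"text": query},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,    # placeholder knowledge base ID
                "modelArn": model_arn,       # placeholder FM ARN
            },
        },
    }


def ask_knowledge_base(query, kb_id, model_arn, region="us-east-1"):
    """Query a Bedrock knowledge base; requires AWS credentials."""
    import boto3  # imported lazily so the sketch loads without the SDK

    client = boto3.client("bedrock-agent-runtime", region_name=region)
    response = client.retrieve_and_generate(
        **build_rag_request(query, kb_id, model_arn)
    )
    answer = response["output"]["text"]
    # Each citation ties a span of the generated answer back to the
    # retrieved source chunks, e.g. an S3 location.
    sources = [
        ref["location"]
        for citation in response.get("citations", [])
        for ref in citation["retrievedReferences"]
    ]
    return answer, sources
```

The payload builder is separated out so the request shape is easy to inspect; swapping the vector store behind the knowledge base (OpenSearch Service, Aurora Serverless) requires no change to this calling code.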

Speakers:
Arun Balaji, Principal Prototyping Engineer, AWS India
P, Sakthi Srinivasan, PACE Engagement Manager, AWS India