
Enhancing chatbot performance with generative AI using Amazon Kendra and Amazon SageMaker hosted LLMs

In this session, learn how to implement Retrieval Augmented Generation (RAG) using Amazon Kendra and SageMaker hosted LLMs (large language models) to address data privacy and data residency concerns. We showcase how combining RAG with Amazon Kendra, a semantic search service, can significantly improve the response quality of chatbots and reduce hallucinations, while ensuring data privacy. Find out how to deploy LLMs on Amazon SageMaker in a cost-effective and optimized way, as well as how to integrate RAG and LLMs with your existing chatbot infrastructure to provide a seamless user experience. The session also outlines the benefits and practical applications of RAG enabled by Amazon Kendra and SageMaker hosted LLMs, empowering you to create secure, responsive, and intelligent chatbots. Download slides », Download demo »
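
To make the pattern concrete, here is a minimal sketch (not taken from the session materials) of the RAG flow described above: retrieve passages from an Amazon Kendra index, then pass them as context to an LLM hosted on a SageMaker endpoint. The index ID, endpoint name, and request/response JSON schema are placeholders; the schema shown assumes a Hugging Face text-generation container and will differ for other model servers.

```python
# Minimal RAG sketch: retrieve passages with Amazon Kendra, then prompt a
# SageMaker-hosted LLM with those passages as grounding context.
import json
import boto3

kendra = boto3.client("kendra")
smr = boto3.client("sagemaker-runtime")

KENDRA_INDEX_ID = "YOUR-KENDRA-INDEX-ID"      # placeholder index ID
LLM_ENDPOINT_NAME = "YOUR-LLM-ENDPOINT-NAME"  # placeholder SageMaker endpoint name


def answer(question: str) -> str:
    # 1. Semantic retrieval: Kendra's Retrieve API returns passage-level excerpts.
    retrieved = kendra.retrieve(IndexId=KENDRA_INDEX_ID, QueryText=question)
    passages = [item["Content"] for item in retrieved.get("ResultItems", [])[:3]]
    context = "\n\n".join(passages)

    # 2. Ground the prompt in the retrieved passages to reduce hallucinations.
    prompt = (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

    # 3. Invoke the hosted LLM. The "inputs"/"generated_text" fields assume a
    #    Hugging Face text-generation container; adjust for your model server.
    response = smr.invoke_endpoint(
        EndpointName=LLM_ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"inputs": prompt, "parameters": {"max_new_tokens": 256}}),
    )
    return json.loads(response["Body"].read())[0]["generated_text"]


if __name__ == "__main__":
    print(answer("What is our data residency policy for customer records?"))
```

Because both the Kendra index and the SageMaker endpoint run inside your own AWS account and Region, documents and prompts never leave your environment, which is how this architecture addresses the data privacy and residency concerns discussed in the session.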

Speaker: Ben Friebe, Senior ISV Solutions Architect, AWS