Enhancing chatbot performance with generative AI using Amazon Kendra and Amazon SageMaker hosted LLMs

In this session, learn how to implement Retrieval Augmented Generation (RAG) using Amazon Kendra and SageMaker-hosted LLMs (large language models) to address data privacy and data residency concerns. We showcase how combining RAG with Amazon Kendra, a semantic search service, can significantly improve the response quality of chatbots and reduce hallucinations, while ensuring data privacy. Find out how to deploy LLMs on Amazon SageMaker in a cost-effective and optimized way, and how to integrate RAG and LLMs with your existing chatbot infrastructure to provide a seamless user experience. The session also outlines the benefits and practical applications of RAG enabled by Amazon Kendra and SageMaker-hosted LLMs, empowering you to create secure, responsive, and intelligent chatbots.
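The RAG flow described above can be sketched in Python with boto3: retrieve relevant passages from a Kendra index, build a grounded prompt, and send it to a SageMaker inference endpoint. The index ID, endpoint name, and the `{"inputs": ...}` payload shape are illustrative assumptions; the real payload format depends on the model container you deploy.

```python
import json


def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Combine retrieved passages with the user question into one grounded prompt."""
    context = "\n\n".join(passages)
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )


def answer_question(question: str, index_id: str, endpoint_name: str) -> str:
    """Retrieve passages from Amazon Kendra, then query a SageMaker-hosted LLM.

    index_id and endpoint_name are placeholders for resources you create.
    """
    import boto3  # AWS SDK for Python; requires configured credentials

    kendra = boto3.client("kendra")
    runtime = boto3.client("sagemaker-runtime")

    # Kendra's Retrieve API returns semantically relevant passages for the query.
    result = kendra.retrieve(IndexId=index_id, QueryText=question)
    passages = [item["Content"] for item in result["ResultItems"][:3]]

    prompt = build_rag_prompt(question, passages)

    # Payload shape is model-dependent; "inputs" is a common but assumed convention.
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps({"inputs": prompt}),
    )
    return json.loads(response["Body"].read())
```

Because the prompt instructs the model to answer only from retrieved context, responses stay grounded in your indexed documents, which is what curbs hallucination in this pattern.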

Speaker: Ben Friebe, Senior ISV Solutions Architect, AWS
