
Using serverless containers to rapidly deliver a generative AI chatbot application

Are you wondering how to implement generative AI workloads for your organization and users? Join this session as we walk through a real-life use case: building a public, user-facing chatbot with generative AI. Find out how you can build this chatbot with AWS serverless and container services, including AWS Lambda container images, Amazon API Gateway, and Amazon DynamoDB, together with a Large Language Model (LLM) endpoint. We then demonstrate how to use these services to quickly build and deploy a resilient, secure, generative AI-powered chatbot application with minimal effort and cost. We also discuss how to add context to generative AI workloads with Retrieval Augmented Generation (RAG) and how to automate deployments via CI/CD tools and AWS Lambda containers.
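To illustrate the RAG idea mentioned above, here is a minimal sketch of how retrieved context passages can be prepended to a user question before the combined prompt is sent to an LLM endpoint. The function name, parameters, and prompt wording are hypothetical illustrations, not the session's actual implementation.

```python
def build_rag_prompt(question: str, retrieved_chunks: list[str], max_chunks: int = 3) -> str:
    """Assemble a RAG-style prompt: join the top retrieved context
    passages and prepend them to the user's question. (Hypothetical
    helper for illustration; retrieval itself is out of scope here.)"""
    context = "\n\n".join(retrieved_chunks[:max_chunks])
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Example usage with made-up passages:
prompt = build_rag_prompt(
    "What is the refund policy?",
    ["Refunds are issued within 30 days.", "Contact support to start a refund."],
)
print(prompt)
```

In a deployment like the one described in the session, this prompt assembly would typically run inside the Lambda container handler, with the resulting string posted to the LLM endpoint and the conversation history persisted in DynamoDB.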

Mai Nishitani, Solutions Architect, AWS
Steven Cook, Senior Solutions Architect, AWS