Develop and deploy production-ready generative AI applications

Harnessing generative AI requires overcoming significant technical and strategic challenges to deploy production-ready solutions. In this session, learn about the tools, customization methods, and models that help you scale, move fast, and manage risk while building and deploying your generative AI applications. We first dive deep into how to use Amazon Bedrock to access key foundation models. We then explain how large language models (LLMs) are deployed on Amazon SageMaker. Discover how you can keep models pluggable, version your prompts, customize RAG engines, and integrate seamlessly with data services on AWS. Understand the different techniques for generating safe and reliable responses, as well as best practices for monitoring and evaluating model outputs. Finally, we explore the key generative AI deployment patterns on AWS and how they enable you to deploy multiple instances with diverse configurations, compare outputs, and evaluate performance metrics, all while maintaining enterprise-grade security.
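
As a taste of the Bedrock portion of the session, the sketch below shows one common way to invoke a Bedrock-hosted foundation model from Python using the Converse API in boto3. The region, model ID, and prompt are placeholder assumptions; any text model enabled in your Bedrock account can be substituted.

```python
# Minimal sketch: calling a Bedrock foundation model via the Converse API.
# The model ID and region are assumptions, not prescribed by the session.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize our Q3 sales report."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# The Converse API returns the assistant message in a structured payload.
print(response["output"]["message"]["content"][0]["text"])
```

Because the Converse API presents a uniform request/response shape across providers, swapping the modelId is often all that is needed to compare outputs from different foundation models, one way to achieve the pluggable-model flexibility mentioned above.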
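For the SageMaker portion, a minimal sketch of deploying an open-weight LLM to a real-time endpoint with SageMaker JumpStart follows. The model ID and instance type are assumptions for illustration; check the JumpStart catalog for currently available models and supported instances.

```python
# Minimal sketch: deploying an LLM to a SageMaker endpoint via JumpStart.
# model_id and instance_type are assumptions; adjust for your account.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-llm-mistral-7b-instruct")

# deploy() provisions a managed real-time inference endpoint.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)

# Invoke the endpoint with a text-generation payload.
result = predictor.predict({
    "inputs": "Explain retrieval-augmented generation in one sentence.",
    "parameters": {"max_new_tokens": 128},
})
print(result)
```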

Speakers: 
Santhosh Urukonda, Senior Prototyping Engineer, AWS India
Sureshkumar K V, Prototyping Engineer, AWS India