
Rapidly launch ML solutions at scale on AWS infrastructure (Level 200)

AWS offers the broadest and deepest set of services for quickly building and launching AI and machine learning solutions for organizations, businesses, and industries of all types. In this session, we explain how to deploy your inference models on AWS, which factors to consider, and how to optimize those deployments. We also share best practices and approaches to get your ML workloads running smoothly and efficiently on AWS.
Speaker: Santhosh Urukonda, Senior Prototyping Engineer, AWS India
Duration: 30 minutes
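
As a rough illustration of the kind of deployment the session covers, the sketch below hosts a trained PyTorch model on an Amazon SageMaker real-time endpoint using the SageMaker Python SDK. The bucket name, IAM role ARN, entry-point script, framework versions, and instance type are placeholder assumptions for illustration, not details taken from the talk.

# Minimal sketch: deploy a trained PyTorch model as a real-time
# SageMaker inference endpoint. All resource names below are placeholders.
import sagemaker
from sagemaker.pytorch import PyTorchModel

session = sagemaker.Session()

model = PyTorchModel(
    model_data="s3://example-bucket/models/model.tar.gz",  # placeholder model artifact
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder IAM role
    framework_version="2.1",   # assumed container version; use one supported in your Region
    py_version="py310",
    entry_point="inference.py",  # hypothetical inference handler script
    sagemaker_session=session,
)

# Deploy to a single instance; instance type and count are the kind of
# deployment factors the session discusses when optimizing for cost and latency.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)

# Invoke the endpoint with a sample payload, then clean up the endpoint.
print(predictor.predict([[1.0, 2.0, 3.0]]))
predictor.delete_endpoint()

Choices such as the instance type and count above, and whether a real-time endpoint is the right fit at all, are examples of the deployment factors and optimizations the session refers to.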