
Fully managed ML deployments on AWS (Level 200)

AWS offers the broadest choice of powerful compute, high-speed networking, and scalable high-performance storage options for any machine learning (ML) project or application. You can manage the ML infrastructure yourself in a do-it-yourself approach or use a fully managed approach with Amazon SageMaker. In this session, we explore how to deploy your inference models on AWS, which factors to consider, and how to optimize those deployments. We share best practices and approaches for running your ML workloads smoothly and efficiently on AWS.
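As a rough illustration of the fully managed path mentioned above, here is a minimal sketch of deploying a trained model artifact to a real-time SageMaker endpoint with the SageMaker Python SDK. The S3 path, IAM role ARN, entry-point script, and framework/Python versions are placeholder assumptions, not values from the session; this is not the speaker's specific walkthrough.

```python
from sagemaker.pytorch import PyTorchModel

# Placeholder model artifact and execution role -- replace with your own.
model = PyTorchModel(
    model_data="s3://my-bucket/models/model.tar.gz",   # hypothetical S3 location
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # hypothetical role
    entry_point="inference.py",      # hypothetical inference handler script
    framework_version="2.1",         # example PyTorch version supported by SageMaker
    py_version="py310",
)

# Deploy to a managed real-time endpoint; instance type/count are tuning knobs
# the session discusses (cost vs. latency vs. throughput).
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Invoke the endpoint, then clean up to avoid ongoing charges.
result = predictor.predict({"inputs": [1.0, 2.0, 3.0]})
print(result)
predictor.delete_endpoint()
```

The same deploy/predict pattern applies to other framework model classes in the SDK; the main deployment decisions are the instance type, instance count, and whether a real-time, serverless, or asynchronous endpoint best fits the workload.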

Eshaan Anand, Senior Partner Solutions Architect, AWS
