Rapidly launch ML solutions at scale on AWS infrastructure (Level 200)

AWS offers the broadest and deepest set of AI and machine learning services for organizations of all types, sizes, and industries. In this session, we explain how to deploy your inference models on AWS, which factors to consider, and how to optimize your deployments. We share best practices and approaches to get your ML workloads running smoothly and efficiently on AWS.
Speaker: Santhosh Urukonda, Senior Prototyping Engineer, AWS India
Duration: 30 minutes

Previous Video
Deploying a Text to Image Model with Amazon SageMaker and Amazon Rekognition (Level 200)

Join this session to learn how global visual communications platform Canva built their new text-to-image fu...

Next Video
Operationalize and automate your NLP pipeline with AWS (Level 200)

NLP models often consist of hundreds of millions of model parameters, thus building, training, and optimizi...