End-to-end MLOps with Amazon SageMaker and GitHub Actions (Level 300)

When you move your machine learning (ML) workloads into production, you need automated model retraining and deployment pipelines. But building CI/CD around ML workflows and incorporating best practices such as source and version control, automatic triggers, and secure deployments can be challenging. In this session, we share how to operationalize and maintain your ML models in production efficiently with Amazon SageMaker Pipelines, bringing CI/CD to ML and reducing the months of coding previously required to just a few hours. We demonstrate how to build and automate these workflows with third-party tools such as GitHub Actions.
Speakers: 
Romina Sharifpour, Senior Solutions Architect, AWS
Pooya Vahidi, Enterprise Solutions Architect, AWS

Duration: 30 minutes
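As a rough illustration of the kind of automation covered in the session, the sketch below shows how a GitHub Actions workflow step might start an execution of an existing SageMaker pipeline using boto3. The pipeline name, display name, and parameters (InputDataUrl, ModelApprovalStatus) are placeholder assumptions for illustration, not details from this session.

```python
# Minimal sketch: a script a GitHub Actions workflow step could run to kick off
# a retraining run of an already-registered SageMaker pipeline.
# Assumes AWS credentials are available to the runner (e.g. via OIDC or secrets).
import boto3


def trigger_retraining(pipeline_name: str = "my-model-build-pipeline") -> str:
    """Start an execution of a registered SageMaker pipeline and return its ARN."""
    sm = boto3.client("sagemaker")
    response = sm.start_pipeline_execution(
        PipelineName=pipeline_name,
        PipelineExecutionDisplayName="github-actions-trigger",
        PipelineParameters=[
            # Hypothetical parameters; match these to your own pipeline definition.
            {"Name": "InputDataUrl", "Value": "s3://my-bucket/training-data/"},
            {"Name": "ModelApprovalStatus", "Value": "PendingManualApproval"},
        ],
    )
    return response["PipelineExecutionArn"]


if __name__ == "__main__":
    print(f"Started pipeline execution: {trigger_retraining()}")
```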
