Build, train, deploy, and operationalize Hugging Face models on Amazon SageMaker (Level 200)

The field of natural language processing (NLP) is developing rapidly, and NLP models are growing increasingly large and complex. Through its ecosystem partnership with Hugging Face and its advanced distributed training capabilities, Amazon SageMaker is one of the easiest platforms on which to train NLP models quickly. In this session, learn how to train a model from the Hugging Face Transformers library with just a few lines of code, using PyTorch or TensorFlow together with SageMaker's distributed training libraries. A brief code sketch follows the session details below.
Speaker: Tapan Hoskeri, Principal Solutions Architect, AWS India
Duration: 30 minutes
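
The "few lines of code" claim comes down to wrapping a standard Transformers training script in the SageMaker Python SDK's HuggingFace estimator. The sketch below shows roughly what that looks like; the script name, S3 paths, instance type, hyperparameters, and framework version strings are illustrative assumptions rather than values from the session, so match them to your own script and to the version combinations the SDK supports.

```python
# Minimal sketch: launch a Hugging Face training job on Amazon SageMaker.
# Assumes you run this from a SageMaker notebook/Studio environment where
# get_execution_role() can resolve an IAM role with SageMaker permissions.
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()

huggingface_estimator = HuggingFace(
    entry_point="train.py",          # your Transformers training script (assumed name)
    source_dir="./scripts",          # directory containing train.py (assumed path)
    instance_type="ml.p3.2xlarge",   # GPU instance; pick one that fits your model
    instance_count=1,
    role=role,
    transformers_version="4.26",     # assumed versions; check the SDK's
    pytorch_version="1.13",          # supported Transformers/PyTorch/Python combinations
    py_version="py39",
    hyperparameters={
        "epochs": 3,
        "train_batch_size": 32,
        "model_name": "distilbert-base-uncased",
    },
    # To use SageMaker's data-parallel distributed training library, pass a
    # distribution config and a supported multi-GPU instance type, e.g.:
    # distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

# Start training; the S3 channel URIs below are placeholders.
huggingface_estimator.fit({
    "train": "s3://my-bucket/train",
    "test": "s3://my-bucket/test",
})
```

Under this setup, SageMaker provisions the training instances, runs the script inside a prebuilt Hugging Face deep learning container, and tears the instances down when the job completes, which is what keeps the user-facing code to a handful of lines.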
