SageMaker Fridays Season 3, Episode 5 — NLP at scale with Hugging Face and distributed training
1 min read · Apr 17, 2021
In this episode, we use state-of-the-art natural language processing models from the Hugging Face collection. We then fine-tune BERT on a sentiment analysis dataset and run predictions with the fine-tuned model. Finally, we show you how to scale your training jobs with data parallelism and model parallelism.
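For reference, here is a minimal sketch of what launching such a job can look like with the SageMaker Python SDK's HuggingFace estimator and the SageMaker data parallelism library. The training script name, S3 paths, framework versions, and hyperparameters below are placeholders, not the exact notebook used in the episode.

```python
# Minimal sketch: fine-tuning a Hugging Face model on SageMaker with
# SageMaker distributed data parallelism. Script names, S3 paths, versions,
# and hyperparameters are assumptions for illustration only.
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()

huggingface_estimator = HuggingFace(
    entry_point="train.py",          # hypothetical fine-tuning script using the Trainer API
    source_dir="./scripts",          # hypothetical directory containing the script
    role=role,
    instance_type="ml.p3.16xlarge",  # multi-GPU instance type required for data parallelism
    instance_count=2,                # scale out across two instances
    transformers_version="4.6",      # example versions; use a supported combination
    pytorch_version="1.7",
    py_version="py36",
    hyperparameters={
        "model_name_or_path": "bert-base-uncased",
        "epochs": 3,
        "train_batch_size": 32,
    },
    # Enable the SageMaker data parallelism library; for models too large to fit
    # on a single GPU, you would configure model parallelism instead.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

# Hypothetical S3 locations holding the tokenized sentiment analysis dataset
huggingface_estimator.fit({
    "train": "s3://my-bucket/sentiment/train",
    "test": "s3://my-bucket/sentiment/test",
})
```

With this setup, the estimator packages the training script, provisions the instances, and runs the job across all GPUs; switching between data parallelism and model parallelism is mostly a matter of changing the `distribution` configuration.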