Video: Accelerate Transformer inference with AWS Inferentia

In this video, I show you how to accelerate Transformer inference with Inferentia, a custom inference chip designed by AWS.

Starting from a Hugging Face BERT model that I fine-tuned on AWS Trainium (https://youtu.be/HweP7OYNiIA), I compile it with the Neuron SDK for Inferentia. Then, using an inf1.6xlarge instance (4 Inferentia chips, 16 NeuronCores), I show you how to use pipeline mode to predict at scale, reaching over 4,000 predictions per second at 3-millisecond latency 🤘
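
In code, the compile step looks roughly like the sketch below, assuming the PyTorch flavor of the Neuron SDK (torch-neuron). The checkpoint name, sequence length, and output file are placeholders, not the exact ones from the video; the --neuroncore-pipeline-cores compiler flag is what tells neuron-cc to shard the model across the instance's 16 NeuronCores.

```python
import torch
import torch_neuron  # PyTorch plugin from the AWS Neuron SDK (pip install torch-neuron)
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint: substitute the BERT model you fine-tuned on Trainium.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, torchscript=True)
model.eval()

# Neuron compiles for static shapes, so trace with a fixed sequence length.
inputs = tokenizer("This movie was great!", max_length=128,
                   padding="max_length", truncation=True, return_tensors="pt")
example_inputs = (inputs["input_ids"], inputs["attention_mask"])

# Compile for Inferentia, asking neuron-cc to pipeline the model across
# all 16 NeuronCores of an inf1.6xlarge.
model_neuron = torch.neuron.trace(
    model,
    example_inputs,
    compiler_args=["--neuroncore-pipeline-cores", "16"],
)
model_neuron.save("bert_neuron_pipeline.pt")

# At inference time, the compiled model loads and runs like any TorchScript module.
model_neuron = torch.jit.load("bert_neuron_pipeline.pt")
logits = model_neuron(*example_inputs)[0]
print(logits.argmax(dim=-1))
```

In pipeline mode, each NeuronCore keeps its slice of the model's weights in on-chip memory, so requests stream through the cores back-to-back instead of reloading weights at every step, which is how this setup sustains high throughput at low latency.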

Chief Evangelist, Hugging Face (https://huggingface.co)
