Julien Simon

Pinned

Next public talks

Here’s the current list of public events I’ll be speaking at. I will keep it as up-to-date as possible! I’m always open to speaking at public events (online or in-person). I’m also happy to explore opportunities for in-house talks. Don’t hesitate to get in touch with details at julsimon@huggingface.co. May…

3 min read


May 17

Video: Transformer training shootout, part 2: AWS Trainium vs. NVIDIA V100

In this video, I compare the cost/performance of AWS Trainium with the NVIDIA V100 GPU. I first launch a trn1.32xlarge instance (16 Trainium chips) and a p3dn.24xlarge (8 V100s). Then, I run 3 benchmarks: language pretraining with GPT2, token classification with BERT Large, and image classification with the Vision Transformer. The results? Trainium is 2 to 5x faster, and 3 to 8x cheaper!

Deep Learning

1 min read


May 11

Video: Accelerating Transformers with Optimum Neuron, AWS Trainium and AWS Inferentia2

In this video, I show you how to accelerate training and inference for Hugging Face models on AWS Trainium and AWS Inferentia2. Thanks to our Optimum Neuron library, a single line of code is all it takes to enjoy amazing cost-performance. Give it a try!
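For readers who want to try this, here is a minimal sketch of the kind of one-line swap the video refers to, assuming a trn1 Trainium instance with a recent version of the optimum-neuron package installed; the model and dataset names below are illustrative, not taken from the video.

```python
# Sketch: fine-tuning a Hugging Face model on AWS Trainium with Optimum Neuron.
# Assumes a trn1 instance with optimum-neuron installed; model/dataset are illustrative.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.neuron import NeuronTrainer, NeuronTrainingArguments  # drop-in replacements

model_id = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

dataset = load_dataset("imdb")
dataset = dataset.map(
    lambda x: tokenizer(x["text"], truncation=True, padding="max_length"), batched=True
)

# The Trainium-specific change: use the Neuron trainer classes instead of
# transformers.Trainer / TrainingArguments. The rest of the script is unchanged.
trainer = NeuronTrainer(
    model=model,
    args=NeuronTrainingArguments(
        output_dir="out", per_device_train_batch_size=8, num_train_epochs=1
    ),
    train_dataset=dataset["train"],
)
trainer.train()
```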

Transformers

1 min read


May 9

Video: Keynote @ PyCon Sweden 2022

In this session, after a brief intro, I use Hugging Face AutoTrain to train a food image classification model on the food101 dataset. Then, using a Stable Diffusion model, I generate images to add a new food class to the dataset (meatballs, of course) and I retrain the model with Python and the Hugging Face Transformers library. Finally, I deploy the model with Hugging Face Inference Endpoints. And of course, I answer questions!
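The data-augmentation step is easy to reproduce at home. Here is a short sketch of generating synthetic images for a new class with the Hugging Face Diffusers library, assuming a GPU machine; the model ID, prompt, and output paths are illustrative and not the exact ones used in the talk.

```python
# Sketch: augmenting an image-classification dataset with synthetic images.
# Assumes a CUDA GPU and the diffusers library; model ID and prompt are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate a small batch of images for a new "meatballs" class,
# then retrain the classifier on the extended dataset.
for i in range(16):
    image = pipe("a photo of a plate of meatballs, food photography").images[0]
    image.save(f"food101_extra/meatballs/{i:03d}.png")
```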

Transformers

1 min read


Apr 28

Video: HuggingCast episode 1

In this new Hugging Face webcast, we discuss HuggingChat, our new open-source chatbot, as well as the renewed partnership with AWS. We also show you how to accelerate Transformer training with AWS Trainium, and inference with AWS Inferentia2. And of course, we answer as many questions as possible!

Artificial Intelligence

1 min read


Apr 14

Video: Accelerate Transformer inference with AWS Inferentia 2

AWS Inferentia2 is now generally available, and I couldn’t resist testing it with BERT models and comparing results with Inferentia1. This thing is FAST and looks very cost-effective. Check it out!
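If you want to reproduce this kind of test, here is a rough sketch of exporting a BERT-family model for Inferentia2 with Optimum Neuron, assuming an inf2 instance with optimum-neuron installed; the model ID and input shapes are illustrative, not the exact benchmark setup from the video.

```python
# Sketch: compiling and running a classification model on AWS Inferentia2.
# Assumes an inf2 instance with optimum-neuron installed; model and shapes are illustrative.
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# export=True compiles the model to a Neuron graph with static input shapes.
model = NeuronModelForSequenceClassification.from_pretrained(
    model_id, export=True, batch_size=1, sequence_length=128
)

# Inputs must be padded to the static sequence length used at compile time.
inputs = tokenizer(
    "Inferentia2 is fast!", return_tensors="pt", padding="max_length", max_length=128
)
print(model(**inputs).logits)
```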

AWS

1 min read


Apr 5

Video: Summarizing legal documents with Hugging Face and Amazon SageMaker

Real-life generative AI! In this video, I show you how to fine-tune a Google FLAN-T5 model to summarize legal text. We first deploy the model straight from the Hugging Face hub to Amazon SageMaker, and we evaluate it on legal data. Then, using GPU instances managed by SageMaker, we fine-tune the model with a Hugging Face script and we deploy it again.
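For reference, deploying a model straight from the Hub to a SageMaker endpoint only takes a few lines. The sketch below assumes a SageMaker environment with an execution role; the model ID, container versions, and instance type are illustrative and may differ from the ones used in the video.

```python
# Sketch: deploying a summarization model from the Hugging Face Hub to Amazon SageMaker.
# Assumes a SageMaker environment; versions, instance type, and model ID are illustrative.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()

hub_config = {
    "HF_MODEL_ID": "google/flan-t5-base",  # illustrative; the video fine-tunes a FLAN-T5 model
    "HF_TASK": "summarization",
}

model = HuggingFaceModel(
    env=hub_config,
    role=role,
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.xlarge")
print(predictor.predict({"inputs": "The parties hereby agree that ..."}))
```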

Generative Ai

1 min read


Apr 3

Video: Accelerating Stable Diffusion Inference on Intel CPUs with Hugging Face (part 2)

More speed! In this video, you will learn how to accelerate image generation with an Intel Sapphire Rapids server. Using Stable Diffusion models, the Hugging Face Diffusers library, the Intel Extension for PyTorch and system-level optimizations, we’re going to cut inference latency from 36+ seconds to 5 seconds!
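Here is a condensed sketch of the kind of optimization shown in the video, assuming an Intel Sapphire Rapids host with the intel_extension_for_pytorch package installed; the model ID, prompt, and step count are illustrative.

```python
# Sketch: accelerating Stable Diffusion on CPU with the Intel Extension for PyTorch (IPEX).
# Assumes a Sapphire Rapids CPU with bfloat16 support; model ID and prompt are illustrative.
import torch
import intel_extension_for_pytorch as ipex
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Optimize the most expensive components with IPEX, using bfloat16 on CPU.
pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16, inplace=True)
pipe.vae = ipex.optimize(pipe.vae.eval(), dtype=torch.bfloat16, inplace=True)
pipe.text_encoder = ipex.optimize(pipe.text_encoder.eval(), dtype=torch.bfloat16, inplace=True)

# Run inference under bfloat16 autocast.
with torch.inference_mode(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    image = pipe("sailing ship in storm by Rembrandt", num_inference_steps=25).images[0]
image.save("ship.png")
```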

Generative Ai

1 min read


Mar 31

Video: Accelerating Stable Diffusion Inference on Intel CPUs with Hugging Face

In this video, you will learn how to accelerate image generation with an Intel Sapphire Rapids server. Using Stable Diffusion models, the Hugging Face Optimum Intel library and Intel OpenVINO, we’re going to cut inference latency from 36+ seconds to 4.5 seconds!
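The OpenVINO path is just as compact. Below is a sketch using Optimum Intel, assuming the OpenVINO extra of the optimum package is installed; the model ID, prompt, and static shapes are illustrative.

```python
# Sketch: running Stable Diffusion on CPU with Optimum Intel and OpenVINO.
# Assumes optimum with the OpenVINO extra installed; model ID and prompt are illustrative.
from optimum.intel.openvino import OVStableDiffusionPipeline

# export=True converts the PyTorch checkpoint to the OpenVINO IR format on the fly.
pipe = OVStableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", export=True)

# Fixing the input shapes lets OpenVINO apply additional graph optimizations.
pipe.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1)
pipe.compile()

image = pipe("sailing ship in storm by Rembrandt", num_inference_steps=25).images[0]
image.save("ship.png")
```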

Deep Learning

1 min read


Mar 31

Podcast: Hugging Face and Intel

This week, I had the pleasure of discussing the amazing collaboration between Hugging Face and Intel on accelerating Transformer training and inference. We also talked about generative AI and quite a few other things :) Listen to the episode: Code Together, "Hugging Face and Intel: Driving Towards Practical, Faster, Democratized and Ethical AI" (podcasts.apple.com).

Intel

1 min read

Julien Simon

6.8K Followers

Chief Evangelist, Hugging Face (https://huggingface.co)

