
Add Habana Gaudi (HPU) Support #574

Open · wants to merge 8 commits into base: main

Conversation

BartoszBLL

This pull request adds support for running inference on Habana Gaudi (HPU) processors by introducing a new directory dedicated to Gaudi-specific implementation. It includes setup instructions, scripts for downloading GPT-2 models, a Jupyter notebook for running inference, and necessary supporting files.

Changes Introduced

  • New directory: setup/05_accelerator_processors/01_habana_processing_unit/
  • Documentation:
    • README.md: Instructions for setting up and running GPT-2 inference on Habana Gaudi.
  • Notebook:
    • inference_on_gaudi.ipynb: Jupyter notebook demonstrating how to run inference on Gaudi, including performance comparisons against CPU.

Key Features

  • Provides setup instructions for installing necessary drivers and libraries.
  • Links to Habana documentation for further reading.
  • Implements inference workflow optimized for Habana Gaudi.
  • Includes performance monitoring tools for CPU vs. HPU comparisons.
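The CPU vs. HPU performance comparison mentioned above can be sketched with a small timing helper. This is a hypothetical illustration rather than code from the PR, and the function name `time_inference` is invented here:

```python
import time

def time_inference(fn, warmup=2, iters=10):
    """Average wall-clock seconds per call of `fn`, after warm-up runs.

    Warm-up iterations are excluded so one-time costs (e.g. graph
    compilation on HPU, cache population on CPU) do not skew the
    measured average.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters
```

In a comparison like the one the notebook describes, the same inference callable would be timed once with the model placed on the CPU and once on the HPU, and the two averages reported side by side.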

Testing

  • Verified inference runs successfully on Gaudi HPU.


rasbt (Owner) commented Apr 4, 2025

Hi @BartoszBLL, thanks for the PR, and sorry for the slow response. I haven't used Habana Gaudi accelerators yet and wanted to think about this carefully.

So, it looks like this PR supplies alternative code files for people who want to use Gaudi accelerators. I think this may be a better fit for an external GitHub repo you can set up, which we can then suggest in the GitHub Discussions' Show and Tell section.

In addition, rather than just supplying the code, I think it would be interesting to explain how the existing code can be (or needs to be) adjusted to work on Habana Gaudi chips. This could be done for, e.g., Chapter 5. What I have in mind is something similar to how this is structured: https://github.com/rasbt/LLMs-from-scratch/tree/main/ch05/10_llm-training-speed

Specifically, the bonus section folder could contain the original code and the Habana-modified code of Chapter 5 so that readers can check out the changes side-by-side and understand what changes they need to make to use Habana accelerators. Then, this could also include a side-by-side comparison running the code on CPU, HPU, and GPU. What do you think?
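For readers wondering what such a side-by-side change might involve: in stock PyTorch code, the main adjustment is typically the device selection, since Habana's PyTorch bridge registers an `hpu` device once `habana_frameworks.torch.core` is imported. Below is a minimal sketch under that assumption; the helper name `select_device` is our own, and the function simply falls back to CPU when the Habana stack is not installed:

```python
import importlib.util

def select_device() -> str:
    """Return 'hpu' when the Habana PyTorch bridge is installed, else 'cpu'.

    On a machine with the Habana stack, the caller would then do roughly:
        import habana_frameworks.torch.core  # registers the 'hpu' device
        model.to(select_device())
    On any other machine, the unmodified CPU code path runs unchanged.
    """
    if importlib.util.find_spec("habana_frameworks") is not None:
        return "hpu"
    return "cpu"
```

Keeping the device string in one place like this is what makes a side-by-side diff of the original and HPU-adapted Chapter 5 code small and easy to read.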
