This is a repository of reference implementations for the MLPerf training benchmarks. These implementations are intended as starting points for benchmark implementations: they are not fully optimized and should not be used for "real" performance measurements of software frameworks or hardware.

Benchmarking the training performance of ML models across a wide variety of use cases, software, and hardware drives AI performance forward across the tech industry. The MLPerf Training working group draws on industry-wide expertise in AI, and in the technology that powers it, to design and create industry-standard benchmarks. Together, we create the reference implementations, rules, policies, and procedures needed to benchmark a wide variety of AI workloads.
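
MLPerf Training's core metric is time-to-train: the wall-clock time required to train a model to a predefined quality target. As an illustration only, here is a minimal Python sketch of that measurement loop; the target value, the epoch cap, and the toy train/eval stand-ins are hypothetical placeholders, not part of the actual reference harness.

    import time

    TARGET_QUALITY = 0.75   # hypothetical quality target (e.g. eval accuracy)
    MAX_EPOCHS = 100        # safety cap so an unconverged run still terminates

    def time_to_train(train_one_epoch, evaluate):
        """Train until the quality target is met; return elapsed wall-clock seconds."""
        start = time.perf_counter()
        for epoch in range(1, MAX_EPOCHS + 1):
            train_one_epoch()
            if evaluate() >= TARGET_QUALITY:
                return time.perf_counter() - start
        raise RuntimeError("quality target not reached within MAX_EPOCHS")

    # Toy stand-ins so the sketch runs end to end; a real benchmark would
    # plug in a framework-specific training step and evaluation routine.
    state = {"quality": 0.0}
    elapsed = time_to_train(
        train_one_epoch=lambda: state.update(quality=state["quality"] + 0.1),
        evaluate=lambda: state["quality"],
    )
    print(f"time to train: {elapsed:.4f} s")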

Features

  • Training benchmark roadmap
  • Documentation available
  • Training benchmark rules
  • Benchmark reference implementations
  • Benchmark results published every ~6 months
  • Examples included

Categories

Machine Learning

License

Apache License 2.0

Additional Project Details

Operating Systems

Linux, Mac, Windows

Programming Language

Python

Related Categories

Python Machine Learning Software

Registered

2024-08-16