This is a repository of reference implementations for the MLPerf training benchmarks. These implementations serve as starting points for benchmark submissions, but they are not fully optimized and are not intended for "real" performance measurements of software frameworks or hardware.

Benchmarking the performance of training ML models across a wide variety of use cases, software, and hardware drives AI performance across the tech industry. The MLPerf Training working group draws on AI expertise and the technology that powers AI from across the industry to design and create industry-standard benchmarks. Together, we create the reference implementations, rules, policies, and procedures to benchmark a wide variety of AI workloads.
Features
- Training benchmark roadmap
- Documentation available
- Training benchmark rules
- Benchmark reference implementations
- Benchmark results published every ~6 months
- Examples included