
Description
Torch7 has OpenCL support via @hughperkins' work (cltorch/clnn). If PyTorch is based on the same backends as Lua Torch, how hard would it be to port that OpenCL support over and get PyTorch running on virtually all modern GPUs and integrated graphics?
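For illustration, here is a minimal sketch of what user-facing OpenCL support could look like if it simply mirrored PyTorch's existing CUDA tensor API. The `torch.cl` / `.cl()` names below are purely hypothetical placeholders for this request; only the CUDA calls exist today.

```python
import torch

x = torch.randn(64, 128)

if torch.cuda.is_available():
    x = x.cuda()                 # existing CUDA path (real API)
# elif torch.cl.is_available():  # hypothetical OpenCL equivalent
#     x = x.cl()                 # would cover AMD, Intel and integrated GPUs

# Model code stays identical regardless of which backend the tensor lives on.
y = x.mm(x.t())
```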
Deep learning needs a more accessible beginners' experience, so running on integrated graphics would help win early mindshare. DL also needs cheaper hardware; NVIDIA's effective monopoly and crazy prices are a hard and unnecessary tax.
Further, there are higher-level abstractions like Keras that currently only support CUDA because the lower-level libraries only support CUDA. If PyTorch ported Torch7's OpenCL backend, then building a Keras backend on top of it would also be a step towards bringing OpenCL to the large body of code already written for Keras.
The first Python OpenCL framework for DL will win a lot of credibility at beginners' workshops and in the creative-AI space, where ubiquitous hardware is a must. OpenCL support should be a priority, in my opinion, for any new framework in this space. TensorFlow is already working on OpenCL, so perhaps PyTorch will miss this window of opportunity. I'd love to see both hit the milestone at once, to encourage healthy competition.