Intel is honored to join the PyTorch* Foundation as a Premier member, and we look forward to engaging with other industry leaders to collaborate on the open source PyTorch framework and ecosystem. We believe PyTorch holds a pivotal place in accelerating AI: it enables fast application development that promotes experimentation and innovation. Joining the PyTorch Foundation underscores Intel’s commitment to accelerating enhancements to the machine-learning framework through technical contributions and to nurturing its ecosystem.
Our contributions to PyTorch began in 2018 with a clear vision: democratize access to AI through ubiquitous hardware and open software. In this blog, we highlight our ongoing efforts to advance PyTorch and its ecosystem, further enabling an “AI Everywhere” future that prioritizes innovation. We appreciate collaborating with our colleagues at Meta and other contributors from the open source community.
Advancing PyTorch* 2.0 Features through Intel Optimizations
PyTorch benefits from substantial Intel-provided optimizations for x86, including acceleration through the Intel® oneAPI Deep Neural Network Library (oneDNN), optimized ATen operators, BFloat16, and auto-mixed precision support. We have also participated actively in the design and implementation of general PyTorch features such as quantization and the compiler, contributing four significant performance features to PyTorch 2.0 (a brief usage sketch follows the list):
- Optimized TorchInductor CPU FP32 inference
- Improved Graph Neural Network (GNN) inference and training performance in PyG
- Optimized INT8 inference with a unified quantization backend for x86 CPU platforms
- Leveraged the oneDNN Graph API to accelerate inference on CPUs
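For illustration, here is a minimal sketch of how these PyTorch 2.0 features can be exercised on an x86 CPU. The torchvision ResNet-50 model is only a stand-in example; torch.compile (backed by TorchInductor), CPU autocast, and the x86 quantization backend selector are standard PyTorch 2.0 interfaces.

```python
import torch
import torchvision  # stand-in model source; any nn.Module works

model = torchvision.models.resnet50().eval()
x = torch.randn(1, 3, 224, 224)

# TorchInductor is the default torch.compile backend in PyTorch 2.0;
# on CPU this exercises the optimized FP32 inference path.
compiled = torch.compile(model)
with torch.no_grad():
    fp32_out = compiled(x)

# Auto-mixed precision with BFloat16 on CPU.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    bf16_out = compiled(x)

# Select the unified x86 quantization backend for INT8 inference.
torch.backends.quantized.engine = "x86"
```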
We are also proposing new features to include in the framework’s next release.
PyTorch Maintainers
Intel has four PyTorch maintainers (three active, one emeritus) who maintain the CPU performance modules and the compiler front end. They proactively triage issues and review pull requests (PRs) from the community, and have landed hundreds of PRs in upstream PyTorch, an impressive feat. The maintainers include:
- Mingfei Ma (mingfeima), Deep Learning Software Engineer
- Jiong Gong (Jgong5), Principal Engineer and compiler front-end maintainer
- Xiaobing Zhang (XiaobingSuper), Deep Learning Software Engineer
- Jianhui Li (Jianhui-Li), Senior Principal Engineer, now emeritus; recognized by the PyTorch community for his past contributions and expertise in AI
Collaborating with the PyTorch Community
Our maintainers actively engage with the PyTorch community to foster collaboration and innovation among AI developers, researchers, and industry experts. Key activities include:
- Triaging PyTorch GitHub issues
- Enhancing the PyTorch documentation (e.g., through the PyTorch Docathon)
- Conducting meetups and workshops to share Intel’s latest applications that incorporate PyTorch
- Publishing technical blogs, articles, and white papers
- Highlighting key technical materials via PyTorch edition videos
Furthering a PyTorch Open Ecosystem
Intel releases its newest optimizations and features in Intel® Extension for PyTorch* before they are ready to land in upstream PyTorch, giving users early access to accelerations and other benefits. The extension is based on the oneAPI multiarchitecture programming model; with just a few lines of code, you can take advantage of the most up-to-date Intel software and hardware optimizations for PyTorch, as the sketch below illustrates.
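As a minimal sketch (assuming the extension is installed, and again using a torchvision model only as a stand-in), the typical CPU flow wraps an eager-mode model with ipex.optimize:

```python
import torch
import torchvision  # stand-in model; any nn.Module works
import intel_extension_for_pytorch as ipex

model = torchvision.models.resnet50().eval()
x = torch.randn(1, 3, 224, 224)

# Apply the extension's operator and weight-layout optimizations;
# dtype=torch.bfloat16 additionally enables BF16 inference where supported.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)
```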
In addition, Intel’s PyTorch extension for GPU extends PyTorch with up-to-date features and optimizations for an extra performance boost on Intel graphics cards. It is released as an open source project on GitHub, in the xpu-master branch. For further details, please read the release notes.
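The GPU path follows the same pattern. A rough sketch, assuming an XPU-enabled build of the extension and an Intel GPU exposed as the "xpu" device, moves the model and data to that device before optimizing:

```python
import torch
import torchvision  # stand-in model; any nn.Module works
import intel_extension_for_pytorch as ipex  # assumes an XPU-enabled build

# Assumes an Intel GPU is available as the "xpu" device.
model = torchvision.models.resnet50().eval().to("xpu")
x = torch.randn(1, 3, 224, 224).to("xpu")

model = ipex.optimize(model)
with torch.no_grad():
    out = model(x)
```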
Intel also provides technical contributions to libraries in the PyTorch ecosystem such as TorchServe, PyTorch Geometric, DeepSpeed, and Hugging Face libraries (e.g., Transformers, Accelerate, Optimum).
Intel Joins Linux Foundation AI & Data Foundation
Earlier this month, Intel also joined the Linux Foundation AI & Data Foundation as a Premier member. By joining the Governing Board, Intel can contribute its rich experience in leading open innovation and nurturing developer communities, helping shape the foundation’s strategic direction for its AI and data work and accelerating the development of open source AI projects and technologies.
Get the Software
An open ecosystem drives industry innovation and acceleration, and Intel provides an expansive portfolio of AI-optimized hardware and software to empower AI Everywhere. We look forward to continued collaborations with partners to advance the PyTorch community and ecosystem.
Try out PyTorch 2.0 and realize the performance benefits for yourself.
Check out the GitHub page for tutorials and the latest Intel Extension for PyTorch release.
PyTorch Resources
Susan Kahler, PhD
AI/ML Products and Solutions, Intel Corporation
Fan Zhao
Senior AI Engineering Manager, Intel Corporation