
Product brief

Cross-Architecture Programming with Intel® oneAPI Toolkits (Beta)

Unified, Cross-Architecture Programming Model

Simplify development and save time across multiple architectures, with uncompromised performance for diverse workloads.
Modern workloads are incredibly diverse—and so are architectures. No single architecture is best for every workload. Maximizing performance takes a mix of scalar, vector, matrix, and spatial (SVMS) architectures deployed across CPUs, GPUs, FPGAs, and future accelerators.
Intel® oneAPI products will deliver the tools you need to deploy your applications and solutions across SVMS architectures. Their set of complementary toolkits—a base kit and specialty add-ons—simplifies programming and helps you improve efficiency and innovation.
Use it for:
• High-performance computing (HPC)
• Machine learning and analytics
• IoT applications
• Video processing
• Rendering
• And more

Highlights
Data Parallel C++ Language for Direct Programming
Data Parallel C++ (DPC++) is an evolution of C++ that incorporates SYCL*.
It allows code reuse across hardware targets and enables high productivity
and performance across CPU, GPU, and FPGA architectures, while permitting
accelerator-specific tuning.
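To give a flavor of the model, here is a minimal DPC++/SYCL vector-add sketch. It is illustrative only—building it requires a SYCL-capable compiler such as the Intel oneAPI DPC++ Compiler, and the default device selection shown is an assumption about your system, not part of this brief:

```cpp
#include <sycl/sycl.hpp>
#include <vector>
#include <iostream>

int main() {
  const size_t n = 1024;
  std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

  sycl::queue q;  // default selector: picks a CPU, GPU, or FPGA emulator device
  {
    sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
    sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
    sycl::buffer<float> bc(c.data(), sycl::range<1>(n));
    q.submit([&](sycl::handler& h) {
      sycl::accessor ra(ba, h, sycl::read_only);
      sycl::accessor rb(bb, h, sycl::read_only);
      sycl::accessor wc(bc, h, sycl::write_only);
      // One work-item per element; the runtime maps this to the device.
      h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        wc[i] = ra[i] + rb[i];
      });
    });
  }  // buffer destructors copy results back to host memory

  std::cout << c[0] << '\n';
}
```

The same source can be recompiled for a CPU, GPU, or FPGA target; only the device selected for the queue changes, which is the code-reuse property the paragraph above describes.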

Libraries for API-Based Programming
Powerful libraries—including deep learning, math, and video processing—provide pre-optimized, domain-specific functions to accelerate compute-intense workloads on Intel® CPUs and GPUs.

Advanced Analysis and Debug Tools


For profiling, design advice, and debug, Intel oneAPI products include leading
analysis tools:
• Intel® VTune™ Profiler (Beta) to find performance bottlenecks fast in CPU, GPU,
and FPGA systems
• Intel® Advisor (Beta) for vectorization, threading, and accelerator offload design advice
• GDB* for efficient code troubleshooting

Get the Intel® oneAPI Base Toolkit Now >

Toolkits Tailored to Your Needs

Start with the Intel® oneAPI Base Toolkit
The Intel® oneAPI Base Toolkit is a core set of tools and libraries for building and deploying high-performance, data-centric applications across diverse architectures. It features:
• The Data Parallel C++ (DPC++) language, an evolution of C++ that allows code reuse across hardware targets—CPUs, GPUs, and FPGAs—and permits custom tuning for individual accelerators
• Domain-specific libraries and the Intel® Distribution for Python* that provide drop-in acceleration across relevant architectures
• Enhanced profiling, design assistance, and debug tools to complete the kit

Here's what you get:
• Intel® oneAPI DPC++ Compiler (Beta): Targets CPUs and accelerators using a single codebase while permitting custom tuning.
• Intel® DPC++ Compatibility Tool (Beta): Migrate CUDA* source code to DPC++ code with this assistant.
• Intel® oneAPI DPC++ Library (Beta): Speed up data-parallel workloads with these key productivity algorithms and functions.
• Intel® oneAPI Threading Building Blocks (Beta): Simplify parallelism with this advanced threading and memory-management template library.
• Intel® oneAPI Math Kernel Library (Beta): Accelerate math processing routines including matrix algebra, fast Fourier transforms (FFT), and vector math.
• Intel® oneAPI Data Analytics Library (Beta): Boost machine learning and data analytics performance.
• Intel® Distribution for Python*: Achieve fast math-intensive workload performance without code changes for data science and machine learning problems.
• Intel® VTune™ Profiler (Beta): Find and optimize performance bottlenecks across CPU, GPU, and FPGA systems.
• Intel® Advisor (Beta): Design code for efficient vectorization, threading, and offloading to accelerators.
• Intel® oneAPI Video Processing Library (Beta): Deliver fast, high-quality, real-time video decoding, encoding, transcoding, and processing for broadcasting, live streaming and VOD, cloud gaming, and more.
• Intel® oneAPI Deep Neural Network Library (Beta): Develop fast neural network frameworks on Intel CPUs and GPUs with performance-optimized building blocks.
• Intel® oneAPI Collective Communications Library (Beta): Implement optimized communication patterns for deep learning frameworks; use the components separately or together as a framework foundation.
• Intel® Integrated Performance Primitives: Speed performance of imaging, signal processing, data compression, and more.
• GDB*: Enables deep, system-wide debug of DPC++, C, C++, and Fortran code.
• Intel® FPGA Add-On for oneAPI Base Toolkit (Beta) (Optional): Program these reconfigurable hardware accelerators to speed specialized, data-centric workloads.
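As a sketch of the API-based programming style these components offer, a Threading Building Blocks parallel loop can look like the following. This assumes the oneTBB headers and library are installed (link with -ltbb); it is an illustration of the library's classic `parallel_for` pattern, not an official Intel sample:

```cpp
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>
#include <cmath>
#include <iostream>

int main() {
  std::vector<double> in(1000000, 2.0), out(in.size());

  // parallel_for splits the index range into chunks and runs them on a
  // worker-thread pool that TBB sizes to the machine automatically.
  tbb::parallel_for(tbb::blocked_range<size_t>(0, in.size()),
                    [&](const tbb::blocked_range<size_t>& r) {
                      for (size_t i = r.begin(); i != r.end(); ++i)
                        out[i] = std::sqrt(in[i]);
                    });

  std::cout << out[0] << '\n';
}
```

The template takes care of chunking, load balancing, and thread management, which is the "simplify parallelism" claim in the bullet above.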

Add Domain-Specific Toolkits for Your Specialized Workloads

Besides the Intel oneAPI Base Toolkit, which serves a broad set of developers' needs, there are four add-on toolkits that combine with it to give you the specialized tools you need:
• Intel® oneAPI HPC Toolkit (Beta): Deliver fast applications that scale, with tools to build, analyze, and optimize HPC applications using the latest techniques in vectorization, multithreading, multi-node parallelization, and memory optimization.
• Intel® oneAPI IoT Toolkit (Beta): Accelerate development of smart, connected devices for healthcare, smart homes, aerospace, security, and more.
• Intel® oneAPI Rendering Toolkit (Beta): Get powerful rendering and ray-tracing libraries for high-fidelity visualization applications—for medical research, geophysical exploration, movie-making, and more—that require massive amounts of raw data to be quickly rendered into rich, realistic visuals.
• Intel® oneAPI DL Framework Developer Toolkit (Beta): Develop new—or customize existing—deep learning frameworks using common APIs, optimized for high performance on Intel CPUs and GPUs for either single-node or multi-node distributed processing.

There are three more toolkits closely related to oneAPI:
• Intel® Distribution of OpenVINO™ Toolkit: Accelerate deep learning inference and seamlessly deploy intelligent solutions across Intel® platforms and accelerators through this toolkit powered by oneAPI components.
• Intel® AI Analytics Toolkit (Beta): Achieve end-to-end performance for AI workloads with this toolkit powered by oneAPI. Accelerate each step in the pipeline—training deep neural networks, integrating trained models into applications for inference, and executing functions for data science and analytics.
• Intel® System Bring-Up Toolkit (Beta): Strengthen system reliability and optimize system power and performance with this collection of debug, trace, and power and performance analysis tools that let you quickly debug and analyze the entire platform.

Try Your Code in the Intel® DevCloud
Develop, run, and optimize your Intel oneAPI code in the Intel® DevCloud—a free development sandbox with access to the latest Intel CPU, GPU, and FPGA hardware and Intel oneAPI software.

Get Started
• Learn More about Intel oneAPI Products >
• Get the Intel oneAPI Base Toolkit >
• Check out the Intel DevCloud >
* Other names and brands may be claimed as the property of others.


**Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance.
Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2,
SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by
Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel
microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any
warranty arising from course of performance, course of dealing, or usage in trade. This document contains information on products, services and/or processes in development. All information
provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps. The products and services described
may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request. Copies of documents which
have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm. For more information regarding performance and optimization choices in Intel® Software Development Products, see our Optimization Notice: https://software.intel.com/articles/optimization-notice#opt
Copyright © 2019, Intel Corporation. All rights reserved. Intel, the Intel logo, Intel Inside, Intel Atom, Intel Core, Intel VTune, and Intel Xeon are trademarks of Intel Corporation in the U.S. and/or
other countries.
1019/SS Please Recycle