
Get Started with the Intel® oneAPI Base Toolkit for Linux*

Contents
Get Started with the Intel® oneAPI Base Toolkit for Linux*
  Configure Your CPU or GPU System
  Configure Your FPGA System
  Build and Run a Sample Project Using the Command Line
    Next Steps
      SSH: Password-less Access to Remote Linux* Target
      SSH: Running Applications Built with Eclipse*
  Build and Run a Sample Project Using Eclipse*
    Next Steps
      SSH: Password-less Access to Remote Linux* Target
      SSH: Running Applications Built with Eclipse*
  Build and Run a Sample Project Using Visual Studio Code
    Next Steps
      SSH: Password-less Access to Remote Linux* Target
      SSH: Running Applications Built with Eclipse*
  Using Containers
    Using Containers with the Command Line
    Using Containers with Eclipse*
  Using Cloud CI Systems
    Build and Run an Intel® oneAPI DL Framework Developer Toolkit Sample Using the Command Line
  Troubleshooting
  Next Steps
    SSH: Password-less Access to Remote Linux* Target
    SSH: Running Applications Built with Eclipse*
  Notices and Disclaimers


Get Started with the Intel® oneAPI Base Toolkit for Linux*
The following instructions assume you have installed the Intel® oneAPI software. Please see the Intel oneAPI
Toolkits page for installation options.

Follow These Steps for the Intel® oneAPI Base Toolkit:


1. Configure Your System.
2. Build and Run a sample project using one of these methods:
• Command Line
• Eclipse*
• Visual Studio Code*
3. After you have run a sample, learn more about the Intel® oneAPI Base Toolkit (Base Kit) in Next Steps.
An offline copy of this Get Started guide is available on the Downloadable Documentation page.

Choose Your Toolkit

NOTE If you are using the Intel® oneAPI Base Toolkit, go to Configure Your System, as described in
step 1 above.

If you are using a toolkit other than the Base Kit, follow the links below to find steps to configure your
system and run a sample project:
• Get Started with the Intel® oneAPI HPC Toolkit for Linux*
• Get Started with the Intel® oneAPI IoT Toolkit for Linux*
• Get Started with the Intel® AI Analytics Toolkit for Linux*
• Get Started with the Intel® oneAPI Deep Learning Framework Developer Toolkit for Linux*
• Get Started with the Intel® System Bring Up Toolkit for Linux*
• Get Started with the Intel® oneAPI Rendering Toolkit for Linux*

Configure Your CPU or GPU System


Intel® oneAPI Base Toolkit

NOTE If you are working with an FPGA system, see Configure Your FPGA System.

• CPU, GPU and FPGA users, install CMake* and pkg-config to build most samples
• Set environment variables for CPU/GPU
• For GPU users, install GPU drivers
• For GPU users, disable hangcheck for applications with long-running GPU compute workloads
• For GPU non-root users, add a user to the video group
• For Eclipse* users, install a recent release of the Eclipse IDE for C/C++ Developers, then install the
oneAPI Eclipse plugins.


Install CMake*, pkg-config, and GNU* Dev Tools to Build Samples


Although the CMake and pkg-config build tools are not required by the oneAPI tools and toolkits, many
oneAPI samples are provided as CMake projects and require CMake to build them. In some cases pkg-config
is necessary to locate libraries needed to complete a build of the application.
The Intel compilers utilize the existing GNU build toolchains to provide a complete C/C++ development
environment. If your distribution of Linux does not include the complete suite of GNU development tools, you
need to install these tools.
To install CMake, pkg-config, and the GNU development tools on your Linux system, open a terminal session
and enter the following commands:
Ubuntu*

sudo apt update


sudo apt -y install cmake pkg-config build-essential
Red Hat* and Fedora*

sudo yum update


sudo yum -y install cmake pkgconfig
sudo yum groupinstall "Development Tools"
SUSE*

sudo zypper update


sudo zypper --non-interactive install cmake pkg-config
sudo zypper --non-interactive install pattern devel_C_C++
Verify the installation by displaying the installation location with this command:

which cmake pkg-config make gcc g++


One or more of these locations will display:
/usr/bin/cmake
/usr/bin/pkg-config
/usr/bin/make
/usr/bin/gcc
/usr/bin/g++
For more information about CMake, refer to CMake.org. If you are unable to install CMake using your Linux
distribution's standard package manager, see the CMake.org downloads page for additional installation
options.
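As an optional extra check, you can also confirm the installed versions; this is a minimal sketch and the exact version numbers will vary by distribution:

cmake --version        # prints the installed CMake version
gcc --version          # prints the GNU C compiler version
pkg-config --version   # prints the pkg-config version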

Set Environment Variables for CLI Development


For working at a Command Line Interface (CLI), the tools in the oneAPI toolkits are configured via
environment variables. Set up your CLI environment by sourcing the setvars script:

Option 1: Source setvars.sh once per session


Source setvars.sh every time you open a new terminal window:

You can find the setvars.sh script in the root folder of your oneAPI installation, which is typically /opt/intel/oneapi/ for a system-wide installation and ~/intel/oneapi/ for a private installation. Note that a system-wide installation requires root or sudo privileges.
For system wide installations:
For system wide installations:

. /opt/intel/oneapi/setvars.sh


For private installations:

. ~/intel/oneapi/setvars.sh
Option 2: One-time setup for setvars.sh
Environment variables can be set automatically using one of the methods below:
• Use a startup script, as described on this page
• Use modulefiles
• Use a setvars.sh configuration file
To have the environment set up automatically for your projects, include the command source <install_dir>/setvars.sh in a startup script where it will be invoked automatically (replace <install_dir> with the path to your oneAPI install location). The default installation locations are /opt/intel/oneapi/ for system-wide installations and ~/intel/oneapi/ for private installations.
For example, you can add the source <install_dir>/setvars.sh command to your ~/.bashrc, ~/.bash_profile, or ~/.profile file. To make the settings permanent for all accounts on your system, create a one-line .sh script in your system's /etc/profile.d folder that sources setvars.sh (for more details, see the Ubuntu documentation on Environment Variables).
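A minimal sketch of such a script, assuming a default system-wide installation and a hypothetical file name oneapi-vars.sh (creating files in /etc/profile.d requires root):

# contents of /etc/profile.d/oneapi-vars.sh (hypothetical file name)
source /opt/intel/oneapi/setvars.sh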

NOTE
The setvars.sh script can be managed using a configuration file, which is especially helpful if you need
to initialize specific versions of libraries or the compiler, rather than defaulting to the "latest" version.
For more details, see Using a Configuration File to Manage Setvars.sh. If you need to set up the environment in a non-POSIX shell, see oneAPI Development Environment Setup for more configuration options.

GPU: Disable Hangcheck


This section applies only to applications with long-running GPU compute workloads in native environments. It is not recommended for virtualized environments or other standard uses of the GPU, such as gaming.
A workload that takes more than four seconds for the GPU hardware to execute is a long-running workload. By default, individual threads that qualify as long-running workloads are considered hung and are terminated. Disabling the hangcheck timeout period avoids this problem.

NOTE If the system is rebooted, hangcheck is automatically enabled. You must disable hangcheck
again after every reboot or follow the directions to disable hangcheck persistently (across multiple
reboots).

NOTE If the kernel is updated, hangcheck is automatically enabled. Run the procedure below after
every kernel update to ensure hangcheck is disabled.

1. Open a terminal.
2. Open the grub file in /etc/default.
3. In the grub file, find the line GRUB_CMDLINE_LINUX_DEFAULT="" .
4. Enter this text between the quotes (""):

i915.enable_hangcheck=0
5. Run this command:

sudo update-grub


6. Reboot the system. Hangcheck remains disabled.
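After the reboot, you can verify that the change took effect; this is a minimal check, assuming the i915 graphics driver is in use on your system:

cat /proc/cmdline                                    # should include i915.enable_hangcheck=0
cat /sys/module/i915/parameters/enable_hangcheck     # prints N when hangcheck is disabled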

For GPU Users, Install GPU Drivers


If you followed the instructions in the Installation Guide to install GPU Drivers, you may skip this step. If you
have not installed the drivers, follow the directions in the Installation Guide.

GPU: Add User to Video Group


For GPU compute workloads, non-root (normal) users do not typically have access to the GPU device.
1. To give a user access, first determine which group name is assigned ownership of the render nodes:

stat -c "%G" /dev/dri/render*

In this example, the video group is assigned ownership of the render nodes.

2. Determine whether the current user is a member of that render node group:

groups ${USER}

In this example, the current user is not included in the video group.

3. Add the user to the group using gpasswd:

sudo gpasswd -a ${USER} video


4. Activate the video group:

newgrp video
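To confirm the change, you can re-check the group membership and the render node ownership; a quick sanity check, assuming the video group owns the render nodes as in the example above:

groups ${USER}                       # the video group should now appear in the list
stat -c "%G %n" /dev/dri/render*     # shows the owning group for each render node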

Next Step
Run a sample project using one of these methods:
• Command Line
• Eclipse*
• Visual Studio Code


Configure Your FPGA System


NOTE If you are working with a CPU or GPU system, see Configure Your CPU or GPU System .

To set up your system, you need to:


• Install CMake* and pkg-config to build most samples
• Set environment variables

Install CMake*, pkg-config, and GNU* Dev Tools to Build Samples


Although the CMake and pkg-config build tools are not required by the oneAPI tools and toolkits, many
oneAPI samples are provided as CMake projects and require CMake to build them. In some cases pkg-config
is necessary to locate libraries needed to complete a build of the application.
The Intel compilers utilize the existing GNU build toolchains to provide a complete C/C++ development
environment. If your distribution of Linux does not include the complete suite of GNU development tools, you
need to install these tools.
To install CMake, pkg-config, and the GNU development tools on your Linux system, open a terminal session
and enter the following commands:
Ubuntu*

sudo apt update


sudo apt -y install cmake pkg-config build-essential
Red Hat* and Fedora*

sudo yum update


sudo yum -y install cmake pkgconfig
sudo yum groupinstall "Development Tools"
SUSE*

sudo zypper update


sudo zypper --non-interactive install cmake pkg-config
sudo zypper --non-interactive install pattern devel_C_C++
Verify the installation by displaying the installation location with this command:

which cmake pkg-config make gcc g++


One or more of these locations will display:
/usr/bin/cmake
/usr/bin/pkg-config
/usr/bin/make
/usr/bin/gcc
/usr/bin/g++
For more information about CMake, refer to CMake.org. If you are unable to install CMake using your Linux
distribution's standard package manager, see the CMake.org downloads page for additional installation
options.


Set Environment Variables for CLI Development


For working at a Command Line Interface (CLI), the tools in the oneAPI toolkits are configured via
environment variables. Set up your CLI environment by sourcing the setvars script:

Option 1: Source setvars.sh once per session


Source setvars.sh every time you open a new terminal window:

You can find the setvars.sh script in the root folder of your oneAPI installation, which is typically /opt/intel/oneapi/ for a system-wide installation and ~/intel/oneapi/ for a private installation. Note that a system-wide installation requires root or sudo privileges.
For system wide installations:
For system wide installations:

. /opt/intel/oneapi/setvars.sh
For private installations:

. ~/intel/oneapi/setvars.sh
Option 2: One-time setup for setvars.sh
Environment variables can be set automatically using one of the methods below:
• Use a startup script, as described on this page
• Use modulefiles
• Use a setvars.sh configuration file
To have the environment set up automatically for your projects, include the command source <install_dir>/setvars.sh in a startup script where it will be invoked automatically (replace <install_dir> with the path to your oneAPI install location). The default installation locations are /opt/intel/oneapi/ for system-wide installations and ~/intel/oneapi/ for private installations.
For example, you can add the source <install_dir>/setvars.sh command to your ~/.bashrc, ~/.bash_profile, or ~/.profile file. To make the settings permanent for all accounts on your system, create a one-line .sh script in your system's /etc/profile.d folder that sources setvars.sh (for more details, see the Ubuntu documentation on Environment Variables).

NOTE
The setvars.sh script can be managed using a configuration file, which is especially helpful if you need
to initialize specific versions of libraries or the compiler, rather than defaulting to the "latest" version.
For more details, see Using a Configuration File to Manage Setvars.sh. If you need to set up the environment in a non-POSIX shell, see oneAPI Development Environment Setup for more configuration options.

Run a Sample Project


Run a sample project using the Command Line or using Eclipse*.

Build and Run a Sample Project Using the Command Line


Intel® oneAPI Base Toolkit

NOTE If you have not already configured your development environment, go to Configure Your
System then return to this page. If you have already completed the steps to configure your system,
continue with the steps below.


Command line development can be done in a terminal window or through Visual Studio Code*. Some tasks can be automated using extensions. To learn more, see Using Visual Studio Code with Intel® oneAPI Toolkits.
To compile and run a sample:
1. Locate a sample project using the oneAPI CLI Samples Browser.
2. Build and run a sample project using Make* or CMake*.

Download Samples using the oneAPI CLI Samples Browser


Use the oneAPI CLI Samples Browser to browse the collection of online oneAPI samples. As you browse the
oneAPI samples, you can copy them to your local disk as buildable sample projects. Most oneAPI sample
projects are built using Make or CMake, so the build instructions are included as part of the sample in a
README file. The oneAPI CLI utility is a single-file, stand-alone executable that has no dependencies on
dynamic runtime libraries.
To see a list of components that support CMake, see Use CMake with oneAPI Applications.
An internet connection is required to download the samples for oneAPI toolkits. For information on how to
use this toolkit offline, see Developing with Offline Systems in the Troubleshooting section.
To watch a video presentation of how to create a project with the command line, see Exploring Intel® oneAPI
Samples from the Command Line.
1. Open a terminal window.
2. If you did not complete the steps in Option 2: One time set up for setvars.sh in the Configure Your
System section, set system variables by sourcing setvars:
For system wide installations (requires root or sudo privileges):

. /opt/intel/oneapi/setvars.sh
For private installations:

. ~/intel/oneapi/setvars.sh
The command above assumes you installed to the default folder. If you customized the installation
folder, setvars.sh is in your custom folder.

NOTE
The setvars.sh script can be managed using a configuration file, which is especially helpful if you need
to initialize specific versions of libraries or the compiler, rather than defaulting to the "latest" version.
For more details, see Using a Configuration File to Manage Setvars.sh. If you need to set up the environment in a non-POSIX shell, see oneAPI Development Environment Setup for more configuration options.

3. In the same terminal window, run the application (it should be in your PATH):

oneapi-cli
The oneAPI CLI menu appears:


4. Use the up and down arrow keys to select Create a project, then press Enter
5. Move the arrow key down to select Create a project, then press Enter. The language selection will
appear. If you want to run samples from a toolkit other than the Intel® oneAPI Base Toolkit, install the
domain-specific toolkit before proceeding.

6. Select the language for your sample. For your first project, select cpp, then press Enter. The toolkit
samples list appears.
7. Select the Vector Add sample. Vector Add is a simple test application that will help verify that the tools are set up correctly and can access your system's GPU:


8. After you select a sample, press Enter.


9. Specify the location for the project. The default location includes the path from where the utility was
run and the name of the project.
10. Press Tab to select Create, then press Enter:

Build and Run a Project Based on a oneAPI Sample Using Make

NOTE Some samples require additional steps or arguments for building and/or running the sample.
Review the sample's README.md file for specific details regarding how to build and run the sample.

1. Open a command prompt.


2. Navigate to the directory which you specified when creating (downloading) the project.
3. Configure the project to use the buffer-based implementation or the Unified Shared Memory (USM) based implementation.
Buffer-based implementation:

mkdir build
cd build
cmake ..

USM-based implementation:

mkdir build
cd build
cmake .. -DUSM=1
4. Build the program.

make cpu-gpu


5. Run the program for buffers or Unified Shared Memory (USM), depending on which implementation you built:

./vector-add-buffers
./vector-add-usm
6. Clean the program:

make clean
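Optionally, you can check which devices the SYCL runtime can see and steer the run to a specific one; a minimal sketch, assuming the sycl-ls utility and the ONEAPI_DEVICE_SELECTOR variable shipped with recent oneAPI releases:

sycl-ls                                                        # list the SYCL platforms and devices the runtime detects
ONEAPI_DEVICE_SELECTOR=level_zero:gpu ./vector-add-buffers     # request a Level Zero GPU device for this run
ONEAPI_DEVICE_SELECTOR=opencl:cpu ./vector-add-buffers         # or force the OpenCL CPU device instead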

Build and Run a Project Based on a oneAPI Sample using CMake

NOTE Some samples require additional steps or arguments for building and/or running the sample.
Review the sample's README.md file for specific details regarding how to build and run the sample.

Vector Add can only be run with Make. To run a different sample using CMake, follow the steps below, where <sample_name> is the name of the sample you want to run.
To find a sample that uses CMake, browse the oneAPI Samples GitHub repository and view the README file for each sample to see whether it can be run with CMake.
1. If necessary, re-run the command-line utility and select a CMake project that contains a
CMakeLists.txt file.

cd <sample_name>
2. Create and navigate to the build directory.

mkdir build
cd build
3. Build the program. Run CMake in the build directory to create the makefile. Use the newly created
makefile to build the executable.

cmake ../.
make VERBOSE=1
4. Run the program.

make run
5. Clean the program.

make clean
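As an aside, the same flow can also be driven from the sample's top-level directory with modern CMake options; a minimal sketch, assuming CMake 3.13 or later and that the sample defines a run target:

cmake -S . -B build                # configure: -S source directory, -B build directory
cmake --build build                # build using the generated makefiles
cmake --build build --target run   # invoke the sample's run target, if the sample defines one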

See Explore SYCL* Through Samples to learn more.

Compile and run a sample for FPGA


You can run the vector-add sample (or any FPGA SYCL code) in the following modes:

• Emulation: Verifies the code correctness. Compilation completes in a few seconds. Use this mode if you are using the Intel® oneAPI Base Toolkit.
• Report: Generates a static optimization report for design analysis. Compilation can take a few minutes to
complete. When completed, you can find the reports in <project_name>.prj/reports/report.html.
This can be used with the Intel® oneAPI Base Toolkit. For more information about the reports, refer to the
FPGA Optimization Guide for Intel® oneAPI Toolkits.
• Hardware: Generates the actual bitstream on an FPGA device. Compilation can take a few hours to complete. Use this mode to measure performance. To use this mode, download the Intel® Quartus® Prime Pro Edition software and BSPs separately. For more information, refer to the Install Software for Intel® FPGA Development Flow section in the Intel® oneAPI Toolkits Installation Guide for Linux* OS and the Intel® FPGA Add-On for oneAPI Base Toolkit web page.


After downloading the vector-add design example using the oneAPI CLI Samples Browser, perform the
following steps to compile and run the design:
1. Change to the directory containing the vector-add design example using:

cd <vector-add directory on the same system>


2. Run the following make clean command before you start compiling.

make clean
3. Compile the design using one of the following options:
• Compile for emulation using:

make fpga_emu
• Compile the report using:

make report

NOTE
You can view the report at vector-add_report.prj/reports/report.html. This does not generate
an executable.

• Compile for hardware (takes a longer duration to complete) using:

make fpga

NOTE
If you compiled the hardware on a development system, copy the executable file vector-add.fpga to
the runtime system.

4. Run the design using one of the following options:


• Run the design for emulation using:

./vector-add-buffers.fpga_emu
./vector-add-usm.fpga_emu

NOTE
If you are using a separate development system, perform this step on that system.

• Run the design on FPGA hardware using:

./vector-add-buffers.fpga
./vector-add-usm.fpga

NOTE
To understand the command used in the Makefile, refer to the Intel® oneAPI Programming Guide.

See Explore SYCL* Through Samples to learn more.

Next Steps
After successfully building a sample application, Explore SYCL with Samples from Intel and explore the tools
in the Intel® oneAPI Base Toolkit.


• Intel® DPC++ Compatibility Tool: The Intel® DPC++ Compatibility Tool assists in migration of CUDA* applications to SYCL*-ready code that can use the Intel® oneAPI DPC++/C++ Compiler. Get started.
• Intel® oneAPI DPC++/C++ Compiler: The Intel® oneAPI DPC++/C++ Compiler targets CPUs and accelerators through single-source code while permitting custom tuning. Get Started.
• Intel® oneAPI DPC++ Library: This library is a companion to the Intel® oneAPI DPC++/C++ Compiler and provides an alternative for C++ developers who create heterogeneous applications and solutions. Its APIs are based on familiar standards (C++ STL, Parallel STL (PSTL), Boost.Compute, and SYCL*) to maximize productivity and performance across CPUs, GPUs, and FPGAs.
• Intel® oneAPI Video Processing Library (oneVPL): This library provides access to H264/H265 elementary stream decode and frame processing. By using the high-level interface of oneVPL, the same code can unlock hardware capabilities for media across CPUs and Intel® Processor Graphics GPUs. Get started.
• Intel® Distribution for GDB*: GDB, the GNU Project debugger, allows you to see what is going on 'inside' another program while it executes, or what another program was doing at the moment it crashed. Get Started. Learn more.
• Intel® oneAPI Math Kernel Library (oneMKL): oneMKL helps you achieve maximum performance with a math computing library of highly optimized, extensively parallelized routines for CPU and GPU. The library has C and Fortran interfaces for most routines on CPU, and SYCL* interfaces for some routines on both CPU and GPU. Get started.
• Intel® oneAPI Threading Building Blocks (oneTBB): oneTBB is a flexible performance library that simplifies the work of adding parallelism to complex applications, even if you're not a threading expert. Learn more.
• Intel® Integrated Performance Primitives: Intel® Integrated Performance Primitives (Intel® IPP) is an extensive library of ready-to-use, domain-specific functions that are highly optimized for diverse Intel® architectures. Get Started with Intel® IPP or Get Started with Intel® IPP Cryptography.
• Intel® oneAPI Data Analytics Library: The Intel® oneAPI Data Analytics Library (oneDAL) is a library that helps speed up big data analysis by providing highly optimized algorithmic building blocks for all stages of data analytics (preprocessing, transformation, analysis, modeling, validation, and decision making) in batch, online, and distributed processing modes of computation. The current version of oneDAL provides SYCL* API extensions to the traditional C++ interface. Get started.
• Intel® oneAPI Collective Communications Library: Intel® oneAPI Collective Communications Library (oneCCL) is a scalable and high-performance communication library for Deep Learning (DL) and Machine Learning (ML) workloads. It develops the ideas originating in the Intel® Machine Learning Scaling Library and expands the design and API to encompass new features and use cases. Get started.
• Intel® oneAPI Deep Neural Network Library: The Intel® oneAPI Deep Neural Network Library (oneDNN) is an open-source performance library for deep learning applications. The library includes basic building blocks for neural networks optimized for Intel® Architecture Processors and Intel® Processor Graphics. oneDNN is intended for deep learning applications and framework developers interested in improving application performance on Intel CPUs and GPUs. Get started.
• Intel® VTune™ Profiler: Intel® VTune™ Profiler is a performance analysis tool targeted for users developing serial and multithreaded applications. The tool is delivered as a Performance Profiler with Intel Performance Snapshots and supports local and remote target analysis on the Windows* and Linux* platforms. Get started.
• Intel® Advisor: Intel® Advisor gives software architects and developers the data and analysis tools they need to build well-threaded and vectorized code that exploits modern hardware capabilities. Get started.
• Intel® FPGA Add-on for oneAPI Base Toolkit: Use reconfigurable hardware to accelerate data-centric workloads. Learn more.

Learn more about SYCL and targeting other accelerators using the following resources:
• Intel® oneAPI Programming Guide: Provides details on the oneAPI programming model, including details about SYCL standards, programming for various target accelerators, and introductions to the oneAPI libraries.
• Intel Acceleration Stack Quick Start Guide for Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA: A quick start guide for the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. This guide provides instructions to load and run a loopback test, Hello FPGA, in both non-virtualized and virtualized environments.
• Intel Acceleration Stack Quick Start Guide: Intel® FPGA Programmable Acceleration Card D5005: A quick start guide for the Intel® FPGA PAC D5005. This guide provides the instructions for installing the Open Programmable Acceleration Engine (OPAE) on the host Intel® Xeon® Processor to manage and access the Intel® FPGA PAC, configuring and flashing the FPGA image and Board Management Controller (BMC) images, running the example hello_afu in virtualized and non-virtualized environments, and handling graceful thermal shutdown.
• Intel® FPGA SDK for OpenCL™ Pro Edition: Custom Platform Toolkit User Guide: Outlines the procedure for creating an Intel® FPGA Software Development Kit (SDK) for OpenCL™ Pro Edition Custom Platform. The Intel® FPGA SDK for OpenCL™ Pro Edition Custom Platform Toolkit provides the necessary tools for implementing a fully functional Custom Platform.
• Intel® FPGA Acceleration Hub: Learn about the Acceleration Stack for Intel® Xeon® CPUs with FPGAs and other Intel FPGA-based acceleration platforms.
• FPGA Optimization Guide for Intel® oneAPI Toolkits: The oneAPI FPGA Optimization Guide provides guidance on leveraging the functionalities of SYCL to optimize your design.
• Explore SYCL Through Intel® FPGA Code Samples: Provides guidance on how to target and develop your design on an FPGA using the oneAPI programming model. It also provides guidance on how to optimize a design to achieve performance and latency targets for an application targeting the FPGA.
• oneAPI GPU Optimization Guide: The oneAPI GPU Optimization Guide demonstrates how to improve the behavior of your software by partitioning it across the host and accelerator to specialize portions of the computation that run best on the accelerator. Specialization includes restructuring and tuning the code to create the best mapping of the application to the hardware. The value of oneAPI is that it allows each of these variations to be expressed in a common language with device-specific variants launched on the appropriate accelerator.


For more information about this toolkit, see the Intel® oneAPI Base Toolkit page at https://software.intel.com/content/www/us/en/develop/tools/oneapi/base-toolkit.html.

SSH: Password-less Access to Remote Linux* Target

Introduction
For some oneAPI applications, you must configure a password-less SSH connection for the root user on the
target system; for example:
• IoT applications that use the MRAA/UPM sensor library
• Any application that interacts with system resources that require su, sudo, or root access
• Any tool that requires remote root or sudo access to your target system
When you finish the configuration steps below you will be able to “ssh into” your remote Linux target from
your host development system without a password prompt, as a normal (non-root) user or as a root user.
For an introduction to SSH and how SSH keys work, see SSH Essentials: Working with SSH Servers, Clients,
and Keys.

NOTE Password-less access works only when you connect to your target system from your host
development system with a matching private SSH key. Attempting to connect from a different host
system will still require a password.

Configure Password-less SSH Access


These instructions apply to:
• Host development system: Linux*, Windows*, or macOS*
• Target system: Linux
Set up an .ssh directory
On your host development system:
1. Open a terminal session (CMD window on Windows) and CD to your home directory.
2. Enter the following commands to create an .ssh directory, set the proper permissions, and CD into the
new .ssh directory.
At a Windows CMD prompt

> %HomeDrive% && cd %HomePath%


> mkdir .ssh
> cd .ssh
At a Linux terminal (bash) prompt

$ cd ~
$ mkdir -p .ssh
$ chmod 700 .ssh
$ cd .ssh
Step 3: Generate keys and copy to the target system

NOTE From this point forward the instructions apply to all host development systems (Windows,
Linux, and macOS).


1. To generate a default-named RSA key pair with an empty passphrase (that is, do not provide a
passphrase when asked), enter:

$ ssh-keygen -t rsa
2. To copy the new public key to your target system's non-root user home folder, enter the following,
where:
username = the name used to access the target and target = the IP address or the network hostname
of the target
You should be prompted for the non-root user password for your target device.

$ scp id_rsa.pub username@target:id_rsa.pub


$ ssh username@target
$ cd ~
$ mkdir -p .ssh
$ chmod 700 .ssh
$ cat ~/id_rsa.pub >>.ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ exit
Step 4: Confirm that a password is no longer required (non-root)
Follow this step to confirm that a password is no longer required for your non-root user.
1. To display the target's system information strings, including the target's hostname as the second field
in the output, enter:

ssh username@target uname -a


Step 5: Configure password-less access to root on your target
1. To login to the non-root user on the target using SSH and switch to the root user using sudo, enter:

$ ssh username@target
$ cd ~
$ sudo -E bash
Note that the sudo command should prompt you for your target system's non-root user password.
2. To copy the public key that you transferred to the non-root user account on the target into the root
user's authorized keys file, enter:

$ mkdir -p /root/.ssh
$ chmod 700 /root/.ssh
$ cat ./id_rsa.pub >>/root/.ssh/authorized_keys
$ chmod 600 /root/.ssh/authorized_keys
3. Exit twice, first from the sudo bash session, second from the ssh connection:

$ exit
$ exit

Step 6: [Optional] Check your progress


To test the root connection for your target, enter:

$ ssh root@target ls -a
You should see a directory listing of all files located in the /root folder on your target, without the need for a
login prompt.
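Optionally, you can define a host alias on your development system so later connections do not need the full address; a minimal sketch using placeholder values (mytarget and the IP address are examples, not values from this guide):

cat >> ~/.ssh/config << 'EOF'
# Example alias for the remote Linux target used in this guide
Host mytarget
    # replace with your target's IP address or hostname, and the non-root user name
    HostName 192.168.0.50
    User username
    IdentityFile ~/.ssh/id_rsa
EOF

After saving the file, ssh mytarget (or ssh root@mytarget, once the root key is configured above) connects without a password prompt.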


Next: Create a New Connection and Connect to Your Target


For instructions to create a new connection, view existing connections, and connect to your target, see
Connecting to Your Board Using an SSH/TCF Agent Connection.
Notes
• Password-less access works only when you connect to your target system from your host development
system with a matching private SSH key. Attempting to connect from a different host system will still
require a password.
• Make sure that you have created a project for Linux targets, and that this project is selected in the Project
Explorer.

See Also
Why sudo user can use sched_setscheduler SCHED_RR while root can not?
Operation not permitted while setting new priority for thread
Debugging libmraa

SSH: Running Applications Built with Eclipse*


The following describes how to run a built application on a "remote target" system using an SSH connection.
If your application requires a password-less SSH connection, see Password-less Access to a Remote Linux*
Target Device.
This section assumes:
• you already have an Eclipse project
• you have successfully built an application from that project
• you have some familiarity with Eclipse
• the remote target is already configured for remote access via SSH
1. Create a Connection within Eclipse by selecting the New Connection option:

2. Select the SSH Connection type and click Next:


3. Enter your remote target's Hostname and Username. You can either use Key-based authentication or
Password, depending on the configuration of the remote device.


4. You may also then be prompted to set an Eclipse Secure Storage password. This protects the password you entered when it is stored on disk. Enter the password and click OK.

5. When connecting to the device for the first time, you should be prompted to trust the device. Click Yes
to continue.


6. Once a connection has been defined within Eclipse, open the Run Configurations window to create your configuration.

7. Double-click on the "C/C++ Remote Application" configuration type.


This will create a run configuration for your project.


8. Set two properties on this view:
a. Select the connection you created earlier.
b. Define a location on the remote system where Eclipse will copy your project's binary. Your use-
case may dictate where your binary must be copied to, but often a location your user has write
permission to, such as /tmp/, will suffice.
9. Click Apply to save your configuration.
10. Click Run to run your project on the remote device.


You should see the output in the console of Eclipse.


Note that you can re-use the same connection and launch configuration for debug too. Your remote device
will need at least gdbserver in the PATH environment variable.
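A quick way to confirm that from the host (username and target are placeholders for the remote device's user and address):

ssh username@target which gdbserver   # prints the gdbserver path if it is already installed on the target
# if it is missing, install it on the target; for example, on Ubuntu: sudo apt -y install gdbserver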

Build and Run a Sample Project Using Eclipse*


Intel® oneAPI Base Toolkit
Intel® oneAPI integrates with third-party IDEs on Linux* to provide a seamless GUI experience for software
development.

NOTE If you are using Eclipse* with FPGA, see the FPGA Workflows on Third-Party IDEs for Intel®
oneAPI Toolkits.

NOTE
An internet connection is required to download the samples for oneAPI toolkits. For information on how
to use this toolkit offline, see Developing with Offline Systems in the Troubleshooting section.

Create a New Project Using Eclipse*


You can access the entire suite using Eclipse* (see the release notes for minimum required version). Once
oneAPI is installed, the tools are integrated into the menus.
To watch a video presentation of how to create a project, see Intel® oneAPI Eclipse Samples Plugin.
1. Navigate to the Eclipse directory and source setvars.sh.
For root or sudo installations:

. /opt/intel/oneapi/setvars.sh
For normal user installations:

. ~/intel/oneapi/setvars.sh
See here for more information on setvars.sh.
2. Launch Eclipse from the folder where it was installed:

./eclipse
3. In Eclipse, select your workspace and create a project with Intel > Browse Intel oneAPI Samples.
The wizard selection screen appears.

NOTE If you do not see an Intel menu in Eclipse, install the oneAPI Eclipse plugins, then return to this
page for instructions on how to Build and Run a sample.


4. Enter a name for your project in the New Project Name field. Use the > arrow to select the Intel®
oneAPI Base Toolkit.
5. Select the Intel® oneAPI DPC++/C++ Compiler > CPU, GPU and FPGA group, then select Vector Add. Vector Add is a simple test application that will help verify that the tools are set up correctly and can access your system's GPU.
6. The Vector Add sample opens in the C++ perspective and the sample will run. Check the Problems
view at the bottom. If there are any errors, they will display in the Problems tab.


7. To build a project, select Project > Build Project. When building, the Eclipse Console view (usually in
the lower right) will show you a build log, along with any errors or warnings. In this example, no errors
were found.
8. To run the application, click Run > Run.
See Explore SYCL Through Samples to learn more.

Next Steps
After successfully building a sample application, Explore SYCL with Samples from Intel and explore the tools
in the Intel® oneAPI Base Toolkit.

• Intel® DPC++ Compatibility Tool: The Intel® DPC++ Compatibility Tool assists in migration of CUDA* applications to SYCL*-ready code that can use the Intel® oneAPI DPC++/C++ Compiler. Get started.
• Intel® oneAPI DPC++/C++ Compiler: The Intel® oneAPI DPC++/C++ Compiler targets CPUs and accelerators through single-source code while permitting custom tuning. Get Started.
• Intel® oneAPI DPC++ Library: This library is a companion to the Intel® oneAPI DPC++/C++ Compiler and provides an alternative for C++ developers who create heterogeneous applications and solutions. Its APIs are based on familiar standards (C++ STL, Parallel STL (PSTL), Boost.Compute, and SYCL*) to maximize productivity and performance across CPUs, GPUs, and FPGAs.
• Intel® oneAPI Video Processing Library (oneVPL): This library provides access to H264/H265 elementary stream decode and frame processing. By using the high-level interface of oneVPL, the same code can unlock hardware capabilities for media across CPUs and Intel® Processor Graphics GPUs. Get started.
• Intel® Distribution for GDB*: GDB, the GNU Project debugger, allows you to see what is going on 'inside' another program while it executes, or what another program was doing at the moment it crashed. Get Started. Learn more.
• Intel® oneAPI Math Kernel Library (oneMKL): oneMKL helps you achieve maximum performance with a math computing library of highly optimized, extensively parallelized routines for CPU and GPU. The library has C and Fortran interfaces for most routines on CPU, and SYCL* interfaces for some routines on both CPU and GPU. Get started.
• Intel® oneAPI Threading Building Blocks (oneTBB): oneTBB is a flexible performance library that simplifies the work of adding parallelism to complex applications, even if you're not a threading expert. Learn more.
• Intel® Integrated Performance Primitives: Intel® Integrated Performance Primitives (Intel® IPP) is an extensive library of ready-to-use, domain-specific functions that are highly optimized for diverse Intel® architectures. Get Started with Intel® IPP or Get Started with Intel® IPP Cryptography.
• Intel® oneAPI Data Analytics Library: The Intel® oneAPI Data Analytics Library (oneDAL) is a library that helps speed up big data analysis by providing highly optimized algorithmic building blocks for all stages of data analytics (preprocessing, transformation, analysis, modeling, validation, and decision making) in batch, online, and distributed processing modes of computation. The current version of oneDAL provides SYCL* API extensions to the traditional C++ interface. Get started.
• Intel® oneAPI Collective Communications Library: Intel® oneAPI Collective Communications Library (oneCCL) is a scalable and high-performance communication library for Deep Learning (DL) and Machine Learning (ML) workloads. It develops the ideas originating in the Intel® Machine Learning Scaling Library and expands the design and API to encompass new features and use cases. Get started.
• Intel® oneAPI Deep Neural Network Library: The Intel® oneAPI Deep Neural Network Library (oneDNN) is an open-source performance library for deep learning applications. The library includes basic building blocks for neural networks optimized for Intel® Architecture Processors and Intel® Processor Graphics. oneDNN is intended for deep learning applications and framework developers interested in improving application performance on Intel CPUs and GPUs. Get started.
• Intel® VTune™ Profiler: Intel® VTune™ Profiler is a performance analysis tool targeted for users developing serial and multithreaded applications. The tool is delivered as a Performance Profiler with Intel Performance Snapshots and supports local and remote target analysis on the Windows* and Linux* platforms. Get started.
• Intel® Advisor: Intel® Advisor gives software architects and developers the data and analysis tools they need to build well-threaded and vectorized code that exploits modern hardware capabilities. Get started.
• Intel® FPGA Add-on for oneAPI Base Toolkit: Use reconfigurable hardware to accelerate data-centric workloads. Learn more.

Learn more about SYCL and targeting other accelerators using the following resources:
• Intel® oneAPI Programming Guide: Provides details on the oneAPI programming model, including details about SYCL standards, programming for various target accelerators, and introductions to the oneAPI libraries.
• Intel Acceleration Stack Quick Start Guide for Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA: A quick start guide for the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. This guide provides instructions to load and run a loopback test, Hello FPGA, in both non-virtualized and virtualized environments.
• Intel Acceleration Stack Quick Start Guide: Intel® FPGA Programmable Acceleration Card D5005: A quick start guide for the Intel® FPGA PAC D5005. This guide provides the instructions for installing the Open Programmable Acceleration Engine (OPAE) on the host Intel® Xeon® Processor to manage and access the Intel® FPGA PAC, configuring and flashing the FPGA image and Board Management Controller (BMC) images, running the example hello_afu in virtualized and non-virtualized environments, and handling graceful thermal shutdown.
• Intel® FPGA SDK for OpenCL™ Pro Edition: Custom Platform Toolkit User Guide: Outlines the procedure for creating an Intel® FPGA Software Development Kit (SDK) for OpenCL™ Pro Edition Custom Platform. The Intel® FPGA SDK for OpenCL™ Pro Edition Custom Platform Toolkit provides the necessary tools for implementing a fully functional Custom Platform.
• Intel® FPGA Acceleration Hub: Learn about the Acceleration Stack for Intel® Xeon® CPUs with FPGAs and other Intel FPGA-based acceleration platforms.
• FPGA Optimization Guide for Intel® oneAPI Toolkits: The oneAPI FPGA Optimization Guide provides guidance on leveraging the functionalities of SYCL to optimize your design.
• Explore SYCL Through Intel® FPGA Code Samples: Provides guidance on how to target and develop your design on an FPGA using the oneAPI programming model. It also provides guidance on how to optimize a design to achieve performance and latency targets for an application targeting the FPGA.
• oneAPI GPU Optimization Guide: The oneAPI GPU Optimization Guide demonstrates how to improve the behavior of your software by partitioning it across the host and accelerator to specialize portions of the computation that run best on the accelerator. Specialization includes restructuring and tuning the code to create the best mapping of the application to the hardware. The value of oneAPI is that it allows each of these variations to be expressed in a common language with device-specific variants launched on the appropriate accelerator.

For more information about this toolkit, see the Intel® oneAPI Base Toolkit page at https://software.intel.com/content/www/us/en/develop/tools/oneapi/base-toolkit.html.

SSH: Password-less Access to Remote Linux* Target

Introduction
For some oneAPI applications, you must configure a password-less SSH connection for the root user on the
target system; for example:
• IoT applications that use the MRAA/UPM sensor library
• Any application that interacts with system resources that require su, sudo, or root access
• Any tool that requires remote root or sudo access to your target system


When you finish the configuration steps below you will be able to “ssh into” your remote Linux target from
your host development system without a password prompt, as a normal (non-root) user or as a root user.
For an introduction to SSH and how SSH keys work, see SSH Essentials: Working with SSH Servers, Clients,
and Keys.

NOTE Password-less access works only when you connect to your target system from your host
development system with a matching private SSH key. Attempting to connect from a different host
system will still require a password.

Configure Password-less SSH Access


These instructions apply to:
• Host development system: Linux*, Windows*, or macOS*
• Target system: Linux
Set up an .ssh directory
On your host development system:
1. Open a terminal session (CMD window on Windows) and CD to your home directory.
2. Enter the following commands to create an .ssh directory, set the proper permissions, and CD into the
new .ssh directory.
At a Windows CMD prompt

> %HomeDrive% && cd %HomePath%


> mkdir .ssh
> cd .ssh
At a Linux terminal (bash) prompt

$ cd ~
$ mkdir -p .ssh
$ chmod 700 .ssh
$ cd .ssh
Step 3: Generate keys and copy to the target system

NOTE From this point forward the instructions apply to all host development systems (Windows,
Linux, and macOS).

1. To generate a default-named RSA key pair with an empty passphrase (that is, do not provide a
passphrase when asked), enter:

$ ssh-keygen -t rsa
2. To copy the new public key to your target system's non-root user home folder, enter the following,
where:
username = the name used to access the target and target = the IP address or the network hostname
of the target
You should be prompted for the non-root user password for your target device.

$ scp id_rsa.pub username@target:id_rsa.pub


$ ssh username@target


$ cd ~
$ mkdir -p .ssh
$ chmod 700 .ssh
$ cat ~/id_rsa.pub >>.ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ exit
Step 4: Confirm that a password is no longer required (non-root)
Follow this step to confirm that a password is no longer required for your non-root user.
1. To display the target's system information strings, including the target's hostname as the second field
in the output, enter:

ssh username@target uname -a


Step 5: Configure password-less access to root on your target
1. To login to the non-root user on the target using SSH and switch to the root user using sudo, enter:

$ ssh username@target
$ cd ~
$ sudo -E bash
Note that the sudo command should prompt you for your target system's non-root user password.
2. To copy the public key that you transferred to the non-root user account on the target into the root
user's authorized keys file, enter:

$ mkdir -p /root/.ssh
$ chmod 700 /root/.ssh
$ cat ./id_rsa.pub >>/root/.ssh/authorized_keys
$ chmod 600 /root/.ssh/authorized_keys
3. Exit twice, first from the sudo bash session, second from the ssh connection:

$ exit
$ exit

Step 6: [Optional] Check your progress


To test the root connection for your target, enter:

$ ssh root@target ls -a
You should see a directory listing of all files located in the /root folder on your target, without the need for a
login prompt.

Next: Create a New Connection and Connect to Your Target


For instructions to create a new connection, view existing connections, and connect to your target, see
Connecting to Your Board Using an SSH/TCF Agent Connection.
Notes
• Password-less access works only when you connect to your target system from your host development
system with a matching private SSH key. Attempting to connect from a different host system will still
require a password.
• Make sure that you have created a project for Linux targets, and that this project is selected in the Project
Explorer.
See Also
Why sudo user can use sched_setscheduler SCHED_RR while root can not?
Operation not permitted while setting new priority for thread


Debugging libmraa

SSH: Running Applications Built with Eclipse*


The following describes how to run a built application on a "remote target" system using an SSH connection.
If your application requires a password-less SSH connection, see Password-less Access to a Remote Linux*
Target Device.
This section assumes:
• you already have an Eclipse project
• you have successfully built an application from that project
• you have some familiarity with Eclipse
• the remote target is already configured for remote access via SSH
1. Create a Connection within Eclipse by selecting the New Connection option:

2. Select the SSH Connection type and click Next:


3. Enter your remote target's Hostname and Username. You can either use Key-based authentication or
Password, depending on the configuration of the remote device.


4. You may also then be prompted to set an Eclipse Secure Storage password. This protects the password you entered when it is stored on disk. Enter the password and click OK.

5. When connecting to the device for the first time, you should be prompted to trust the device. Click Yes
to continue.


6. Once a connection has been defined within Eclipse, open the Run Configurations window to create your configuration.

7. Double-click on the "C/C++ Remote Application" configuration type.


This will create a run configuration for your project.


8. Set two properties on this view:
a. Select the connection you created earlier.
b. Define a location on the remote system where Eclipse will copy your project's binary. Your use-
case may dictate where your binary must be copied to, but often a location your user has write
permission to, such as /tmp/, will suffice.
9. Click Apply to save your configuration.
10. Click Run to run your project on the remote device.


You should see the output in the console of Eclipse.


Note that you can re-use the same connection and launch configuration for debug too. Your remote device
will need at least gdbserver in the PATH environment variable.

Build and Run a Sample Project Using Visual Studio Code


Intel® oneAPI Base Toolkit
Intel® oneAPI toolkits integrate with third-party IDEs to provide a seamless GUI experience for software
development.

NOTE If you are using Visual Studio Code (VS Code) with FPGA, see the FPGA Workflows on Third-Party IDEs for Intel® oneAPI Toolkits.

To see a list of components that support CMake, see Use CMake with oneAPI Applications.
To watch a video presentation of how to install extensions and use them to set up your environment, explore
sample code, and connect to the Intel® Developer Cloud using Visual Studio Code, see oneAPI Visual Studio
Code Extensions.
This procedure requires the Sample Browser extension to be installed. The next section will describe how to
install it. If you have already installed it, skip to Create a Project Using Visual Studio Code.

Extensions for Visual Studio Code Users


To watch a video presentation of how to install extensions and use them to set up your environment, explore
sample code, and connect to the Intel® Developer Cloud using Visual Studio Code, see oneAPI Visual Studio
Code Extensions.
You can use VS Code extensions to set your environment, create launch configurations, and browse and
download samples:
1. From Visual Studio Code, click on the Extensions logo in the left navigation.


2. Locate the extension titled Sample Browser for Intel oneAPI Toolkits, or visit https://marketplace.visualstudio.com/publishers/intel-corporation to browse available extensions.
3. Click Install.
4. Next, locate the extension titled Environment Configurator for Intel oneAPI Toolkits.
5. Click Install.
For more information about VS Code extensions for Intel oneAPI Toolkits, see Using Visual Studio Code* to
Develop Intel® oneAPI Applications.

Create a Project Using Visual Studio Code


1. Click on the icon to open the oneAPI Samples Browser:


2. A list of available samples will open in the left navigation.


3. To view the readme for the sample, click the icon next to the sample. If you choose to build and run the
sample, the readme will also be downloaded with the sample.
4. Find the sample you want to build and run. Click the icon to the right of the sample name.

5. Create a new folder for your sample and click OK.

6. The sample will load in a new window:


Set the oneAPI Environment


1. Press Ctrl+Shift+P (or View -> Command Palette…) to open the Command Palette.
2. Type Intel oneAPI: Initialize environment variables. Click on Intel oneAPI: Initialize
environment variables.
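If you prefer, you can get an equivalent environment in the VS Code integrated terminal by sourcing setvars.sh directly (shown here for a default root or sudo installation; adjust the path if you installed to a custom or per-user location):

source /opt/intel/oneapi/setvars.sh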

Prepare Build Tasks from Make / CMake Files


1. Press Ctrl+Shift+P or View -> Command Palette… to open the Command Palette.
2. Type Intel oneAPI and select Intel oneAPI: Generate tasks.

3. Select the build tasks (target) from your Make/CMake oneAPI project that you want to use.


4. Run the task/target by selecting Terminal -> Run task... .


5. Select the task to run.

NOTE Not all oneAPI sample projects use CMake. The README.md file for each sample specifies how to
build the sample. We recommend that you check out the CMake extension for VS Code that is
maintained by Microsoft.
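If a sample does not provide Make/CMake targets that the extension can detect, you can still build it by hand from the integrated terminal, following its README.md. As a generic sketch for a CMake-based sample (the actual target names vary per sample):

mkdir build
cd build
cmake ..
make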

Build the Project


The oneAPI extensions enable you to build projects created using Intel oneAPI toolkits:

NOTE Debugging a local Windows host using VS Code is not supported. Debugging a local Linux host
on CPU is supported using VS Code, but debugging on GPU requires a remote host. Debugging a
remote Linux target from Windows or Linux host is supported using VS Code.

1. Press Ctrl+Shift+B or Terminal -> Run Build Task... to set the default build task.
2. Select the task from the command prompt list to build your project.
3. Press Ctrl+Shift+B or Terminal -> Run Build Task... again to build your project.

Prepare Launch Configuration for Debugging


The oneAPI extensions enable you to prepare launch configurations for running and debugging projects
created using Intel oneAPI toolkits:
1. Press Ctrl+Shift+P or View -> Command Palette... to open the Command Palette.
2. Type Intel oneAPI and select Intel oneAPI: Generate launch configurations .

3. Select the executable (target) you want to debug.


Optional: select any task you want to run before and/or after launching the debugger (for example,
build the project before debug, clean the project after debug).
4. The configuration is now available to debug and run using the gdb-oneapi debugger. You can find it
in .vscode/launch.json. To debug and run, click on the Run icon or press Ctrl+Shift+D.
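Before launching, you may want to confirm that the debugger is on your PATH after sourcing setvars.sh. The executable name below (./my_app) is only a placeholder for the target you selected:

which gdb-oneapi
gdb-oneapi ./my_app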


NOTE Debugging a local Windows host using VS Code is not supported. Debugging a local Linux host
on CPU is supported using VS Code, but debugging on GPU requires a remote host. Debugging a
remote Linux target from Windows or Linux host is supported using VS Code.

Debug, Analyze, Develop with More Extensions


There are more oneAPI extensions for Visual Studio Code which enable:
• debugging
• remote development
• connection to Intel Developer Cloud
• analysis configuration
To learn more about the extensions, see Intel® oneAPI Extensions for Visual Studio Code*.
To learn more about more capabilities and options, see Using Visual Studio Code with Intel® oneAPI Toolkits.

NOTE
An internet connection is required to download the samples for oneAPI toolkits. If you are using an
offline system, download the samples from a system that is internet connected and transfer the
sample files to your offline system. If you are using an IDE for development, you will not be able to
use the oneAPI CLI Samples Browser while you are offline. Instead, download the samples and extract
them to a directory. Then open the sample with your IDE. The samples can be downloaded from here:
Intel® oneAPI Toolkit Code Samples

See Explore SYCL* Through Samples to learn more.

Next Steps
After successfully building a sample application, Explore SYCL with Samples from Intel and explore the tools
in the Intel® oneAPI Base Toolkit.

• Intel® DPC++ Compatibility Tool: The Intel® DPC++ Compatibility Tool assists in migration of CUDA* applications to SYCL*-ready code that can use the Intel® oneAPI DPC++/C++ Compiler. Get started.
• Intel® oneAPI DPC++/C++ Compiler: The Intel® oneAPI DPC++/C++ Compiler targets CPUs and accelerators through single-source code while permitting custom tuning. Get Started.
• Intel® oneAPI DPC++ Library: This library is a companion to the Intel® oneAPI DPC++/C++ Compiler and provides an alternative for C++ developers who create heterogeneous applications and solutions. Its APIs are based on familiar standards (C++ STL, Parallel STL (PSTL), Boost.Compute, and SYCL*) to maximize productivity and performance across CPUs, GPUs, and FPGAs.
• Intel® oneAPI Video Processing Library (oneVPL): This library provides access to H264/H265 elementary stream decode and frame processing. By using the high-level interface of oneVPL, the same code can unlock hardware capabilities for media across CPU and Intel® Processor Graphics GPUs. Get started.
• Intel® Distribution for GDB*: GDB, the GNU Project debugger, allows you to see what is going on 'inside' another program while it executes, or what another program was doing at the moment it crashed. Get Started. Learn more.
• Intel® oneAPI Math Kernel Library (oneMKL): oneMKL helps you achieve maximum performance with a math computing library of highly optimized, extensively parallelized routines for CPU and GPU. The library has C and Fortran interfaces for most routines on CPU, and SYCL* interfaces for some routines on both CPU and GPU. Get started.
• Intel® oneAPI Threading Building Blocks (oneTBB): oneTBB is a flexible performance library that simplifies the work of adding parallelism to complex applications, even if you're not a threading expert. Learn more.
• Intel® Integrated Performance Primitives: Intel® Integrated Performance Primitives (Intel® IPP) is an extensive library of ready-to-use, domain-specific functions that are highly optimized for diverse Intel® architectures. Get Started with Intel® IPP or Get Started with Intel® IPP Cryptography.
• Intel® oneAPI Data Analytics Library: The Intel® oneAPI Data Analytics Library (oneDAL) is a library that helps speed up big data analysis by providing highly optimized algorithmic building blocks for all stages of data analytics (preprocessing, transformation, analysis, modeling, validation, and decision making) in batch, online, and distributed processing modes of computation. The current version of oneDAL provides SYCL* API extensions to the traditional C++ interface. Get started.
• Intel® oneAPI Collective Communications Library: Intel® oneAPI Collective Communications Library (oneCCL) is a scalable and high-performance communication library for Deep Learning (DL) and Machine Learning (ML) workloads. It develops the ideas that originated in the Intel® Machine Learning Scaling Library and expands the design and API to encompass new features and use cases. Get started.
• Intel® oneAPI Deep Neural Network Library: The Intel® oneAPI Deep Neural Network Library (oneDNN) is an open-source performance library for deep learning applications. The library includes basic building blocks for neural networks optimized for Intel® Architecture Processors and Intel® Processor Graphics. oneDNN is intended for deep learning applications and framework developers interested in improving application performance on Intel CPUs and GPUs. Get started.
• Intel® VTune™ Profiler: Intel® VTune™ Profiler is a performance analysis tool targeted for users developing serial and multithreaded applications. The tool is delivered as a Performance Profiler with Intel Performance Snapshots and supports local and remote target analysis on the Windows* and Linux* platforms. Get started.
• Intel® Advisor: Intel® Advisor gives software architects and developers the data and analysis tools they need to build well-threaded and vectorized code that exploits modern hardware capabilities. Get started.
• Intel® FPGA Add-on for oneAPI Base Toolkit: Use reconfigurable hardware to accelerate data-centric workloads. Learn more.

Learn more about SYCL and targeting other accelerators using the following resources:
• Intel® oneAPI Programming Guide: Provides details on the oneAPI programming model, including details about SYCL standards, programming for various target accelerators, and introductions to the oneAPI libraries.
• Intel Acceleration Stack Quick Start Guide for Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA: A quick start guide for the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. This guide provides instructions to load and run a loopback test, Hello FPGA, in both non-virtualized and virtualized environments.
• Intel Acceleration Stack Quick Start Guide: Intel® FPGA Programmable Acceleration Card D5005: A quick start guide for the Intel® FPGA PAC D5005. This guide provides the instructions for installing the Open Programmable Acceleration Engine (OPAE) on the host Intel® Xeon® Processor to manage and access the Intel® FPGA PAC, configuring and flashing the FPGA image and Board Management Controller (BMC) images, running the example hello_afu in a virtualized and non-virtualized environment, and handling graceful thermal shutdown.
• Intel® FPGA SDK for OpenCL™ Pro Edition: Custom Platform Toolkit User Guide: Outlines the procedure for creating an Intel® FPGA Software Development Kit (SDK) for OpenCL™ Pro Edition Custom Platform. The Intel® FPGA SDK for OpenCL™ Pro Edition Custom Platform Toolkit provides the necessary tools for implementing a fully functional Custom Platform.
• Intel® FPGA Acceleration Hub: Learn about the Acceleration Stack for Intel® Xeon® CPUs with FPGAs and other Intel FPGA-based acceleration platforms.
• FPGA Optimization Guide for Intel® oneAPI Toolkits: The oneAPI FPGA Optimization Guide provides guidance on leveraging the functionalities of SYCL to optimize your design.
• Explore SYCL Through Intel® FPGA Code Samples: Provides guidance on how to target and develop your design on an FPGA using the oneAPI programming model. It also provides guidance on how to optimize a design to achieve performance and latency targets for an application targeting the FPGA.
• oneAPI GPU Optimization Guide: The oneAPI GPU Optimization Guide demonstrates how to improve the behavior of your software by partitioning it across the host and accelerator to specialize portions of the computation that run best on the accelerator. Specialization includes restructuring and tuning the code to create the best mapping of the application to the hardware. The value of oneAPI is that it allows each of these variations to be expressed in a common language with device-specific variants launched on the appropriate accelerator.

For more information about this toolkit, see the https://software.intel.com/content/www/us/en/develop/tools/oneapi/base-toolkit.html page.

SSH: Password-less Access to Remote Linux* Target

Introduction
For some oneAPI applications, you must configure a password-less SSH connection for the root user on the
target system; for example:
• IoT applications that use the MRAA/UPM sensor library
• Any application that interacts with system resources that require su, sudo, or root access
• Any tool that requires remote root or sudo access to your target system
When you finish the configuration steps below you will be able to “ssh into” your remote Linux target from
your host development system without a password prompt, as a normal (non-root) user or as a root user.
For an introduction to SSH and how SSH keys work, see SSH Essentials: Working with SSH Servers, Clients,
and Keys.


NOTE Password-less access works only when you connect to your target system from your host
development system with a matching private SSH key. Attempting to connect from a different host
system will still require a password.

Configure Password-less SSH Access


These instructions apply to:
• Host development system: Linux*, Windows*, or macOS*
• Target system: Linux
Set up an .ssh directory
On your host development system:
1. Open a terminal session (CMD window on Windows) and CD to your home directory.
2. Enter the following commands to create an .ssh directory, set the proper permissions, and CD into the
new .ssh directory.
At a Windows CMD prompt

> %HomeDrive% && cd %HomePath%


> mkdir .ssh
> cd .ssh
At a Linux terminal (bash) prompt

$ cd ~
$ mkdir -p .ssh
$ chmod 700 .ssh
$ cd .ssh
Step 3: Generate keys and copy to the target system

NOTE From this point forward the instructions apply to all host development systems (Windows,
Linux, and macOS).

1. To generate a default-named RSA key pair with an empty passphrase (that is, do not provide a
passphrase when asked), enter:

$ ssh-keygen -t rsa
2. To copy the new public key to your target system's non-root user home folder, enter the following,
where:
username = the name used to access the target and target = the IP address or the network hostname
of the target
You should be prompted for the non-root user password for your target device.

$ scp id_rsa.pub username@target:id_rsa.pub


$ ssh username@target
$ cd ~
$ mkdir -p .ssh
$ chmod 700 .ssh
$ cat ~/id_rsa.pub >>.ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ exit


Step 4: Confirm that a password is no longer required (non-root)


Follow this step to confirm that a password is no longer required for your non-root user.
1. To display the target's system information strings, including the target's hostname as the second field
in the output, enter:

ssh username@target uname -a


Step 5: Configure password-less access to root on your target
1. To login to the non-root user on the target using SSH and switch to the root user using sudo, enter:

$ ssh username@target
$ cd ~
$ sudo -E bash
Note that the sudo command should prompt you for your target system's non-root user password.
2. To copy the public key that you transferred to the non-root user account on the target into the root
user's authorized keys file, enter:

$ mkdir -p /root/.ssh
$ chmod 700 /root/.ssh
$ cat ./id_rsa.pub >>/root/.ssh/authorized_keys
$ chmod 600 /root/.ssh/authorized_keys
3. Exit twice, first from the sudo bash session, second from the ssh connection:

$ exit
$ exit

Step 6: [Optional] Check your progress


To test the root connection for your target, enter:

$ ssh root@target ls -a
You should see a directory listing of all files located in the /root folder on your target, without the need for a
login prompt.
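Optionally, you can add a host alias to ~/.ssh/config on your host development system so the connection details do not have to be retyped. The alias name, address, and key path below are examples only:

cat >> ~/.ssh/config <<'EOF'
Host mytarget
    HostName 192.168.1.100
    User username
    IdentityFile ~/.ssh/id_rsa
EOF
ssh mytarget uname -a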

Next: Create a New Connection and Connect to Your Target


For instructions to create a new connection, view existing connections, and connect to your target, see
Connecting to Your Board Using an SSH/TCF Agent Connection.
Notes
• Password-less access works only when you connect to your target system from your host development
system with a matching private SSH key. Attempting to connect from a different host system will still
require a password.
• Make sure that you have created a project for Linux targets, and that this project is selected in the Project
Explorer.

See Also
Why sudo user can use sched_setscheduler SCHED_RR while root can not?
Operation not permitted while setting new priority for thread
Debugging libmraa

SSH: Running Applications Built with Eclipse*


The following describes how to run a built application on a "remote target" system using an SSH connection.


If your application requires a password-less SSH connection, see Password-less Access to a Remote Linux*
Target Device.
This section assumes:
• you already have an Eclipse project
• have successfully built an application from that project
• have some familiarity with Eclipse
• remote target is already configured for remote access via SSH
1. Create a Connection within Eclipse by selecting the New Connection option:

2. Select the SSH Connection type and click Next:

3. Enter your remote target's Hostname and Username. You can use either Key-based authentication or
Password, depending on the configuration of the remote device.


4. You may then be prompted to set an Eclipse Secure Storage password. This protects the password
you entered when it is stored on disk. Enter the password and click OK.

5. When connecting to the device for the first time, you should be prompted to trust the device. Click Yes
to continue.


6. Once a connection has been defined within Eclipse, open the Run Configurations window to create
your configuration.

7. Double-click on the "C/C++ Remote Application" configuration type.


This will create a run configuration for your project.


8. Set two properties on this view:
a. Select the connection you created earlier.
b. Define a location on the remote system where Eclipse will copy your project's binary. Your use-
case may dictate where your binary must be copied to, but often a location your user has write
permission to, such as /tmp/, will suffice.
9. Click Apply to save your configuration.
10. Click Run to run your project on the remote device.


You should see the output in the console of Eclipse.


Note that you can re-use the same connection and launch configuration for debug too. Your remote device
will need at least gdbserver in the PATH environment variable.

Using Containers
Intel® oneAPI Base Toolkit
Containers allow you to set up and configure environments for building, running and profiling oneAPI
applications and distribute them using images:
• You can install an image containing an environment pre-configured with all the tools you need, then
develop within that environment.
• You can save an environment and use the image to move that environment to another machine without
additional setup.
• You can prepare containers with different sets of languages and runtimes, analysis tools, or other tools, as
needed.

Download Docker* Image


You can download a Docker* image from the Containers Repository.

NOTE The Docker image is ~5 GB and can take ~15 minutes to download. It will require 25 GB of disk
space.

image=intel/oneapi-basekit
docker pull "$image"
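After the pull completes, you can verify that the image is available locally before starting a container:

docker images intel/oneapi-basekit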

Singularity Containers
Build a Singularity image using a Singularity file.
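As a minimal sketch, assuming Singularity (or Apptainer) on your system can reach Docker Hub, you can also build the image directly from the Docker image and open a shell in it; bind /dev/dri if you want GPU access:

singularity build oneapi-basekit.sif docker://intel/oneapi-basekit
singularity shell --bind /dev/dri oneapi-basekit.sif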

Using Containers with the Command Line


Compile and run the containers directly.
The below enables the GPU, if available, using --device=/dev/dri (may not be available in Linux* VM or
Windows*). The command will leave you at a command prompt, inside the container, in interactive mode.
image=intel/oneapi-basekit
# --device=/dev/dri enables the gpu (if available). May not be available in Linux VM or Windows
docker run --device=/dev/dri -it "$image"

Once in the container, you can interact with it using Run a Sample Project Using the Command Line.
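If you want to build code that lives on the host rather than inside the container, you can bind-mount your project directory when starting the container. The /workspace path below is only an example:

docker run --device=/dev/dri -v "$PWD":/workspace -w /workspace -it "$image"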


NOTE You may need to include proxy settings before -it "$image" if you are behind a proxy:

docker run -e http_proxy="$http_proxy" -e https_proxy="$https_proxy" -it "$image"

Using Intel® Advisor, Intel® Inspector or Intel® VTune™ Profiler with Containers
When using these tools, extra capabilities have to be provided to the container with these options:
--cap-add=SYS_ADMIN --cap-add=SYS_PTRACE

docker run --cap-add=SYS_ADMIN --cap-add=SYS_PTRACE \


--device=/dev/dri -it "$image"

Using Containers with Eclipse*

Install Eclipse
1. Prerequisite: ensure Java* JRE 8 or 11 is installed.
2. Download and install the latest Eclipse IDE for C/C++ Developers.
3. Download and install the Intel® oneAPI Base Toolkit in order to easily browse samples.

Use the Samples Browser to Create an Eclipse Project


If you have installed the Intel® oneAPI Base Toolkit, you can use the Samples Browser integrated into the
Eclipse IDE to create a project based on Intel® samples.
Run Eclipse and select Intel > Browse Intel oneAPI Samples…

NOTE If you do not see an Intel menu in Eclipse, install the oneAPI Eclipse plugins, then return to this
page for instructions on how to Build and Run a sample.

The Sample Browser dialog window will be displayed.


Expand the sample tree and select a sample. Click Finish to create a project based on the sample.


Configure Project Properties


1. Make sure you've followed the steps described on the Using Containers page to download the Docker
image.
2. Open the Project Properties by right clicking Project > Properties.
3. In Properties select C/C++ Build > Settings, and open the Container Settings tab to set the
following:


a. Check Build Inside Docker Image.


b. Select the docker image to use. If there are no images listed, verify that Docker is enabled for
non-root users. Alternatively, you can run Eclipse with sudo.
c. Apply and close the settings.

Compile the Application


1. Click the Build icon.
2. Confirm that the console prints "Running in image..."

Run Application
1. Create a Run Configuration with Run > Run Configurations.
2. Create a new C/C++ Container Launcher.


3. Select the Container tab and make sure the Image is set to the correct Docker image.


4. To enable GPU in the container, select the Container tab and add /dev/dri as Required Host
Directories, and check Run in Privileged Mode.


5. Click the Run icon.


6. Confirm that the console prints Running in image...

Using Cloud CI Systems


Cloud CI systems allow you to build and test your software automatically. See the repo in github for
examples of configuration files that use oneAPI for the popular cloud CI systems.

Build and Run an Intel® oneAPI DL Framework Developer Toolkit Sample Using the Command Line
NOTE
An internet connection is required to download the samples for oneAPI toolkits. If you are using an
offline system, download the samples from a system that is internet connected and transfer the
sample files to your offline system. If you are using an IDE for development, you will not be able to
use the oneAPI CLI Samples Browser while you are offline. Instead, download the samples and extract
them to a directory. Then open the sample with your IDE. The samples can be downloaded from here:
Intel® oneAPI DL Framework Toolkit Code Samples
After you have downloaded the samples, follow the instructions in the README.md file.

Command line development can be done with a terminal window or done through Visual Studio Code*. For
details on how to use VS Code locally, see Basic Usage of Visual Studio Code with oneAPI on Linux*. To use
VS Code remotely, see Remote Visual Studio Code Development with oneAPI on Linux*.

Download Samples using the oneAPI CLI Samples Browser


Use the oneAPI CLI Samples Browser to browse the collection of online oneAPI samples. As you browse the
oneAPI samples, you can copy them to your local disk as buildable sample projects. Most oneAPI sample
projects are built using Make or CMake, so the build instructions are included as part of the sample in a
README file. The oneAPI CLI utility is a single-file, stand-alone executable that has no dependencies on
dynamic runtime libraries. You can find the samples browser in the <install-dir>/dev-utilities/
latest/bin folder on your development host system.

NOTE
An internet connection is required to download the samples for oneAPI toolkits. For information on how
to use this toolkit offline, see Developing with Offline Systems in the Troubleshooting section.


1. Open a terminal window.


2. If you did not complete the steps in Option 2: One time set up for setvars.sh in the Configure Your
System section, set system variables by sourcing setvars:
For root or sudo installations:

. <install_dir>/setvars.sh
For local user installations:

. ~/intel/oneapi/setvars.sh
3. If you customized the installation folder, setvars.sh is in your custom folder.

NOTE
The setvars.sh script can also be managed using a configuration file. For more details, see Using a
Configuration File to Manage Setvars.sh.

4. In the same terminal window, run the application (it should be in your PATH):

oneapi-cli
The oneAPI CLI menu appears:

5. Move the arrow key down to select Create a project, then press Enter
6. Select the language for your sample. For your first project, select cpp, then press Enter. The toolkit
samples list appears.
7. Select the sample you wish to use. For your first sample, select the CCL Getting Started Sample.
After you successfully build and run the CCL Getting Started Sample, you can download more samples.
Descriptions of each sample are in the Samples Table.


8. After you select a sample, press Enter.


9. Enter an absolute or a relative directory path to create your project. Provide a directory and project
name. The Project Name is the name of the sample you chose in the previous step.
10. Press Tab to select Create, then press Enter:

The directory path is printed below the Create button.


Samples can be built and run using SYCL*, C, or C++ on a CPU and GPU. If you are only using C or C++, follow the instructions to run on CPU only.
• oneCCL_Getting_Started for CPU and GPU: This C++ API example demonstrates the basics of the oneCCL programming model by invoking an allreduce operation.
• oneCCL_Getting_Started for CPU only: The CPU-only variant of the same allreduce example.
• oneDNN_Getting_Started for CPU and GPU: This C++ API example demonstrates the basics of the oneDNN programming model by using a ReLU operation.
• oneDNN_CNN_FP32_Inference for CPU and GPU: This C++ API example demonstrates building and running a simple CNN fp32 inference against different oneDNN pre-built binaries.
• oneDNN_SYCL_InterOp for CPU and GPU: This C++ API example demonstrates the oneDNN SYCL extensions API programming model by using a custom SYCL kernel and a ReLU operation.

CCL Getting Started Sample for CPU and GPU


The commands below will allow you to build your first oneCCL project on a CPU or a GPU. When using these
commands, replace <install_dir> with your installation path (for example, /opt/intel/oneapi).

Build a Sample Project Using the Intel® oneAPI DPC++/C++ Compiler


1. Using a clean console environment without exporting any default environment variables, source
setvars.sh:

source <install_dir>/setvars.sh --ccl-configuration=cpu_gpu_dpcpp


2. Navigate to where the sample is located (i.e., DLDevKit-code-samples):

cd <project_dir>/oneCCL_Getting_Started
3. Navigate to the build directory:

mkdir build
cd build
4. Build the program with CMake. This will also copy the source file sycl_allreduce_cpp_test.cpp from
<install_dir>/ccl/latest/examples/sycl to the build/src/sycl folder:

cmake .. -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp


make sycl_allreduce_cpp_test

NOTE If you receive any error messages after running the command above, refer to Configure Your
System to ensure your environment is set up properly.

5. Run the program. Replace the ${NUMBER_OF_PROCESSES} with the appropriate integer number and
select gpu or cpu:

mpirun -n ${NUMBER_OF_PROCESSES} ./out/sycl/sycl_allreduce_cpp_test {gpu|cpu}


Example: mpirun -n 2 ./out/sycl/sycl_allreduce_cpp_test gpu
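If you are unsure which devices are visible to the SYCL runtime before choosing gpu or cpu, the sycl-ls utility (installed with the compiler and available after sourcing setvars.sh) can list them:

sycl-ls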

Result Validation on CPU

Provided device type cpu


Running on Intel(R) Core(TM) i7-7567U CPU @ 3.50GHz
PASSED
Result Validation on GPU

Provided device type gpu


Running on Intel(R) Gen9
Provided device type gpu
Running on Intel(R) Gen9
PASSED
Run the oneAPI CLI Samples Browser to download another sample.

oneDNN Getting Started Sample for CPU and GPU


Using GNU C++ Compiler
Using the GNU C++ compiler, this sample demonstrates basic oneDNN operations on an Intel CPU and uses
GNU OpenMP for CPU parallelization.
1. Using a clean console environment without exporting any default environment variables, source
setvars.sh:

source /opt/intel/oneapi/setvars.sh --dnnl-configuration=cpu_gomp


2. Navigate to where the project is located (i.e., DLDevKit-code-samples):

cd <project_dir>/oneDNN_Getting_Started
3. Navigate to the cpu_comp directory:

mkdir cpu_comp
cd cpu_comp
4. Build the program with CMake.


NOTE If any errors appear after building with CMake, change to root privileges and try again.

cmake .. -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++


make getting-started-cpp
5. Enable oneDNN Verbose Log:

export DNNL_VERBOSE=1
6. Run the program on a CPU:

./out/getting-started-cpp
7. Result Validation on a CPU:

dnnl_verbose,info,DNNL v1.0.0 (commit 560f60fe6272528955a56ae9bffec1a16c1b3204)


dnnl_verbose,info,Detected ISA is Intel AVX2
dnnl_verbose,exec,cpu,eltwise,jit:avx2,forward_inference,data_f32::blocked:acdb:f0
diff_undef::undef::f0,alg:eltwise_relu:0:0,1x3x13x13,968.354
Example passes
Run the oneAPI CLI Samples Browser to download another sample.

oneDNN CNN FP32 Inference Sample for CPU and GPU


Compile with the Intel® oneAPI DPC++/C++ Compiler
The cnn-inference-f32-cpp sample is included in the toolkit installation.
By using the Intel® oneAPI DPC++/C++ Compiler, this sample will support CNN FP32 on an Intel® CPU and
an Intel® GPU. To build with CMake and the Intel® oneAPI DPC++/C++ Compiler:
1. Using a clean console environment without exporting any default environment variables, source
setvars.sh:

source /opt/intel/oneapi/setvars.sh --dnnl-configuration=cpu_dpcpp_gpu_dpcpp

2. Navigate to where the sample is located (i.e., DLDevKit-code-samples):

cd <sample_dir>/oneDNN_CNN_INFERENCE_FP32
3. Navigate to the dpcpp directory:

mkdir dpcpp
cd dpcpp
4. Build the program with CMake:

cmake .. -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp


make cnn-inference-f32-cpp

NOTE If any errors appear after building with CMake, change to root privileges and try again. If it still
does not work, refer to Configure Your System to ensure you have completed all set up steps.

5. Enable oneDNN Verbose Log

export DNNL_VERBOSE=1
6. Run the program on the following:
• On a CPU

./out/cnn-inference-f32-cpp cpu


• On a GPU (if available)

./out/cnn-inference-f32-cpp gpu
Result Validation
The output shows how long it takes to run this code on CPU and GPU. For example, here is a run time of 33
seconds on CPU.
On CPU:

dnnl_verbose,info,DNNL v1.0.0 (commit 560f60fe6272528955a56ae9bffec1a16c1b3204)


dnnl_verbose,info,Detected ISA is Intel AVX2
...
/DNNL VERBOSE LOGS/
...
dnnl_verbose,exec,cpu,inner_product,gemm:jit,forward_inference,src_f32::blocked:abcd:f0
wei_f32::blocked:abcd:f0 bia_f32::blocked:a:f0 dst_f32::blocked:ab:f0,,mb1ic9216oc4096,5.50391
dnnl_verbose,exec,cpu,inner_product,gemm:jit,forward_inference,src_f32::blocked:ab:f0
wei_f32::blocked:ab:f0 bia_f32::blocked:a:f0 dst_f32::blocked:ab:f0,,mb1ic4096oc4096,2.58618
dnnl_verbose,exec,cpu,inner_product,gemm:jit,forward_inference,src_f32::blocked:ab:f0
wei_f32::blocked:ab:f0 bia_f32::blocked:a:f0 dst_f32::blocked:ab:f0,,mb1ic4096oc1000,0.667969
dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:ab:f0
dst_f32::blocked:ab:f0,num:1,1x1000,0.0368652
Use time 33.22
On GPU

dnnl_verbose,info,DNNL v1.0.0 (commit 560f60fe6272528955a56ae9bffec1a16c1b3204)


dnnl_verbose,info,Detected ISA is Intel AVX2
...
/DNNL VERBOSE LOGS/
...
dnnl_verbose,exec,gpu,reorder,ocl:simple:any,undef,src_f32::blocked:aBcd16b:f0
dst_f32::blocked:abcd:f0,num:1,1x256x6x6
dnnl_verbose,exec,gpu,inner_product,ocl:gemm,forward_inference,src_f32::blocked:abcd:f0
wei_f32::blocked:abcd:f0 bia_f32::blocked:a:f0 dst_f32::blocked:ab:f0,,mb1ic9216oc4096
dnnl_verbose,exec,gpu,inner_product,ocl:gemm,forward_inference,src_f32::blocked:ab:f0
wei_f32::blocked:ab:f0 bia_f32::blocked:a:f0 dst_f32::blocked:ab:f0,,mb1ic4096oc4096
dnnl_verbose,exec,gpu,inner_product,ocl:gemm,forward_inference,src_f32::blocked:ab:f0
wei_f32::blocked:ab:f0 bia_f32::blocked:a:f0 dst_f32::blocked:ab:f0,,mb1ic4096oc1000
dnnl_verbose,exec,gpu,reorder,ocl:simple:any,undef,src_f32::blocked:ab:f0
dst_f32::blocked:ab:f0,num:1,1x1000
Use time 106.29

Run the oneAPI CLI Samples Browser to download another sample.

oneDNN SYCL Interop Sample for CPU and GPU


This DNNL SYCL Interop sample code is implemented using C++ and SYCL language for CPU and GPU.
Using DPC++ Compiler
1. Using a clean console environment without exporting any default environment variables, source
setvars.sh:

source /opt/intel/oneapi/setvars.sh --dnnl-configuration=cpu_dpcpp_gpu_dpcpp


2. Navigate to where the sample is located (i.e., DLDevKit-code-samples)

cd <sample_dir>/oneDNN_SYCL_InterOp


3. Navigate to the dpcpp directory.

mkdir dpcpp
cd dpcpp
4. Build the program with CMake

cmake .. -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=dpcpp


make sycl-interop-cpp
5. Enable oneDNN Verbose Log:

export DNNL_VERBOSE=1
6. Run the Program on the following:
On a CPU

./out/sycl-interop-cpp cpu
On a GPU

./out/sycl-interop-cpp gpu
Result Validation
CPU Results

dnnl_verbose,info,DNNL v1.0.0 (commit 560f60fe6272528955a56ae9bffec1a16c1b3204)


dnnl_verbose,info,Detected ISA is Intel AVX2
dnnl_verbose,exec,cpu,eltwise,jit:avx2,forward_training,data_f32::blocked:abcd:f0
diff_undef::undef::f0,alg:eltwise_relu:0:0,2x3x4x5,958.552
Example passes
GPU Results

dnnl_verbose,info,DNNL v1.0.0 (commit 560f60fe6272528955a56ae9bffec1a16c1b3204)


dnnl_verbose,info,Detected ISA is Intel AVX2
dnnl_verbose,exec,gpu,eltwise,ocl:ref:any,forward_training,data_f32::blocked:abcd:f0
diff_undef::undef::f0,alg:eltwise_relu:0:0,2x3x4x5
Example passes
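When you are done comparing verbose logs, you may want to turn logging off again so later runs are not slowed down by it:

unset DNNL_VERBOSE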
Run the oneAPI CLI Samples Browser to download another sample.

oneCCL Getting Started Sample for CPU Only


The commands below will allow you to build your first oneCCL project on a CPU or a GPU. When using these
commands, replace <install_dir> with your installation path (for example, /opt/intel/oneapi).

Build a Sample Project Using the Intel® oneAPI DPC++/C++ Compiler


1. Using a clean console environment without exporting any default environment variables, source
setvars.sh:

source /opt/intel/oneapi/setvars.sh --ccl-configuration=cpu_icc


2. Navigate to where the sample is located (i.e., DLDevKit-code-samples):

cd <project_dir>/oneCCL_Getting_Started
3. Navigate to the build directory:

mkdir build
cd build
4. Build the program with CMake:

cmake .. -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++


make


NOTE If you receive any error messages after running the command above, refer to Configure Your
System to ensure your environment is set up properly.

5. Run the program. Replace the ${NUMBER_OF_PROCESSES} with the appropriate integer number and
select gpu or cpu:

mpirun -n ${NUMBER_OF_PROCESSES} ./out/cpu_allreduce_cpp_test {gpu|cpu}


Example: mpirun -n 2 ./out/cpu_allreduce_cpp_test gpu

Result Validation on CPU

Provided device type: cpu


Running on Intel(R) Core(TM) i7-7567U CPU @ 3.50GHz
Example passes
Run the oneAPI CLI Samples Browser to download another sample.

Troubleshooting
Potential errors and how to avoid or fix them.
• Errors that occur during installation or directly after installation: See the Troubleshooting page of the Intel® oneAPI Toolkits Installation Guide for Linux* OS.
• Undefined error appears when building a sample: If the sample was built by using cmake and then make, more details to diagnose the reason for your error are available by rebuilding with the VERBOSE option:

make VERBOSE=1

For more comprehensive troubleshooting, use the Diagnostics Utility for Intel® oneAPI Toolkits, which provides system checks to find missing dependencies and permissions errors. Learn more.
• Developing with offline systems (I need to develop on a system that is not connected to the internet): If you are using an offline system, download the samples from a system that is internet connected and transfer the sample files to your offline system. After you have downloaded the samples, follow the instructions in the README.md file. The readme file is located in the folder of the sample you are interested in. The samples can be downloaded from the code samples repository. The full set of documentation can be downloaded from Downloadable Documentation.
• Errors due to missing dependencies, missing environment variables, or missing machine capabilities: The Diagnostics Utility for Intel® oneAPI Toolkits provides the ability to find missing dependencies and permissions errors and is already installed with this toolkit. Learn more.
• Unable to install toolkits or access libraries: Verify that you meet the System Requirements listed on the Intel® oneAPI Base Toolkit product page.
• Problems connecting from behind a proxy: The oneAPI CLI Samples Browser does not work with system proxy settings and does not support WPAD proxy. If you are unable to access the samples from the oneAPI CLI Samples Browser, set the value of https_proxy and http_proxy, then run oneapi-cli in the same terminal. To set the proxy, enter this command, where your.proxy is the name of your proxy server and 8080 is your default port:

export http_proxy=http://your.proxy:8080

After you set the proxy, return to the instructions for Running a Sample Using the Command Line and try again.
• The Intel plug-ins for Eclipse did not install correctly, or the Intel menu is missing from Eclipse: See Install Eclipse Plugins.
• OpenCL* drivers are not found: Refer to the GPU Drivers section of the Installation Guide for Intel® oneAPI Toolkits for information on downloading and installing OpenCL drivers.

Next Steps
After successfully building a sample application, Explore SYCL with Samples from Intel and explore the tools
in the Intel® oneAPI Base Toolkit.

• Intel® DPC++ Compatibility Tool: The Intel® DPC++ Compatibility Tool assists in migration of CUDA* applications to SYCL*-ready code that can use the Intel® oneAPI DPC++/C++ Compiler. Get started.
• Intel® oneAPI DPC++/C++ Compiler: The Intel® oneAPI DPC++/C++ Compiler targets CPUs and accelerators through single-source code while permitting custom tuning. Get Started.
• Intel® oneAPI DPC++ Library: This library is a companion to the Intel® oneAPI DPC++/C++ Compiler and provides an alternative for C++ developers who create heterogeneous applications and solutions. Its APIs are based on familiar standards (C++ STL, Parallel STL (PSTL), Boost.Compute, and SYCL*) to maximize productivity and performance across CPUs, GPUs, and FPGAs.
• Intel® oneAPI Video Processing Library (oneVPL): This library provides access to H264/H265 elementary stream decode and frame processing. By using the high-level interface of oneVPL, the same code can unlock hardware capabilities for media across CPU and Intel® Processor Graphics GPUs. Get started.
• Intel® Distribution for GDB*: GDB, the GNU Project debugger, allows you to see what is going on 'inside' another program while it executes, or what another program was doing at the moment it crashed. Get Started. Learn more.
• Intel® oneAPI Math Kernel Library (oneMKL): oneMKL helps you achieve maximum performance with a math computing library of highly optimized, extensively parallelized routines for CPU and GPU. The library has C and Fortran interfaces for most routines on CPU, and SYCL* interfaces for some routines on both CPU and GPU. Get started.
• Intel® oneAPI Threading Building Blocks (oneTBB): oneTBB is a flexible performance library that simplifies the work of adding parallelism to complex applications, even if you're not a threading expert. Learn more.
• Intel® Integrated Performance Primitives: Intel® Integrated Performance Primitives (Intel® IPP) is an extensive library of ready-to-use, domain-specific functions that are highly optimized for diverse Intel® architectures. Get Started with Intel® IPP or Get Started with Intel® IPP Cryptography.
• Intel® oneAPI Data Analytics Library: The Intel® oneAPI Data Analytics Library (oneDAL) is a library that helps speed up big data analysis by providing highly optimized algorithmic building blocks for all stages of data analytics (preprocessing, transformation, analysis, modeling, validation, and decision making) in batch, online, and distributed processing modes of computation. The current version of oneDAL provides SYCL* API extensions to the traditional C++ interface. Get started.
• Intel® oneAPI Collective Communications Library: Intel® oneAPI Collective Communications Library (oneCCL) is a scalable and high-performance communication library for Deep Learning (DL) and Machine Learning (ML) workloads. It develops the ideas that originated in the Intel® Machine Learning Scaling Library and expands the design and API to encompass new features and use cases. Get started.
• Intel® oneAPI Deep Neural Network Library: The Intel® oneAPI Deep Neural Network Library (oneDNN) is an open-source performance library for deep learning applications. The library includes basic building blocks for neural networks optimized for Intel® Architecture Processors and Intel® Processor Graphics. oneDNN is intended for deep learning applications and framework developers interested in improving application performance on Intel CPUs and GPUs. Get started.
• Intel® VTune™ Profiler: Intel® VTune™ Profiler is a performance analysis tool targeted for users developing serial and multithreaded applications. The tool is delivered as a Performance Profiler with Intel Performance Snapshots and supports local and remote target analysis on the Windows* and Linux* platforms. Get started.
• Intel® Advisor: Intel® Advisor gives software architects and developers the data and analysis tools they need to build well-threaded and vectorized code that exploits modern hardware capabilities. Get started.
• Intel® FPGA Add-on for oneAPI Base Toolkit: Use reconfigurable hardware to accelerate data-centric workloads. Learn more.

Learn more about SYCL and targeting other accelerators using the following resources:
• Intel® oneAPI Programming Guide: Provides details on the oneAPI programming model, including details about SYCL standards, programming for various target accelerators, and introductions to the oneAPI libraries.
• Intel Acceleration Stack Quick Start Guide for Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA: A quick start guide for the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. This guide provides instructions to load and run a loopback test, Hello FPGA, in both non-virtualized and virtualized environments.
• Intel Acceleration Stack Quick Start Guide: Intel® FPGA Programmable Acceleration Card D5005: A quick start guide for the Intel® FPGA PAC D5005. This guide provides the instructions for installing the Open Programmable Acceleration Engine (OPAE) on the host Intel® Xeon® Processor to manage and access the Intel® FPGA PAC, configuring and flashing the FPGA image and Board Management Controller (BMC) images, running the example hello_afu in a virtualized and non-virtualized environment, and handling graceful thermal shutdown.
• Intel® FPGA SDK for OpenCL™ Pro Edition: Custom Platform Toolkit User Guide: Outlines the procedure for creating an Intel® FPGA Software Development Kit (SDK) for OpenCL™ Pro Edition Custom Platform. The Intel® FPGA SDK for OpenCL™ Pro Edition Custom Platform Toolkit provides the necessary tools for implementing a fully functional Custom Platform.
• Intel® FPGA Acceleration Hub: Learn about the Acceleration Stack for Intel® Xeon® CPUs with FPGAs and other Intel FPGA-based acceleration platforms.
• FPGA Optimization Guide for Intel® oneAPI Toolkits: The oneAPI FPGA Optimization Guide provides guidance on leveraging the functionalities of SYCL to optimize your design.
• Explore SYCL Through Intel® FPGA Code Samples: Provides guidance on how to target and develop your design on an FPGA using the oneAPI programming model. It also provides guidance on how to optimize a design to achieve performance and latency targets for an application targeting the FPGA.
• oneAPI GPU Optimization Guide: The oneAPI GPU Optimization Guide demonstrates how to improve the behavior of your software by partitioning it across the host and accelerator to specialize portions of the computation that run best on the accelerator. Specialization includes restructuring and tuning the code to create the best mapping of the application to the hardware. The value of oneAPI is that it allows each of these variations to be expressed in a common language with device-specific variants launched on the appropriate accelerator.

For more information about this toolkit, see the https://software.intel.com/content/www/us/en/develop/tools/oneapi/base-toolkit.html page.

SSH: Password-less Access to Remote Linux* Target

Introduction
For some oneAPI applications, you must configure a password-less SSH connection for the root user on the
target system; for example:
• IoT applications that use the MRAA/UPM sensor library
• Any application that interacts with system resources that require su, sudo, or root access
• Any tool that requires remote root or sudo access to your target system
When you finish the configuration steps below you will be able to “ssh into” your remote Linux target from
your host development system without a password prompt, as a normal (non-root) user or as a root user.
For an introduction to SSH and how SSH keys work, see SSH Essentials: Working with SSH Servers, Clients,
and Keys.

NOTE Password-less access works only when you connect to your target system from your host
development system with a matching private SSH key. Attempting to connect from a different host
system will still require a password.

Configure Password-less SSH Access


These instructions apply to:
• Host development system: Linux*, Windows*, or macOS*
• Target system: Linux
Set up an .ssh directory
On your host development system:


1. Open a terminal session (CMD window on Windows) and CD to your home directory.
2. Enter the following commands to create an .ssh directory, set the proper permissions, and CD into the
new .ssh directory.
At a Windows CMD prompt

> %HomeDrive% && cd %HomePath%


> mkdir .ssh
> cd .ssh
At a Linux terminal (bash) prompt

$ cd ~
$ mkdir -p .ssh
$ chmod 700 .ssh
$ cd .ssh
Step 3: Generate keys and copy to the target system

NOTE From this point forward the instructions apply to all host development systems (Windows,
Linux, and macOS).

1. To generate a default-named RSA key pair with an empty passphrase (that is, do not provide a
passphrase when asked), enter:

$ ssh-keygen -t rsa
2. To copy the new public key to your target system's non-root user home folder, enter the following,
where:
username = the name used to access the target and target = the IP address or the network hostname
of the target
You should be prompted for the non-root user password for your target device.

$ scp id_rsa.pub username@target:id_rsa.pub


$ ssh username@target
$ cd ~
$ mkdir -p .ssh
$ chmod 700 .ssh
$ cat ~/id_rsa.pub >>.ssh/authorized_keys
$ chmod 600 .ssh/authorized_keys
$ exit
Step 4: Confirm that a password is no longer required (non-root)
Follow this step to confirm that a password is no longer required for your non-root user.
1. To display the target's system information strings, including the target's hostname as the second field
in the output, enter:

ssh username@target uname -a


Step 5: Configure password-less access to root on your target
1. To login to the non-root user on the target using SSH and switch to the root user using sudo, enter:

$ ssh username@target
$ cd ~
$ sudo -E bash


Note that the sudo command should prompt you for your target system's non-root user password.
2. To copy the public key that you transferred to the non-root user account on the target into the root
user's authorized keys file, enter:

$ mkdir -p /root/.ssh
$ chmod 700 /root/.ssh
$ cat ./id_rsa.pub >>/root/.ssh/authorized_keys
$ chmod 600 /root/.ssh/authorized_keys
3. Exit twice, first from the sudo bash session, second from the ssh connection:

$ exit
$ exit

Step 6: [Optional] Check your progress


To test the root connection for your target, enter:

$ ssh root@target ls -a
You should see a directory listing of all files located in the /root folder on your target, without the need for a
login prompt.

Next: Create a New Connection and Connect to Your Target


For instructions to create a new connection, view existing connections, and connect to your target, see
Connecting to Your Board Using an SSH/TCF Agent Connection.
Notes
• Password-less access works only when you connect to your target system from your host development
system with a matching private SSH key. Attempting to connect from a different host system will still
require a password.
• Make sure that you have created a project for Linux targets, and that this project is selected in the Project
Explorer.

See Also
Why sudo user can use sched_setscheduler SCHED_RR while root can not?
Operation not permitted while setting new priority for thread
Debugging libmraa

SSH: Running Applications Built with Eclipse*


The following describes how to run a built application on a "remote target" system using an SSH connection.
If your application requires a password-less SSH connection, see Password-less Access to a Remote Linux*
Target Device.
This section assumes:
• you already have an Eclipse project
• have successfully built an application from that project
• have some familiarity with Eclipse
• remote target is already configured for remote access via SSH
1. Create a Connection within Eclipse by selecting the New Connection option:


2. Select the SSH Connection type and click Next:

3. Enter your remote target's Hostname and Username. You can use either Key-based authentication or
Password, depending on the configuration of the remote device.


4. You may then be prompted to set an Eclipse Secure Storage password. This protects the password
you entered when it is stored on disk. Enter the password and click OK.

5. When connecting to the device for the first time, you should be prompted to trust the device. Click Yes
to continue.


6. Once a connection has been defined within Eclipse, open the Run Configurations window to create
your configuration.

7. Double-click on the "C/C++ Remote Application" configuration type.


This will create a run configuration for your project.


8. Set two properties on this view:
a. Select the connection you created earlier.
b. Define a location on the remote system where Eclipse will copy your project's binary. Your use-
case may dictate where your binary must be copied to, but often a location your user has write
permission to, such as /tmp/, will suffice.
9. Click Apply to save your configuration.
10. Click Run to run your project on the remote device.


You should see the output in the console of Eclipse.


Note that you can re-use the same connection and launch configuration for debug too. Your remote device
will need at least gdbserver in the PATH environment variable.

Notices and Disclaimers


Intel technologies may require enabled hardware, software or service activation.
No product or component can be absolutely secure.
Your costs and results may vary.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its
subsidiaries. Other names and brands may be claimed as the property of others.

Product and Performance Information

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/
PerformanceIndex.
Notice revision #20201201

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this
document.

The products described may contain design defects or errors known as errata which may cause the product
to deviate from published specifications. Current characterized errata are available on request.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of
merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from
course of performance, course of dealing, or usage in trade.

