Red Hat Enterprise Linux 7
Developer Guide
Olga Tikhomirova
Red Hat Customer Content Services
[email protected]
Zuzana Zoubková
Red Hat Customer Content Services
[email protected]
Vladimír Slávik
Red Hat Customer Content Services
Legal Notice
Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document describes different features and utilities that make Red Hat Enterprise Linux 7 an
ideal enterprise platform for application development.
Table of Contents
PREFACE
PART I. SETTING UP A DEVELOPMENT WORKSTATION
CHAPTER 1. INSTALLING THE OPERATING SYSTEM
CHAPTER 2. SETTING UP TO MANAGE APPLICATION VERSIONS
CHAPTER 3. SETTING UP TO DEVELOP APPLICATIONS USING C AND C++
CHAPTER 4. SETTING UP TO DEBUG APPLICATIONS
CHAPTER 5. SETTING UP TO MEASURE PERFORMANCE OF APPLICATIONS
CHAPTER 6. SETTING UP TO DEVELOP APPLICATIONS USING JAVA
CHAPTER 7. SETTING UP TO DEVELOP APPLICATIONS USING PYTHON
CHAPTER 8. SETTING UP TO DEVELOP APPLICATIONS USING C# AND .NET CORE
CHAPTER 9. SETTING UP TO DEVELOP CONTAINERIZED APPLICATIONS
CHAPTER 10. SETTING UP TO DEVELOP WEB APPLICATIONS
PART II. COLLABORATING ON APPLICATIONS WITH OTHER DEVELOPERS
CHAPTER 11. USING GIT
PART III. MAKING AN APPLICATION AVAILABLE TO USERS
CHAPTER 12. DISTRIBUTION OPTIONS
CHAPTER 13. CREATING A CONTAINER WITH AN APPLICATION
CHAPTER 14. CONTAINERIZING AN APPLICATION FROM PACKAGES
PART IV. CREATING C OR C++ APPLICATIONS
CHAPTER 15. BUILDING CODE WITH GCC
    15.1. RELATIONSHIP BETWEEN CODE FORMS
    15.2. COMPILING SOURCE FILES TO OBJECT CODE
    15.3. ENABLING DEBUGGING OF C AND C++ APPLICATIONS WITH GCC
    15.4. CODE OPTIMIZATION WITH GCC
    15.5. HARDENING CODE WITH GCC
    15.6. LINKING CODE TO CREATE EXECUTABLE FILES
    15.7. C++ COMPATIBILITY OF VARIOUS RED HAT PRODUCTS
    15.8. EXAMPLE: BUILDING A C PROGRAM WITH GCC
    15.9. EXAMPLE: BUILDING A C++ PROGRAM WITH GCC
CHAPTER 16. USING LIBRARIES WITH GCC
    16.1. LIBRARY NAMING CONVENTIONS
    16.2. STATIC AND DYNAMIC LINKING
    16.3. USING A LIBRARY WITH GCC
    16.4. USING A STATIC LIBRARY WITH GCC
    16.5. USING A DYNAMIC LIBRARY WITH GCC
    16.6. USING BOTH STATIC AND DYNAMIC LIBRARIES WITH GCC
CHAPTER 17. CREATING LIBRARIES WITH GCC
    17.1. LIBRARY NAMING CONVENTIONS
    17.2. THE SONAME MECHANISM
    17.3. CREATING DYNAMIC LIBRARIES WITH GCC
    17.4. CREATING STATIC LIBRARIES WITH GCC AND AR
CHAPTER 18. MANAGING MORE CODE WITH MAKE
    18.1. GNU MAKE AND MAKEFILE OVERVIEW
    18.2. EXAMPLE: BUILDING A C PROGRAM USING A MAKEFILE
    18.3. DOCUMENTATION RESOURCES FOR MAKE
CHAPTER 19. USING THE ECLIPSE IDE FOR C AND C++ APPLICATION DEVELOPMENT
PART V. DEBUGGING APPLICATIONS
CHAPTER 20. DEBUGGING A RUNNING APPLICATION
    20.1. ENABLING DEBUGGING WITH DEBUGGING INFORMATION
    20.1.1. Debugging Information
    20.3.4. Monitoring Application's System Calls with SystemTap
    20.3.5. Using GDB to Intercept Application System Calls
    20.3.6. Using GDB to Intercept Handling of Signals by Applications
CHAPTER 21. DEBUGGING A CRASHED APPLICATION
    21.1. CORE DUMPS
    21.2. RECORDING APPLICATION CRASHES WITH CORE DUMPS
    21.3. INSPECTING APPLICATION CRASH STATES WITH CORE DUMPS
    21.4. DUMPING PROCESS MEMORY WITH GCORE
    21.5. DUMPING PROTECTED PROCESS MEMORY WITH GDB
PART VI. MONITORING PERFORMANCE
CHAPTER 22. VALGRIND
    22.1. VALGRIND TOOLS
    22.2. USING VALGRIND
    22.3. ADDITIONAL INFORMATION
CHAPTER 23. OPROFILE
    23.1. USING OPROFILE
    23.2. OPROFILE DOCUMENTATION
CHAPTER 24. SYSTEMTAP
    24.1. ADDITIONAL INFORMATION
CHAPTER 25. PERFORMANCE COUNTERS FOR LINUX (PCL) TOOLS AND PERF
    25.1. PERF TOOL COMMANDS
    25.2. USING PERF
APPENDIX A. REVISION HISTORY
PREFACE
This document describes the different features and utilities that make Red Hat Enterprise Linux 7 an
ideal enterprise platform for application development.
PART I. SETTING UP A DEVELOPMENT WORKSTATION
CHAPTER 1. INSTALLING THE OPERATING SYSTEM
1. Install Red Hat Enterprise Linux in the Workstation variant. Follow the instructions in Red Hat
Enterprise Linux Installation Guide.
2. While installing, pay attention to software selection. Select the Development and Creative
Workstation system profile and enable installation of Add-ons appropriate for your
development needs. The relevant Add-ons are listed in each of the following sections focusing
on various types of development.
3. To develop applications that cooperate closely with the Linux kernel such as drivers, enable
automatic crash dumping with kdump during the installation.
4. After the system itself is installed, register it and attach the required subscriptions. Follow the
instructions in Red Hat Enterprise Linux System Administrator’s Guide, Chapter Registering the
System and Managing Subscriptions.
The following sections focusing on various types of development list the particular
subscriptions that must be attached for the respective type of development.
5. More recent versions of development tools and utilities are available as Red Hat Software
Collections. For instructions on accessing Red Hat Software Collections, see Red Hat Software
Collections Release Notes, Chapter Installation.
Additional Resources
Red Hat Enterprise Linux Installation Guide — Subscription Manager
CHAPTER 2. SETTING UP TO MANAGE APPLICATION VERSIONS
1. Select the Development Tools Add-on during system installation to install Git.
2. Alternatively, install the git package from the Red Hat Enterprise Linux repositories after the
system is installed.
3. To get the latest version of Git supported by Red Hat, install the rh-git218 component from
Red Hat Software Collections.
4. Set the full name and email address associated with your Git commits:
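One way to do this is with the git config command:
$ git config --global user.name "full name"
$ git config --global user.email "email_address"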
Replace full name and email_address with your actual name and email address.
5. To change the default text editor started by Git, set the value of the core.editor configuration option:
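For example:
$ git config --global core.editor command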
Replace command with the command to be used to start the selected text editor.
Additional Resources
Chapter 11, Using Git
CHAPTER 3. SETTING UP TO DEVELOP APPLICATIONS USING C AND C++
1. Select the Development Tools and Debugging Tools Add-ons during system installation to
install the GNU Compiler Collection (GCC) and GNU Debugger (GDB) as well as other
development tools.
2. The latest versions of GCC, GDB, and the associated tools are available as part of the Red Hat Developer Toolset toolchain component.
3. The Red Hat Enterprise Linux repositories contain many libraries widely used for development
of C and C++ applications. Install the development packages of the libraries needed for your
application using the yum package manager.
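For example, to install the development files of a library (libfoo here is a placeholder for the actual library name):
# yum install libfoo-devel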
Additional Resources
Red Hat Developer Toolset User Guide — List of Components
CHAPTER 4. SETTING UP TO DEBUG APPLICATIONS
1. Select the Debugging Tools and Desktop Debugging and Performance Tools Add-ons
during system installation to install the GNU Debugger (GDB), Valgrind, SystemTap, ltrace,
strace, and other tools.
2. For the latest versions of GDB, Valgrind, SystemTap, strace, and ltrace, install Red Hat
Developer Toolset. This installs memstomp, too.
3. The memstomp utility is available only as a part of Red Hat Developer Toolset. In case installing
the whole Developer Toolset is not desirable and memstomp is required, install only its
component from Red Hat Developer Toolset.
5. To debug applications and libraries available as part of Red Hat Enterprise Linux, install their
respective debuginfo and source packages from the Red Hat Enterprise Linux repositories
using the debuginfo-install tool. This applies to core dump file analysis, too.
6. Install kernel debuginfo and source packages required by the SystemTap application. See
SystemTap Beginners Guide, section Installing SystemTap .
7. To capture kernel dumps, install and configure kdump. Follow the instructions in Kernel Crash
Dump Guide, Chapter Installing and Configuring kdump.
8. Make sure SELinux policies allow the relevant applications to run not only normally, but also in debugging situations. See SELinux User’s and Administrator’s Guide, section Fixing Problems.
Additional Resources
Section 20.1, “Enabling Debugging with Debugging Information”
CHAPTER 5. SETTING UP TO MEASURE PERFORMANCE OF APPLICATIONS
1. Select the Debugging Tools, Development Tools, and Performance Tools Add-ons during
system installation to install the tools OProfile, perf, and pcp.
2. Install the SystemTap tool, which allows some types of performance analysis, and Valgrind, which includes modules for performance measurement.
3. To prepare the system for running SystemTap scripts, run the stap-prep command:
# stap-prep
4. For more frequently updated versions of SystemTap, OProfile, and Valgrind, install the Red
Hat Developer Toolset package perftools.
Additional Resources
Red Hat Developer Toolset User Guide — IV. Performance Monitoring Tools
CHAPTER 6. SETTING UP TO DEVELOP APPLICATIONS USING JAVA
1. During system installation, select the Java Platform add-on to install OpenJDK as the default
Java version.
Alternatively, follow the instructions in Installation Guide for Red Hat CodeReady Studio,
Section Installing OpenJDK 1.8.0 on RHEL to install OpenJDK separately.
2. For an integrated graphical development environment, install the Eclipse-based Red Hat
CodeReady Studio offering extensive support for Java development. Follow the instructions in
Installation Guide for Red Hat CodeReady Studio .
CHAPTER 7. SETTING UP TO DEVELOP APPLICATIONS USING PYTHON
1. Newer versions of the Python interpreter and libraries are available as Red Hat Software Collections packages. Install the package that provides the desired version, for example python27 or rh-python36.
The python27 Software Collection provides an updated version of the Python 2 packages available in RHEL 7.
2. Install the Eclipse integrated development environment which supports development in the
Python language. Eclipse is available as part of Red Hat Developer Tools. For the actual
installation procedure, see Using Eclipse.
Additional Resources
Red Hat Software Collections Hello-World — Python
CHAPTER 8. SETTING UP TO DEVELOP APPLICATIONS USING C# AND .NET CORE
Install the .NET Core for Red Hat Enterprise Linux, which includes the runtime, compilers, and
additional tools. Follow the instructions in the .NET Core Getting Started Guide, chapter Install
.NET Core.
Apart from C#, the .NET Core 3.1 for Red Hat Enterprise Linux supports development in ASP.NET, F#
and Visual Basic.
Both .NET Core 2.1 and 3.1 are Long Term Support (LTS) releases. For more information on life-cycle
support policy, see .NET Core Life Cycle.
Additional Resources
.NET Core for Red Hat Enterprise Linux Overview
CHAPTER 9. SETTING UP TO DEVELOP CONTAINERIZED APPLICATIONS
Install Red Hat Container Development Kit (CDK). CDK provides a Red Hat Enterprise Linux virtual machine that runs a single-node Red Hat OpenShift cluster. Follow the instructions in the Red Hat Container Development Kit Getting Started Guide, section Installing CDK.
Additionally, Red Hat Development Suite is a good choice for development of containerized
applications in Java, C, and C++. It consists of Red Hat JBoss Developer Studio, OpenJDK,
Red Hat Container Development Kit, and other minor components. To install DevSuite, follow
the instructions in Red Hat Development Suite Installation Guide .
Additional Resources
Red Hat CodeReady Studio — Getting Started with Container and Cloud-based Development
Red Hat Enterprise Linux Atomic Host — Overview of Containers in Red Hat Systems
CHAPTER 10. SETTING UP TO DEVELOP WEB APPLICATIONS
The topic of web development is too broad to capture with a few simple instructions. This section offers only the best-supported paths to development of web applications on Red Hat Enterprise Linux.
To set up your environment for developing traditional web applications, install the Apache web
server, PHP runtime, and MariaDB database server and tools.
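One possible selection of packages from the base Red Hat Enterprise Linux repositories (adjust to your needs):
# yum install httpd mariadb-server mariadb php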
Alternatively, more recent versions of these applications are available as components of Red Hat
Software Collections.
Additional Resources
Red Hat Software Collections 3.4 Release Notes — Ruby on Rails
Advanced Linux Commands Cheat Sheet (setting up a LAMP stack) — Red Hat Developers
Portal Cheat Sheet
PART II. COLLABORATING ON APPLICATIONS WITH OTHER DEVELOPERS
CHAPTER 11. USING GIT
A detailed description of Git and its features is beyond the scope of this book. For more information
about this revision control system, see the resources listed below.
Installed Documentation
Linux manual pages for Git and tutorials:
$ man git
$ man gittutorial
$ man gittutorial-2
Note that many Git commands have their own manual pages.
Online Documentation
Pro Git — The online version of the Pro Git book provides a detailed description of Git, its
concepts and its usage.
PART III. MAKING AN APPLICATION AVAILABLE TO USERS
CHAPTER 12. DISTRIBUTION OPTIONS
RPM Packages
RPM Packages are the traditional method of distributing and installing software.
Only one version of a package can be installed at a time, which makes it difficult to install multiple versions of an application.
To create an RPM package, follow the instructions in RPM Packaging Guide, Chapter Packaging Software.
Software Collections
A Software Collection is a specially prepared RPM package for an alternative version of an application.
For more information, see Red Hat Software Collections Packaging Guide, 1.2 What Are Software
Collections?
To create a software collection package, follow the instructions in Red Hat Software Collections
Packaging Guide, Chapter Packaging Software Collections.
Containers
Docker-formatted containers are a lightweight virtualization method.
Additional Resources
Red Hat Software Collections Packaging Guide — 1.2 What Are Software Collections?
CHAPTER 13. CREATING A CONTAINER WITH AN APPLICATION
Prerequisites
Understanding containers
Steps
1. Decide which base image to use.
NOTE
Red Hat recommends starting with a base image that uses Red Hat
Enterprise Linux as its foundation. Refer to Base Image in the Red Hat Container
Catalog for further information.
3. Prepare your application as a directory containing all of the application’s required files. Place this
directory inside the workspace directory.
4. Write a Dockerfile that describes the steps required to create the container.
Refer to the Dockerfile Reference for information about how to create a Dockerfile that
includes your content, sets default commands to run, and opens necessary ports and other
features.
For example, a minimal Dockerfile might look like this:
FROM registry.access.redhat.com/rhel7
USER root
ADD my-program/ .
5. Build the container image from the Dockerfile with the docker build command:
# docker build .
(...)
Successfully built container-id
During this step, note the container-id of the newly created container image.
6. Add a tag to the image, to identify the registry where you want the container image to be
stored. See Getting Started with Containers — Tagging Images .
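For example:
# docker tag container-id registry:port/name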
Replace container-id with the value shown in the output of the previous step.
Replace registry with the address of the registry where you want to push the image, port with the port of the registry (omit if not needed), and name with the name of the image.
For example, if you are running a registry using the docker-distribution service on your local
system with an image named myimage, the tag localhost:5000/myimage would make that image ready to push to the registry.
7. Push the image to the registry so it can be pulled from that registry later by someone who wants
to use it.
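For example:
# docker push registry:port/name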
Replace the tag parts with the same values as those used in the previous step.
To run your own Docker registry, see Getting Started with Containers — Working with Docker
registries
Additional Resources
OpenShift Container Platform — Creating Images
Red Hat Enterprise Linux Atomic Host — Recommended Practices for Container Development
Dockerfile Reference
Red Hat Enterprise Linux Atomic Host — Getting Started with Containers
CHAPTER 14. CONTAINERIZING AN APPLICATION FROM PACKAGES
Prerequisites
Understanding containers
Steps
To containerize an application from RPM packages, see Getting Started with Containers — Creating
Docker images.
Additional Information
OpenShift Container Platform — Creating Images
Red Hat Enterprise Linux Atomic Host — Getting Started with Containers
PART IV. CREATING C OR C++ APPLICATIONS
CHAPTER 15. BUILDING CODE WITH GCC
Source code written in the C or C++ language, present as plain text files.
The files typically use extensions such as .c, .cc, .cpp, .h, .hpp, .i, .inc. For a complete list of
supported extensions and their interpretation, see the gcc manual pages:
$ man gcc
Object code, created by compiling the source code with a compiler. This is an intermediate
form.
The object code files use the .o extension.
NOTE
Library archive files for static linking also exist. This is a variant of object code and uses
the .a file name extension. Static linking is not recommended. See Section 16.2, “Static
and dynamic linking”.
Building executable code from source code happens in two steps:
1. Source files are compiled to object files.
2. Object files and libraries are linked (including the previously compiled sources).
It is possible to run GCC such that only step 1 happens, only step 2 happens, or both steps 1 and 2
happen. This is determined by the types of inputs and requested type of output(s).
Because larger projects require a build system which usually runs GCC separately for each action, it is
better to always consider compilation and linking as two distinct actions, even if GCC can perform both
at once.
Additional Resources
15.2. COMPILING SOURCE FILES TO OBJECT CODE
Prerequisites
Steps
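To compile source files to object code without linking, use the -c option. For example (the file names are placeholders):
$ gcc -c source.c other_source.c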
Object files are created, with their file names reflecting the original source code files: source.c
results in source.o.
NOTE
With C++ source code, replace the gcc command with g++ for convenient
handling of C++ Standard Library dependencies.
Additional Resources
15.3. ENABLING DEBUGGING OF C AND C++ APPLICATIONS WITH GCC
Optimizations performed by the compiler and linker can result in executable code which is hard to relate to the original source code: variables may be optimized out, loops unrolled, operations merged into the surrounding ones, and so on. This affects debugging negatively. For an improved debugging experience, consider setting the optimization with the -Og option. However, changing the optimization level changes the executable code and may change the actual behaviour, including removing some bugs.
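To include debugging information in the produced code, add the -g option. A minimal sketch of a debug-friendly build of a single source file (the file and output names are placeholders):
$ gcc -g -Og -o program source.c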
The -fcompare-debug GCC option tests code compiled by GCC with debug information and
without debug information. The test passes if the resulting two binary files are identical. This
test ensures that executable code is not affected by any debugging options, which further
ensures that there are no hidden bugs in the debug code. Note that using the -fcompare-debug
option significantly increases compilation time. See the GCC manual page for details about this
option.
Additional Resources
Using the GNU Compiler Collection (GCC) — Options for Debugging Your Program
$ man gcc
15.4. CODE OPTIMIZATION WITH GCC
The level of optimization is selected with the -O option:
Level    Description
0        Optimize for compilation speed; no optimization of the resulting code (the default)
1, 2, 3  Increasing levels of optimization for execution speed
s        Optimize for the resulting file size
fast     Level 3 plus disregard for strict standards compliance to allow for additional optimizations
During development, the -Og option is more useful for debugging the program or library in some situations. Because some bugs manifest only with certain optimization levels, make sure to test the program or library with the release optimization level.
GCC offers a large number of options to enable individual optimizations. For more information, see the
following Additional Resources.
Additional Resources
$ man gcc
15.5. HARDENING CODE WITH GCC
Release Version Options
For programs, add the -fPIE and -pie Position Independent Executable options.
For dynamically linked libraries, the mandatory -fPIC (Position Independent Code) option indirectly increases security.
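As an illustration only (the exact set of hardening options depends on the project), a hardened release build might combine options such as:
$ gcc -O2 -g -D_FORTIFY_SOURCE=2 -fstack-protector-strong -fPIE -pie -Wl,-z,relro -Wl,-z,now -o program source.c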
Development Options
The following options are recommended to detect security flaws during development. Use these options
in conjunction with the options for the release version:
Additional Resources
Memory Error Detection Using GCC — Red Hat Developers Blog post
15.6. LINKING CODE TO CREATE EXECUTABLE FILES
Prerequisites
Steps
2. Run gcc:
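For example (the object file names and the output name are placeholders):
$ gcc source.o other_object.o -o executable-file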
An executable file named executable-file is created from the supplied object files and libraries.
To link additional libraries, add the required options before the list of object files. See
Chapter 16, Using Libraries with GCC .
NOTE
With C++ source code, replace the gcc command with g++ for convenient
handling of C++ Standard Library dependencies.
Additional Resources
15.7. C++ COMPATIBILITY OF VARIOUS RED HAT PRODUCTS
The system compiler based on GCC 4.8, provided directly as part of Red Hat Enterprise Linux 7, supports only compiling and linking the C++98 standard (also known as C++03) and its variant with GNU extensions.
Using and mixing the C++11 and C++14 language versions is supported only when using compilers
from Red Hat Developer Toolset and only when all C++ objects compiled with the respective
flag have been built using the same major version of GCC.
When linking C++ files built with both Red Hat Developer Toolset and Red Hat Enterprise Linux
toolchain, prefer the Red Hat Developer Toolset version of compiler and linker.
The default setting for compilers in Red Hat Enterprise Linux 6 and 7 and Red Hat
Developer Toolset up to 4.1 is -std=gnu++98. That is, C++98 with GNU extensions.
The default setting for compilers in Red Hat Developer Toolset 6, 6.1, 7, and 7.1 is -std=gnu++14. That is, C++14 with GNU extensions.
Additional Resources
What gcc versions are available in Red Hat Enterprise Linux? — Knowledge base solution
15.8. EXAMPLE: BUILDING A C PROGRAM WITH GCC
This example shows the exact steps to build a sample minimal C program.
Prerequisites
Steps
$ mkdir hello-c
$ cd hello-c
#include <stdio.h>
int main() {
printf("Hello, World!\n");
return 0;
}
$ gcc -c hello.c
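Link the object file into an executable file named helloworld:
$ gcc hello.o -o helloworld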
$ ./helloworld
Hello, World!
Additional Resources
15.9. EXAMPLE: BUILDING A C++ PROGRAM WITH GCC
Prerequisites
Steps
$ mkdir hello-cpp
$ cd hello-cpp
#include <iostream>
int main() {
std::cout << "Hello, World!\n";
return 0;
}
$ g++ -c hello.cpp
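Link the object file into an executable file named helloworld:
$ g++ hello.o -o helloworld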
$ ./helloworld
Hello, World!
CHAPTER 16. USING LIBRARIES WITH GCC
16.1. LIBRARY NAMING CONVENTIONS
When linking against the library, the library can be specified only by its name foo with the -l
option as -lfoo:
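For example, linking object code against a library foo installed in a standard system location (source.o and program are placeholders):
$ gcc source.o -lfoo -o program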
When creating the library, the full file name libfoo.so or libfoo.a must be specified.
Additional Resources
16.2. STATIC AND DYNAMIC LINKING
Resource use
Static linking results in larger executable files which contain more code. This additional code coming
from libraries cannot be shared across multiple programs on the system, increasing file system usage
and memory usage at run time. Multiple processes running the same statically linked program will still
share the code.
On the other hand, static applications need fewer run-time relocations, leading to reduced startup
time, and require less private resident set size (RSS) memory. Generated code for static linking can
be more efficient than for dynamic linking due to the overhead introduced by position-independent code (PIC).
Security
Dynamically linked libraries which provide ABI compatibility can be updated without changing the
executable files depending on these libraries. This is especially important for libraries provided by
Red Hat as part of Red Hat Enterprise Linux, where Red Hat provides security updates. Static linking
against any such libraries is strongly discouraged.
Additionally, security measures such as load address randomization cannot be used with a statically
linked executable file. This further reduces security of the resulting application.
Compatibility
Static linking appears to provide executable files independent of the versions of libraries provided by
the operating system. However, most libraries depend on other libraries. With static linking, this
dependency becomes inflexible and as a result, both forward and backward compatibility is lost.
Static linking is guaranteed to work only on the system where the executable file was built.
WARNING
Applications statically linking libraries from the GNU C library (glibc) still require glibc to be present on the system as a dynamic library. Furthermore, the
dynamic library variant of glibc available at the application’s run time must be a
bitwise identical version to that present while linking the application. As a result,
static linking is guaranteed to work only on the system where the executable file
was built.
Support coverage
Most static libraries provided by Red Hat are in the Optional channel and not supported by Red Hat.
Functionality
Some libraries, notably the GNU C Library (glibc), offer reduced functionality when linked statically.
For example, when statically linked, glibc does not support threads and any form of calls to the
dlopen() function in the same program.
As a result of the listed disadvantages, static linking should be avoided at all costs, particularly for whole
applications and the glibc and libstdc++ libraries.
NOTE
The compat-glibc package is included with Red Hat Enterprise Linux 7, but it is not a run
time package and therefore not required for running anything. It is solely a development
package, containing header files and dummy libraries for linking. This allows compiling
and linking packages to run in older Red Hat Enterprise Linux versions (using compat-gcc-* against those headers and libraries). For more information on use of this package, run: rpm -qpi compat-glibc-*.
Fully static linking can be required for running code in an empty chroot environment or
container. However, static linking using the glibc-static package is not supported by Red Hat.
Additional Resources
16.3. USING A LIBRARY WITH GCC
A library is a package of code which can be reused in your program. A C or C++ library consists of two
parts:
Header files
Typically, header files of a library will be placed in a different directory than your application’s code. To
tell GCC where the header files are, use the -I option:
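For example (source_file.c is a placeholder for your source file):
$ gcc -Iinclude_path -c source_file.c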
Replace include_path with the actual path to the header file directory.
The -I option can be used multiple times to add multiple directories with header files. When looking for a
header file, these directories are searched in the order of their appearance in the -I options.
Static libraries are available as archive files. They contain a group of object files. The archive file has the .a file name extension.
Dynamic libraries are available as shared objects. They are a form of an executable file. A shared object has the .so file name extension.
To tell GCC where the archive or shared object files of a library are, use the -L option:
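For example (source_file.o and program are placeholders):
$ gcc source_file.o -Llibrary_path -lfoo -o program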
The -L option can be used multiple times to add multiple directories. When looking for a library, these
directories are searched in the order of their -L options.
The order of options matters: GCC cannot link against a library foo unless it knows the directory with
this library. Therefore, use the -L options to specify library directories before using the -l options for
linking against libraries.
Additional Resources
Using the GNU Compiler Collection (GCC) — 3.15 Options for Directory Search
Using the GNU Compiler Collection (GCC) — 3.14 Options for Linking
16.4. USING A STATIC LIBRARY WITH GCC
Static libraries are available as archives containing object files. After linking, they become part of the
resulting executable file.
NOTE
Red Hat discourages use of static linking for various reasons. See Section 16.2, “Static
and dynamic linking”. Use static linking only when necessary, especially against libraries
provided by Red Hat.
Prerequisites
A set of source or object files forming a valid program, requiring some static library foo and no
other libraries
The foo library is available as a file libfoo.a, and no file libfoo.so is provided for dynamic linking.
NOTE
Most libraries which are part of Red Hat Enterprise Linux are supported for dynamic
linking only. The steps below only work for libraries which are not enabled for dynamic
linking. See Section 16.6, “Using Both Static and Dynamic Libraries with GCC” .
Steps
To link a program from source and object files, adding a statically linked library foo, which is to be found
as a file libfoo.a:
2. Compile the program source files with headers of the foo library:
Replace header_path with a path to a directory containing the header files for the foo library.
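A sketch of the compile and link commands, assuming a single source file main.c and the libfoo.a file located in a directory foo_lib_path (these names are placeholders):
$ gcc -Iheader_path -c main.c
$ gcc main.o -Lfoo_lib_path -lfoo -o program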
$ ./program
CAUTION
The -static GCC option related to static linking forbids all dynamic linking. Instead, use the -Wl,-Bstatic
and -Wl,-Bdynamic options to control linker behavior more precisely. See Section 16.6, “Using Both
Static and Dynamic Libraries with GCC”.
16.5. USING A DYNAMIC LIBRARY WITH GCC
Prerequisites
A set of source or object files forming a valid program, requiring some dynamic library foo and
no other libraries
When a program is linked against a dynamic library, the resulting program must always load the library at
run time. There are two options for locating the library:
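The first option is to store an rpath value in the executable file when linking. For example (source_file.o and program are placeholders):
$ gcc source_file.o -Llibrary_path -lfoo -Wl,-rpath=library_path -o program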
The path library_path must point to a directory containing the file libfoo.so.
$ ./program
To run the program without rpath set, with libraries present in path library_path:
$ export LD_LIBRARY_PATH=library_path:$LD_LIBRARY_PATH
$ ./program
Leaving out the rpath value offers flexibility, but requires setting the LD_LIBRARY_PATH variable
every time the program is to run.
A full description of the dynamic linker behavior is out of scope of this document. For more information,
see the following resources:
$ man ld.so
$ cat /etc/ld.so.conf
Report of the libraries recognized by the dynamic linker without additional configuration, which
includes the directories:
$ ldconfig -v
16.6. USING BOTH STATIC AND DYNAMIC LIBRARIES WITH GCC
Prerequisites
Introduction
gcc recognizes both dynamic and static libraries. When the -lfoo option is encountered, gcc will first
attempt to locate a shared object (a .so file) containing a dynamically linked version of the foo library,
and then look for the archive file (.a) containing a static version of the library. Thus, the following
situations can result from this search:
Only the shared object is found, and gcc links against it dynamically
Only the archive file is found, and gcc links against it statically
Both the shared object and the archive file are found; gcc selects dynamic linking against the shared object by default
Because of these rules, the best way to select the static or dynamic version of library for linking is having
only that version found by gcc. This can be controlled to some extent by using or leaving out directories
containing the library versions, when specifying the -Lpath options.
Additionally, because dynamic linking is the default, the only situation where linking must be explicitly
specified is when a library with both versions present should be linked statically. There are two possible
resolutions:
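Specifying the static libraries by file
The first option is to pass the path to the archive file directly to gcc instead of using the -l option, for example (the object file, path, and output names are placeholders):
$ gcc source_file.o path/to/libfoo.a -o program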
From the file extension .a, gcc will understand that this is a library to link with the program. However,
specifying the full path to the library file is a less flexible method.
Using the -Wl option
The ld linker used by gcc offers the options -Bstatic and -Bdynamic to specify whether libraries
following this option should be linked statically or dynamically, respectively. After passing -Bstatic and a
library to the linker, the default dynamic linking behaviour must be restored manually for the following
libraries to be linked dynamically with the -Bdynamic option.
To link a program so that the library first is linked statically (libfirst.a) and the library second dynamically (libsecond.so):
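A sketch of such a command (source_file.o and program are placeholders):
$ gcc source_file.o -Wl,-Bstatic -lfirst -Wl,-Bdynamic -lsecond -o program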
NOTE
gcc can be configured to use linkers other than the default ld. The -Wl option applies to
the gold linker, too.
Additional Resources
Using the GNU Compiler Collection (GCC) — 3.14 Options for Linking
CHAPTER 17. CREATING LIBRARIES WITH GCC
17.1. LIBRARY NAMING CONVENTIONS
When linking against the library, the library can be specified only by its name foo with the -l
option as -lfoo:
When creating the library, the full file name libfoo.so or libfoo.a must be specified.
Additional Resources
17.2. THE SONAME MECHANISM
Prerequisites
Problem Introduction
A dynamically loaded library (shared object) exists as an independent executable file. This makes it
possible to update the library without updating the applications that depend on it. However, the
following problems arise with this concept:
The soname Mechanism
A library foo version X.Y is ABI-compatible with other versions with the same value of X in the version number. Minor changes preserving compatibility increase the number Y. Major changes that break compatibility increase the number X.
The actual library foo version X.Y exists as a file libfoo.so.x.y. Inside the library file, a soname is recorded
with value libfoo.so.x to signal the compatibility.
When applications are built, the linker looks for the library by searching for the file libfoo.so. A symbolic
link with this name must exist, pointing to the actual library file. The linker then reads the soname from
the library file and records it into the application executable file. Finally, the linker creates the application
such that it declares dependency on the library using the soname, not name or file name.
When the runtime dynamic linker links an application before running, it reads the soname from
application’s executable file. This soname is libfoo.so.x. A symbolic link with this name must exist,
pointing to the actual library file. This allows loading the library, regardless of the Y component of
version, because the soname does not change.
NOTE
The Y component of the version number is not limited to just a single number.
Additionally, some libraries encode version in their name.
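Reading soname from a File
To display the soname of a library file, one option is to use the objdump tool:
$ objdump -p somelibrary | grep SONAME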
Replace somelibrary with the actual file name of the library you wish to examine.
17.3. CREATING DYNAMIC LIBRARIES WITH GCC
Prerequisites
Steps
2. Compile each source file to an object file with the Position independent code option -fPIC:
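For example (some_file.c stands for each of your source files):
$ gcc -c -fPIC some_file.c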
The object files have the same file names as the original source code files, but their extension is
.o.
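3. Link the shared library from the object files, recording the soname. A sketch of the command, where the object file names and the version numbers x and y are placeholders:
$ gcc -shared -o libfoo.so.x.y -Wl,-soname,libfoo.so.x some_file.o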
4. Copy the libfoo.so.x.y file to an appropriate location, where the system’s dynamic linker can
find it. On Red Hat Enterprise Linux, the directory for libraries is /usr/lib64:
# cp libfoo.so.x.y /usr/lib64
Note that you need root permissions to manipulate files in this directory.
# ln -s libfoo.so.x.y libfoo.so.x
# ln -s libfoo.so.x libfoo.so
Additional Resources
17.4. CREATING STATIC LIBRARIES WITH GCC AND AR
NOTE
Red Hat discourages use of static linking for security reasons. Use static linking only when
necessary, especially against libraries provided by Red Hat. See Section 16.2, “Static and
dynamic linking”.
Prerequisites
Steps
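1. Compile the source files to object code, for example (source_file.c is a placeholder):
$ gcc -c source_file.c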
Append more source files as required. The resulting object files share the file name but use the
.o file name extension.
2. Turn the object files into a static library (archive) using the ar tool from the binutils package.
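For example, to create an archive libfoo.a from an object file (the file names are placeholders):
$ ar rcs libfoo.a source_file.o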
To list the contents of the resulting archive, use the nm tool:
$ nm libfoo.a
5. When linking against the library, GCC will automatically recognize from the .a file name
extension that the library is an archive for static linking.
Additional Resources
$ man ar
CHAPTER 18. MANAGING MORE CODE WITH MAKE
Red Hat Enterprise Linux contains GNU make, a build system designed for this purpose.
18.1. GNU MAKE AND MAKEFILE OVERVIEW
Prerequisites
GNU make
GNU make reads Makefiles which contain the instructions describing the build process. A Makefile contains multiple rules that describe a way to satisfy a certain condition (target) with a specific action (recipe). Rules can hierarchically depend on other rules.
Running make without any options makes it look for a Makefile in the current directory and attempt to
reach the default target. The actual Makefile file name can be one of Makefile, makefile, and
GNUmakefile. The default target is determined from the Makefile contents.
Makefile Details
Makefiles use a relatively simple syntax for defining variables and rules. A rule consists of a target and a recipe. The target specifies what the output is when a rule is executed. The lines with recipes must start with the TAB character.
Typically, a Makefile contains rules for compiling source files, a rule for linking the resulting object files,
and a target that serves as the entry point at the top of the hierarchy.
Consider the following Makefile for building a C program which consists of a single file, hello.c.
all: hello
hello: hello.o
	gcc hello.o -o hello
hello.o: hello.c
	gcc -c hello.c -o hello.o
This specifies that to reach the target all, file hello is required. To get hello, one needs hello.o (linked
by gcc), which in turn is created from hello.c (compiled by gcc).
The target all is the default target because it is the first target that does not start with a period (.).
Running make without any arguments is then identical to running make all, when the current directory
contains this Makefile.
Typical Makefile
A more typical Makefile uses variables to generalize the steps and adds a target "clean", which removes everything but the source files.
CC=gcc
CFLAGS=-c -Wall
SOURCE=hello.c
OBJ=$(SOURCE:.c=.o)
EXE=hello

$(EXE): $(OBJ)
	$(CC) $(OBJ) -o $@
%.o: %.c
	$(CC) $(CFLAGS) $< -o $@
clean:
	rm -rf $(OBJ) $(EXE)
Adding more source files to such Makefile requires only adding them to the line where the SOURCE
variable is defined.
Additional resources
18.2. EXAMPLE: BUILDING A C PROGRAM USING A MAKEFILE
Prerequisites
Steps
1. Create a directory hellomake and change to this directory:
$ mkdir hellomake
$ cd hellomake
2. Create a file hello.c with the following contents:
#include <stdio.h>
int main() {
printf("Hello, World!\n");
return 0;
}
3. Create a file Makefile with the following contents:
CC=gcc
CFLAGS=-c -Wall
SOURCE=hello.c
OBJ=$(SOURCE:.c=.o)
EXE=hello

$(EXE): $(OBJ)
	$(CC) $(OBJ) -o $@
%.o: %.c
	$(CC) $(CFLAGS) $< -o $@
clean:
	rm -rf $(OBJ) $(EXE)
CAUTION
The Makefile recipe lines must start with the tab character! When copying the text above from
the browser, you may paste spaces instead. Correct this change manually.
4. Run make:
$ make
gcc -c -Wall hello.c -o hello.o
gcc hello.o -o hello
$ ./hello
Hello, World!
$ make clean
rm -rf hello.o hello
Additional Resources
18.3. DOCUMENTATION RESOURCES FOR MAKE
Installed Documentation
Use the man and info tools to view manual pages and information pages installed on your
system:
$ man make
$ info make
Online Documentation
CHAPTER 19. USING THE ECLIPSE IDE FOR C AND C++ APPLICATION DEVELOPMENT
Additional Resources
Using Eclipse
PART V. DEBUGGING APPLICATIONS
CHAPTER 20. DEBUGGING A RUNNING APPLICATION
20.1. ENABLING DEBUGGING WITH DEBUGGING INFORMATION
20.1.1. Debugging Information
a description of how the source code text relates to the binary code
Red Hat Enterprise Linux uses the ELF format for executable binaries, shared libraries, or debuginfo
files. Within these ELF files, the DWARF format is used to hold the debug information.
CAUTION
STABS is occasionally used with UNIX. STABS is an older, less capable format. Its use is discouraged by
Red Hat. GCC and GDB support STABS production and consumption on a best effort basis only. Some
other tools such as Valgrind and elfutils do not support STABS at all.
Additional Resources
Optimizations performed by the compiler and linker can result in executable code which is hard to relate to the original source code: variables may be optimized out, loops unrolled, operations merged into the surrounding ones, and so on. This affects debugging negatively. For an improved debugging experience, consider setting the optimization with the -Og option. However, changing the optimization level changes the executable code and may change the actual behaviour, including removing some bugs.
The -fcompare-debug GCC option tests code compiled by GCC with debug information and
without debug information. The test passes if the resulting two binary files are identical. This
test ensures that executable code is not affected by any debugging options, which further
ensures that there are no hidden bugs in the debug code. Note that using the -fcompare-debug
option significantly increases compilation time. See the GCC manual page for details about this
option.
Additional Resources
Using the GNU Compiler Collection (GCC) — Options for Debugging Your Program
$ man gcc
Prerequisites
Debuginfo Packages
For applications and libraries installed in packages from the Red Hat Enterprise Linux repositories, you
can obtain the debugging information and debug source code as separate debuginfo packages
available through another channel. The debuginfo packages contain .debug files, which contain DWARF
debuginfo and the source files used for compiling the binary packages. Debuginfo package contents are
installed to the /usr/lib/debug directory.
A debuginfo package provides debugging information valid only for a binary package with the same
name, version, release and architecture:
Prerequisites
Procedure
1. Start GDB attached to the application or library you want to debug. GDB automatically
recognizes missing debugging information and suggests a command to run.
$ gdb -q /bin/ls
Reading symbols from /usr/bin/ls...Reading symbols from /usr/bin/ls...(no debugging symbols
found)...done.
(no debugging symbols found)...done.
Missing separate debuginfos, use: debuginfo-install coreutils-8.22-21.el7.x86_64
(gdb)
(gdb) q
3. Run the command suggested by GDB to install the needed debuginfo packages:
# debuginfo-install coreutils-8.22-21.el7.x86_64
Installing a debuginfo package for an application or library installs debuginfo packages for all
dependencies, too.
4. In case GDB is not able to suggest the debuginfo package, follow the procedure in
Section 20.1.5, “Getting debuginfo Packages for an Application or Library Manually” .
Additional Resources
Red Hat Developer Toolset User Guide, section Installing Debugging Information
How can I download or install debuginfo packages for RHEL systems? — Red Hat
Knowledgebase solution
20.1.5. Getting debuginfo Packages for an Application or Library Manually
NOTE
Prefer use of GDB to determine the packages for installation . Use this manual procedure
only if GDB is not able to suggest the package to install.
Prerequisites
Procedure
1. Determine the path to the executable file or library, for example with the which command:
$ which nautilus
/usr/bin/nautilus
If the original reasons for debugging included error messages, pick the result where the
library has the same additional numbers in its file name. If in doubt, try following the rest of
the procedure with the result where the library file name includes no additional numbers.
2. Using the file path, search for a package which provides that file.
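One way to search is the yum provides command, where file_path stands for the path found in the previous step:
$ yum provides file_path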
IMPORTANT
If this step does not produce any results, it is not possible to determine which
package provided the binary file and this procedure fails.
3. Use the rpm low-level package management tool to find what package version is installed on
the system. Use the package name as an argument:
$ rpm -q zlib
zlib-1.2.7-17.el7.x86_64
The output provides details for the installed package in the format
name-version.distribution.platform.
4. Install the debuginfo packages using the debuginfo-install utility. In the command, use the
package name and other details you determined during the previous step:
# debuginfo-install zlib-1.2.7-17.el7.x86_64
Installing a debuginfo package for an application or library installs debuginfo packages for all
dependencies, too.
Additional Resources
Red Hat Developer Toolset User Guide — 1.5.4. Installing Debugging Information
How can I download or install debuginfo packages for RHEL systems? — Knowledgebase article
Red Hat Enterprise Linux contains the GNU debugger (GDB) which offers this functionality through a
command line user interface.
For a graphical frontend to GDB, install the Eclipse integrated development environment. See Using
Eclipse.
GDB Capabilities
A single GDB session can debug:
programs on remote machines or in containers with the gdbserver utility connected over a
TCP/IP network connection
Debugging Requirements
To debug any executable code, GDB requires the respective debugging information:
For programs developed by you, you can create the debugging information while building the
code.
For system programs installed from packages, their respective debuginfo packages must be
installed.
Prerequisites
To start debugging a program that is not running yet, run GDB with the program as an argument:
$ gdb program
GDB sets up to start execution of the program. You can set up breakpoints and the gdb environment
before beginning execution of the process with the run command.
To attach GDB to a program that is already running, find the process id (pid) of the program:
$ ps -C program -o pid h
pid
Then attach GDB to the process:
$ gdb -p pid
Alternatively, attach to the process from within a running GDB session:
1. Use the shell GDB command to run the ps command and find the program’s process id ( pid):
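For example:
(gdb) shell ps -C program -o pid h
pid
Then use the attach command with the pid value found:
(gdb) attach pid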
NOTE
In some cases, GDB might not be able to find the respective executable file. Use the file
command to specify the path:
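For example (replace path/to/program with the actual path to the executable file):
(gdb) file path/to/program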
Additional Resources
Prerequisites
r (run)
Start the execution of the program. If run is executed with any arguments, those arguments are
passed on to the executable as if the program has been started normally. Users normally issue this
command after setting breakpoints.
start
Start the execution of the program, and stop at the beginning of the program’s main function. If
start is executed with any arguments, those arguments are passed on to the executable as if the
program has been started normally.
c (continue)
Continue the execution of the program from the current state. The execution of the program will
continue until one of the following becomes true:
A breakpoint is reached
An error occurs
n (next)
Continue the execution of the program from the current state, until the next line of code in the current
source file is reached. The execution of the program will continue until one of the following becomes
true:
A breakpoint is reached
An error occurs
s (step)
The step command also halts execution at each sequential line of code in the current source file.
However, if the execution is currently stopped at a source line containing a function call, GDB stops
the execution after entering the function call (rather than executing it).
until location
Continue the execution until the code location specified by the location option is reached.
fini (finish)
Resume the execution of the program and halt when execution returns from a function. The
execution of the program will continue until one of the following becomes true:
A breakpoint is reached
An error occurs
q (quit)
Terminate the execution and exit GDB.
Additional Resources
Section 20.2.5, “Using GDB Breakpoints to Stop Execution at Defined Code Locations”
Prerequisites
p (print)
Display the value of the argument given. Usually, the argument is the name of a variable of any
complexity, from a simple single value to a structure. An argument can also be an expression valid in
the current language, including the use of program variables and library functions, or functions
defined in the program being tested.
It is possible to extend GDB with pretty-printer Python or Guile scripts for customized display of data
structures (such as classes, structs) using the print command.
bt (backtrace)
Display the chain of function calls used to reach the current execution point, or the chain of functions
used up until execution was terminated. This is useful for investigating serious bugs (such as
segmentation faults) with elusive causes.
Adding the full option to the backtrace command displays local variables, too.
It is possible to extend GDB with frame filter Python scripts for customized display of data displayed
using the bt and info frame commands. The term frame refers to the data associated with a single
function call.
info
The info command is a generic command to provide information about various items. It takes an
option specifying the item to describe.
The info args command displays the arguments of the function call that is the currently selected
frame.
The info locals command displays local variables in the currently selected frame.
For a list of the possible items, run the command help info in a GDB session:
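(gdb) help info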
l (list)
Show the line in the source code where the program stopped. This command is available only when
the program execution is stopped. While not strictly a command to show internal state, list helps the
user understand what changes to the internal state will happen in the next step of the program’s
execution.
Additional Resources
Prerequisites
Understanding GDB
To place a breakpoint:
Specify the name of the source code file and the line in that file:
(gdb) br file:line
When file is not present, the name of the source file at the current point of execution is used:
(gdb) br line
Alternatively, specify the name of a function to stop the execution when that function is entered:
(gdb) br function_name
A program might encounter an error after a certain number of iterations of a task. To specify an
additional condition to halt execution:
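(gdb) br file:line if condition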
Replace condition with a condition in the C or C++ language. The meaning of file and line is the
same as above.
To display the status of all breakpoints and watchpoints:
(gdb) info br
To remove a breakpoint by using its number as displayed in the output of info br:
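For example, using the delete command (replace number with the breakpoint number shown by info br):
(gdb) delete number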
Additional Resources
20.2.6. Using GDB Watchpoints to Stop Execution on Data Access and Changes
In many cases, it is advantageous to let the program execute until certain data changes or is accessed.
This section lists the most common use cases.
Prerequisites
Understanding GDB
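To place a watchpoint for a data change (write), a command of the following form can be used:
(gdb) watch expression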
Replace expression with an expression that describes what you want to watch. For variables,
expression is equal to the name of the variable.
To place a watchpoint for any data access (both read and write):
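(gdb) awatch expression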
To display the status of all watchpoints and breakpoints:
(gdb) info br
To remove a watchpoint:
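(gdb) delete num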
Replace the num option with the number reported by the info br command.
Additional Resources
Prerequisites
The follow-fork-mode setting controls whether GDB follows the parent or the child after the
fork.
The set detach-on-fork setting controls whether GDB keeps control of the other (not
followed) process or leaves it to run.
set detach-on-fork on
The process which is not followed (depending on the value of follow-fork-mode) is detached
and runs independently. This is the default.
set detach-on-fork off
GDB keeps control of both processes. The process which is followed (depending on the
value of follow-fork-mode) is debugged as usual, while the other is suspended.
show detach-on-fork
Display the current setting of detach-on-fork.
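For example, a session that needs to debug the child process while keeping the parent suspended might use the following settings (the values shown are illustrative):
(gdb) set follow-fork-mode child
(gdb) set detach-on-fork off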
GDB uses a concept of current thread. By default, commands apply to the current thread only.
info threads
Display a list of threads with their id and gid numbers, indicating the current thread.
thread id
Set the thread with the specified id as the current thread.
thread apply ids command
Apply the command command to all threads listed by ids. The ids option is a space-separated list of
thread ids. A special value all applies the command to all threads.
break location thread id if condition
Set a breakpoint at a certain location with a certain condition only for the thread number id.
watch expression thread id
Set a watchpoint defined by expression only for the thread number id.
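For example, a session might list the threads, switch to one of them, and then print a backtrace for all of them (the thread number is illustrative):
(gdb) info threads
(gdb) thread 2
(gdb) thread apply all bt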
command&
Execute command command and return immediately to the gdb prompt (gdb), continuing any code
execution in the background.
interrupt
Halt execution in the background.
Additional Resources
strace
The strace tool primarily enables logging of system calls (kernel functions) used by an application.
The strace output is detailed and explains the calls well, because strace interprets
parameters and results with knowledge of the underlying kernel code. Numbers are turned
into the respective constant names, bitwise combined flags expanded to flag list, pointers to
character arrays dereferenced to provide the actual string, and more. Support for more
recent kernel features may be lacking.
You can filter the traced calls to reduce the amount of captured data.
The use of strace does not require any particular setup except for setting up the
log filter.
Tracing the application code with strace results in significant slowdown of the application’s
execution. As a result, strace is not suitable for many production deployments. As an
alternative, consider using ltrace or SystemTap.
The version of strace available in Red Hat Developer Toolset can also perform system call
tampering. This capability is useful for debugging.
ltrace
The ltrace tool enables logging of an application’s user space calls into shared objects (dynamic
libraries).
You can filter the traced calls to reduce the amount of captured data.
The use of ltrace does not require any particular setup except for setting up the log filter.
ltrace is lightweight and fast, offering an alternative to strace: it is possible to trace the
respective interfaces in libraries such as glibc with ltrace instead of tracing kernel functions
with strace.
Because ltrace does not handle a known set of calls like strace, it does not attempt to
explain the values passed to library functions. The ltrace output contains only raw numbers
and pointers. The interpretation of ltrace output requires consulting the actual interface
declarations of the libraries present in the output.
SystemTap
SystemTap is an instrumentation platform for probing running processes and kernel activity on the
Linux system. SystemTap uses its own scripting language for programming custom event handlers.
Compared to using strace and ltrace, scripting the logging means more work in the initial
setup phase. However, the scripting capabilities extend SystemTap’s usefulness beyond just
producing logs.
SystemTap works by creating and inserting a kernel module. The use of SystemTap is
efficient and does not create a significant slowdown of the system or application execution
on its own.
GDB
The GNU Debugger is primarily meant for debugging, not logging. However, some of its features
make it useful even in the scenario where an application’s interaction is the primary activity of
interest.
With GDB, it is possible to conveniently combine the capture of an interaction event with
immediate debugging of the subsequent execution path.
GDB is best suited for analyzing the response to infrequent or singular events, after the initial
identification of a problematic situation by other tools. Using GDB in any scenario with
frequent events becomes inefficient or even impossible.
Additional Resources
Prerequisites
Steps
2. If the program you want to monitor is not running, start strace and specify the program:
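A command of the following form can be used; the options mirror the attach example shown below and can be adjusted as needed:
$ strace -fvttTyy -s 256 -e trace=call program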
Replace call with the system calls to be displayed. You can use the -e trace=call option multiple
times. If left out, strace will display all system call types. See the strace(1) manual page for more
information.
If the program is already running, find its process id (pid) and attach strace to it:
$ ps -C program
(...)
$ strace -fvttTyy -s 256 -e trace=call -p pid
If you do not wish to trace any forked processes or threads, leave out the -f option.
3. strace displays the system calls made by the application and their details.
In most cases, an application and its libraries make a large number of calls and strace output
appears immediately, if no filter for system calls is set.
To terminate the monitoring before the traced program exits, press Ctrl+C.
If strace started the program, the program terminates together with strace.
If you attached strace to an already running program, the program continues to run after
strace detaches from it.
Problems with resource access or availability are present in the log as calls returning errors.
Values passed to the system calls and patterns of call sequences provide insight into the
causes of the application’s behaviour.
If the application crashes, the important information is probably at the end of log.
The output contains a lot of unnecessary information. However, you can construct a more
precise filter and repeat the procedure.
NOTE
It is advantageous to both see the output and save it to a file. Use the tee command to
achieve this:
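One possible invocation (strace writes its trace to standard error, hence the redirection; the log file name is illustrative):
$ strace -e trace=call program 2>&1 | tee strace_output.log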
Additional Resources
How do I use strace to trace system calls made by a command? — Knowledgebase article
Prerequisites
Steps
2. If the program you want to monitor is not running, start ltrace and specify program:
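A command of the following form can be used; the options are described below, and the combination shown is illustrative:
$ ltrace -f -l library -e function program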
Supply the function names to be displayed as function. The -e function option can be used
multiple times. If left out, ltrace will display calls to all functions.
Instead of specifying functions, you can specify whole libraries with the -l library option.
This option behaves similarly to the -e function option.
If the program is already running, find its process id (pid) and attach ltrace to it:
$ ps -C program
(...)
$ ltrace ... -p pid
If you do not wish to trace any forked processes or threads, leave out the -f option.
To terminate the monitoring before the traced program exits, press Ctrl+C.
If ltrace started the program, the program terminates together with ltrace.
If you attached ltrace to an already running program, the program continues to run after
ltrace detaches from it.
If the application crashes, the important information is probably at the end of log.
The output contains a lot of unnecessary information. However, you can construct a more
precise filter and repeat the procedure.
NOTE
It is advantageous to both see the output and save it to a file. Use the tee command to
achieve this:
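One possible invocation (ltrace also writes its output to standard error; the log file name is illustrative):
$ ltrace program 2>&1 | tee ltrace_output.log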
Additional Resources
Prerequisites
Steps
1. Create a file (for example, my_script.stp) with the following SystemTap script:
# Announce that probing of the target process has started
probe begin
{
  printf("waiting for syscalls of process %d \n", target())
}
# Print the name and arguments of every system call made by the target process
probe syscall.*
{
  if (pid() == target())
    printf("%s(%s)\n", name, argstr)
}
# Exit the script when the target process terminates
probe process.end
{
  if (pid() == target())
    exit()
}
2. Find the process id (pid) of the process to be probed:
$ ps -aux
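3. Run the script with the stap tool, passing the process id found in the previous step (the script file name matches the example above):
# stap -x pid my_script.stp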
The script is compiled to a kernel module which is then loaded. This introduces a slight delay
between entering the command and getting the output.
4. When the process performs a system call, the call name and its parameters are printed to the
terminal.
5. The script exits when the process terminates, or when you press Ctrl+C.
Additional Resources
Prerequisites
The command catch syscall sets a special type of breakpoint that halts execution when a
system call is performed by the program.
The syscall-name option specifies the name of the call. You can specify multiple catchpoints
for various system calls. Leaving out the syscall-name option causes GDB to stop on any
system call.
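For example, to set such a catchpoint (leave out syscall-name to stop on any system call):
(gdb) catch syscall syscall-name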
(gdb) r
(gdb) c
3. GDB halts execution after any specified system call is performed by the program.
Additional Resources
Prerequisites
The command catch signal sets a special type of breakpoint that halts execution when a
signal is received by the program. The signal-type option specifies the type of the signal. Use
the special value 'all' to catch all signals.
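For example, to set such a catchpoint:
(gdb) catch signal signal-type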
(gdb) r
(gdb) c
3. GDB halts execution after the program receives any specified signal.
Additional Resources
CHAPTER 21. DEBUGGING A CRASHED APPLICATION
Prerequisites
Description
A core dump is a copy of a part of the application’s memory at the moment the application stopped
working, stored in the ELF format. It contains all the application’s internal variables and stack, which
enables inspection of the application’s final state. When augmented with the respective executable file
and debugging information, it is possible to analyze a core dump file with a debugger in a way similar to
analyzing a running program.
The Linux operating system kernel can record core dumps automatically, if this functionality is enabled.
Alternatively, you can send a signal to any running application to generate a core dump regardless of its
actual state.
WARNING
Steps
1. Enable core dumps. Edit the file /etc/systemd/system.conf and change the line containing
DefaultLimitCORE to the following:
DefaultLimitCORE=infinity
2. Reboot the system:
# shutdown -r now
3. Remove the limits for core dump sizes:
# ulimit -c unlimited
To reverse this change, run the command with value 0 instead of unlimited.
4. When an application crashes, a core dump is generated. The default location for core dumps is
the application’s working directory at the time of crash.
5. Create an SOS report to provide additional information about the system:
# sosreport
This creates a tar archive containing information about your system, such as copies of
configuration files.
6. Transfer the core dump and the SOS report to the computer where the debugging will take
place. Transfer the executable file, too, if it is known.
IMPORTANT
When the executable file is not known, subsequent analysis of the core file
identifies it.
7. Optional: Remove the core dump and SOS report after transferring them, to free up disk space.
Additional Resources
How to enable core file dumps when an application crashes or segmentation faults —
Knowledgebase article
What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later? —
Knowledgebase article
Steps
1. To identify the executable file where the crash occurred, run the eu-unstrip command with the
core dump file:
$ eu-unstrip -n --core=./core.9814
0x400000+0x207000 2818b2009547f780a5639c904cded443e564973e@0x400284 /usr/bin/sleep /usr/lib/debug/bin/sleep.debug [exe]
0x7fff26fff000+0x1000 1e2a683b7d877576970e4275d41a6aaec280795e@0x7fff26fff340 . - linux-vdso.so.1
0x35e7e00000+0x3b6000 374add1ead31ccb449779bc7ee7877de3377e5ad@0x35e7e00280 /usr/lib64/libc-2.14.90.so /usr/lib/debug/lib64/libc-2.14.90.so.debug libc.so.6
0x35e7a00000+0x224000 3ed9e61c2b7e707ce244816335776afa2ad0307d@0x35e7a001d8 /usr/lib64/ld-2.14.90.so /usr/lib/debug/lib64/ld-2.14.90.so.debug ld-linux-x86-64.so.2
The output contains details for each module on a line, separated by spaces. The information is
listed in this order:
1. The memory address where the module was mapped and the size of the mapped region.
2. The build-id of the module and where in the memory it was found.
3. The module’s executable file name, displayed as - when unknown, or as . when the module
has not been loaded from a file
4. The source of debugging information, displayed as a file name when available, as . when
contained in the executable file itself, or as - when not present at all
5. The shared library name (soname), or [exe] for the main module
In this example, the important details are the file name /usr/bin/sleep and the build-id
2818b2009547f780a5639c904cded443e564973e on the line containing the text [exe]. With this
information, you can identify the executable file required for analyzing the core dump.
2. Get the executable file in which the crash occurred.
If possible, copy it from the system where the crash occurred. Use the file name extracted
from the core file.
Alternatively, use an identical executable file on your system. Each executable file built on
Red Hat Enterprise Linux contains a note with a unique build-id value. Determine the build-
id of the relevant locally available executable files:
$ eu-readelf -n executable_file
Use this information to match the executable file on the remote system with your local
copy. The build-id of the local file and build-id listed in the core dump must match.
Finally, if the application is installed from an RPM package, you can get the executable file
from the package. Use the sosreport output to find the exact version of the package
required.
3. Get the shared libraries used by the executable file. Use the same steps as for the executable
file.
4. If the application is distributed as a package, load the executable file in GDB, to display hints for
missing debuginfo packages. For more details, see Section 20.1.4, “Getting debuginfo Packages
for an Application or Library using GDB”.
5. To examine the core file in detail, load the executable file and core dump file with GDB:
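A command of the following form can be used (the file names are placeholders for the actual executable and core dump files):
$ gdb executable_file core_file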
Further messages about missing files and debugging information help you identify what is
missing for the debugging session. Return to the previous step if needed.
If the application’s debugging information is available as a file instead of as a package, load this
file in GDB with the symbol-file command:
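For example (the file name is illustrative):
(gdb) symbol-file program.debug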
NOTE
It might not be necessary to install the debugging information for all executable
files contained in the core dump. Most of these executable files are libraries used
by the application code. These libraries might not directly contribute to the
problem you are analyzing, and you do not need to include debugging information
for them.
6. Use the GDB commands to inspect the state of the application at the moment it crashed. See
Section 20.2, “Inspecting Application Internal State with GDB”.
NOTE
When analyzing a core file, GDB is not attached to a running process. Commands
for controlling execution have no effect.
Additional Resources
Prerequisites
Steps
To dump a process memory using gcore:
1. Find out the process id (pid). Use tools such as ps, pgrep, and top:
$ ps -C some-program
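2. Dump the process memory with a command of the following form, replacing filename and pid with the desired output file name and the process id found above:
$ gcore -o filename pid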
This creates the file filename and dumps the process memory into it. While the memory is
being dumped, the execution of the process is halted.
3. After the core dump is finished, the process resumes normal execution.
4. Create an SOS report to provide additional information about the system:
# sosreport
This creates a tar archive containing information about your system, such as copies of
configuration files.
5. Transfer the program’s executable file, core dump, and the SOS report to the computer where
the debugging will take place.
6. Optional: Remove the core dump and SOS report after transferring them, to free up disk space.
Additional Resources
In some cases, it is necessary to dump the whole contents of the process memory regardless of
protections that exclude parts of the memory from core dumps. This procedure shows how to do this using the GDB debugger.
Prerequisites
Steps
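A minimal sketch of such a GDB session (the use-coredump-filter setting instructs GDB to ignore the process’s core dump filter; pid and core-file are placeholders):
$ gdb -p pid
(gdb) set use-coredump-filter off
(gdb) gcore core-file
(gdb) q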
Replace core-file with the name of the file where you want to dump the memory.
Additional Resources
Debugging with GDB - How to Produce a Core File from Your Program
Red Hat Enterprise Linux includes a number of different tools (Valgrind, OProfile, perf, and
SystemTap) to collect profiling data. Each tool is suitable for performing specific types of profile runs,
as described in the following sections.
CHAPTER 22. VALGRIND
Valgrind provides instrumentation for user-space binaries to check for errors, such as the use of
uninitialized memory, improper allocation/freeing of memory, and improper arguments for system calls.
Its profiling tools can be used by normal users on most binaries; however, compared to other profilers,
Valgrind profile runs are significantly slower. To profile a binary, Valgrind runs it inside a special virtual
machine, which allows Valgrind to intercept all of the binary instructions. Valgrind's tools are most useful
for looking for memory-related issues in user-space programs; they are not suitable for debugging time-
specific issues or for kernel-space instrumentation and debugging.
Valgrind reports are most useful and accurate when debuginfo packages are installed for the programs
or libraries under investigation. See Section 20.1, “Enabling Debugging with Debugging Information” .
memcheck
This tool detects memory management problems in programs by checking all reads from and writes to memory and by intercepting calls to memory manipulation routines such as malloc, free, new, and delete.
memcheck is perhaps the most used Valgrind tool, as memory management problems can be
difficult to detect using other means. Such problems often remain undetected for long periods,
eventually causing crashes that are difficult to diagnose.
cachegrind
cachegrind is a cache profiler that accurately pinpoints sources of cache misses in code by
performing a detailed simulation of the I1, D1 and L2 caches in the CPU. It shows the number of cache
misses, memory references, and instructions accruing to each line of source code; cachegrind also
provides per-function, per-module, and whole-program summaries, and can even show counts for
each individual machine instruction.
callgrind
Like cachegrind, callgrind can model cache behavior. However, the main purpose of callgrind is to
record call graph data for the executed code.
massif
massif is a heap profiler; it measures how much heap memory a program uses, providing information
on heap blocks, heap administration overheads, and stack sizes. Heap profilers are useful in finding
ways to reduce heap memory usage. On systems that use virtual memory, programs with optimized
heap memory usage are less likely to run out of memory, and may be faster as they require less
paging.
helgrind
In programs that use the POSIX pthreads threading primitives, helgrind detects synchronization
errors, such as misuse of the pthreads API, potential deadlocks arising from lock ordering problems,
and data races.
Valgrind also allows you to develop your own profiling tools. In line with this, Valgrind includes the
lackey tool, which is a sample that can be used as a template for generating your own tools.
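To run a program under one of the Valgrind tools, a command of the following general form is used, where toolname is one of the tools described above:
$ valgrind --tool=toolname program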
See Section 22.1, “Valgrind Tools” for a list of arguments for toolname. In addition to the suite of
Valgrind tools, none is also a valid argument for toolname; this argument allows you to run a program
under Valgrind without performing any profiling. This is useful for debugging or benchmarking Valgrind
itself.
You can also instruct Valgrind to send all of its information to a specific file. To do so, use the option
--log-file=filename. For example, to check the memory usage of the executable file hello and send
profile information to output, use:
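One possible command, matching the description above:
$ valgrind --tool=memcheck --log-file=output ./hello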
See Section 22.3, “Additional information” for more information on Valgrind, along with other available
documentation on the Valgrind suite of tools.
/usr/share/doc/valgrind-version/valgrind_manual.pdf
/usr/share/doc/valgrind-version/html/index.html
CHAPTER 23. OPROFILE
ophelp
Displays available events for the system’s processor along with a brief description of each.
operf
The main profiling tool. The operf tool uses the Linux Performance Events subsystem, which allows
precise targeting of profiling, and allows OProfile to operate alongside other tools using
performance monitoring hardware of your system.
Unlike the previously used opcontrol tool, no initial setup is required, and it can be used without root
privileges unless the --system-wide option is used.
ocount
A tool for counting the absolute number of event occurrences. It can count events on the whole
system, per process, per CPU, or per thread.
opimport
Converts sample database files from a foreign binary format to the native format for the system.
Only use this tool when analyzing a sample database from a different architecture.
opannotate
Creates an annotated source for an executable if the application was compiled with debugging
symbols.
opreport
Retrieves profile data.
The following example shows counting the number of events with ocount during execution of the
sleep utility:
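One possible invocation (without the -e option, ocount counts its default event set; the sleep duration is illustrative):
$ ocount sleep 5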
In the following example, the operf tool is used to collect profiling data from the ls -l ~ command.
# debuginfo-install -y coreutils
$ operf ls -l ~
Profiling done.
$ opreport --symbols
CPU: Intel Skylake microarchitecture, speed 3.4e+06 MHz (estimated)
Counted cpu_clk_unhalted events () with a unit mask of 0x00 (Core cycles when at least
one thread on the physical core is not in halt state) count 100000
samples % image name symbol name
161 81.3131 no-vmlinux /no-vmlinux
3 1.5152 libc-2.17.so get_next_seq
3 1.5152 libc-2.17.so strcoll_l
2 1.0101 ld-2.17.so _dl_fixup
2 1.0101 ld-2.17.so _dl_lookup_symbol_x
[...]
In the following example, the operf tool is used to collect profiling data from a Java (JIT) program,
and the opreport tool is then used to output per-symbol data.
1. Install the demonstration Java program used in this example. It is a part of the java-1.8.0-
openjdk-demo package, which is included in the Optional channel. See Adding the Optional
and Supplementary Repositories for instructions on how to use the Optional channel. When
the Optional channel is enabled, install the package:
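# yum install java-1.8.0-openjdk-demo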
2. Install the oprofile-jit package for OProfile to be able to collect profiling data from Java
programs:
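# yum install oprofile-jit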
3. Create a directory for OProfile data:
$ mkdir ~/oprofile_data
4. Change into the directory with the demonstration program:
$ cd /usr/lib/jvm/java-1.8.0-openjdk/demo/applets/MoleculeViewer/
6. Change into the home directory and analyze the collected data:
$ cd
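Assuming the profiling data was recorded into ~/oprofile_data (for example, with the operf -d option), one possible analysis command is:
$ opreport --symbols --session-dir=./oprofile_data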
OProfile Manual
A comprehensive manual with detailed instructions on the setup and use of OProfile is found at
file:///usr/share/doc/oprofile-version/oprofile.html
OProfile Internals
Documentation on the internal workings of OProfile, useful for programmers interested in
contributing to the OProfile upstream, can be found at
file:///usr/share/doc/oprofile-version/internals.html
CHAPTER 24. SYSTEMTAP
1. Write SystemTap scripts that specify which system events (for example, virtual file system
reads, packet transmissions) should trigger specified actions (for example, print, parse, or
otherwise manipulate data).
2. SystemTap translates the script into a C program, which it compiles into a kernel module.
SystemTap scripts are useful for monitoring system operation and diagnosing system issues with
minimal intrusion into the normal operation of the system. You can quickly instrument a running system
to test hypotheses without having to recompile and re-install instrumented code. To compile a SystemTap
script that probes kernel-space, SystemTap uses information from three different kernel information
packages:
kernel-variant-devel-version
kernel-variant-debuginfo-version
kernel-debuginfo-common-arch-version
These kernel information packages must match the kernel to be probed. In addition, to compile
SystemTap scripts for multiple kernels, the kernel information packages of each kernel must also be
installed.
CHAPTER 25. PERFORMANCE COUNTERS FOR LINUX (PCL) TOOLS AND PERF
Performance counters can also be configured to record samples. The relative amount of samples can be
used to identify which regions of code have the greatest impact on performance.
perf stat
This perf command provides overall statistics for common performance events, including
instructions executed and clock cycles consumed. Options allow selection of events other than the
default measurement events.
perf record
This perf command records performance data into a file which can be later analyzed using perf
report.
perf report
This perf command reads the performance data from a file and analyzes the recorded data.
perf list
This perf command lists the events available on a particular machine. These events will vary based on
performance monitoring hardware and software configuration of the system.
Use perf help to obtain a complete list of perf commands. To retrieve man page information on each
perf command, use perf help command.
To collect statistics on make and its children, use the following command:
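# perf stat -- make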
The perf command collects a number of different hardware and software counters. It then prints the
following information:
The perf tool can also record samples. For example, to record data on the make command and its
children, use:
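# perf record -- make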
This prints out the file in which the samples are stored, along with the number of samples collected:
If another tool, such as the legacy OProfile daemon, is already using the performance monitoring
hardware, perf reports an error similar to the following:
Error: open_counter returned with 16 (Device or resource busy). /usr/bin/dmesg may provide
additional information.
In that case, shut down OProfile before retrying:
# opcontrol --deinit
You can then analyze perf.data to determine the relative frequency of samples. The report output
includes the command, object, and function for the samples. Use perf report to output an analysis of
perf.data. For example, the following command produces a report of the executable that consumes the
most time:
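One possible command (sorting the report by the command that generated the samples):
# perf report --sort=comm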
# Samples: 1083783860000
#
# Overhead Command
# ........ ...............
#
48.19% xsltproc
44.48% pdfxmltex
6.01% make
0.95% perl
0.17% kernel-doc
0.05% xmllint
0.05% cc1
0.03% cp
0.01% xmlto
0.01% sh
0.01% docproc
0.01% ld
0.01% gcc
0.00% rm
0.00% sed
0.00% git-diff-files
0.00% bash
0.00% git-diff-index
The column on the left shows the relative amount of samples. This output shows that make spends most
of its time in xsltproc and pdfxmltex. To reduce the time for make to complete, focus on xsltproc
and pdfxmltex. To list functions executed by xsltproc, run:
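One possible command (restricting the report to the xsltproc command and showing sample counts):
# perf report -n --comms=xsltproc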
This generates:
comm: xsltproc
# Samples: 472520675377
#
# Overhead Samples Shared Object Symbol
# ........ .......... ............................. ......
#
45.54%  215179861044  libxml2.so.2.7.6  [.] xmlXPathCmpNodesExt
11.63%   54959620202  libxml2.so.2.7.6  [.] xmlXPathNodeSetAdd__internal_alias
 8.60%   40634845107  libxml2.so.2.7.6  [.] xmlXPathCompOpEval
 4.63%   21864091080  libxml2.so.2.7.6  [.] xmlXPathReleaseObject
 2.73%   12919672281  libxml2.so.2.7.6  [.] xmlXPathNodeSetSort__internal_alias
 2.60%   12271959697  libxml2.so.2.7.6  [.] valuePop
 2.41%   11379910918  libxml2.so.2.7.6  [.] xmlXPathIsNaN__internal_alias
 2.19%   10340901937  libxml2.so.2.7.6  [.] valuePush__internal_alias