Developing C and C++ applications in RHEL 8
Setting up a developer workstation, and developing and debugging C and C++ applications in Red Hat Enterprise Linux 8
Providing feedback on Red Hat documentation
We are committed to providing high-quality documentation and value your feedback. To help us improve, you can submit suggestions or report errors through the Red Hat Jira tracking system.
Procedure
- Log in to the Jira website. If you do not have an account, select the option to create one.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialogue.
Chapter 1. Setting up a development workstation
Red Hat Enterprise Linux 8 supports the development of custom applications. To set up your system for development, install the required tools and utilities for the most common development use cases.
1.1. Enabling debug and source repositories
To access essential debugging data for system components, enable debug and source repositories. RHEL disables these by default to save space. Enable them to install debuginfo packages required for performance measurement and deep system troubleshooting.
Procedure
Enable the source and debug information package channels:
Enable the BaseOS debug repository:
# subscription-manager repos --enable rhel-8-for-$(uname -i)-baseos-debug-rpms

Enable the BaseOS source repository:

# subscription-manager repos --enable rhel-8-for-$(uname -i)-baseos-source-rpms

Enable the AppStream debug repository:

# subscription-manager repos --enable rhel-8-for-$(uname -i)-appstream-debug-rpms

Enable the AppStream source repository:

# subscription-manager repos --enable rhel-8-for-$(uname -i)-appstream-source-rpms

The $(uname -i) part is automatically replaced with the matching value for the architecture of your system:
| Architecture name | Value |
|---|---|
| 64-bit Intel and AMD | x86_64 |
| 64-bit ARM | aarch64 |
| IBM POWER | ppc64le |
| 64-bit IBM Z | s390x |
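As a quick check, the substitution can be previewed in a shell. This is a minimal sketch using the BaseOS debug repository name from the procedure above:

```shell
# The shell expands $(uname -i) to the hardware platform before
# subscription-manager ever sees the argument:
repo="rhel-8-for-$(uname -i)-baseos-debug-rpms"
echo "$repo"   # e.g. rhel-8-for-x86_64-baseos-debug-rpms on 64-bit Intel/AMD
```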
1.2. Setting up to manage application versions
Effective version control is essential to all multi-developer projects. Red Hat Enterprise Linux includes Git, a distributed version control system.
Procedure
Install the git package:
# yum install git

Optional: Set the full name associated with your Git commits:

$ git config --global user.name "Full Name"

Optional: Set the email address associated with your Git commits:

$ git config --global user.email "email@example.com"

Replace Full Name and email@example.com with your actual name and email address.

Optional: To change the default text editor started by Git, set the value of the core.editor configuration option:

$ git config --global core.editor command

Replace command with the command to be used to start the selected text editor.
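The configuration steps can be verified by reading the values back. A minimal sketch, using a throwaway HOME directory so the demonstration does not touch real settings:

```shell
# Point HOME at a scratch directory so --global writes a temporary .gitconfig:
export HOME=/tmp/git-config-demo && mkdir -p "$HOME"
git config --global user.name "Full Name"
git config --global user.email "email@example.com"
git config --global core.editor vi
# Reading a key back confirms the setting took effect:
git config --global user.name   # prints: Full Name
```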
1.3. Setting up to develop applications using C and C++
To develop C and C++ applications on Red Hat Enterprise Linux, you can use the development tools provided by Red Hat Enterprise Linux. This procedure describes how to install the standard development tools, including the GCC and LLVM toolchains.
Prerequisites
- The debug and source repositories must be enabled.
Procedure
Install the Development Tools package group including GNU Compiler Collection (GCC), GNU Debugger (GDB), and other development tools:
# yum group install "Development Tools"

Install the LLVM-based toolchain, including the clang compiler and lldb debugger:

# yum install llvm-toolset

Optional: For Fortran dependencies, install the GNU Fortran compiler:
# yum install gcc-gfortran
1.4. Setting up to debug applications
To analyze and troubleshoot internal application behavior, Red Hat Enterprise Linux offers multiple debugging and instrumentation tools.
Prerequisites
- The debug and source repositories must be enabled.
Procedure
Install the tools useful for debugging:
# yum install gdb valgrind systemtap ltrace strace

Install the yum-utils package in order to use the debuginfo-install tool:

# yum install yum-utils

Run a SystemTap helper script for setting up the environment:

# stap-prep

Note that stap-prep installs packages relevant to the currently running kernel, which might not be the same as the actually installed kernel(s). To ensure stap-prep installs the correct kernel-debuginfo and kernel-headers packages, double-check the current kernel version by using the uname -r command and reboot your system if necessary.

Make sure SELinux policies allow the relevant applications to run not only normally, but also in debugging situations. For more information, see Using SELinux.
1.5. Setting up tools to measure application performance
To identify the causes of application performance loss, you can use the performance measurement tools provided by Red Hat Enterprise Linux. This procedure describes how to install tools such as perf, Valgrind, SystemTap, and Performance Co-Pilot (PCP).
Prerequisites
- The debug and source repositories must be enabled.
Procedure
Install the tools for performance measurement:
# yum install perf papi pcp-zeroconf valgrind strace sysstat systemtap

Run a SystemTap helper script for setting up the environment:

# stap-prep

Note that stap-prep installs packages relevant to the currently running kernel, which might not be the same as the actually installed kernel(s). To ensure stap-prep installs the correct kernel-debuginfo and kernel-headers packages, double-check the current kernel version by using the uname -r command and reboot your system if necessary.

Enable the Performance Co-Pilot (PCP) collector service:

# systemctl enable pmcd

Start the Performance Co-Pilot (PCP) collector service:
# systemctl start pmcd
Chapter 2. Creating C or C++ Applications
Learn how to build C and C++ code with GCC, use and create libraries, manage builds with Make, and understand toolchain changes from RHEL 7 onward.
2.1. Building code with GCC
Learn about situations where source code must be transformed into executable code.
2.1.1. Relationship between code forms
The C and C++ languages have three forms of code that are created through different stages of the build process. Understanding these relationships helps you work effectively with the GNU Compiler Collection (GCC).
Prerequisites
- Understanding the concepts of compiling and linking
Possible code forms
The code forms of C and C++ languages:
Source code written in the C or C++ language, present as plain text files.
The files typically use extensions such as .c, .cc, .cpp, .h, .hpp, .i, .inc. For a complete list of supported extensions and their interpretation, see the gcc manual pages:

$ man gcc

Object code, created by compiling the source code with a compiler. This is an intermediate form.

The object code files use the .o extension.

Executable code, created by linking object code with a linker.

Linux application executable files do not use any file name extension. Shared object (library) executable files use the .so file name extension.
Library archive files for static linking also exist. This is a variant of object code that uses the .a file name extension. Static linking is not recommended. See Section 2.2.2, “Static and dynamic linking”.
Handling of code forms in GCC
Producing executable code from source code is performed in two steps, which require different applications or tools. GCC can be used as an intelligent driver for both compilers and linkers. This allows you to use a single gcc command for any of the required actions (compiling and linking). GCC automatically selects the actions and their sequence:
- Source files are compiled to object files.
- Object files and libraries are linked (including the previously compiled sources).
It is possible to run GCC so that it performs only compiling, only linking, or both compiling and linking in a single step. This is determined by the types of inputs and requested type of output(s).
Because larger projects require a build system which usually runs GCC separately for each action, it is better to always consider compilation and linking as two distinct actions, even if GCC can perform both at once.
2.1.2. Compiling source files to object code
To compile source files into object files without creating an executable, use GCC’s -c option.
Prerequisites
- C or C++ source code file(s)
- GCC installed on the system
Procedure
- Change to the directory containing the source code file(s).
Run gcc with the -c option:

$ gcc -c source.c another_source.c

Object files are created, with their file names reflecting the original source code files: source.c results in source.o.

Note: With C++ source code, replace the gcc command with g++ for convenient handling of C++ Standard Library dependencies.
2.1.3. Enabling debugging of C and C++ applications with the GCC
To debug C and C++ applications effectively, generate debugging information during compilation. Use GCC’s -g option to create this data. Debuggers use this data to map executable code to source lines for inspecting variables and logic.
Prerequisites
- You have the gcc package installed.
Procedure
Compile and link your code with the -g option to generate debugging information:

$ gcc ... -g ...

Optional: Set the optimization level to -Og:

$ gcc ... -g -Og ...

Compiler optimizations can make executable code hard to relate to the source code. The -Og option optimizes the code without interfering with debugging. However, be aware that changing optimization levels can alter the program’s behavior.

Optional: Use -g for moderate debugging information, or -g3 to include macro definitions:

$ gcc ... -g3 ...
Verification
Test the code by using the -fcompare-debug GCC option:

$ gcc -fcompare-debug ...

This option tests code compiled with and without debug information. If the resulting binaries are identical, the executable code is not affected by debugging options. Note that using the -fcompare-debug option significantly increases compilation time.
2.1.4. Code optimization with the GCC
A single program can be transformed into more than one sequence of machine instructions. You can achieve better performance, such as faster execution speed, greater resource efficiency, or smaller file size, if you allocate more resources to analyzing the code during compilation.
With the GNU Compiler Collection (GCC), you can set the optimization level using the -Olevel option. This option accepts a set of values in place of the level.
| Level | Description |
|---|---|
| 0 | Optimize for compilation speed - no code optimization (default). |
| 1, 2, 3 | Optimize to increase code execution speed (the larger the number, the greater the speed). |
| s | Optimize for file size. |
| fast | Same as level 3, plus disregarding strict standards compliance to allow additional optimizations. |
| g | Optimize for debugging experience. |
For release builds, use the optimization option -O2.
During development, the -Og option is useful for debugging the program or library in some situations. Because some bugs manifest only with certain optimization levels, test the program or library with the release optimization level.
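The advice above can be sketched by building the same source at two levels and confirming the observable behavior does not change. The file opt.c is hypothetical, created for the demonstration:

```shell
# opt.c is a hypothetical source file for this sketch:
cat > opt.c <<'EOF'
#include <stdio.h>
int sum(int n) { int s = 0; for (int i = 1; i <= n; i++) s += i; return s; }
int main(void) { printf("%d\n", sum(100)); return 0; }
EOF
gcc -O0 opt.c -o opt0   # default: optimize for compilation speed
gcc -O2 opt.c -o opt2   # recommended level for release builds
./opt0   # prints: 5050
./opt2   # prints: 5050
```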
GCC offers a large number of options to enable individual optimizations. For more information, see the gcc manual pages.
2.1.5. Options for hardening code with the GCC
To add security checks during code compilation, you can use GNU Compiler Collection (GCC) compiler options. This helps produce more secure programs and libraries without changing source code.
Release version options
The following list of options is the recommended minimum for developers targeting Red Hat Enterprise Linux:
$ gcc ... -O2 -g -Wall -Wl,-z,now,-z,relro -fstack-protector-strong -fstack-clash-protection -D_FORTIFY_SOURCE=2 ...
- For programs, add the -fPIE and -pie Position Independent Executable options.
- For dynamically linked libraries, the mandatory -fPIC (Position Independent Code) option indirectly increases security.
Development options
Use the following options to detect security flaws during development. Use these options in conjunction with the options for the release version:
$ gcc ... -Walloc-zero -Walloca-larger-than -Wextra -Wformat-security -Wvla-larger-than ...
Additional resources
- Defensive Coding Guide
- Memory Error Detection Using GCC - Red Hat Developers Blog post
2.1.6. Linking code to create executable files
Linking is the final step when building a C or C++ application. Linking combines all object files and libraries into an executable file.
Prerequisites
- One or more object file(s)
- GCC must be installed on the system
Procedure
- Change to the directory containing the object code file(s).
Run gcc:

$ gcc ... objfile.o another_object.o ... -o executable-file

An executable file named executable-file is created from the supplied object files and libraries.
To link additional libraries, add the required options after the list of object files. For more information, see Section 2.2, “Using Libraries with the GCC”.
Note: With C++ source code, replace the gcc command with g++ for convenient handling of C++ Standard Library dependencies.
2.1.7. Example: Building a C program with the GCC (compiling and linking in one step)
To build a simple sample C program, you can use the GNU Compiler Collection (GCC). In this example, compiling and linking the code is done in one step.
Prerequisites
- You must understand how to use GCC.
Procedure
Create a directory hello-c and change to it:

$ mkdir hello-c
$ cd hello-c

Create file hello.c with the following contents:

#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}

Compile and link the code with GCC:

$ gcc hello.c -o helloworld

This compiles and links the code into the executable file helloworld in a single step.

Run the resulting executable file:

$ ./helloworld
Hello, World!
2.1.8. Example: Building a C program with the GCC (compiling and linking in two steps)
To build a simple sample C program, you can use the GNU Compiler Collection (GCC). In this example, compiling and linking the code are two separate steps.
Prerequisites
- You must understand how to use GCC.
Procedure
Create a directory hello-c and change to it:

$ mkdir hello-c
$ cd hello-c

Create file hello.c with the following contents:

#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}

Compile the code with GCC:

$ gcc -c hello.c

The object file hello.o is created.

Link an executable file helloworld from the object file:

$ gcc hello.o -o helloworld

Run the resulting executable file:

$ ./helloworld
Hello, World!
2.1.9. Example: Building a C++ program with the GCC (compiling and linking in one step)
To build a sample minimal C++ program, you can use the GNU Compiler Collection (GCC). In this example, compiling and linking the code is done in one step.
Prerequisites
- You must understand the difference between gcc and g++.
Procedure
Create a directory hello-cpp and change to it:

$ mkdir hello-cpp
$ cd hello-cpp

Create file hello.cpp with the following contents:

#include <iostream>

int main() {
    std::cout << "Hello, World!\n";
    return 0;
}

Compile and link the code with g++:

$ g++ hello.cpp -o helloworld

This compiles and links the code into the executable file helloworld in a single step.

Run the resulting executable file:

$ ./helloworld
Hello, World!
2.1.10. Example: Building a C++ program with the GCC (compiling and linking in two steps)
To build a minimal C++ program by using a two-step process, you can use the GNU Compiler Collection (GCC). First, compile the source into an object file, then link it to create the executable. This example demonstrates modular building with the GCC compiler.
Prerequisites
- You must understand the difference between gcc and g++.
Procedure
Create a directory hello-cpp and change to it:

$ mkdir hello-cpp
$ cd hello-cpp

Create file hello.cpp with the following contents:

#include <iostream>

int main() {
    std::cout << "Hello, World!\n";
    return 0;
}

Compile the code with g++:

$ g++ -c hello.cpp

The object file hello.o is created.

Link an executable file helloworld from the object file:

$ g++ hello.o -o helloworld

Run the resulting executable file:

$ ./helloworld
Hello, World!
2.2. Using Libraries with the GCC
Learn about using libraries in code.
2.2.1. Library naming conventions
System libraries require consistent naming. A library known as foo is expected to exist as file libfoo.so or libfoo.a. This convention is automatically understood by the linking input options of the GNU Compiler Collection (GCC), but not by the output options:
When linking against the library, the library can be specified only by its name foo with the -l option as -lfoo:

$ gcc ... -lfoo ...

When creating the library, the full file name libfoo.so or libfoo.a must be specified.
2.2.2. Static and dynamic linking
When building C or C++ applications, you must use dynamic linking. Static linking reduces compatibility and prevents timely library security updates.
Comparison of static and dynamic linking
Static linking makes libraries part of the resulting executable file. Dynamic linking keeps these libraries as separate files.
Dynamic and static linking can be compared in several ways:
- Resource use
Static linking results in larger executable files which contain more code. This additional code coming from libraries cannot be shared across multiple programs on the system, increasing file system usage and memory usage at run time. Multiple processes running the same statically linked program will still share the code.
Conversely, static applications need fewer runtime relocations, leading to reduced startup time, and require less private resident set size (RSS) memory. Generated code for static linking can be more efficient than for dynamic linking due to the performance cost of position-independent code (PIC).
- Security
- Dynamically linked libraries which provide ABI compatibility can be updated without changing the executable files depending on these libraries. This is especially important for libraries provided by Red Hat as part of Red Hat Enterprise Linux, where Red Hat provides security updates. Static linking against any such libraries is strongly discouraged.
- Compatibility
Static linking might seem to provide executable files independent of the versions of libraries provided by the operating system. However, most libraries depend on other libraries. With static linking, this dependency becomes inflexible and as a result, both forward and backward compatibility is lost. Static linking is guaranteed to work only on the system where the executable file was built.
Warning: Applications statically linking libraries from the GNU C library (glibc) still require glibc to be present on the system as a dynamic library. Furthermore, the dynamic library variant of glibc available at the application’s run time must be a bitwise identical version to that present while linking the application. As a result, static linking is guaranteed to work only on the system where the executable file was built.
- Support coverage
- Most static libraries provided by Red Hat are in the CodeReady Linux Builder channel and not supported by Red Hat.
- Functionality
Some libraries, notably the GNU C Library (glibc), offer reduced functionality when linked statically.
For example, when statically linked, glibc does not support threads and any form of calls to the dlopen() function in the same program.
As a result of the listed disadvantages, avoid static linking whenever possible, particularly for whole applications and the glibc and libstdc++ libraries.
Cases for static linking
Static linking might be a reasonable choice in some cases, such as:
- Using a library which is not enabled for dynamic linking.
- Fully static linking can be required for running code in an empty chroot environment or container. However, static linking using the glibc-static package is not supported by Red Hat.
2.2.3. Using a library with the GCC
A library is a package of code which can be reused in your program. A C or C++ library consists of two parts:
- The library code
- Header files
Compiling code that uses a library
The header files describe the interface of the library: the functions and variables available in the library. Information from the header files is needed for compiling the code.
Typically, header files of a library will be placed in a different directory than your application’s code. To tell the GNU Compiler Collection (GCC) where the header files are, use the -I option:
$ gcc ... -Iinclude_path ...
Replace include_path with the actual path to the header file directory.
The -I option can be used multiple times to add multiple directories with header files. When looking for a header file, these directories are searched in the order of their appearance in the -I options.
Linking code that uses a library
When linking the executable file, both the object code of your application and the binary code of the library must be available. The code for static and dynamic libraries is present in different forms:
- Static libraries are available as archive files. They contain a group of object files. The archive file has a file name extension .a.
- Dynamic libraries are available as shared objects. They are a form of an executable file. A shared object has a file name extension .so.
To tell GCC where the archives or shared object files of a library are, use the -L option:
$ gcc ... -Llibrary_path -lfoo ...
Replace library_path with the actual path to the library directory.
The -L option can be used multiple times to add multiple directories. When looking for a library, these directories are searched in the order of their -L options.
The order of options matters: GCC cannot link against a library foo unless it knows the directory with this library. Therefore, use the -L options to specify library directories before using the -l options for linking against libraries.
Compiling and linking code which uses a library in one step
When the situation allows the code to be compiled and linked in one gcc command, use the options for both situations mentioned above at once.
2.2.4. Linking static libraries with the GCC
Static libraries are archives of object files. After linking, the library code becomes part of the resulting executable file. Static linking overrides the default dynamic linking behavior.
Red Hat discourages static linking for security reasons. See Section 2.2.2, “Static and dynamic linking”. Use static linking only when necessary, especially against libraries provided by Red Hat.
Prerequisites
- GCC is installed on your system.
- You have a specific static library (for example, libfoo.a) and no dynamic version (libfoo.so) available.

Most libraries in Red Hat Enterprise Linux support dynamic linking only. These steps apply to libraries not enabled for dynamic linking.
Procedure
Compile the program source files with headers of the static library:

$ gcc ... -Iheader_path -c ...

Replace header_path with the directory path containing the header files.

Link the program with the static library:

$ gcc ... -Llibrary_path -lfoo ...

Replace library_path with the directory path containing the file libfoo.a.

Run the program:

$ ./program
The -static option forbids all dynamic linking. Use -Wl,-Bstatic and -Wl,-Bdynamic to control linker behavior more precisely. See Section 2.2.6, “Static and dynamic libraries with GCC”.
2.2.5. Using a dynamic library with the GCC
Dynamic libraries are available as standalone executable files, required at both linking time and run time. They stay independent of your application’s executable file.
Prerequisites
- GCC must be installed on the system.
- A set of source or object files forming a valid program, requiring some dynamic library foo and no other libraries.
- The foo library must be available as a file libfoo.so.

Linking a program against a dynamic library

To link a program against a dynamic library foo:
$ gcc ... -Llibrary_path -lfoo ...
When a program is linked against a dynamic library, the resulting program must always load the library at run time. There are two options for locating the library:
- Using an rpath value stored in the executable file itself
- Using the LD_LIBRARY_PATH variable at run time

Using an rpath value stored in the executable file

The rpath is a special value saved as a part of an executable file when it is being linked. Later, when the program is loaded from its executable file, the runtime linker will use the rpath value to locate the library files.
While linking with the GNU Compiler Collection (GCC), to store the path library_path as rpath:
$ gcc ... -Llibrary_path -lfoo -Wl,-rpath=library_path ...
The path library_path must point to a directory containing the file libfoo.so.
Do not add a space after the comma in the -Wl,-rpath= option.
To run the program later:
$ ./program
Using the LD_LIBRARY_PATH environment variable

If no rpath is found in the program’s executable file, the runtime linker will use the LD_LIBRARY_PATH environment variable. The value of this variable must be changed for each program. This value should represent the path where the shared library objects are located.
To run the program without rpath set, with libraries present in path library_path:
$ export LD_LIBRARY_PATH=library_path:$LD_LIBRARY_PATH
$ ./program
Leaving out the rpath value offers flexibility, but requires setting the LD_LIBRARY_PATH variable every time the program is to run.
Placing the library into the default directories

The runtime linker configuration specifies several directories as a default location of dynamic library files. To use this default behavior, copy your library to the appropriate directory.
For full details on the dynamic linker behavior, see the following resources:
Linux manual pages for the dynamic linker:

$ man ld.so

Contents of the /etc/ld.so.conf configuration file:

$ cat /etc/ld.so.conf

Report of the libraries recognized by the dynamic linker without additional configuration, which includes the directories:
$ ldconfig -v
2.2.6. Static and dynamic libraries with GCC
Combining static and dynamic linking balances portability and efficiency. GCC automatically selects shared objects over static archives unless configured otherwise. Understand this behavior to control exactly which library versions your application uses.
Introduction
gcc recognizes both dynamic and static libraries. When the -lfoo option is encountered, gcc will first attempt to locate a shared object (a .so file) containing a dynamically linked version of the foo library, and then look for the archive file (.a) containing a static version of the library. Thus, the following situations can result from this search:
- Only the shared object is found, and gcc links against it dynamically.
- Only the archive is found, and gcc links against it statically.
- Both the shared object and archive are found, and by default, gcc selects dynamic linking against the shared object.
- Neither shared object nor archive is found, and linking fails.
Because of these rules, the best way to select the static or dynamic version of a library for linking is having only that version found by gcc. This can be controlled to some extent by using or leaving out directories containing the library versions, when specifying the -Lpath options.
Additionally, because dynamic linking is the default, the only situation where linking must be explicitly specified is when a library with both versions present should be linked statically. There are two possible resolutions:
- Specifying the static libraries by file path instead of the -l option
- Using the -Wl option to pass options to the linker
Specifying the static libraries by file
Usually, gcc is instructed to link against the foo library with the -lfoo option. However, it is possible to specify the full path to file libfoo.a containing the library instead:
$ gcc ... path/to/libfoo.a ...
From the file extension .a, gcc will understand that this is a library to link with the program. However, specifying the full path to the library file is a less flexible method.
Using the -Wl option
The gcc option -Wl is a special option for passing options to the underlying linker. The syntax of this option differs from other gcc options: the -Wl option is followed by a comma-separated list of linker options, while other gcc options are separated by spaces.
The ld linker used by gcc offers the options -Bstatic and -Bdynamic to specify whether libraries following this option should be linked statically or dynamically. After passing -Bstatic and a library to the linker, the default dynamic linking behaviour must be restored manually for the following libraries to be linked dynamically with the -Bdynamic option.
To link a program so that the first library (libfirst.a) is linked statically and the second (libsecond.so) dynamically:
$ gcc ... -Wl,-Bstatic -lfirst -Wl,-Bdynamic -lsecond ...
gcc can be configured to use linkers other than the default ld.
2.3. Creating libraries with the GCC
Learn about the steps to creating libraries and the necessary concepts used by the Linux operating system for libraries.
2.3.1. Library naming conventions
System libraries require consistent naming. A library known as foo is expected to exist as file libfoo.so or libfoo.a. This convention is automatically understood by the linking input options of the GNU Compiler Collection (GCC), but not by the output options:
When linking against the library, the library can be specified only by its name foo with the -l option as -lfoo:

$ gcc ... -lfoo ...

When creating the library, the full file name libfoo.so or libfoo.a must be specified.
2.3.2. The Soname mechanism
To manage multiple compatible versions of a library, dynamically loaded libraries (shared objects) use the soname mechanism.
Prerequisites
- You must understand dynamic linking and libraries.
- You must understand the concept of ABI compatibility.
- You must understand library naming conventions.
- You must understand symbolic links.
Problem introduction

A dynamically loaded library (shared object) exists as an independent executable file. This makes it possible to update the library without updating the applications that depend on it. However, the following problems arise with this concept:

- Identification of the actual version of the library
- Need for multiple versions of the same library present
- Signalling ABI compatibility of each of the multiple versions
Soname mechanism

To resolve this, Linux uses a mechanism called soname.
A foo library version X.Y is ABI-compatible with other versions that have the same value of X in the version number. Minor changes preserving compatibility increase the number Y. Major changes that break compatibility increase the number X.
The actual foo library version X.Y exists as a file libfoo.so.x.y. Inside the library file, a soname is recorded with value libfoo.so.x to signal the compatibility.
When applications are built, the linker looks for the library by searching for the file libfoo.so. A symbolic link with this name must exist, pointing to the actual library file. The linker then reads the soname from the library file and records it into the application executable file. Finally, the linker creates the application that declares dependency on the library using the soname, not a name or a file name.
When the runtime dynamic linker links an application before running, it reads the soname from the application’s executable file. This soname is libfoo.so.x. A symbolic link with this name must exist, pointing to the actual library file. This allows loading the library regardless of the Y component of the version, because the soname does not change.
The Y component of the version number is not limited to just a single number. Additionally, some libraries encode their version in their name.
- Reading soname from a file
-
To display the soname of a library file
somelibrary:
$ objdump -p somelibrary | grep SONAME
Replace somelibrary with the actual file name of the library that you want to examine.
2.3.3. Creating dynamic libraries with the GCC
To build and install a dynamic library from the source code, you can use the GNU Compiler Collection (GCC). Dynamically linked libraries, also known as shared objects, help you conserve resources by reusing code and increase security by making library updates easier.
Prerequisites
- You must understand the soname mechanism.
- GCC must be installed on the system.
- You must have source code for a library.
Procedure
- Change to the directory with library sources.
Compile each source file to an object file with the position-independent code option -fPIC:
$ gcc ... -c -fPIC some_file.c ...
The object files have the same file names as the original source code files, but their extension is .o.
Link the shared library from the object files:
$ gcc -shared -o libfoo.so.x.y -Wl,-soname,libfoo.so.x some_file.o ...
In this example, the major version number is x and the minor version number is y.
Copy the libfoo.so.x.y file to an appropriate location, where the system’s dynamic linker can find it. On Red Hat Enterprise Linux, the directory for libraries is /usr/lib64:
# cp libfoo.so.x.y /usr/lib64
Note that you need root permissions to manipulate files in this directory.
Create the symlink structure for the soname mechanism:
# ln -s libfoo.so.x.y libfoo.so.x
# ln -s libfoo.so.x libfoo.so
Additional resources
- The Linux Documentation Project - Program Library HOWTO - 3. Shared Libraries
2.3.4. Creating static libraries
To create static libraries, bundle object files into an archive by using the ar utility. Use the resulting .a file for static linking and for distributing self-contained libraries without external dependencies.
Red Hat discourages the use of static linking for security reasons. Use static linking only when necessary, especially against libraries provided by Red Hat. See Section 2.2.2, “Static and dynamic linking” for more details.
Prerequisites
- GCC and binutils must be installed on the system.
- You must understand static and dynamic linking.
- Source file(s) with functions to be shared as a library are available.
Procedure
Create intermediate object files with GCC:
$ gcc -c source_file.c ...
Append more source files if required. The resulting object files share the file name but use the .o file name extension.
Turn the object files into a static library (archive) using the ar tool from the binutils package:
$ ar rcs libfoo.a source_file.o ...
The file libfoo.a is created.
Use the nm command to inspect the resulting archive:
$ nm libfoo.a
Copy the static library file to the appropriate directory.
When linking against the library, GCC automatically recognizes from the .a file name extension that the library is an archive for static linking:
$ gcc ... -lfoo ...
2.4. Managing More Code with Make
The GNU make utility, commonly abbreviated make, is a tool for controlling the generation of executables from source files. make automatically determines which parts of a complex program have changed and need to be recompiled. make uses configuration files called Makefiles to control the way programs are built.
2.4.1. GNU make and Makefile overview
To build executable programs and libraries from source code, and to record and repeat the steps when required, Red Hat Enterprise Linux provides the GNU make command.
GNU make
GNU make reads Makefiles, which contain the instructions describing the build process. A Makefile contains multiple rules that describe a way to satisfy a certain condition (target) with a specific action (recipe). Rules can depend hierarchically on other rules.
Running make without any options makes it look for a Makefile in the current directory and attempt to reach the default target. The actual Makefile file name can be one of Makefile, makefile, or GNUmakefile. The default target is determined from the Makefile contents.
Makefile details
Makefiles use a relatively simple syntax for defining variables and rules, which consist of a target and a recipe. The target specifies the output produced when a rule is executed. The lines with recipes must start with the TAB character.
Typically, a Makefile contains rules for compiling source files, a rule for linking the resulting object files, and a target that serves as the entry point at the top of the hierarchy.
Consider the following Makefile for building a C program which consists of a single file, hello.c.
all: hello
hello: hello.o
	gcc hello.o -o hello
hello.o: hello.c
	gcc -c hello.c -o hello.o
This example shows that to reach the target all, file hello is required. To get hello, one needs hello.o (linked by gcc), which in turn is created from hello.c (compiled by gcc).
The target all is the default target because it is the first target that does not start with a period (.). Running make without any arguments is then identical to running make all, when the current directory contains this Makefile.
Typical makefile
A more typical Makefile uses variables to generalize the steps and adds a clean target that removes everything except the source files.
CC=gcc
CFLAGS=-c -Wall
SOURCE=hello.c
OBJ=$(SOURCE:.c=.o)
EXE=hello
all: $(SOURCE) $(EXE)
$(EXE): $(OBJ)
	$(CC) $(OBJ) -o $@
%.o: %.c
	$(CC) $(CFLAGS) $< -o $@
clean:
	rm -rf $(OBJ) $(EXE)
Adding more source files to such a Makefile requires only adding them to the line where the SOURCE variable is defined.
2.4.2. Example: Building a C program using a Makefile
To automate C program builds and manage compilation dependencies efficiently, use a Makefile. A Makefile streamlines repetitive tasks and establishes a consistent and reliable build process.
Prerequisites
Procedure
Create a directory hellomake:
$ mkdir hellomake
Change to the hellomake directory:
$ cd hellomake
Create a file hello.c with the following contents:
#include <stdio.h>

int main(int argc, char *argv[])
{
    printf("Hello, World!\n");
    return 0;
}
Create a file Makefile with the following contents:
CC=gcc
CFLAGS=-c -Wall
SOURCE=hello.c
OBJ=$(SOURCE:.c=.o)
EXE=hello

all: $(SOURCE) $(EXE)

$(EXE): $(OBJ)
	$(CC) $(OBJ) -o $@

%.o: %.c
	$(CC) $(CFLAGS) $< -o $@

clean:
	rm -rf $(OBJ) $(EXE)
Important: The Makefile recipe lines must start with the tab character. When copying the text above from the documentation, the cut-and-paste process may paste spaces instead of tabs. If this happens, correct the issue manually.
Run make:
$ make
gcc -c -Wall hello.c -o hello.o
gcc hello.o -o hello
This creates an executable file hello.
Run the executable file hello:
$ ./hello
Hello, World!
Run the Makefile target clean to remove the created files:
$ make clean
rm -rf hello.o hello
2.4.3. Documentation resources for make
To find more information about make, see the following resources.
- Installed documentation
Use the man and info tools to view manual pages and information pages installed on your system.
- View the make manual page:
$ man make
- View the make information page:
$ info make
- Online documentation
- The GNU Make Manual hosted by the Free Software Foundation
2.5. Changes in toolchain since RHEL 7
The following sections list changes in the toolchain since the release of the described components in Red Hat Enterprise Linux 7. See also Release notes for Red Hat Enterprise Linux 8.0.
2.5.1. Changes in GCC in RHEL 8
In Red Hat Enterprise Linux 8, the GNU Compiler Collection (GCC) toolchain is based on the GCC 8.2 release series. Notable changes since Red Hat Enterprise Linux 7 include:
- Numerous general optimizations have been added, such as alias analysis, vectorizer improvements, identical code folding, inter-procedural analysis, store merging optimization pass, and others.
- The Address Sanitizer has been improved.
- The Leak Sanitizer for detection of memory leaks has been added.
- The Undefined Behavior Sanitizer for detection of undefined behavior has been added.
- Debug information can now be produced in the DWARF5 format. This capability is experimental.
- The source code coverage analysis tool GCOV has been extended with various improvements.
- Support for the OpenMP 4.5 specification has been added. Additionally, the offloading features of the OpenMP 4.0 specification are now supported by the C, C++, and Fortran compilers.
- New warnings and improved diagnostics have been added for static detection of certain likely programming errors.
- Source locations are now tracked as ranges rather than points, which allows much richer diagnostics. The compiler now offers “fix-it” hints, suggesting possible code modifications. A spell checker has been added to offer alternative names and make it easier to detect typos.
Security
GCC has been extended to provide tools to ensure additional hardening of the generated code.
For more details, see Section 2.5.2, “Security enhancements in the GCC in RHEL 8”.
Architecture and processor support
Improvements to architecture and processor support include:
- Multiple new architecture-specific options for the Intel AVX-512 architecture, many of its microarchitectures, and Intel Software Guard Extensions (SGX) have been added.
- Code generation can now target the 64-bit ARM architecture LSE extensions, ARMv8.2-A 16-bit Floating Point Extensions (FPE), and ARMv8.2-A, ARMv8.3-A, and ARMv8.4-A architecture versions.
- Handling of the -march=native option on the ARM and 64-bit ARM architectures has been fixed.
- Support for the z13 and z14 processors of the 64-bit IBM Z architecture has been added.
Languages and standards
Notable changes related to languages and standards include:
- The default standard used when compiling code in the C++ language has changed to C++14 with GNU extensions.
- The C++ runtime library now supports the C++11 and C++14 standards.
- The C++ compiler now implements the C++14 standard with many new features such as variable templates, aggregates with non-static data member initializers, the extended constexpr specifier, sized deallocation functions, generic lambdas, variable-length arrays, digit separators, and others.
- Support for the C language standard C11 has been improved: ISO C11 atomics, generic selections, and thread-local storage are now available.
- The new __auto_type GNU C extension provides a subset of the functionality of the C++11 auto keyword in the C language.
- The _FloatN and _FloatNx type names specified by the ISO/IEC TS 18661-3:2015 standard are now recognized by the C front end.
- The default standard used when compiling code in the C language has changed to C17 with GNU extensions. This has the same effect as using the -std=gnu17 option. Previously, the default was C89 with GNU extensions.
- GCC can now experimentally compile code using the C++17 language standard and certain features from the C++20 standard.
- Passing an empty class as an argument now takes up no space on the Intel 64 and AMD64 architectures, as required by the platform ABI. Passing or returning a class with only deleted copy and move constructors now uses the same calling convention as a class with a non-trivial copy or move constructor.
- The value returned by the C++11 alignof operator has been corrected to match the C _Alignof operator and return the minimum alignment. To find the preferred alignment, use the GNU extension __alignof__.
- The major version of the libgfortran library for Fortran language code has been changed to 5.
- Support for the Ada (GNAT), GCC Go, and Objective C/C++ languages has been removed. Use the Go Toolset for Go code development.
2.5.2. Security enhancements in the GCC in RHEL 8
The following changes in GCC are related to security and have been added since the release of Red Hat Enterprise Linux 7.0.
- New warnings
- These warning options have been added:
| Option | Displays warnings for |
|---|---|
| -Wstringop-truncation | Calls to bounded string manipulation functions such as strncat, strncpy, and stpncpy that might truncate the copied string or leave the destination unchanged. |
| -Wclass-memaccess | Objects of non-trivial class types manipulated in potentially unsafe ways by raw memory functions such as memcpy or realloc. The warning helps detect calls that bypass user-defined constructors or copy-assignment operators, corrupt virtual table pointers, data members of const-qualified types or references, or member pointers. The warning also detects calls that would bypass access controls to data members. |
| -Wmisleading-indentation | Places where the indentation of the code gives a misleading idea of the block structure of the code to a human reader. |
| -Walloc-size-larger-than=size | Calls to memory allocation functions where the amount of memory to allocate exceeds size. Works also with functions where the allocation is specified by multiplying two parameters and with any functions decorated with attribute alloc_size. |
| -Walloc-zero | Calls to memory allocation functions that attempt to allocate a zero amount of memory. Works also with functions where the allocation is specified by multiplying two parameters and with any functions decorated with attribute alloc_size. |
| -Walloca | All calls to the alloca function. |
| -Walloca-larger-than=size | Calls to the alloca function where the requested amount of memory exceeds size. |
| -Wvla-larger-than=size | Definitions of Variable Length Arrays (VLA) that can either exceed the specified size or whose bound is not known to be sufficiently constrained. |
| -Wformat-overflow | Both certain and likely buffer overflow in calls to the sprintf family of formatted output functions. |
| -Wformat-truncation | Both certain and likely output truncation in calls to the snprintf family of formatted output functions. |
| -Wstringop-overflow | Buffer overflow in calls to string handling functions such as memcpy and strcpy. |
- Warning improvements
These GCC warnings have been improved:
- The -Warray-bounds option has been improved to detect more instances of out-of-bounds array indices and pointer offsets. For example, negative or excessive indices into flexible array members and string literals are detected.
- The -Wrestrict option introduced in GCC 7 has been enhanced to detect many more instances of overlapping accesses to objects via restrict-qualified arguments to standard memory and string manipulation functions such as memcpy and strcpy.
- The -Wnonnull option has been enhanced to detect a broader set of cases of passing null pointers to functions that expect a non-null argument (decorated with attribute nonnull).
- New UndefinedBehaviorSanitizer
- A new runtime sanitizer for detecting undefined behavior called UndefinedBehaviorSanitizer has been added. The following options are noteworthy:
| Option | Check |
|---|---|
| -fsanitize=float-divide-by-zero | Detect floating point division by zero. |
| -fsanitize=float-cast-overflow | Check that the result of floating point type to integer conversions does not overflow. |
| -fsanitize=bounds | Enable instrumentation of array bounds and detect out-of-bounds accesses. |
| -fsanitize=alignment | Enable alignment checking and detect various misaligned objects. |
| -fsanitize=object-size | Enable object size checking and detect various out-of-bounds accesses. |
| -fsanitize=vptr | Enable checking of C++ member function calls, member accesses, and some conversions between pointers to base and derived classes. Additionally, detect when referenced objects do not have the correct dynamic type. |
| -fsanitize=bounds-strict | Enable strict checking of array bounds. This enables -fsanitize=bounds and instrumentation of flexible array member-like arrays. |
| -fsanitize=signed-integer-overflow | Diagnose arithmetic overflows even on arithmetic operations with generic vectors. |
| -fsanitize=builtin | Diagnose at run time invalid arguments to builtin functions such as __builtin_clz. |
| -fsanitize=pointer-overflow | Perform cheap runtime tests for pointer wrapping. |
- New options for AddressSanitizer
- These options have been added to AddressSanitizer:
| Option | Check |
|---|---|
| -fsanitize=pointer-compare | Warn about comparison of pointers that point to a different memory object. |
| -fsanitize=pointer-subtract | Warn about subtraction of pointers that point to a different memory object. |
| -fsanitize-address-use-after-scope | Sanitize variables whose address is taken and used after a scope where the variable is defined. |
- Other sanitizers and instrumentation
- The option -fstack-clash-protection has been added to insert probes when stack space is allocated statically or dynamically, to reliably detect stack overflows and thus mitigate the attack vector that relies on jumping over a stack guard page provided by the operating system.
- A new option -fcf-protection=[full|branch|return|none] has been added to perform code instrumentation and increase program security by checking that target addresses of control-flow transfer instructions (such as indirect function call, function return, indirect jump) are valid.
2.5.3. Compatibility-breaking changes in the GCC in RHEL 8
Certain changes in the GNU Compiler Collection (GCC) between RHEL 7 and RHEL 8 break compatibility, such as C++ ABI changes requiring application rebuilds and removal of language support requiring alternative toolchains.
C++ ABI change in std::string and std::list
The Application Binary Interface (ABI) of the std::string and std::list classes from the libstdc++ library changed between RHEL 7 (GCC 4.8) and RHEL 8 (GCC 8) to conform to the C++11 standard. The libstdc++ library supports both the old and new ABI, but some other C++ system libraries do not. As a consequence, applications that dynamically link against these libraries will need to be rebuilt. This affects all C++ standard modes, including C++98. It also affects applications built with Red Hat Developer Toolset compilers for RHEL 7, which kept the old ABI to maintain compatibility with the system libraries.
GCC no longer builds Ada, Go, and Objective C/C++ code
Capability for building code in the Ada (GNAT), GCC Go, and Objective C/C++ languages has been removed from the GCC compiler.
To build Go code, use the Go Toolset instead.
Chapter 3. Debugging Applications
Debugging applications is a broad topic. This part provides a developer with the most common techniques for debugging in multiple situations.
3.1. Enabling Debugging with Debugging Information
To debug applications and libraries, debugging information is required. The following sections describe how to obtain this information.
3.1.1. Debugging information
Reliable debugging requires connecting binary code back to source code. Debugging information provides this link, essential for inspecting variables and execution flow. The GNU Compiler Collection (GCC) generates this data in the DWARF format within ELF files, which tools like the GNU Debugger (GDB) use to analyze program behavior.
Red Hat Enterprise Linux uses the ELF format for executable binaries, shared libraries, and debuginfo files. Within these ELF files, the DWARF format is used to hold the debug information.
To display DWARF information stored within an ELF file, run the readelf -w file command.
STABS is an older, less capable format, occasionally used with UNIX. Its use is discouraged by Red Hat. GCC and GDB provide STABS production and consumption on a best effort basis only. Some other tools such as Valgrind and elfutils do not work with STABS.
3.1.2. Enabling debugging of C and C++ applications with the GCC
To debug C and C++ applications effectively, generate debugging information during compilation. Use GCC’s -g option to create this data. Debuggers use this data to map executable code to source lines for inspecting variables and logic.
Prerequisites
- You have the gcc package installed.
Procedure
Compile and link your code with the -g option to generate debugging information:
$ gcc ... -g ...
Optional: Set the optimization level to -Og:
$ gcc ... -g -Og ...
Compiler optimizations can make executable code hard to relate to the source code. The -Og option optimizes the code without interfering with debugging. However, be aware that changing optimization levels can alter the program’s behavior.
Optional: Use -g for moderate debugging information, or -g3 to include macro definitions:
$ gcc ... -g3 ...
Verification
Test the code by using the -fcompare-debug GCC option:
$ gcc -fcompare-debug ...
This option tests code compiled with and without debug information. If the resulting binaries are identical, the executable code is not affected by debugging options. Note that using the -fcompare-debug option significantly increases compilation time.
3.1.3. Debuginfo and debugsource packages
The debuginfo and debugsource packages contain debugging information and source code for programs and libraries. To debug Red Hat Enterprise Linux applications, install these packages from additional repositories.
Debugging information package types
There are two types of packages available for debugging:
- Debuginfo packages
The debuginfo packages provide debugging information needed to provide human-readable names for binary code features. These packages contain .debug files, which contain DWARF debugging information. These files are installed to the /usr/lib/debug directory.
- Debugsource packages
The debugsource packages contain the source files used for compiling the binary code. With both debuginfo and debugsource packages installed, debuggers such as GDB or LLDB can relate the execution of binary code to the source code. The source code files are installed to the /usr/src/debug directory.
Differences from RHEL 7
In Red Hat Enterprise Linux 7, the debuginfo packages contained both kinds of information. Red Hat Enterprise Linux 8 splits the source code data needed for debugging from the debuginfo packages into separate debugsource packages.
Package names
A debuginfo or debugsource package provides debugging information valid only for a binary package with the same name, version, release, and architecture:
- Binary package: packagename-version-release.architecture.rpm
- Debuginfo package: packagename-debuginfo-version-release.architecture.rpm
- Debugsource package: packagename-debugsource-version-release.architecture.rpm
3.1.4. Getting debuginfo packages for an application or library using GDB
To obtain the necessary debuginfo packages for troubleshooting installed applications or libraries, use the GNU Debugger (GDB). It automatically detects missing symbols and identifies the specific packages needed. Follow GDB’s recommendations to install these packages and enable full debugging capabilities.
Prerequisites
- The application or library you want to debug must be installed on the system.
- GDB and the debuginfo-install tool must be installed on the system.
- Repositories providing debuginfo and debugsource packages must be configured and enabled on the system. For details, see Enabling debug and source repositories.
Procedure
Start GDB attached to the application or library you want to debug. GDB automatically recognizes missing debugging information and suggests a command to run:
$ gdb -q /bin/ls
Reading symbols from /bin/ls...Reading symbols from .gnu_debugdata for /usr/bin/ls...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Missing separate debuginfos, use: dnf debuginfo-install coreutils-8.30-6.el8.x86_64
(gdb)
Exit GDB: type q and confirm with Enter:
(gdb) q
Run the command suggested by GDB to install the required debuginfo packages:
# dnf debuginfo-install coreutils-8.30-6.el8.x86_64
The dnf package management tool provides a summary of the changes, asks for confirmation, and, once you confirm, downloads and installs all the necessary files.
If GDB is not able to suggest the debuginfo package, follow the procedure described in Section 3.1.5, “Getting debuginfo packages for an application or library manually”.
3.1.5. Getting debuginfo packages for an application or library manually
To manually determine which debuginfo packages you need to install, locate the executable file and find the package that installs it.
Red Hat recommends that you use GDB to determine the packages for installation. Use this manual procedure only if GDB is not able to suggest the package to install.
Prerequisites
- The application or library must be installed on the system.
- The application or library was installed from a package.
- The debuginfo-install tool must be available on the system.
- Channels providing the debuginfo packages must be configured and enabled on the system.
Procedure
Find the executable file of the application or library.
Use the which command to find the application file:
$ which less
/usr/bin/less
Use the locate command to find the library file:
$ locate libz | grep so
/usr/lib64/libz.so.1
/usr/lib64/libz.so.1.2.11
If the original reasons for debugging include error messages, pick the result where the library has the same additional numbers in its file name as those mentioned in the error messages. If in doubt, try following the rest of the procedure with the result where the library file name includes no additional numbers.
Note: The locate command is provided by the mlocate package. To install it and enable its use:
Install the mlocate package:
# yum install mlocate
Update the database:
# updatedb
Search for the name and version of the package that provided the file:
$ rpm -qf /usr/lib64/libz.so.1.2.11
zlib-1.2.11-10.el8.x86_64
The output provides details for the installed package in the name-version-release.architecture format.
ImportantIf this step does not produce any results, it is not possible to determine which package provided the binary file. There are several possible cases:
- The file is installed from a package which is not known to package management tools in their current configuration.
- The file is installed from a locally downloaded and manually installed package. Determining a suitable debuginfo package automatically is impossible in that case.
- Your package management tools are misconfigured.
- The file is not installed from any package. In such a case, no matching debuginfo package exists.
Because further steps depend on this one, you must resolve this situation or abort this procedure. Describing the exact troubleshooting steps is beyond the scope of this procedure.
Install the debuginfo packages by using the debuginfo-install utility. In the command, use the package name and other details you determined during the previous step:
# debuginfo-install zlib-1.2.11-10.el8.x86_64
3.2. Inspecting Application Internal State with GDB
Use the GNU Debugger (GDB) to diagnose application issues, control program execution, and inspect its internal state.
3.2.1. GNU debugger (GDB)
Use the GNU Debugger (GDB) to inspect program execution and post-crash states. Also, you can analyze internal data and control execution flow when tracking down runtime errors. This command-line tool shows the detailed application state needed to identify and fix bugs in complex applications.
GDB capabilities
A single GDB session can debug the following types of programs:
- Multithreaded and forking programs
- Multiple programs at once
- Programs on remote machines or in containers, with the gdbserver utility connected over a TCP/IP network connection
Debugging requirements
To debug any executable code, GDB requires debugging information for that particular code:
- For programs developed by you, you can create the debugging information while building the code.
- For system programs installed from packages, you must install their debuginfo packages.
3.2.2. Attaching GDB to a process
To examine a process, GDB must be attached to the process.
Prerequisites
- GDB must be installed on the system.
- Starting a program with GDB
When the program is not running as a process, start it with GDB:
$ gdb program
Replace program with a file name or path to the program.
GDB sets up to start execution of the program. You can set up breakpoints and the GDB environment before beginning the execution of the process with the run command.
- Attaching GDB to an already-running process
To attach GDB to a program already running as a process:
Find the process ID (pid) with the ps command:
$ ps -C program -o pid h
Replace program with a file name or path to the program.
Attach GDB to this process:
$ gdb -p pid
Replace pid with an actual process ID number from the ps output.
- Attaching an already-running GDB to an already-running process
To attach an already running GDB to an already running program:
Use the shell GDB command to run the ps command and find the program’s process ID (pid):
(gdb) shell ps -C program -o pid h
Replace program with a file name or path to the program.
Use the attach command to attach GDB to the program:
(gdb) attach pid
Replace pid with an actual process ID number from the ps output.
In some cases, GDB might not be able to find the corresponding executable file. Use the file command to specify the path:
(gdb) file path/to/program
3.2.3. Controlling program execution with GDB
When the GDB debugger has been attached to a program, you can use several commands to control the execution of the program. With these commands, you can step through code, set breakpoints, and control the program flow during debugging sessions.
Prerequisites
You must have the required debugging information available:
- The program is compiled and built with debugging information, or
- The relevant debuginfo packages are installed
- GDB must be attached to the program to be debugged
GDB commands to step through the code
r (run)
Start the execution of the program. If run is executed with any arguments, those arguments are passed on to the executable as if the program had been started normally. Users normally issue this command after setting breakpoints.
start
Start the execution of the program, but stop at the beginning of the program’s main function. If start is executed with any arguments, those arguments are passed on to the executable as if the program had been started normally.
c (continue)
Continue the execution of the program from the current state. The execution of the program will continue until one of the following becomes true:
- A breakpoint is reached.
- A specified condition is satisfied.
- A signal is received by the program.
- An error occurs.
- The program terminates.
n (next)
Continue the execution of the program from the current state, until the next line of code in the current source file is reached. The execution of the program will continue until one of the following becomes true:
- A breakpoint is reached.
- A specified condition is satisfied.
- A signal is received by the program.
- An error occurs.
- The program terminates.
s (step)
The step command also halts execution at each sequential line of code in the current source file. However, if the execution is currently stopped at a source line containing a function call, GDB stops the execution after entering the function call (rather than executing it).
until location
Continue the execution until the code location specified by the location option is reached.
fini (finish)
Resume the execution of the program and halt when execution returns from a function. The execution of the program will continue until one of the following becomes true:
- A breakpoint is reached.
- A specified condition is satisfied.
- A signal is received by the program.
- An error occurs.
- The program terminates.
q (quit)
Terminate the execution and exit GDB.
3.2.4. Showing program internal values with GDB
Displaying the values of a program’s internal variables is important for understanding what the program is doing. The GNU Debugger (GDB) offers multiple commands that you can use to inspect the internal variables. The following are the most useful of these commands:
p (print)
Display the value of the argument given. Usually, the argument is the name of a variable of any complexity, from a simple single value to a structure. An argument can also be an expression valid in the current language, including the use of program variables and library functions, or functions defined in the program being tested.
It is possible to extend GDB with pretty-printer Python or Guile scripts for customized display of data structures (such as classes, structs) using the print command.
bt (backtrace)
Display the chain of function calls used to reach the current execution point, or the chain of functions used up until execution was terminated. This is useful for investigating serious bugs (such as segmentation faults) with elusive causes.
Adding the full option to the backtrace command displays local variables, too.
It is possible to extend GDB with frame filter Python scripts for customized display of data displayed using the bt and info frame commands. The term frame refers to the data associated with a single function call.
info
The info command is a generic command to provide information about various items. It takes an option specifying the item to describe.
- The info args command displays the arguments of the function call that is the currently selected frame.
- The info locals command displays local variables in the currently selected frame.
For a list of the possible items, run the command help info in a GDB session:
(gdb) help info
l (list)
Show the line in the source code where the program stopped. This command is available only when the program execution is stopped. While not strictly a command to show internal state, list helps the user understand what changes to the internal state will happen in the next step of the program’s execution.
3.2.5. Using GDB breakpoints to stop execution at defined code locations
During a debugging session, you often need to investigate only specific sections of code. Breakpoints are markers that instruct GDB to stop the execution of a program at a defined location. Since breakpoints are typically associated with lines of source code, placing them requires you to specify the correct source file and line number.
To place a breakpoint:
Specify the name of the source code file and the line in that file:
(gdb) br file:line
When file is not present, the name of the source file at the current point of execution is used:
(gdb) br line
Alternatively, use a function name to put the breakpoint on its start:
(gdb) br function_name
A program might encounter an error after a certain number of iterations of a task. To specify an additional condition to halt execution:
(gdb) br file:line if condition
Replace condition with a condition in the C or C++ language. The meaning of file and line is the same as above.
To inspect the status of all breakpoints and watchpoints:
(gdb) info brTo remove a breakpoint by using its number as displayed in the output of
info br:(gdb) delete numberTo remove a breakpoint at a given location:
(gdb) clear file:line
3.2.6. GDB watchpoints for stopping execution on data access and changes
GDB watchpoints pause execution when data changes or is accessed. Use them to debug unexpected variable corruption when the cause is unknown. Setting a watchpoint stops the program at the exact moment of access to inspect the state and find the root cause.
Using watchpoints in GDB
Watchpoints are markers which tell GDB to stop the execution of a program. Watchpoints are associated with data: placing a watchpoint requires specifying an expression that describes a variable, multiple variables, or a memory address.
To place a watchpoint for data change (write):
(gdb) watch expression
Replace expression with an expression that describes what you want to watch. For variables, expression is equal to the name of the variable.
To place a watchpoint for data access (read):
(gdb) rwatch expression
To place a watchpoint for any data access (both read and write):
(gdb) awatch expression
To inspect the status of all watchpoints and breakpoints:
(gdb) info br
To remove a watchpoint:
(gdb) delete num
Replace the num option with the number reported by the info br command.
3.2.7. GDB commands and settings for forked and threaded programs
Some programs use forking or threads to achieve parallel code execution. To debug multiple simultaneous execution paths, you can use a variety of commands based on your use case.
Prerequisites
You must understand the concepts of process forking and threads.
- Debugging forked programs with GDB
- Forking is a situation in which a program (parent) creates an independent copy of itself (child). Use the following settings and commands to affect what GDB does when a fork occurs:
The follow-fork-mode setting controls whether GDB follows the parent or the child after the fork.
set follow-fork-mode parent
After a fork, debug the parent process. This is the default.
set follow-fork-mode child
After a fork, debug the child process.
show follow-fork-mode
Display the current setting of follow-fork-mode.
The set detach-on-fork setting controls whether GDB keeps control of the other (not followed) process or leaves it to run.
set detach-on-fork on
The process which is not followed (depending on the value of follow-fork-mode) is detached and runs independently. This is the default.
set detach-on-fork off
GDB keeps control of both processes. The process which is followed (depending on the value of follow-fork-mode) is debugged as usual, while the other is suspended.
show detach-on-fork
Display the current setting of detach-on-fork.
- Debugging threaded programs with GDB
GDB has the ability to debug individual threads, and to manipulate and examine them independently. To make GDB stop only the thread that is examined, use the commands set non-stop on and set target-async on. You can add these commands to the .gdbinit file. After that functionality is turned on, GDB is ready to conduct thread debugging.
GDB uses a concept of current thread. By default, commands apply to the current thread only.
info threads
Display a list of threads with their id and gid numbers, indicating the current thread.
thread id
Set the thread with the specified id as the current thread.
thread apply ids command
Apply the command command to all threads listed by ids. The ids option is a space-separated list of thread ids. A special value all applies the command to all threads.
break location thread id if condition
Set a breakpoint at a certain location with a certain condition only for the thread number id.
watch expression thread id
Set a watchpoint defined by expression only for the thread number id.
command &
Execute command command and return immediately to the gdb prompt (gdb), continuing any code execution in the background.
interrupt
Halt execution in the background.
3.3. Recording Application Interactions
The executable code of applications interacts with the code of the operating system and shared libraries. Recording an activity log of these interactions can provide enough insight into the application’s behavior without debugging the actual application code. Alternatively, analyzing an application’s interactions can help pinpoint the conditions in which a bug manifests.
3.3.1. Tools for recording application interactions
To record application interactions, you can use several tools available in RHEL. For system calls, use strace; for library calls, use ltrace; and for advanced probing, SystemTap. Select the appropriate tool to log specific runtime behaviors and diagnose integration issues.
- strace
The strace tool primarily enables logging of system calls (kernel functions) used by an application.
- The strace tool can provide a detailed output about calls, because strace interprets parameters and results with knowledge of the underlying kernel code. Numbers are turned into the corresponding constant names, bitwise combined flags are expanded to flag lists, pointers to character arrays are dereferenced to provide the actual string, and more. Support for more recent kernel features may be lacking.
-
- The use of strace does not require any particular setup except for setting up the log filter.
- Tracing the application code with strace results in significant slowdown of the application’s execution. As a result, strace is not suitable for many production deployments. As an alternative, consider using ltrace or SystemTap.
- The version of strace available in Red Hat Developer Toolset can also perform system call tampering. This capability is useful for debugging.
- ltrace
The ltrace tool enables logging of an application’s user space calls into shared objects (dynamic libraries).
- The ltrace tool enables tracing calls to any library.
- You can filter the traced calls to reduce the amount of captured data.
- The use of ltrace does not require any particular setup except for setting up the log filter.
- The ltrace tool is lightweight and fast, offering an alternative to strace: it is possible to trace the corresponding interfaces in libraries such as glibc with ltrace instead of tracing kernel functions with strace.
- Unlike strace, ltrace does not handle a known set of calls, so it does not attempt to explain the values passed to library functions. The ltrace output contains only raw numbers and pointers. Interpreting ltrace output requires consulting the actual interface declarations of the libraries present in the output.
In Red Hat Enterprise Linux 8, a known issue prevents ltrace from tracing system executable files. This limitation does not apply to executable files built by users.
- SystemTap
SystemTap is an instrumentation platform for probing running processes and kernel activity on the Linux system. SystemTap uses its own scripting language for programming custom event handlers.
- Compared to using strace and ltrace, scripting the logging means more work in the initial setup phase. However, the scripting capabilities extend SystemTap’s usefulness beyond just producing logs.
- SystemTap works by creating and inserting a kernel module. The use of SystemTap is efficient and does not create a significant slowdown of the system or application execution on its own.
- SystemTap includes a set of usage examples.
- GDB
The GNU Debugger (GDB) is primarily meant for debugging, not logging. However, some of its features make it useful even in the scenario where an application’s interaction is the primary activity of interest.
- With GDB, it is possible to conveniently combine the capture of an interaction event with immediate debugging of the subsequent execution path.
- GDB is best suited for analyzing a response to infrequent or singular events, after the initial identification of the problematic situation by other tools. Using GDB in any scenario with frequent events becomes inefficient or even impossible.
Additional resources
3.3.2. Monitoring an application’s system calls with strace
To monitor the system (kernel) calls performed by an application, use the strace tool.
Prerequisites
Procedure
- Identify the system calls to monitor.
Start strace and attach it to the program.
If the program you want to monitor is not running, start strace and specify the program:
$ strace -fvttTyy -s 256 -e trace=call program
If the program is already running, find its process ID (pid) and attach strace to it:
Find the process ID:
$ ps -C program
(...)
Attach strace to the process:
$ strace -fvttTyy -s 256 -e trace=call -p pid
- Replace call with the system calls to be displayed. You can use the -e trace=call option multiple times. If left out, strace will display all system call types. See the strace(1) manual page for more information.
- If you do not want to trace any forked processes or threads, omit the -f option.
The strace tool displays the system calls made by the application and their details.
In most cases, an application and its libraries make a large number of calls and strace output displays immediately if no filter for system calls is set.
The strace tool exits when the program exits.
To terminate the monitoring before the traced program exits, press Ctrl+C.
- If strace started the program, the program terminates together with strace.
- If you attached strace to an already running program, strace detaches and the program continues to run.
Analyze the list of system calls done by the application.
- Problems with resource access or availability are present in the log as calls returning errors.
- Values passed to the system calls and patterns of call sequences provide insight into the causes of the application’s behavior.
- If the application crashes, the important information is probably at the end of log.
- If the output contains unnecessary information, you can construct a more precise filter for the system calls of interest and repeat the procedure.
It is advantageous to both see the output and save it to a file. Use the tee command to achieve this:
$ strace ... |& tee your_log_file.log
3.3.3. Monitoring application’s library function calls with ltrace
To monitor an application’s calls to functions available in libraries (shared objects), use the ltrace tool.
In Red Hat Enterprise Linux 8, a known issue prevents ltrace from tracing system executable files. This limitation does not apply to executable files built by users.
Prerequisites
Procedure
- Identify the libraries and functions of interest, if possible.
Start ltrace and attach it to the program.
If the program you want to monitor is not running, start ltrace and specify the program:
$ ltrace -f -l library -e function program
If the program is already running, find its process ID (pid) and attach ltrace to it:
Find the process ID:
$ ps -C program
(...)
Attach ltrace to the process:
$ ltrace -f -l library -e function -p pid
Use the -e, -f and -l options to filter the output:
- Supply the function names to be displayed as function. The -e function option can be used multiple times. If left out, ltrace displays calls to all functions.
- Instead of specifying functions, you can specify whole libraries with the -l library option. This option behaves similarly to the -e function option.
- If you do not want to trace any forked processes or threads, omit the -f option.
See the ltrace(1) manual page for more information.
ltrace displays the library calls made by the application.
In most cases, an application makes a large number of calls and ltrace output displays immediately if no filter is set.
ltrace exits when the program exits.
To terminate the monitoring before the traced program exits, press Ctrl+C.
- If ltrace started the program, the program terminates together with ltrace.
- If you attached ltrace to an already running program, ltrace detaches and the program continues to run.
Analyze the list of library calls done by the application.
- If the application crashes, the important information is probably at the end of log.
- If the output contains unnecessary information, you can construct a more precise filter and repeat the procedure.
It is advantageous to both see the output and save it to a file. Use the tee command to achieve this:
$ ltrace ... |& tee your_log_file.log
3.3.4. Monitoring application’s system calls with SystemTap
To register custom event handlers for kernel events, use the SystemTap tool. SystemTap is more efficient than the strace tool, but requires more setup. Installing SystemTap also installs the strace.stp script, which provides an approximation of the strace functionality.
Procedure
Find the process ID (pid) of the process you want to monitor:
$ ps -aux
Run SystemTap with the strace.stp script:
# stap /usr/share/systemtap/examples/process/strace.stp -x pid
The value of pid is the process ID.
The script is compiled to a kernel module, which is then loaded. This introduces a slight delay between entering the command and getting the output.
- When the process performs a system call, the call name and its parameters are printed to the terminal.
- The script exits when the process terminates, or when you press Ctrl+C.
3.3.5. Using GDB to intercept application system calls
To stop program execution when the program performs specific system calls, use GDB catchpoints, then inspect the program state and system call parameters at those points.
Prerequisites
Procedure
Set the catchpoint:
(gdb) catch syscall syscall-name
The command catch syscall sets a special type of breakpoint that halts execution when the program performs a system call.
The syscall-name option specifies the name of the call. You can specify multiple catchpoints for various system calls. Leaving out the syscall-name option causes GDB to stop on any system call.
Start execution of the program.
If the program has not started execution, start it:
(gdb) r
If the program execution is halted, resume it:
(gdb) c
- GDB halts execution after the program performs any specified system call.
Additional resources
3.3.6. Using GDB to intercept handling of signals by applications
To stop the execution of a program under specific circumstances, you can use the GNU Debugger (GDB). To stop the execution when the program receives a signal from the operating system, use a GDB catchpoint.
Prerequisites
Procedure
Set the catchpoint:
(gdb) catch signal signal-type
The command catch signal sets a special type of breakpoint that halts execution when a signal is received by the program. The signal-type option specifies the type of the signal. Use the special value 'all' to catch all signals.
Let the program run.
If the program has not started execution, start it:
(gdb) r
If the program execution is halted, resume it:
(gdb) c
- GDB halts execution after the program receives any specified signal.
3.4. Debugging a Crashed Application
Sometimes, it is not possible to debug an application directly. In these situations, you can collect information about the application at the moment of its termination and analyze it afterwards.
3.4.1. Core dumps: what they are and how to use them
A core dump records parts of an application’s memory when the application stops. After an application fails, you can analyze the core dump, along with the executable and debuginfo, by using a debugger.
The Linux operating system kernel can record core dumps automatically, if this functionality is enabled. Alternatively, you can send a signal to any running application to generate a core dump regardless of its actual state.
Some limits might affect the ability to generate a core dump. To see the current limits:
$ ulimit -a
3.4.2. Recording application crashes with core dumps
To record application crashes, set up core dump saving and add information about the system.
Procedure
To enable core dumps, ensure that the /etc/systemd/system.conf file contains the following lines:
DumpCore=yes
DefaultLimitCORE=infinity
You can also add comments describing whether these settings were previously present, and what the previous values were. This will enable you to reverse these changes later, if needed. Comments are lines starting with the # character.
Changing the file requires administrator level access.
Apply the new configuration:
# systemctl daemon-reexec
Remove the limits for core dump sizes:
$ ulimit -c unlimited
To reverse this change, run the command with the value 0 instead of unlimited.
sospackage which provides thesosreportutility for collecting system information:# yum install sos-
When an application crashes, a core dump is generated and handled by
systemd-coredump. Create an SOS report to provide additional information about the system:
# sosreportThis creates a
.tararchive containing information about your system, such as copies of configuration files.Locate the core dump:
$ coredumpctl list executable-nameExport the core dump:
$ coredumpctl dump executable-name > /path/to/file-for-exportIf the application crashed multiple times, output of the first command lists more captured core dumps. In that case, construct for the second command a more precise query by using the other information. See the coredumpctl(1) manual page for details.
Transfer the core dump and the SOS report to the computer where the debugging will take place. Transfer the executable file, too, if it is known.
Important: When the executable file is not known, subsequent analysis of the core file identifies it.
- Optional: Remove the core dump and SOS report after transferring them, to free up disk space.
3.4.3. Inspecting application crash states with core dumps
To inspect the state of an application at the moment it terminated unexpectedly, use core dumps.
Prerequisites
- You must have a core dump file and sosreport from the system where the crash occurred.
- GDB and elfutils must be installed on your system.
Procedure
To identify the executable file where the crash occurred, run the eu-unstrip command with the core dump file:
$ eu-unstrip -n --core=./core.9814
0x400000+0x207000 2818b2009547f780a5639c904cded443e564973e@0x400284 /usr/bin/sleep /usr/lib/debug/bin/sleep.debug [exe]
0x7fff26fff000+0x1000 1e2a683b7d877576970e4275d41a6aaec280795e@0x7fff26fff340 . - linux-vdso.so.1
0x35e7e00000+0x3b6000 374add1ead31ccb449779bc7ee7877de3377e5ad@0x35e7e00280 /usr/lib64/libc-2.14.90.so /usr/lib/debug/lib64/libc-2.14.90.so.debug libc.so.6
0x35e7a00000+0x224000 3ed9e61c2b7e707ce244816335776afa2ad0307d@0x35e7a001d8 /usr/lib64/ld-2.14.90.so /usr/lib/debug/lib64/ld-2.14.90.so.debug ld-linux-x86-64.so.2
The output contains details for each module on a line, separated by spaces. The information is listed in this order:
- The memory address where the module was mapped
- The build-id of the module and where in the memory it was found
- The module’s executable file name, displayed as - when unknown, or as . when the module has not been loaded from a file
- The source of debugging information, displayed as a file name when available, as . when contained in the executable file itself, or as - when not present at all
- The shared library name (soname) or [exe] for the main module
In this example, the important details are the file name /usr/bin/sleep and the build-id 2818b2009547f780a5639c904cded443e564973e on the line containing the text [exe]. With this information, you can identify the executable file required for analyzing the core dump.
Get the executable file that crashed.
- If possible, copy it from the system where the crash occurred. Use the file name extracted from the core file.
You can also use an identical executable file on your system. Each executable file built on Red Hat Enterprise Linux contains a note with a unique build-id value. Determine the build-id of the relevant locally available executable files:
$ eu-readelf -n executable_file
Use this information to match the executable file on the remote system with your local copy. The build-id of the local file and the build-id listed in the core dump must match.
- Finally, if the application is installed from an RPM package, you can get the executable file from the package. Use the sosreport output to find the exact version of the package required.
- Get the shared libraries used by the executable file. Use the same steps as for the executable file.
- If the application is distributed as a package, load the executable file in GDB, to display hints for missing debuginfo packages. For more details, see Section 3.1.4, “Getting debuginfo packages for an application or library using GDB”.
To examine the core file in detail, load the executable file and core dump file with GDB:
$ gdb -e executable_file -c core_file
Further messages about missing files and debugging information help you identify what is missing for the debugging session. Return to the previous step if needed.
If the application’s debugging information is available as a file instead of as a package, load this file in GDB with the symbol-file command:
(gdb) symbol-file program.debug
Replace program.debug with the actual file name.
Note: It might not be necessary to install the debugging information for all executable files contained in the core dump. Most of these executable files are libraries used by the application code. These libraries might not directly contribute to the problem you are analyzing, and you do not need to include debugging information for them.
Use the GDB commands to inspect the state of the application at the moment it crashed. See Inspecting Application Internal State with GDB.
Note: When analyzing a core file, GDB is not attached to a running process. Commands for controlling execution have no effect.
Additional resources
- Debugging with GDB - 2.1.1 Choosing Files
- Debugging with GDB - 18.1 Commands to Specify Files
- Debugging with GDB - 18.3 Debugging Information in Separate Files
3.4.4. Creating and accessing a core dump with coredumpctl
To manage and analyze core dumps directly on the affected system, use coredumpctl. This tool simplifies finding, capturing, and inspecting crash data. Identify an unresponsive process, force a core dump, and verify its successful capture to diagnose application failures.
Prerequisites
The system must be configured to use systemd-coredump for core dump handling. To verify that this is the case:
$ sysctl kernel.core_pattern
The configuration is correct if the output starts with the following:
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump
Procedure
Find the PID of the hung process, based on a known part of the executable file name:
$ pgrep -a executable-name-fragment
This command will output a line in the form
PID command-line
Use the command-line value to verify that the PID belongs to the intended process.
For example:
$ pgrep -a bc
5459 bc
Send an abort signal to the process:
# kill -ABRT PID
Verify that the core has been captured by coredumpctl:
$ coredumpctl list PID
For example:
$ coredumpctl list 5459
TIME                          PID  UID  GID SIG COREFILE EXE
Thu 2019-11-07 15:14:46 CET  5459 1000 1000   6 present  /usr/bin/bc
Further examine or use the core file as needed.
You can specify the core dump by PID and other values. See the coredumpctl(1) manual page for further details.
To show details of the core file:
$ coredumpctl info PID
To load the core file in the GDB debugger:
$ coredumpctl debug PID
Depending on the availability of debugging information, GDB will suggest commands to run, such as:
Missing separate debuginfos, use: dnf debuginfo-install bc-1.07.1-5.el8.x86_64
For more details on this process, see Section 3.1.4, “Getting debuginfo packages for an application or library using GDB”.
To export the core file for further processing elsewhere:
$ coredumpctl dump PID > /path/to/file_for_export
Replace /path/to/file_for_export with the file where you want to put the core dump.
3.4.5. Dumping process memory with gcore
To capture the memory state of a running process without terminating it, use the gcore utility. This creates a core dump file for offline analysis. The result is a snapshot of the application’s memory that helps you investigate issues while the service remains available.
Prerequisites
Procedure
Find out the process ID (pid). Use tools such as ps, pgrep, and top:
$ ps -C some-program
Dump the memory of this process:
$ gcore -o filename pid
This creates a file filename and dumps the process memory in it. While the memory is being dumped, the execution of the process is halted.
- After the core dump is finished, the process resumes normal execution.
Create an SOS report to provide additional information about the system:
# sosreport
This creates a tar archive containing information about your system, such as copies of configuration files.
- Transfer the program’s executable file, core dump, and the SOS report to the computer where the debugging will take place.
- Optional: Remove the core dump and SOS report after transferring them, to free up disk space.
3.4.6. Dumping protected process memory with GDB
To dump protected process memory, configure the GNU Debugger (GDB) to ignore core dump filters. Capture memory regions flagged as non-dumpable, such as those conserving resources or holding sensitive data. Use the gcore command within GDB to generate the complete core file.
Prerequisites
Procedure
Set GDB to ignore the settings in the /proc/PID/coredump_filter file:
(gdb) set use-coredump-filter off
Set GDB to ignore the memory page flag VM_DONTDUMP:
(gdb) set dump-excluded-mappings on
Dump the memory:
(gdb) gcore core-file
Replace core-file with the name of the file where you want to dump the memory.
3.5. Compatibility-breaking changes in GDB
The version of the GNU Debugger (GDB) provided in Red Hat Enterprise Linux 8 contains several changes that break compatibility, especially for cases where the GDB output is read directly from the terminal. The following sections provide more details about these changes.
Parsing output of GDB is not recommended. Prefer scripts using the Python GDB API or the GDB Machine Interface (MI).
GDBserver now starts inferiors with shell
To enable expansion and variable substitution in inferior command line arguments, GDBserver now starts the inferior in a shell, same as GDB.
To disable using the shell:
- When using the target extended-remote GDB command, disable shell with the set startup-with-shell off command.
- When using the target remote GDB command, disable shell with the --no-startup-with-shell option of GDBserver.
Example 3.1. Example of shell expansion in remote GDB inferiors
This example shows how running the /bin/echo /* command through GDBserver differs on Red Hat Enterprise Linux versions 7 and 8:
On RHEL 7:
Start GDBserver:
$ gdbserver --multi :1234
Run GDB with the command:
$ gdb -batch -ex 'target extended-remote :1234' -ex 'set remote exec-file /bin/echo' -ex 'file /bin/echo' -ex 'run /*'
/*
On RHEL 8:
Start GDBserver:
$ gdbserver --multi :1234Run GDB with the command:
$ gdb -batch -ex 'target extended-remote :1234' -ex 'set remote exec-file /bin/echo' -ex 'file /bin/echo' -ex 'run /'*/bin /boot (...) /tmp /usr /var
gcj support removed
Support for debugging Java programs compiled with the GNU Compiler for Java (gcj) has been removed.
New syntax for symbol dumping maintenance commands
The syntax of the symbol dumping maintenance commands now includes options before file names. As a result, commands that worked with GDB in RHEL 7 do not work in RHEL 8.
As an example, the following command no longer stores symbols in a file, but produces an error message:
(gdb) maintenance print symbols /tmp/out main.c
The new syntax for the symbol dumping maintenance commands is:
maint print symbols [-pc address] [--] [filename]
maint print symbols [-objfile objfile] [-source source] [--] [filename]
maint print psymbols [-objfile objfile] [-pc address] [--] [filename]
maint print psymbols [-objfile objfile] [-source source] [--] [filename]
maint print msymbols [-objfile objfile] [--] [filename]
Thread numbers are no longer global
Previously, GDB used only global thread numbering. The numbering has been extended to be displayed per inferior in the form inferior_num.thread_num, such as 2.1. As a consequence, thread numbers in the $_thread convenience variable and in the InferiorThread.num Python attribute are no longer unique between inferiors.
GDB now stores a second thread ID per thread, called the global thread ID, which is the new equivalent of thread numbers in previous releases. To access the global thread number, use the $_gthread convenience variable and InferiorThread.global_num Python attribute.
For backwards compatibility, the Machine Interface (MI) thread IDs always contain the global IDs.
Example 3.2. Example of GDB thread number changes
On Red Hat Enterprise Linux 7:
Install debuginfo packages:
# debuginfo-install coreutils

Run GDB:

$ gdb -batch -ex 'file echo' -ex start -ex 'add-inferior' -ex 'inferior 2' -ex 'file echo' -ex start -ex 'info threads' -ex 'print $_thread' -ex 'inferior 1' -ex 'print $_thread'

The output is:

(...)
  Id   Target Id         Frame
* 2    process 203923 "echo" main (argc=1, argv=0x7fffffffdb88) at src/echo.c:109
  1    process 203914 "echo" main (argc=1, argv=0x7fffffffdb88) at src/echo.c:109
$1 = 2
(...)
$2 = 1
On Red Hat Enterprise Linux 8:
Install debuginfo packages:
# dnf debuginfo-install coreutils

Run GDB:

$ gdb -batch -ex 'file echo' -ex start -ex 'add-inferior' -ex 'inferior 2' -ex 'file echo' -ex start -ex 'info threads' -ex 'print $_thread' -ex 'inferior 1' -ex 'print $_thread'

The output is:

(...)
  Id   Target Id         Frame
  1.1  process 4106488 "echo" main (argc=1, argv=0x7fffffffce58) at ../src/echo.c:109
* 2.1  process 4106494 "echo" main (argc=1, argv=0x7fffffffce58) at ../src/echo.c:109
$1 = 1
(...)
$2 = 1
Memory for value contents can be limited
Previously, GDB did not limit the amount of memory allocated for value contents. As a consequence, debugging incorrect programs could cause GDB to allocate too much memory. The max-value-size setting has been added to enable limiting the amount of allocated memory. The default value of this limit is 64 KiB. As a result, GDB in Red Hat Enterprise Linux 8 does not display values that are too large; it reports that the value is too large instead.
As an example, printing a value defined as char s[128*1024]; produces different results:
- On Red Hat Enterprise Linux 7: $1 = 'A' <repeats 131072 times>
- On Red Hat Enterprise Linux 8: value requires 131072 bytes, which is more than max-value-size
Sun version of stabs format no longer supported
Support for the Sun version of the stabs debug file format has been removed. The stabs format produced by GCC in RHEL with the gcc -gstabs option is still supported by GDB.
Sysroot handling changes
The set sysroot path command specifies system root when searching for files needed for debugging. Directory names supplied to this command may now be prefixed with the string target: to make GDB read the shared libraries from the target system (both local and remote). The formerly available remote: prefix is now treated as target:. Additionally, the default system root value has changed from an empty string to target: for backward compatibility.
The specified system root is prepended to the file name of the main executable, when GDB starts processes remotely, or when it attaches to already running processes (both local and remote). This means that for remote processes, the default value target: makes GDB always try to load the debugging information from the remote system. To prevent this, run the set sysroot command before the target remote command so that local symbol files are found before the remote ones.
HISTSIZE no longer controls GDB command history size
Previously, GDB used the HISTSIZE environment variable to determine how long command history should be kept. GDB has been changed to use the GDBHISTSIZE environment variable instead. This variable is specific only to GDB. The possible values and their effects are:
- A positive number: use command history of this size.
- -1 or an empty string: keep the history of all commands.
- Non-numeric values: ignored.
Completion limiting added
The maximum number of candidates considered during completion can now be limited using the set max-completions command. To show the current limit, run the show max-completions command. The default value is 200. This limit prevents GDB from generating excessively large completion lists and becoming unresponsive.
As an example, the output after the input p <tab><tab> is:
- On RHEL 7: Display all 29863 possibilities? (y or n)
- On RHEL 8: Display all 200 possibilities? (y or n)
HP-UX XDB compatibility mode removed
The -xdb option for the HP-UX XDB compatibility mode has been removed from GDB.
Handling signals for threads
Previously, GDB could deliver a signal to the current thread instead of the thread for which the signal was actually sent. This bug has been fixed, and GDB now always passes the signal to the correct thread when resuming execution.
Additionally, the signal command now always correctly delivers the requested signal to the current thread. If the program is stopped for a signal and the user has switched threads, GDB asks for confirmation.
Breakpoint modes always-inserted off and auto merged
The breakpoint always-inserted setting has been changed. The auto value and the corresponding behavior have been removed. The default value is now off. Additionally, the off value now causes GDB to not remove breakpoints from the target until all threads stop.
remotebaud commands no longer supported
The set remotebaud and show remotebaud commands are no longer supported. Use the set serial baud and show serial baud commands instead.
3.6. Debugging applications in containers
To troubleshoot container applications, you can use various command-line tools.
This is not a complete list of command-line tools. The choice of tool for debugging a container application depends heavily on the container image and your use case.
For instance, the systemctl, journalctl, ip, netstat, ping, traceroute, perf, iostat tools may need root access because they interact with system-level resources such as networking, systemd services, or hardware performance counters, which are restricted in rootless containers for security reasons.
Rootless containers operate without requiring elevated privileges, running as non-root users within user namespaces to provide improved security and isolation from the host system. They offer limited interaction with the host, reduced attack surface, and enhanced security by mitigating the risk of privilege escalation vulnerabilities.
Rootful containers run with elevated privileges, typically as the root user, granting full access to system resources and capabilities. While rootful containers offer greater flexibility and control, they pose security risks due to their potential for privilege escalation and exposure of the host system to vulnerabilities.
For more information about rootful and rootless containers, see Setting up rootless containers, Upgrading to rootless containers, and Special considerations for rootless containers.
- Systemd and process management tools:
  - systemctl - Controls systemd services within containers, allowing start, stop, enable, and disable operations.
  - journalctl - Views logs generated by systemd services, aiding in troubleshooting container issues.
- Networking tools:
  - ip - Manages network interfaces, routing, and addresses within containers.
  - netstat - Displays network connections, routing tables, and interface statistics.
  - ping - Verifies network connectivity between containers or hosts.
  - traceroute - Identifies the path packets take to reach a destination, useful for diagnosing network issues.
- Process and performance tools:
  - ps - Lists currently running processes within containers.
  - top - Provides real-time insights into resource usage by processes within containers.
  - htop - Interactive process viewer for monitoring resource utilization.
  - perf - CPU performance profiling, tracing, and monitoring, aiding in pinpointing performance bottlenecks within the system or applications.
  - vmstat - Reports virtual memory statistics within containers, aiding in performance analysis.
  - iostat - Monitors input/output statistics for block devices within containers.
  - gdb (GNU Debugger) - A command-line debugger that helps in examining and debugging programs by allowing users to track and control their execution, inspect variables, and analyze memory and registers during runtime. For more information, see the Debugging applications within Red Hat OpenShift containers article.
  - strace - Intercepts and records system calls made by a program, aiding in troubleshooting by revealing interactions between the program and the operating system.
- Security and access control tools:
  - sudo - Enables executing commands with elevated privileges.
  - chroot - Changes the root directory for a command, helpful in testing or troubleshooting within a different root directory.
- Podman-specific tools:
  - podman logs - Batch-retrieves whatever logs are present for one or more containers at the time of execution.
  - podman inspect - Displays the low-level information on containers and images as identified by name or ID.
  - podman events - Monitors and prints events that occur in Podman. Each event includes a timestamp, a type, a status, a name (if applicable), and an image (if applicable). The default logging mechanism is journald.
  - podman run --health-cmd - Uses a health check to determine the health or readiness of the process running inside the container.
  - podman top - Displays the running processes of the container.
  - podman exec - Runs commands in or attaches to a running container, which is useful to get a better understanding of what is happening in the container.
  - podman export - Exports the container filesystem. When a container fails, it can be difficult to determine what happened; exporting the filesystem structure allows for checking other log files that may not be in the mounted volumes.
Chapter 4. Additional toolsets for development
GCC Toolset and related toolsets provide newer compilers and debuggers for C, C++, and Fortran. Use these topics to install, use, and run container images for development.
4.1. Using the GCC Toolset
4.1.1. What is the GCC Toolset
Red Hat Enterprise Linux 8 introduces the GCC Toolset, a compiler toolset that provides a variety of development and performance analysis tools. GCC Toolset is similar to Red Hat Developer Toolset.
GCC Toolset is available as an Application Stream in the form of a software collection in the AppStream repository. GCC Toolset is fully supported under Red Hat Enterprise Linux Subscription Level Agreements, is functionally complete, and is intended for production use. Applications and libraries provided by the GCC Toolset do not replace the Red Hat Enterprise Linux system versions, do not override them, and do not automatically become default or preferred choices. Using a framework called software collections, an additional set of developer tools is installed into the /opt/ directory and is explicitly enabled by the user on-demand by using the scl utility. Unless noted otherwise for specific tools or features, the GCC Toolset is available for all architectures supported by Red Hat Enterprise Linux.
For information about the length of support, see Red Hat Enterprise Linux Application Streams Life Cycle.
4.1.2. Installing the GCC Toolset
Installing the GCC Toolset on a system installs the main tools and all necessary dependencies. Note that some parts of the toolset are not installed by default and must be installed separately.
Procedure
To install the GCC Toolset version N:
# yum install gcc-toolset-N
4.1.3. Installing individual packages from the GCC Toolset
To install only certain tools from the GCC Toolset instead of the whole toolset, list the available packages and install the selected ones with the yum package management tool. Use selective installation to access packages not installed by default with the full toolset.
Procedure
List the packages available in the GCC Toolset version N:
$ yum list available gcc-toolset-N-\*

To install any of these packages:

# yum install package_name

Replace package_name with a space-separated list of packages to install. For example, to install the gcc-toolset-13-annobin-annocheck and gcc-toolset-13-binutils-devel packages:

# yum install gcc-toolset-13-annobin-annocheck gcc-toolset-13-binutils-devel
4.1.4. Uninstalling the GCC Toolset
To remove the GCC Toolset from your system, uninstall it by using the yum package management tool.
Procedure
To uninstall the GCC Toolset version N:
# yum remove gcc-toolset-N\*
4.1.5. Accessing the GCC Toolset
To access the GCC Toolset, you can run a specific tool using the scl utility, or start a shell session where the toolset versions override the system versions.
Procedure
To run a single tool from the GCC Toolset version N:

$ scl enable gcc-toolset-N tool

Replace tool with the command provided by the tool you want to run.
To run a shell session where tool versions from the GCC Toolset version N override system versions of these tools:
$ scl enable gcc-toolset-N bash
4.2. GCC Toolset 9
Learn about information specific to the GCC Toolset version 9 and the tools contained in this version.
4.2.1. Tools and versions provided by the GCC Toolset 9
GCC Toolset 9 provides the following tools and versions.
| Name | Version | Description |
|---|---|---|
| GCC | 9.2.1 | A portable compiler suite with support for C, C++, and Fortran. |
| GDB | 8.3 | A command-line debugger for programs written in C, C++, and Fortran. |
| Valgrind | 3.15.0 | An instrumentation framework and several tools to profile applications in order to detect memory errors, identify memory management problems, and report any use of improper arguments in system calls. |
| SystemTap | 4.1 | A tracing and probing tool to monitor the activities of the entire system without the need to instrument, recompile, install, and reboot. |
| Dyninst | 10.1.0 | A library for instrumenting and working with user-space executables during their execution. |
| binutils | 2.32 | A collection of binary tools and other utilities to inspect and manipulate object files and binaries. |
| elfutils | 0.176 | A collection of binary tools and other utilities to inspect and manipulate ELF files. |
| dwz | 0.12 | A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. |
| make | 4.2.1 | A dependency-tracking build automation tool. |
| strace | 5.1 | A debugging tool to monitor system calls that a program uses and signals it receives. |
| ltrace | 0.7.91 | A debugging tool to display calls to dynamic libraries that a program makes. It can also monitor system calls executed by programs. |
| annobin | 9.08 | A build security checking tool. |
4.2.2. C++ compatibility in the GCC Toolset 9
The GNU Compiler Collection (GCC) in the GCC Toolset 9 supports multiple C++ language standards.
The compatibility information presented here applies only to the GCC from the GCC Toolset 9.
- C++14 - This is the default language standard setting for the GCC Toolset 9, with GNU extensions, equivalent to explicitly using the option -std=gnu++14. Using the C++14 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 6 or later.
- C++11 - This language standard is available in the GCC Toolset 9. Using the C++11 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 5 or later.
- C++98 - This language standard is available in the GCC Toolset 9. Binaries, shared libraries and objects built using this standard can be freely mixed regardless of being built with GCC from the GCC Toolset, Red Hat Developer Toolset, and RHEL 5, 6, 7 and 8.
- C++17, C++2a - These language standards are available in the GCC Toolset 9 only as experimental, unstable, and unsupported capabilities. Additionally, compatibility of objects, binary files, and libraries built using these standards cannot be guaranteed.
All of the language standards are available both in the standard-compliant variant and with GNU extensions.
When mixing objects built by using the GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), the GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by the GCC Toolset are resolved at link time.
4.2.3. Specifics of GCC in the GCC Toolset 9
Certain behaviors and requirements of the GCC Toolset 9 differ from the base Red Hat Enterprise Linux GNU Compiler Collection (GCC). These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
Static linking of libraries
Certain more recent library features are statically linked into applications built with the GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum.
Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons.
Specify libraries after object files when linking
In the GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-9 'gcc -lsomelib objfile.o'
Using a library from the GCC Toolset in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files:
$ scl enable gcc-toolset-9 'gcc objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC.
4.2.4. Specifics of binutils in the GCC Toolset 9
Certain behaviors and requirements of binutils in the GCC Toolset 9 differ from the base Red Hat Enterprise Linux binutils. These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
Static linking of libraries
Certain more recent library features are statically linked into applications built with the GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum.
Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons.
Specify libraries after object files when linking
In the GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-9 'ld -lsomelib objfile.o'
Using a library from the GCC Toolset in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files:
$ scl enable gcc-toolset-9 'ld objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils.
4.3. GCC Toolset 10
Learn about information specific to the GCC Toolset version 10 and the tools contained in this version.
4.3.1. Tools and versions provided by the GCC Toolset 10
GCC Toolset 10 provides the following tools and versions.
| Name | Version | Description |
|---|---|---|
| GCC | 10.2.1 | A portable compiler suite with support for C, C++, and Fortran. |
| GDB | 9.2 | A command-line debugger for programs written in C, C++, and Fortran. |
| Valgrind | 3.16.0 | An instrumentation framework and several tools to profile applications in order to detect memory errors, identify memory management problems, and report any use of improper arguments in system calls. |
| SystemTap | 4.4 | A tracing and probing tool to monitor the activities of the entire system without the need to instrument, recompile, install, and reboot. |
| Dyninst | 10.2.1 | A library for instrumenting and working with user-space executables during their execution. |
| binutils | 2.35 | A collection of binary tools and other utilities to inspect and manipulate object files and binaries. |
| elfutils | 0.182 | A collection of binary tools and other utilities to inspect and manipulate ELF files. |
| dwz | 0.12 | A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. |
| make | 4.2.1 | A dependency-tracking build automation tool. |
| strace | 5.7 | A debugging tool to monitor system calls that a program uses and signals it receives. |
| ltrace | 0.7.91 | A debugging tool to display calls to dynamic libraries that a program makes. It can also monitor system calls executed by programs. |
| annobin | 9.29 | A build security checking tool. |
4.3.2. C++ compatibility in the GCC Toolset 10
The GNU Compiler Collection (GCC) in the GCC Toolset can use the following C++ standards:
The compatibility information presented here applies only to the GCC from the GCC Toolset 10.
- C++14 - This is the default language standard setting for the GCC Toolset 10, with GNU extensions, equivalent to explicitly using the option -std=gnu++14. Using the C++14 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 6 or later.
- C++11 - This language standard is available in the GCC Toolset 10. Using the C++11 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 5 or later.
- C++98 - This language standard is available in the GCC Toolset 10. Binaries, shared libraries and objects built using this standard can be freely mixed regardless of being built with GCC from the GCC Toolset, Red Hat Developer Toolset, and RHEL 5, 6, 7 and 8.
- C++17 - This language standard is available in the GCC Toolset 10.
- C++20 - This language standard is available in the GCC Toolset 10 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using this standard cannot be guaranteed.
All of the language standards are available both in the standard-compliant variant and with GNU extensions.
When mixing objects built by using the GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), the GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by the GCC Toolset are resolved at link time.
4.3.3. Specifics of GCC in the GCC Toolset 10
Certain behaviors and requirements of the GCC Toolset 10 differ from the base Red Hat Enterprise Linux GNU Compiler Collection (GCC). These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
Static linking of libraries
Certain more recent library features are statically linked into applications built with the GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum.
Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons.
Specify libraries after object files when linking
In the GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-10 'gcc -lsomelib objfile.o'
Using a library from the GCC Toolset in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files:
$ scl enable gcc-toolset-10 'gcc objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC.
4.3.4. Specifics of binutils in the GCC Toolset 10
Certain behaviors and requirements of binutils in the GCC Toolset 10 differ from the base Red Hat Enterprise Linux binutils. These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
Static linking of libraries
Certain more recent library features are statically linked into applications built with the GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum.
Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons.
Specify libraries after object files when linking
In the GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-10 'ld -lsomelib objfile.o'
Using a library from the GCC Toolset in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files:
$ scl enable gcc-toolset-10 'ld objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils.
4.4. GCC Toolset 11
Learn about information specific to the GCC Toolset version 11 and the tools contained in this version.
4.4.1. Tools and versions provided by the GCC Toolset 11
GCC Toolset 11 provides the following tools and versions.
| Name | Version | Description |
|---|---|---|
| GCC | 11.2.1 | A portable compiler suite with support for C, C++, and Fortran. |
| GDB | 10.2 | A command-line debugger for programs written in C, C++, and Fortran. |
| Valgrind | 3.17.0 | An instrumentation framework and several tools to profile applications in order to detect memory errors, identify memory management problems, and report any use of improper arguments in system calls. |
| SystemTap | 4.5 | A tracing and probing tool to monitor the activities of the entire system without the need to instrument, recompile, install, and reboot. |
| Dyninst | 11.0.0 | A library for instrumenting and working with user-space executables during their execution. |
| binutils | 2.36.1 | A collection of binary tools and other utilities to inspect and manipulate object files and binaries. |
| elfutils | 0.185 | A collection of binary tools and other utilities to inspect and manipulate ELF files. |
| dwz | 0.14 | A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. |
| make | 4.3 | A dependency-tracking build automation tool. |
| strace | 5.13 | A debugging tool to monitor system calls that a program uses and signals it receives. |
| ltrace | 0.7.91 | A debugging tool to display calls to dynamic libraries that a program makes. It can also monitor system calls executed by programs. |
| annobin | 10.23 | A build security checking tool. |
4.4.2. C++ compatibility in the GCC Toolset 11
The GNU Compiler Collection (GCC) in the GCC Toolset can use the following C++ standards:
The compatibility information presented here applies only to the GCC from the GCC Toolset 11.
- C++14 - This language standard is available in the GCC Toolset 11. Using the C++14 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 6 or later.
- C++11
This language standard is available in the GCC Toolset 11.
Using the C++11 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 5 or later.
- C++98
- This language standard is available in the GCC Toolset 11. Binaries, shared libraries, and objects built using this standard can be freely mixed regardless of whether they were built with GCC from the GCC Toolset, Red Hat Developer Toolset, or RHEL 5, 6, 7, or 8.
- C++17
This language standard is available in the GCC Toolset 11.
This is the default language standard setting for the GCC Toolset 11, with GNU extensions, equivalent to explicitly using the option -std=gnu++17.
Using the C++17 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 10 or later.
- C++20 and C++23
This language standard is available in the GCC Toolset 11 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using this standard cannot be guaranteed.
To enable C++20 support, add the command-line option -std=c++20 to your g++ command line.
To enable C++23 support, add the command-line option -std=c++2b to your g++ command line.
All of the language standards are available in both a standard-compliant variant and a variant with GNU extensions.
When mixing objects built by using the GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), the GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by the GCC Toolset are resolved at link time.
4.4.3. Specifics of GCC in the GCC Toolset 11
Certain behaviors and requirements of the GCC Toolset 11 differ from the base Red Hat Enterprise Linux GNU Compiler Collection (GCC). These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
Static linking of libraries
Certain more recent library features are statically linked into applications built with the GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum.
For the same reasons, developers are strongly advised not to statically link their entire application.
Specify libraries after object files when linking
In the GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-11 'gcc -lsomelib objfile.o'
Using a library from the GCC Toolset in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files:
$ scl enable gcc-toolset-11 'gcc objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC.
4.4.4. Specifics of binutils in the GCC Toolset 11
Certain behaviors and requirements of binutils in the GCC Toolset 11 differ from the base Red Hat Enterprise Linux binutils. These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
Static linking of libraries
Certain more recent library features are statically linked into applications built with the GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum.
For the same reasons, developers are strongly advised not to statically link their entire application.
Specify libraries after object files when linking
In the GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-11 'ld -lsomelib objfile.o'
Using a library from the GCC Toolset in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files:
$ scl enable gcc-toolset-11 'ld objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils.
4.5. GCC Toolset 12
Learn about information specific to the GCC Toolset version 12 and the tools contained in this version.
4.5.1. Tools and versions provided by the GCC Toolset 12
GCC Toolset 12 provides the following tools and versions.
| Name | Version | Description |
|---|---|---|
| GCC | 12.2.1 | A portable compiler suite with support for C, C++, and Fortran. |
| GDB | 11.2 | A command-line debugger for programs written in C, C++, and Fortran. |
| binutils | 2.38 | A collection of binary tools and other utilities to inspect and manipulate object files and binaries. |
| dwz | 0.14 | A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. |
| annobin | 11.08 | A build security checking tool. |
4.5.2. C++ compatibility in the GCC Toolset 12
The GNU Compiler Collection (GCC) in the GCC Toolset can use the following C++ standards:
The compatibility information presented here applies only to the GCC from the GCC Toolset 12.
- C++14
This language standard is available in the GCC Toolset 12.
Using the C++14 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 6 or later.
- C++11
This language standard is available in the GCC Toolset 12.
Using the C++11 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 5 or later.
- C++98
- This language standard is available in the GCC Toolset 12. Binaries, shared libraries, and objects built using this standard can be freely mixed regardless of whether they were built with GCC from the GCC Toolset, Red Hat Developer Toolset, or RHEL 5, 6, 7, or 8.
- C++17
This language standard is available in the GCC Toolset 12.
This is the default language standard setting for the GCC Toolset 12, with GNU extensions, equivalent to explicitly using the option -std=gnu++17.
Using the C++17 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 10 or later.
- C++20 and C++23
This language standard is available in the GCC Toolset 12 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using this standard cannot be guaranteed.
To enable C++20 support, add the command-line option -std=c++20 to your g++ command line.
To enable C++23 support, add the command-line option -std=c++23 to your g++ command line.
All of the language standards are available in both a standard-compliant variant and a variant with GNU extensions.
When mixing objects built by using the GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), the GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by the GCC Toolset are resolved at link time.
4.5.3. Specifics of GCC in the GCC Toolset 12
Certain behaviors and requirements of the GCC Toolset 12 differ from the base Red Hat Enterprise Linux GNU Compiler Collection (GCC). These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
Static linking of libraries
Certain more recent library features are statically linked into applications built with the GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum.
For the same reasons, developers are strongly advised not to statically link their entire application.
Specify libraries after object files when linking
In the GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-12 'gcc -lsomelib objfile.o'
Using a library from the GCC Toolset in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files:
$ scl enable gcc-toolset-12 'gcc objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC.
4.5.4. Specifics of binutils in the GCC Toolset 12
Certain behaviors and requirements of binutils in the GCC Toolset 12 differ from the base Red Hat Enterprise Linux binutils. These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
Static linking of libraries
Certain more recent library features are statically linked into applications built with the GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum.
For the same reasons, developers are strongly advised not to statically link their entire application.
Specify libraries after object files when linking
In the GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-12 'ld -lsomelib objfile.o'
Using a library from the GCC Toolset in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files:
$ scl enable gcc-toolset-12 'ld objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils.
4.5.5. Specifics of annobin in the GCC Toolset 12
Under some circumstances, due to a synchronization issue between annobin and gcc in the GCC Toolset 12, your compilation can fail with an error message that looks similar to the following:
cc1: fatal error: inaccessible plugin file
opt/rh/gcc-toolset-12/root/usr/lib/gcc/architecture-linux-gnu/12/plugin/gcc-annobin.so
expanded from short plugin name gcc-annobin: No such file or directory
To work around the problem, create a symbolic link in the plugin directory from the annobin.so file to the gcc-annobin.so file:
Change to the plugin directory:
# cd /opt/rh/gcc-toolset-12/root/usr/lib/gcc/architecture-linux-gnu/12/plugin
Create the symbolic link:
# ln -s annobin.so gcc-annobin.so
Replace architecture with the architecture you use in your system:
- aarch64
- i686
- ppc64le
- s390x
- x86_64
4.6. GCC Toolset 13
Learn about information specific to the GCC Toolset version 13 and the tools contained in this version.
4.6.1. Tools and versions provided by the GCC Toolset 13
GCC Toolset 13 provides the following tools and versions.
| Name | Version | Description |
|---|---|---|
| GCC | 13.2.1 | A portable compiler suite with support for C, C++, and Fortran. |
| GDB | 12.1 | A command-line debugger for programs written in C, C++, and Fortran. |
| binutils | 2.40 | A collection of binary tools and other utilities to inspect and manipulate object files and binaries. |
| dwz | 0.14 | A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. |
| annobin | 12.32 | A build security checking tool. |
4.6.2. C++ compatibility in the GCC Toolset 13
The GNU Compiler Collection (GCC) in the GCC Toolset can use the following C++ standards:
The compatibility information presented here applies only to the GCC from the GCC Toolset 13.
- C++14
This language standard is available in the GCC Toolset 13.
Using the C++14 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 6 or later.
- C++11
This language standard is available in the GCC Toolset 13.
Using the C++11 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 5 or later.
- C++98
- This language standard is available in the GCC Toolset 13. Binaries, shared libraries, and objects built using this standard can be freely mixed regardless of whether they were built with GCC from the GCC Toolset, Red Hat Developer Toolset, or RHEL 5, 6, 7, or 8.
- C++17
This language standard is available in the GCC Toolset 13.
This is the default language standard setting for the GCC Toolset 13, with GNU extensions, equivalent to explicitly using the option -std=gnu++17.
Using the C++17 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 10 or later.
- C++20 and C++23
These language standards are available in the GCC Toolset 13 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using this standard cannot be guaranteed.
To enable the C++20 standard, add the command-line option -std=c++20 to your g++ command line.
To enable the C++23 standard, add the command-line option -std=c++23 to your g++ command line.
All of the language standards are available in both a standard-compliant variant and a variant with GNU extensions.
When mixing objects built by using the GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), the GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by the GCC Toolset are resolved at link time.
4.6.3. Specifics of GCC in the GCC Toolset 13
Certain behaviors and requirements of the GCC Toolset 13 differ from the base Red Hat Enterprise Linux GNU Compiler Collection (GCC). These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
Static linking of libraries
Certain more recent library features are statically linked into applications built with the GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum.
For the same reasons, developers are strongly advised not to statically link their entire application.
Specify libraries after object files when linking
In the GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-13 'gcc -lsomelib objfile.o'
Using a library from the GCC Toolset in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files:
$ scl enable gcc-toolset-13 'gcc objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC.
4.6.4. Specifics of binutils in the GCC Toolset 13
Certain behaviors and requirements of binutils in the GCC Toolset 13 differ from the base Red Hat Enterprise Linux binutils. These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
Static linking of libraries
Certain more recent library features are statically linked into applications built with the GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum.
For the same reasons, developers are strongly advised not to statically link their entire application.
Specify libraries after object files when linking
In the GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-13 'ld -lsomelib objfile.o'
Using a library from the GCC Toolset in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files:
$ scl enable gcc-toolset-13 'ld objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils.
4.6.5. Specifics of annobin in the GCC Toolset 13
Under some circumstances, due to a synchronization issue between annobin and gcc in the GCC Toolset 13, your compilation can fail with an error message that looks similar to the following:
cc1: fatal error: inaccessible plugin file
opt/rh/gcc-toolset-13/root/usr/lib/gcc/architecture-linux-gnu/13/plugin/gcc-annobin.so
expanded from short plugin name gcc-annobin: No such file or directory
To work around the problem, create a symbolic link in the plugin directory from the annobin.so file to the gcc-annobin.so file:
Change to the plugin directory:
# cd /opt/rh/gcc-toolset-13/root/usr/lib/gcc/architecture-linux-gnu/13/plugin
Create the symbolic link:
# ln -s annobin.so gcc-annobin.so
Replace architecture with the architecture you use in your system:
- aarch64
- i686
- ppc64le
- s390x
- x86_64
4.7. GCC Toolset 14
Learn about information specific to the GCC Toolset version 14 and the tools contained in this version.
4.7.1. Tools and versions provided by the GCC Toolset 14
GCC Toolset 14 provides the following tools and versions.
| Name | Version | Description |
|---|---|---|
| GCC | 14.2.1 | A portable compiler suite with support for C, C++, and Fortran. |
| GDB | 14.2 | A command-line debugger for programs written in C, C++, and Fortran. |
| binutils | 2.41 | A collection of binary tools and other utilities to inspect and manipulate object files and binaries. |
| dwz | 0.14 | A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. |
| annobin | 12.70 | A build security checking tool. |
4.7.2. C++ compatibility in the GCC Toolset 14
The GNU Compiler Collection (GCC) in the GCC Toolset can use the following C++ standards:
The compatibility information presented here applies only to the GCC from the GCC Toolset 14.
- C++14
This language standard is available in the GCC Toolset 14.
Using the C++14 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 6 or later.
- C++11
This language standard is available in the GCC Toolset 14.
Using the C++11 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 5 or later.
- C++98
- This language standard is available in the GCC Toolset 14. Binaries, shared libraries, and objects built using this standard can be freely mixed regardless of whether they were built with GCC from the GCC Toolset, Red Hat Developer Toolset, or RHEL 5, 6, 7, or 8.
- C++17
This language standard is available in the GCC Toolset 14.
This is the default language standard setting for the GCC Toolset 14, with GNU extensions, equivalent to explicitly using the option -std=gnu++17.
Using the C++17 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 10 or later.
- C++20 and C++23
These language standards are available in the GCC Toolset 14 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using this standard cannot be guaranteed.
To enable the C++20 standard, add the command-line option -std=c++20 to your g++ command line.
To enable the C++23 standard, add the command-line option -std=c++23 to your g++ command line.
All of the language standards are available in both a standard-compliant variant and a variant with GNU extensions.
When mixing objects built by using the GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), the GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by the GCC Toolset are resolved at link time.
4.7.3. Specifics of GCC in the GCC Toolset 14
Certain behaviors and requirements of the GCC Toolset 14 differ from the base Red Hat Enterprise Linux GNU Compiler Collection (GCC). These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
Static linking of libraries
Certain more recent library features are statically linked into applications built with the GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum.
For the same reasons, developers are strongly advised not to statically link their entire application.
Specify libraries after object files when linking
In the GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-14 'gcc -lsomelib objfile.o'
Using a library from the GCC Toolset in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files:
$ scl enable gcc-toolset-14 'gcc objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC.
4.7.4. Specifics of binutils in the GCC Toolset 14
Certain behaviors and requirements of binutils in the GCC Toolset 14 differ from the base Red Hat Enterprise Linux binutils. These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
Static linking of libraries
Certain more recent library features are statically linked into applications built with the GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum.
For the same reasons, developers are strongly advised not to statically link their entire application.
Specify libraries after object files when linking
In the GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-14 'ld -lsomelib objfile.o'
Using a library from the GCC Toolset in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files:
$ scl enable gcc-toolset-14 'ld objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils.
4.7.5. Specifics of annobin in the GCC Toolset 14
Builds that use the GCC Toolset 14 and annobin can fail due to a synchronization issue between the annobin plugin and gcc. This causes the compiler to fail to locate the gcc-annobin.so plugin file, producing an error message similar to the following:
cc1: fatal error: inaccessible plugin file
opt/rh/gcc-toolset-14/root/usr/lib/gcc/architecture-linux-gnu/14/plugin/gcc-annobin.so
expanded from short plugin name gcc-annobin: No such file or directory
To work around the problem, create a symbolic link in the plugin directory from the annobin.so file to the gcc-annobin.so file:
Change to the plugin directory:
# cd /opt/rh/gcc-toolset-14/root/usr/lib/gcc/architecture-linux-gnu/14/plugin
Create the symbolic link:
# ln -s annobin.so gcc-annobin.so
Replace architecture with the architecture you use in your system:
- aarch64
- i686
- ppc64le
- s390x
- x86_64
4.8. GCC Toolset 15
GCC Toolset 15 in Red Hat Enterprise Linux offers updated compilers and debuggers for C, C++, and Fortran. It enables building, testing, and optimizing applications with current features while maintaining system stability and support.
4.8.1. GCC Toolset 15 tools and versions
The GCC Toolset 15 offers updated versions of development tools for building and debugging applications on Red Hat Enterprise Linux 8.
| Name | Version | Description |
|---|---|---|
| GCC | 15.1.1 | A portable compiler suite with support for C, C++, and Fortran. |
| GDB | 16.3 | A command-line debugger for programs written in C, C++, and Fortran. |
| binutils | 2.44 | A collection of binary tools and other utilities to inspect and manipulate object files and binaries. |
| annobin | 12.93 | A build security checking tool. |
| dwz | 0.16 | A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. |
4.8.2. C++ compatibility in the GCC Toolset 15
The GNU Compiler Collection (GCC) Toolset 15 supports multiple C++ language standards. The default is C++17, with options for C++98, C++11, C++14, and experimental versions such as C++20, C++23, and C++26.
This compatibility information applies only to GCC from the GCC Toolset 15.
The GCC compiler in the GCC Toolset 15 can use the following C++ standards:
- C++98
- This language standard is available in the GCC Toolset 15. Binaries, shared libraries, and objects built using this standard can be freely mixed regardless of whether they were built with GCC from the GCC Toolset 15, Red Hat Developer Toolset, or RHEL 5, 6, 7, or 8.
- C++11
- This language standard is available in the GCC Toolset 15.
- C++14
- This language standard is available in the GCC Toolset 15.
Using the C++14 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 6 or later.
- C++17
- This language standard is available in the GCC Toolset 15.
This is the default language standard setting for the GCC Toolset 15, with GNU extensions, equivalent to explicitly using option -std=gnu++17.
Using the C++17 language version is supported when all C++ objects compiled with the appropriate flag have been built using GCC version 10 or later.
- C++20, C++23, and C++26
- These language standards are available in the GCC Toolset 15 only as experimental, unstable, and unsupported capabilities. Additionally, the compatibility of objects, binary files, and libraries built using these standards cannot be guaranteed.
To enable the C++20 standard, add the command-line option -std=c++20 to your g++ command line.
To enable the C++23 standard, add the command-line option -std=c++23 to your g++ command line.
To enable the C++26 standard, add the command-line option -std=c++26 to your g++ command line. All of these language standards are available both in the standard-compliant variant and with GNU extensions.
Use the GCC Toolset 15 for linking when you combine objects built by using the GCC Toolset 15 with objects built by using the system toolchain, particularly .o or .a files. This ensures any newer library features provided only by the GCC Toolset 15 are resolved at link time.
4.8.3. Specifics of GCC in the GCC Toolset 15
Certain behaviors and requirements of the GCC Toolset 15 differ from the base Red Hat Enterprise Linux GNU Compiler Collection (GCC). These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
- Static linking of libraries
- Certain more recent library features are statically linked into applications built with the GCC Toolset 15 to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum.
Because of this additional security risk, developers are strongly advised not to statically link their entire application.
- Specify libraries after object files when linking
- In the GCC Toolset 15, libraries are linked using linker scripts, which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-15 'gcc -lsomelib objfile.o'
Using a library from the GCC Toolset 15 in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice and specify the option by adding the library after the options specifying the object files:
$ scl enable gcc-toolset-15 'gcc objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC.
4.8.4. Specifics of binutils in the GCC Toolset 15
Certain behaviors and requirements of binutils in the GCC Toolset 15 differ from the base Red Hat Enterprise Linux binutils. These include automatic static linking of certain library features and the requirement to specify libraries after object files during linking.
- Static linking of libraries
- GCC Toolset 15 statically links newer library features into applications to ensure compatibility across multiple Red Hat Enterprise Linux versions. Statically linked code can introduce minor security risks, because security updates require applications to be rebuilt. If a security vulnerability is discovered, Red Hat will notify developers to rebuild affected applications through a security advisory.
Because of this additional security risk, developers are strongly advised not to statically link their entire application.
- Specify libraries after object files when linking
- In the GCC Toolset 15, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the corresponding shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files:
$ scl enable gcc-toolset-15 'ld -lsomelib objfile.o'
Using a library from the GCC Toolset 15 in this manner results in the linker error message undefined reference to symbol. To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files:
$ scl enable gcc-toolset-15 'ld objfile.o -lsomelib'
Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils.
4.8.5. Specifics of annobin in the GCC Toolset 15
Builds that use the GCC Toolset 15 and annobin can fail due to a synchronization issue between the annobin plugin and gcc. As a result, the compiler cannot locate the gcc-annobin.so plugin file:
cc1: fatal error: inaccessible plugin file
/opt/rh/gcc-toolset-15/root/usr/lib/gcc/architecture-linux-gnu/15/plugin/gcc-annobin.so
expanded from short plugin name gcc-annobin: No such file or directory
To work around the issue, create a symbolic link in the plugin directory from annobin.so to gcc-annobin.so:
Change to the plugin directory:
$ cd /opt/rh/gcc-toolset-15/root/usr/lib/gcc/architecture-linux-gnu/15/plugin
Create the symbolic link:
$ ln -s annobin.so gcc-annobin.so
Replace architecture with the architecture used on your system:
- aarch64
- i686
- ppc64le
- s390x
- x86_64
4.9. Using the gcc-toolset container image
Only the two latest GCC Toolset container images are supported. Container images of earlier GCC Toolset versions are unsupported.
The GCC Toolset 13 and the GCC Toolset 14 components are available in the GCC Toolset 13 Toolchain and GCC Toolset 14 Toolchain container images, respectively.
The GCC Toolset container image is based on the rhel8 base image and is available for all architectures supported by RHEL 8:
- AMD and Intel 64-bit architectures
- The 64-bit ARM architecture
- IBM Power Systems, Little Endian
- 64-bit IBM Z
4.9.1. GCC Toolset container image contents
The tool versions provided in the GCC Toolset 14 container image match the versions of the GCC Toolset 14 components.
- GCC Toolset 14 toolchain contents
- The rhel8/gcc-toolset-14-toolchain container image consists of the following components:
| Component | Package |
|---|---|
| gcc | gcc-toolset-14-gcc |
| g++ | gcc-toolset-14-gcc-c++ |
| gfortran | gcc-toolset-14-gcc-gfortran |
| gdb | gcc-toolset-14-gdb |
4.9.2. Accessing and running the GCC Toolset container image
To access and run the GCC Toolset container image, use a standard container engine like Podman. Authenticate with the registry, pull the image, and launch a container to access an isolated, pre-configured toolchain for development.
Prerequisites
- Podman is installed.
Procedure
Access the Red Hat Container Registry by using your Customer Portal credentials:
$ podman login registry.redhat.io
Username: username
Password: **
Pull the container image you require by running a relevant command as root:
# podman pull registry.redhat.io/rhel8/gcc-toolset-<toolset_version>-toolchain
Replace toolset_version with the GCC Toolset version, for example 14.
Note: On RHEL 8.1 and later versions, you can set up your system to work with containers as a non-root user. For details, see Setting up rootless containers.
Optional: Check that pulling was successful by running a command that lists all container images on your local system:
# podman images
Run a container by launching a bash shell inside a container:
# podman run -it image_name /bin/bash
The -i option creates an interactive session; without this option, the shell opens and instantly exits.
The -t option opens a terminal session; without this option, you cannot type anything into the shell.
4.9.3. Example: Using the GCC Toolset 14 Toolchain container image
To pull and start using the GCC Toolset 14 Toolchain container image, access the registry, pull the image, and launch it.
Prerequisites
- Podman is installed.
Procedure
Access the Red Hat Container Registry by using your Customer Portal credentials:
$ podman login registry.redhat.io
Username: username
Password: **
Pull the container image as root:
# podman pull registry.redhat.io/rhel8/gcc-toolset-14-toolchain
Launch the container image with an interactive shell as root:
# podman run -it registry.redhat.io/rhel8/gcc-toolset-14-toolchain /bin/bash
Run the GCC Toolset tools as expected. For example, to verify the gcc compiler version, run:
bash-4.4$ gcc -v
...
gcc version 14.2.1 20240801 (Red Hat 14.2.1-1) (GCC)
To list all packages provided in the container, run:
bash-4.4$ rpm -qa
4.10. Using the gcc-toolset-15 container image
The gcc-toolset-15 container image provides a complete toolchain for building, testing, and troubleshooting C and C++ applications in a containerized environment.
4.10.1. Introduction to the gcc-toolset-15 container image
The gcc-toolset-15 container image provides a GCC Toolset 15 toolchain for building, testing, and troubleshooting C and C++ applications on Red Hat Enterprise Linux (RHEL). By using this image, you can maintain a reproducible environment without installing packages directly on the host.
The image is part of the RHEL container collection. All included RPM packages originate from official RHEL repositories, ensuring the image follows standard RHEL lifecycle and support policies. This provides a portable, containerized alternative to traditional RPM-based delivery.
Deployment scenarios include:
- Interactive development on a RHEL host by running an interactive container that mounts source code from the host.
- Noninteractive builds in CI pipelines or scheduled jobs, with build scripts running inside the container and writing artifacts to host-mounted directories.
- Evaluation and troubleshooting of GCC Toolset 15 behavior in an isolated environment without installing the toolchain on the host.
4.10.2. Supported platforms and architectures for the gcc-toolset-15 container image
The gcc-toolset-15 image targets the same architectures provided by Red Hat Enterprise Linux (RHEL), including AMD64 and Intel 64 (x86_64), 64-bit ARM (aarch64), IBM Power, little endian (ppc64le), and 64-bit IBM Z (s390x).
For image-specific metadata, supported architectures, and tags, see the Red Hat container catalog entry for the gcc-toolset-15 container image.
If you plan to deploy the image across multiple architectures, ensure that you choose a tag that includes multi-architecture support or use architecture-specific tags according to your organization’s standards.
The gcc-toolset-15 image is available from Red Hat container registries. The exact path can differ depending on whether you use public registries, internal mirrors, or both. Placeholders for image locations are used below:
- <REGISTRY> for the registry hostname.
- <NAMESPACE> for the registry namespace that contains the gcc-toolset-15 image.
- <TAG> for the image tag that corresponds to the GCC Toolset 15 level that you require.
Use your product documentation, internal image catalog, or registry UI to determine the correct values for <REGISTRY>, <NAMESPACE>, and <TAG>. Record these values for use in the procedures that follow.
4.10.3. Preparing a RHEL host for the gcc-toolset-15 container image
To configure hosts to use the gcc-toolset-15 image, verify system requirements and install necessary container tools. For comprehensive guidelines, see Building, running, and managing containers.
Prerequisites
- Administrator access to the RHEL host.
- The host is registered and has access to required Red Hat Enterprise Linux repositories.
- The host runs a supported Red Hat Enterprise Linux variant and architecture.
- The host has adequate CPU, memory, and storage resources for build workloads.
- The host has network access to the container registry or to an internal mirror.
Procedure
Install the container management tools:
$ sudo dnf install -y container-tools
Verify that podman is available:
$ podman --version
If the command completes successfully, it prints the version. For example:
podman version 5.4.1
Verify subscription and repository access:
$ sudo dnf repolist
If required, configure subscription settings according to your environment so that the host can consume RHEL content and container registries.
4.10.4. Authenticating to the container registry
To pull the gcc-toolset-15 image, you must log in to the container registry.
Prerequisites
- container-tools are installed on the RHEL host.
- You have the registry hostname and valid credentials.
Procedure
Log in to the registry by using podman:
$ podman login <REGISTRY>
Replace <REGISTRY> with your registry hostname.
- When prompted, enter your user name and password, or use the authentication mechanism that your organization provides.
Verification
Verify that the login and registry access were successful by searching for the gcc-toolset-15 image:
$ podman search <REGISTRY>/<NAMESPACE>/gcc-toolset-15
4.10.5. Pulling the gcc-toolset-15 container image
To use the gcc-toolset-15 image, pull it to your local system after authenticating to the registry.
Prerequisites
- You have authenticated to the container registry.
- You know the <REGISTRY>, <NAMESPACE>, and <TAG> values for the image. You can determine these details from the Red Hat Container Catalog or your internal image catalog.
Procedure
Pull the image:
$ podman pull <REGISTRY>/<NAMESPACE>/gcc-toolset-15:<TAG>
Replace <REGISTRY>, <NAMESPACE>, and <TAG> with the values you determined earlier.
Verification
Verify that the image is available locally:
$ podman images <REGISTRY>/<NAMESPACE>/gcc-toolset-15
The output should display a table row containing the repository, tag, image ID, and size. For example:
REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/.../gcc-toolset-15 latest a1b2c3d4e5f6 2 days ago 540MB
4.10.6. Using the gcc-toolset-15 image to run interactive development containers
To use the gcc-toolset-15 image for interactive development, run an interactive container that mounts source code from the host.
Prerequisites
- The gcc-toolset-15 image is available locally.
- You have a host directory containing the application source code, for example, /home/devuser/src.
- You have read and write permissions for the host directory.
Procedure
Start an interactive container that mounts the source directory and uses it as the working directory:
$ podman run --rm -it -v /home/devuser/src:/src -w /src <REGISTRY>/<NAMESPACE>/gcc-toolset-15:<TAG> /bin/bash
Replace <REGISTRY>, <NAMESPACE>, and <TAG> with your specific values.
Inside the container, verify that the source code is available:
# ls /src
Build an application by using GCC Toolset 15:
# gcc -o myapp main.c
Build artifacts created in /src are stored on the host in /home/devuser/src.
When the interactive session completes, exit the shell:
# exit
Because the container was started with the --rm flag, it is removed automatically.
4.10.7. Running noninteractive builds with gcc-toolset-15
To perform noninteractive builds, run the gcc-toolset-15 image with a predefined build script that exits upon completion.
In CI systems, you can integrate this pattern by running podman run as part of a pipeline step. Use pipeline variables or configuration files to provide the values for <REGISTRY>, <NAMESPACE>, <TAG>, and host paths.
Ensure build scripts write logs and artifacts to host-mounted directories. This practice enables CI systems and administrators to inspect outputs after the container exits and to archive them as needed.
Prerequisites
- The gcc-toolset-15 image is available locally.
- A host directory for source and build scripts, for example, /srv/src.
- A host directory for build output, for example, /srv/build-output.
- A build script, for example, build.sh, that runs GCC Toolset 15 commands.
Procedure
- Prepare your build workflow. Ensure both the source and output directories on the host have appropriate permissions.
Run the container in noninteractive mode:
$ podman run --rm -v /srv/src:/src -v /srv/build-output:/build-output -w /src <REGISTRY>/<NAMESPACE>/gcc-toolset-15:<TAG> /src/build.sh
In this example:
- /src/build.sh is a script inside the container that performs the build.
- Build artifacts and logs are written to /build-output, which maps to /srv/build-output on the host.
4.10.8. Maintaining the gcc-toolset-15 container image
To maintain the gcc-toolset-15 image, monitor for updates, including security fixes, bug fixes, and dependency changes. Track updates and errata by monitoring RHEL release notes or internal advisories for new images that contain updated GCC Toolset 15 content.
- If you use an internal registry, mirror the updated image into the internal registry or rely on existing synchronization mechanisms.
- Use internal paths for <REGISTRY> and <NAMESPACE> in all procedures.
- Coordinate with the team that manages the internal registry to ensure that mirrors are updated in line with your rollout schedule.
Procedure
Update to a new image tag: Consult the Red Hat Ecosystem Catalog or your internal registry to find the new image tag.
- Determine the <NEW_TAG> that corresponds to the updated gcc-toolset-15 image.
- Pull the new tag to your environment:
$ podman pull <REGISTRY>/<NAMESPACE>/gcc-toolset-15:<NEW_TAG>
- Update scripts, CI jobs, and documentation to reference <NEW_TAG> as appropriate.
Remove outdated image tags after you complete validation and roll out the new tag, according to your retention policy:
$ podman rmi <REGISTRY>/<NAMESPACE>/gcc-toolset-15:<OLD_TAG>
4.10.9. Troubleshooting the gcc-toolset-15 container image
Identify and resolve common issues when pulling or running the gcc-toolset-15 container image.
- Common issues when pulling images
If podman pull fails:
- Check registry login by running podman login <REGISTRY>.
- Verify network connectivity to the registry.
- Confirm that the <REGISTRY>, <NAMESPACE>, and <TAG> values are correct.
- If access is denied, verify that your subscription and registry permissions allow access to the gcc-toolset-15 image.
- Common issues when running containers
If podman run fails:
- Verify that the image is available locally by running podman images <REGISTRY>/<NAMESPACE>/gcc-toolset-15.
The output should display a table row containing the repository, tag, image ID, and size. For example:
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.redhat.io/.../gcc-toolset-15 latest a1b2c3d4e5f6 2 days ago 540MB
- Check volume mount paths to ensure that host directories exist and that you have appropriate permissions.
Review container exit codes and logs to identify script or compiler errors.
- Collecting data for support
- When opening a support case, collect the following data:
- The exact podman commands that you ran.
- Full command output and error messages.
- The values of <REGISTRY>, <NAMESPACE>, and <TAG> that you used.
- Information about the host, including the Red Hat Enterprise Linux version and architecture.
4.10.10. Operational and security considerations for the gcc-toolset-15 container image
Review the following operational and security guidelines when using the gcc-toolset-15 container image in your environment.
- Network and registry access control
- Ensure that hosts can access the required registries or internal mirrors and that firewall settings allow necessary connections. Use internal registries where possible to centralize control of images. Consult your internal documentation for details on your organization’s registry layout and CI integration patterns.
- Resource management for build workloads
- Build containers can be resource-intensive. Use cgroup controls, systemd integration, or container runtime options to manage CPU, memory, and I/O usage on shared hosts.
systemdintegration, or container runtime options to manage CPU, memory, and I/O usage on shared hosts. - Logging and auditing containerized builds
- Configure build scripts to write logs to host-mounted directories. Integrate these logs with your logging and auditing infrastructure to track build activity and diagnose issues.
- Security and compliance guidelines
Because the gcc-toolset-15 image is built from Red Hat Enterprise Linux (RHEL) repositories, it follows RHEL security practices. However, you should still follow these practices:
- Limit who can run containers on shared hosts.
- Periodically review and update image tags to consume security fixes.
- Use internal registries and scanning tools according to your organization’s policies.
4.11. Compiler toolsets
RHEL 8 provides the following compiler toolsets as Application Streams. You can use these toolsets to build applications with different versions of languages and tools.
- LLVM Toolset provides the LLVM compiler infrastructure framework, the Clang compiler for the C and C++ languages, the LLDB debugger, and related tools for code analysis.
- Rust Toolset provides the Rust programming language compiler rustc, the cargo build tool and dependency manager, the cargo-vendor plugin, and required libraries.
- Go Toolset provides the Go programming language tools and libraries. Go is alternatively known as golang.
For more details and information about usage, see the compiler toolsets user guides on the Red Hat Developer Tools page.
4.12. The Annobin project
The Annobin project is an implementation of the Watermark specification project, which intends to add markers to Executable and Linkable Format (ELF) objects to determine their properties. The Annobin project consists of the annobin plugin and the annocheck program.
The annobin plugin scans the GNU Compiler Collection (GCC) command line, the compilation state, and the compilation process, and generates the ELF notes. The ELF notes record how the binary was built and provide information for the annocheck program to perform security hardening checks.
The security hardening checker is part of the annocheck program and is enabled by default. It checks the binary files to determine whether the program was built with necessary security hardening options and compiled correctly. annocheck is able to recursively scan directories, archives, and RPM packages for ELF object files.
The files must be in ELF format. annocheck does not handle any other binary file types.
The following section describes how to:
- Use the annobin plugin
- Use the annocheck program
4.12.1. Using the annobin plugin
The following section describes how to:
- Enable the annobin plugin
- Pass options to the annobin plugin
4.12.1.1. Enabling the annobin plugin
To add build security notes to binaries, enable the annobin plug-in by using command-line options with gcc or clang utilities.
Procedure
To enable the annobin plugin with gcc, use:
$ gcc -fplugin=annobin
If gcc does not find the annobin plugin, use:
$ gcc -iplugindir=/path/to/directory/containing/annobin/
Replace /path/to/directory/containing/annobin/ with the absolute path to the directory that contains annobin.
To find the directory containing the annobin plugin, use:
$ gcc --print-file-name=plugin
To enable the annobin plugin with clang, use:
$ clang -fplugin=/path/to/directory/containing/annobin/
Replace /path/to/directory/containing/annobin/ with the absolute path to the directory that contains annobin.
Optional: To remove the redundant annobin notes, use the objcopy utility:
$ objcopy --merge-notes file-name
4.12.1.2. Passing options to the annobin plugin
To pass options to the annobin plugin, use the appropriate command-line arguments with gcc or clang.
Procedure
To pass options to the annobin plugin with gcc, use:
$ gcc -fplugin=annobin -fplugin-arg-annobin-option file-name
Replace option with the annobin command-line arguments and replace file-name with the name of the file.
- Example: verbose option with GCC
To display additional details about what annobin is doing, use:
$ gcc -fplugin=annobin -fplugin-arg-annobin-verbose file-name
Replace file-name with the name of the file.
To pass options to the annobin plugin with clang, use:
$ clang -fplugin=/path/to/directory/containing/annobin/ -Xclang -plugin-arg-annobin -Xclang option file-name
Replace option with the annobin command-line arguments and replace /path/to/directory/containing/annobin/ with the absolute path to the directory containing annobin.
- Example: verbose option with Clang
To display additional details about what annobin is doing, use:
$ clang -fplugin=/usr/lib64/clang/10/lib/annobin.so -Xclang -plugin-arg-annobin -Xclang verbose file-name
Replace file-name with the name of the file.
4.12.2. Using the annocheck program
The following section describes how to use annocheck to examine:
- Files
- Directories
- RPM packages
- annocheck extra tools
annocheck recursively scans directories, archives, and RPM packages for ELF object files. The files have to be in the ELF format. annocheck does not handle any other binary file types.
4.12.2.1. Using annocheck to examine files
To verify hardening options and build security notes of ELF files, examine the files by using the annocheck tool.
Procedure
To examine a file, use:
$ annocheck file-name
Replace file-name with the name of a file.
The files must be in ELF format. annocheck does not handle any other binary file types. annocheck processes static libraries that contain ELF object files.
Additional resources
- For more information about annocheck and possible command-line options, see the annocheck man page on your system.
4.12.2.2. Using annocheck to examine directories
To examine ELF files in a directory, use the annocheck tool, which recursively scans directories, subdirectories, and archives.
Procedure
To scan a directory, use:
$ annocheck directory-name
Replace directory-name with the name of a directory. annocheck automatically examines the contents of the directory, its subdirectories, and any archives and RPM packages within the directory.
annocheck only looks for ELF files. Other file types are ignored.
Additional resources
- For more information about annocheck and possible command-line options, see the annocheck man page on your system.
4.12.2.3. Using annocheck to examine RPM packages
To examine ELF files in an RPM package, use the annocheck tool, which recursively scans all ELF files inside the package.
Procedure
To scan an RPM package, use:
$ annocheck rpm-package-name
Replace rpm-package-name with the name of an RPM package. annocheck recursively scans all the ELF files inside the RPM package.
annocheck only looks for ELF files. Other file types are ignored.
To scan an RPM package with provided debug info RPM, use:
$ annocheck rpm-package-name --debug-rpm debuginfo-rpm
Replace rpm-package-name with the name of an RPM package, and debuginfo-rpm with the name of a debug info RPM associated with the binary RPM.
Additional resources
- For more information about annocheck and possible command-line options, see the annocheck man page on your system.
4.12.2.4. Using annocheck extra tools
annocheck includes multiple tools for examining binary files. You can enable these tools with the command-line options.
The following section describes how to enable the:
- built-by tool
- notes tool
- section-size tool
You can enable multiple tools at the same time.
The hardening checker is enabled by default.
4.12.2.4.1. Enabling the built-by tool
To find the name of the compiler that built a specific binary file, you can use the annocheck built-by tool.
Procedure
To enable the built-by tool, use:
$ annocheck --enable-built-by
Additional resources
- For more information about the built-by tool, see the --help command-line option.
4.12.2.4.2. Enabling the notes tool
To display the notes stored inside a binary file created by the annobin plug-in, you can use the annocheck notes tool.
Procedure
To enable the notes tool, use:
$ annocheck --enable-notes
The notes are displayed in a sequence sorted by the address range.
Additional resources
- For more information about the notes tool, see the --help command-line option.
4.12.2.4.3. Enabling the section-size tool
To display the size of named sections, you can use the annocheck section-size tool.
Procedure
To enable the section-size tool, use:
$ annocheck --section-size=name
Replace name with the name of the named section. The output is restricted to the specified sections, and a cumulative result is produced at the end.
Additional resources
- For more information about the section-size tool, see the --help command-line option.
4.12.2.4.4. Hardening checker basics
The hardening checker is enabled by default. You can disable the hardening checker with the --disable-hardened command-line option.
4.12.2.4.4.1. Hardening checker options
The annocheck tool verifies binaries for various hardening options, such as stack protection, PIC/PIE usage, and secure linker settings. The following options are checked:
- Lazy binding is disabled by using the -z now linker option.
- The program does not have a stack in an executable region of memory.
- The relocations for the GOT table are set to read only.
- No program segment has all three of the read, write, and execute permission bits set.
- There are no relocations against executable code.
- The runpath information for locating shared libraries at runtime includes only directories rooted at /usr.
- The program was compiled with annobin notes enabled.
- The program was compiled with the -fstack-protector-strong option enabled.
- The program was compiled with -D_FORTIFY_SOURCE=2.
- The program was compiled with -D_GLIBCXX_ASSERTIONS.
- The program was compiled with -fexceptions enabled.
- The program was compiled with -fstack-clash-protection enabled.
- The program was compiled at -O2 or higher.
- The program does not have any relocations held in a writeable section.
- Dynamic executables have a dynamic segment.
- Shared libraries were compiled with -fPIC or -fPIE.
- Dynamic executables were compiled with -fPIE and linked with -pie.
- If available, the -fcf-protection=full option was used.
- If available, the -mbranch-protection option was used.
- If available, the -mstackrealign option was used.
4.12.2.4.4.2. Disabling the hardening checker
To skip security checks during binary analysis, disable the hardening checker by using the annocheck utility.
Procedure
To scan the notes in a file without the hardening checker, use:

$ annocheck --enable-notes --disable-hardened file-name

Replace file-name with the name of a file.
4.12.3. Specifics of annobin in the GCC Toolset 12
Under some circumstances, due to a synchronization issue between annobin and gcc in the GCC Toolset 12, your compilation can fail with an error message that looks similar to the following:
cc1: fatal error: inaccessible plugin file
opt/rh/gcc-toolset-12/root/usr/lib/gcc/architecture-linux-gnu/12/plugin/gcc-annobin.so
expanded from short plugin name gcc-annobin: No such file or directory
To work around the problem, create a symbolic link in the plugin directory from the annobin.so file to the gcc-annobin.so file:
Change to the plugin directory:

# cd /opt/rh/gcc-toolset-12/root/usr/lib/gcc/architecture-linux-gnu/12/plugin

Create the symbolic link:

# ln -s annobin.so gcc-annobin.so
Replace architecture with the architecture of your system:

- aarch64
- i686
- ppc64le
- s390x
- x86_64
Chapter 5. Supplementary topics
5.1. Compatibility-breaking changes in compilers and development tools
For a list of libraries, interfaces, and features removed or deprecated in Red Hat Enterprise Linux compilers and development tools compared to previous versions, along with syntax and behavioral changes that require application modifications, see below.
librtkaio removed
With this update, the librtkaio library has been removed. This library provided high-performance real-time asynchronous I/O access for some files, which was based on Linux kernel Asynchronous I/O support (KAIO).
As a result of the removal:
- Applications using the LD_PRELOAD method to load librtkaio display a warning about a missing library, load the librt library instead, and run correctly.
- Applications using the LD_LIBRARY_PATH method to load librtkaio load the librt library instead and run correctly, without any warning.
- Applications using the dlopen() function to access librtkaio directly load the librt library instead.
Users of librtkaio have the following options:
- Use the fallback mechanism described above, without any changes to their applications.
- Change code of their applications to use the librt library, which offers a compatible POSIX-compliant API.
- Change code of their applications to use the libaio library, which offers a compatible API.
Both librt and libaio can provide comparable features and performance under specific conditions.
Note that the libaio package has a Red Hat compatibility level of 2, while librt and the removed librtkaio have level 1.
For more details, see https://fedoraproject.org/wiki/Changes/GLIBC223_librtkaio_removal
Sun RPC and NIS interfaces removed from glibc
The glibc library no longer provides Sun RPC and NIS interfaces for new applications. These interfaces are now available only for running legacy applications. Developers must change their applications to use the libtirpc library instead of Sun RPC and libnsl2 instead of NIS. Applications can benefit from IPv6 support in the replacement libraries.
The nosegneg libraries for 32-bit Xen have been removed
Previously, the glibc i686 packages contained an alternative glibc build, which avoided the use of the thread descriptor segment register with negative offsets (nosegneg). This alternative build was only used in the 32-bit version of the Xen Project hypervisor without hardware virtualization support, as an optimization to reduce the cost of full paravirtualization. These alternative builds are no longer used and they have been removed.
make new operator != causes a different interpretation of certain existing makefile syntax
The != shell assignment operator has been added to GNU make as an alternative to the $(shell …) function to increase compatibility with BSD makefiles. As a consequence, a variable whose name ends in an exclamation mark immediately followed by an assignment, such as variable!=value, is now interpreted as a shell assignment. To restore the previous behavior, add a space after the exclamation mark, such as variable! =value.
For more details and differences between the operator and the function, see the GNU make manual.
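The difference can be demonstrated with a small makefile (a sketch; the file name demo.mk is hypothetical, and the != operator requires GNU make 4.0 or later):

```shell
cat > demo.mk <<'EOF'
# "!=" executes the right-hand side in the shell and assigns its output:
year != date +%Y
# A space after the exclamation mark assigns to a variable literally
# named "flag!" instead of performing a shell assignment:
flag! = on
$(info year=$(year) flag=$(flag!))
all: ;
EOF
make -f demo.mk
```

The $(info …) line prints the expanded values while the makefile is read, so no recipe is needed.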
Valgrind library for MPI debugging support removed
The libmpiwrap.so wrapper library for Valgrind provided by the valgrind-openmpi package has been removed. This library enabled Valgrind to debug programs using the Message Passing Interface (MPI). This library was specific to the Open MPI implementation version in previous versions of Red Hat Enterprise Linux.
Users of libmpiwrap.so are encouraged to build their own version from upstream sources specific to their MPI implementation and version. Supply these custom-built libraries to Valgrind using the LD_PRELOAD technique.
Development headers and static libraries removed from valgrind-devel
Previously, the valgrind-devel subpackage used to include development files for developing custom valgrind tools. This update removes these files because they do not have a guaranteed API, have to be linked statically, and are unsupported. The valgrind-devel package still contains the development files for valgrind-aware programs and header files such as valgrind.h, callgrind.h, drd.h, helgrind.h, and memcheck.h, which are stable and well-supported.
5.2. Options for running a RHEL 6 or 7 application on RHEL 8
To run legacy Red Hat Enterprise Linux 6 or 7 applications on Red Hat Enterprise Linux 8, you can use virtualization, containers, or native compatibility. Each approach balances resource usage with configuration complexity. Compare the available strategies to select the best method for your deployment.
- Run the application in a virtual machine with a matching RHEL version guest OS
- Resource costs are high for this option, but the environment is a close match to the application’s requirements, and this approach does not require many additional considerations. This is the currently recommended option.
- Run the application in a container based on the corresponding RHEL version
- Resource costs are lower than in the previous cases, while configuration requirements are stricter. For details on the relationship between container hosts and guest user spaces, see the Red Hat Enterprise Linux Container Compatibility Matrix.
- Run the application natively on RHEL 8
This option offers the lowest resource costs, but also the strictest requirements. The application developer must determine a correct configuration of the RHEL 8 system. The following resources can help the developer in this task:
Note that this list is not a complete set of resources needed to determine application compatibility. These are only starting points with lists of known incompatible changes and Red Hat policies related to compatibility. For more information about kernel and compatibility, see the Red Hat Knowledgebase solution What is Kernel Application Binary Interface (kABI)?.