Chapter 3. Debugging Applications

Debugging applications is a broad topic. This chapter provides a developer with the most common techniques for debugging in a variety of situations.

3.1. Enabling Debugging with Debugging Information

To debug applications and libraries, debugging information is required. The following sections describe how to obtain this information.

3.1.1. Debugging information

While debugging any executable code, two types of information allow the tools, and by extension the programmer, to comprehend the binary code:

  • the source code text
  • a description of how the source code text relates to the binary code

Such information is called debugging information.

Red Hat Enterprise Linux uses the ELF format for executable binaries, shared libraries, and debuginfo files. Within these ELF files, the DWARF format is used to hold the debugging information.

To display DWARF information stored within an ELF file, run the readelf -w file command.

Important

STABS is an older, less capable format, occasionally used with UNIX. Its use is discouraged by Red Hat. GCC and GDB provide STABS production and consumption on a best effort basis only. Some other tools such as Valgrind and elfutils do not work with STABS.

3.1.2. Enabling debugging of C and C++ applications with GCC

Because debugging information is large, it is not included in executable files by default. To debug your C and C++ applications, you must explicitly instruct the compiler to create debugging information.

To enable creation of debugging information with GCC when compiling and linking code, use the -g option:

$ gcc ... -g ...
  • Optimizations performed by the compiler and linker can result in executable code that is hard to relate to the original source code: variables may be optimized out, loops unrolled, operations merged into the surrounding ones, and so on. This affects debugging negatively. For an improved debugging experience, consider using the -Og optimization level. However, changing the optimization level changes the executable code and may change the actual behaviour, including removing some bugs.
  • To also include macro definitions in the debug information, use the -g3 option instead of -g.
  • The -fcompare-debug GCC option tests code compiled by GCC with debug information and without debug information. The test passes if the resulting two binary files are identical. This test ensures that executable code is not affected by any debugging options, which further ensures that there are no hidden bugs in the debug code. Note that using the -fcompare-debug option significantly increases compilation time. See the GCC manual page for details about this option.
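
As an illustration, a debug-friendly build of a hypothetical two-file program (file and program names are placeholders) could combine these options as follows:

$ gcc -g -Og -o program main.c util.c    # debugging information plus debug-friendly optimization
$ gcc -g3 -Og -o program main.c util.c   # additionally records macro definitions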

3.1.3. Debuginfo and debugsource packages

The debuginfo and debugsource packages contain debugging information and debug source code for programs and libraries. For applications and libraries installed in packages from the Red Hat Enterprise Linux repositories, you can obtain separate debuginfo and debugsource packages from an additional channel.

Debugging information package types

There are two types of packages available for debugging:

Debuginfo packages
The debuginfo packages provide debugging information needed to provide human-readable names for binary code features. These packages contain .debug files, which contain DWARF debugging information. These files are installed to the /usr/lib/debug directory.
Debugsource packages
The debugsource packages contain the source files used for compiling the binary code. With both the respective debuginfo and debugsource packages installed, debuggers such as GDB or LLDB can relate the execution of binary code to the source code. The source code files are installed to the /usr/src/debug directory.

Differences from RHEL 7

In Red Hat Enterprise Linux 7, the debuginfo packages contained both kinds of information. Red Hat Enterprise Linux 8 splits the source code data needed for debugging from the debuginfo packages into separate debugsource packages.

Package names

A debuginfo or debugsource package provides debugging information valid only for a binary package with the same name, version, release, and architecture:

  • Binary package: packagename-version-release.architecture.rpm
  • Debuginfo package: packagename-debuginfo-version-release.architecture.rpm
  • Debugsource package: packagename-debugsource-version-release.architecture.rpm
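
For example, assuming the corresponding debugging packages for coreutils are installed (the version and release shown here are only illustrative), querying them shows the matching name, version, release, and architecture:

$ rpm -q coreutils coreutils-debuginfo coreutils-debugsource
coreutils-8.30-6.el8.x86_64
coreutils-debuginfo-8.30-6.el8.x86_64
coreutils-debugsource-8.30-6.el8.x86_64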

3.1.4. Getting debuginfo packages for an application or library using GDB

Debugging information is required to debug code. For code that is installed from a package, the GNU Debugger (GDB) automatically recognizes missing debug information, resolves the package name and provides concrete advice on how to get the package.

Procedure

  1. Start GDB attached to the application or library you want to debug. GDB automatically recognizes missing debugging information and suggests a command to run.

    $ gdb -q /bin/ls
    Reading symbols from /bin/ls...Reading symbols from .gnu_debugdata for /usr/bin/ls...(no debugging symbols found)...done.
    (no debugging symbols found)...done.
    Missing separate debuginfos, use: dnf debuginfo-install coreutils-8.30-6.el8.x86_64
    (gdb)
  2. Exit GDB: type q and confirm with Enter.

    (gdb) q
  3. Run the command suggested by GDB to install the required debuginfo packages:

    # dnf debuginfo-install coreutils-8.30-6.el8.x86_64

    The dnf package management tool provides a summary of the changes, asks for confirmation and once you confirm, downloads and installs all the necessary files.

  4. If GDB is not able to suggest the debuginfo package, follow the procedure described in Section 3.1.5, “Getting debuginfo packages for an application or library manually”.

3.1.5. Getting debuginfo packages for an application or library manually

You can determine manually which debuginfo packages you need to install by locating the executable file and then finding the package that installs it.

Note

Red Hat recommends that you use GDB to determine the packages for installation. Use this manual procedure only if GDB is not able to suggest the package to install.

Procedure

  1. Find the executable file of the application or library.

    1. Use the which command to find the application file.

      $ which less
      /usr/bin/less
    2. Use the locate command to find the library file.

      $ locate libz | grep so
      /usr/lib64/libz.so.1
      /usr/lib64/libz.so.1.2.11

      If the original reason for debugging includes error messages, pick the result where the library file name contains the same additional numbers as those mentioned in the error messages. If in doubt, try following the rest of the procedure with the result where the library file name includes no additional numbers.

      Note

      The locate command is provided by the mlocate package. To install it and enable its use:

      # yum install mlocate
      # updatedb
  2. Search for a name and version of the package that provided the file:

    $ rpm -qf /usr/lib64/libz.so.1.2.11
    zlib-1.2.11-10.el8.x86_64

    The output provides details for the installed package in the name-version-release.architecture format.

    Important

    If this step does not produce any results, it is not possible to determine which package provided the binary file. There are several possible cases:

    • The file is installed from a package which is not known to package management tools in their current configuration.
    • The file is installed from a locally downloaded and manually installed package. Determining a suitable debuginfo package automatically is impossible in that case.
    • Your package management tools are misconfigured.
    • The file is not installed from any package. In such a case, no respective debuginfo package exists.

    Because further steps depend on this one, you must resolve this situation or abort this procedure. Describing the exact troubleshooting steps is beyond the scope of this procedure.

  3. Install the debuginfo packages using the debuginfo-install utility. In the command, use the package name and other details you determined during the previous step:

    # debuginfo-install zlib-1.2.11-10.el8.x86_64

3.2. Inspecting Application Internal State with GDB

To find why an application does not work properly, control its execution and examine its internal state with a debugger. This section describes how to use the GNU Debugger (GDB) for this task.

3.2.1. GNU debugger (GDB)

Red Hat Enterprise Linux contains the GNU debugger (GDB) which lets you investigate what is happening inside a program through a command-line user interface.

GDB capabilities

A single GDB session can debug the following types of programs:

  • Multithreaded and forking programs
  • Multiple programs at once
  • Programs on remote machines or in containers with the gdbserver utility connected over a TCP/IP network connection

Debugging requirements

To debug any executable code, GDB requires debugging information for that particular code:

  • For programs developed by you, you can create the debugging information while building the code.
  • For system programs installed from packages, you must install their debuginfo packages.

3.2.2. Attaching GDB to a process

In order to examine a process, GDB must be attached to the process.

Starting a program with GDB

When the program is not running as a process, start it with GDB:

$ gdb program

Replace program with a file name or path to the program.

GDB sets up to start execution of the program. You can set breakpoints and configure the GDB environment before starting the execution of the process with the run command.

Attaching GDB to an already running process

To attach GDB to a program already running as a process:

  1. Find the process ID (pid) with the ps command:

    $ ps -C program -o pid h
     pid

    Replace program with a file name or path to the program.

  2. Attach GDB to this process:

    $ gdb -p pid

    Replace pid with an actual process ID number from the ps output.

Attaching an already running GDB to an already running process

To attach an already running GDB to an already running program:

  1. Use the shell GDB command to run the ps command and find the program’s process ID (pid):

    (gdb) shell ps -C program -o pid h
     pid

    Replace program with a file name or path to the program.

  2. Use the attach command to attach GDB to the program:

    (gdb) attach pid

    Replace pid with an actual process ID number from the ps output.

Note

In some cases, GDB might not be able to find the respective executable file. Use the file command to specify the path:

(gdb) file path/to/program

3.2.3. Stepping through program code with GDB

Once the GDB debugger is attached to a program, you can use a number of commands to control the execution of the program.

GDB commands to step through the code

r (run)
Start the execution of the program. If run is executed with any arguments, those arguments are passed on to the executable as if the program had been started normally. Users normally issue this command after setting breakpoints.
start
Start the execution of the program but stop at the beginning of the program’s main function. If start is executed with any arguments, those arguments are passed on to the executable as if the program had been started normally.
c (continue)

Continue the execution of the program from the current state. The execution of the program will continue until one of the following becomes true:

  • A breakpoint is reached.
  • A specified condition is satisfied.
  • A signal is received by the program.
  • An error occurs.
  • The program terminates.
n (next)

Continue the execution of the program from the current state, until the next line of code in the current source file is reached. The execution of the program will continue until one of the following becomes true:

  • A breakpoint is reached.
  • A specified condition is satisfied.
  • A signal is received by the program.
  • An error occurs.
  • The program terminates.
s (step)
The step command also halts execution at each sequential line of code in the current source file. However, if the execution is currently stopped at a source line containing a function call, GDB stops the execution after entering the function call (rather than executing it).
until location
Continue the execution until the code location specified by the location option is reached.
fini (finish)

Resume the execution of the program and halt when execution returns from a function. The execution of the program will continue until one of the following becomes true:

  • A breakpoint is reached.
  • A specified condition is satisfied.
  • A signal is received by the program.
  • An error occurs.
  • The program terminates.
q (quit)
Terminate the execution and exit GDB.
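
The following is a minimal, illustrative session that combines several of these commands: it sets a breakpoint at the main function, runs the program, steps over one line, steps into a call, finishes the current function, continues, and quits.

(gdb) br main
(gdb) r
(gdb) n
(gdb) s
(gdb) fini
(gdb) c
(gdb) q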

3.2.4. Showing program internal values with GDB

Displaying the values of a program’s internal variables is important for understanding what the program is doing. GDB offers multiple commands that you can use to inspect these internal variables. The following are the most useful of these commands:

p (print)

Display the value of the argument given. Usually, the argument is the name of a variable of any complexity, from a simple single value to a structure. An argument can also be an expression valid in the current language, including the use of program variables and library functions, or functions defined in the program being tested.

It is possible to extend GDB with pretty-printer Python or Guile scripts for customized display of data structures (such as classes, structs) using the print command.

bt (backtrace)

Display the chain of function calls used to reach the current execution point, or the chain of functions used up until execution was terminated. This is useful for investigating serious bugs (such as segmentation faults) with elusive causes.

Adding the full option to the backtrace command displays local variables, too.

It is possible to extend GDB with frame filter Python scripts for customized display of data displayed using the bt and info frame commands. The term frame refers to the data associated with a single function call.

info

The info command is a generic command to provide information about various items. It takes an option specifying the item to describe.

  • The info args command displays the arguments of the function call in the currently selected frame.
  • The info locals command displays local variables in the currently selected frame.

For a list of the possible items, run the command help info in a GDB session:

(gdb) help info
l (list)
Show the line in the source code where the program stopped. This command is available only when the program execution is stopped. While not strictly a command to show internal state, list helps the user understand what changes to the internal state will happen in the next step of the program’s execution.
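
For example, when execution is stopped at a breakpoint, an inspection session might look like this (the variable names are hypothetical):

(gdb) p count
(gdb) p items[2]
(gdb) bt full
(gdb) info locals
(gdb) info args
(gdb) l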

3.2.5. Using GDB breakpoints to stop execution at defined code locations

Often, only small portions of code are investigated. Breakpoints are markers that tell GDB to stop the execution of a program at a certain place in the code. Breakpoints are most commonly associated with source code lines. In that case, placing a breakpoint requires specifying the source file and line number.

  • To place a breakpoint:

    • Specify the name of the source code file and the line in that file:

      (gdb) br file:line
    • When file is not present, the name of the source file at the current point of execution is used:

      (gdb) br line
    • Alternatively, use a function name to put the breakpoint on its start:

      (gdb) br function_name
  • A program might encounter an error after a certain number of iterations of a task. To specify an additional condition to halt execution:

    (gdb) br file:line if condition

    Replace condition with a condition in the C or C++ language. The meaning of file and line is the same as above.

  • To inspect the status of all breakpoints and watchpoints:

    (gdb) info br
  • To remove a breakpoint by using its number as displayed in the output of info br:

    (gdb) delete number
  • To remove a breakpoint at a given location:

    (gdb) clear file:line
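
A brief, hypothetical example that places a conditional breakpoint, inspects it, and removes it again (the file, line, and variable names are placeholders):

(gdb) br main.c:42 if iteration == 100
(gdb) info br
(gdb) clear main.c:42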

3.2.6. Using GDB watchpoints to stop execution on data access and changes

In many cases, it is advantageous to let the program execute until certain data changes or is accessed. The following examples are the most common use cases.

Prerequisites

  • Understanding GDB

Using watchpoints in GDB

Watchpoints are markers which tell GDB to stop the execution of a program. Watchpoints are associated with data: placing a watchpoint requires specifying an expression that describes a variable, multiple variables, or a memory address.

  • To place a watchpoint for data change (write):

    (gdb) watch expression

    Replace expression with an expression that describes what you want to watch. For variables, expression is equal to the name of the variable.

  • To place a watchpoint for data access (read):

    (gdb) rwatch expression
  • To place a watchpoint for any data access (both read and write):

    (gdb) awatch expression
  • To inspect the status of all watchpoints and breakpoints:

    (gdb) info br
  • To remove a watchpoint:

    (gdb) delete num

    Replace the num option with the number reported by the info br command.
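
For example, to stop whenever a hypothetical global variable named counter is written, and then remove the watchpoint again using the number reported by info br:

(gdb) watch counter
(gdb) c
(gdb) info br
(gdb) delete 2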

3.2.7. Debugging forking or threaded programs with GDB

Some programs use forking or threads to achieve parallel code execution. Debugging multiple simultaneous execution paths requires special considerations.

Prerequisites

  • You must understand the concepts of process forking and threads.

Debugging forked programs with GDB

Forking is a situation in which a program (parent) creates an independent copy of itself (child). Use the following settings and commands to affect what GDB does when a fork occurs:

  • The follow-fork-mode setting controls whether GDB follows the parent or the child after the fork.

    set follow-fork-mode parent
    After a fork, debug the parent process. This is the default.
    set follow-fork-mode child
    After a fork, debug the child process.
    show follow-fork-mode
    Display the current setting of follow-fork-mode.
  • The detach-on-fork setting controls whether GDB keeps control of the other (not followed) process or leaves it to run.

    set detach-on-fork on
    The process which is not followed (depending on the value of follow-fork-mode) is detached and runs independently. This is the default.
    set detach-on-fork off
    GDB keeps control of both processes. The process which is followed (depending on the value of follow-fork-mode) is debugged as usual, while the other is suspended.
    show detach-on-fork
    Display the current setting of detach-on-fork.

Debugging Threaded Programs with GDB

GDB has the ability to debug individual threads, and to manipulate and examine them independently. To make GDB stop only the thread that is examined, use the commands set non-stop on and set target-async on. You can add these commands to the .gdbinit file. After that functionality is turned on, GDB is ready to conduct thread debugging.

GDB uses a concept of current thread. By default, commands apply to the current thread only.

info threads
Display a list of threads with their id and gid numbers, indicating the current thread.
thread id
Set the thread with the specified id as the current thread.
thread apply ids command
Apply the command command to all threads listed by ids. The ids option is a space-separated list of thread ids. A special value all applies the command to all threads.
break location thread id if condition
Set a breakpoint at a certain location with a certain condition only for the thread number id.
watch expression thread id
Set a watchpoint defined by expression only for the thread number id.
command&
Execute command command and return immediately to the gdb prompt (gdb), continuing any code execution in the background.
interrupt
Halt execution in the background.
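
A short, illustrative sequence using these commands might look as follows; the file, line, and variable names are placeholders:

(gdb) info threads
(gdb) thread 2
(gdb) thread apply all bt
(gdb) break worker.c:17 thread 2 if batch > 100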

3.3. Recording Application Interactions

The executable code of applications interacts with the code of the operating system and shared libraries. Recording an activity log of these interactions can provide enough insight into the application’s behavior without debugging the actual application code. Alternatively, analyzing an application’s interactions can help pinpoint the conditions in which a bug manifests.

3.3.1. Tools useful for recording application interactions

Red Hat Enterprise Linux offers multiple tools for analyzing an application’s interactions.

strace

The strace tool primarily enables logging of system calls (kernel functions) used by an application.

  • The strace tool can provide detailed output about calls, because strace interprets parameters and results with knowledge of the underlying kernel code. Numbers are turned into the respective constant names, bitwise-combined flags are expanded to flag lists, pointers to character arrays are dereferenced to provide the actual strings, and more. Support for more recent kernel features may be lacking.
  • You can filter the traced calls to reduce the amount of captured data.
  • The use of strace does not require any particular setup except for setting up the log filter.
  • Tracing the application code with strace results in significant slowdown of the application’s execution. As a result, strace is not suitable for many production deployments. As an alternative, consider using ltrace or SystemTap.
  • The version of strace available in Red Hat Developer Toolset can also perform system call tampering. This capability is useful for debugging.
ltrace

The ltrace tool enables logging of an application’s user space calls into shared objects (dynamic libraries).

  • The ltrace tool enables tracing calls to any library.
  • You can filter the traced calls to reduce the amount of captured data.
  • The use of ltrace does not require any particular setup except for setting up the log filter.
  • The ltrace tool is lightweight and fast, offering an alternative to strace: it is possible to trace the respective interfaces in libraries such as glibc with ltrace instead of tracing kernel functions with strace.
  • Because ltrace, unlike strace, does not handle a known set of calls, it does not attempt to explain the values passed to library functions. The ltrace output contains only raw numbers and pointers. Interpreting the ltrace output requires consulting the actual interface declarations of the libraries present in the output.
Note

In Red Hat Enterprise Linux 8, a known issue prevents ltrace from tracing system executable files. This limitation does not apply to executable files built by users.

SystemTap

SystemTap is an instrumentation platform for probing running processes and kernel activity on the Linux system. SystemTap uses its own scripting language for programming custom event handlers.

  • Compared to using strace and ltrace, scripting the logging means more work in the initial setup phase. However, the scripting capabilities extend SystemTap’s usefulness beyond just producing logs.
  • SystemTap works by creating and inserting a kernel module. The use of SystemTap is efficient and does not create a significant slowdown of the system or application execution on its own.
  • SystemTap comes with a set of usage examples.
GDB

The GNU Debugger (GDB) is primarily meant for debugging, not logging. However, some of its features make it useful even in the scenario where an application’s interaction is the primary activity of interest.

  • With GDB, it is possible to conveniently combine the capture of an interaction event with immediate debugging of the subsequent execution path.
  • GDB is best suited for analyzing response to infrequent or singular events, after the initial identification of problematic situation by other tools. Using GDB in any scenario with frequent events becomes inefficient or even impossible.

3.3.2. Monitoring an application’s system calls with strace

The strace tool enables monitoring the system (kernel) calls performed by an application.

Procedure

  1. Identify the system calls to monitor.
  2. Start strace and attach it to the program.

    • If the program you want to monitor is not running, start strace and specify the program:

      $ strace -fvttTyy -s 256 -e trace=call program
    • If the program is already running, find its process id (pid) and attach strace to it:

      $ ps -C program
      (...)
      $ strace -fvttTyy -s 256 -e trace=call -p pid
    • Replace call with the system calls to be displayed. You can use the -e trace=call option multiple times. If left out, strace will display all system call types. See the strace(1) manual page for more information.
    • If you do not want to trace any forked processes or threads, leave out the -f option.
  3. The strace tool displays the system calls made by the application and their details.

    In most cases, an application and its libraries make a large number of calls, and the strace output appears immediately if no filter for system calls is set.

  4. The strace tool exits when the program exits.

    To terminate the monitoring before the traced program exits, press Ctrl+C.

    • If strace started the program, the program terminates together with strace.
    • If you attached strace to an already running program, strace detaches from it and the program continues to run.
  5. Analyze the list of system calls done by the application.

    • Problems with resource access or availability are present in the log as calls returning errors.
    • Values passed to the system calls and patterns of call sequences provide insight into the causes of the application’s behaviour.
    • If the application crashes, the important information is probably at the end of the log.
    • The output contains a lot of unnecessary information. However, you can construct a more precise filter for the system calls of interest and repeat the procedure.
Note

It is advantageous to both see the output and save it to a file. Use the tee command to achieve this:

$ strace ... |& tee your_log_file.log
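
For example, a possible invocation that traces only a few file-related system calls made by the ls command and saves the log (the call list and log file name are only illustrative):

$ strace -fvttTyy -s 256 -e trace=openat,read,write,close ls /tmp |& tee ls-trace.log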

3.3.3. Monitoring an application’s library function calls with ltrace

The ltrace tool enables monitoring an application’s calls to functions available in libraries (shared objects).

Note

In Red Hat Enterprise Linux 8, a known issue prevents ltrace from tracing system executable files. This limitation does not apply to executable files built by users.

Procedure

  1. Identify the libraries and functions of interest, if possible.
  2. Start ltrace and attach it to the program.

    • If the program you want to monitor is not running, start ltrace and specify the program:

      $ ltrace -f -l library -e function program
    • If the program is already running, find its process id (pid) and attach ltrace to it:

      $ ps -C program
      (...)
      $ ltrace -f -l library -e function -p pid
    • Use the -e, -f and -l options to filter the output:

      • Supply the function names to be displayed as function. The -e function option can be used multiple times. If left out, ltrace displays calls to all functions.
      • Instead of specifying functions, you can specify whole libraries with the -l library option. This option behaves similarly to the -e function option.
      • If you do not want to trace any forked processes or threads, leave out the -f option.

      See the ltrace(1) manual page for more information.

  3. ltrace displays the library calls made by the application.

    In most cases, an application makes a large number of calls, and the ltrace output appears immediately if no filter is set.

  4. ltrace exits when the program exits.

    To terminate the monitoring before the traced program exits, press Ctrl+C.

    • If ltrace started the program, the program terminates together with ltrace.
    • If you attached ltrace to an already running program, ltrace detaches from it and the program continues to run.
  5. Analyze the list of library calls done by the application.

    • If the application crashes, the important information is probably at the end of the log.
    • The output contains a lot of unnecessary information. However, you can construct a more precise filter and repeat the procedure.
Note

It is advantageous to both see the output and save it to a file. Use the tee command to achieve this:

$ ltrace ... |& tee your_log_file.log
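
For example, a possible invocation that traces only calls into the C library made by the ls command and saves the log (the library and log file names are only illustrative):

$ ltrace -f -l libc.so.6 ls /tmp |& tee ls-ltrace.log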

Additional resources

  • The ltrace(1) manual page:

    $ man ltrace
  • Red Hat Developer Toolset User Guide — Chapter ltrace

3.3.4. Monitoring an application’s system calls with SystemTap

The SystemTap tool enables registering custom event handlers for kernel events. In comparison with the strace tool, it is harder to use but more efficient and enables more complicated processing logic. A SystemTap script called strace.stp is installed together with SystemTap and provides an approximation of strace functionality using SystemTap.

Procedure

  1. Find the process ID (pid) of the process you want to monitor:

    $ ps aux
  2. Run SystemTap with the strace.stp script:

    # stap /usr/share/systemtap/examples/process/strace.stp -x pid

    The value of pid is the process id.

    The script is compiled to a kernel module, which is then loaded. This introduces a slight delay between entering the command and getting the output.

  3. When the process performs a system call, the call name and its parameters are printed to the terminal.
  4. The script exits when the process terminates, or when you press Ctrl+C.

3.3.5. Using GDB to intercept application system calls

GNU Debugger (GDB) lets you stop an execution in various situations that arise during program execution. To stop the execution when the program performs a system call, use a GDB catchpoint.

Procedure

  1. Set the catchpoint:

    (gdb) catch syscall syscall-name

    The command catch syscall sets a special type of breakpoint that halts execution when the program performs a system call.

    The syscall-name option specifies the name of the call. You can specify multiple catchpoints for various system calls. Leaving out the syscall-name option causes GDB to stop on any system call.

  2. Start execution of the program.

    • If the program has not started execution, start it:

      (gdb) r
    • If the program execution is halted, resume it:

      (gdb) c
  3. GDB halts execution after the program performs any specified system call.
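
For example, to stop whenever the program performs the openat or write system calls, set one catchpoint per call and start the program:

(gdb) catch syscall openat
(gdb) catch syscall write
(gdb) r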

3.3.6. Using GDB to intercept handling of signals by applications

GNU Debugger (GDB) lets you stop the execution in various situations that arise during program execution. To stop the execution when the program receives a signal from the operating system, use a GDB catchpoint.

Procedure

  1. Set the catchpoint:

    (gdb) catch signal signal-type

    The command catch signal sets a special type of a breakpoint that halts execution when a signal is received by the program. The signal-type option specifies the type of the signal. Use the special value 'all' to catch all signals.

  2. Let the program run.

    • If the program has not started execution, start it:

      (gdb) r
    • If the program execution is halted, resume it:

      (gdb) c
  3. GDB halts execution after the program receives any specified signal.

3.4. Debugging a Crashed Application

Sometimes, it is not possible to debug an application directly. In these situations, you can collect information about the application at the moment of its termination and analyze it afterwards.

3.4.1. Core dumps: what they are and how to use them

A core dump is a copy of a part of the application’s memory at the moment the application stopped working, stored in the ELF format. It contains all the application’s internal variables and stack, which enables inspection of the application’s final state. When augmented with the respective executable file and debugging information, it is possible to analyze a core dump file with a debugger in a way similar to analyzing a running program.

The Linux operating system kernel can record core dumps automatically, if this functionality is enabled. Alternatively, you can send a signal to any running application to generate a core dump regardless of its actual state.

Warning

Some limits might affect the ability to generate a core dump. To see the current limits:

$ ulimit -a

3.4.2. Recording application crashes with core dumps

To record application crashes, set up core dump saving and add information about the system.

Procedure

  1. To enable core dumps, ensure that the /etc/systemd/system.conf file contains the following lines:

    DumpCore=yes
    DefaultLimitCORE=infinity

    You can also add comments describing whether these settings were previously present, and what the previous values were. This will enable you to reverse these changes later, if needed. Comments are lines starting with the # character.

    Changing the file requires administrator level access.

  2. Apply the new configuration:

    # systemctl daemon-reexec
  3. Remove the limits for core dump sizes:

    # ulimit -c unlimited

    To reverse this change, run the command with value 0 instead of unlimited.

  4. Install the sos package which provides the sosreport utility for collecting system information:

    # yum install sos
  5. When an application crashes, a core dump is generated and handled by systemd-coredump.
  6. Create an SOS report to provide additional information about the system:

    # sosreport

    This creates a .tar archive containing information about your system, such as copies of configuration files.

  7. Locate and export the core dump:

    $ coredumpctl list executable-name
    $ coredumpctl dump executable-name > /path/to/file-for-export

    If the application crashed multiple times, the output of the first command lists multiple captured core dumps. In that case, construct a more precise query for the second command using the other information. See the coredumpctl(1) manual page for details.

  8. Transfer the core dump and the SOS report to the computer where the debugging will take place. Transfer the executable file, too, if it is known.

    Important

    When the executable file is not known, subsequent analysis of the core file identifies it.

  9. Optional: Remove the core dump and SOS report after transferring them, to free up disk space.

3.4.3. Inspecting application crash states with core dumps

Prerequisites

  • You must have a core dump file and sosreport from the system where the crash occurred.
  • GDB and elfutils must be installed on your system.

Procedure

  1. To identify the executable file where the crash occurred, run the eu-unstrip command with the core dump file:

    $ eu-unstrip -n --core=./core.9814
    0x400000+0x207000 2818b2009547f780a5639c904cded443e564973e@0x400284 /usr/bin/sleep /usr/lib/debug/bin/sleep.debug [exe]
    0x7fff26fff000+0x1000 1e2a683b7d877576970e4275d41a6aaec280795e@0x7fff26fff340 . - linux-vdso.so.1
    0x35e7e00000+0x3b6000 374add1ead31ccb449779bc7ee7877de3377e5ad@0x35e7e00280 /usr/lib64/libc-2.14.90.so /usr/lib/debug/lib64/libc-2.14.90.so.debug libc.so.6
    0x35e7a00000+0x224000 3ed9e61c2b7e707ce244816335776afa2ad0307d@0x35e7a001d8 /usr/lib64/ld-2.14.90.so /usr/lib/debug/lib64/ld-2.14.90.so.debug ld-linux-x86-64.so.2

    The output contains details for each module on a line, separated by spaces. The information is listed in this order:

    1. The memory address where the module was mapped
    2. The build-id of the module and where in the memory it was found
    3. The module’s executable file name, displayed as - when unknown, or as . when the module has not been loaded from a file
    4. The source of debugging information, displayed as a file name when available, as . when contained in the executable file itself, or as - when not present at all
    5. The shared library name (soname) or [exe] for the main module

    In this example, the important details are the file name /usr/bin/sleep and the build-id 2818b2009547f780a5639c904cded443e564973e on the line containing the text [exe]. With this information, you can identify the executable file required for analyzing the core dump.

  2. Get the executable file that crashed.

    • If possible, copy it from the system where the crash occurred. Use the file name extracted from the core file.
    • You can also use an identical executable file on your system. Each executable file built on Red Hat Enterprise Linux contains a note with a unique build-id value. Determine the build-id of the relevant locally available executable files:

      $ eu-readelf -n executable_file

      Use this information to match the executable file on the remote system with your local copy. The build-id of the local file and build-id listed in the core dump must match.

    • Finally, if the application is installed from an RPM package, you can get the executable file from the package. Use the sosreport output to find the exact version of the package required.
  3. Get the shared libraries used by the executable file. Use the same steps as for the executable file.
  4. If the application is distributed as a package, load the executable file in GDB, to display hints for missing debuginfo packages. For more details, see Section 3.1.4, “Getting debuginfo packages for an application or library using GDB”.
  5. To examine the core file in detail, load the executable file and core dump file with GDB:

    $ gdb -e executable_file -c core_file

    Further messages about missing files and debugging information help you identify what is missing for the debugging session. Return to the previous step if needed.

    If the application’s debugging information is available as a file instead of as a package, load this file in GDB with the symbol-file command:

    (gdb) symbol-file program.debug

    Replace program.debug with the actual file name.

    Note

    It might not be necessary to install the debugging information for all executable files contained in the core dump. Most of these executable files are libraries used by the application code. These libraries might not directly contribute to the problem you are analyzing, and you do not need to include debugging information for them.

  6. Use the GDB commands to inspect the state of the application at the moment it crashed. See Inspecting Application Internal State with GDB.

    Note

    When analyzing a core file, GDB is not attached to a running process. Commands for controlling execution have no effect.

3.4.4. Creating and accessing a core dump with coredumpctl

The coredumpctl tool of systemd can significantly streamline working with core dumps on the machine where the crash happened. This procedure outlines how to capture a core dump of an unresponsive process.

Prerequisites

  • The system must be configured to use systemd-coredump for core dump handling. To verify this is true:

    $ sysctl kernel.core_pattern

    The configuration is correct if the output starts with the following:

    kernel.core_pattern = |/usr/lib/systemd/systemd-coredump

Procedure

  1. Find the PID of the hung process, based on a known part of the executable file name:

    $ pgrep -a executable-name-fragment

    This command will output a line in the form

    PID command-line

    Use the command-line value to verify that the PID belongs to the intended process.

    For example:

    $ pgrep -a bc
    5459 bc
  2. Send an abort signal to the process:

    # kill -ABRT PID
  3. Verify that the core has been captured by coredumpctl:

    $ coredumpctl list PID

    For example:

    $ coredumpctl list 5459
    TIME                            PID   UID   GID SIG COREFILE  EXE
    Thu 2019-11-07 15:14:46 CET    5459  1000  1000   6 present   /usr/bin/bc
  4. Further examine or use the core file as needed.

    You can specify the core dump by PID and other values. See the coredumpctl(1) manual page for further details.

    • To show details of the core file:

      $ coredumpctl info PID
    • To load the core file in the GDB debugger:

      $ coredumpctl debug PID

      Depending on the availability of debugging information, GDB will suggest commands to run, such as:

      Missing separate debuginfos, use: dnf debuginfo-install bc-1.07.1-5.el8.x86_64

      For more details on this process, see Section 3.1.4, “Getting debuginfo packages for an application or library using GDB”.

    • To export the core file for further processing elsewhere:

      $ coredumpctl dump PID > /path/to/file_for_export

      Replace /path/to/file_for_export with the file where you want to put the core dump.

3.4.5. Dumping process memory with gcore

The workflow of core dump debugging enables the analysis of the program’s state offline. In some cases, you can use this workflow with a program that is still running, such as when it is hard to access the environment where the process runs. You can use the gcore command to dump the memory of any process while it is still running.

Procedure

  1. Find out the process id (pid). Use tools such as ps, pgrep, and top:

    $ ps -C some-program
  2. Dump the memory of this process:

    $ gcore -o filename pid

    This creates a file named filename and dumps the process memory into it. While the memory is being dumped, the execution of the process is halted.

  3. After the core dump is finished, the process resumes normal execution.
  4. Create an SOS report to provide additional information about the system:

    # sosreport

    This creates a tar archive containing information about your system, such as copies of configuration files.

  5. Transfer the program’s executable file, core dump, and the SOS report to the computer where the debugging will take place.
  6. Optional: Remove the core dump and SOS report after transferring them, to free up disk space.

3.4.6. Dumping protected process memory with GDB

You can mark the memory of processes as not to be dumped. This can save resources and ensure additional security when the process memory contains sensitive data: for example, in banking or accounting applications or on whole virtual machines. Neither kernel core dumps (kdump) nor manual core dumps (gcore, GDB) include memory marked this way.

In some cases, you must dump the whole contents of the process memory regardless of these protections. This procedure shows how to do this using the GDB debugger.

Procedure

  1. Set GDB to ignore the settings in the /proc/PID/coredump_filter file:

    (gdb) set use-coredump-filter off
  2. Set GDB to ignore the memory page flag VM_DONTDUMP:

    (gdb) set dump-excluded-mappings on
  3. Dump the memory:

    (gdb) gcore core-file

    Replace core-file with the name of the file where you want to dump the memory.

3.5. Compatibility-breaking changes in GDB

The version of GDB provided in Red Hat Enterprise Linux 8 contains a number of changes that break compatibility, especially for cases where the GDB output is read directly from the terminal. The following sections provide more details about these changes.

Parsing the output of GDB is not recommended. Prefer scripts using the Python GDB API or the GDB Machine Interface (MI).

GDBserver now starts inferiors with shell

To enable expansion and variable substitution in inferior command line arguments, GDBserver now starts the inferior in a shell, the same as GDB does.

To disable using the shell:

  • When using the target extended-remote GDB command, disable shell with the set startup-with-shell off command.
  • When using the target remote GDB command, disable shell with the --no-startup-with-shell option of GDBserver.

Example 3.1. Example of shell expansion in remote GDB inferiors

This example shows how running the /bin/echo /* command through GDBserver differs on Red Hat Enterprise Linux versions 7 and 8:

  • On RHEL 7:

    $ gdbserver --multi :1234
    $ gdb -batch -ex 'target extended-remote :1234' -ex 'set remote exec-file /bin/echo' -ex 'file /bin/echo' -ex 'run /*'
    /*
  • On RHEL 8:

    $ gdbserver --multi :1234
    $ gdb -batch -ex 'target extended-remote :1234' -ex 'set remote exec-file /bin/echo' -ex 'file /bin/echo' -ex 'run /*'
    /bin /boot (...) /tmp /usr /var

gcj support removed

Support for debugging Java programs compiled with the GNU Compiler for Java (gcj) has been removed.

New syntax for symbol dumping maintenance commands

The symbol dumping maintenance commands syntax now includes options before file names. As a result, commands that worked with GDB in RHEL 7 do not work in RHEL 8.

As an example, the following command no longer stores symbols in a file, but produces an error message:

(gdb) maintenance print symbols /tmp/out main.c

The new syntax for the symbol dumping maintenance commands is:

maint print symbols [-pc address] [--] [filename]
maint print symbols [-objfile objfile] [-source source] [--] [filename]
maint print psymbols [-objfile objfile] [-pc address] [--] [filename]
maint print psymbols [-objfile objfile] [-source source] [--] [filename]
maint print msymbols [-objfile objfile] [--] [filename]
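
For example, assuming the intent of the earlier command was to dump the symbols for the main.c source file into the /tmp/out file, a roughly equivalent command with the new syntax is:

(gdb) maint print symbols -source main.c -- /tmp/out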

Thread numbers are no longer global

Previously, GDB used only global thread numbering. The numbering has been extended to be displayed per inferior in the form inferior_num.thread_num, such as 2.1. As a consequence, thread numbers in the $_thread convenience variable and in the InferiorThread.num Python attribute are no longer unique between inferiors.

GDB now stores a second thread ID per thread, called the global thread ID, which is the new equivalent of thread numbers in previous releases. To access the global thread number, use the $_gthread convenience variable and InferiorThread.global_num Python attribute.

For backwards compatibility, the Machine Interface (MI) thread IDs always contain the global IDs.

Example 3.2. Example of GDB thread number changes

On Red Hat Enterprise Linux 7:

# debuginfo-install coreutils
$ gdb -batch -ex 'file echo' -ex start -ex 'add-inferior' -ex 'inferior 2' -ex 'file echo' -ex start -ex 'info threads' -ex 'print $_thread' -ex 'inferior 1' -ex 'print $_thread'
(...)
  Id   Target Id         Frame
* 2    process 203923 "echo" main (argc=1, argv=0x7fffffffdb88) at src/echo.c:109
  1    process 203914 "echo" main (argc=1, argv=0x7fffffffdb88) at src/echo.c:109
$1 = 2
(...)
$2 = 1

On Red Hat Enterprise Linux 8:

# dnf debuginfo-install coreutils
$ gdb -batch -ex 'file echo' -ex start -ex 'add-inferior' -ex 'inferior 2' -ex 'file echo' -ex start -ex 'info threads' -ex 'print $_thread' -ex 'inferior 1' -ex 'print $_thread'
(...)
  Id   Target Id         Frame
  1.1  process 4106488 "echo" main (argc=1, argv=0x7fffffffce58) at ../src/echo.c:109
* 2.1  process 4106494 "echo" main (argc=1, argv=0x7fffffffce58) at ../src/echo.c:109
$1 = 1
(...)
$2 = 1

Memory for value contents can be limited

Previously, GDB did not limit the amount of memory allocated for value contents. As a consequence, debugging incorrect programs could cause GDB to allocate too much memory. The max-value-size setting has been added to enable limiting the amount of allocated memory. The default value of this limit is 64 KiB. As a result, GDB in Red Hat Enterprise Linux 8 will not display too large values, but report that the value is too large instead.

As an example, printing a value defined as char s[128*1024]; produces different results:

  • On Red Hat Enterprise Linux 7, $1 = 'A' <repeats 131072 times>
  • On Red Hat Enterprise Linux 8, value requires 131072 bytes, which is more than max-value-size

Sun version of stabs format no longer supported

Support for the Sun version of the stabs debug file format has been removed. The stabs format produced by GCC in RHEL with the gcc -gstabs option is still supported by GDB.

Sysroot handling changes

The set sysroot path command specifies the system root used when searching for files needed for debugging. Directory names supplied to this command may now be prefixed with the string target: to make GDB read the shared libraries from the target system (both local and remote). The formerly available remote: prefix is now treated as target:. Additionally, the default system root value has changed from an empty string to target: for backward compatibility.

The specified system root is prepended to the file name of the main executable, when GDB starts processes remotely, or when it attaches to already running processes (both local and remote). This means that for remote processes, the default value target: makes GDB always try to load the debugging information from the remote system. To prevent this, run the set sysroot command before the target remote command so that local symbol files are found before the remote ones.
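
For example, to make GDB prefer local copies of the target’s libraries when debugging remotely, set the system root before connecting; the path and address below are placeholders:

(gdb) set sysroot /opt/target-sysroot
(gdb) target remote 192.168.0.10:2345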

HISTSIZE no longer controls GDB command history size

Previously, GDB used the HISTSIZE environment variable to determine how long command history should be kept. GDB has been changed to use the GDBHISTSIZE environment variable instead. This variable is specific only to GDB. The possible values and their effects are:

  • a positive number - use command history of this size,
  • -1 or an empty string - keep history of all commands,
  • non-numeric values - ignored.
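
For example, to keep the last 1000 commands, or alternatively all commands, set the variable in the shell before starting GDB:

$ export GDBHISTSIZE=1000    # keep the last 1000 commands
$ export GDBHISTSIZE=-1      # keep history of all commands
$ gdb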

Completion limiting added

The maximum number of candidates considered during completion can now be limited using the set max-completions command. To show the current limit, run the show max-completions command. The default value is 200. This limit prevents GDB from generating excessively large completion lists and becoming unresponsive.

As an example, the output after the input p <tab><tab> is:

  • on RHEL 7: Display all 29863 possibilities? (y or n)
  • on RHEL 8: Display all 200 possibilities? (y or n)

HP-UX XDB compatibility mode removed

The -xdb option for the HP-UX XDB compatibility mode has been removed from GDB.

Handling signals for threads

Previously, GDB could deliver a signal to the current thread instead of the thread for which the signal was actually sent. This bug has been fixed, and GDB now always passes the signal to the correct thread when resuming execution.

Additionally, the signal command now always correctly delivers the requested signal to the current thread. If the program is stopped for a signal and the user switched threads, GDB asks for confirmation.

Breakpoint modes always-inserted off and auto merged

The breakpoint always-inserted setting has been changed. The auto value and corresponding behavior has been removed. The default value is now off. Additionally, the off value now causes GDB to not remove breakpoints from the target until all threads stop.

remotebaud commands no longer supported

The set remotebaud and show remotebaud commands are no longer supported. Use the set serial baud and show serial baud commands instead.

3.6. Debugging applications in containers

You can use various command-line tools tailored to different aspects of troubleshooting. The following sections list the common command-line tools by category.

Note

This is not a complete list of command-line tools. The choice of tool for debugging a container application is heavily based on the container image and your use case.

For instance, the systemctl, journalctl, ip, netstat, ping, traceroute, perf, and iostat tools may need root access because they interact with system-level resources such as networking, systemd services, or hardware performance counters, which are restricted in rootless containers for security reasons.

Rootless containers operate without requiring elevated privileges, running as non-root users within user namespaces to provide improved security and isolation from the host system. They offer limited interaction with the host, reduced attack surface, and enhanced security by mitigating the risk of privilege escalation vulnerabilities.

Rootful containers run with elevated privileges, typically as the root user, granting full access to system resources and capabilities. While rootful containers offer greater flexibility and control, they pose security risks due to their potential for privilege escalation and exposure of the host system to vulnerabilities.

For more information about rootful and rootless containers, see Setting up rootless containers, Upgrading to rootless containers, and Special considerations for rootless containers.

Systemd and Process Management Tools

systemctl
Controls systemd services within containers, allowing start, stop, enable, and disable operations.
journalctl
Views logs generated by systemd services, aiding in troubleshooting container issues.

Networking Tools

ip
Manages network interfaces, routing, and addresses within containers.
netstat
Displays network connections, routing tables, and interface statistics.
ping
Verifies network connectivity between containers or hosts.
traceroute
Identifies the path packets take to reach a destination, useful for diagnosing network issues.

Process and Performance Tools

ps
Lists currently running processes within containers.
top
Provides real-time insights into resource usage by processes within containers.
htop
Interactive process viewer for monitoring resource utilization.
perf
CPU performance profiling, tracing, and monitoring, aiding in pinpointing performance bottlenecks within the system or applications.
vmstat
Reports virtual memory statistics within containers, aiding in performance analysis.
iostat
Monitors input/output statistics for block devices within containers.
gdb (GNU Debugger)
A command-line debugger that helps in examining and debugging programs by allowing users to track and control their execution, inspect variables, and analyze memory and registers during runtime. For more information, see the Debugging applications within Red Hat OpenShift containers article.
strace
Intercepts and records system calls made by a program, aiding in troubleshooting by revealing interactions between the program and the operating system.

Security and Access Control Tools

sudo
Enables executing commands with elevated privileges.
chroot
Changes the root directory for a command, helpful in testing or troubleshooting within a different root directory.

Podman-Specific Tools

podman logs
Batch-retrieves whatever logs are present for one or more containers at the time of execution.
podman inspect
Displays the low-level information on containers and images as identified by name or ID.
podman events
Monitors and prints events that occur in Podman. Each event includes a timestamp, a type, a status, a name (if applicable), and an image (if applicable). The default logging mechanism is journald.
podman run --health-cmd
Uses a health check to determine the health or readiness of the process running inside the container.
podman top
Displays the running processes of the container.
podman exec
Runs a command in, or attaches to, a running container, which is extremely useful for getting a better understanding of what is happening in the container.
podman export
When a container fails, it is basically impossible to know what happened. Exporting the filesystem structure from the container allows for checking other log files that may not be in the mounted volumes.
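
For example, a few of these commands applied to a hypothetical container named mycontainer:

$ podman logs mycontainer
$ podman inspect mycontainer
$ podman top mycontainer
$ podman exec -it mycontainer /bin/bash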
