Security hardening
Enhancing security of Red Hat Enterprise Linux 10 systems
Abstract
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation. Let us know how we can improve it.
Submitting feedback through Jira (account required)
- Log in to the Jira website.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialog.
Chapter 1. Switching RHEL to FIPS mode
To enable the cryptographic module self-checks mandated by the Federal Information Processing Standard (FIPS) 140-3, you must operate Red Hat Enterprise Linux 10 in FIPS mode. The only correct way to switch the system to FIPS mode is to enable it during the installation process.
Switching a system to FIPS mode after installation is not supported in Red Hat Enterprise Linux 10. In particular, setting the FIPS system-wide cryptographic policy is not sufficient to enable FIPS mode and guarantee compliance with the FIPS 140 standard. The fips-mode-setup tool, which used the FIPS policy for the FIPS mode enablement process, has been removed.
To turn off FIPS mode, you must reinstall the system without enabling FIPS mode during the installation process.
1.1. Federal Information Processing Standards 140 and FIPS mode
The Federal Information Processing Standards (FIPS) Publication 140 is a series of computer security standards developed by the National Institute of Standards and Technology (NIST) to ensure the quality of cryptographic modules. The FIPS 140 standard ensures that cryptographic tools implement their algorithms correctly. Runtime cryptographic algorithms and integrity self-tests are some of the mechanisms to ensure a system uses cryptography that meets the requirements of the standard.
1.1.1. RHEL in FIPS mode
To ensure that your RHEL system generates and uses all cryptographic keys only with FIPS-approved algorithms, you must switch RHEL to FIPS mode.
To enable FIPS mode, start the installation in FIPS mode. This avoids cryptographic key material regeneration and reevaluation of the compliance of the resulting system associated with converting already deployed systems. Additionally, components that change their algorithm choices based on whether FIPS mode is enabled choose the correct algorithms. For example, LUKS disk encryption uses the PBKDF2 key derivation function (KDF) during installation in FIPS mode, but it chooses the non-FIPS-compliant Argon2 KDF otherwise. Therefore, a non-FIPS installation with disk encryption is either not compliant or potentially unbootable when switched to FIPS mode after the installation.
To operate a FIPS-compliant system, create all cryptographic key material in FIPS mode. Furthermore, the cryptographic key material must never leave the FIPS environment unless it is securely wrapped and never unwrapped in non-FIPS environments.
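For example, you can check which key derivation function an existing LUKS volume uses, which indicates whether the volume was created in FIPS mode. A minimal sketch, assuming /dev/sda2 is a LUKS2-encrypted partition:

# cryptsetup luksDump /dev/sda2 | grep -i pbkdf
	PBKDF:      pbkdf2

A volume created outside FIPS mode typically reports argon2id here instead.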
1.1.2. FIPS mode status
Whether FIPS mode is enabled is determined by the fips=1 boot option on the kernel command line. The system-wide cryptographic policy automatically follows this setting if it is not explicitly set by using the update-crypto-policies --set FIPS command. Systems with a separate partition for /boot additionally require a boot=UUID=<uuid-of-boot-disk> kernel command line argument. The installation program performs the required changes when started in FIPS mode.

Enforcement of restrictions required in FIPS mode depends on the contents of the /proc/sys/crypto/fips_enabled file. If the file contains 1, RHEL core cryptographic components switch to FIPS mode, in which they use only FIPS-approved implementations of cryptographic algorithms. If /proc/sys/crypto/fips_enabled contains 0, the cryptographic components do not enable their FIPS mode.
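To check both the boot option and the resulting enforcement state, you can inspect the kernel command line together with the fips_enabled file. A minimal sketch:

$ grep -o 'fips=[01]' /proc/cmdline
fips=1
$ cat /proc/sys/crypto/fips_enabled
1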
1.1.3. FIPS in crypto-policies
The FIPS system-wide cryptographic policy helps to configure higher-level restrictions. Therefore, communication protocols supporting cryptographic agility do not announce ciphers that the system refuses when selected. For example, the ChaCha20 algorithm is not FIPS-approved, and the FIPS cryptographic policy ensures that TLS servers and clients do not announce the TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS cipher suite, because any attempt to use such a cipher fails.
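You can observe this filtering from the command line by listing the cipher suites that OpenSSL enables under the active policy. A minimal sketch; the exact suite list varies by OpenSSL version, but with the FIPS policy active the search finds no ChaCha20-based suites and grep exits with a nonzero status:

$ openssl ciphers -v | grep -i chacha20
$ echo $?
1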
If you operate RHEL in FIPS mode and use an application providing its own FIPS-mode-related configuration options, ignore these options and the corresponding application guidance. The system runs in FIPS mode and the system-wide cryptographic policies enforce only FIPS-compliant cryptography. For example, the Node.js configuration option --enable-fips
is ignored if the system runs in FIPS mode. If you use the --enable-fips
option on a system not running in FIPS mode, you do not meet the FIPS-140 compliance requirements.
1.1.4. RHEL 10.0 OpenSSL FIPS indicators
Because RHEL introduced OpenSSL FIPS indicators before the OpenSSL upstream did, and both designs differ, the indicators might change in a future minor version of RHEL 10. After the potential adoption of the upstream API, the RHEL 10.0 indicators might return an error message "unsupported" instead of a result. See the OpenSSL FIPS Indicators GitHub document for details.
The cryptographic modules of RHEL 10 are not yet certified for the FIPS 140-3 requirements by the National Institute of Standards and Technology (NIST) Cryptographic Module Validation Program (CMVP). You can see the validation status of cryptographic modules in the FIPS - Federal Information Processing Standards section on the Product compliance Red Hat Customer Portal page.
1.2. Installing the system with FIPS mode enabled
To enable the cryptographic module self-checks mandated by the Federal Information Processing Standard (FIPS) 140, enable FIPS mode during the system installation.
After you complete the setup of FIPS mode, you cannot switch off FIPS mode without putting the system into an inconsistent state. If your scenario requires this change, the only correct way is a complete re-installation of the system.
Procedure
Add the fips=1 option to the kernel command line at the start of the system installation, when the Red Hat Enterprise Linux boot window opens and displays available boot options.

On UEFI systems, press the e key, move the cursor to the end of the linuxefi kernel command line, and add fips=1 to the end of this line, for example:

linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-10-0-BaseOS-x86_64 rd.live.check quiet fips=1

On BIOS systems, press the Tab key, move the cursor to the end of the kernel command line, and add fips=1 to the end of this line, for example:

> vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=RHEL-10-0-BaseOS-x86_64 rd.live.check quiet fips=1
- During the software selection stage, do not install any third-party software.
- After the installation, the system starts in FIPS mode automatically.
Verification
After the system starts, check that FIPS mode is enabled:
$ cat /proc/sys/crypto/fips_enabled
1
1.3. Enabling FIPS mode with RHEL image builder
You can create a customized image and boot a FIPS-enabled RHEL image. Before you compose the image, you must change the value of the fips directive in your blueprint.
Prerequisites
- You are logged in as the root user or a user who is a member of the weldr group.
Procedure
Create a plain text file in the Tom’s Obvious, Minimal Language (TOML) format with the following content:
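A minimal sketch of such a blueprint; the blueprint name fips-image and the description are illustrative assumptions, and the fips = true line under [customizations] enables FIPS mode in the composed image:

name = "fips-image"
description = "A FIPS-enabled RHEL image"
version = "0.0.1"

[customizations]
fips = true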
Import the blueprint to the RHEL image builder server:
# composer-cli blueprints push <blueprint_name>.toml

List the existing blueprints to check whether the created blueprint is successfully imported and exists:
# composer-cli blueprints show <blueprint_name>

Check whether the components and versions listed in the blueprint and their dependencies are valid:
# composer-cli blueprints depsolve <blueprint_name>

Build the customized RHEL image:
# composer-cli compose start <blueprint_name> <image_type>

Review the image status:
# composer-cli compose status
UUID FINISHED date <blueprint_name> <blueprint_version> <image_type>

Download the image:
# composer-cli compose image UUID

RHEL image builder downloads the image to the current directory path. The UUID number and the image size are displayed alongside:
UUID-image-name.type: size MB
Verification
- Log in to the system image with the username and password that you configured in your blueprint.
Check if FIPS mode is enabled:
$ cat /proc/sys/crypto/fips_enabled
1
1.4. Creating a bootable disk image for a FIPS-enabled system
You can create a disk image and enable FIPS mode when performing an Anaconda installation. You must add the fips=1 kernel argument when booting the disk image.
Prerequisites
- You have Podman installed on your host machine.
- You have virt-install installed on your host machine.
- You have root access to run the bootc-image-builder tool and to run the containers in --privileged mode to build the images.
Procedure
Create a 01-fips.toml file to configure FIPS enablement, for example:

# Enable FIPS
kargs = ["fips=1"]
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a Containerfile with the following instructions to enable the
fips=1
kernel argument and adjust the cryptographic policies:FROM registry.redhat.io/rhel10/rhel-bootc:latest # Enable fips=1 kernel argument: https://bootc-dev.github.io/bootc/building/kernel-arguments.html COPY 01-fips.toml /usr/lib/bootc/kargs.d/ # Install and enable the FIPS crypto policy RUN dnf install -y crypto-policies-scripts && update-crypto-policies --no-reload --set FIPS
FROM registry.redhat.io/rhel10/rhel-bootc:latest # Enable fips=1 kernel argument: https://bootc-dev.github.io/bootc/building/kernel-arguments.html COPY 01-fips.toml /usr/lib/bootc/kargs.d/ # Install and enable the FIPS crypto policy RUN dnf install -y crypto-policies-scripts && update-crypto-policies --no-reload --set FIPS
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create your bootc
<image>
compatible base disk image by usingContainerfile
in the current directory:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Enable FIPS mode during the system installation:
Enable FIPS mode during the system installation: when booting the RHEL Anaconda installer, on the installation screen, press the TAB key and add the fips=1 kernel argument.

After the installation, the system starts in FIPS mode automatically.
Verification
After logging in to the system, check that FIPS mode is enabled:

$ cat /proc/sys/crypto/fips_enabled
1
$ update-crypto-policies --show
FIPS
1.5. List of RHEL applications using cryptography that is not compliant with FIPS 140-3
To pass all relevant cryptographic certifications, such as FIPS 140-3, use libraries from the core cryptographic components set. These libraries, except for libgcrypt, also follow the RHEL system-wide cryptographic policies.
See the RHEL core cryptographic components Red Hat Knowledgebase article for information about the core cryptographic components, how they are selected, how they integrate with the operating system, how they support hardware security modules and smart cards, and how cryptographic certifications apply to them.
The following RHEL 10 applications use cryptography that is not compliant with FIPS 140-3:
- Bacula
- Implements the CRAM-MD5 authentication protocol.
- Cyrus SASL
- Uses the SCRAM-SHA-1 authentication method.
- Dovecot
- Uses SCRAM-SHA-1.
- Emacs
- Uses SCRAM-SHA-1.
- FreeRADIUS
- Uses MD5 and SHA-1 for authentication protocols.
- Ghostscript
- Custom cryptography implementation (MD5, RC4, SHA-2, AES) to encrypt and decrypt documents.
- GnuPG
- The package uses the libgcrypt module, which is not validated.
- GRUB2
- Supports legacy firmware protocols requiring SHA-1 and includes the libgcrypt library.
- iPXE
- Implements TLS stack.
- Kerberos
- Preserves support for SHA-1 (interoperability with Windows).
- Lasso
- The lasso_wsse_username_token_derive_key() key derivation function (KDF) uses SHA-1.
- libgcrypt
- The module is deprecated and, as of RHEL 10.0, no longer validated.
- MariaDB, MariaDB Connector
- The mysql_native_password authentication plugin uses SHA-1.
- MySQL
- The mysql_native_password authentication plugin uses SHA-1.
- OpenIPMI
- The RAKP-HMAC-MD5 authentication method is not approved for FIPS usage and does not work in FIPS mode.
- Ovmf (UEFI firmware), Edk2, shim
- Full cryptographic stack (an embedded copy of the OpenSSL library).
- Perl
- Uses HMAC, HMAC-SHA1, HMAC-MD5, SHA-1, and SHA-224.
- Pidgin
- Implements DES and RC4 ciphers.
- Poppler
- Can save PDFs with signatures, passwords, and encryption based on non-allowed algorithms if they are present in the original PDF (for example, MD5, RC4, and SHA-1).
- PostgreSQL
- Implements Blowfish, DES, and MD5. A KDF uses SHA-1.
- QAT Engine
- Uses a mix of hardware and software implementation of cryptographic primitives (RSA, EC, DH, AES, and others).
- Ruby
- Provides insecure MD5 and SHA-1 library functions.
- Samba
- Preserves support for RC4 and DES (interoperability with Windows).
- Sequoia
- Uses the deprecated OpenSSL API, which does not work in FIPS mode.
- Syslinux
- Firmware passwords use SHA-1.
- SWTPM
- Explicitly disables FIPS mode in its OpenSSL usage.
- Unbound
- DNS specification requires that DNSSEC resolvers use a SHA-1-based algorithm in DNSKEY records for validation.
- Valgrind
- AES, SHA hashes.
- zip
- Custom cryptography implementation (insecure PKWARE encryption algorithm) to encrypt and decrypt archives by using a password.
Chapter 2. Using system-wide cryptographic policies
The system-wide cryptographic policies component configures the core cryptographic subsystems, which cover the TLS, IPsec, SSH, DNSSEC, and Kerberos protocols. As an administrator, you can select one of the provided cryptographic policies for your system.
2.1. System-wide cryptographic policies
When a system-wide policy is set up, applications in RHEL follow it and refuse to use algorithms and protocols that do not meet the policy, unless you explicitly request the application to do so. That is, the policy applies to the default behavior of applications when running with the system-provided configuration, but you can override it if required.
RHEL 10 contains the following predefined policies:
DEFAULT
- The default system-wide cryptographic policy level offers secure settings for current threat models. It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. The RSA keys and Diffie-Hellman parameters are accepted if they are at least 2048 bits long. TLS ciphers that use the RSA key exchange are rejected.
LEGACY
- Ensures maximum compatibility with Red Hat Enterprise Linux 6 and earlier; it is less secure due to an increased attack surface. CBC-mode ciphers are allowed to be used with SSH. It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. The RSA keys and Diffie-Hellman parameters are accepted if they are at least 2048 bits long. SHA-1 signatures are allowed outside TLS. Ciphers that use the RSA key exchange are accepted.
FUTURE
A stricter forward-looking security level intended for testing a possible future policy. This policy does not allow the use of SHA-1 in DNSSEC or as an HMAC. SHA2-224 and SHA3-224 hashes are rejected. 128-bit ciphers are disabled. CBC-mode ciphers are disabled except in Kerberos. It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. The RSA keys and Diffie-Hellman parameters are accepted if they are at least 3072 bits long. If your system communicates on the public internet, you might face interoperability problems.
Important: Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements of the FUTURE system-wide cryptographic policy, the redhat-support-tool utility does not work with this policy level at the moment. To work around this problem, use the DEFAULT cryptographic policy while connecting to the Customer Portal API.

FIPS

Conforms with the FIPS 140 requirements. Red Hat Enterprise Linux systems in FIPS mode use this policy.

Note: Your system is not FIPS-compliant after you set the FIPS cryptographic policy. The only correct way to make your RHEL system compliant with the FIPS 140 standards is by installing it in FIPS mode.

RHEL also provides the FIPS:OSPP system-wide subpolicy, which contains further restrictions for cryptographic algorithms required by the Common Criteria (CC) certification. The system becomes less interoperable after you set this subpolicy. For example, you cannot use RSA and DH keys shorter than 3072 bits, additional SSH algorithms, and several TLS groups. Setting FIPS:OSPP also prevents connecting to the Red Hat Content Delivery Network (CDN) structure. Furthermore, you cannot integrate Active Directory (AD) into IdM deployments that use FIPS:OSPP, communication between RHEL hosts using FIPS:OSPP and AD domains might not work, or some AD accounts might not be able to authenticate.

Note: Your system is not CC-compliant after you set the FIPS:OSPP cryptographic subpolicy. The only correct way to make your RHEL system compliant with the CC standard is by following the guidance provided in the cc-config package. See the Common Criteria section on the Product compliance Red Hat Customer Portal page for a list of certified RHEL versions, validation reports, and links to CC guides.
Red Hat continuously adjusts all policy levels so that all libraries provide secure defaults, except when using the LEGACY policy. Even though the LEGACY profile does not provide secure defaults, it does not include any algorithms that are easily exploitable. As such, the set of enabled algorithms or acceptable key sizes in any provided policy may change during the lifetime of Red Hat Enterprise Linux.

Such changes reflect new security standards and new security research. If you must ensure interoperability with a specific system for the whole lifetime of Red Hat Enterprise Linux, opt out of the system-wide cryptographic policies for components that interact with that system, or re-enable specific algorithms by using custom cryptographic policies.
The specific algorithms and ciphers described as allowed in the policy levels are available only if an application supports them:
| | LEGACY | DEFAULT | FIPS | FUTURE |
|---|---|---|---|---|
| IKEv1 | no | no | no | no |
| 3DES | no | no | no | no |
| RC4 | no | no | no | no |
| DH | min. 2048-bit | min. 2048-bit | min. 2048-bit | min. 3072-bit |
| RSA | min. 2048-bit | min. 2048-bit | min. 2048-bit | min. 3072-bit |
| DSA | no | no | no | no |
| TLS v1.1 and older | no | no | no | no |
| TLS v1.2 and newer | yes | yes | yes | yes |
| SHA-1 in digital signatures and certificates | yes[a] | no | no | no |
| CBC mode ciphers | yes | no[b] | no[c] | no[d] |
| Symmetric ciphers with keys < 256 bits | yes | yes | yes | no |

[a] SHA-1 signatures are disabled in TLS contexts.
[b] CBC ciphers are disabled for SSH.
[c] CBC ciphers are disabled for all protocols except Kerberos.
[d] CBC ciphers are disabled for all protocols except Kerberos.
You can find further details about cryptographic policies and covered applications in the crypto-policies(7) man page on your system.
2.2. Changing the system-wide cryptographic policy
You can change the system-wide cryptographic policy on your system by using the update-crypto-policies tool and restarting your system. The update-crypto-policies(8) man page on your system provides the reference of all options, corresponding files, and per-application details.
Prerequisites
- You have root privileges on the system.
Procedure
Optional: Display the current cryptographic policy:
$ update-crypto-policies --show
DEFAULT

Set the new cryptographic policy:
# update-crypto-policies --set <POLICY>

Replace <POLICY> with the policy or subpolicy you want to set, for example, FUTURE, LEGACY, or FIPS:OSPP.

Restart the system:
# reboot
Verification
Display the current cryptographic policy:
$ update-crypto-policies --show
<POLICY>
2.3. Switching the system-wide cryptographic policy to a mode compatible with earlier releases
The default system-wide cryptographic policy in Red Hat Enterprise Linux 10 does not allow communication using older, insecure protocols. For environments that require compatibility with Red Hat Enterprise Linux 6 or earlier, the less secure LEGACY policy is available.

Switching to the LEGACY policy results in a less secure system and applications.
Prerequisites
- Commands that start with the # command prompt require administrative privileges provided by sudo or root user access. For information on how to configure sudo access, see Enabling unprivileged users to run certain commands.
Procedure
To switch the system-wide cryptographic policy to LEGACY, enter:

# update-crypto-policies --set LEGACY
Setting system policy to LEGACY

For the list of available cryptographic policies, see the update-crypto-policies(8) man page on your system.

To make your cryptographic settings effective for already running services and applications, restart the system:

# reboot
Verification
After the restart, verify that the current policy is set to LEGACY:

$ update-crypto-policies --show
LEGACY
Next steps
- For defining custom cryptographic policies, see the Custom Policies section in the update-crypto-policies(8) man page and the Crypto Policy Definition Format section in the crypto-policies(7) man page on your system.
2.4. Setting up system-wide cryptographic policies in the web console
You can set one of the system-wide cryptographic policies and subpolicies directly in the RHEL web console interface. Besides the three predefined system-wide cryptographic policies, you can also apply the following combination of the LEGACY policy and the AD-SUPPORT subpolicy through the graphical interface. The LEGACY:AD-SUPPORT policy is the LEGACY policy with less secure settings that improve interoperability for Active Directory services.
Prerequisites
- You have installed the RHEL 10 web console. For instructions, see Installing and enabling the web console.
- Commands that start with the # command prompt require administrative privileges provided by sudo or root user access. For information on how to configure sudo access, see Enabling unprivileged users to run certain commands.
Procedure
- Log in to the RHEL 10 web console.
- In the Configuration card of the Overview page, click your current policy value next to Crypto policy.
- In the Change crypto policy dialog window, click on the policy you want to start using on your system.
- Click the button.
Verification
After the restart, log back in to the web console, and check that the Crypto policy value corresponds to the one you selected. Alternatively, you can enter the update-crypto-policies --show command to display the current system-wide cryptographic policy in your terminal.
2.5. Excluding an application from following system-wide crypto policies
You can customize cryptographic settings used by your application by configuring supported cipher suites and protocols directly in the application.

You can also remove a symlink related to your application from the /etc/crypto-policies/back-ends directory and replace it with your customized cryptographic settings. This configuration prevents the use of system-wide cryptographic policies for applications that use the excluded back end. Note that this modification is not supported by Red Hat.
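The back ends are symbolic links into the currently selected policy directory, so you can inspect which file an application-specific back end resolves to before replacing it. A minimal sketch; the exact target depends on the active policy:

$ ls -l /etc/crypto-policies/back-ends/gnutls.config
lrwxrwxrwx. 1 root root 45 <date> /etc/crypto-policies/back-ends/gnutls.config -> /usr/share/crypto-policies/DEFAULT/gnutls.txt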
2.5.1. Examples of opting out of system-wide crypto policies
- curl
To specify ciphers used by the curl tool, use the --ciphers option and provide a colon-separated list of ciphers as a value. For example:

$ curl <https://example.com> --ciphers '@SECLEVEL=0:DES-CBC3-SHA:RSA-DES-CBC3-SHA'

See the curl(1) man page for more information.
- Libreswan
- See the Enabling legacy ciphers and algorithms in Libreswan section in the Securing networks document for detailed information.
- Mozilla Firefox
- Even though you cannot opt out of system-wide cryptographic policies in the Mozilla Firefox web browser, you can further restrict supported ciphers and TLS versions in Firefox's Configuration Editor. Type about:config in the address bar and change the value of the security.tls.version.min option as required. Setting security.tls.version.min to 1 allows TLS 1.0 as the minimum required version, setting it to 2 enables TLS 1.1, and so on.
- OpenSSH server
To opt out of the system-wide cryptographic policies for your OpenSSH server, specify the cryptographic policy in a drop-in configuration file located in the /etc/ssh/sshd_config.d/ directory. Use a two-digit number prefix smaller than 50, so that it lexicographically precedes the 50-redhat.conf file, and a .conf suffix, for example, 49-crypto-policy-override.conf.

See the sshd_config(5) man page for more information.
- OpenSSH client
To opt out of system-wide cryptographic policies for your OpenSSH client, perform one of the following tasks:
- For a given user, override the global ssh_config with a user-specific configuration in the ~/.ssh/config file.
- For the entire system, specify the cryptographic policy in a drop-in configuration file located in the /etc/ssh/ssh_config.d/ directory, with a two-digit number prefix smaller than 50, so that it lexicographically precedes the 50-redhat.conf file, and with a .conf suffix, for example, 49-crypto-policy-override.conf.

See the ssh_config(5) man page for more information.
- wget
To customize cryptographic settings used by the wget network downloader, use the --secure-protocol and --ciphers options. For example:

$ wget --secure-protocol=TLSv1_1 --ciphers="SECURE128" <https://example.com>

See the HTTPS (SSL/TLS) Options section of the wget(1) man page for more information.
2.6. Customizing system-wide cryptographic policies with subpolicies
You can adjust the set of enabled cryptographic algorithms or protocols on the system. You can either apply custom subpolicies on top of an existing system-wide cryptographic policy or define such a policy from scratch.
The concept of scoped policies allows enabling different sets of algorithms for different back ends. You can limit each configuration directive to specific protocols, libraries, or services.
Furthermore, you can use wildcard characters in directives, for example, an asterisk to specify multiple values. For the complete syntax reference, see the Custom Policies section in the update-crypto-policies(8) man page and the Crypto Policy Definition Format section in the crypto-policies(7) man page on your system.
- The /etc/crypto-policies/state/CURRENT.pol file lists all settings in the currently applied system-wide cryptographic policy after wildcard expansion.
- To make your cryptographic policy more strict, consider using values listed in the /usr/share/crypto-policies/policies/FUTURE.pol file.
- You can find example subpolicies in the /usr/share/crypto-policies/policies/modules/ directory.
Procedure
Change to the /etc/crypto-policies/policies/modules/ directory:

# cd /etc/crypto-policies/policies/modules/

Create subpolicies for your adjustments, for example:

# touch <MYCRYPTO-1>.pmod
# touch <SCOPES-AND-WILDCARDS>.pmod

Important: Use upper-case letters in file names of policy modules.
Open the policy modules in a text editor of your choice and insert options that modify the system-wide cryptographic policy, for example:
# vi <MYCRYPTO-1>.pmod

min_rsa_size = 3072
hash = SHA2-384 SHA2-512 SHA3-384 SHA3-512

# vi <SCOPES-AND-WILDCARDS>.pmod
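A minimal sketch of scoped and wildcard directives; the specific ciphers here are illustrative assumptions, not required values:

# Disable the CHACHA20-POLY1305 cipher for the TLS protocols
cipher@TLS = -CHACHA20-POLY1305

# Disable all CBC mode ciphers for the SSH protocol by using a wildcard
cipher@SSH = -*-CBC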
- Save the changes in the module files.
Apply your policy adjustments to the DEFAULT system-wide cryptographic policy level:

# update-crypto-policies --set DEFAULT:<MYCRYPTO-1>:<SCOPES-AND-WILDCARDS>

To make your cryptographic settings effective for already running services and applications, restart the system:

# reboot
Verification
Check that the /etc/crypto-policies/state/CURRENT.pol file contains your changes, for example:

$ cat /etc/crypto-policies/state/CURRENT.pol | grep rsa_size
min_rsa_size = 3072
2.7. Creating and setting a custom system-wide cryptographic policy
For specific scenarios, you can customize the system-wide cryptographic policy by creating and using a complete policy file.
Procedure
Create a policy file for your customizations:
# cd /etc/crypto-policies/policies/
# touch <MYPOLICY>.pol

Alternatively, start by copying one of the four predefined policy levels:

# cp /usr/share/crypto-policies/policies/DEFAULT.pol /etc/crypto-policies/policies/<MYPOLICY>.pol

Edit the file with your custom cryptographic policy in a text editor of your choice to fit your requirements, for example:

# vi /etc/crypto-policies/policies/<MYPOLICY>.pol

See the Custom Policies section in the update-crypto-policies(8) man page and the Crypto Policy Definition Format section in the crypto-policies(7) man page on your system for the complete syntax reference.

Switch the system-wide cryptographic policy to your custom level:

# update-crypto-policies --set <MYPOLICY>

To make your cryptographic settings effective for already running services and applications, restart the system:

# reboot
2.8. Enabling post-quantum algorithms system-wide
You can enable post-quantum cryptography (PQC) system-wide by applying the TEST-PQ subpolicy. Post-quantum key-exchange algorithms that use the Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM) standard according to the FIPS 203 draft are available for TLS connections in OpenSSL, GnuTLS, and NSS, and for SSH connections in OpenSSH.
All PQC algorithms in Red Hat Enterprise Linux 10 are provided as a Technology Preview feature. The package and system-wide cryptographic policy name are subject to change when post-quantum cryptography exits the Technology Preview state.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- The crypto-policies-scripts package is installed on the system.
- Commands that start with the # command prompt require administrative privileges provided by sudo or root user access. For information on how to configure sudo access, see Enabling unprivileged users to run certain commands.
Procedure
Install the crypto-policies-pq-preview package:

# dnf install -y crypto-policies-pq-preview

Enable the TEST-PQ cryptographic subpolicy on top of your current system-wide policy, for example:

# update-crypto-policies --show
DEFAULT
# update-crypto-policies --set DEFAULT:TEST-PQ

To make your cryptographic settings effective for already running services and applications, restart the system:

# reboot
Verification
Check that the /etc/crypto-policies/state/CURRENT.pol file contains PQC, for example:

$ cat /etc/crypto-policies/state/CURRENT.pol | grep MLKEM512
group = MLKEM512 P256-MLKEM512 X25519-MLKEM512 MLKEM768 P384-MLKEM768 X448-MLKEM768 MLKEM768-X25519 X25519-MLKEM768 P256-MLKEM768 MLKEM1024 P521-MLKEM1024 P384-MLKEM1024 X25519 SECP256R1 X448 SECP521R1 SECP384R1 FFDHE-2048 FFDHE-3072 FFDHE-4096 FFDHE-6144 FFDHE-8192
2.9. Enhancing security with the FUTURE cryptographic policy using the crypto_policies RHEL system role
You can use the crypto_policies RHEL system role to configure the FUTURE policy on your managed nodes. This policy helps to achieve, for example:
- Future-proofing against emerging threats
- Anticipates advancements in computational power.
- Enhanced security
- Stronger encryption standards require longer key lengths and more secure algorithms.
- Compliance with high-security standards
- In some industries, for example, in healthcare, telco, and finance, data sensitivity is high and the availability of strong cryptography is critical.
Typically, FUTURE is suitable for environments handling highly sensitive data, preparing for future regulations, or adopting long-term security strategies.
Legacy systems and software might not support the more modern and stricter algorithms and protocols enforced by the FUTURE policy. For example, older systems might not support TLS 1.3 or larger key sizes. This could lead to compatibility problems.
Also, using strong algorithms usually increases the computational workload, which could negatively affect your system performance.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
crypto_policies_policy: FUTURE
- Configures the required cryptographic policy (FUTURE) on the managed node. It can be either the base policy or a base policy with some subpolicies. The specified base policy and subpolicies have to be available on the managed node. The default value is null, which means that the configuration is not changed and the crypto_policies RHEL system role only collects the Ansible facts.

crypto_policies_reboot_ok: true
- Causes the system to reboot after the cryptographic policy change to make sure that all of the services and applications read the new configuration files. The default value is false.
For details about the role variables and the cryptographic configuration options, see the /usr/share/ansible/roles/rhel-system-roles.crypto_policies/README.md file and the update-crypto-policies(8) and crypto-policies(7) manual pages on the control node.

Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:

$ ansible-playbook ~/playbook.yml
Verification
On the control node, create another playbook named, for example, verify_playbook.yml:
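A minimal sketch of the verification playbook, under the same assumptions as above:

---
- name: Verification
  hosts: managed-node-01.example.com
  tasks:
    - name: Verify active cryptographic policy
      ansible.builtin.include_role:
        name: rhel-system-roles.crypto_policies

    - name: Display the currently active cryptographic policy
      ansible.builtin.debug:
        var: crypto_policies_active

The settings specified in the example playbook include the following: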
crypto_policies_active
- An exported Ansible fact that contains the currently active policy name in the format as accepted by the crypto_policies_policy variable.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/verify_playbook.yml

Run the playbook:
$ ansible-playbook ~/verify_playbook.yml
TASK [debug] **************************
ok: [host] => {
    "crypto_policies_active": "FUTURE"
}

The crypto_policies_active variable shows the active policy on the managed node.
Chapter 3. Configuring applications to use cryptographic hardware through PKCS #11
Separating parts of your secret information onto dedicated cryptographic devices, such as smart cards and cryptographic tokens for end-user authentication and hardware security modules (HSM) for server applications, provides an additional layer of security. In Red Hat Enterprise Linux, support for cryptographic hardware through the PKCS #11 API is consistent across different applications, and the isolation of secrets on cryptographic hardware is not a complicated task.
3.1. Cryptographic hardware support through PKCS #11
Public-Key Cryptography Standard (PKCS) #11 defines an application programming interface (API) to cryptographic devices that hold cryptographic information and perform cryptographic functions.
PKCS #11 introduces the cryptographic token, an object that presents each hardware or software device to applications in a unified manner. Therefore, applications view devices such as smart cards, which are typically used by persons, and hardware security modules, which are typically used by computers, as PKCS #11 cryptographic tokens.
A PKCS #11 token can store various object types including a certificate; a data object; and a public, private, or secret key. These objects are uniquely identifiable through the PKCS #11 Uniform Resource Identifier (URI) scheme.
A PKCS #11 URI is a standard way to identify a specific object in a PKCS #11 module according to the object attributes. This enables you to configure all libraries and applications with the same configuration string in the form of a URI.
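For example, a URI that identifies a private key by token label and object name can look as follows; both values here are illustrative assumptions:

pkcs11:token=my-token;object=my-key;type=private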
RHEL provides the OpenSC PKCS #11 driver for smart cards by default. However, hardware tokens and HSMs can have their own PKCS #11 modules that do not have their counterpart in the system. You can register such PKCS #11 modules with the p11-kit tool, which acts as a wrapper over the registered smart-card drivers in the system.
You can add your own PKCS #11 module into the system by creating a new text file in the /etc/pkcs11/modules/ directory. For example, the OpenSC configuration file in p11-kit looks as follows:

$ cat /usr/share/p11-kit/modules/opensc.module
module: opensc-pkcs11.so
3.2. Authenticating by SSH keys stored on a smart card
You can create and store ECDSA and RSA keys on a smart card and authenticate by the smart card on an OpenSSH client. Smart-card authentication replaces the default password authentication.
Prerequisites
- On the client side, the opensc package is installed and the pcscd service is running.
Procedure
List all keys provided by the OpenSC PKCS #11 module including their PKCS #11 URIs and save the output to the keys.pub file:

$ ssh-keygen -D pkcs11: > keys.pub

Transfer the public key to the remote server. Use the ssh-copy-id command with the keys.pub file created in the previous step:

$ ssh-copy-id -f -i keys.pub <username@ssh-server-example.com>

Connect to <ssh-server-example.com> by using the ECDSA key. You can use just a subset of the URI, which uniquely references your key, for example:

$ ssh -i "pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so" <ssh-server-example.com>
Enter PIN for 'SSH key':
[ssh-server-example.com] $

Because OpenSSH uses the p11-kit-proxy wrapper and the OpenSC PKCS #11 module is registered to the p11-kit tool, you can simplify the previous command:

$ ssh -i "pkcs11:id=%01" <ssh-server-example.com>
Enter PIN for 'SSH key':
[ssh-server-example.com] $

If you skip the id= part of a PKCS #11 URI, OpenSSH loads all keys that are available in the proxy module. This can reduce the amount of typing required:

$ ssh -i pkcs11: <ssh-server-example.com>
Enter PIN for 'SSH key':
[ssh-server-example.com] $

Optional: You can use the same URI string in the ~/.ssh/config file to make the configuration permanent:

$ cat ~/.ssh/config
IdentityFile "pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so"

$ ssh <ssh-server-example.com>
Enter PIN for 'SSH key':
[ssh-server-example.com] $

The ssh client utility now automatically uses this URI and the key from the smart card.
3.3. Configuring applications for authentication with certificates on smart cards
Authentication by using smart cards in applications may increase security and simplify automation. You can integrate the Public Key Cryptography Standard (PKCS) #11 URIs into your application by using the following methods:
- The Firefox web browser automatically loads the p11-kit-proxy PKCS #11 module. This means that every supported smart card in the system is automatically detected. For using TLS client authentication, no additional setup is required, and keys and certificates from a smart card are automatically used when a server requests them.
- If your application uses the GnuTLS or NSS library, it already supports PKCS #11 URIs. Also, applications that rely on the OpenSSL library can access cryptographic hardware modules, including smart cards, through the PKCS #11 provider installed by the pkcs11-provider package.
- Applications that require working with private keys on smart cards and that do not use NSS, GnuTLS, or OpenSSL can use the p11-kit API directly to work with cryptographic hardware modules, including smart cards, rather than using the PKCS #11 API of specific PKCS #11 modules.
- With the wget network downloader, you can specify PKCS #11 URIs instead of paths to locally stored private keys and certificates. This might simplify the creation of scripts for tasks that require safely stored private keys and certificates. For example:
$ wget --private-key 'pkcs11:token=softhsm;id=%01;type=private?pin-value=111111' --certificate 'pkcs11:token=softhsm;id=%01;type=cert' https://example.com/
You can also specify a PKCS #11 URI when using the curl tool:

$ curl --key 'pkcs11:token=softhsm;id=%01;type=private?pin-value=111111' --cert 'pkcs11:token=softhsm;id=%01;type=cert' https://example.com/

Note: Because a PIN is a security measure that controls access to keys stored on a smart card and the configuration file contains the PIN in the plain-text form, consider additional protection to prevent an attacker from reading the PIN. For example, you can use the pin-source attribute and provide a file: URI for reading the PIN from a file. See RFC 7512: PKCS #11 URI Scheme Query Attribute Semantics for more information. Note that using a command path as a value of the pin-source attribute is not supported.
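For instance, a URI that reads the PIN from a root-only file instead of embedding it might look as follows; the token name and file path are illustrative assumptions:

pkcs11:token=softhsm;id=%01;type=private?pin-source=file:/etc/token-pin.txt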
3.4. Using HSMs protecting private keys in Apache
The Apache HTTP server can work with private keys stored on hardware security modules (HSMs), which helps to prevent the keys' disclosure and man-in-the-middle attacks. Note that this usually requires high-performance HSMs for busy servers.

For secure communication in the form of the HTTPS protocol, the Apache HTTP server (httpd) uses the OpenSSL library. OpenSSL does not support PKCS #11 natively. To use HSMs, you must install the pkcs11-provider package, which provides access to PKCS #11 modules. You can use a PKCS #11 URI instead of a regular file name to specify a server key and a certificate in the /etc/httpd/conf.d/ssl.conf configuration file, for example:

SSLCertificateFile "pkcs11:id=%01;token=softhsm;type=cert"
SSLCertificateKeyFile "pkcs11:id=%01;token=softhsm;type=private?pin-value=111111"
Install the httpd-manual package to obtain complete documentation for the Apache HTTP Server, including TLS configuration. The directives available in the /etc/httpd/conf.d/ssl.conf configuration file are described in detail in the /usr/share/httpd/manual/mod/mod_ssl.html file.
Chapter 4. Controlling access to smart cards by using polkit
To cover possible threats that cannot be prevented by mechanisms built into smart cards, such as PINs, PIN pads, and biometrics, and for more fine-grained control, Red Hat Enterprise Linux uses the polkit framework for controlling access to smart cards.

System administrators can configure polkit to fit specific scenarios, such as smart-card access for non-privileged or non-local users or services.
4.1. Smart-card access control through polkit
The Personal Computer/Smart Card (PC/SC) protocol specifies a standard for integrating smart cards and their readers into computing systems. In RHEL, the pcsc-lite package provides middleware to access smart cards that use the PC/SC API. A part of this package, the pcscd (PC/SC Smart Card) daemon, ensures that the system can access a smart card by using the PC/SC protocol.

Because access-control mechanisms built into smart cards, such as PINs, PIN pads, and biometrics, do not cover all possible threats, RHEL uses the polkit framework for more robust access control. The polkit authorization manager can grant access to privileged operations. In addition to granting access to disks, you can use polkit also to specify policies for securing smart cards. For example, you can define which users can perform which operations with a smart card.

After installing the pcsc-lite package and starting the pcscd daemon, the system enforces policies defined in the /usr/share/polkit-1/actions/ directory. The default system-wide policy is in the /usr/share/polkit-1/actions/org.debian.pcsc-lite.policy file. Polkit policy files use the XML format, and the syntax is described in the polkit(8) man page.
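As an illustrative sketch of the format, an action definition in such a policy file looks similar to the following; the defaults shown here are assumptions, not necessarily the shipped values:

<action id="org.debian.pcsc-lite.access_pcsc">
  <description>Access to the PC/SC daemon</description>
  <defaults>
    <allow_any>auth_admin</allow_any>
    <allow_inactive>auth_admin</allow_inactive>
    <allow_active>yes</allow_active>
  </defaults>
</action>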
The polkitd service monitors the /etc/polkit-1/rules.d/ and /usr/share/polkit-1/rules.d/ directories for any changes in rule files stored in these directories. The files contain authorization rules in JavaScript format. System administrators can add custom rule files in both directories, and polkitd reads them in lexical order based on their file name. If two files have the same names, then the file in /etc/polkit-1/rules.d/ is read first.
4.3. Displaying more detailed information about polkit authorization to PC/SC
In the default configuration, the polkit authorization framework sends only limited information to the Journal log. You can extend polkit log entries related to the PC/SC protocol by adding new rules.
Prerequisites
- You have installed the pcsc-lite package on your system.
- The pcscd daemon is running.
Procedure
Create a new file in the /etc/polkit-1/rules.d/ directory:

# touch /etc/polkit-1/rules.d/00-test.rules

Edit the file in an editor of your choice, for example:
# vi /etc/polkit-1/rules.d/00-test.rules

Insert the following lines:
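A minimal sketch of such logging rules, assuming the org.debian.pcsc-lite action IDs used by pcscd:

polkit.addRule(function(action, subject) {
    if (action.id == "org.debian.pcsc-lite.access_pcsc" ||
        action.id == "org.debian.pcsc-lite.access_card") {
        polkit.log("action=" + action);
        polkit.log("subject=" + subject);
    }
});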
Save the file, and exit the editor.
Restart the pcscd and polkit services:

# systemctl restart pcscd.service pcscd.socket polkit.service
Verification
- Make an authorization request for pcscd. For example, open the Firefox web browser or use the pkcs11-tool -L command provided by the opensc package.

Display the extended log entries, for example:
# journalctl -u polkit --since "1 hour ago"
polkitd[1224]: <no filename>:4: action=[Action id='org.debian.pcsc-lite.access_pcsc']
polkitd[1224]: <no filename>:5: subject=[Subject pid=2020481 user='user' groups=user,wheel,mock,wireshark seat=null session=null local=true active=true]
Chapter 5. Scanning the system for configuration compliance
A compliance audit is a process of determining whether a given object follows all the rules specified in a compliance policy. The compliance policy is defined by security professionals who specify the required settings, often in the form of a checklist, that a computing environment should use.
Compliance policies can vary substantially across organizations and even across different systems within the same organization. Differences among these policies are based on the purpose of each system and its importance for the organization. Custom software settings and deployment characteristics also raise a need for custom policy checklists.
5.1. Configuration compliance tools in RHEL
You can perform a fully automated compliance audit in Red Hat Enterprise Linux by using the following set of configuration compliance tools. These tools are based on the Security Content Automation Protocol (SCAP) standard and are designed for automated tailoring of compliance policies.
- OpenSCAP
The OpenSCAP library, with the accompanying oscap command-line utility, is designed to perform configuration scans on a local system, to validate configuration compliance content, and to generate reports and guides based on these scans and evaluations. With oscap, you can scan systems to assess their alignment with security policies contained in scap-security-guide. You can also perform an automated remediation that configures the system into a state that is aligned with a selected policy.

Important: You can experience memory-consumption problems while using OpenSCAP, which can cause the program to stop prematurely and prevent the generation of any result files. See the OpenSCAP memory-consumption problems Knowledgebase article for details.
- SCAP Security Guide (SSG)
- The scap-security-guide package provides collections of security policies for Linux systems. The guidance consists of a catalog of practical hardening advice, linked to government requirements where applicable. The project bridges the gap between generalized policy requirements and specific implementation guidelines.
- Script Check Engine (SCE)
- With SCE, which is an extension to the SCAP protocol, administrators can write their security content by using a scripting language, such as Bash, Python, or Ruby. The SCE extension is provided in the openscap-engine-sce package. The SCE itself is not part of the SCAP standard.
Alternatively, you can perform automated compliance audits on multiple systems remotely by using the OpenSCAP solution for Red Hat Satellite.
5.2. Configuration compliance scanning
To verify if your Red Hat Enterprise Linux systems adhere to a specific security baseline, such as industry standards or internal policies, you can perform a configuration compliance scan. With the tools provided for Red Hat Enterprise Linux, such as OpenSCAP and the SCAP Security Guide, you can scan local and remote systems, and also containers and container images.
5.2.1. Configuration compliance in RHEL
You can use configuration compliance scanning to conform to a baseline defined by a specific organization. For example, if you are a payment processor, you might have to align your systems with the Payment Card Industry Data Security Standard (PCI-DSS). You can also perform configuration compliance scanning to harden your system security.
Follow the Security Content Automation Protocol (SCAP) content provided in the SCAP Security Guide package because it is in line with Red Hat best practices for affected components.
The SCAP Security Guide package provides content that conforms to the SCAP 1.2 and SCAP 1.3 standards. The oscap scanner utility is compatible with both SCAP 1.2 and SCAP 1.3 content provided in the SCAP Security Guide package.
Performing a configuration compliance scan does not guarantee that the system is compliant.
The SCAP Security Guide suite provides profiles for several platforms in the form of data stream documents. A data stream is a file that contains definitions, benchmarks, profiles, and individual rules. Each rule specifies the applicability and requirements for compliance. RHEL provides several profiles for compliance with security policies. In addition to the industry standard, Red Hat data streams also contain information for remediation of failed rules.
Structure of compliance scanning resources
A profile is a set of rules based on a security policy, such as PCI-DSS and Health Insurance Portability and Accountability Act (HIPAA). After you select a profile, you can then perform an automated audit of the system for compliance with that profile.
You can also modify, or tailor, a profile to customize certain rules, for example, password length. For more information about profile tailoring, see Customizing a security profile with autotailor.
5.2.2. Possible results of an OpenSCAP scan
Depending on the data stream and profile applied to an OpenSCAP scan and the various properties of your system, each rule produces a specific result. These are the possible results with brief explanations of their meanings:
- Pass
- The scan did not find any conflicts with this rule.
- Fail
- The scan found a conflict with this rule.
- Not checked
- OpenSCAP does not perform an automatic evaluation of this rule. Check whether your system conforms to this rule manually.
- Not applicable
- This rule does not apply to the current configuration.
- Not selected
- This rule is not part of the profile. OpenSCAP does not evaluate this rule and does not display these rules in the results.
- Error
- The scan encountered an error. For additional information, you can enter the oscap command with the --verbose DEVEL option. File a support case on the Red Hat Customer Portal or open a ticket in the RHEL project in Red Hat Jira.
- Unknown
- The scan encountered an unexpected situation. For additional information, you can enter the oscap command with the --verbose DEVEL option. File a support case on the Red Hat Customer Portal or open a ticket in the RHEL project in Red Hat Jira.
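For example, a minimal way to capture this diagnostic output in a file is to redirect the standard error stream while scanning; the hipaa profile and the log file name below are placeholders:
$ oscap --verbose DEVEL xccdf eval --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml 2> oscap-debug.log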
5.2.3. Viewing profiles for configuration compliance
Before you decide to use profiles for scanning or remediation, you can list them and check their detailed descriptions by using the oscap info subcommand.
Prerequisites
- The openscap-scanner and scap-security-guide packages are installed.
Procedure
- List all available files with security compliance profiles provided by the SCAP Security Guide project:
  $ ls /usr/share/xml/scap/ssg/content/
  ssg-rhel10-ds.xml
- Display detailed information about a selected data stream by using the oscap info subcommand. XML files containing data streams are indicated by the -ds string in their names. In the Profiles section, you can find a list of available profiles and their IDs:
  $ oscap info /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
- Select a profile from the data stream file and display additional details about the selected profile. To do so, use oscap info with the --profile option followed by the last section of the ID displayed in the output of the previous command. For example, the ID of the HIPAA profile is xccdf_org.ssgproject.content_profile_hipaa, and the value for the --profile option is hipaa:
  $ oscap info --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
5.2.4. Assessing configuration compliance with a specific baseline
You can determine whether your system or a remote system conforms to a specific baseline, and save the results in a report, by using the oscap command-line tool.
Prerequisites
- The openscap-scanner and scap-security-guide packages are installed.
- You know the ID of the profile within the baseline with which the system should comply. To find the ID, see the Viewing profiles for configuration compliance section.
Procedure
- Scan the local system for compliance with the selected profile and save the scan results to a file:
  $ oscap xccdf eval --report <scan_report.html> --profile <profile_ID> /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
  Replace:
  - <scan_report.html> with the file name where oscap saves the scan results.
  - <profile_ID> with the profile ID with which the system should comply, for example, hipaa.
- Optional: Scan a remote system for compliance with the selected profile and save the scan results to a file:
  $ oscap-ssh <username>@<hostname> <port> xccdf eval --report <scan_report.html> --profile <profile_ID> /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
  Replace:
  - <username>@<hostname> with the user name and host name of the remote system.
  - <port> with the port number through which you can access the remote system.
  - <scan_report.html> with the file name where oscap saves the scan results.
  - <profile_ID> with the profile ID with which the system should comply, for example, hipaa.
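When you run these scans from scripts, you can branch on the exit status of oscap: the xccdf eval subcommand returns 0 when all evaluated rules pass, 2 when at least one rule fails, and 1 on error. A minimal Bash sketch, assuming the hipaa profile and a placeholder report name:
oscap xccdf eval --report scan_report.html --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
rc=$?
# 0 = all rules passed, 1 = scanner error, 2 = at least one rule failed
if [ "$rc" -eq 2 ]; then
    echo "System is not compliant with the selected profile"
fi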
5.2.5. Assessing security compliance of a container or a container image with a specific baseline
You can assess the compliance of your container or a container image with a specific security baseline, such as Payment Card Industry Data Security Standard (PCI-DSS) and Health Insurance Portability and Accountability Act (HIPAA).
Prerequisites
- The openscap-utils and scap-security-guide packages are installed.
- You have root access to the system.
Procedure
- Find the ID of a container or a container image:
  - To find the ID of a container:
    # podman ps -a
  - To find the ID of a container image:
    # podman images
- Evaluate the compliance of the container or container image with a profile and save the scan results into a file:
  # oscap-podman <ID> xccdf eval --report <scan_report.html> --profile <profile_ID> /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
  Replace:
  - <ID> with the ID of your container or container image.
  - <scan_report.html> with the file name where oscap saves the scan results.
  - <profile_ID> with the profile ID with which the system should comply, for example, hipaa or pci-dss.
Verification
- Check the results in a browser of your choice, for example:
  $ firefox <scan_report.html> &
Note: The rules marked as notapplicable apply only to bare-metal and virtualized systems and not to containers or container images.
5.3. Configuration compliance remediation
To automatically align your system with a specific profile, you can perform a remediation. You can remediate the system to align with any profile provided by the SCAP Security Guide.
5.3.1. Remediating the system to align with a specific baseline
You can remediate the RHEL system to align with a specific baseline. You can remediate the system to align with any profile provided by the SCAP Security Guide.
For details on listing available profiles, see the Viewing profiles for configuration compliance section.
Remediations are supported on RHEL systems in the default configuration. Remediating a system that has been altered after installation might render the system nonfunctional or noncompliant with the required security profile. Red Hat does not provide any automated method to revert changes made by security-hardening remediations.
Test the effects of the remediation before applying it on production systems.
Prerequisites
- The openscap-scanner and scap-security-guide packages are installed.
Procedure
- Remediate the system:
  # oscap xccdf eval --profile <profile_ID> --remediate /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
  Replace <profile_ID> with the profile ID with which the system should comply, for example, hipaa.
- Restart your system.
Verification
- Evaluate compliance of the system with the profile, and save the scan results to a file:
  $ oscap xccdf eval --report <scan_report.html> --profile <profile_ID> /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
  Replace:
  - <scan_report.html> with the file name where oscap saves the scan results.
  - <profile_ID> with the profile ID with which the system should comply, for example, hipaa.
5.3.2. Remediating the system to align with a specific baseline by using an SSG Ansible Playbook
You can remediate your system to align with a specific baseline by using an Ansible Playbook file from the SCAP Security Guide project. You can remediate to align with any profile provided by the SCAP Security Guide.
Remediations are supported on RHEL systems in the default configuration. Remediating a system that has been altered after installation might render the system nonfunctional or noncompliant with the required security profile. Red Hat does not provide any automated method to revert changes made by security-hardening remediations.
Test the effects of the remediation before applying it on production systems.
Prerequisites
- The scap-security-guide package is installed.
- The ansible-core package is installed. See the Ansible Installation Guide for more information.
- The rhc-worker-playbook package is installed.
- You know the ID of the profile according to which you want to remediate your system. For details, see Viewing profiles for configuration compliance.
Procedure
- Remediate your system to align with a selected profile by using Ansible:
  # ANSIBLE_COLLECTIONS_PATH=/usr/share/rhc-worker-playbook/ansible/collections/ansible_collections/ ansible-playbook -i "localhost," -c local /usr/share/scap-security-guide/ansible/rhel10-playbook-<profile_ID>.yml
  The ANSIBLE_COLLECTIONS_PATH environment variable is necessary for the command to run the playbook.
  Replace <profile_ID> with the profile ID of the selected profile.
- Restart the system.
Verification
- Evaluate the compliance of the system with the selected profile, and save the scan results to a file:
  # oscap xccdf eval --profile <profile_ID> --report <scan_report.html> /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
  Replace <scan_report.html> with the file name where oscap saves the scan results.
5.3.3. Creating a remediation Ansible Playbook to align the system with a specific baseline
You can create an Ansible Playbook that contains only the remediations that are required to align your system with a specific baseline. This playbook is smaller because it does not cover already satisfied requirements. Creating the playbook does not modify your system in any way, because you only prepare a file for later application.
Prerequisites
- The scap-security-guide package is installed.
- The ansible-core package is installed. See the Ansible Installation Guide for more information.
- The rhc-worker-playbook package is installed.
- You know the ID of the profile according to which you want to remediate your system. For details, see Viewing profiles for configuration compliance.
Procedure
- Scan the system and save the results:
  # oscap xccdf eval --profile <profile_ID> --results <profile_results.xml> /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
  Replace:
  - <profile_ID> with the profile ID with which the system should comply, for example, hipaa.
  - <profile_results.xml> with the path to the file where oscap saves the results.
- Find the value of the result ID in the file with the results:
  # oscap info <profile_results.xml>
- Generate an Ansible Playbook based on the file generated in step 1:
  # oscap xccdf generate fix --fix-type ansible --result-id xccdf_org.open-scap_testresult_xccdf_org.ssgproject.content_profile_<profile_ID> --output <profile_remediations.yml> <profile_results.xml>
  Replace <profile_remediations.yml> with the path to the file where oscap saves rules that failed the scan.
- Review the generated <profile_remediations.yml> file.
- Remediate your system to align with a selected profile by using Ansible:
  # ANSIBLE_COLLECTIONS_PATH=/usr/share/rhc-worker-playbook/ansible/collections/ansible_collections/ ansible-playbook -i "localhost," -c local <profile_remediations.yml>
  The ANSIBLE_COLLECTIONS_PATH environment variable is necessary for the command to run the playbook.
  Warning: Remediations are supported on RHEL systems in the default configuration. Remediating a system that has been altered after installation might render the system nonfunctional or noncompliant with the required security profile. Red Hat does not provide any automated method to revert changes made by security-hardening remediations.
  Test the effects of the remediation before applying it on production systems.
Verification
- Evaluate the compliance of the system with the selected profile, and save the scan results to a file:
  # oscap xccdf eval --profile <profile_ID> --report <scan_report.html> /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
  Replace <scan_report.html> with the file name where oscap saves the scan results.
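The same results file can also drive a Bash remediation script instead of a playbook by changing the --fix-type value; <profile_remediations.sh> is a placeholder output name:
# oscap xccdf generate fix --fix-type bash --result-id xccdf_org.open-scap_testresult_xccdf_org.ssgproject.content_profile_<profile_ID> --output <profile_remediations.sh> <profile_results.xml>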
5.4. Performing a hardened installation of RHEL with Kickstart
If you need your system to be compliant with a specific security profile, such as DISA STIG, CIS, or ANSSI, you can prepare a Kickstart file that defines the hardened configuration, customize the configuration with a tailoring file, and start an automated installation of the hardened system.
Prerequisites
- The openscap-scanner package is installed on your system.
- The scap-security-guide package is installed on your system and the package version corresponds to the version of RHEL that you want to install. For more information, see Supported versions of the SCAP Security Guide in RHEL. Using a different version can cause conflicts.
  Note: If your system has the same version of RHEL as the version you want to install, you can install the scap-security-guide package directly.
Procedure
- Find the ID of the security profile from the data stream file:
  $ oscap info /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
- Optional: If you want to customize your hardening with an XCCDF tailoring file, you can use the autotailor command provided in the openscap-utils package. For more information, see Customizing a security profile with autotailor.
- Generate the Kickstart file from the SCAP source data stream:
  $ oscap xccdf generate fix --profile <profile_ID> --output <kickstart_file>.cfg --fix-type kickstart /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
  Replace <profile_ID> with the profile ID with which the system should comply, for example, hipaa.
  If you are using a tailoring file, embed the tailoring file into the generated Kickstart file by using the --tailoring-file tailoring.xml option and your custom profile ID, for example:
  $ oscap xccdf generate fix --tailoring-file tailoring.xml --profile <custom_profile_ID> --output <kickstart_file>.cfg --fix-type kickstart ./ssg-rhel10-ds.xml
- Review and, if necessary, manually modify the generated <kickstart_file>.cfg to fit the needs of your deployment. Follow the instructions in the comments in the file.
  Note: Some changes might affect the compliance of the systems installed by the Kickstart file. For example, some security policies require defined partitions or specific packages and services.
- Use the Kickstart file for your installation. For the installation program to use the Kickstart file, serve it through a web server, provide it in PXE, or embed it into the ISO image. For detailed steps, see the Semi-automated installations: Making Kickstart files available to the RHEL installer chapter in the Automatically installing RHEL document.
- After the installation finishes, the system reboots automatically. After the reboot, log in and review the installation SCAP report saved in the /root directory.
Verification
- Scan the system for compliance and save the report in an HTML file for review:
  - With the original profile:
    # oscap xccdf eval --report report.html --profile <profile_ID> /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
  - With the tailored profile:
    # oscap xccdf eval --report report.html --tailoring-file tailoring.xml --profile <custom_profile_ID> /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml
5.5. Customizing a security profile with autotailor
You can customize a security profile to better adjust it to your specific needs, for example, to implement an internal policy that differs from the official profile. When customizing a profile, you can select additional rules, remove rules that you cover differently, and change parameters of certain rules such as minimum password length. You cannot define new rules when customizing a profile.
By using the autotailor utility, you create an XCCDF tailoring file that contains all of the modifications of the original profile. Then, when you scan, remediate, or install a system in accordance with a SCAP profile, you pass this tailoring file to the oscap command-line utility.
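For example, a scan against the customized profile passes the tailoring file alongside the data stream; the file and profile names are the placeholders used throughout this chapter:
$ oscap xccdf eval --tailoring-file <tailoring.xml> --profile <custom_profile_ID> --report <scan_report.html> /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml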
Prerequisites
- The openscap-utils package is installed on your system.
- You know the ID of the profile within the baseline which you want to customize. To find the ID, see the Viewing profiles for configuration compliance section.
Procedure
- Create a tailoring file for your profile by using the autotailor command, for example:
  $ autotailor \
    --select=<rule_ID_1> \
    --select=<rule_ID_2> \
    --unselect=<rule_ID_3> \
    --var-value=<value_ID_1>=<value_1> \
    --var-value=<value_ID_2>=<value_2> \
    --output=<tailoring.xml> \
    --tailored-profile-id=<custom_profile_ID> \
    /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml <profile_ID>
  Where:
  - The customization options are the modifications of the profile. Use one or more of the following:
    --select=<rule_ID>: Add an existing rule to the profile.
    --unselect=<rule_ID>: Remove a rule from the profile.
    --var-value=<value_ID>=<value>: Override a pre-set value. For example, to set var_sshd_max_sessions to 10, use --var-value=var_sshd_max_sessions=10.
  - <tailoring.xml> is the file name where autotailor saves the tailoring.
  - <custom_profile_ID> is the profile ID within which autotailor saves customizations, for example, custom_cis.
  - <profile_ID> is the profile ID with which the system should comply, for example, cis.
  Note: For all profile, rule, and variable XCCDF IDs, you can use either a full namespaced identifier or a shortened ID that autotailor automatically augments with the namespace prefix. For example, cis is equivalent to xccdf_org.ssgproject.content_profile_cis. You can override the default namespace org.ssgproject.content by using the --id-namespace option.
- Optional: Create a tailoring file based on the customizations defined in the JSON Tailoring format:
  $ autotailor --output=<tailoring.xml> --json-tailoring=<json_tailoring.json>
  Replace <json_tailoring.json> with the file name with JSON Tailoring definitions.
  Note: You can mix --json-tailoring with --select, --unselect, and --var-value command-line customizations. In that case, command-line customizations have priority over JSON Tailoring.
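Putting these options together, a concrete invocation that customizes the cis profile with the example value shown above might look like the following; the output file name is illustrative:
$ autotailor --var-value=var_sshd_max_sessions=10 --output=tailoring.xml --tailored-profile-id=custom_cis /usr/share/xml/scap/ssg/content/ssg-rhel10-ds.xml cis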
5.6. SCAP Security Guide profiles supported in RHEL 10
Use only the SCAP content provided in the particular minor release of RHEL. This is because components that participate in hardening are sometimes updated with new capabilities. SCAP content changes to reflect these updates, but it is not always compatible with earlier versions.
You can get the information relevant for the version of the scap-security-guide RPM installed on your system by using the oscap info command. For more information, see Viewing profiles for configuration compliance.
Profile name | Profile ID | Policy version
---|---|---
French National Agency for the Security of Information Systems (ANSSI) BP-028 Enhanced Level | xccdf_org.ssgproject.content_profile_anssi_bp28_enhanced | 2.0
French National Agency for the Security of Information Systems (ANSSI) BP-028 High Level | xccdf_org.ssgproject.content_profile_anssi_bp28_high | 2.0
French National Agency for the Security of Information Systems (ANSSI) BP-028 Intermediary Level | xccdf_org.ssgproject.content_profile_anssi_bp28_intermediary | 2.0
French National Agency for the Security of Information Systems (ANSSI) BP-028 Minimal Level | xccdf_org.ssgproject.content_profile_anssi_bp28_minimal | 2.0
[DRAFT] CIS Red Hat Enterprise Linux 10 Benchmark for Level 2 - Server | xccdf_org.ssgproject.content_profile_cis | DRAFT
[DRAFT] CIS Red Hat Enterprise Linux 10 Benchmark for Level 1 - Server | xccdf_org.ssgproject.content_profile_cis_server_l1 | DRAFT
[DRAFT] CIS Red Hat Enterprise Linux 10 Benchmark for Level 1 - Workstation | xccdf_org.ssgproject.content_profile_cis_workstation_l1 | DRAFT
[DRAFT] CIS Red Hat Enterprise Linux 10 Benchmark for Level 2 - Workstation | xccdf_org.ssgproject.content_profile_cis_workstation_l2 | DRAFT
Australian Cyber Security Centre (ACSC) Essential Eight | xccdf_org.ssgproject.content_profile_e8 | not versioned
Health Insurance Portability and Accountability Act (HIPAA) | xccdf_org.ssgproject.content_profile_hipaa | not versioned
Australian Cyber Security Centre (ACSC) ISM Official - Base | xccdf_org.ssgproject.content_profile_ism_o | not versioned
Australian Cyber Security Centre (ACSC) ISM Official - Secret | xccdf_org.ssgproject.content_profile_ism_o_secret | not versioned
Australian Cyber Security Centre (ACSC) ISM Official - Top Secret | xccdf_org.ssgproject.content_profile_ism_o_top_secret | not versioned
PCI-DSS v4.0 Control Baseline for Red Hat Enterprise Linux 10 | xccdf_org.ssgproject.content_profile_pci-dss | 4.0
The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) for Red Hat Enterprise Linux 10 | xccdf_org.ssgproject.content_profile_stig | vendor
The Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) with GUI for Red Hat Enterprise Linux 10 | xccdf_org.ssgproject.content_profile_stig_gui | vendor
Chapter 6. Ensuring system integrity with Keylime
With Keylime, you can continuously monitor the integrity of remote systems and verify the state of systems at boot. You can also send encrypted files to the monitored systems, and specify automated actions triggered whenever a monitored system fails the integrity test.
6.1. How Keylime works
You can configure Keylime agents to perform one or more of the following actions:
- Runtime integrity monitoring
- Keylime runtime integrity monitoring continuously monitors the system on which the agent is deployed and measures the integrity of the files included in the allowlist and not included in the excludelist.
- Measured boot
- Keylime measured boot verifies the system state at boot.
Keylime’s concept of trust is based on the Trusted Platform Module (TPM) technology. A TPM is a hardware, firmware, or virtual component with integrated cryptographic keys. By polling TPM quotes and comparing the hashes of objects, Keylime provides initial and runtime monitoring of remote systems.
Keylime running in a virtual machine or using a virtual TPM depends upon the integrity of the underlying host. Ensure you trust the host environment before relying upon Keylime measurements in a virtual environment.
Keylime consists of three main components:
- Verifier
- Initially and continuously verifies the integrity of the systems that run the agent. You can deploy the verifier from a package, as a container, or by using the keylime_server RHEL system role.
- Registrar
- Contains a database of all agents and hosts the public keys of the TPM vendors. You can deploy the registrar from a package, as a container, or by using the keylime_server RHEL system role.
- Agent
- Deployed to remote systems measured by the verifier.
In addition, Keylime uses the keylime_tenant utility for many functions, including provisioning the agents on the target systems.
Figure 6.1. Connections between Keylime components through configurations
Keylime ensures the integrity of the monitored systems in a chain of trust by using keys and certificates exchanged between the components and the tenant. For a secure foundation of this chain, use a certificate authority (CA) that you can trust.
If the agent receives no key and certificate, it generates a key and a self-signed certificate with no involvement from the CA.
Figure 6.2. Connections between Keylime components' certificates and keys
6.2. Deploying Keylime verifier from a package
The verifier is the most important component in Keylime. It performs initial and periodic checks of system integrity and supports bootstrapping a cryptographic key securely with the agent. The verifier uses mutual TLS encryption for its control interface.
To maintain the chain of trust, keep the system that runs the verifier secure and under your control.
You can install the verifier on a separate system or on the same system as the Keylime registrar, depending on your requirements. Running the verifier and registrar on separate systems provides better performance.
To keep the configuration files organized within the drop-in directories, use file names with a two-digit number prefix, for example /etc/keylime/verifier.conf.d/00-verifier-ip.conf. The configuration processing reads the files inside the drop-in directory in lexicographic order and sets each option to the last value it reads.
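For example, given the two hypothetical drop-in files below, the ip value from the file that sorts later in lexicographic order wins, so the verifier binds to 192.0.2.10:
/etc/keylime/verifier.conf.d/00-verifier-ip.conf:
[verifier]
ip = 127.0.0.1
/etc/keylime/verifier.conf.d/10-verifier-ip-override.conf:
[verifier]
ip = 192.0.2.10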
Prerequisites
- You have root permissions and network connection to the system or systems on which you want to install Keylime components.
- You have valid keys and certificates from your certificate authority.
- Optional: You have access to the databases where Keylime saves data from the verifier. You can use any of the following database management systems:
  - SQLite (default)
  - PostgreSQL
  - MySQL
  - MariaDB
Procedure
- Install the Keylime verifier:
  # dnf install keylime-verifier
- Define the IP address and port of the verifier by creating a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-verifier-ip.conf, with the following content:
  [verifier]
  ip = <verifier_IP_address>
  - Replace <verifier_IP_address> with the verifier's IP address. Alternatively, use ip = * or ip = 0.0.0.0 to bind the verifier to all available IP addresses.
  - Optionally, you can also change the verifier's port from the default value 8881 by using the port option.
- Optional: Configure the verifier's database for the list of agents. The default configuration uses an SQLite database in the verifier's /var/lib/keylime/cv_data.sqlite/ directory. You can define a different database by creating a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-db-url.conf, with the following content:
  [verifier]
  database_url = <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties>
  Replace <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties> with the URL of the database, for example, postgresql://verifier:UQ?nRNY9g7GZzN7@198.51.100.1/verifierdb.
  Ensure that the credentials you use provide the permissions for Keylime to create the database structure.
- Add certificates and keys to the verifier. You can either let Keylime generate them, or use existing keys and certificates:
  - With the default tls_dir = generate option, Keylime generates new certificates for the verifier, registrar, and tenant in the /var/lib/keylime/cv_ca/ directory.
  - To load existing keys and certificates in the configuration, define their location in the verifier configuration. The certificates must be accessible by the keylime user, under which the Keylime services are running.
    Create a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-keys-and-certs.conf, with content such as the following:
    [verifier]
    tls_dir = /var/lib/keylime/cv_ca
    server_key = </path/to/server_key>
    server_cert = </path/to/server_cert>
    trusted_client_ca = ['</path/to/ca/cert1>', '</path/to/ca/cert2>']
    client_key = </path/to/client_key>
    client_cert = </path/to/client_cert>
    trusted_server_ca = ['</path/to/ca/cert3>', '</path/to/ca/cert4>']
    Note: Use absolute paths to define key and certificate locations. Alternatively, relative paths are resolved from the directory defined in the tls_dir option.
- Open the port in firewall:
  # firewall-cmd --add-port 8881/tcp
  # firewall-cmd --runtime-to-permanent
  If you use a different port, replace 8881 with the port number defined in the .conf file.
- Start the verifier service:
  # systemctl enable --now keylime_verifier
  Note: In the default configuration, start the keylime_verifier service before starting the keylime_registrar service because the verifier creates the CA and certificates for the other Keylime components. This order is not necessary when you use custom certificates.
Verification
- Check that the keylime_verifier service is active and running:
  # systemctl status keylime_verifier
  ● keylime_verifier.service - The Keylime verifier
     Loaded: loaded (/usr/lib/systemd/system/keylime_verifier.service; disabled; vendor preset: disabled)
     Active: active (running) since Wed 2022-11-09 10:10:08 EST; 1min 45s ago
Next steps
6.3. Deploying Keylime verifier as a container
The Keylime verifier performs initial and periodic checks of system integrity and supports bootstrapping a cryptographic key securely with the agent. You can configure the Keylime verifier as a container instead of the RPM method, without any binaries or packages on the host. The container deployment provides better isolation, modularity, and reproducibility of Keylime components.
After you start the container, the Keylime verifier is deployed with default configuration files. You can customize the configuration by using one or more of the following methods:
- Mounting the host’s directories that contain the configuration files to the container.
- Modifying the environment variables directly on the container. Modifying the environment variables overrides the values from the configuration files.
Prerequisites
- The podman package and its dependencies are installed on the system.
- Optional: You have access to a database where Keylime saves data from the verifier. You can use any of the following database management systems:
  - SQLite (default)
  - PostgreSQL
  - MySQL
  - MariaDB
- You have valid keys and certificates from your certificate authority.
Procedure
- Optional: Install the keylime-verifier package to access the configuration files. You can configure the container without this package, but it might be easier to modify the configuration files provided with the package.
  # dnf install keylime-verifier
- Bind the verifier to all available IP addresses by creating a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-verifier-ip.conf, with the following content:
  [verifier]
  ip = *
  - Optionally, you can also change the verifier's port from the default value 8881 by using the port option.
- Optional: Configure the verifier's database for the list of agents. The default configuration uses an SQLite database in the verifier's /var/lib/keylime/cv_data.sqlite/ directory. You can define a different database by creating a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-db-url.conf, with the following content:
  [verifier]
  database_url = <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties>
  Replace <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties> with the URL of the database, for example, postgresql://verifier:UQ?nRNY9g7GZzN7@198.51.100.1/verifierdb.
  Ensure that the credentials you use have the permissions for Keylime to create the database structure.
- Add certificates and keys to the verifier. You can either let Keylime generate them, or use existing keys and certificates:
  - With the default tls_dir = generate option, Keylime generates new certificates for the verifier, registrar, and tenant in the /var/lib/keylime/cv_ca/ directory.
  - To load existing keys and certificates in the configuration, define their location in the verifier configuration. The certificates must be accessible by the keylime user, under which the Keylime processes are running.
    Create a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-keys-and-certs.conf, with content such as the following:
    [verifier]
    tls_dir = /var/lib/keylime/cv_ca
    server_key = </path/to/server_key>
    server_cert = </path/to/server_cert>
    trusted_client_ca = ['</path/to/ca/cert1>', '</path/to/ca/cert2>']
    client_key = </path/to/client_key>
    client_cert = </path/to/client_cert>
    trusted_server_ca = ['</path/to/ca/cert3>', '</path/to/ca/cert4>']
    Note: Use absolute paths to define key and certificate locations. Alternatively, relative paths are resolved from the directory defined in the tls_dir option.
- Open the port in firewall:
  # firewall-cmd --add-port 8881/tcp
  # firewall-cmd --runtime-to-permanent
  If you use a different port, replace 8881 with the port number defined in the .conf file.
- Run the container, for example:
  # podman run --name keylime-verifier -p 8881:8881 -v /etc/keylime/verifier.conf.d:/etc/keylime/verifier.conf.d:Z -v /var/lib/keylime/cv_ca:/var/lib/keylime/cv_ca:Z -d -e KEYLIME_VERIFIER_SERVER_KEY_PASSWORD=<passphrase1> -e KEYLIME_VERIFIER_CLIENT_KEY_PASSWORD=<passphrase2> registry.access.redhat.com/rhel10/keylime-verifier
  - The -p option opens the default port 8881 on the host and on the container.
  - The -v option creates a bind mount for the directory to the container.
    - With the Z option, Podman marks the content with a private unshared label. This means only the current container can use the private volume.
  - The -d option runs the container detached and in the background.
  - The option -e KEYLIME_VERIFIER_SERVER_KEY_PASSWORD=<passphrase1> defines the server key passphrase.
  - The option -e KEYLIME_VERIFIER_CLIENT_KEY_PASSWORD=<passphrase2> defines the client key passphrase.
  - You can override configuration options with environment variables by using the option -e KEYLIME_VERIFIER_<ENVIRONMENT_VARIABLE>=<value>. To modify additional options, insert the -e option separately for each environment variable. For a complete list of environment variables and their default values, see Keylime environment variables.
Verification
- Check that the container is running:
  $ podman ps -a
  CONTAINER ID  IMAGE                                                      COMMAND           CREATED         STATUS         PORTS                   NAMES
  80b6b9dbf57c  registry.access.redhat.com/rhel10/keylime-verifier:latest  keylime_verifier  14 seconds ago  Up 14 seconds  0.0.0.0:8881->8881/tcp  keylime-verifier
Next steps
6.4. Deploying Keylime registrar from a package
The registrar is the Keylime component that contains a database of all agents, and it hosts the public keys of the TPM vendors. After the registrar’s HTTPS service accepts trusted platform module (TPM) public keys, it presents an interface to obtain these public keys for checking quotes.
To maintain the chain of trust, keep the system that runs the registrar secure and under your control.
You can install the registrar on a separate system or on the same system as the Keylime verifier, depending on your requirements. Running the verifier and registrar on separate systems provides better performance.
To keep the configuration files organized within the drop-in directories, use file names with a two-digit number prefix, for example /etc/keylime/registrar.conf.d/00-registrar-ip.conf. The configuration processing reads the files inside the drop-in directory in lexicographic order and sets each option to the last value it reads.
Prerequisites
- You have network access to the systems where the Keylime verifier is installed and running. For more information, see Section 6.2, "Deploying Keylime verifier from a package".
- You have root permissions and network connection to the system or systems on which you want to install Keylime components.
- You have access to the database where Keylime saves data from the registrar. You can use any of the following database management systems:
  - SQLite (default)
  - PostgreSQL
  - MySQL
  - MariaDB
- You have valid keys and certificates from your certificate authority.
Procedure
- Install the Keylime registrar:
  # dnf install keylime-registrar
- Define the IP address and port of the registrar by creating a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-registrar-ip.conf, with the following content:
  [registrar]
  ip = <registrar_IP_address>
  - Replace <registrar_IP_address> with the registrar's IP address. Alternatively, use ip = * or ip = 0.0.0.0 to bind the registrar to all available IP addresses.
  - Optionally, change the port to which the Keylime agents connect by using the port option. The default value is 8890.
  - Optionally, change the TLS port to which the Keylime verifier and tenant connect by using the tls_port option. The default value is 8891.
- Optional: Configure the registrar's database for the list of agents. The default configuration uses an SQLite database in the registrar's /var/lib/keylime/reg_data.sqlite directory. You can create a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-db-url.conf, with the following content:
  [registrar]
  database_url = <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties>
  Replace <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties> with the URL of the database, for example, postgresql://registrar:EKYYX-bqY2?#raXm@198.51.100.1/registrardb.
  Ensure that the credentials you use have the permissions for Keylime to create the database structure.
- Add certificates and keys to the registrar:
  - You can use the default configuration and load the keys and certificates to the /var/lib/keylime/reg_ca/ directory.
  - Alternatively, you can define the location of the keys and certificates in the configuration. Create a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-keys-and-certs.conf, with the following content:
    [registrar]
    tls_dir = /var/lib/keylime/reg_ca
    server_key = </path/to/server_key>
    server_cert = </path/to/server_cert>
    trusted_client_ca = ['</path/to/ca/cert1>', '</path/to/ca/cert2>']
    Note: Use absolute paths to define key and certificate locations. Alternatively, you can define a directory in the tls_dir option and use paths relative to that directory.
- Open the ports in firewall:
  # firewall-cmd --add-port 8890/tcp --add-port 8891/tcp
  # firewall-cmd --runtime-to-permanent
  If you use a different port, replace 8890 or 8891 with the port number defined in the .conf file.
- Start the keylime_registrar service:
  # systemctl enable --now keylime_registrar
  Note: In the default configuration, start the keylime_verifier service before starting the keylime_registrar service because the verifier creates the CA and certificates for the other Keylime components. This order is not necessary when you use custom certificates.
Verification
- Check that the keylime_registrar service is active and running:
  # systemctl status keylime_registrar
  ● keylime_registrar.service - The Keylime registrar service
     Loaded: loaded (/usr/lib/systemd/system/keylime_registrar.service; disabled; vendor preset: disabled)
     Active: active (running) since Wed 2022-11-09 10:10:17 EST; 1min 42s ago
  ...
Next steps
6.5. Deploying Keylime registrar as a container
The registrar is the Keylime component that contains a database of all agents, and it hosts the public keys of the trusted platform module (TPM) vendors. After the registrar’s HTTPS service accepts TPM public keys, it presents an interface to obtain these public keys for checking quotes. You can configure the Keylime registrar as a container instead of the RPM method, without any binaries or packages on the host. The container deployment provides better isolation, modularity, and reproducibility of Keylime components.
After you start the container, the Keylime registrar is deployed with default configuration files. You can customize the configuration by using one or more of the following methods:
- Mounting the host's directories that contain the configuration files to the container.
- Modifying the environment variables directly on the container. Modifying the environment variables overrides the values from the configuration files.
Prerequisites
- The podman package and its dependencies are installed on the system.
- Optional: You have access to a database where Keylime saves data from the registrar. You can use any of the following database management systems:
  - SQLite (default)
  - PostgreSQL
  - MySQL
  - MariaDB
- You have valid keys and certificates from your certificate authority.
Procedure
- Optional: Install the keylime-registrar package to access the configuration files. You can configure the container without this package, but it might be easier to modify the configuration files provided with the package.
  # dnf install keylime-registrar
- Bind the registrar to all available IP addresses by creating a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-registrar-ip.conf, with the following content:
  [registrar]
  ip = *
  - Optionally, change the port to which the Keylime agents connect by using the port option. The default value is 8890.
  - Optionally, change the TLS port to which the Keylime tenant connects by using the tls_port option. The default value is 8891.
- Optional: Configure the registrar's database for the list of agents. The default configuration uses an SQLite database in the registrar's /var/lib/keylime/reg_data.sqlite directory. You can create a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-db-url.conf, with the following content:
  [registrar]
  database_url = <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties>
  Replace <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties> with the URL of the database, for example, postgresql://registrar:EKYYX-bqY2?#raXm@198.51.100.1/registrardb.
  Ensure that the credentials you use have the permissions for Keylime to create the database structure.
- Add certificates and keys to the registrar:
  - You can use the default configuration and load the keys and certificates to the /var/lib/keylime/reg_ca/ directory.
  - Alternatively, you can define the location of the keys and certificates in the configuration. Create a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-keys-and-certs.conf, with the following content:
    [registrar]
    tls_dir = /var/lib/keylime/reg_ca
    server_key = </path/to/server_key>
    server_cert = </path/to/server_cert>
    trusted_client_ca = ['</path/to/ca/cert1>', '</path/to/ca/cert2>']
    Note: Use absolute paths to define key and certificate locations. Alternatively, you can define a directory in the tls_dir option and use paths relative to that directory.
- Open the ports in firewall:
  # firewall-cmd --add-port 8890/tcp --add-port 8891/tcp
  # firewall-cmd --runtime-to-permanent
  If you use a different port, replace 8890 or 8891 with the port number defined in the .conf file.
- Run the container, for example:
  # podman run --name keylime-registrar -p 8890:8890 -p 8891:8891 -v /etc/keylime/registrar.conf.d:/etc/keylime/registrar.conf.d:Z -v /var/lib/keylime/reg_ca:/var/lib/keylime/reg_ca:Z -d -e KEYLIME_REGISTRAR_SERVER_KEY_PASSWORD=<passphrase1> registry.access.redhat.com/rhel10/keylime-registrar
  - The -p option opens the default ports 8890 and 8891 on the host and on the container.
  - The -v option creates a bind mount for the directory to the container.
    - With the Z option, Podman marks the content with a private unshared label. This means only the current container can use the private volume.
  - The -d option runs the container detached and in the background.
  - The option -e KEYLIME_REGISTRAR_SERVER_KEY_PASSWORD=<passphrase1> defines the server key passphrase.
  - You can override configuration options with environment variables by using the option -e KEYLIME_REGISTRAR_<ENVIRONMENT_VARIABLE>=<value>. To modify additional options, insert the -e option separately for each environment variable. For a complete list of environment variables and their default values, see Section 6.12, "Keylime environment variables".
Verification
- Check that the container is running:
  $ podman ps -a
  CONTAINER ID  IMAGE                               COMMAND            CREATED         STATUS         PORTS                                           NAMES
  07d4b4bff1b6  localhost/keylime-registrar:latest  keylime_registrar  12 seconds ago  Up 12 seconds  0.0.0.0:8890->8890/tcp, 0.0.0.0:8891->8891/tcp  keylime-registrar
Next steps
6.6. Deploying a Keylime server by using RHEL system roles
You can set up the verifier and registrar, which are the Keylime server components, by using the keylime_server RHEL system role. The keylime_server role installs and configures both the verifier and registrar components together on each node.
Perform this procedure on the Ansible control node.
For more information about Keylime, see How Keylime works.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
- Create a playbook that defines the required role:
  - Create a new YAML file and open it in a text editor, for example:
    # vi keylime-playbook.yml
  - Insert content such as the following:
    ---
    - name: Manage keylime servers
      hosts: all
      vars:
        keylime_server_verifier_ip: "{{ ansible_host }}"
        keylime_server_registrar_ip: "{{ ansible_host }}"
      roles:
        - rhel-system-roles.keylime_server
    You can find out more about the variables in Variables for the keylime_server RHEL system role.
- Run the playbook:
  $ ansible-playbook <keylime-playbook.yml>
Verification
- Check that the keylime_verifier service is active and running on the managed host:
  # systemctl status keylime_verifier
  ● keylime_verifier.service - The Keylime verifier
     Loaded: loaded (/usr/lib/systemd/system/keylime_verifier.service; disabled; vendor preset: disabled)
     Active: active (running) since Wed 2022-11-09 10:10:08 EST; 1min 45s ago
- Check that the keylime_registrar service is active and running:
  # systemctl status keylime_registrar
  ● keylime_registrar.service - The Keylime registrar service
     Loaded: loaded (/usr/lib/systemd/system/keylime_registrar.service; disabled; vendor preset: disabled)
     Active: active (running) since Wed 2022-11-09 10:10:17 EST; 1min 42s ago
  ...
Next steps
6.7. Variables for the keylime_server RHEL system role
When setting up a Keylime server by using the keylime_server RHEL system role, you can customize the following variables for the registrar and verifier.
List of keylime_server RHEL system role variables for configuring the Keylime verifier:
keylime_server_verifier_ip
- Defines the IP address of the verifier.
keylime_server_verifier_tls_dir
- Specifies the directory where the keys and certificates are stored. If set to default, the verifier uses the /var/lib/keylime/cv_ca directory.
keylime_server_verifier_server_key_passphrase
- Specifies a passphrase to decrypt the server private key. If the value is empty, the private key is not encrypted.
keylime_server_verifier_server_cert
- Specifies the Keylime verifier server certificate file.
keylime_server_verifier_trusted_client_ca
- Defines the list of trusted client CA certificates. You must store the files in the directory set in the keylime_server_verifier_tls_dir option.
keylime_server_verifier_client_key
- Defines the file containing the Keylime verifier private client key.
keylime_server_verifier_client_key_passphrase
- Defines the passphrase to decrypt the client private key file. If the value is empty, the private key is not encrypted.
keylime_server_verifier_client_cert
- Defines the Keylime verifier client certificate file.
keylime_server_verifier_trusted_server_ca
- Defines the list of trusted server CA certificates. You must store the files in the directory set in the keylime_server_verifier_tls_dir option.
List of keylime_server RHEL system role variables for configuring the Keylime registrar:
keylime_server_registrar_ip
- Defines the IP address of the registrar.
keylime_server_registrar_tls_dir
- Specifies the directory where you store the keys and certificates for the registrar. If you set it to default, the registrar uses the /var/lib/keylime/reg_ca directory.
keylime_server_registrar_server_key
- Defines the Keylime registrar private server key file.
keylime_server_registrar_server_key_passphrase
- Specifies the passphrase to decrypt the server private key of the registrar. If the value is empty, the private key is not encrypted.
keylime_server_registrar_server_cert
- Specifies the Keylime registrar server certificate file.
keylime_server_registrar_trusted_client_ca
- Defines the list of trusted client CA certificates. You must store the files in the directory set in the keylime_server_registrar_tls_dir option.
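As an illustration, a playbook vars section that points both components at existing keys and certificates might look like the following sketch; all file names and addresses are placeholders, and paths are relative to the respective tls_dir directories:
vars:
  keylime_server_verifier_ip: "{{ ansible_host }}"
  keylime_server_verifier_tls_dir: /var/lib/keylime/cv_ca
  keylime_server_verifier_server_cert: server-cert.crt
  keylime_server_verifier_trusted_client_ca: [cacert.crt]
  keylime_server_registrar_ip: "{{ ansible_host }}"
  keylime_server_registrar_tls_dir: /var/lib/keylime/reg_ca
  keylime_server_registrar_server_key: server-private.pem
  keylime_server_registrar_server_cert: server-cert.crt
  keylime_server_registrar_trusted_client_ca: [cacert.crt]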
6.8. Deploying Keylime tenant from a package
Keylime uses the keylime_tenant utility for many functions, including provisioning the agents on the target systems. You can install keylime_tenant on any system, including the systems that run other Keylime components, or on a separate system, depending on your requirements.
Prerequisites
- You have root permissions and network connection to the system or systems on which you want to install Keylime components.
- You have network access to the systems where the other Keylime components are configured:
  - Verifier: For more information, see Section 6.2, "Deploying Keylime verifier from a package".
  - Registrar: For more information, see Section 6.4, "Deploying Keylime registrar from a package".
Procedure
Install the Keylime tenant:
dnf install keylime-tenant
# dnf install keylime-tenant
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Define the tenant’s connection to the Keylime verifier by editing the
/etc/keylime/tenant.conf.d/00-verifier-ip.conf
file:[tenant] verifier_ip = <verifier_ip>
[tenant] verifier_ip = <verifier_ip>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Replace
<verifier_ip>
with the IP address to the verifier’s system. -
If the verifier uses a different port than the default value
8881
, add theverifier_port = <verifier_port>
setting.
-
Replace
Define the tenant’s connection to the Keylime registrar by editing the
/etc/keylime/tenant.conf.d/00-registrar-ip.conf
file:[tenant] registrar_ip = <registrar_ip>
[tenant] registrar_ip = <registrar_ip>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow -
Replace
<registrar_ip>
with the IP address to the registrar’s system. -
If the registrar uses a different port than the default value
8891
, add theregistrar_port = <registrar_port>
setting.
-
Replace
Add certificates and keys to the tenant:
- You can use the default configuration and load the keys and certificates to the /var/lib/keylime/cv_ca directory.
- Alternatively, you can define the location of the keys and certificates in the configuration. Create a new .conf file in the /etc/keylime/tenant.conf.d/ directory, for example, /etc/keylime/tenant.conf.d/00-keys-and-certs.conf, with the following content:
[tenant]
client_key = </path/to/client_key>
client_key_password = <passphrase>
client_cert = </path/to/client_cert>
trusted_server_ca = [</path/to/ca/cert1>, </path/to/ca/cert2>]
The trusted_server_ca parameter accepts paths to the verifier and registrar server CA certificates. You can provide multiple comma-separated paths, for example if the verifier and registrar use different CAs.
Note: Use absolute paths to define key and certificate locations. Alternatively, you can define a directory in the tls_dir option and use paths relative to that directory.
- Optional: If the trusted platform module (TPM) endorsement key (EK) cannot be verified by using certificates in the /var/lib/keylime/tpm_cert_store directory, add the certificate to that directory. This can occur particularly when using virtual machines with emulated TPMs.
Verification
Check the status of the verifier:
# keylime_tenant -c cvstatus
If correctly set up, and if no agent is configured, the verifier responds that it does not recognize the default agent UUID.
Check the status of the registrar:
# keylime_tenant -c regstatus
If correctly set up, and if no agent is configured, the registrar responds that it does not recognize the default agent UUID.
Next steps
- For more information, see Deploying Keylime agent from a package.
6.9. Deploying Keylime agent from a package
The Keylime agent is the component deployed to all systems to be monitored by Keylime.
By default, the Keylime agent stores all its data in the /var/lib/keylime/
directory of the monitored system.
To keep the configuration files organized within the drop-in directories, use file names with a two-digit number prefix, for example /etc/keylime/agent.conf.d/00-registrar-ip.conf
. The configuration processing reads the files inside the drop-in directory in lexicographic order and sets each option to the last value it reads.
Prerequisites
- You have root permissions to the monitored system.
- The monitored system has a Trusted Platform Module (TPM). To verify, enter the tpm2_pcrread command. If the output returns several hashes, a TPM is available.
- You have network access to the systems where the other Keylime components are configured:
- Verifier
- For more information, see Deploying Keylime verifier from a package.
- Registrar
- For more information, see Deploying Keylime registrar from a package.
- Tenant
- For more information, see Deploying Keylime tenant from a package.
- Integrity measurement architecture (IMA) is enabled on the monitored system. For more information, see Enabling integrity measurement architecture and extended verification module.
Procedure
Install the Keylime agent:
# dnf install keylime-agent
This command installs the keylime-agent-rust package.
Define the agent’s IP address and port in the configuration files. Create a new .conf file in the /etc/keylime/agent.conf.d/ directory, for example, /etc/keylime/agent.conf.d/00-agent-ip.conf, with the following content:
[agent]
ip = '<agent_ip>'
Note: The Keylime agent configuration uses the TOML format, which is different from the INI format used for the configuration of the other components. Therefore, enter values in valid TOML syntax, for example, paths in single quotation marks and arrays of multiple paths in square brackets.
- Replace <agent_ip> with the agent’s IP address. Alternatively, use ip = '*' or ip = '0.0.0.0' to bind the agent to all available IP addresses.
- Optionally, you can also change the agent’s port from the default value 9002 by using the port = '<agent_port>' option.
Define the registrar’s IP address and port in the configuration files. Create a new .conf file in the /etc/keylime/agent.conf.d/ directory, for example, /etc/keylime/agent.conf.d/00-registrar-ip.conf, with the following content:
[agent]
registrar_ip = '<registrar_IP_address>'
- Replace <registrar_IP_address> with the registrar’s IP address.
- Optionally, you can also change the registrar’s port from the default value 8890 by using the registrar_port = '<registrar_port>' option.
Optional: Define the agent’s universally unique identifier (UUID). If it is not defined, the default UUID is used. Create a new .conf file in the /etc/keylime/agent.conf.d/ directory, for example, /etc/keylime/agent.conf.d/00-agent-uuid.conf, with the following content:
[agent]
uuid = '<agent_UUID>'
- Replace <agent_UUID> with the agent’s UUID, for example, d432fbb3-d2f1-4a97-9ef7-abcdef012345. You can use the uuidgen utility to generate a UUID.
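For example, to generate a random UUID (the printed value is illustrative):
# uuidgen
d432fbb3-d2f1-4a97-9ef7-abcdef012345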
Optional: Load existing keys and certificates for the agent. If the agent receives no server_key and server_cert, it generates its own key and a self-signed certificate.
Define the location of the keys and certificates in the configuration. Create a new .conf file in the /etc/keylime/agent.conf.d/ directory, for example, /etc/keylime/agent.conf.d/00-keys-and-certs.conf, with the following content:
[agent]
server_key = '</path/to/server_key>'
server_key_password = '<passphrase1>'
server_cert = '</path/to/server_cert>'
trusted_client_ca = '[</path/to/ca/cert3>, </path/to/ca/cert4>]'
Note: Use absolute paths to define key and certificate locations. The Keylime agent does not accept relative paths.
Open the port in the firewall:
# firewall-cmd --add-port 9002/tcp
# firewall-cmd --runtime-to-permanent
If you use a different port, replace 9002 with the port number defined in the .conf file.
Enable and start the keylime_agent service:
# systemctl enable --now keylime_agent
Optional: From the system where the Keylime tenant is configured, verify that the agent is correctly configured and can connect to the registrar:
# keylime_tenant -c regstatus --uuid <agent_uuid>
Reading configuration from ['/etc/keylime/logging.conf'] ... ==\n-----END CERTIFICATE-----\n", "ip": "127.0.0.1", "port": 9002, "regcount": 1, "operational_state": "Registered"}}}
Replace <agent_uuid> with the agent’s UUID.
If the registrar and agent are correctly configured, the output displays the agent’s IP address and port, followed by "operational_state": "Registered".
Create a new IMA policy by entering policy rules into the /etc/ima/ima-policy file. This policy targets runtime monitoring of executed applications. You can adjust the policy according to your scenario. You can find the MAGIC constants in the statfs(2) man page.
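The exact rules depend on your scenario; the following is a minimal sketch of such a policy that skips common virtual filesystems by their MAGIC constants and measures executed applications:
# PROC_SUPER_MAGIC
dont_measure fsmagic=0x9fa0
# SYSFS_MAGIC
dont_measure fsmagic=0x62656572
# TMPFS_MAGIC
dont_measure fsmagic=0x01021994
# DEVPTS_SUPER_MAGIC
dont_measure fsmagic=0x1cd1
# Measure all executed applications
measure func=BPRM_CHECK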
Update kernel parameters:
# grubby --update-kernel DEFAULT --args 'ima_appraise=fix ima_canonical_fmt ima_policy=tcb ima_template=ima-ng'
- Reboot the system to apply the new IMA policy.
Verification
Verify that the agent is running:
# systemctl status keylime_agent
● keylime_agent.service - The Keylime compute agent
   Loaded: loaded (/usr/lib/systemd/system/keylime_agent.service; enabled; preset: disabled)
   Active: active (running) since ...
Next steps
After the agent is configured on all systems you want to monitor, you can deploy Keylime to perform one or both of the following functions:
- Runtime monitoring. See Configuring Keylime for runtime monitoring.
- Measured boot attestation. See Configuring Keylime for measured boot attestation.
6.10. Configuring Keylime for runtime monitoring
To verify that the state of monitored systems is correct, the Keylime agent must be running on the monitored systems.
Because Keylime runtime monitoring uses integrity measurement architecture (IMA) to measure large numbers of files, it might have a significant impact on the performance of your system.
When provisioning the agent, you can also define a file that Keylime sends to the monitored system. Keylime encrypts the file sent to the agent, and decrypts it only if the agent’s system complies with the TPM policy and with the IMA allowlist.
You can make Keylime ignore changes to specific files or within specific directories by configuring a Keylime excludelist. The excluded files are still measured by IMA.
The allowlist and excludelist are combined into the Keylime runtime policy.
Prerequisites
You have network access to the systems where the Keylime components are configured:
- Verifier
- For more information, see Deploying Keylime verifier from a package.
- Registrar
- For more information, see Deploying Keylime registrar from a package.
- Tenant
- For more information, see Deploying Keylime tenant from a package.
- Agent
- For more information, see Deploying Keylime agent from a package.
Procedure
On the monitored system where the Keylime agent is configured and running, install the python3-keylime package, which contains the keylime-policy tool:
# dnf -y install python3-keylime
Create a runtime policy from the current state of the agent system:
# keylime-policy create runtime --ima-measurement --rootfs '/' --ramdisk-dir '/boot/' --output <policy.json>
In this command:
- Replace <policy.json> with the file name of the runtime policy.
The following directories are automatically excluded from measurement:
- /sys
- /run
- /proc
- /lost+found
- /dev
- /media
- /snap
- /mnt
- /var
- /tmp
- Optionally, you can exclude additional specific paths from measurement by adding the --excludelist <excludelist.txt> option. The excludelist accepts Python regular expressions, with one regular expression per line. See Regular expression operations at docs.python.org for the complete list of special characters.
Copy the generated runtime policy to the system where the keylime_tenant utility is configured, for example:
# scp <policy.json> root@<tenant.ip>:/root/<policy.json>
On the system where the Keylime tenant is configured, provision the agent by using the keylime_tenant utility:
# keylime_tenant --command add --targethost <agent_ip> --uuid <agent_uuid> --runtime-policy <policy.json> --cert default
- Replace <agent_ip> with the agent’s IP address.
- Replace <agent_uuid> with the agent’s UUID.
- Replace <policy.json> with the path to the Keylime runtime policy file.
With the --cert option, the tenant generates and signs a certificate for the agent by using the CA certificates and keys located in the specified directory, or the default /var/lib/keylime/ca/ directory. If the directory contains no CA certificates and keys, the tenant generates them automatically according to the configuration in the /etc/keylime/ca.conf file and saves them to the specified directory. The tenant then sends these keys and certificates to the agent.
When generating CA certificates or signing agent certificates, you might be prompted for the password to access the CA private key: Please enter the password to decrypt your keystore:.
Note: Keylime encrypts the file sent to the agent, and decrypts it only if the agent’s system complies with the TPM policy and the IMA allowlist. By default, Keylime decompresses sent .zip files.
As an example, with the following command, keylime_tenant provisions a new Keylime agent at 127.0.0.1 with UUID d432fbb3-d2f1-4a97-9ef7-75bd81c00000 and loads a runtime policy policy.json. It also generates a certificate in the default directory and sends the certificate file to the agent. Keylime decrypts the file only if the TPM policy configured in /etc/keylime/verifier.conf is satisfied:
# keylime_tenant --command add --targethost 127.0.0.1 --uuid d432fbb3-d2f1-4a97-9ef7-75bd81c00000 --runtime-policy policy.json --cert default
Note: You can stop Keylime from monitoring a node by using the keylime_tenant --command delete --uuid <agent_uuid> command. You can modify the configuration of an already registered agent by using the keylime_tenant --command update command.
Verification
- Optional: Reboot the monitored system to verify that the settings are persistent.
Verify a successful attestation of the agent:
# keylime_tenant --command cvstatus --uuid <agent_uuid>
...
{"<agent_uuid>": {"operational_state": "Get Quote"..."attestation_count": 5 ...
Replace <agent_uuid> with the agent’s UUID.
If the value of operational_state is Get Quote and attestation_count is nonzero, the attestation of this agent is successful.
If the value of operational_state is Invalid Quote or Failed, the attestation fails and the command displays output similar to the following:
{"<agent_uuid>": {"operational_state": "Invalid Quote", ... "ima.validation.ima-ng.not_in_allowlist", "attestation_count": 5, "last_received_quote": 1684150329, "last_successful_attestation": 1684150327}}
If the attestation fails, display more details in the verifier log:
# journalctl --unit keylime_verifier
keylime.tpm - INFO - Checking IMA measurement list...
keylime.ima - WARNING - File not found in allowlist: /root/bad-script.sh
keylime.ima - ERROR - IMA ERRORS: template-hash 0 fnf 1 hash 0 good 781
keylime.cloudverifier - WARNING - agent D432FBB3-D2F1-4A97-9EF7-75BD81C00000 failed, stopping polling
6.11. Configuring Keylime for measured boot attestation
When you configure Keylime for measured boot attestation, Keylime checks that the boot process on the measured system corresponds to the state you defined.
Prerequisites
You have network access to the systems where the Keylime components are configured:
- Verifier
- For more information, see Deploying Keylime verifier from a package.
- Registrar
- For more information, see Deploying Keylime registrar from a package.
- Tenant
- For more information, see Deploying Keylime tenant from a package.
- Agent
- For more information, see Deploying Keylime agent from a package.
- Unified Extensible Firmware Interface (UEFI) is enabled on the agent system.
Procedure
On the monitored system where the Keylime agent is configured and running, install the python3-keylime package, which contains the keylime-policy tool:
# dnf -y install python3-keylime
On the monitored system, generate a policy from the measured boot log of the current state of the system by using the keylime-policy tool:
# keylime-policy create measured-boot --eventlog-file /sys/kernel/security/tpm0/binary_bios_measurements --output <./measured_boot_reference_state.json>
- Replace <./measured_boot_reference_state.json> with the path where keylime-policy saves the generated policy.
- If your UEFI system does not have Secure Boot enabled, pass the --without-secureboot argument.
Important: The policy generated with keylime-policy is based on the current state of the system and is very strict. Any modification of the system, including kernel updates and system updates, changes the boot process, and the system fails the attestation.
Copy the generated policy to the system where the keylime_tenant utility is configured, for example:
# scp root@<agent_ip>:<./measured_boot_reference_state.json> <./measured_boot_reference_state.json>
On the system where the Keylime tenant is configured, provision the agent by using the keylime_tenant utility:
# keylime_tenant --command add --targethost <agent_ip> --uuid <agent_uuid> --mb_refstate <./measured_boot_reference_state.json> --cert default
- Replace <agent_ip> with the agent’s IP address.
- Replace <agent_uuid> with the agent’s UUID.
- Replace <./measured_boot_reference_state.json> with the path to the measured boot policy.
If you configure measured boot in combination with runtime monitoring, provide all the options from both use cases when entering the keylime_tenant --command add command.
Note: You can stop Keylime from monitoring a node by using the keylime_tenant --command delete --targethost <agent_ip> --uuid <agent_uuid> command. You can modify the configuration of an already registered agent by using the keylime_tenant --command update command.
Verification
Reboot the monitored system and verify a successful attestation of the agent:
# keylime_tenant --command cvstatus --uuid <agent_uuid>
...
{"<agent_uuid>": {"operational_state": "Get Quote"..."attestation_count": 5 ...
Replace <agent_uuid> with the agent’s UUID.
If the value of operational_state is Get Quote and attestation_count is nonzero, the attestation of this agent is successful.
If the value of operational_state is Invalid Quote or Failed, the attestation fails and the command displays output similar to the following:
{"<agent_uuid>": {"operational_state": "Invalid Quote", ... "ima.validation.ima-ng.not_in_allowlist", "attestation_count": 5, "last_received_quote": 1684150329, "last_successful_attestation": 1684150327}}
If the attestation fails, display more details in the verifier log:
# journalctl -u keylime_verifier
{"d432fbb3-d2f1-4a97-9ef7-75bd81c00000": {"operational_state": "Tenant Quote Failed", ... "last_event_id": "measured_boot.invalid_pcr_0", "attestation_count": 0, "last_received_quote": 1684487093, "last_successful_attestation": 0}}
6.12. Keylime environment variables
You can set Keylime environment variables to override the values from the configuration files, for example, when starting a container with the podman run command by using the -e option.
The environment variables have the following syntax:
KEYLIME_<SECTION>_<ENVIRONMENT_VARIABLE>=<value>
Where:
- <SECTION> is the section of the Keylime configuration file.
- <ENVIRONMENT_VARIABLE> is the name of the configuration option in upper case.
- <value> is the value to which you want to set the environment variable.
For example, -e KEYLIME_VERIFIER_MAX_RETRIES=6 sets the max_retries configuration option in the [verifier] section to 6.
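For example, when running a containerized verifier with podman run, you might pass overrides on the command line; the image name is illustrative:
# podman run -d \
  -e KEYLIME_VERIFIER_IP="0.0.0.0" \
  -e KEYLIME_VERIFIER_MAX_RETRIES=6 \
  <registry>/<keylime-verifier-image>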
Verifier configuration
Configuration option | Environment variable | Default value
Registrar configuration
Configuration option | Environment variable | Default value
Tenant configuration
Configuration option | Environment variable | Default value
CA configuration
Configuration option | Environment variable | Default value
Agent configuration
Configuration option | Environment variable | Default value
Logging configuration
Configuration option | Environment variable | Default value
Chapter 7. Checking integrity with AIDE
Advanced Intrusion Detection Environment (AIDE) is a utility that creates a database of files on the system, and then uses that database to ensure file integrity and detect system intrusions.
7.1. Installing AIDE
To start file-integrity checking with AIDE, you must install the corresponding package and initiate the AIDE database.
Prerequisites
- The AppStream repository is enabled.
Procedure
Install the aide package:
# dnf install aide
Generate an initial database:
# aide --init
- Optional: In the default configuration, the aide --init command checks just a set of directories and files defined in the /etc/aide.conf file. To include additional directories or files in the AIDE database, and to change their watched parameters, edit /etc/aide.conf accordingly.
To start using the database, remove the .new substring from the initial database file name:
# mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
- Optional: To change the location of the AIDE database, edit the /etc/aide.conf file and modify the DBDIR value. For additional security, store the database, configuration, and the /usr/sbin/aide binary file in a secure location such as read-only media.
7.2. Performing integrity checks with AIDE
You can use the crond
service to schedule regular file-integrity checks with AIDE.
Prerequisites
- AIDE is properly installed and its database is initialized. See Installing AIDE.
Procedure
To initiate a manual check:
# aide --check
At a minimum, configure the system to run AIDE weekly. Optimally, run AIDE daily. For example, to schedule a daily execution of AIDE at 04:05 a.m. by using the cron command, add the following line to the /etc/crontab file:
05 4 * * * root /usr/sbin/aide --check
7.3. Updating an AIDE database
After verifying changes to your system, such as package updates or configuration file adjustments, also update your baseline AIDE database.
Prerequisites
- AIDE is properly installed and its database is initialized. See Installing AIDE.
Procedure
Update your baseline AIDE database:
# aide --update
The aide --update command creates the /var/lib/aide/aide.db.new.gz database file.
- To start using the updated database for integrity checks, remove the .new substring from the file name.
7.4. File-integrity tools: AIDE and IMA
Red Hat Enterprise Linux provides several tools for checking and preserving the integrity of files and directories on your system. The following table helps you decide which tool better fits your scenario.
Question | Advanced Intrusion Detection Environment (AIDE) | Integrity Measurement Architecture (IMA) |
---|---|---
What | AIDE is a utility that creates a database of files and directories on the system. This database serves for checking file integrity and detecting intrusions. | IMA detects if a file is altered by checking file measurements (hash values) compared to previously stored extended attributes.
How | AIDE uses rules to compare the integrity state of the files and directories. | IMA uses file hash values to detect the intrusion. |
Why | Detection - AIDE detects if a file is modified by verifying the rules. | Detection and Prevention - IMA detects and prevents an attack by replacing the extended attribute of a file. |
Usage | AIDE detects a threat when the file or directory is modified. | IMA detects a threat when someone tries to alter the entire file. |
Extension | AIDE checks the integrity of files and directories on the local system. | IMA ensures security on the local and remote systems. |
7.5. Configuring file integrity checks with the aide RHEL system role
You can configure Advanced Intrusion Detection Environment (AIDE) consistently across multiple systems by using the aide
RHEL system role. The role automatically installs the aide
package on all managed nodes, and depending on your configuration, it can perform the following actions:
- Initialize the AIDE database and store it on the control node
- Run AIDE integrity checks on the managed nodes
- Update the AIDE database and store it on the control node
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
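A minimal sketch of such a playbook, assuming the rhel-system-roles.aide role name from the README path referenced below and an illustrative host name, might look like this:
---
- name: Configure file integrity checking with AIDE
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure AIDE
      ansible.builtin.include_role:
        name: rhel-system-roles.aide
      vars:
        aide_db_fetch_dir: files
        aide_check: false
        aide_update: false
        aide_cron_check: true
        aide_cron_interval: '0 12 * * *'
The settings specified in the example playbook include the following: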
aide_db_fetch_dir: files
- Specifies the directory on the Ansible Control Node (ACN) for storing the AIDE database fetched from the remote nodes. With the default files value, the role stores the database in the same directory as the playbook. To store the database files somewhere else, specify a different path.
aide_check: false
- If set to true, runs an integrity check on the remote nodes.
aide_update: false
- If set to true, updates the AIDE database and stores it on the control node.
aide_cron_check: true
- Configures a periodic cron job that activates AIDE integrity checks on the managed nodes.
aide_cron_interval: 0 12 * * *
- Sets the interval for the cron job in the format <minute> <hour> <day_of_month> <month> <day_of_week>. The value 0 12 * * * sets it to run every day at noon.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.aide/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Chapter 8. Managing sudo access
System administrators can grant sudo access to allow non-root users to execute administrative commands that are normally reserved for the root user. As a result, non-root users can execute such commands without logging in to the root user account.
8.1. User authorizations in sudoers
The /etc/sudoers file specifies which users can use the sudo command to execute other commands. The rules can apply to individual users and user groups. You can also define rules for groups of hosts, commands, and even users more easily by using aliases. Default aliases are defined in the first part of the /etc/sudoers file.
When a user enters a command with sudo for which the user does not have authorization, the system records a message that contains the string <username> : user NOT in sudoers in the journal log.
The default /etc/sudoers file provides information and examples of authorizations. You can activate a specific example rule by uncommenting the corresponding line. The section with user authorizations is marked with the following introduction:
## Next comes the main part: which users can run what software on
## which machines (the sudoers file can be shared between multiple
## systems).
You can create new sudoers
authorizations and modify existing authorizations by using the following format:
<username> <hostname.example.com>=(<run_as_user>:<run_as_group>) <path/to/command>
Where:
- <username> is the user that enters the command, for example, user1. If the value starts with %, it defines a group, for example, %group1.
- <hostname.example.com> is the name of the host on which the rule applies.
- The section (<run_as_user>:<run_as_group>) defines the user or group as which the command is executed. If you omit this section, <username> can execute the command as root.
- <path/to/command> is the complete absolute path to the command. You can also limit the user to only performing a command with specific options and arguments by adding those options after the command path. If you do not specify any options, the user can use the command with all options.
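For example, the following rule with illustrative names allows user1 to restart the httpd service as root on host1.example.com:
user1 host1.example.com = (root) /usr/bin/systemctl restart httpd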
You can apply the rule to all users, hosts, or commands by replacing any of these variables with ALL.
With overly permissive rules, such as ALL ALL=(ALL) ALL, all users can run all commands as all users on all hosts. This presents serious security risks.
You can specify the arguments negatively by using the ! operator. For example, !root specifies all users except root. Note that allowing specific users, groups, and commands is more secure than disallowing specific users, groups, and commands. This is because allow rules also block new unauthorized users or groups.
Avoid using negative rules for commands because users can overcome such rules by renaming commands with the alias command.
The system reads the /etc/sudoers file from beginning to end. Therefore, if the file contains multiple entries for a user, the entries are applied in order. In case of conflicting values, the system uses the last match, even if it is not the most specific match.
To preserve the rules during system updates and for easier fixing of errors, enter new rules by creating new files in the /etc/sudoers.d/ directory instead of entering rules directly to the /etc/sudoers file. The system reads the files in the /etc/sudoers.d directory when it reaches the following line in the /etc/sudoers file:
#includedir /etc/sudoers.d
Note that the number sign (#) at the beginning of this line is part of the syntax and does not mean the line is a comment. The names of files in that directory must not contain a period and must not end with a tilde (~).
8.2. Granting sudo access to a user
System administrators can allow non-root users to execute administrative commands by granting them sudo access. The sudo command provides users with administrative access without using the password of the root user.
When users need to perform an administrative command, they can precede that command with sudo. If the user has authorization for the command, the command is executed as if they were root.
Be aware of the following limitations:
- Only users listed in the /etc/sudoers configuration file can use the sudo command.
- The command is executed in the shell of the user, not in the root shell. However, there are some exceptions, such as when full sudo privileges are granted to a user. In such cases, users can switch to and run the commands in the root shell. For example:
  - sudo -i
  - sudo su -
Prerequisites
- You have root access to the system.
Procedure
As root, open the /etc/sudoers file:
# visudo
The /etc/sudoers file defines the policies applied by the sudo command.
In the /etc/sudoers file, find the lines that grant sudo access to users in the administrative wheel group:
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
- Make sure the line that starts with %wheel is not commented out with the number sign (#).
- Save any changes, and exit the editor.
Add the users you want to grant sudo access to the administrative wheel group:
# usermod --append -G wheel <username>
Replace <username> with the name of the user.
Verification
Verify that the user is in the administrative wheel group:
# id <username>
uid=5000(<username>) gid=5000(<username>) groups=5000(<username>),10(wheel)
8.3. Enabling unprivileged users to run certain commands
As an administrator, you can allow unprivileged users to enter certain commands on specific workstations by configuring a policy in the /etc/sudoers.d/ directory. This is more secure than granting full sudo access to a user or giving someone the root password for the following reasons:
- More granular control over privileged actions. You can allow a user to perform certain actions on specific hosts instead of giving them full administrative access.
- Better logging. When a user performs an action through sudo, the action is logged with their user name and not just root.
- Transparent control. You can set email notifications for every time the user attempts to use sudo privileges.
Prerequisites
- You have root access to the system.
Procedure
As root, create a new sudoers.d/ directory under /etc/:
# mkdir -p /etc/sudoers.d/
Create a new file in the /etc/sudoers.d/ directory:
# visudo -f /etc/sudoers.d/<filename>
The file opens automatically.
Add the following line to the /etc/sudoers.d/<filename> file:
<username> <hostname.example.com> = (<run_as_user>:<run_as_group>) <path/to/command>
- Replace <username> with the name of the user.
- Replace <hostname.example.com> with the name of the host.
- Replace (<run_as_user>:<run_as_group>) with the user or group as which the command can be executed. If you omit this section, <username> can execute the command as root.
- Replace <path/to/command> with the complete absolute path to the command. You can also limit the user to only performing a command with specific options and arguments by adding those options after the command path. If you do not specify any options, the user can use the command with all options. To allow two or more commands on the same host on one line, you can list them separated by a comma followed by a space.
For example, to allow user1 to execute the dnf and reboot commands on host1.example.com, enter user1 host1.example.com = /bin/dnf, /sbin/reboot.
Optional: To receive email notifications every time the user attempts to use sudo privileges, add the following lines to the file:
Defaults mail_always
Defaults mailto="<email@example.com>"
- Save the changes, and exit the editor.
Verification
To verify if a user can run a command with sudo privileges, switch the account:
# su <username> -
As the user, enter the command with the sudo command:
$ sudo <command>
[sudo] password for <username>:
Enter the user’s sudo password.
If the privileges are configured correctly, the system displays the list of commands and options. For example, with the dnf command, it shows the following output:
…
usage: dnf [options] COMMAND
…
If the system returns the error message <username> is not in the sudoers file. This incident will be reported, the file for <username> in /etc/sudoers.d/ does not exist.
If the system returns the error message <username> is not allowed to run sudo on <host.example.com>, the configuration was not completed correctly. Ensure that you are logged in as root and that the configuration was performed correctly.
If the system returns the error message Sorry, user <username> is not allowed to execute '<path/to/command>' as root on <host.example.com>., the command is not correctly defined in the rule for the user.
8.4. Applying custom sudoers configuration by using RHEL system roles
You can use the sudo RHEL system role to apply custom sudoers configuration on your managed nodes. That way, you can define which users can run which commands on which hosts, with better configuration efficiency and more granular control.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
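The exact variable layout is described in the role README referenced below; a minimal sketch, assuming the sudo_sudoers_files structure and illustrative user, host, and path names, might look like this:
---
- name: Apply custom sudoers configuration
  hosts: managed-node-01.example.com
  tasks:
    - name: Allow user1 to run dnf on all hosts
      ansible.builtin.include_role:
        name: rhel-system-roles.sudo
      vars:
        sudo_sudoers_files:
          - path: /etc/sudoers.d/user1
            user_specifications:
              - users:
                  - user1
                hosts:
                  - ALL
                commands:
                  - /bin/dnf
The settings specified in the playbook include the following: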
users
- The list of users that the rule applies to.
hosts
- The list of hosts that the rule applies to. You can use ALL for all hosts.
commands
- The list of commands that the rule applies to. You can use ALL for all commands.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.sudo/README.md file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On the managed node, verify that the playbook applied the new rules:
# cat /etc/sudoers | tail -n1
<user_name> <host_name>= <path_to_command_binary>
Chapter 9. Configuring automated unlocking of encrypted volumes by using policy-based decryption
Policy-Based Decryption (PBD) is a collection of technologies that enable unlocking encrypted root and secondary volumes of hard drives on physical and virtual machines. PBD uses a variety of unlocking methods, such as user passwords, a Trusted Platform Module (TPM) device, a PKCS #11 device connected to a system, for example, a smart card, or a special network server.
PBD allows combining different unlocking methods into a policy, which makes it possible to unlock the same volume in different ways. The current implementation of the PBD in Red Hat Enterprise Linux consists of the Clevis framework and plug-ins called pins. Each pin provides a separate unlocking capability. Currently, the following pins are available:
tang
- Allows unlocking volumes by using a network server.
tpm2
- Allows unlocking volumes by using a TPM2 policy.
pkcs11
- Allows unlocking volumes by using a PKCS #11 URI.
sss
- Allows deploying high-availability systems by using the Shamir’s Secret Sharing (SSS) cryptographic scheme.
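For example, the sss pin combines other pins into one policy. The following sketch with illustrative URLs encrypts data so that any one of two Tang servers can decrypt it:
$ echo hello | clevis encrypt sss '{"t":1,"pins":{"tang":[{"url":"http://tang1.srv"},{"url":"http://tang2.srv"}]}}' > secret.jwe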
9.1. Network-bound disk encryption
Network-Bound Disk Encryption (NBDE) is a subcategory of Policy-Based Decryption (PBD) that allows binding encrypted volumes to a special network server. The current implementation of NBDE includes a Clevis pin for the Tang server and the Tang server itself.
In RHEL, NBDE is implemented through the following components and technologies:
Figure 9.1. NBDE scheme when using a LUKS1-encrypted volume. The luksmeta package is not used for LUKS2 volumes.
Tang is a server for binding data to network presence. It makes a system containing your data available when the system is bound to a certain secure network. Tang is stateless and does not require TLS or authentication. Unlike escrow-based solutions, where the server stores all encryption keys and has knowledge of every key ever used, Tang never interacts with any client keys, so it never gains any identifying information from the client.
Clevis is a pluggable framework for automated decryption. In NBDE, Clevis provides automated unlocking of LUKS volumes. The clevis package provides the client side of the feature.
A Clevis pin is a plug-in into the Clevis framework. One such pin implements interactions with the NBDE server, Tang.
Clevis and Tang are generic client and server components that provide network-bound encryption. In RHEL, they are used in conjunction with LUKS to encrypt and decrypt root and non-root storage volumes to accomplish Network-Bound Disk Encryption.
Both client- and server-side components use the José library to perform encryption and decryption operations.
When you begin provisioning NBDE, the Clevis pin for Tang server gets a list of the Tang server’s advertised asymmetric keys. Alternatively, since the keys are asymmetric, a list of Tang’s public keys can be distributed out of band so that clients can operate without access to the Tang server. This mode is called offline provisioning.
The Clevis pin for Tang uses one of the public keys to generate a unique, cryptographically-strong encryption key. Once the data is encrypted by using this key, the key is discarded. The Clevis client should store the state produced by this provisioning operation in a convenient location. This process of encrypting data is the provisioning step.
LUKS version 2 (LUKS2) is the default disk-encryption format in RHEL; hence, the provisioning state for NBDE is stored as a token in the LUKS2 header. The luksmeta package is used to store the provisioning state only for volumes encrypted with LUKS1.
The Clevis pin for Tang supports both LUKS1 and LUKS2 without any need to specify the version. Clevis can encrypt plain-text files, but you have to use the cryptsetup tool for encrypting block devices.
When the client is ready to access its data, it loads the metadata produced in the provisioning step and it responds to recover the encryption key. This process is the recovery step.
In NBDE, Clevis binds a LUKS volume by using a pin so that it can be automatically unlocked. After successful completion of the binding process, the disk can be unlocked using the provided Dracut unlocker.
If the kdump kernel crash dumping mechanism is set to save the content of the system memory to a LUKS-encrypted device, you are prompted to enter a password during the second kernel boot.
9.2. Deploying a Tang server with SELinux in enforcing mode
You can use a Tang server to automatically unlock LUKS-encrypted volumes on Clevis-enabled clients. In the minimalistic scenario, you deploy a Tang server on port 80 by installing the tang
package and entering the systemctl enable tangd.socket --now
command. The following example procedure demonstrates the deployment of a Tang server running on a custom port as a confined service in SELinux enforcing mode.
Prerequisites
- The policycoreutils-python-utils package and its dependencies are installed.
- The firewalld service is running.
Procedure
To install the tang package and its dependencies, enter the following command as root:
# dnf install tang
Pick an unoccupied port, for example, 7500/tcp, and allow the tangd service to bind to that port:
# semanage port -a -t tangd_port_t -p tcp 7500
Note that a port can be used only by one service at a time, and thus an attempt to use an already occupied port results in the ValueError: Port already defined error message.
Open the port in the firewall:
# firewall-cmd --add-port=7500/tcp
# firewall-cmd --runtime-to-permanent
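Optionally, verify the SELinux port assignment from the earlier semanage step; the column alignment is illustrative:
# semanage port -l | grep tangd
tangd_port_t                   tcp      7500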
Enable the tangd service:
# systemctl enable tangd.socket
Create an override file:
# systemctl edit tangd.socket
In the following editor screen, which opens an empty override.conf file located in the /etc/systemd/system/tangd.socket.d/ directory, change the default port for the Tang server from 80 to the previously picked number by adding the following lines:
[Socket]
ListenStream=
ListenStream=7500
Important: Insert the previous code snippet between the lines starting with # Anything between here and # Lines below this, otherwise the system discards your changes.
- Save the changes and exit the editor. In the default vi editor, you can do that by pressing Esc to switch into command mode, entering :wq, and pressing Enter.
Reload the changed configuration:
# systemctl daemon-reload
Check that your configuration is working:
# systemctl show tangd.socket -p Listen
Listen=[::]:7500 (Stream)
Start the tangd service:
# systemctl restart tangd.socket
Because tangd uses the systemd socket activation mechanism, the server starts as soon as the first connection comes in. A new set of cryptographic keys is automatically generated at the first start. To perform cryptographic operations such as manual key generation, use the jose utility.
Verification
On your NBDE client, verify that your Tang server works correctly by using the following command. The command must return the identical message you pass for encryption and decryption:
# echo test | clevis encrypt tang '{"url":"<tang.server.example.com:7500>"}' -y | clevis decrypt
test
9.3. Rotating Tang server keys and updating bindings on clients
For security reasons, rotate your Tang server keys and update existing bindings on clients periodically. The precise interval at which you should rotate them depends on your application, key sizes, and institutional policy.
Alternatively, you can rotate Tang keys by using the nbde_server
RHEL system role. See Using the nbde_server system role for setting up multiple Tang servers for more information.
Prerequisites
- A Tang server is running.
- The clevis and clevis-luks packages are installed on your clients.
Procedure
Rename all keys in the /var/db/tang key database directory to have a leading . to hide them from advertisement. Note that the file names in the following example differ from the unique file names in the key database directory of your Tang server.
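For example, with hypothetical key file names:
# cd /var/db/tang
# mv UV6dqXSwe1bRKG3KbJmdiR020hY.jwk .UV6dqXSwe1bRKG3KbJmdiR020hY.jwk
# mv y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk .y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk
Check that you renamed and therefore hid all keys from the Tang server advertisement: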
# ls -l
total 0
/usr/libexec/tangd-keygen
command in/var/db/tang
on the Tang server:/usr/libexec/tangd-keygen /var/db/tang ls /var/db/tang
# /usr/libexec/tangd-keygen /var/db/tang # ls /var/db/tang 3ZWS6-cDrCG61UPJS2BMmPU4I54.jwk zyLuX6hijUy_PSeUEFDi7hi38.jwk
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check that your Tang server advertises the signing key from the new key pair, for example:
tang-show-keys 7500
# tang-show-keys 7500 3ZWS6-cDrCG61UPJS2BMmPU4I54
On your NBDE clients, use the clevis luks report command to check whether the keys advertised by the Tang server remain the same. You can identify the slots with the relevant binding by using the clevis luks list command, for example:
# clevis luks list -d /dev/sda2
1: tang '{"url":"http://tang.srv:port"}'
# clevis luks report -d /dev/sda2 -s 1
To regenerate LUKS metadata for the new keys, either press y at the prompt of the previous command, or use the clevis luks regen command:
# clevis luks regen -d /dev/sda2 -s 1
When you are sure that all old clients use the new keys, you can remove the old keys from the Tang server, for example:
# cd /var/db/tang
# rm .*.jwk
Removing the old keys while clients are still using them can result in data loss. If you accidentally remove such keys, use the clevis luks regen command on the clients, and provide your LUKS password manually.
9.4. Configuring automated unlocking by using a Tang key in the web console
You can configure automated unlocking of a LUKS-encrypted storage device by using a key provided by a Tang server.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The cockpit-storaged and clevis-luks packages are installed on your system.
The
cockpit.socket
service is running at port 9090. - A Tang server is available. See Deploying a Tang server with SELinux in enforcing mode for details.
- You have root privileges or permissions to enter administrative commands with sudo.
Procedure
- Log in to the RHEL 10 web console.
- Switch to administrative access, provide your credentials, and in the Storage table, click the disk that contains an encrypted volume you plan to unlock automatically.
- On the following page with details of the selected disk, click + in the Keys section to add a Tang key.
- Select Tang keyserver as Key source, provide the address of your Tang server, and a password that unlocks the LUKS-encrypted device. Click to confirm.
The following dialog window provides a command to verify that the key hash matches.
In a terminal on the Tang server, use the tang-show-keys command to display the key hash for comparison. In this example, the Tang server is running on port 7500:
# tang-show-keys 7500
x100_1k6GPiDOaMlL3WbpCjHOy9ul1bSfdhI3M08wO0
Click to confirm when the key hashes in the web console and in the output of the previously listed commands are the same.
After you select an encrypted root file system and a Tang server, you can skip adding the rd.neednet=1 parameter to the kernel command line, installing the clevis-dracut package, and regenerating an initial RAM disk (initrd). For non-root file systems, the web console now enables the remote-cryptsetup.target and clevis-luks-askpass.path systemd units, installs the clevis-systemd package, and adds the _netdev parameter to the fstab and crypttab configuration files.
Verification
Check that the newly added Tang key is now listed in the Keys section with the Keyserver type.
Verify that the bindings are available for the early boot, for example:
# lsinitrd | grep clevis-luks
lrwxrwxrwx 1 root root 48 Jan 4 02:56 etc/systemd/system/cryptsetup.target.wants/clevis-luks-askpass.path -> /usr/lib/systemd/system/clevis-luks-askpass.path
…
9.5. Basic NBDE and TPM2 encryption-client operations
The Clevis framework can encrypt plain-text files and decrypt both ciphertexts in the JSON Web Encryption (JWE) format and LUKS-encrypted block devices. Clevis clients can use either Tang network servers or Trusted Platform Module 2.0 (TPM 2.0) chips for cryptographic operations.
The following commands demonstrate the basic functionality provided by Clevis on examples containing plain-text files. You can also use them for troubleshooting your NBDE or Clevis+TPM deployments.
Encryption client bound to a Tang server
To check that a Clevis encryption client binds to a Tang server, use the clevis encrypt tang sub-command:
$ clevis encrypt tang '{"url":"http://tang.srv:port"}' < input-plain.txt > secret.jwe
Change the http://tang.srv:port URL in the previous example to match the URL of the server where tang is installed. The secret.jwe output file contains your encrypted cipher text in the JWE format. This cipher text is read from the input-plain.txt input file.
Alternatively, if your configuration requires non-interactive communication with a Tang server without SSH access, you can download an advertisement and save it to a file:
$ curl -sfg http://tang.srv:port/adv -o adv.jws
Use the advertisement in the adv.jws file for any following tasks, such as encryption of files or messages:
$ echo 'hello' | clevis encrypt tang '{"url":"http://tang.srv:port","adv":"adv.jws"}'
clevis decrypt
command and provide the cipher text (JWE):clevis decrypt < secret.jwe > output-plain.txt
$ clevis decrypt < secret.jwe > output-plain.txt
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Encryption client using TPM 2.0
To encrypt using a TPM 2.0 chip, use the clevis encrypt tpm2 sub-command with the only argument in the form of the JSON configuration object:
$ clevis encrypt tpm2 '{}' < input-plain.txt > secret.jwe
Copy to Clipboard Copied! Toggle word wrap Toggle overflow To choose a different hierarchy, hash, and key algorithms, specify configuration properties, for example:
$ clevis encrypt tpm2 '{"hash":"sha256","key":"rsa"}' < input-plain.txt > secret.jwe
To decrypt the data, provide the ciphertext in the JSON Web Encryption (JWE) format:
$ clevis decrypt < secret.jwe > output-plain.txt
The pin also supports sealing data to a Platform Configuration Register (PCR) state. That way, the data can be unsealed only if the PCR hash values match the policy used when sealing.
For example, to seal the data to PCRs with indexes 0 and 7 in the SHA-256 bank:
$ clevis encrypt tpm2 '{"pcr_bank":"sha256","pcr_ids":"0,7"}' < input-plain.txt > secret.jwe
Hashes in PCRs can be rewritten, and then you can no longer unlock your encrypted volume. For this reason, add a strong passphrase that enables you to unlock the encrypted volume manually even when a value in a PCR changes.
If the system cannot automatically unlock your encrypted volume after an upgrade of the shim-x64
package, follow the steps in the Clevis TPM2 no longer decrypts LUKS devices after a restart Red Hat Knowledgebase solution.
9.6. Configuring NBDE clients for automated unlocking of LUKS-encrypted volumes
With the Clevis framework, you can configure clients for automated unlocking of LUKS-encrypted volumes when a selected Tang server is available. This creates an NBDE (Network-Bound Disk Encryption) deployment.
Prerequisites
- A Tang server is running and available.
Procedure
To automatically unlock an existing LUKS-encrypted volume, install the
clevis-luks
subpackage:
# dnf install clevis-luks
Identify the LUKS-encrypted volume for PBD. In the following example, the block device is referred to as /dev/sda2:
Bind the volume to a Tang server using the
clevis luks bind
command:
# clevis luks bind -d /dev/sda2 tang '{"url":"http://tang.srv:port"}'
This command performs four steps:
- Creates a new key with the same entropy as the LUKS master key.
- Encrypts the new key with Clevis.
- Stores the Clevis JWE object in the LUKS2 header token or uses LUKSMeta if the non-default LUKS1 header is used.
- Enables the new key for use with LUKS.
Note: The binding procedure assumes that the header contains at least one free LUKS password slot. The
clevis luks bind
command takes one of the slots.
Now, you can unlock the volume with your existing password as well as with the Clevis policy.
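As a quick check at this point, you can inspect the device header directly; this is an illustrative sketch that assumes the /dev/sda2 device from the previous steps, and the exact output depends on your system:
# cryptsetup luksDump /dev/sda2
For a LUKS2 device, the output lists an additional keyslot and a clevis token alongside your original passphrase slot.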
To enable the early boot system to process the disk binding, use the
dracut
tool on an already installed system. In RHEL, Clevis produces a genericinitrd
(initial RAM disk) without host-specific configuration options and does not automatically add parameters such asrd.neednet=1
to the kernel command line. If your configuration relies on a Tang pin that requires network during early boot, use the--hostonly-cmdline
argument so that dracut adds rd.neednet=1 when it detects a Tang binding:
Install the
clevis-dracut
package:
# dnf install clevis-dracut
Regenerate the initial RAM disk:
# dracut -fv --regenerate-all --hostonly-cmdline
Alternatively, create a .conf file in the
/etc/dracut.conf.d/
directory, and add thehostonly_cmdline=yes
option to the file. Then, you can usedracut
without--hostonly-cmdline
, for example:
# echo "hostonly_cmdline=yes" > /etc/dracut.conf.d/clevis.conf
# dracut -fv --regenerate-all
You can also ensure that networking for a Tang pin is available during early boot by using the
grubby
tool on the system where Clevis is installed:
# grubby --update-kernel=ALL --args="rd.neednet=1"
Verification
To verify that the Clevis JWE object is successfully placed in a LUKS header, use the
clevis luks list
command:
# clevis luks list -d /dev/sda2
1: tang '{"url":"http://tang.srv:port"}'
Check that the bindings are available for the early boot, for example:
# lsinitrd | grep clevis-luks
lrwxrwxrwx 1 root root 48 Jan 4 02:56 etc/systemd/system/cryptsetup.target.wants/clevis-luks-askpass.path -> /usr/lib/systemd/system/clevis-luks-askpass.path
…
9.7. Configuring NBDE clients with static IP configuration
To use NBDE for clients with static IP configuration (without DHCP), you must pass your network configuration to the dracut
tool manually.
Prerequisites
- A Tang server is running and available.
The NBDE client is configured for automated unlocking of encrypted volumes by the Tang server.
For details, see Configuring NBDE clients for automated unlocking of LUKS-encrypted volumes.
Procedure
You can provide your static network configuration as a value for the
kernel-cmdline
option in adracut
command, for example:
# dracut -fv --regenerate-all --kernel-cmdline "ip=192.0.2.10::192.0.2.1:255.255.255.0::ens3:none nameserver=192.0.2.100"
Alternatively, create a .conf file in the
/etc/dracut.conf.d/
directory with the static network information, and then regenerate the initial RAM disk image:
# cat /etc/dracut.conf.d/static_ip.conf
kernel_cmdline="ip=192.0.2.10::192.0.2.1:255.255.255.0::ens3:none nameserver=192.0.2.100"
# dracut -fv --regenerate-all
9.8. Configuring manual enrollment of LUKS-encrypted volumes by using a TPM 2.0 policy
You can configure unlocking of LUKS-encrypted volumes by using a Trusted Platform Module 2.0 (TPM 2.0) policy. The clevis-encrypt-tpm2(1)
man page on your system provides a complete reference of available parameters.
Prerequisites
- An accessible TPM 2.0-compatible device.
- A system with the 64-bit Intel or 64-bit AMD architecture.
Procedure
Install the
clevis-luks
subpackage:
# dnf install clevis-luks
Identify the LUKS-encrypted volume for PBD. In the following example, the block device is referred to as /dev/sda2:
Bind the volume to a TPM 2.0 device with the
clevis luks bind
command, for example:
# clevis luks bind -d /dev/sda2 tpm2 '{"hash":"sha256","key":"rsa"}'
…
Do you wish to initialize /dev/sda2? [yn] y
Enter existing LUKS password:
This command performs four steps:
- Creates a new key with the same entropy as the LUKS master key.
- Encrypts the new key with Clevis.
- Stores the Clevis JWE object in the LUKS2 header token or uses LUKSMeta if the non-default LUKS1 header is used.
- Enables the new key for use with LUKS.
Note: The binding procedure assumes that there is at least one free LUKS password slot. The
clevis luks bind
command takes one of the slots.
Alternatively, if you want to seal data to specific Platform Configuration Register (PCR) states, add the
pcr_bank
andpcr_ids
values to theclevis luks bind
command, for example:
# clevis luks bind -d /dev/sda2 tpm2 '{"hash":"sha256","key":"rsa","pcr_bank":"sha256","pcr_ids":"0,1"}'
Important: Because the data can be unsealed only if the PCR hash values match the policy used when sealing, and the hashes can be rewritten, add a strong passphrase that enables you to unlock the encrypted volume manually when a value in a PCR changes.
If the system cannot automatically unlock your encrypted volume after an upgrade of the
shim-x64
package, follow the steps in the Clevis TPM2 no longer decrypts LUKS devices after a restart Red Hat Knowledgebase solution.
- The volume can now be unlocked with your existing password as well as with the Clevis policy.
To enable the early boot system to process the disk binding, use the
dracut
tool on an already installed system:
# dnf install clevis-dracut
# dracut -fv --regenerate-all
Verification
To verify that the Clevis JWE object is successfully placed in a LUKS header, use the
clevis luks list
command:
# clevis luks list -d /dev/sda2
1: tpm2 '{"hash":"sha256","key":"rsa"}'
9.9. Configuring unlocking of LUKS-encrypted volumes by using a PKCS #11 pin
You can configure unlocking of LUKS-encrypted volumes by using a device compatible with PKCS #11, which can be either a smart card or a hardware security module (HSM).
Automated unlocking of encrypted volumes with a Clevis PKCS #11 pin also requires changes in the /etc/crypttab
file, which configures the systemd
manager to use an AF_UNIX
socket to wait for the passphrase for unlocking the volumes instead of prompting the user through the console.
The Clevis PKCS #11 unit file configures the socket in the /run/systemd/clevis-pkcs11.sock
file for sending and receiving the information about disk unlocking. For disks unlocked through a Clevis PKCS #11 pin, you must configure the socket file as a key file.
Prerequisites
- The PKCS #11 device is already configured and accessible.
-
The
clevis-pin-pkcs11
package is installed. -
At least one free LUKS password slot for the
clevis luks bind
command.
Procedure
Identify the LUKS-encrypted volume for PBD. In the following example, the block device is referred to as /dev/sda2:
Identify the URI of the PKCS #11 device you want to use for unlocking volumes, for example:
$ pkcs11-tool -L | grep uri
uri : pkcs11:model=PKCS%2315%20emulated;manufacturer=piv_II;serial=42facd1f749ece7f;token=clevis
uri : pkcs11:model=PKCS%2315%20emulated;manufacturer=OpenPGP%20project;serial=000f06080f4f;token=OpenPGP%20card%20%28User%20PIN%29
Bind the volume to a PKCS #11 device with the
clevis luks bind
command, for example:
# clevis luks bind -d /dev/sda2 pkcs11 '{"uri":"pkcs11:model=PKCS%2315%20emulated;manufacturer=OpenPGP%20project;serial=000f06080f4f;token=OpenPGP%20card%20%28User%20PIN%29;id=%03;object=Authentication%20key;type=public"}'
…
Do you wish to initialize /dev/sda2? [yn] y
Enter existing LUKS password:
This command performs the following steps:
- Creates a new key with the same entropy as the LUKS master key.
- Encrypts the new key with Clevis.
- Stores the Clevis JWE object in the LUKS2 header token or uses LUKSMeta if the non-default LUKS1 header is used.
- Enables the new key for use with LUKS.
Optionally: If your scenario requires specifying the module to use, add the module-path URI parameter:
# clevis luks bind -d /dev/sda2 pkcs11 '{"uri":"pkcs11:module-path=/usr/lib64/libykcs11.so.2;model=PKCS%2315%20emulated;manufacturer=OpenPGP%20project;serial=000f06080f4f;token=OpenPGP%20card%20%28User%20PIN%29;id=%03;object=Authentication%20key;type=public"}'
Enable the
clevis-luks-pkcs11-askpass.socket
unit:
# systemctl enable --now clevis-luks-pkcs11-askpass.socket
Open the
/etc/crypttab
file in a text editor and identify the line containing the LUKS-encrypted volume you want to unlock by the PKCS #11 pin, for example:
luks-6e38d5e1-7f83-43cc-819a-7416bcbf9f84 UUID=6e38d5e1-7f83-43cc-819a-7416bcbf9f84 - -
Replace the dashes with the
/run/systemd/clevis-pkcs11.sock
file path and thekeyfile-timeout
option:
luks-6e38d5e1-7f83-43cc-819a-7416bcbf9f84 UUID=6e38d5e1-7f83-43cc-819a-7416bcbf9f84 /run/systemd/clevis-pkcs11.sock keyfile-timeout=30s
The
keyfile-timeout
option provides a fall-through mechanism for when an unlocking error occurs and the system requires entering the passphrase manually through the console.
- Save the changes, and exit the editor.
To enable the early boot system to process the disk binding, which is required for unlocking root file systems, use the
dracut
tool on an already installed system:dracut -fv --regenerate-all
# dracut -fv --regenerate-all
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Restart the system.
During the following boot process, the system prompts for the PKCS #11 device PIN and decrypts the corresponding encrypted disk only if you enter the correct PIN.
Verification
Instead of manually testing the boot process, you can encrypt and decrypt a text message with the following command:
echo "top secret" | clevis encrypt pkcs11 '{"uri":"pkcs11:module-path=/usr/lib64/libykcs11.so.2?pin-value=<PIN>"}' | clevis decrypt
# echo "top secret" | clevis encrypt pkcs11 '{"uri":"pkcs11:module-path=/usr/lib64/libykcs11.so.2?pin-value=<PIN>"}' | clevis decrypt
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Replace
<PIN>
with a PIN value. You must enter this PIN value to decrypt the message.
To verify that the Clevis JWE object is successfully placed in a LUKS header, use the
clevis luks list
command, for example:
# clevis luks list -d /dev/sda2
1: pkcs11 '{"uri": "pkcs11:model=PKCS%2315%20emulated;manufacturer=piv_II; serial=0a35ba26b062b9c5;token=clevis;id=%02;object=Encryption%20Key? module-path=/usr/lib64/libykcs11.so.2"}'
9.10. Removing a Clevis pin from a LUKS-encrypted volume manually
You can manually remove the metadata created by the clevis luks bind
command and also wipe a key slot that contains a passphrase added by Clevis.
The recommended way to remove a Clevis pin from a LUKS-encrypted volume is through the clevis luks unbind
command. The removal procedure using clevis luks unbind
consists of only one step and works for both LUKS1 and LUKS2 volumes. The following example command removes the metadata created by the binding step and wipes the key slot 1
on the /dev/sda2
device:
# clevis luks unbind -d /dev/sda2 -s 1
Prerequisites
- A LUKS-encrypted volume with a Clevis binding.
Procedure
Check which LUKS version the volume, for example
/dev/sda2
, is encrypted by, and identify a slot and a token that is bound to Clevis:
In the previous example, the Clevis token is identified by
0
and the associated key slot is1
.
In case of LUKS2 encryption, remove the token:
# cryptsetup token remove --token-id 0 /dev/sda2
If your device is encrypted by LUKS1, which is indicated by the
Version: 1
string in the output of thecryptsetup luksDump
command, perform this additional step with theluksmeta wipe
command:
# luksmeta wipe -d /dev/sda2 -s 1
Wipe the key slot containing the Clevis passphrase:
# cryptsetup luksKillSlot /dev/sda2 1
9.11. Configuring automated enrollment of LUKS-encrypted volumes by using Kickstart
You can configure an automated installation process that uses Clevis for the enrollment of LUKS-encrypted volumes.
Procedure
Instruct Kickstart to partition the disk such that LUKS encryption is enabled for all mount points, other than
/boot
, with a temporary password. The password is temporary for this step of the enrollment process.
part /boot --fstype="xfs" --ondisk=vda --size=256
part / --fstype="xfs" --ondisk=vda --grow --encrypted --passphrase=temppass
Note that OSPP-compliant systems require a more complex configuration, for example:
Install the related Clevis packages by listing them in the
%packages
section:
%packages
clevis-dracut
clevis-luks
clevis-systemd
%end
- Optional: To ensure that you can unlock the encrypted volume manually when required, add a strong passphrase to the volume before you remove the temporary passphrase.
Call
clevis luks bind
to perform binding in the%post
section. Afterward, remove the temporary password:
If your configuration relies on a Tang pin that requires network during early boot or you use NBDE clients with static IP configurations, you have to modify the
dracut
command as described in Configuring NBDE clients with static IP configuration.
Warning: The
cryptsetup luksRemoveKey
command prevents any further administration of a LUKS2 device on which you apply it. You can recover a removed master key using thedmsetup
command only for LUKS1 devices. See thedmsetup(8)
man page on your system for more information.
You can use an analogous procedure when using a TPM 2.0 policy instead of a Tang server.
9.12. Configuring automated unlocking of a LUKS-encrypted removable storage device
You can set up an automated unlocking process of a LUKS-encrypted USB storage device.
Procedure
To automatically unlock a LUKS-encrypted removable storage device, such as a USB drive, install the
clevis-udisks2
package:
# dnf install clevis-udisks2
Reboot the system, and then perform the binding step using the
clevis luks bind
command as described in Configuring NBDE clients for automated unlocking of LUKS-encrypted volumes, for example:
# clevis luks bind -d /dev/sdb1 tang '{"url":"http://tang.srv"}'
The LUKS-encrypted removable device can now be unlocked automatically in your GNOME desktop session. The device bound to a Clevis policy can also be unlocked by the
clevis luks unlock
command:
# clevis luks unlock -d /dev/sdb1
You can use an analogous procedure when using a TPM 2.0 policy instead of a Tang server.
9.13. Deploying high-availability NBDE systems
Tang provides two methods for building a high-availability deployment:
- Client redundancy (recommended)
-
Clients should be configured with the ability to bind to multiple Tang servers. In this setup, each Tang server has its own keys and clients can decrypt by contacting a subset of these servers. Clevis already supports this workflow through its
sss
plug-in. Red Hat recommends this method for a high-availability deployment. - Key sharing
-
For redundancy purposes, more than one instance of Tang can be deployed. To set up a second or any subsequent instance, install the
tang
packages and copy the key directory to the new host usingrsync
overSSH
. Note that Red Hat does not recommend this method because sharing keys increases the risk of key compromise and requires additional automation infrastructure.
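If you nevertheless choose key sharing, the copy step can look like the following sketch; the /var/db/tang directory is the default key database location used elsewhere in this chapter, and tang2.srv is a placeholder host name:
# rsync -av /var/db/tang/ root@tang2.srv:/var/db/tang/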
High-availability NBDE using Shamir’s Secret Sharing
Shamir’s Secret Sharing (SSS) is a cryptographic scheme that divides a secret into several unique parts. To reconstruct the secret, a certain number of parts is required. This number is called the threshold, and SSS is also referred to as a thresholding scheme.
Clevis provides an implementation of SSS. It creates a key and divides it into a number of pieces. Each piece is encrypted by using another pin, which can even be SSS recursively. Additionally, you define the threshold t
. If an NBDE deployment decrypts at least t
pieces, then it recovers the encryption key and the decryption process succeeds. When Clevis detects a smaller number of parts than specified in the threshold, it prints an error message.
Example 1: Redundancy with two Tang servers
The following command decrypts a LUKS-encrypted device when at least one of two Tang servers is available:
# clevis luks bind -d /dev/sda1 sss '{"t":1,"pins":{"tang":[{"url":"http://tang1.srv"},{"url":"http://tang2.srv"}]}}'
The previous command used the following configuration scheme:
{
  "t": 1,
  "pins": {
    "tang": [
      {"url": "http://tang1.srv"},
      {"url": "http://tang2.srv"}
    ]
  }
}
In this configuration, the SSS threshold t
is set to 1
and the clevis luks bind
command successfully reconstructs the secret if at least one of the two listed tang
servers is available.
Example 2: Shared secret on a Tang server and a TPM device
The following command successfully decrypts a LUKS-encrypted device when both the tang
server and the tpm2
device are available:
# clevis luks bind -d /dev/sda1 sss '{"t":2,"pins":{"tang":[{"url":"http://tang1.srv"}], "tpm2": {"pcr_ids":"0,7"}}}'
The configuration scheme with the SSS threshold t set to 2 is now:
{
  "t": 2,
  "pins": {
    "tang": [
      {"url": "http://tang1.srv"}
    ],
    "tpm2": {
      "pcr_ids": "0,7"
    }
  }
}
9.14. Deployment of virtual machines in an NBDE network
The clevis luks bind
command does not change the LUKS master key. This implies that if you create a LUKS-encrypted image for use in a virtual machine or cloud environment, all the instances that run this image share a master key. This is extremely insecure and should be avoided at all times.
This is not a limitation of Clevis but a design principle of LUKS. If your scenario requires having encrypted root volumes in a cloud, perform the installation process (usually using Kickstart) for each instance of Red Hat Enterprise Linux in the cloud as well. The images cannot be shared without also sharing a LUKS master key.
To deploy automated unlocking in a virtualized environment, use systems such as lorax
or virt-install
together with a Kickstart file (see Configuring automated enrollment of LUKS-encrypted volumes by using Kickstart) or another automated provisioning tool to ensure that each encrypted VM has a unique master key.
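As an illustrative sketch of such a per-instance installation, you might drive virt-install with the Kickstart file from the section referenced above; the names, sizes, and paths here are placeholders:
# virt-install --name nbde-vm1 --memory 2048 --vcpus 2 \
    --disk size=20 \
    --location /var/lib/libvirt/images/rhel10-install.iso \
    --initrd-inject /root/clevis-ks.cfg \
    --extra-args "inst.ks=file:/clevis-ks.cfg"
Because every run performs a full installation, each resulting virtual machine receives its own LUKS master key.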
9.15. Building automatically-enrollable VM images for cloud environments by using NBDE
Deploying automatically-enrollable encrypted images in a cloud environment can provide a unique set of challenges. As with other virtualization environments, reduce the number of instances started from a single image to avoid sharing the LUKS master key.
Therefore, the best practice is to create customized images that are not shared in any public repository and that provide a base for the deployment of a limited number of instances. Define the exact number of instances to create according to your deployment’s security policies and the risk tolerance associated with the LUKS master key attack vector.
To build LUKS-enabled automated deployments, use systems such as Lorax or virt-install together with a Kickstart file to ensure master key uniqueness during the image building process.
Cloud environments enable two Tang server deployment options which we consider here. First, the Tang server can be deployed within the cloud environment itself. Second, the Tang server can be deployed outside of the cloud on independent infrastructure with a VPN link between the two infrastructures.
Deploying Tang natively in the cloud does allow for easy deployment. However, given that it shares infrastructure with the data persistence layer of ciphertext of other systems, it may be possible for both the Tang server’s private key and the Clevis metadata to be stored on the same physical disk. Access to this physical disk permits a full compromise of the ciphertext data.
Always maintain a physical separation between the location where the data is stored and the system where Tang is running. This separation between the cloud and the Tang server ensures that the Tang server’s private key cannot be accidentally combined with the Clevis metadata. It also provides local control of the Tang server if the cloud infrastructure is at risk.
9.16. Deploying Tang as a container
The tang
container image provides Tang-server decryption capabilities for Clevis clients that run either in OpenShift Container Platform (OCP) clusters or in separate virtual machines.
Prerequisites
-
The
podman
package and its dependencies are installed on the system. -
You have logged in to the
registry.redhat.io
container catalog using thepodman login registry.redhat.io
command. See Red Hat Container Registry Authentication for more information. - The Clevis client is installed on systems containing LUKS-encrypted volumes that you want to automatically unlock by using a Tang server.
Procedure
Pull the
tang
container image from theregistry.redhat.io
registry:
# podman pull registry.redhat.io/rhel10/tang
Run the container, specify its port, and specify the path to the Tang keys. The following example runs the
tang
container, specifies port 7500, and stores the Tang keys in the /var/db/tang
directory:
# podman run -d -p 7500:7500 -v tang-keys:/var/db/tang --name tang registry.redhat.io/rhel10/tang
Note that Tang uses port 80 by default, but this may collide with other services such as the Apache HTTP server.
Optional: For increased security, rotate the Tang keys periodically. You can use the
tangd-rotate-keys
script, for example:
Verification
On a system that contains LUKS-encrypted volumes for automated unlocking when the Tang server is present, check that the Clevis client can encrypt and decrypt a plain-text message by using Tang:
The previous example command shows the
test
string at the end of its output when a Tang server is available on the localhost URL and communicates through port 7500.
9.17. Configuring NBDE by using RHEL system roles
You can use the nbde_client
and nbde_server
RHEL system roles for automated deployments of Policy-Based Decryption (PBD) solutions by using Clevis and Tang. The rhel-system-roles
package contains these system roles, the related examples, and also the reference documentation.
9.17.1. Using the nbde_server RHEL system role for setting up multiple Tang servers
By using the nbde_server
system role, you can deploy and manage a Tang server as part of an automated disk encryption solution. This role supports the following features:
- Rotating Tang keys
- Deploying and backing up Tang keys
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudo
permissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml
, with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow This example playbook ensures deploying of your Tang server and a key rotation.
The settings specified in the example playbook include the following:
nbde_server_manage_firewall: true
-
Use the
firewall
system role to manage ports used by thenbde_server
role. nbde_server_manage_selinux: true
Use the
selinux
system role to manage ports used by thenbde_server
role.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.nbde_server/README.md
file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On your NBDE client, verify that your Tang server works correctly by using the following command. The command must return the identical message you pass for encryption and decryption:
# ansible managed-node-01.example.com -m command -a 'echo test | clevis encrypt tang '{"url":"<tang.server.example.com>"}' -y | clevis decrypt'
test
9.17.2. Setting up Clevis clients with DHCP by using the nbde_client RHEL system role
The nbde_client
system role enables you to deploy multiple Clevis clients in an automated way.
This role supports binding a LUKS-encrypted volume to one or more Network-Bound Disk Encryption (NBDE) servers, that is, Tang servers. You can either preserve the existing volume encryption with a passphrase or remove it. After removing the passphrase, you can unlock the volume only by using NBDE. This is useful when a volume is initially encrypted by using a temporary key or password that you should remove after you provision the system.
If you provide both a passphrase and a key file, the role uses what you have provided first. If it does not find any of these to be valid, it attempts to retrieve a passphrase from an existing binding.
Policy-Based Decryption (PBD) defines a binding as a mapping of a device to a slot. This means that you can have multiple bindings for the same device. The default slot is slot 1.
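For example, a device bound to two Tang servers in two slots reports both bindings; this output sketch reuses the clevis luks list command and placeholder URLs from this chapter:
# clevis luks list -d /dev/sda2
1: tang '{"url":"http://tang1.srv"}'
2: tang '{"url":"http://tang2.srv"}'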
The nbde_client
system role supports only Tang bindings. Therefore, you cannot use it for TPM2 bindings.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudo
permissions on them. - A volume that is already encrypted by using LUKS.
Procedure
Create a playbook file, for example,
~/playbook.yml
, with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow This example playbook configures Clevis clients for automated unlocking of two LUKS-encrypted volumes when at least one of two Tang servers is available.
The settings specified in the example playbook include the following:
state: present
-
The values of
state
indicate the configuration after you run the playbook. Use thepresent
value for either creating a new binding or updating an existing one. Contrary to aclevis luks bind
command, you can usestate: present
also for overwriting an existing binding in its device slot. Theabsent
value removes a specified binding. nbde_client_early_boot: true
The
nbde_client
role ensures that networking for a Tang pin is available during early boot by default. If your scenario requires disabling this feature, add the
variable to your playbook.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.nbde_client/README.md
file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
On your NBDE client, check that the encrypted volume that should be automatically unlocked by your Tang servers contains the corresponding information in its LUKS pins:
# ansible managed-node-01.example.com -m command -a 'clevis luks list -d /dev/rhel/root'
1: tang '{"url":"<http://server1.example.com/>"}'
2: tang '{"url":"<http://server2.example.com/>"}'
If you do not use the
nbde_client_early_boot: false
variable, verify that the bindings are available for the early boot, for example:ansible managed-node-01.example.com -m command -a 'lsinitrd | grep clevis-luks'
# ansible managed-node-01.example.com -m command -a 'lsinitrd | grep clevis-luks' lrwxrwxrwx 1 root root 48 Jan 4 02:56 etc/systemd/system/cryptsetup.target.wants/clevis-luks-askpass.path -> /usr/lib/systemd/system/clevis-luks-askpass.path …
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
9.17.3. Setting up static-IP Clevis clients by using the nbde_client RHEL system role
The nbde_client
RHEL system role supports only scenarios with Dynamic Host Configuration Protocol (DHCP). On an NBDE client with static IP configuration, you must pass your network configuration as a kernel boot parameter.
Typically, administrators want to reuse a playbook and not maintain individual playbooks for each host to which Ansible assigns static IP addresses during early boot. In this case, you can use variables in the playbook and provide the settings in an external file. As a result, you need only one playbook and one file with the settings.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudo
permissions on them. - A volume that is already encrypted by using LUKS.
Procedure
Create a file with the network settings of your hosts, for example,
static-ip-settings-clients.yml
, and add the values you want to dynamically assign to the hosts:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create a playbook file, for example,
~/playbook.yml
, with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow This playbook reads certain values dynamically for each host listed in the
~/static-ip-settings-clients.yml
file.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.network/README.md
file on the control node.
Validate the playbook syntax:
$ ansible-playbook --syntax-check ~/playbook.yml
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Chapter 10. Blocking and allowing applications by using fapolicyd
Setting and enforcing a policy that either allows or denies application execution based on a rule set efficiently prevents the execution of unknown and potentially malicious software.
10.1. Introduction to fapolicyd
The fapolicyd
software framework controls the execution of applications based on a user-defined policy. This is one of the most efficient ways to prevent running untrusted and possibly malicious applications on the system.
The fapolicyd
framework provides the following components:
-
fapolicyd
service -
fapolicyd
command-line utilities -
fapolicyd
RPM plugin -
fapolicyd
rule language -
fagenrules
script
The administrator can define the allow
and deny
execution rules for any application with the possibility of auditing based on a path, hash, MIME type, or trust.
The fapolicyd
framework introduces the concept of trust. An application is trusted when it is properly installed by the system package manager, and therefore it is registered in the system RPM database. The fapolicyd
daemon uses the RPM database as a list of trusted binaries and scripts. The fapolicyd
RPM plugin registers any system update that is handled by either the DNF package manager or the RPM Package Manager. The plugin notifies the
daemon about changes in this database. Other ways of adding applications require the creation of custom rules and restarting the fapolicyd
service.
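To check whether a particular binary is already covered by the trust database, you can dump the database and search it; this is a brief sketch, and the fapolicyd-cli --dump-db output format can differ between versions:
# fapolicyd-cli --dump-db | grep /usr/bin/ls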
The fapolicyd
service configuration is located in the /etc/fapolicyd/
directory with the following structure:
-
The
/etc/fapolicyd/fapolicyd.trust
file contains a list of trusted files. You can also use multiple trust files in the/etc/fapolicyd/trust.d/
directory. -
The
/etc/fapolicyd/rules.d/
directory for files containingallow
anddeny
execution rules. Thefagenrules
script merges these component rule files into the
file. -
The
fapolicyd.conf
file contains the daemon’s configuration options. This file is useful primarily for performance-tuning purposes.
Rules in /etc/fapolicyd/rules.d/
are organized in several files, each representing a different policy goal. The numbers at the beginning of the corresponding file names determine the order in /etc/fapolicyd/compiled.rules
:
- 10
- Language rules.
- 20
- Dracut-related rules.
- 21
- Rules for updaters.
- 30
- Patterns.
- 40
- ELF rules.
- 41
- Shared objects rules.
- 42
- Trusted ELF rules.
- 70
- Trusted language rules.
- 72
- Shell rules.
- 90
- Deny execute rules.
- 95
- Allow open rules.
You can use one of the following ways for fapolicyd
integrity checking:
- File-size checking
- Comparing SHA-256 hashes
- Integrity Measurement Architecture (IMA) subsystem
By default, fapolicyd
does no integrity checking. Integrity checking based on the file size is fast, but an attacker can replace the content of the file and preserve its byte size. Computing and checking SHA-256 checksums is more secure, but it affects the performance of the system. The integrity = ima
option in fapolicyd.conf
requires support for file extended attributes (also known as
) on all file systems containing executable files.
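For example, hash-based integrity checking corresponds to a single option in the fapolicyd.conf file, which the Enabling fapolicyd integrity checks section later in this chapter applies step by step:
integrity = sha256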
10.2. Deploying fapolicyd
When deploying the fapolicyd
application allowlisting framework, you can either try your configuration in permissive mode first or directly enable the service in the default configuration.
Procedure
Install the
fapolicyd
package:
# dnf install fapolicyd
Set the Audit subsystem for recording
fapolicyd
events:
# auditctl -w /etc/fapolicyd/ -p wa -k fapolicyd_changes
# service try-restart auditd
Optional: To try your configuration first, change the mode to permissive.
Open the
/etc/fapolicyd/fapolicyd.conf
file in a text editor of your choice, for example:
# vi /etc/fapolicyd/fapolicyd.conf
Change the value of the
permissive
option from0
to1
, save the file, and exit the editor:
permissive = 1
Alternatively, you can debug your configuration by using the
fapolicyd --debug-deny --permissive
command before you start the service. See the Troubleshooting problems related to fapolicyd section for more information.
Enable and start the
fapolicyd
service:
# systemctl enable --now fapolicyd
If you enabled permissive mode through
/etc/fapolicyd/fapolicyd.conf
:
- Use your applications.
Check Audit logs for
fanotify
denials, for example:
# ausearch -ts recent -m fanotify
After debugging, disable permissive mode by changing the corresponding value back to
permissive = 0
, and restart the service:
# systemctl restart fapolicyd
Verification
Verify that the
fapolicyd
service is running correctly:
Log in as a user without root privileges, and check that
fapolicyd
is working, for example:
$ cp /bin/ls /tmp
$ /tmp/ls
bash: /tmp/ls: Operation not permitted
10.3. Marking files as trusted using an additional source of trust
The fapolicyd
framework trusts files contained in the RPM database. You can mark additional files as trusted by adding the corresponding entries to the /etc/fapolicyd/fapolicyd.trust
plain-text file or the /etc/fapolicyd/trust.d/
directory, which supports separating a list of trusted files into more files. You can modify fapolicyd.trust
or the files in /etc/fapolicyd/trust.d
either directly using a text editor or through fapolicyd-cli
commands.
Marking files as trusted using fapolicyd.trust
or trust.d/
is better than writing custom fapolicyd
rules for performance reasons.
Prerequisites
-
The
fapolicyd
framework is deployed on your system.
Procedure
Copy your custom binary to the required directory, for example:
$ cp /bin/ls /tmp
$ /tmp/ls
bash: /tmp/ls: Operation not permitted
Mark your custom binary as trusted, and store the corresponding entry to the
myapp
file in/etc/fapolicyd/trust.d/
:
# fapolicyd-cli --file add /tmp/ls --trust-file myapp
-
If you skip the
--trust-file
option, then the previous command adds the corresponding line to/etc/fapolicyd/fapolicyd.trust
. -
To mark all existing files in a directory as trusted, provide the directory path as an argument of the
--file
option, for example:fapolicyd-cli --file add /tmp/my_bin_dir/ --trust-file myapp
.
-
If you skip the
Update the
fapolicyd
database:
# fapolicyd-cli --update
Changing the content of a trusted file or directory changes its checksum, and therefore fapolicyd
no longer considers it trusted.
To make the new content trusted again, refresh the file trust database by using the fapolicyd-cli --file update
command. If you do not provide any argument, the entire database refreshes. Alternatively, you can specify a path to a specific file or directory. Then, update the database by using fapolicyd-cli --update
.
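For example, if you change the content of the trusted /tmp/ls file again, the following sequence, which uses the two commands named in the previous paragraph, makes the new content trusted:
# fapolicyd-cli --file update /tmp/ls
# fapolicyd-cli --update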
Verification
Check that your custom binary can be now executed, for example:
$ /tmp/ls
ls
10.4. Adding custom allow and deny rules for fapolicyd
The default set of rules in the fapolicyd
package does not affect system functions. For custom scenarios, such as storing binaries and scripts in a non-standard directory or adding applications without the dnf
or rpm
installers, you must either mark additional files as trusted or add new custom rules.
For basic scenarios, prefer Marking files as trusted using an additional source of trust. In more advanced scenarios, such as allowing the execution of a custom binary only for specific user and group identifiers, add new custom rules to the
directory.
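For instance, a custom rule restricted to one user ID might look like the following sketch; the uid value and the path are illustrative placeholders, and the syntax mirrors the allow rules shown later in this section:
allow perm=execute uid=1000 : path=/opt/myapp/bin/myapp ftype=application/x-executable trust=0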
The following steps demonstrate adding a new rule to allow a custom binary.
Prerequisites
-
The
fapolicyd
framework is deployed on your system.
Procedure
Copy your custom binary to the required directory, for example:
$ cp /bin/ls /tmp
$ /tmp/ls
bash: /tmp/ls: Operation not permitted
Stop the
fapolicyd
service:
# systemctl stop fapolicyd
Use debug mode to identify a corresponding rule. Because the output of the
fapolicyd --debug
command is verbose and you can stop it only by pressing Ctrl+C or killing the corresponding process, redirect the error output to a file. In this case, you can limit the output only to access denials by using the--debug-deny
option instead of--debug
:
# fapolicyd --debug-deny 2> fapolicy.output &
[1] 51341
Alternatively, you can run
fapolicyd
debug mode in another terminal.
Repeat the command that
fapolicyd
denied:
$ /tmp/ls
bash: /tmp/ls: Operation not permitted
Stop debug mode by resuming it in the foreground and pressing Ctrl+C:
# fg
fapolicyd --debug-deny 2> fapolicy.output
^C
Alternatively, kill the process of
fapolicyd
debug mode:
# kill 51341
Find a rule that denies the execution of your application:
# cat fapolicy.output | grep 'deny_audit'
...
rule=13 dec=deny_audit perm=execute auid=0 pid=6855 exe=/usr/bin/bash : path=/tmp/ls ftype=application/x-executable trust=0
Locate the file that contains a rule that prevented the execution of your custom binary. In this case, the
deny_audit perm=execute
rule belongs to the90-deny-execute.rules
file:
Add a new
allow
rule to the file that lexically precedes the rule file that contains the rule that denied the execution of your custom binary in the/etc/fapolicyd/rules.d/
directory:
# touch /etc/fapolicyd/rules.d/80-myapps.rules
# vi /etc/fapolicyd/rules.d/80-myapps.rules
Insert the following rule to the
80-myapps.rules
file:
allow perm=execute exe=/usr/bin/bash trust=1 : path=/tmp/ls ftype=application/x-executable trust=0
Alternatively, you can allow executions of all binaries in the
/tmp
directory by adding the following rule to the rule file in/etc/fapolicyd/rules.d/
:
allow perm=execute exe=/usr/bin/bash trust=1 : dir=/tmp/ trust=0
Important: To make a rule effective recursively on all directories under the specified directory, add a trailing slash to the value of the
dir=
parameter in the rule (/tmp/
in the previous example).
To prevent changes in the content of your custom binary, define the required rule using an SHA-256 checksum:
$ sha256sum /tmp/ls
780b75c90b2d41ea41679fcb358c892b1251b68d1927c80fbc0d9d148b25e836 /tmp/ls
Change the rule to the following definition:
allow perm=execute exe=/usr/bin/bash trust=1 : sha256hash=780b75c90b2d41ea41679fcb358c892b1251b68d1927c80fbc0d9d148b25e836
Check that the list of compiled rules differs from the rule set in
/etc/fapolicyd/rules.d/
, and update the list, which is stored in the/etc/fapolicyd/compiled.rules
file:
# fagenrules --check
/usr/sbin/fagenrules: Rules have changed and should be updated
# fagenrules --load
Check that your custom rule is in the list of
fapolicyd
rules before the rule that prevented the execution:
# fapolicyd-cli --list
...
13. allow perm=execute exe=/usr/bin/bash trust=1 : path=/tmp/ls ftype=application/x-executable trust=0
14. deny_audit perm=execute all : all
...
Start the
fapolicyd
service:
# systemctl start fapolicyd
Verification
Check that your custom binary can be now executed, for example:
$ /tmp/ls
ls
10.5. Enabling fapolicyd integrity checks
By default, fapolicyd
does not perform integrity checking. You can configure fapolicyd
to perform integrity checks by comparing either file sizes or SHA-256 hashes. You can also set integrity checks by using the Integrity Measurement Architecture (IMA) subsystem.
Prerequisites
-
The
fapolicyd
framework is deployed on your system.
Procedure
Open the
/etc/fapolicyd/fapolicyd.conf
file in a text editor of your choice, for example:
# vi /etc/fapolicyd/fapolicyd.conf
Change the value of the
integrity
option fromnone
tosha256
, save the file, and exit the editor:
integrity = sha256
Restart the
fapolicyd
service:
# systemctl restart fapolicyd
Verification
Back up the file used for the verification:
# cp /bin/more /bin/more.bak
Change the content of the
/bin/more
binary:
# cat /bin/less > /bin/more
Use the changed binary as a regular user:
# su example.user
$ /bin/more /etc/redhat-release
bash: /bin/more: Operation not permitted
Revert the changes:
# mv -f /bin/more.bak /bin/more
10.7. Preventing users from executing untrustworthy code by using the fapolicyd RHEL system role
You can automate the installation and configuration of the fapolicyd
service by using the fapolicyd
RHEL system role. With this role, you can remotely configure the service to allow users to execute only trusted applications, for example, the ones which are listed in the RPM database and in an allow list. Additionally, the service can perform integrity checks before it executes an allowed application.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
-
The account you use to connect to the managed nodes has
sudo
permissions on them.
Procedure
Create a playbook file, for example,
~/playbook.yml
, with the following content:Copy to Clipboard Copied! Toggle word wrap Toggle overflow The settings specified in the example playbook include the following:
fapolicyd_setup_permissive: <true|false>
-
Enables or disables sending policy decisions to the kernel for enforcement. Set this variable for debugging and testing purposes to
false
fapolicyd_setup_integrity: <type>
Defines the integrity checking method. You can set one of the following values:
-
none
(default): Disables integrity checking. -
size
: The service compares only the file sizes of allowed applications. -
ima
: The service checks the SHA-256 hash that the kernel’s Integrity Measurement Architecture (IMA) stored in a file’s extended attribute. Additionally, the service performs a size check. Note that the role does not configure the IMA kernel subsystem. To use this option, you must manually configure the IMA subsystem. -
sha256
: The service compares the SHA-256 hash of allowed applications.
-
fapolicyd_setup_trust: <trust_backends>
-
Defines the list of trust backends. If you include the
file
backend, specify the allowed executable files in thefapolicyd_add_trusted_file
list.
For details about all variables used in the playbook, see the
/usr/share/ansible/roles/rhel-system-roles.fapolicyd/README.md
file on the control node.
Validate the playbook syntax:
$ ansible-playbook ~/playbook.yml --syntax-check
Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook ~/playbook.yml
Verification
Execute a binary application that is not on the allow list as a user:
$ ansible managed-node-01.example.com -m command -a 'su -c "/bin/not_authorized_application " <user_name>'
bash: line 1: /bin/not_authorized_application: Operation not permitted
non-zero return code
Chapter 11. Protecting systems against intrusive USB devices
USB devices can be loaded with spyware, malware, or trojans, which can steal your data or damage your system. As a Red Hat Enterprise Linux administrator, you can prevent such USB attacks with USBGuard.
11.1. USBGuard
With the USBGuard software framework, you can protect your systems against intrusive USB devices by using basic lists of permitted and forbidden devices based on the USB device authorization feature in the kernel.
The USBGuard framework provides the following components:
- The system service component with an inter-process communication (IPC) interface for dynamic interaction and policy enforcement
-
The command-line interface to interact with a running
usbguard
system service - The rule language for writing USB device authorization policies
- The C++ API for interacting with the system service component implemented in a shared library
The usbguard
system service configuration file (/etc/usbguard/usbguard-daemon.conf
) includes the options to authorize the users and groups to use the IPC interface.
The system service provides the USBGuard public IPC interface. In Red Hat Enterprise Linux, access to this interface is limited to the root user by default.

Consider setting either the IPCAccessControlFiles option (recommended) or the IPCAllowedUsers and IPCAllowedGroups options to limit access to the IPC interface.

Do not leave the Access Control List (ACL) unconfigured, because this exposes the IPC interface to all local users and allows them to manipulate the authorization state of USB devices and modify the USBGuard policy.
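For example, a usbguard-daemon.conf fragment such as the following limits the IPC interface to the root user and members of a trusted group. The root and wheel values are illustrative; substitute the users and groups appropriate for your environment:

# Excerpt from /etc/usbguard/usbguard-daemon.conf
# Only root and members of the wheel group may use the IPC interface.
IPCAllowedUsers=root
IPCAllowedGroups=wheel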
11.2. Installing USBGuard

Use this procedure to install and start the USBGuard framework.
Procedure

1. Install the usbguard package:

   # dnf install usbguard

2. Create an initial rule set:

   # usbguard generate-policy > /etc/usbguard/rules.conf

3. Start the usbguard daemon and ensure that it starts automatically on boot:

   # systemctl enable --now usbguard
Verification

1. Verify that the usbguard service is running:

   # systemctl status usbguard

2. List USB devices recognized by USBGuard:

   # usbguard list-devices
   4: allow id 1d6b:0002 serial "0000:02:00.0" name "xHCI Host Controller" hash...
11.3. Blocking and authorizing a USB device by using the CLI

You can set USBGuard to allow, block, or reject a specific USB device by using the usbguard command in your terminal. This setting persists as long as USBGuard is running. USBGuard uses the terms block and reject with the following meanings:

block
- Do not interact with this device for now.
reject
- Ignore this device as if it does not exist.
Prerequisites

- The usbguard service is installed and running.
Procedure

1. Determine the ID of the USB device by listing the devices recognized by USBGuard:

   # usbguard list-devices
   1: allow id 1d6b:0002 serial "0000:00:06.7" name "EHCI Host Controller" hash "JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=" parent-hash "4PHGcaDKWtPjKDwYpIRG722cB9SlGz9l9Iea93+Gt9c=" via-port "usb1" with-interface 09:00:00
   ...
   6: block id 1b1c:1ab1 serial "000024937962" name "Voyager" hash "CrXgiaWIf2bZAU+5WkzOE7y0rdSO82XMzubn7HDb95Q=" parent-hash "JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=" via-port "1-3" with-interface 08:06:50

2. Authorize a device to interact with the system:

   # usbguard allow-device <ID>

3. Deauthorize and remove a device:

   # usbguard reject-device <ID>

4. Deauthorize and retain a device:

   # usbguard block-device <ID>
11.4. Permanently blocking and authorizing a USB device

You can permanently block and authorize a USB device by using the -p option. This adds a device-specific rule to the current policy, which persists across restarts and reboots. USBGuard uses the terms block and reject with the following meanings:

block
- Do not interact with this device for now.
reject
- Ignore this device as if it does not exist.
Prerequisites

- The usbguard service is installed and running.
Procedure

1. Configure SELinux to allow the usbguard daemon to write rules:

   a. Display the semanage Booleans relevant to usbguard:

      # semanage boolean -l | grep usbguard
      usbguard_daemon_write_conf     (off  ,  off)  Allow usbguard to daemon write conf
      usbguard_daemon_write_rules    (on   ,   on)  Allow usbguard to daemon write rules

   b. If the usbguard_daemon_write_rules Boolean is turned off, turn it on:

      # semanage boolean -m --on usbguard_daemon_write_rules

2. Determine the ID of the USB device by listing the devices recognized by USBGuard:

   # usbguard list-devices
   1: allow id 1d6b:0002 serial "0000:00:06.7" name "EHCI Host Controller" hash "JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=" parent-hash "4PHGcaDKWtPjKDwYpIRG722cB9SlGz9l9Iea93+Gt9c=" via-port "usb1" with-interface 09:00:00
   ...
   6: block id 1b1c:1ab1 serial "000024937962" name "Voyager" hash "CrXgiaWIf2bZAU+5WkzOE7y0rdSO82XMzubn7HDb95Q=" parent-hash "JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=" via-port "1-3" with-interface 08:06:50

3. Permanently authorize a device to interact with the system:

   # usbguard allow-device <ID> -p

4. Permanently deauthorize and remove a device:

   # usbguard reject-device <ID> -p

5. Permanently deauthorize and retain a device:

   # usbguard block-device <ID> -p
Verification

Check that the USBGuard rules include the changes you made:

# usbguard list-rules
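For example, after you permanently authorize the Voyager device from the earlier listing, the output of usbguard list-rules contains a device-specific rule similar to the following sketch. Which attributes the daemon records depends on your configuration, so treat this line as illustrative only:

allow id 1b1c:1ab1 serial "000024937962" name "Voyager" hash "CrXgiaWIf2bZAU+5WkzOE7y0rdSO82XMzubn7HDb95Q="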
11.5. Creating a custom policy for USB devices

You can create a rule set for USB devices that reflects the requirements of your scenario.
Prerequisites

- The usbguard service is installed and running.
- The /etc/usbguard/rules.conf file contains an initial rule set generated by the usbguard generate-policy command.
Procedure

1. Create a policy which authorizes the currently connected USB devices, and store the generated rules in the rules.conf file:

   # usbguard generate-policy --no-hashes > ./rules.conf

   The --no-hashes option does not generate hash attributes for devices. Avoid hash attributes in your configuration settings because they might not be persistent.

2. In the rules.conf file, add, remove, or edit the rules as required by using a text editor. For example, the following rule allows only devices with a single mass storage interface to interact with the system:

   allow with-interface equals { 08:*:* }

   See the usbguard-rules.conf(5) man page for a detailed rule-language description and more examples; a short additional sketch follows the verification below.

3. Install the updated policy:

   # install -m 0600 -o root -g root rules.conf /etc/usbguard/rules.conf

4. Restart the usbguard daemon to apply your changes:

   # systemctl restart usbguard
Verification

Check that your custom rules are in the active policy, for example:

# usbguard list-rules
...
4: allow with-interface 08:*:*
...
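The rule language can also combine several conditions in one rule. The following sketch adapts patterns from the usbguard-rules.conf(5) man page; the usb1 port name is an illustrative placeholder for a port on your system:

# Allow mass-storage devices only when they connect through port usb1
allow with-interface equals { 08:*:* } via-port "usb1"

# Reject devices that combine mass-storage and HID interfaces,
# a common signature of keyboard-emulating USB attack devices
reject with-interface all-of { 08:*:* 03:00:* }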
11.6. Creating a structured custom policy for USB devices

You can organize your custom USBGuard policy in several .conf files within the /etc/usbguard/rules.d/ directory. The usbguard-daemon then combines the main rules.conf file with the .conf files within the directory in alphabetical order.
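For example, with a layout such as the following (the rules.d file names are illustrative), the daemon reads rules.conf first, then 10keyboards.conf, then 20storage.conf:

/etc/usbguard/rules.conf
/etc/usbguard/rules.d/10keyboards.conf
/etc/usbguard/rules.d/20storage.conf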
Prerequisites

- The usbguard service is installed and running.
Procedure

1. Create a policy which authorizes the currently connected USB devices, and store the generated rules in a new .conf file, for example, <policy.conf>:

   # usbguard generate-policy --no-hashes > ./<policy.conf>

   The --no-hashes option does not generate hash attributes for devices. Avoid hash attributes in your configuration settings because they might not be persistent.

2. Open the <policy.conf> file with a text editor of your choice, and select the lines with the rules that you want to record, for example:

   ...
   allow id 04f2:0833 serial "" name "USB Keyboard" via-port "7-2" with-interface { 03:01:01 03:00:00 } with-connect-type "unknown"
   ...

3. Copy the selected lines into a separate .conf file.

   Note: The two digits at the beginning of the file name specify the order in which the daemon reads the configuration files.

   For example, to copy the rules for your keyboards into a new .conf file:

   # grep "USB Keyboard" ./<policy.conf> > ./<10keyboards.conf>

4. Install the new policy to the /etc/usbguard/rules.d/ directory:

   # install -m 0600 -o root -g root <10keyboards.conf> /etc/usbguard/rules.d/<10keyboards.conf>

5. Move the rest of the lines to the main rules.conf file:

   # grep -v "USB Keyboard" ./<policy.conf> > ./rules.conf

6. Install the remaining rules:

   # install -m 0600 -o root -g root rules.conf /etc/usbguard/rules.conf

7. Restart the usbguard daemon to apply your changes:

   # systemctl restart usbguard
Verification

1. Display all active USBGuard rules:

   # usbguard list-rules
   ...
   15: allow id 04f2:0833 serial "" name "USB Keyboard" hash "kxM/iddRe/WSCocgiuQlVs6Dn0VEza7KiHoDeTz0fyg=" parent-hash "2i6ZBJfTl5BakXF7Gba84/Cp1gslnNc1DM6vWQpie3s=" via-port "7-2" with-interface { 03:01:01 03:00:00 } with-connect-type "unknown"
   ...

2. Display the contents of the rules.conf file and all the .conf files in the /etc/usbguard/rules.d/ directory:

   # cat /etc/usbguard/rules.conf /etc/usbguard/rules.d/*.conf

3. Verify that the active rules contain all the rules from the files and are in the correct order.
11.7. Authorizing users and groups to use the USBGuard IPC interface

By default, only the root user can use the USBGuard public IPC interface. You can authorize a specific user or a group to use this interface in addition to root. You can do that either by editing the /etc/usbguard/usbguard-daemon.conf file or by using the usbguard add-user subcommand.
Prerequisites

- The usbguard service is installed and running.
- The /etc/usbguard/rules.conf file contains an initial rule set generated by the usbguard generate-policy command.
Procedure

1. Edit the /etc/usbguard/usbguard-daemon.conf file with the rules you want to add. For example, to allow all users in the wheel group to use the IPC interface, add this line:

   IPCAllowedGroups=wheel

2. Alternatively, add users or groups with the usbguard command. For example, the following command enables a user to have full access to the Devices and Exceptions sections and to list and modify the current policy:

   # usbguard add-user <user_name> --devices ALL --policy modify,list --exceptions ALL

   Replace <user_name> with the user name that should receive these permissions.

   You can remove the granted permissions for a user by using the usbguard remove-user <user_name> command.

3. Restart the usbguard daemon to apply your changes:

   # systemctl restart usbguard
11.8. Logging USBGuard authorization events to the Linux Audit log

By default, the usbguard daemon logs events to the /var/log/usbguard/usbguard-audit.log file. You can integrate logging of USBGuard authorization events into the standard Linux Audit log.

Prerequisites

- The usbguard service is installed and running.
- The auditd service is running.

Procedure

1. In the /etc/usbguard/usbguard-daemon.conf file, change the AuditBackend option from FileAudit to LinuxAudit:

   AuditBackend=LinuxAudit

2. Restart the usbguard daemon to apply the configuration change:

   # systemctl restart usbguard

Verification

Query the audit daemon log for a USB authorization event, for example:

# ausearch -ts recent -m USER_DEVICE