4.3. Securing Services

While user access to administrative controls is an important issue for system administrators within an organization, monitoring which network services are active is of paramount importance to anyone who administers and operates a Linux system.
Many services under Red Hat Enterprise Linux 7 are network servers. If a network service is running on a machine, then a server application (called a daemon) is listening for connections on one or more network ports. Each of these servers should be treated as a potential avenue of attack.

4.3.1. Risks To Services

Network services can pose many risks for Linux systems. Below is a list of some of the primary issues:
  • Denial of Service Attacks (DoS) — By flooding a service with requests, a denial of service attack can render a system unusable as it tries to log and answer each request.
  • Distributed Denial of Service Attack (DDoS) — A type of DoS attack which uses multiple compromised machines (often numbering in the thousands or more) to direct a coordinated attack on a service, flooding it with requests and making it unusable.
  • Script Vulnerability Attacks — If a server is using scripts to execute server-side actions, as Web servers commonly do, an attacker can target improperly written scripts. These script vulnerability attacks can lead to a buffer overflow condition or allow the attacker to alter files on the system.
  • Buffer Overflow Attacks — Services that listen on ports 1 through 1023 must either start with administrative privileges or have the CAP_NET_BIND_SERVICE capability set for them. Once a process is bound to a port and is listening on it, the privileges or the capability are often dropped. If the privileges or the capability are not dropped, and the application has an exploitable buffer overflow, an attacker could gain access to the system as the user running the daemon. Because exploitable buffer overflows exist, crackers use automated tools to identify systems with vulnerabilities, and once they have gained access, they use automated rootkits to maintain their access to the system.

Note

The threat of buffer overflow vulnerabilities is mitigated in Red Hat Enterprise Linux 7 by ExecShield, an executable memory segmentation and protection technology supported by x86-compatible uni- and multi-processor kernels. ExecShield reduces the risk of buffer overflow by separating virtual memory into executable and non-executable segments. Any program code that tries to execute outside of the executable segment (such as malicious code injected from a buffer overflow exploit) triggers a segmentation fault and terminates.
ExecShield also includes support for No eXecute (NX) technology on AMD64 platforms and Intel® 64 systems. These technologies work in conjunction with ExecShield to prevent malicious code from running in the executable portion of virtual memory with a granularity of 4KB of executable code, lowering the risk of attack from buffer overflow exploits.

Important

To limit exposure to attacks over the network, all services that are unused should be turned off.

4.3.2. Identifying and Configuring Services

To enhance security, most network services installed with Red Hat Enterprise Linux 7 are turned off by default. There are, however, some notable exceptions:
  • cups — The default print server for Red Hat Enterprise Linux 7.
  • cups-lpd — An alternative print server.
  • xinetd — A super server that controls connections to a range of subordinate servers, such as gssftp and telnet.
  • sshd — The OpenSSH server, which is a secure replacement for Telnet.
When determining whether to leave these services running, it is best to use common sense and avoid taking any risks. For example, if a printer is not available, do not leave cups running. The same is true for portreserve. If you do not mount NFSv3 volumes or use NIS (the ypbind service), then rpcbind should be disabled. Checking which network services are available to start at boot time is not sufficient. It is recommended to also check which ports are open and listening. Refer to Section 4.4.2, “Verifying Which Ports Are Listening” for more information.
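For example, the following commands give a quick overview of the services enabled to start at boot and the TCP and UDP ports currently listening; see the referenced section for a more thorough procedure:
~]# systemctl list-unit-files --type=service | grep enabled
~]# ss -tulnp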

4.3.3. Insecure Services

Potentially, any network service is insecure. This is why turning off unused services is so important. Exploits for services are routinely revealed and patched, making it very important to regularly update packages associated with any network service. See Chapter 3, Keeping Your System Up-to-Date for more information.
Some network protocols are inherently more insecure than others. These include any services that:
  • Transmit Usernames and Passwords Over a Network Unencrypted — Many older protocols, such as Telnet and FTP, do not encrypt the authentication session and should be avoided whenever possible.
  • Transmit Sensitive Data Over a Network Unencrypted — Many protocols transmit data over the network unencrypted. These protocols include Telnet, FTP, HTTP, and SMTP. Many network file systems, such as NFS and SMB, also transmit information over the network unencrypted. It is the user's responsibility when using these protocols to limit what type of data is transmitted.
Examples of inherently insecure services include rlogin, rsh, telnet, and vsftpd.
All remote login and shell programs (rlogin, rsh, and telnet) should be avoided in favor of SSH. See Section 4.3.11, “Securing SSH” for more information about sshd.
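For example, assuming the telnet-server package is installed and provides a telnet.socket unit, the Telnet service can be stopped, prevented from starting at boot, and optionally removed:
~]# systemctl stop telnet.socket
~]# systemctl disable telnet.socket
~]# yum remove telnet-server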
FTP is not as inherently dangerous to the security of the system as remote shells, but FTP servers must be carefully configured and monitored to avoid problems. See Section 4.3.9, “Securing FTP” for more information about securing FTP servers.
Services that should be carefully implemented and behind a firewall include:
  • auth
  • nfs-server
  • smb and nmb (Samba)
  • yppasswdd
  • ypserv
  • ypxfrd
More information on securing network services is available in Section 4.4, “Securing Network Access”.

4.3.4. Securing rpcbind

The rpcbind service is a dynamic port assignment daemon for RPC services such as NIS and NFS. It has weak authentication mechanisms and has the ability to assign a wide range of ports for the services it controls. For these reasons, it is difficult to secure.

Note

Securing rpcbind only affects NFSv2 and NFSv3 implementations, since NFSv4 no longer requires it. If you plan to implement an NFSv2 or NFSv3 server, then rpcbind is required, and the following section applies.
If running RPC services, follow these basic rules.

4.3.4.1. Protect rpcbind With TCP Wrappers

It is important to use TCP Wrappers to limit which networks or hosts have access to the rpcbind service since it has no built-in form of authentication.
Further, use only IP addresses when limiting access to the service. Avoid using host names, as they can be forged by DNS poisoning and other methods.
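For example, the following entries in /etc/hosts.allow and /etc/hosts.deny restrict rpcbind to a single trusted subnet; the network shown is a placeholder:
# /etc/hosts.allow
rpcbind: 192.168.0.0/255.255.255.0

# /etc/hosts.deny
rpcbind: ALL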

4.3.4.2. Protect rpcbind With firewalld

To further restrict access to the rpcbind service, it is a good idea to add firewalld rules to the server and restrict access to specific networks.
Below are two example firewalld rich language commands. The first drops TCP connections to port 111 (used by the rpcbind service) from all sources except the 192.168.0.0/24 network. The second explicitly accepts TCP connections to the same port from the localhost. Packets from all other sources are therefore dropped.
~]# firewall-cmd --add-rich-rule='rule family="ipv4" port port="111" protocol="tcp" source address="192.168.0.0/24" invert="True" drop'
~]# firewall-cmd --add-rich-rule='rule family="ipv4" port port="111" protocol="tcp" source address="127.0.0.1" accept'
To similarly limit UDP traffic, use the following command:
~]# firewall-cmd --add-rich-rule='rule family="ipv4" port port="111" protocol="udp" source address="192.168.0.0/24" invert="True" drop'

Note

Add --permanent to the firewalld rich language commands to make the settings permanent. See Chapter 5, Using Firewalls for more information about implementing firewalls.

4.3.5. Securing rpc.mountd

The rpc.mountd daemon implements the server side of the NFS MOUNT protocol, a protocol used by NFS version 2 (RFC 1094) and NFS version 3 (RFC 1813).
If running RPC services, follow these basic rules.

4.3.5.1. Protect rpc.mountd With TCP Wrappers

It is important to use TCP Wrappers to limit which networks or hosts have access to the rpc.mountd service since it has no built-in form of authentication.
Further, use only IP addresses when limiting access to the service. Avoid using host names, as they can be forged by DNS poisoning and other methods.

4.3.5.2. Protect rpc.mountd With firewalld

To further restrict access to the rpc.mountd service, add firewalld rich language rules to the server and restrict access to specific networks.
Below are two example firewalld rich language commands. The first drops mountd connections from all sources except the 192.168.0.0/24 network. The second explicitly accepts mountd connections from the local host. Packets from all other sources are therefore dropped.
~]# firewall-cmd --add-rich-rule 'rule family="ipv4" source NOT address="192.168.0.0/24" service name="mountd" drop'
~]# firewall-cmd --add-rich-rule 'rule family="ipv4" source address="127.0.0.1" service name="mountd" accept'

Note

Add --permanent to the firewalld rich language commands to make the settings permanent. See Chapter 5, Using Firewalls for more information about implementing firewalls.

4.3.6. Securing NIS

The Network Information Service (NIS) is an RPC service, called ypserv, which is used in conjunction with rpcbind and other related services to distribute maps of user names, passwords, and other sensitive information to any computer claiming to be within its domain.
A NIS server is comprised of several applications. They include the following:
  • /usr/sbin/rpc.yppasswdd — Also called the yppasswdd service, this daemon allows users to change their NIS passwords.
  • /usr/sbin/rpc.ypxfrd — Also called the ypxfrd service, this daemon is responsible for NIS map transfers over the network.
  • /usr/sbin/ypserv — This is the NIS server daemon.
NIS is somewhat insecure by today's standards. It has no host authentication mechanisms and transmits all of its information over the network unencrypted, including password hashes. As a result, extreme care must be taken when setting up a network that uses NIS. This is further complicated by the fact that the default configuration of NIS is inherently insecure.
It is recommended that anyone planning to implement a NIS server first secure the rpcbind service as outlined in Section 4.3.4, “Securing rpcbind”, then address the following issues, such as network planning.

4.3.6.1. Carefully Plan the Network

Because NIS transmits sensitive information unencrypted over the network, it is important the service be run behind a firewall and on a segmented and secure network. Whenever NIS information is transmitted over an insecure network, it risks being intercepted. Careful network design can help prevent severe security breaches.

4.3.6.2. Use a Password-like NIS Domain Name and Hostname

Any machine within a NIS domain can use commands to extract information from the server without authentication, as long as the user knows the NIS server's DNS host name and NIS domain name.
For instance, if someone either connects a laptop computer into the network or breaks into the network from outside (and manages to spoof an internal IP address), the following command reveals the /etc/passwd map:
ypcat -d <NIS_domain> -h <DNS_hostname> passwd
If this attacker is a root user, they can obtain the /etc/shadow file by typing the following command:
ypcat -d <NIS_domain> -h <DNS_hostname> shadow

Note

If Kerberos is used, the /etc/shadow file is not stored within a NIS map.
To make access to NIS maps harder for an attacker, create a random string for the DNS host name, such as o7hfawtgmhwg.domain.com. Similarly, create a different randomized NIS domain name. This makes it much more difficult for an attacker to access the NIS server.
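For example, assuming the NIS domain name is managed through the NISDOMAIN variable, as is the case on Red Hat Enterprise Linux, the randomized name can be set persistently in /etc/sysconfig/network and applied immediately with the nisdomainname command; the value shown is only a placeholder:
NISDOMAIN=o7hfawtgmhwg-nis
~]# nisdomainname o7hfawtgmhwg-nis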

4.3.6.3. Edit the /var/yp/securenets File

If the /var/yp/securenets file is blank or does not exist (as is the case after a default installation), NIS listens to all networks. One of the first things to do is to put netmask/network pairs in the file so that ypserv only responds to requests from the appropriate network.
Below is a sample entry from a /var/yp/securenets file:
255.255.255.0     192.168.0.0

Warning

Never start a NIS server for the first time without creating the /var/yp/securenets file.
This technique does not provide protection from an IP spoofing attack, but it does at least place limits on what networks the NIS server services.

4.3.6.4. Assign Static Ports and Use Rich Language Rules

All of the servers related to NIS can be assigned specific ports except for rpc.yppasswdd — the daemon that allows users to change their login passwords. Assigning ports to the other two NIS server daemons, rpc.ypxfrd and ypserv, allows for the creation of firewall rules to further protect the NIS server daemons from intruders.
To do this, add the following lines to /etc/sysconfig/network:
YPSERV_ARGS="-p 834"
YPXFRD_ARGS="-p 835"
The following rich language firewalld rules can then be used to enforce which network the server listens to for these ports:
~]# firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" invert="True" port port="834-835" protocol="tcp" drop'
~]# firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.0.0/24" invert="True" port port="834-835" protocol="udp" drop'
This means that the server only allows connections to ports 834 and 835 if the requests come from the 192.168.0.0/24 network. The first rule is for TCP and the second for UDP.

Note

See Chapter 5, Using Firewalls for more information about implementing firewalls with iptables commands.

4.3.6.5. Use Kerberos Authentication

One of the issues to consider when NIS is used for authentication is that whenever a user logs into a machine, a password hash from the /etc/shadow map is sent over the network. If an intruder gains access to a NIS domain and sniffs network traffic, they can collect user names and password hashes. With enough time, a password cracking program can guess weak passwords, and an attacker can gain access to a valid account on the network.
Since Kerberos uses secret-key cryptography, no password hashes are ever sent over the network, making the system far more secure. See the Logging into IdM Using Kerberos section in the Linux Domain Identity, Authentication, and Policy Guide for more information about Kerberos.

4.3.7. Securing NFS

Important

NFS traffic can be sent using TCP in all versions. TCP should be used with NFSv3 in preference to UDP, and it is required when using NFSv4. All versions of NFS support Kerberos user and group authentication as part of the RPCSEC_GSS kernel module. Information on rpcbind is still included, since Red Hat Enterprise Linux 7 supports NFSv3, which utilizes rpcbind.

4.3.7.1. Carefully Plan the Network

NFSv2 and NFSv3 traditionally passed data insecurely. All versions of NFS now have the ability to authenticate (and optionally encrypt) ordinary file system operations using Kerberos. Under NFSv4 all operations can use Kerberos; under NFSv2 or NFSv3, file locking and mounting still do not use it. When using NFSv4.0, delegations may be turned off if the clients are behind NAT or a firewall. For information on the use of NFSv4.1 to allow delegations to operate through NAT and firewalls, see the pNFS section of the Red Hat Enterprise Linux 7 Storage Administration Guide.

4.3.7.2. Securing NFS Mount Options

The use of the mount command in the /etc/fstab file is explained in the Using the mount Command chapter of the Red Hat Enterprise Linux 7 Storage Administration Guide. From a security administration point of view it is worthwhile to note that the NFS mount options can also be specified in /etc/nfsmount.conf, which can be used to set custom default options.
4.3.7.2.1. Review the NFS Server

Warning

Only export entire file systems. Exporting a subdirectory of a file system can be a security issue. It is possible in some cases for a client to "break out" of the exported part of the file system and get to unexported parts (see the section on subtree checking in the exports(5) man page).
Use the ro option to export the file system as read-only whenever possible to reduce the number of users able to write to the mounted file system. Only use the rw option when specifically required. See the exports(5) man page for more information. Allowing write access increases the risk from problems such as symlink attacks; this applies particularly to temporary directories such as /tmp and /usr/tmp.
Where directories must be mounted with the rw option, avoid making them world-writable whenever possible to reduce risk. Exporting home directories is also viewed as a risk, as some applications store passwords in clear text or weakly encrypted form. This risk is being reduced as application code is reviewed and improved. Some users do not set passphrases on their SSH keys, so this too means that exported home directories present a risk. Enforcing the use of passphrases or using Kerberos would mitigate that risk.
Restrict exports only to clients that need access. Use the showmount -e command on an NFS server to review what the server is exporting. Do not export anything that is not specifically required.
Do not use the no_root_squash option and review existing installations to make sure it is not used. See Section 4.3.7.4, “Do Not Use the no_root_squash Option” for more information.
The secure option is the server-side export option used to restrict exports to reserved ports. By default, the server allows client communication only from reserved ports (ports numbered less than 1024), because traditionally clients have only allowed trusted code (such as in-kernel NFS clients) to use those ports. However, on many networks it is not difficult for anyone to become root on some client, so it is rarely safe for the server to assume that communication from a reserved port is privileged. Therefore the restriction to reserved ports is of limited value; it is better to rely on Kerberos, firewalls, and restriction of exports to particular clients.
Most clients still do use reserved ports when possible. However, reserved ports are a limited resource, so clients (especially those with a large number of NFS mounts) may choose to use higher-numbered ports as well. Linux clients may do this using the noresvport mount option. If you want to allow this on an export, you may do so with the insecure export option.
It is good practice not to allow users to login to a server. While reviewing the above settings on an NFS server conduct a review of who and what can access the server.
4.3.7.2.2. Review the NFS Client
Use the nosuid option to disallow the use of a setuid program. The nosuid option disables the set-user-identifier or set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program. Use this option on the client and the server side.
The noexec option disables all executable files on the client. Use this to prevent users from inadvertently executing files placed in the file system being shared. The nosuid and noexec options are standard options for most, if not all, file systems.
Use the nodev option to prevent device-files from being processed as a hardware device by the client.
The resvport option is a client-side mount option and secure is the corresponding server-side export option (see explanation above). It restricts communication to a "reserved port". The reserved or "well known" ports are reserved for privileged users and processes such as the root user. Setting this option causes the client to use a reserved source port to communicate with the server.
All versions of NFS now support mounting with Kerberos authentication. The mount option to enable this is: sec=krb5.
NFSv4 supports mounting with Kerberos using krb5i for integrity and krb5p for privacy protection. These are used when mounting with sec=krb5, but need to be configured on the NFS server. See the man page on exports (man 5 exports) for more information.
The NFS man page (man 5 nfs) has a SECURITY CONSIDERATIONS section which explains the security enhancements in NFSv4 and contains all the NFS specific mount options.
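For example, a client /etc/fstab entry that combines these options; the server name, export path, and mount point are placeholders:
nfsserver.example.com:/export/data   /mnt/data   nfs4   ro,nosuid,noexec,nodev,sec=krb5   0 0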

Important

The MIT Kerberos libraries provided by the krb5-libs package do not support using the Data Encryption Standard (DES) algorithm in new deployments. Due to security and also certain compatibility reasons, DES is deprecated and disabled by default in the Kerberos libraries. Use DES only for compatibility reasons if your environment does not support any newer and more secure algorithm.

4.3.7.3. Beware of Syntax Errors

The NFS server determines which file systems to export and which hosts to export these directories to by consulting the /etc/exports file. Be careful not to add extraneous spaces when editing this file.
For instance, the following line in the /etc/exports file shares the directory /tmp/nfs/ to the host bob.example.com with read/write permissions.
/tmp/nfs/     bob.example.com(rw)
The following line in the /etc/exports file, on the other hand, shares the same directory to the host bob.example.com with read-only permissions and shares it to the world with read/write permissions due to a single space character after the host name.
/tmp/nfs/     bob.example.com (rw)
It is good practice to check any configured NFS shares by using the showmount command to verify what is being shared:
showmount -e <hostname>

4.3.7.4. Do Not Use the no_root_squash Option

By default, NFS shares change the root user to the nfsnobody user, an unprivileged user account. This changes the owner of all root-created files to nfsnobody, which prevents uploading of programs with the setuid bit set.
If no_root_squash is used, remote root users are able to change any file on the shared file system and leave applications infected by Trojans for other users to inadvertently execute.

4.3.7.5. NFS Firewall Configuration

NFSv4 is the default version of NFS for Red Hat Enterprise Linux 7 and it only requires port 2049 to be open for TCP. If using NFSv3 then four additional ports are required as explained below.
Configuring Ports for NFSv3
The ports used for NFS are assigned dynamically by the rpcbind service, which might cause problems when creating firewall rules. To simplify this process, use the /etc/sysconfig/nfs file to specify which ports are to be used:
  • MOUNTD_PORT — TCP and UDP port for mountd (rpc.mountd)
  • STATD_PORT — TCP and UDP port for status (rpc.statd)
In Red Hat Enterprise Linux 7, set the TCP and UDP port for the NFS lock manager (nlockmgr) in the /etc/modprobe.d/lockd.conf file:
  • nlm_tcpport — TCP port for nlockmgr (rpc.lockd)
  • nlm_udpport — UDP port for nlockmgr (rpc.lockd)
Port numbers specified must not be used by any other service. Configure your firewall to allow the port numbers specified, as well as TCP and UDP port 2049 (NFS). See /etc/modprobe.d/lockd.conf for descriptions of additional customizable NFS lock manager parameters.
Run the rpcinfo -p command on the NFS server to see which ports and RPC programs are being used.
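For example, assuming the port choices below (the numbers are illustrative only and must not be used by other services), the configuration files and matching firewall commands might look like this:
# /etc/sysconfig/nfs
MOUNTD_PORT=20048
STATD_PORT=662

# /etc/modprobe.d/lockd.conf
options lockd nlm_tcpport=32803 nlm_udpport=32769

~]# firewall-cmd --add-port=2049/tcp --add-port=20048/tcp --add-port=662/tcp --add-port=32803/tcp
~]# firewall-cmd --add-port=2049/udp --add-port=20048/udp --add-port=662/udp --add-port=32769/udp
Add --permanent to make the firewall settings persistent.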

4.3.7.6. Securing NFS with Red Hat Identity Management

Kerberos-aware NFS setup can be greatly simplified in an environment that is using Red Hat Identity Management, which is included in Red Hat Enterprise Linux.
See the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide, in particular Setting up a Kerberos-aware NFS Server to learn how to secure NFS with Kerberos when using Red Hat Identity Management.

4.3.8. Securing HTTP Servers

4.3.8.1. Securing the Apache HTTP Server

The Apache HTTP Server is one of the most stable and secure services in Red Hat Enterprise Linux 7. A large number of options and techniques are available to secure the Apache HTTP Server — too numerous to delve into deeply here. The following section briefly explains good practices when running the Apache HTTP Server.
Always verify that any scripts running on the system work as intended before putting them into production. Also, ensure that only the root user has write permissions to any directory containing scripts or CGIs. To do this, enter the following commands as the root user:
chown root <directory_name>
chmod 755 <directory_name>
System administrators should be careful when using the following configuration options (configured in /etc/httpd/conf/httpd.conf):
FollowSymLinks
This directive is enabled by default, so be sure to use caution when creating symbolic links to the document root of the Web server. For instance, it is a bad idea to provide a symbolic link to /.
Indexes
This directive is enabled by default, but may not be desirable. To prevent visitors from browsing files on the server, remove this directive.
UserDir
The UserDir directive is disabled by default because it can confirm the presence of a user account on the system. To enable user directory browsing on the server, use the following directives:
UserDir enabled
UserDir disabled root
These directives activate user directory browsing for all user directories other than /root/. To add users to the list of disabled accounts, add a space-delimited list of users on the UserDir disabled line.
ServerTokens
The ServerTokens directive controls the server response header field which is sent back to clients. It includes various information which can be customized using the following parameters:
  • ServerTokens Full (default option) — provides all available information (OS type and used modules), for example:
    Apache/2.0.41 (Unix) PHP/4.2.2 MyMod/1.2
    
  • ServerTokens Prod or ServerTokens ProductOnly — provides the following information:
    Apache
    
  • ServerTokens Major — provides the following information:
    Apache/2
    
  • ServerTokens Minor — provides the following information:
    Apache/2.0
    
  • ServerTokens Min or ServerTokens Minimal — provides the following information:
    Apache/2.0.41
    
  • ServerTokens OS — provides the following information:
    Apache/2.0.41 (Unix)
    
It is recommended to use the ServerTokens Prod option so that a possible attacker does not gain any valuable information about your system.
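For example, add or change the directive in /etc/httpd/conf/httpd.conf and reload the httpd service:
ServerTokens Prod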

Important

Do not remove the IncludesNoExec directive. By default, the Server-Side Includes (SSI) module cannot execute commands. It is recommended that you do not change this setting unless absolutely necessary, as it could, potentially, enable an attacker to execute commands on the system.
Removing httpd Modules
In certain scenarios, it is beneficial to remove certain httpd modules to limit the functionality of the HTTP Server. To do so, edit configuration files in the /etc/httpd/conf.modules.d directory. For example, to remove the proxy module:
echo '# All proxy modules disabled' > /etc/httpd/conf.modules.d/00-proxy.conf
Note that the /etc/httpd/conf.d/ directory contains configuration files which are used to load modules as well.
httpd and SELinux
For information, see the The Apache HTTP Server and SELinux chapter from the Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide.

4.3.8.2. Securing NGINX

NGINX is a high-performance HTTP and proxy server. This section briefly documents additional steps that harden your NGINX configuration. Perform all of the following configuration changes in the server section of your NGINX configuration files.
Disabling Version Strings
To prevent attackers from learning the version of NGINX running on your server, use the following configuration option:
server_tokens        off;
This has the effect of removing the version number and simply reporting the string nginx in the Server header of all responses served by NGINX:
$ curl -sI http://localhost | grep Server
Server: nginx
Including Additional Security-related Headers
Each request served by NGINX can include additional HTTP headers that mitigate certain known web application vulnerabilities; the individual directives are described below and shown together in an example after the list:
  • add_header X-Frame-Options SAMEORIGIN; — this option prevents any page outside of your domain from framing content served by NGINX, effectively mitigating clickjacking attacks.
  • add_header X-Content-Type-Options nosniff; — this option prevents MIME-type sniffing in certain older browsers.
  • add_header X-XSS-Protection "1; mode=block"; — this option enables the Cross-Site Scripting (XSS) filtering, which prevents a browser from rendering potentially malicious content included in a response by NGINX.
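For example, the three headers can be added together in the server section of your NGINX configuration:
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";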
Disabling Potentially Harmful HTTP Methods
If enabled, some HTTP methods, which were designed to let developers test web applications, may allow an attacker to perform actions on the web server. For example, the TRACE method is known to allow Cross-Site Tracing (XST).
Your NGINX server can disallow these harmful HTTP methods as well as any arbitrary methods by whitelisting only those that should be allowed. For example:
# Allow GET, PUT, POST; return "405 Method Not Allowed" for all others.
if ( $request_method !~ ^(GET|PUT|POST)$ ) {
    return 405;
}
Configuring SSL
To protect the data served by your NGINX web server, consider serving it over HTTPS only. To generate a secure configuration profile for enabling SSL in your NGINX server, see the Mozilla SSL Configuration Generator. The generated configuration assures that known vulnerable protocols (for example, SSLv2 or SSLv3), ciphers, and hashing algorithms (for example, 3DES or MD5) are disabled.
You can also use the SSL Server Test to verify that your configuration meets modern security requirements.

4.3.9. Securing FTP

The File Transfer Protocol (FTP) is an older TCP protocol designed to transfer files over a network. Because all transactions with the server, including user authentication, are unencrypted, it is considered an insecure protocol and should be carefully configured.
Red Hat Enterprise Linux 7 provides two FTP servers:
  • Red Hat Content Accelerator (tux) — A kernel-space Web server with FTP capabilities.
  • vsftpd — A standalone, security oriented implementation of the FTP service.
The following security guidelines are for setting up the vsftpd FTP service.

4.3.9.1. FTP Greeting Banner

Before submitting a user name and password, all users are presented with a greeting banner. By default, this banner includes version information useful to crackers trying to identify weaknesses in a system.
To change the greeting banner for vsftpd, add the following directive to the /etc/vsftpd/vsftpd.conf file:
ftpd_banner=<insert_greeting_here>
Replace <insert_greeting_here> in the above directive with the text of the greeting message.
For multi-line banners, it is best to use a banner file. To simplify management of multiple banners, place all banners in a new directory called /etc/banners/. The banner file for FTP connections in this example is /etc/banners/ftp.msg. Below is an example of what such a file may look like:
######### Hello, all activity on ftp.example.com is logged. #########

Note

It is not necessary to begin each line of the file with 220 as specified in Section 4.4.1, “Securing Services With TCP Wrappers and xinetd”.
To reference this greeting banner file for vsftpd, add the following directive to the /etc/vsftpd/vsftpd.conf file:
banner_file=/etc/banners/ftp.msg
It is also possible to send additional banners to incoming connections using TCP Wrappers as described in Section 4.4.1.1, “TCP Wrappers and Connection Banners”.

4.3.9.2. Anonymous Access

The presence of the /var/ftp/ directory activates the anonymous account.
The easiest way to create this directory is to install the vsftpd package. This package establishes a directory tree for anonymous users and configures the permissions on directories to read-only for anonymous users.
By default the anonymous user cannot write to any directories.

Warning

If enabling anonymous access to an FTP server, be aware of where sensitive data is stored.
4.3.9.2.1. Anonymous Upload
To allow anonymous users to upload files, it is recommended that a write-only directory be created within /var/ftp/pub/. To do this, enter the following command as root:
~]# mkdir /var/ftp/pub/upload
Next, change the permissions so that anonymous users cannot view the contents of the directory:
~]# chmod 730 /var/ftp/pub/upload
A long format listing of the directory should look like this:
~]# ls -ld /var/ftp/pub/upload
drwx-wx---. 2 root ftp 4096 Nov 14 22:57 /var/ftp/pub/upload
Administrators who allow anonymous users to read and write in directories often find that their servers become a repository of stolen software.
Additionally, under vsftpd, add the following line to the /etc/vsftpd/vsftpd.conf file:
anon_upload_enable=YES

4.3.9.3. User Accounts

Because FTP transmits unencrypted user names and passwords over insecure networks for authentication, it is a good idea to deny system users access to the server from their user accounts.
To disable all user accounts in vsftpd, add the following directive to /etc/vsftpd/vsftpd.conf:
local_enable=NO
4.3.9.3.1. Restricting User Accounts
To disable FTP access for specific accounts or specific groups of accounts, such as the root user and those with sudo privileges, the easiest way is to use a PAM list file as described in Section 4.2.1, “Disallowing Root Access”. The PAM configuration file for vsftpd is /etc/pam.d/vsftpd.
It is also possible to disable user accounts within each service directly.
To disable specific user accounts in vsftpd, add the user name to /etc/vsftpd/ftpusers.
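For example, to block FTP logins for the root account (the vsftpd package typically lists it there already), make sure it appears in the file:
~]# echo "root" >> /etc/vsftpd/ftpusers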

4.3.9.4. Use TCP Wrappers To Control Access

Use TCP Wrappers to control access to either FTP daemon as outlined in Section 4.4.1, “Securing Services With TCP Wrappers and xinetd”.

4.3.10. Securing Postfix

Postfix is a Mail Transfer Agent (MTA) that uses the Simple Mail Transfer Protocol (SMTP) to deliver electronic messages between other MTAs and to email clients or delivery agents. Although many MTAs are capable of encrypting traffic between one another, most do not, so sending email over any public networks is considered an inherently insecure form of communication. Postfix replaces Sendmail as the default MTA in Red Hat Enterprise Linux 7.
It is recommended that anyone planning to implement a Postfix server address the following issues.

4.3.10.1. Limiting a Denial of Service Attack

Because of the nature of email, a determined attacker can flood the server with mail fairly easily and cause a denial of service. The effectiveness of such attacks can be limited by setting limits on the directives in the /etc/postfix/main.cf file. You can change the value of directives that are already there, or add the directives you need with the value you want, in the following format:
<directive> = <value>
The following is a list of directives that can be used to limit a denial of service attack (an example combining several of them follows the list):
  • smtpd_client_connection_rate_limit — The maximum number of connection attempts any client is allowed to make to this service per time unit (described below). The default value is 0, which means a client can make as many connections per time unit as Postfix can accept. By default, clients in trusted networks are excluded.
  • anvil_rate_time_unit — This time unit is used for rate limit calculations. The default value is 60 seconds.
  • smtpd_client_event_limit_exceptions — Clients that are excluded from the connection and rate limit commands. By default, clients in trusted networks are excluded.
  • smtpd_client_message_rate_limit — The maximum number of message deliveries a client is allowed to request per time unit (regardless of whether or not Postfix actually accepts those messages).
  • default_process_limit — The default maximum number of Postfix child processes that provide a given service. This limit can be overruled for specific services in the master.cf file. By default the value is 100.
  • queue_minfree — The minimum amount of free space in bytes in the queue file system that is needed to receive mail. This is currently used by the Postfix SMTP server to decide if it will accept any mail at all. By default, the Postfix SMTP server rejects MAIL FROM commands when the amount of free space is less than 1.5 times the message_size_limit. To specify a higher minimum free space limit, specify a queue_minfree value that is at least 1.5 times the message_size_limit. By default the queue_minfree value is 0.
  • header_size_limit — The maximum amount of memory in bytes for storing a message header. If a header is larger, the excess is discarded. By default the value is 102400.
  • message_size_limit — The maximum size in bytes of a message, including envelope information. By default the value is 10240000.
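A sketch of how several of these limits might appear together in /etc/postfix/main.cf; the values are illustrative only, not recommendations:
smtpd_client_connection_rate_limit = 10
smtpd_client_message_rate_limit = 30
anvil_rate_time_unit = 60s
default_process_limit = 100
message_size_limit = 10240000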

4.3.10.2. NFS and Postfix

Never put the mail spool directory, /var/spool/postfix/, on an NFS shared volume. Because NFSv2 and NFSv3 do not maintain control over user and group IDs, two or more users can have the same UID, and receive and read each other's mail.

Note

With NFSv4 using Kerberos, this is not the case, since the RPCSEC_GSS kernel module does not utilize UID-based authentication. However, it is still considered good practice not to put the mail spool directory on NFS shared volumes.

4.3.10.3. Mail-only Users

To help prevent local user exploits on the Postfix server, it is best for mail users to only access the Postfix server using an email program. Shell accounts on the mail server should not be allowed and all user shells in the /etc/passwd file should be set to /sbin/nologin (with the possible exception of the root user).

4.3.10.4. Disable Postfix Network Listening

By default, Postfix is set up to only listen to the local loopback address. Verify this by viewing the /etc/postfix/main.cf file and ensuring that only the following inet_interfaces line appears:
inet_interfaces = localhost
This ensures that Postfix only accepts mail messages (such as cron job reports) from the local system and not from the network. This is the default setting and protects Postfix from a network attack.
To remove the localhost restriction and allow Postfix to listen on all interfaces, use the inet_interfaces = all setting.

4.3.10.5. Configuring Postfix to Use SASL

The Red Hat Enterprise Linux 7 version of Postfix can use the Dovecot or Cyrus SASL implementations for SMTP Authentication (or SMTP AUTH). SMTP Authentication is an extension of the Simple Mail Transfer Protocol. When enabled, SMTP clients are required to authenticate to the SMTP server using an authentication method supported and accepted by both the server and the client. This section describes how to configure Postfix to make use of the Dovecot SASL implementation.
To install the Dovecot POP/IMAP server, and thus make the Dovecot SASL implementation available on your system, issue the following command as the root user:
~]# yum install dovecot
The Postfix SMTP server can communicate with the Dovecot SASL implementation using either a UNIX-domain socket or a TCP socket. The latter method is only needed in case the Postfix and Dovecot applications are running on separate machines. This guide gives preference to the UNIX-domain socket method, which affords better privacy.
In order to instruct Postfix to use the Dovecot SASL implementation, a number of configuration changes need to be performed for both applications. Follow the procedures below to effect these changes.
Setting Up Dovecot
  1. Modify the main Dovecot configuration file, /etc/dovecot/conf.d/10-master.conf, to include the following lines (the default configuration file already includes most of the relevant section, and the lines just need to be uncommented):
    service auth {
      unix_listener /var/spool/postfix/private/auth {
        mode = 0660
        user = postfix
        group = postfix
      }
    }
    The above example assumes the use of UNIX-domain sockets for communication between Postfix and Dovecot. It also assumes default settings of the Postfix SMTP server, which include the mail queue located in the /var/spool/postfix/ directory, and the application running under the postfix user and group. In this way, read and write permissions are limited to the postfix user and group.
    Alternatively, you can use the following configuration to set up Dovecot to listen for Postfix authentication requests through TCP:
    service auth {
      inet_listener {
        port = 12345
      }
    }
    In the above example, replace 12345 with the number of the port you want to use.
  2. Edit the /etc/dovecot/conf.d/10-auth.conf configuration file to instruct Dovecot to provide the Postfix SMTP server with the plain and login authentication mechanisms:
    auth_mechanisms = plain login
Setting Up Postfix
In the case of Postfix, only the main configuration file, /etc/postfix/main.cf, needs to be modified. Add or edit the following configuration directives:
  1. Enable SMTP Authentication in the Postfix SMTP server:
    smtpd_sasl_auth_enable = yes
  2. Instruct Postfix to use the Dovecot SASL implementation for SMTP Authentication:
    smtpd_sasl_type = dovecot
  3. Provide the authentication path relative to the Postfix queue directory (note that the use of a relative path ensures that the configuration works regardless of whether the Postfix server runs in a chroot or not):
    smtpd_sasl_path = private/auth
    This step assumes that you want to use UNIX-domain sockets for communication between Postfix and Dovecot. To configure Postfix to look for Dovecot on a different machine in case you use TCP sockets for communication, use configuration values similar to the following:
    smtpd_sasl_path = inet:127.0.0.1:12345
    In the above example, 127.0.0.1 needs to be substituted by the IP address of the Dovecot machine and 12345 by the port specified in Dovecot's /etc/dovecot/conf.d/10-master.conf configuration file.
  4. Specify SASL mechanisms that the Postfix SMTP server makes available to clients. Note that different mechanisms can be specified for encrypted and unencrypted sessions.
    smtpd_sasl_security_options = noanonymous, noplaintext
    smtpd_sasl_tls_security_options = noanonymous
    The above example specifies that during unencrypted sessions, no anonymous authentication is allowed and no mechanisms that transmit unencrypted user names or passwords are allowed. For encrypted sessions (using TLS), only non-anonymous authentication mechanisms are allowed.
    See http://www.postfix.org/SASL_README.html#smtpd_sasl_security_options for a list of all supported policies for limiting allowed SASL mechanisms.
Additional Resources
The following online resources provide additional information useful for configuring Postfix SMTP Authentication through SASL.

4.3.11. Securing SSH

Secure Shell (SSH) is a powerful network protocol used to communicate with another system over a secure channel. The transmissions over SSH are encrypted and protected from interception. See the OpenSSH chapter of the Red Hat Enterprise Linux 7 System Administrator's Guide for general information about the SSH protocol and about using the SSH service in Red Hat Enterprise Linux 7.

Important

This section draws attention to the most common ways of securing an SSH setup. By no means should this list of suggested measures be considered exhaustive or definitive. See sshd_config(5) for a description of all configuration directives available for modifying the behavior of the sshd daemon and ssh(1) for an explanation of basic SSH concepts.

4.3.11.1. Cryptographic Login

SSH supports the use of cryptographic keys for logging in to computers. This is much more secure than using only a password. If you combine this method with other authentication methods, it can be considered a multi-factor authentication. See Section 4.3.11.2, “Multiple Authentication Methods” for more information about using multiple authentication methods.
In order to enable the use of cryptographic keys for authentication, the PubkeyAuthentication configuration directive in the /etc/ssh/sshd_config file needs to be set to yes. Note that this is the default setting. Set the PasswordAuthentication directive to no to disable the possibility of using passwords for logging in.
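For example, the relevant lines in /etc/ssh/sshd_config would then read as follows; restart the sshd service after changing them:
PubkeyAuthentication yes
PasswordAuthentication no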
SSH keys can be generated using the ssh-keygen command. If invoked without additional arguments, it creates a 2048-bit RSA key set. The keys are stored, by default, in the ~/.ssh/ directory. You can utilize the -b switch to modify the bit-strength of the key. Using 2048-bit keys is normally sufficient. The Configuring OpenSSH chapter in the Red Hat Enterprise Linux 7 System Administrator's Guide includes detailed information about generating key pairs.
You should see the two keys in your ~/.ssh/ directory. If you accepted the defaults when running the ssh-keygen command, then the generated files are named id_rsa and id_rsa.pub and contain the private and public key respectively. You should always protect the private key from exposure by making it unreadable by anyone else but the file's owner. The public key, however, needs to be transferred to the system you are going to log in to. You can use the ssh-copy-id command to transfer the key to the server:
~]$ ssh-copy-id -i [user@]server
This command will also automatically append the public key to the ~/.ssh/authorized_keys file on the server. The sshd daemon will check this file when you attempt to log in to the server.
Similarly to passwords and any other authentication mechanism, you should change your SSH keys regularly. When you do, make sure you remove any unused keys from the authorized_keys file.

4.3.11.2. Multiple Authentication Methods

Using multiple authentication methods, or multi-factor authentication, increases the level of protection against unauthorized access, and as such should be considered when hardening a system to prevent it from being compromised. Users attempting to log in to a system that uses multi-factor authentication must successfully complete all specified authentication methods in order to be granted access.
Use the AuthenticationMethods configuration directive in the /etc/ssh/sshd_config file to specify which authentication methods are to be utilized. Note that it is possible to define more than one list of required authentication methods using this directive. If that is the case, the user must complete every method in at least one of the lists. The lists need to be separated by blank spaces, and the individual authentication-method names within the lists must be comma-separated. For example:
AuthenticationMethods publickey,gssapi-with-mic publickey,keyboard-interactive
An sshd daemon configured using the above AuthenticationMethods directive only grants access if the user attempting to log in successfully completes either publickey authentication followed by gssapi-with-mic or by keyboard-interactive authentication. Note that each of the requested authentication methods needs to be explicitly enabled using a corresponding configuration directive (such as PubkeyAuthentication) in the /etc/ssh/sshd_config file. See the AUTHENTICATION section of ssh(1) for a general list of available authentication methods.

4.3.11.3. Other Ways of Securing SSH

Protocol Version
Even though the implementation of the SSH protocol supplied with Red Hat Enterprise Linux 7 still supports both the SSH-1 and SSH-2 versions of the protocol for SSH clients, only the latter should be used whenever possible. The SSH-2 version contains a number of improvements over the older SSH-1, and the majority of advanced configuration options are only available when using SSH-2.
Red Hat recommends using SSH-2 to maximize the extent to which the SSH protocol protects the authentication and communication for which it is used. The version or versions of the protocol supported by the sshd daemon can be specified using the Protocol configuration directive in the /etc/ssh/sshd_config file. The default setting is 2. Note that the SSH-2 version is the only version supported by the Red Hat Enterprise Linux 7 SSH server.
Key Types
While the ssh-keygen command generates a pair of SSH-2 RSA keys by default, using the -t option, it can be instructed to generate DSA or ECDSA keys as well. The ECDSA (Elliptic Curve Digital Signature Algorithm) offers better performance at the equivalent symmetric key strength. It also generates shorter keys.
Non-Default Port
By default, the sshd daemon listens on TCP port 22. Changing the port reduces the exposure of the system to attacks based on automated network scanning, thus increasing security through obscurity. The port can be specified using the Port directive in the /etc/ssh/sshd_config configuration file. Note also that the default SELinux policy must be changed to allow for the use of a non-default port. You can do this by modifying the ssh_port_t SELinux type by typing the following command as root:
~]# semanage port -a -t ssh_port_t -p tcp port_number
In the above command, replace port_number with the new port number specified using the Port directive.
No Root Login
Provided that your particular use case does not require the possibility of logging in as the root user, you should consider setting the PermitRootLogin configuration directive to no in the /etc/ssh/sshd_config file. By disabling the possibility of logging in as the root user, the administrator can audit which users run which privileged commands after they log in as regular users and then gain root rights.
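A minimal sketch combining the non-default port and root login restrictions in /etc/ssh/sshd_config; the port number is only an example and must also be allowed through the firewall and SELinux as described above:
Port 2222
PermitRootLogin no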
Using the X Security extension
The X server in Red Hat Enterprise Linux 7 clients does not provide the X Security extension. Therefore, clients cannot request another security layer when connecting to untrusted SSH servers with X11 forwarding. Most applications were not able to run with this extension enabled anyway. By default, the ForwardX11Trusted option in the /etc/ssh/ssh_config file is set to yes, and there is no difference between the ssh -X remote_machine (untrusted host) and ssh -Y remote_machine (trusted host) commands.

Warning

Red Hat recommends not using X11 forwarding while connecting to untrusted hosts.

4.3.12. Securing PostgreSQL

PostgreSQL is an Object-Relational database management system (DBMS). In Red Hat Enterprise Linux 7, the postgresql-server package provides PostgreSQL. If it is not installed, enter the following command as the root user to install it:
~]# yum install postgresql-server
Before you can start using PostgreSQL, you must initialize a database storage area on disk. This is called a database cluster. To initialize a database cluster, use the command initdb, which is installed with PostgreSQL. The desired file system location of your database cluster is indicated by the -D option. For example:
~]$ initdb -D /home/postgresql/db1
The initdb command will attempt to create the directory you specify if it does not already exist. We use the name /home/postgresql/db1 in this example. The /home/postgresql/db1 directory contains all the data stored in the database and also the client authentication configuration file:
~]$ cat pg_hba.conf
# PostgreSQL Client Authentication Configuration File
# This file controls: which hosts are allowed to connect, how clients
# are authenticated, which PostgreSQL user names they can use, which
# databases they can access.  Records take one of these forms:
#
# local      DATABASE  USER  METHOD  [OPTIONS]
# host       DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
# hostssl    DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
# hostnossl  DATABASE  USER  ADDRESS  METHOD  [OPTIONS]
The following line in the pg_hba.conf file allows any local user to connect to any database under any database user name, without authentication (the trust method):
local   all             all                                     trust
This can be problematic when you use layered applications that create database users and no local users. If you do not want to explicitly control all user names on the system, remove this line from the pg_hba.conf file.
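If local access is still required, one illustrative alternative (adapt the addresses and authentication methods to your environment) is to require password-based authentication instead of trust, and then reload the PostgreSQL service for the change to take effect:
local   all             all                                     md5
host    all             all             192.168.0.0/24          md5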

4.3.13. Securing Docker

Docker is an open source project that automates the deployment of applications inside Linux Containers, and provides the capability to package an application with its runtime dependencies into a container. To make your Docker workflow more secure, follow procedures in the Red Hat Enterprise Linux Atomic Host 7 Container Security Guide.

4.3.14. Securing memcached against DDoS Attacks

Memcached is an open source, high-performance, distributed memory object caching system. While it is generic in nature, it is mostly used for improving the performance of dynamic web applications by lowering database load.
Memcached is an in-memory key-value store for small chunks of arbitrary data, such as strings and objects, from results of database calls, API calls, or page rendering. Memcached allows applications to take memory from parts of the system where it has more than it needs and make it accessible to areas where applications have less than they need.

memcached Vulnerabilities

In 2018, vulnerabilities of DDoS amplification attacks by exploiting memcached servers exposed to the public internet were discovered. These attacks take advantage of memcached communication using the UDP protocol for transport. The attack is effective because of the high amplification ratio - a request with the size of a few hundred bytes can generate a response of a few megabytes or even hundreds of megabytes in size. This issue was assigned CVE-2018-1000115.
In most situations, the memcached service does not need to be exposed to the public Internet. Such exposure brings its own security problems, allowing remote attackers to leak or modify information stored in memcached.

Hardening memcached

To mitigate security risks, perform as many from the following steps as applicable for your configuration:
  • Configure a firewall in your LAN. If your memcached server should be accessible only from within your local network, do not allow external traffic to ports used by memcached. For example, remove the port 11211, which is used by memcached by default, from the list of allowed ports:
    ~]# firewall-cmd --remove-port=11211/udp
    ~]# firewall-cmd --runtime-to-permanent
    See Section 5.8, “Using Zones to Manage Incoming Traffic Depending on Source” for firewalld commands that allow specific IP ranges to use the port 11211.
  • Disable UDP by adding the -U 0 -p 11211 value to the OPTIONS variable in the /etc/sysconfig/memcached file unless your clients really need this protocol:
    OPTIONS="-U 0 -p 11211"
  • If you use just a single memcached server on the same machine as your application, set up memcached to listen to localhost traffic only. Add the -l 127.0.0.1,::1 value to OPTIONS in /etc/sysconfig/memcached:
    OPTIONS="-l 127.0.0.1,::1"
  • If possible, enable SASL (Simple Authentication and Security Layer) authentication:
    1. Modify or add in the /etc/sasl2/memcached.conf file:
      sasldb_path: /path.to/memcached.sasldb
    2. Add an account in the SASL database:
      ~]# saslpasswd2 -a memcached -c cacheuser -f /path.to/memcached.sasldb
    3. Ensure the database is accessible for the memcached user and group.
      ~]# chown memcached:memcached /path.to/memcached.sasldb
    4. Enable SASL support in memcached by adding the -S value to OPTIONS to /etc/sysconfig/memcached:
      OPTIONS="-S"
    5. Restart the memcached server to apply the changes.
    6. Add the user name and password created in the SASL database to the memcached client configuration of your application.
  • Encrypt communication between memcached clients and servers with stunnel. Since memcached does not support TLS, a workaround is to use a proxy, such as stunnel, which provides TLS on top of the memcached protocol.
    You could either configure stunnel to use PSK (Pre-Shared Keys) or, even better, to use user certificates. When using certificates, only authenticated users can reach your memcached servers and your traffic is encrypted. A minimal example configuration follows the note below.

    Important

    If you use a tunnel to access memcached, ensure that the service is either listening only on localhost or a firewall prevents access from the network to the memcached port.
    See Section 4.8, “Using stunnel” for more information.
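    A minimal sketch of a server-side stunnel service definition for memcached, assuming the certificate and key already exist in the file shown and that port 11212 is free for the TLS listener:
    # /etc/stunnel/stunnel.conf
    cert = /etc/stunnel/memcached.pem
    [memcached-tls]
    accept = 11212
    connect = 127.0.0.1:11211
    Clients then run stunnel in client mode (client = yes) and point their memcached client libraries at the local tunnel endpoint instead of the memcached port.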