Chapter 4. Planning IP Address Usage
An OpenStack deployment can consume a larger number of IP addresses than you might expect. This section helps you anticipate the quantity of addresses required, and explains where they will be used.
Virtual IP addresses (VIPs) - VIP addresses host HA services; each VIP is a single IP address shared between multiple controller nodes.
4.1. Using multiple VLANs
When planning your OpenStack deployment, you might begin with a number of these subnets, from which you then decide how the individual addresses will be allocated. Having multiple subnets allows you to segregate traffic between systems into VLANs. For example, you would not generally want management or API traffic to share the same network as systems serving web traffic. Traffic between VLANs must also traverse a router, which gives you an opportunity to place firewalls that further govern traffic flow.
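As a sketch of how such a layout might be planned, the following uses Python's standard `ipaddress` module to carve per-VLAN /24 subnets out of a supernet. The 172.16.0.0/16 supernet and the VLAN names are illustrative assumptions, not defaults of any OpenStack tool:

```python
import ipaddress

# Hypothetical supernet reserved for the deployment; per-VLAN /24
# subnets are carved out of it in order and assigned to traffic types.
supernet = ipaddress.ip_network("172.16.0.0/16")
vlan_names = ["Internal API", "Storage", "Storage Management", "Tenant"]

# Pair each VLAN with the next available /24 from the supernet.
subnets = dict(zip(vlan_names, supernet.subnets(new_prefix=24)))

for name, net in subnets.items():
    print(f"{name:20} {net}  ({net.num_addresses - 2} usable hosts)")
```

This yields 172.16.0.0/24 for the first VLAN, 172.16.1.0/24 for the second, and so on, with 254 usable host addresses in each /24.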
4.2. Isolating VLAN traffic
You would typically allocate separate VLANs for the different types of network traffic you will host; for example, you could have a separate VLAN for each of the network types described below. Of these, only the External network needs to be routable to the external physical network. In this release, DHCP services are provided by the director.
Not all of the isolated VLANs in this section are required for every OpenStack deployment. For example, if your cloud users do not need to create ad hoc virtual networks on demand, then you may not require a tenant network. If you only need each VM to connect to the same switch as any other physical system, connect your Compute nodes directly to a provider network and have your instances use that provider network directly.
- Provisioning network - This VLAN is dedicated to deploying new nodes using director over PXE boot. OpenStack Orchestration (heat) installs OpenStack onto the overcloud bare metal servers; these are attached to the physical network to receive the platform installation image from the undercloud infrastructure.
- Internal API network - The Internal API network is used for communication between the OpenStack services, and covers API communication, RPC messages, and database communication. In addition, this network is used for operational messages between controller nodes. When planning your IP address allocation, note that each API service requires its own IP address. Specifically, an IP address is required for each of these services:
- vip-msg (amqp)
- vip-keystone-int
- vip-glance-int
- vip-cinder-int
- vip-nova-int
- vip-neutron-int
- vip-horizon-int
- vip-heat-int
- vip-ceilometer-int
- vip-swift-int
- vip-keystone-pub
- vip-glance-pub
- vip-cinder-pub
- vip-nova-pub
- vip-neutron-pub
- vip-horizon-pub
- vip-heat-pub
- vip-ceilometer-pub
- vip-swift-pub
When using High Availability, Pacemaker expects to be able to move the VIP addresses between the physical nodes.
- Storage - Carries traffic for Block Storage, NFS, iSCSI, and other storage services. Ideally, this would be isolated to separate physical Ethernet links for performance reasons.
- Storage Management - OpenStack Object Storage (swift) uses this network to synchronise data objects between participating replica nodes. The proxy service acts as the intermediary interface between user requests and the underlying storage layer. The proxy receives incoming requests and locates the necessary replica to retrieve the requested data. Services that use a Ceph back-end connect over the Storage Management network, since they do not interact with Ceph directly but rather use the front-end service. Note that the RBD driver is an exception; this traffic connects directly to Ceph.
- Tenant networks - Neutron provides each tenant with their own networks using either VLAN segregation (where each tenant network is a VLAN), or tunneling via VXLAN or GRE. Network traffic is isolated within each tenant network. Each tenant network has an IP subnet associated with it, and multiple tenant networks may use the same addresses.
- External - The External network hosts the public API endpoints and connections to the Dashboard (horizon). You can also optionally use this same network for SNAT, but this is not a requirement. In a production deployment, you will likely use a separate network for floating IP addresses and NAT.
- Provider networks - These networks allow instances to be attached to existing network infrastructure. You can use provider networks to map directly to an existing physical network in the data center, using flat networking or VLAN tags. This allows an instance to share the same layer-2 network as a system external to the OpenStack Networking infrastructure.
4.3. IP address consumption
The following systems will consume IP addresses from your allocated range:
- Physical nodes - Each physical NIC will require one IP address; it is common practice to dedicate physical NICs to specific functions. For example, management and NFS traffic would each be allocated their own physical NICs (sometimes with multiple NICs connecting across to different switches for redundancy purposes).
- Virtual IPs (VIPs) for High Availability - You can expect to allocate between one and three VIPs for each network shared between controller nodes.
4.4. Virtual Networking
These virtual resources consume IP addresses in OpenStack Networking. These are considered local to the cloud infrastructure, and do not need to be reachable by systems in the external physical network:
- Tenant networks - Each tenant network will require a subnet from which it will allocate IP addresses to instances.
- Virtual routers - Each router interface plugging into a subnet will require one IP address (with an additional address required if DHCP is enabled).
- Instances - Each instance will require an address from the tenant subnet they are hosted in. If ingress traffic is needed, an additional floating IP address will need to be allocated from the designated external network.
- Management traffic - Includes OpenStack services and API traffic. In Red Hat OpenStack Platform 11, requirements for virtual IP addresses have been reduced; all services instead share a small number of VIPs. API, RPC, and database services communicate on the internal API VIP.
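A rough estimator for how many addresses a single tenant subnet consumes follows from the per-resource counts above. The function name, parameters, and defaults here are illustrative assumptions for planning, not values taken from any OpenStack tool:

```python
# Back-of-the-envelope estimator: addresses consumed inside one tenant
# subnet, assuming one address per instance, one per router interface,
# and one DHCP port when DHCP is enabled.
def tenant_subnet_usage(instances, router_interfaces=1, dhcp_enabled=True):
    dhcp_ports = 1 if dhcp_enabled else 0
    return instances + router_interfaces + dhcp_ports

print(tenant_subnet_usage(instances=50))  # 52 addresses
```

Floating IP addresses are allocated separately from the external network, so they are not counted here.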
4.5. Example network plan
This example shows a number of networks that accommodate multiple subnets, with each subnet being assigned a range of IP addresses:
Subnet name | Address range | Number of addresses | Subnet Mask |
---|---|---|---|
Provisioning network | 192.168.100.1 - 192.168.100.250 | 250 | 255.255.255.0 |
Internal API network | 172.16.1.10 - 172.16.1.250 | 241 | 255.255.255.0 |
Storage | 172.16.2.10 - 172.16.2.250 | 241 | 255.255.255.0 |
Storage Management | 172.16.3.10 - 172.16.3.250 | 241 | 255.255.255.0 |
Tenant network (GRE/VXLAN) | 172.16.4.10 - 172.16.4.250 | 241 | 255.255.255.0 |
External network (incl. floating IPs) | 10.1.2.10 - 10.1.3.222 | 469 | 255.255.254.0 |
Provider network (infrastructure) | 10.10.3.10 - 10.10.3.250 | 241 | 255.255.252.0 |
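As a sanity check, ranges like those in the table above can be validated with Python's standard `ipaddress` module: count the addresses in each range and confirm both endpoints fall inside the stated subnet mask. The rows below reproduce three entries from the example plan:

```python
import ipaddress

# (name, first address, last address, subnet mask) from the plan above.
rows = [
    ("Provisioning", "192.168.100.1", "192.168.100.250", "255.255.255.0"),
    ("Internal API", "172.16.1.10", "172.16.1.250", "255.255.255.0"),
    ("External", "10.1.2.10", "10.1.3.222", "255.255.254.0"),
]

counts = {}
for name, start, end, mask in rows:
    first, last = ipaddress.ip_address(start), ipaddress.ip_address(end)
    net = ipaddress.ip_network(f"{start}/{mask}", strict=False)
    assert first in net and last in net, f"{name}: range exceeds mask"
    counts[name] = int(last) - int(first) + 1

print(counts)  # {'Provisioning': 250, 'Internal API': 241, 'External': 469}
```

Note that the External range spans two /24 blocks, which is why it needs the wider 255.255.254.0 (/23) mask to hold 469 contiguous addresses.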