Release Notes
Release details for Red Hat OpenStack Platform 8
Abstract
Chapter 1. Introduction
- Fully distributed object storage
- Persistent block-level storage
- Virtual machine provisioning engine and image storage
- Authentication and authorization mechanisms
- Integrated networking
- A web browser-based GUI for both users and administration.
1.1. About this Release
1.2. Requirements
- Chrome
- Firefox
- Firefox ESR
- Internet Explorer 11 and later (with Compatibility Mode disabled)
1.3. Deployment Limits
1.4. Database Size Management
1.5. Certified Drivers and Plug-ins
1.6. Certified Guest Operating Systems
1.7. Hypervisor Support
Red Hat OpenStack Platform is only supported for use with the libvirt driver (using KVM as the hypervisor on Compute nodes).
1.8. Content Delivery Network (CDN) Channels
# subscription-manager repos --enable=[reponame]
# subscription-manager repos --disable=[reponame]
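For example, to enable the core repositories listed in the table below and to disable the Extended Update Support channels (the exact set of repositories to enable depends on your subscription and deployment):
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-openstack-8-rpms
# subscription-manager repos --disable='*-eus-rpms'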
| Channel | Repository Name |
|---|---|
| Red Hat Enterprise Linux 7 Server (RPMS) | rhel-7-server-rpms |
| Red Hat Enterprise Linux 7 Server - RH Common (RPMs) | rhel-7-server-rh-common-rpms |
| Red Hat Enterprise Linux High Availability (for RHEL 7 Server) | rhel-ha-for-rhel-7-server-rpms |
| Red Hat OpenStack Platform 8 for RHEL 7 (RPMs) | rhel-7-server-openstack-8-rpms |
| Red Hat OpenStack Platform 8 director for RHEL 7 (RPMs) | rhel-7-server-openstack-8-director-rpms |
| Red Hat Enterprise Linux 7 Server - Extras (RPM) | rhel-7-server-extras-rpms |
| Channel | Repository Name |
|---|---|
| Red Hat Enterprise Linux 7 Server - Optional | rhel-7-server-optional-rpms |
| Red Hat OpenStack Platform 8 Operational Tools for RHEL 7 (RPMs) | rhel-7-server-openstack-8-optools-rpms |
Channels to Disable
The following table outlines the channels you must disable to ensure Red Hat OpenStack Platform 8 functions correctly.
| Channel | Repository Name |
|---|---|
| Red Hat CloudForms Management Engine | "cf-me-*" |
| Red Hat Enterprise Virtualization | "rhel-7-server-rhev*" |
| Red Hat Enterprise Linux 7 Server - Extended Update Support | "*-eus-rpms" |
1.9. Product Support
- Customer Portal
- The Red Hat Customer Portal offers a wide range of resources to help guide you through planning, deploying, and maintaining your OpenStack deployment. Facilities available via the Customer Portal include:
- Knowledge base articles and solutions
- Technical briefs
- Product documentation
- Support case management
Access the Customer Portal at https://access.redhat.com/.
- Mailing Lists
- Red Hat provides public mailing lists that are relevant to OpenStack users.
The rhsa-announce mailing list provides notification of the release of security fixes for all Red Hat products, including Red Hat OpenStack Platform. Subscribe at https://www.redhat.com/mailman/listinfo/rhsa-announce.
Chapter 2. Top New Features
2.1. Red Hat OpenStack Platform director
Red Hat OpenStack Platform 8 adds several notable new enhancements to the director:
- Broad support for Cisco networking through Neutron, for example:
- N1KV ML2 plug-in
- N1KV VEM and VSM modules
- Nexus 9K ML2 plug-in
- UCSM ML2 plug-in
- New parameters for network configuration in environment files, such as type_drivers, service_plugins, and core_plugin
- Big Switch networking support, including the Big Switch ML2 plug-in, LLDP, and bonding
- VXLAN is now the default overlay network, because VXLAN performs better and NICs with VXLAN offload are more common
- The maximum number of MariaDB connections now scales with the number of CPU cores on the Controller nodes
- The director can now set the file descriptor limit for RabbitMQ
- SSL support for the Red Hat OpenStack Platform components deployed on overcloud nodes
- IPv6 support for overcloud nodes
2.2. Block Storage
The following sections briefly describe the new features included in the Block Storage service in Red Hat OpenStack Platform 8.
Generic Volume Migration
Generic volume migration allows volume drivers that do not support iSCSI to use other means of data transport to participate in volume migration operations. Previously, migration used create_export to create and attach volumes over iSCSI and perform the I/O operations; making this more generic allows other drivers to also participate in volume migration.
This change is required to support volume migration with the Ceph driver.
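As a sketch of how an administrator would start a migration once a driver supports it (the volume ID and destination are placeholders; the destination uses the host@backend#pool form):
$ cinder migrate <volume-id> <host@backend#pool>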
Import/Export Snapshots
Provides a means to import and export snapshots. The snapshot import/export capability complements the volume import/export capability:
- Provides the ability to import a snapshot of a volume from one Block Storage volume to another, as well as to import non-OpenStack snapshots that already exist on a back-end device.
- Exporting snapshots works the same way as exporting volumes.
Non-Disruptive Backup
Previously, backup operations could only be performed while a volume was detached. You can now create a backup of an attached volume using the following procedure:
- Create a temporary snapshot
- Attach the snapshot
- Perform the backup from the snapshot
- Clean up the temporary snapshot
For attached volumes, creating a temporary snapshot is usually cheaper than creating a whole temporary volume. Snapshots can now be attached and read directly.
If a driver does not implement snapshot attachment and there is no way to read from the snapshot, a temporary volume can be created from the attached source volume and backed up instead.
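For example, the backup client exposes a force flag for backing up an in-use volume; a minimal sketch, with the volume ID as a placeholder:
$ cinder backup-create --force <volume-id>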
New Volume Replication API
Volume replication is a key storage capability, and a requirement for features such as high availability and disaster recovery of applications running on an OpenStack cloud. This release adds initial support for volume replication in the Block Storage service, including support for:
- Replicating volumes (primary approach)
- Promoting a secondary to primary (and stopping replication)
- Re-enabling replication
- Testing that replication is performing properly
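As a hedged sketch, replication is typically exposed through a volume type whose extra specs match the replication capabilities the back end reports (the spec name follows the capability names in this release; the exact value syntax depends on the driver):
$ cinder type-create replicated
$ cinder type-key replicated set replication_enabled='<is> True'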
Generic Image Cache
Currently, some volume drivers implement the clone_image method and use an internal cache of volumes on the back end that holds recently used images. For storage back ends capable of very efficient volume cloning, this can be a significant performance benefit compared to attaching and copying the image contents into each new volume. To make this capability easily available to other volume drivers, and to avoid duplication in the code base, an image cache has been added.
Use this feature when creating volumes from an image multiple times. End users will notice that creating a volume from an image is faster after the first time.
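A minimal configuration sketch, assuming the crudini utility and a back-end section named [rbd] in /etc/cinder/cinder.conf (the section name is illustrative; the cache also requires an internal tenant to own the cached image-volumes):
# crudini --set /etc/cinder/cinder.conf rbd image_volume_cache_enabled True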
2.3. Compute
Red Hat OpenStack Platform 8 includes several significant new features in the Compute service:
- The nova set-password server command is now available for changing the admin password of a server (see the example after this list).
- The libvirt driver has been enhanced to enable virtio-net multiqueue for instances. With this feature turned on, network performance improves because workloads scale with the number of vCPUs.
- Disk QoS (Quality of Service) when using Ceph RBD (RADOS Block Device) storage. For example, you can set sequential read or write limits, or the total IOPS or bandwidth allowed for a guest.
- Mark host down API for external high availability solutions: this API allows external tools to notify the Compute service of a Compute node failure, improving instance resiliency.
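For example, to change the admin password of a server named myserver (a placeholder; the command prompts for the new password):
$ nova set-password myserver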
2.4. Identity
Red Hat OpenStack Platform 8 introduces a number of new features for the Identity service:
- You can now configure identity provider-specific WebSSO. Previously, WebSSO had to be configured globally for keystone. With this update, WebSSO can be configured per identity provider, and dashboard queries can be directed to individual endpoints rather than performing an additional discovery step.
- New attributes are available for SAML assertions: openstack_user_domain for mapping user domains, and openstack_project_domain for mapping project domains.
- Experimental support has been added for keystone tokenless authorization using X.509 SSL client certificates.
2.5. Image Service
The following sections briefly describe the new features included in the Image service in Red Hat OpenStack Platform 8.
Image Signing and Encryption
This feature supports image signing and signature verification, which allows users to verify that an image has not been modified before booting it.
Artifact Repository (Experimental API)
This feature extends the Image service capabilities to store not only virtual machine images, but also other artifacts: binary objects with composite metadata.
The Image service becomes a catalog of such artifacts, providing the ability to store, search for, and retrieve artifacts, their metadata, and the related binary objects.
2.6. Object Storage
This release also includes a new ring tool, the Ring Builder Analyzer, which is used to analyze how well the ring builder does its job in a particular scenario.
The ring builder analyzer takes a scenario file containing the initial parameters for a ring builder and a number of rounds. In each round, a set of changes is made to the builder, for example adding devices, removing devices, or changing device weights. The builder is then rebalanced repeatedly until it settles; data about that round is printed, and the next round begins.
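A minimal scenario sketch, assuming the upstream scenario format and the swift-ring-builder-analyzer command (the device strings, weights, and random seed are illustrative):
$ cat > scenario.json <<'EOF'
{
  "part_power": 10, "replicas": 3, "overload": 0, "random_seed": 42,
  "rounds": [
    [["add", "r1z1-10.0.0.1:6000/sda", 100],
     ["add", "r1z2-10.0.0.2:6000/sda", 100],
     ["add", "r1z3-10.0.0.3:6000/sda", 100]],
    [["set_weight", 0, 150]],
    [["remove", 2]]
  ]
}
EOF
$ swift-ring-builder-analyzer scenario.json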
2.7. OpenStack Networking
2.7.1. QoS
Red Hat OpenStack Platform 8 now supports network quality-of-service (QoS) policies. These policies allow OpenStack administrators to offer varying service levels by applying rate limits to ingress and egress traffic for instances. Any traffic that exceeds the specified rate is dropped.
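A minimal sketch of applying a bandwidth limit with the neutron client (the policy name, rates, and port ID are placeholders):
$ neutron qos-policy-create bw-limiter
$ neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000 --max-burst-kbps 300
$ neutron port-update <port-id> --qos-policy bw-limiter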
2.7.2. Open vSwitch Update
Open vSwitch (OVS) has been updated to the upstream 2.4.0 release. This update includes a number of notable enhancements:
- Support for Rapid Spanning Tree Protocol (IEEE 802.1D-2004), resulting in faster convergence after topology changes.
- Optimized multicast efficiency, with support for IP multicast snooping (IGMPv1, IGMPv2, and IGMPv3).
- Support for vhost-user, a QEMU feature that improves I/O efficiency between guests and a userspace vSwitch.
- OVS version 2.4.0 also includes various performance and stability improvements.
For more information about Open vSwitch 2.4.0, see http://openvswitch.org/releases/NEWS-2.4.0.
2.7.3. RBAC for Networks
Role-based Access Control (RBAC) for OpenStack Networking allows fine-grained control over shared neutron networks. Previously, networks were shared either with all tenants or with none at all. OpenStack Networking now uses an RBAC table to control the sharing of neutron networks among tenants, allowing an administrator to control which tenants are granted permission to attach instances to a network.
As a result, cloud administrators can remove the ability for some tenants to create networks, and can instead allow them to attach to pre-existing networks that correspond to their project.
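For example, to grant a single tenant access to an otherwise private network (both IDs are placeholders):
$ neutron rbac-create --type network --action access_as_shared --target-tenant <tenant-id> <network-id>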
2.8. Technology Previews
For more information on the support scope for features marked as technology previews, see https://access.redhat.com/support/offerings/techpreview/.
2.8.1. New Technology Previews
- Benchmarking Service
rally is a benchmarking tool that automates and unifies multi-node OpenStack deployment, cloud verification, benchmarking, and profiling. It can be used as a basic tool for an OpenStack CI/CD system that continuously improves its SLA, performance, and stability. Rally consists of the following core components:
- Server providers: provide a unified interface for interacting with different virtualization technologies (LXS, Virsh, and so on) and cloud suppliers. It does so via ssh access within one L3 network.
- Deploy engines: deploy an OpenStack distribution, using servers retrieved from a server provider, before any benchmarking procedures take place.
- Verification: runs a specific set of tests against the deployed cloud to check that it works correctly, then collects the results and presents them in a human-readable form.
- Benchmark engine: allows you to write parameterized benchmark scenarios and run them against the cloud.
- DPDK-Accelerated Open vSwitch
- The Data Plane Development Kit (DPDK) consists of a set of libraries and userspace drivers for fast packet processing. It enables applications to perform their own packet processing directly to and from the NIC, achieving up to wire-speed performance in certain use cases. In addition, OVS+DPDK greatly improves the performance of Open vSwitch while maintaining its core functionality, allowing the switching of traffic from a host's physical NICs to applications within guest instances (and between guest instances) to be handled almost entirely in userspace. In this release, the OpenStack Networking (neutron) OVS plug-in has been updated to support an OVS+DPDK back-end configuration. OpenStack projects can now use the neutron API to provision networks, subnets, and other network settings, and use OVS+DPDK for improved instance network performance.
- OpenDaylight Integration
- Red Hat OpenStack Platform 8 adds a technology preview of integration with the OpenDaylight SDN controller. OpenDaylight is a flexible, modular, open SDN platform that supports many different applications. The OpenDaylight distribution included with Red Hat OpenStack Platform 8 is limited to the modules required to support OpenStack deployments using OVSDB NetVirt, and is based on the upstream Beryllium version. The following packages provide this feature: opendaylight, networking-odl.
- Real-Time KVM Integration
- The integration of real-time KVM with the Compute service further strengthens the vCPU scheduling guarantees that CPU pinning provides, by reducing the impact of CPU latency originating from causes such as kernel tasks running on host CPUs. This functionality is important for workloads such as network functions virtualization (NFV), where reduced CPU latency is critical.
- Containerized Compute Nodes
- The Red Hat OpenStack Platform director can integrate services from OpenStack's containerization project (kolla) into the overcloud's Compute nodes. This includes creating Compute nodes that use Red Hat Enterprise Linux Atomic Host as a base operating system, with individual containers running the different OpenStack services.
2.8.2. Previously Released Technology Previews
- Cells
- OpenStack Compute includes the concept of cells, provided by the nova-cells package, for dividing compute resources. For more information about cells, see Scheduling Hosts and Cells. Alternatively, Red Hat Enterprise Linux OpenStack Platform provides fully supported methods for dividing compute resources: regions, availability zones, and host aggregates. For more information, see Manage Host Aggregates.
- Database-as-a-Service (DBaaS)
- OpenStack Database-as-a-Service allows users to easily provision single-tenant databases within OpenStack Compute instances. The Database-as-a-Service framework bypasses much of the traditional administrative overhead associated with deploying, using, managing, monitoring, and scaling databases.
- Distributed Virtual Routing
- Distributed Virtual Routers (DVR) allow you to place L3 routers directly on Compute nodes. As a result, instance traffic is forwarded between Compute nodes (East-West) without first requiring routing through a Network node. Instances without floating IP addresses still route SNAT traffic through the Network node.
- DNS-as-a-Service (DNSaaS)
- Red Hat OpenStack Platform 8 includes a technology preview of DNS-as-a-Service (DNSaaS), also known as Designate. DNSaaS implements a REST API for domain and record management, is multi-tenanted, and integrates with the OpenStack Identity service (keystone) for authentication. DNSaaS includes a framework for integration with Compute (nova) and OpenStack Networking (neutron) notifications, allowing auto-generated DNS records. In addition, DNSaaS supports integration with PowerDNS and Bind9.
- Erasure Coding (EC)
- The Object Storage service includes an EC storage policy type for devices that hold large amounts of infrequently accessed data. An EC storage policy uses its own ring and configurable set of parameters. This reduces cost and storage requirements (requiring roughly half the capacity of triple replication). Because EC requires more CPU and network resources, implementing EC as a policy allows you to isolate all the storage devices associated with your cluster's EC capability.
- File Share Service
- The OpenStack File Share Service provides a seamless, easy way to provision and manage shared file systems in OpenStack. These shared file systems can be consumed (mounted) securely by instances. The File Share Service also lets you robustly manage provisioned file shares, set quotas, configure access, create snapshots, and perform other useful administrative tasks.
The following sections briefly describe the new features included in the File Share Service in Red Hat OpenStack Platform 8.
Manila Horizon Dashboard Plug-in
With this release, users can interact with the features provided by the File Share Service through the dashboard. This includes interactive menus for creating and manipulating file shares.
File Share Migration
File share migration is a new feature that enables the migration of a file share from one back end to another.
The following approaches are available:
- Delegating to the driver: a highly optimized but restricted approach. By understanding the destination back end, the driver can perform the migration in a more efficient manner. After migration, the driver must return a model update.
- Managing coordination while delegating some tasks to the driver: this approach creates a new share on the destination host, mounts both exports on the manila node, copies all the files, and then removes the old file share. It works for drivers that implement the methods required to assist the migration process, for example:
- Changing the source share to read-only, so that users are not affected by the migration.
- Mounting and unmounting exports using a given protocol.
For the second approach to work, the driver must create ports during the server_setup method that allow connectivity between the share server and the manila node.
Availability Zones
The share creation code in the File Share Service client now accepts and uses an availability zone argument. This also allows availability zone information to be preserved when creating a share from a snapshot.
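A minimal sketch with the manila client (the protocol, size, and availability zone name are illustrative):
$ manila create NFS 1 --name myshare --availability-zone <az-name>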
Oversubscription in Thin Provisioning
This release adds support for oversubscription in thin provisioning. This accommodates use cases where certain drivers report infinite or unknown capacity, allowing oversubscription to occur. This update adds the following parameters:
- max_over_subscription_ratio: a floating-point number representing the enforced oversubscription ratio. The ratio is calculated as the total provisioned storage divided by the total available capacity. An oversubscription ratio of 1.0 means the total amount of provisioned storage cannot exceed the total amount of available storage, while a ratio of 2.0 means the total provisioned storage can reach twice the total available storage.
- provisioned_capacity: the apparent amount of storage that has been provisioned. The value of this parameter is used in the calculation of max_over_subscription_ratio.
- Firewall-as-a-Service (FWaaS)
- The Firewall-as-a-Service plug-in adds perimeter firewall management to OpenStack Networking (neutron). FWaaS uses iptables to apply firewall policy to all Networking routers within a project, and supports one firewall policy and one logical firewall instance per project. FWaaS operates at the perimeter by filtering traffic at the OpenStack Networking (neutron) router. This distinguishes it from security groups, which operate at the instance level.
- Operational Tools
- The operational tools are logging and monitoring tools that facilitate troubleshooting. A centralized, easy-to-use analytics and search dashboard simplifies troubleshooting, and features such as service availability checks, alarm management, and collecting and presenting data with graphs are available.
- VPN-as-a-Service (VPNaaS)
- VPN-as-a-Service allows you to create and manage VPN connections in OpenStack.
- Time-Series-Database-as-a-Service (TSDaaS)
- Time-Series-Database-as-a-Service (gnocchi) is a multi-tenant, metrics and resource database. It stores metrics at a very large scale while providing operators and users access to the metrics and resource information.
Chapter 3. Release Information
3.1. Enhancements
- BZ#978365
The ability of the libvirt driver to set the admin password has been added. To use this feature, run the following command: "nova root-password [server]".
- BZ#1042947
This update adds support for volume migrations of the Block Storage (cinder) service. These are done in the 'Volumes' panel of the OpenStack dashboard (Project -> Compute -> Volumes and Admin -> System Panel -> Volumes). You can perform this action on a 'Volumes' row in the table. The final patch in this series resolved the command action itself; it had previously errored out due to incorrect parameters and parameter count issues.
- BZ#1100542
OpenStack dashboard tables summarize information about a large number of entities. This update adds a table enhancement that enables this information to be displayed within the table as a slide-down "drawer" that is activated when you click on a toggle switch within a row. The drawer appears as an additional row (with configurable height) and contains additional information about the entity in the row above it (e.g. additional entity details, metrics, graphs, etc.). Multiple drawers may be opened at one time.
- BZ#1104445
Instances can now be cold migrated or live migrated from hosts marked for maintenance. A new action button in the System > Hypervisors > Compute Host tab in the dashboard allows administrative users to set options for instance migration. Cold migration moves an instance from one host to another, reboots across the move, and its destination is chosen by the scheduler. This type of migration should be used when the administrative user did not select the 'live_migrate' option in the dashboard or the migrated instance is not running. Live migration moves an instance (with “Power state” = “active”) from one host to another, the instance doesn't appear to reboot, and its destination is optional (it can be defined by the administrative user or chosen by the scheduler). This type of migration should be used when the administrative user selected the 'live_migrate' option in the dashboard and the migrated instance is still running.
- BZ#1149599
With this feature, you can now use Block Storage (cinder) to create a volume by specifying either the image ID or image name.
- BZ#1166963
This update replaces the network topology with a curvature-based graph, as the previous UI did not work well with a larger number of nodes or networks. The new network topology map can handle more nodes, looks stylish, and its node layout can be re-organized.
- BZ#1167563
- BZ#1167565
This update adds a common API hosted by the Image Service (glance) for vendors, admins, services, and users to meaningfully define an available key/value pair, and tag metadata. The intent is to enable better metadata collaboration across artifacts, services, and projects for OpenStack users. This definition describes the available metadata that can be used on different types of resources (images, artifacts, volumes, flavors, aggregates, among others). A definition includes the properties type, key, description, and constraints. This catalog will not store the values for specific instance properties. For example, a definition of a virtual CPU topology property for a number of cores will include the key to use, a description, and value constraints, such as requiring it to be an integer. As a result, users (potentially through the dashboard) would be able to search this catalog to list the available properties they can add to a flavor or image. They will see the virtual CPU topology property in the list and know that it must be an integer. In the dashboard example, when the user adds the property, its key and value will be stored in the service that owns that resource (in nova for flavors, and in glance for images).
- BZ#1168359
Nova's serial console API is now exposed for instances. Specifically, a serial console is available for hypervisors not supporting VNC or Spice. This update adds support for it in the dashboard.
- BZ#1189502
With this update, configuration settings now exist to set timeouts after which clusters that have failed to reach the 'Active' state will be automatically deleted.
- BZ#1189517
When creating a job template intended for re-use, you can now register a variable for datasource URLs with OpenStack Data Processing (sahara). Doing so allows you to easily change input and output paths per run, rather than an actual URL (which would require revising the template, or manually revising the URL per run between jobs). This makes it easier to reuse job templates when data source jobs are mutable between runs, as is true for most real-world cases.
- BZ#1192641
With this release, in order to provide security isolation, the '/usr/local' path has been removed from the default Block Storage rootwrap configuration. As a result, the deployments relying on Block Storage service executing commands from the '/usr/local/' as the 'root' user will need to add configuration for the commands to work.
- BZ#1212158
This update provides OpenStack notifications. Previously, external consumers of OpenStack notifications could not interface with a director-deployed cloud because notifications were not enabled. Now, the director enables notifications for external consumers.
- BZ#1214230
With this update, a new feature adds pagination support for the Block Storage 'snapshots-list' and 'backups-list' commands. You can now use the limit, marker, and sort parameters to control the number of returned results, the starting element, and their order. Retrieving a limited number of results instead of the entire data set can be extremely useful on large deployments with thousands of snapshots and backups.
- BZ#1225163
The Director now properly enables notifications for external consumers.
- BZ#1229634
Previously, there was no secure way to remotely access an S3 back end in a private network. With this update, a new feature allows the Image service S3 driver to connect to an S3 back end on a different network in a secure way through an HTTP proxy.
- BZ#1238807
This enhancement enables the distribution of per-node hieradata, matching the nodes from their UUID (as reported by 'dmidecode'). This allows you to scale CephStorage across nodes equipped with a different number/type of disks. As a result, CephStorage nodes can now be configured with non-homogeneous disk topologies. This is done by provisioning a different configuration hash for the ceph::profile::params::osds parameter.
- BZ#1249601
OpenStack Bare Metal (ironic) now supports deploying nodes in UEFI mode. This is due to requests from customers with servers that only support UEFI boot.
- BZ#1257306
This release includes a tech preview of Image Signing and Verification for glance images. This feature helps protect image integrity by ensuring no modifications occur after the image is uploaded by a user. This capability includes both signing of the image, and signature validation of bootable images when used.
- BZ#1258643
To provide better flexibility for administrators on deployments with an assortment of storage backends, Block Storage now defines standard names for the capabilities, for example, QoS, compression, replication, bandwidth control, and thin provisioning. This means volume type specifications that will work with multiple drivers without modifications can be defined.
- BZ#1258645
This enhancement adds a new scaled backend replication implementation (between backends) that leaves the bulk of the work up to the driver, while providing basic admin API methods. This is available where replication is set at the volume types level, and when the cinder driver reports its capabilities. New configuration options are available: replication_enabled - set to True replication_type - async, sync replication_count - Number of replicas
- BZ#1259003
The domain name for overcloud nodes defaulted to 'localdomain'. For example: 'overcloud-compute-0.localdomain'. This enhancement provides a parameter (CloudDomain) to customize the domain name. Create an environment file with the CloudDomain parameter included in the 'parameter_defaults' section. If no domain name is defined, the Heat templates default to 'localdomain'.
- BZ#1262106
This enhancement enables backup of Block Storage (cinder) volumes to a Ceph object store using the same user interface as that for backing up cinder volumes to Object Storage (swift). This was done to avoid the need for a second object store if Ceph was already being used.
- BZ#1266104
This update adds neutron QoS (Quality of Service) extensions to provide better control over tenant networking qualities and limits. Overclouds are now deployed with Neutron QoS extension enabled.
- BZ#1266156
The OpenDaylight OpenStack neutron driver has been split from the neutron project and moved to a new package, python-networking-odl. Operators still have the driver available for use as part of their Red Hat OpenStack Platform installations.
- BZ#1266219
The Director can now deploy the Block Storage service with a Dell EqualLogic or Dell Storage Center appliance as a back end. For more information, see: https://access.redhat.com/documentation/en/red-hat-openstack-platform/version-8/dell-equallogic-back-end-guide/ https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/dell-storage-center-back-end-guide/dell-storage-center-back-end-guide
- BZ#1267951
This update introduces nested quotas. Deployers now have the ability to manage a hierarchy of quotas in Cinder, with subprojects that inherit from parent projects.
- BZ#1273303
The Director now supports the OpenStack Networking 'enable_isolated_metadata' option. This option allows access to instance metadata on VMs on external routers or on isolated networks.
- BZ#1279812
- BZ#1282429
This update adds new parameters to configure API worker process counts, which allows you to tune overcloud's memory utilization and request processing capacity. The parameters are: CeilometerWorkers, CinderWorkers, GlanceWorkers, HeatWorkers, KeystoneWorkers, NeutronWorkers, NovaWorkers, and SwiftWorkers.
- BZ#1295690
- BZ#1296568
- BZ#1298247
- BZ#1305023
This update allows the Dashboard (horizon) to accept an IPv6 address as a VIP address to a Load Balancing Pool. As a result, you can now use Dashboard to configure IPv6 addresses on a Load Balancing Pool.
- BZ#1312373
This update adds options to configure Ceilometer to store events, which can be retrieved later through Ceilometer APIs. This is an alternative to listening to the message bus to capture events. A brief outline of the configuration is in https://bugzilla.redhat.com/show_bug.cgi?id=1318397.
- BZ#1316235
With the Red Hat OpenStack Platform 8 release, the inbuilt implementation of Amazon EC2 API in the OpenStack Compute (nova) service is deprecated and will be removed in the future releases. Moving forward, with the Red Hat OpenStack Platform 9 release, a new standalone EC2 API service will be available.
- BZ#1340717
This update removes unnecessary downtime caused by updating OvS switch reconfiguration when restarting the OvS agent. Previously, dropping flows on physical bridges caused networking to drop. The same issue was experienced when the patch port between br-int and br-tun was deleted and rebuilt during startup. This enhancement resolves these issues, making it possible to restart the OvS agent without unnecessarily disrupting network traffic. This results in no downtime when restarting the OvS neutron agent if the bridge is already set up and reconfiguration was not requested.
3.2. Technology Previews
- BZ#1322944
This update provides the following technology preview: The director provides an option to integrate services from OpenStack's containerization project (kolla) into the Overcloud's Compute nodes. This includes creating Compute nodes that use Red Hat Enterprise Linux Atomic Host as a base operating system and individual containers to run different OpenStack services.
3.3. Release Notes
- BZ#1244555
- BZ#1266050
The Open vSwitch (openvswitch) package is now re-based to upstream version 2.4.0.
- BZ#1300735
With this release, the 'Metering' panel in Dashboard (horizon) has been disabled due to performance issues.
3.4. Known Issues
- BZ#1177611
A known issue has been identified for interactions between High Availability (VRRP) routers and L2 Population. Currently, when connecting an HA router to a subnet, HA routers use a distributed port by design. Each router has the same port details on each node that it is scheduled on, and only the master router has IPs configured on that port; all the slaves have the port without any IPs configured. Consequently, L2 Population uses this stale information to advertise that the router is present on the node stated in the port binding information for that port. As a result, each node that has a port on that logical network creates a tunnel only to the node where the port is presumably bound. In addition, a forwarding entry is set so that any traffic to that port is sent through the created tunnel. However, this may not succeed, as there is no guarantee that the master router is on the node specified in the port binding. Furthermore, even if the master router is in fact on that node, a failover event would cause it to migrate to another node and result in a loss of connectivity with the router.
- BZ#1234601
- BZ#1237009
The swift proxy port is denied in the Undercloud firewall. This means the swift proxy only accepts connections from localhost. As a workaround, open the swift proxy port in the firewall:
# sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
This enables connections to the swift proxy from remote machines.
- BZ#1268426
- BZ#1272591
The Undercloud used the Public API to configure service endpoints during the post-deployment stage. This meant the Undercloud needed to reach the Public API in order to complete the deployment. If the External uplink on the Undercloud is not the same subnet as the Public API, the Undercloud requires a route to the Public API and any firewall ACLs must allow this traffic. With this route, the Undercloud connects to the Public API and completes post-deployment tasks.
- BZ#1290881
The default driver for the Block Storage service is the internal LVM software iSCSI driver. This is the volume back end which manages local volumes. However, the Cinder iSCSI LVM driver has significant performance issues. In production environments with high I/O activity, there are many potential issues that could affect performance or data integrity. Red Hat strongly recommends using a certified Block Storage plug-in provider for storage in a production environment. The software iSCSI LVM driver should only be used for, and is only supported in, single-node evaluations and proof-of-concept environments.
- BZ#1293979
Updating packages on the Undercloud left the Undercloud in an indeterminate state. This meant some Undercloud services were disabled after the package update and could not start again. As a workaround, run 'openstack undercloud install' to reconfigure all Undercloud services. After the command completes, the Undercloud services operate normally.
- BZ#1295374
- BZ#1463061
When using Red Hat Ceph Storage as a back end for both Block Storage (cinder) volumes and backups, any attempt to perform an incremental backup will result in a full backup instead, without any warning.
- BZ#1321179
OpenStack command-line clients that use `python-requests` cannot currently validate certificates that have an IP address in the SAN field.
3.5. Deprecated Functionality
- BZ#1295573
- BZ#1296135
With this release, support for PowerDNS (pdns) has been removed due to a known security issue with PolarSSL/mbedtls. Designate can now be used with BIND9 as a backend.
- BZ#1312889
This update removes the Tuskar API service from Red Hat OpenStack Platform director 8. Tuskar was installed and configured on the Undercloud, including an endpoint existing in the Keystone service catalog. The RPM is no longer installed, the service is not configured, and the endpoint is not created in the service catalog.
Chapter 4. Technical Notes
4.1. RHEA-2016:0603: Red Hat OpenStack Platform 8 Enhancement Advisory
diskimage-builder
- BZ#1307001
The diskimage-builder package has been upgraded to upstream version 1.10.0, which provides a number of bug fixes and enhancements over the previous version. Notably, the python-devel package is no longer removed by default, as it previously caused other packages to be removed as well.
memcached
- BZ#1299075
Previously, memcached was unable to bind IPv6 addresses, resulting in memcached failing to start in IPv6 environments. This update addresses this issue, with memcached-1.4.15-9.1.el7ost now IPv6-enabled.
mongodb
- BZ#1308855
This rebased package provides improved performance for range queries. Specifically, queries that used the `$or` operator were affected by regressions in the 2.4 release. Those regressions are now fixed in 2.6.
openstack-cinder
- BZ#1272572
Previously, a bug in the Block Storage component caused it to be incompatible with the Identity API v2 when working with quotas, resulting in failures when managing information on quotas in Block Storage. With this update, Block Storage has now been updated to be compatible with the Identity API v2, and the dashboard can now correctly retrieve information on volume quotas.
- BZ#1295576
Previously, a bug in the cinder API server quota code used `encryption_auth_url` when it should have used `auth_uri`. Consequently, cinder failed to talk to keystone when querying quota information, causing the client to receive HTTP 500 errors from cinder. This issue has been fixed in the Cinder API service in 7.0.1, resulting in the expected behavior of the cinder quota commands.
- BZ#1262106
This enhancement enables backup of Block Storage (cinder) volumes to a Ceph object store using the same user interface as that for backing up cinder volumes to Object Storage (swift). This was done to avoid the need for a second object store if Ceph was already being used.
- BZ#1179445
Previously, when Ceph was used as the backing store for Block Storage (cinder), operations such as deleting or flattening a large volume may have blocked other driver threads. Consequently, deleting and flattening threads may have prevented cinder from doing other work until they completed. This fix changes the delete and flattening threads to run in a sub-process, rather than as green threads in the same process. As a result, delete and flattening operations are run in the background so that other cinder operations (such as volume creates and attaches) can run concurrently.
- BZ#1192641
With this release, in order to provide security isolation, the '/usr/local' path has been removed from the default Block Storage rootwrap configuration. As a result, the deployments relying on Block Storage service executing commands from the '/usr/local/' as the 'root' user will need to add configuration for the commands to work.
- BZ#1258645
This enhancement adds a new scaled backend replication implementation (between backends) that leaves the bulk of the work up to the driver, while providing basic admin API methods. This is available where replication is set at the volume types level, and when the cinder driver reports its capabilities. New configuration options are available: replication_enabled - set to True replication_type - async, sync replication_count - Number of replicas
- BZ#1258643
To provide better flexibility for administrators on deployments with an assortment of storage backends, Block Storage now defines standard names for the capabilities, for example, QoS, compression, replication, bandwidth control, and thin provisioning. This means volume type specifications that will work with multiple drivers without modifications can be defined.
- BZ#1267951
This update introduces nested quotas. Deployers now have the ability to manage a hierarchy of quotas in Cinder, with subprojects that inherit from parent projects.
openstack-glance
- BZ#1167565
This update adds a common API hosted by the Image Service (glance) for vendors, admins, services, and users to meaningfully define available key/value pairs and tag metadata. The intent is to enable better metadata collaboration across artifacts, services, and projects for OpenStack users. This definition describes the available metadata that can be used on different types of resources (images, artifacts, volumes, flavors, and aggregates, among others). A definition includes the property's type, key, description, and constraints. This catalog does not store the values for specific instance properties. For example, a definition of a virtual CPU topology property for a number of cores will include the key to use, a description, and value constraints, such as requiring it to be an integer. As a result, users (potentially through the dashboard) can search this catalog to list the available properties they can add to a flavor or image. They will see the virtual CPU topology property in the list and know that it must be an integer. In the dashboard example, when the user adds the property, its key and value will be stored in the service that owns that resource (in nova for flavors, and in glance for images).
openstack-gnocchi
- BZ#1252954
openstack-heat
- BZ#1303084
Previously, heat would attempt to validate old properties based on the current property definitions. Consequently, during director upgrades where a property definition changed type, the process would fail with a 'TypeError' when heat tried to validate the old property value. With this fix, heat no longer tries to validate old property values. As a result, heat can now gracefully handle property schema definition changes by only validating new property values.
- BZ#1318474
Previously, director used a patch update when updating a cloud, which reused all the parameters passed at creation. Parameters which were removed in an update were failing validation. Consequently, updating a stack with parameters removed, and using a patch update would fail unless the parameters were explicitly cleared. With this fix, heat changes the handling of patched updates to ignore parameters which were not present in the newest template. As a result, it's now possible to remove top-level parameters and update a stack using a patch update.
- BZ#1303723
Previously, heat would leave the context roles empty when loading the stored context. When signaling, heat used the stored context (a trust-scoped token), and if the context did not have any roles, the operation failed. Consequently, the process failed with the error 'trustee has no delegated roles'. This fix addresses the issue by populating roles when loading the stored context. As a result, loading the auth ref and populating the roles from the token ensures that any RBAC performed on the context roles works as expected, and that the stack update succeeds.
- BZ#1303112
Previously, heat changed the name of properties on several neutron resources; while it used a mechanism to support the old names when creating them, it failed to validate resources created with a previous version. Consequently, using Red Hat OpenStack Platform 8 to update a stack created in version 7 (or earlier) that uses a neutron port resource would fail by trying to look up a 'None' object. With this fix, when heat updates the resource, it now uses the translation mechanism on old properties too. As a result, supporting deprecated properties now works as expected with resources created from a previous version.
openstack-ironic-python-agent
- BZ#1312187
Sometimes, hard drives were not available in time for a deployment ramdisk run. Consequently, the deployment failed if the ramdisk was unable to find the required root device. With this update, the "udev settle" command is executed before enumerating disks in the ramdisk, and the deployment no longer fails due to the missing root device.
openstack-keystone
- BZ#1282944
Identity Service (keystone) used a hard-coded LDAP membership attribute when checking if a user was enabled, if the 'enabled emulation' feature was being used. Consequently, users who were `enabled` could show as `disabled` if an unexpected LDAP membership attribute was used. With this fix, the 'enabled emulation' membership check now uses the configurable LDAP membership attribute that is used for group resources. As a result, the 'enabled' status for users is shown correctly when different LDAP membership attributes are configured.
- BZ#1300395
- BZ#923598
Previously, the Identity Service (keystone) allowed administrators to set a maximum password length limit that was larger than the limit used by the Passlib python module. Consequently, if the maximum password length limit was set larger than the Passlib limit, attempts to set a user password larger than the Passlib limit would fail with an HTTP 500 response and an uncaught exception. With this update, Identity Service now validates that the 'max_password_length' configuration value is less than or equal to the Passlib maximum password length limit. As a result, if the Identity Service setting 'max_password_length' is too large, the service will fail to start with a configuration validation error.
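A minimal keystone.conf sketch, assuming the option lives in the '[identity]' section; the section placement and the value shown are illustrative assumptions, not taken from this note:

[identity]
# Must not exceed the Passlib maximum password length,
# or keystone will refuse to start.
max_password_length = 4096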
openstack-neutron
- BZ#1292570
Previously, the 'ip netns list' command returned unexpected ID data in recent versions of 'iproute2'. Consequently, neutron was unable to parse namespaces. This fix addresses this issue by updating the parser used in neutron. As a result, neutron can now be expected to properly parse namespaces.
- BZ#1287736
Prior to this update, the L3 agent failed to respawn the keepalived process if the keepalived parent process died. This was because the child keepalived process was still running. Consequently, the L3 agent could not recover from the death of the keepalived parent process, breaking the HA router served by the process. With this update, the L3 agent is made aware of the child keepalived process, and now cleans it up as well before respawning keepalived. As a result, the L3 agent is now able to recover HA routers when the keepalived process dies.
- BZ#1290562
Red Hat OpenStack Platform 8 introduced a new RBAC feature that allows you to share neutron networks with a specific list of tenants, instead of globally. As part of the feature, the default policy.json file for neutron started triggering I/O, performing database fetches for every port fetch in an attempt to allow the owner of a network to list all ports that belong to that network, even if they were created by other tenants. Consequently, the list operation for ports triggered multiple unneeded database fetches, which drastically affected the performance of the operation. This update addresses this issue by running the I/O operations only when they are actually needed, for example, when the port to be validated by the policy engine does not belong to the tenant that invokes the list operation. As a result, list operations for ports scale normally again.
- BZ#1222775
Prior to this update, the fix for BZ#1215177 added the 'garp_master_repeat 5' and 'garp_master_refresh 10' options to the Keepalived configuration. However, Keepalived then continuously spammed the network with Gratuitous ARP (GARP) broadcasts; in addition, instances would lose their IPv6 default gateway settings. As a result of these issues, the IPv6 router stopped working with VRRP. This update addresses these issues by dropping the 'repeat' and 'refresh' Keepalived options. This fixes the IPv6 bug but re-introduces the bug described in BZ#1215177. To resolve this, use the 'delay' option instead. As a result, Keepalived sends a GARP when it transitions to 'MASTER', waits a number of seconds (determined by the delay option), and sends another GARP. Use an aggressive 'delay' setting to make sure that when the node boots and the L3/L2 agents start, there is enough time for the L2 agent to wire the ports.
- BZ#1283623
Prior to this update, a change to the Open vSwitch agent introduced a bug in how the agent handles the segmentation ID value for flat networking during agent startup. Consequently, the agent failed to restart when serving a flat network. With this update, the agent code was fixed to handle segmentation properly for flat networking. As a result, the agent is successfully restarted when serving a flat network.
- BZ#1295690
- BZ#1177611
A known issue has been identified for interactions between High Availability (VRRP) routers and L2 Population. Currently, when connecting an HA router to a subnet, HA routers use a distributed port by design. Each router has the same port details on each node that it is scheduled on, and only the master router has IPs configured on that port; all the slaves have the port without any IPs configured. Consequently, L2 Population uses the stale information to advertise that the router is present on the node stated in the port binding information for that port. As a result, each node that has a port on that logical network has a tunnel created only to the node where the port is presumably bound. In addition, a forwarding entry is set so that any traffic to that port is sent through the created tunnel. However, this action may not succeed, as there is no guarantee that the master router is on the node specified in the port binding. Furthermore, even if the master router is in fact on that node, a failover event would cause it to migrate to another node and result in a loss of connectivity with the router.
- BZ#1300308
Previously, the neutron-server service would sometimes erroneously require a new RPC entrypoint version from the L2 agents that listened for security group updates. Consequently, the RHEL OpenStack Platform 7 neutron L2 agents could not handle certain security group update notifications sent by Red Hat OpenStack Platform 8 neutron-server services, causing certain security group updates to not be propagated to the data plane. This update addresses this issue by ending the requirement of the new RPC endpoint version from agents, as this will assist the rolling upgrade scenario between RHEL OpenStack Platform 7 and Red Hat OpenStack Platform 8. As a result, RHEL OpenStack Platform 7 neutron L2 agents will now correctly handle security group update notifications sent by the Red Hat OpenStack Platform 8 neutron-server services.
- BZ#1293381
Prior to this update, when the last HA router of a tenant was deleted, the HA network belonging to the tenant was not removed. This happened in certain scenarios, such as the 'router delete' API call, which raised an exception since the router had been deleted. That scenario was possible due to a race condition between HA router 'create' and 'delete' operations. As a result of this issue, tenant HA networks were not deleted. This update resolves the race condition, and now catches the 'ObjectDeletedError' and 'NetworkInUse' exceptions when a user deletes the last HA router, and also moves the HA network deletion procedure under the 'ha_network exist' check block. In addition, the fix checks whether or not HA routers are present, and deletes the HA network when the last HA router is deleted.
- BZ#1255037
Neutron ports created while neutron-openvswitch-agent is down are in the status "DOWN, binding:vif_type=binding_failed", which is expected. Nevertheless, prior to this update, there was no way to recover those ports even after neutron-openvswitch-agent came back online. Now, the function "_bind_port_if_needed" attempts binding at least once when the port's binding status is already "binding_failed". As a result, ports can now recover from a failed binding status through repeated binding attempts triggered when neutron-openvswitch-agent comes back online.
- BZ#1284739
Prior to this update, the status of a floating IP address was not set when the floating IP address was realized by an HA router. Consequently, 'neutron floatingip-show <floating_ip>' would not output an updated status. With this update, a floating IP address status is updated when realized by HA routers, and when the L3 agent configures a router. As a result, the status field for floating IP addresses realized by HA routers is now updated to 'ACTIVE' when the floating IP is configured by the L3 agent.
openstack-nova
- BZ#978365
The ability of the libvirt driver to set the admin password has been added. To use this feature, run the following command: "nova root-password [server]".
- BZ#1298825
Previously, selecting an odd number of vCPUs would cause the assignment of one core and one thread in the guest instance per CPU, which would impact performance. The update addresses this issue by correctly assigning pairs of threads and one independent thread per CPU, when an odd number of vCPUs is assigned.
- BZ#1301914
Previously, when a source compute node came back up after a migration, instances that had been successfully evacuated from it while it was down were not deleted. Because these instances were not deleted, it was impossible to evacuate them again. With this update, the migration status of evacuated instances is now verified to determine which instances to delete when a compute node is back up and running again. As a result, instances can be evacuated from one host to another, regardless of their previous locations.
- BZ#1315394
- BZ#1293607
openstack-packstack
- BZ#1301366
Previously, Packstack did not enable the VPNaaS tab in the Dashboard even if the CONFIG_NEUTRON_VPNAAS parameter was set to 'y'. Consequently, the tab for VPNaaS was not shown on the Dashboard. With this update, a check has been added to determine whether VPNaaS is enabled; this check then enables the Dashboard tab in the Puppet manifest. As a result, the VPNaaS tab is now shown on the Dashboard when the service is configured in Packstack.
- BZ#1297712
Previously, Packstack edited the /etc/lvm/lvm.conf file to set specific parameters for snapshot autoextend. However, the regexp used only allowed blank spaces instead of the tabs currently used in the file. As a result, some lines were added at the end of the file, breaking its format. With this update, the regexp in Packstack has been corrected to set the parameters properly. As a result, there are no error messages when running LVM commands.
openstack-puppet-modules
- BZ#1289180
Previously, although haproxy was configured to allow a value of 10000 for the 'maxconn' parameter for all proxies together, there was a default 'maxconn' value of 2000 for each proxy individually. If the specific proxy used for MySQL reached the limit of 2000, it dropped all further connections to the database and the client would not retry, which caused API timeouts and subsequent commands to fail. With this update, the default value of the 'maxconn' parameter has been increased to work better for production environments. As a result, database connections are far less likely to time out.
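For reference, a hedged haproxy.cfg sketch showing the difference between the global and per-proxy limits; the section names, address, and values are illustrative only, not the defaults shipped by this update:

global
    maxconn 10000          # total connections across all proxies

listen mysql
    bind 192.0.2.10:3306
    maxconn 4096           # per-proxy limit for the MySQL proxy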
- BZ#1280523
Previously, Facter 2 did not have the netmask6 and netmask6_<ifce> facts. As a result, IPv6 was not supported. With this update, the relevant custom facts have been added to support checks on IPv6 interfaces. As a result, IPv6 interfaces are now supported.
- BZ#1243611
Previously, there was no default timeout parameter, so some stages of Ceph cluster setup that took longer than the default 5 minutes (300 seconds) would fail. With this update, a timeout parameter is added for the relevant operations, with a default value of 600 seconds. You can modify the default value, if necessary. As a result, the installation is more resilient, especially when some of the Ceph setup operations take longer than average.
openstack-sahara
- BZ#1189502
With this update, configuration settings now exist to set timeouts, after which clusters which have failed to reach the 'Active' state will be automatically deleted.
- BZ#1189517
When creating a job template intended for re-use, you can now register a variable for datasource URLs with OpenStack Data Processing (sahara). Doing so allows you to easily change input and output paths per run, rather than an actual URL (which would require revising the template, or manually revising the URL per run between jobs). This makes it easier to reuse job templates when data source jobs are mutable between runs, as is true for most real-world cases.
- BZ#1299982
With this update, the integration of CDH 5.4 with sahara is now complete; consequently, the default-enabled option for the CDH 5.3 plugin version has been removed.
- BZ#1233159
Previously, the tenant context information was not available to the periodic task responsible for cleaning up stale clusters. With this update, temporary trusts are established between the tenant and admin, allowing the periodic job to use this trust to delete stale clusters.
openstack-selinux
- BZ#1281547
Previously, httpd was not allowed to search through directories having the "nova_t" label. Consequently, nova-novncproxy failed to deploy an HA overcloud. This update allows httpd to search through such directories, which enables nova-novncproxy to run successfully.
- BZ#1284268
Previously, Open vSwitch was trying to create a tun socket, but SELinux prevented that. This update allows Open vSwitch to create a tun socket; as a result, Open vSwitch now runs without failures.
- BZ#1310383
Previously, SELinux blocked ovsdb-server from running, causing simple networking operations to fail. With this update, Open vSwitch is allowed to connect to its own port. As a result, ovsdb-server now runs without issues and networking operations complete successfully.
- BZ#1284133
Previously, SELinux prevented redis from connecting to its own port, resulting in redis failing at restart. With this update, redis has the permission to connect to the 'redis' labeled port. As a result, redis runs properly and resource restart is successful.
- BZ#1281588
Prior to this update, SELinux prevented nova from uploading the public key to the overcloud. A new rule has now been added to allow nova to upload the key.
- BZ#1306525
Previously, when nova was trying to retrieve a list of glance images, SELinux prevented that, and nova failed with an "Unexpected API Error". This update allows nova to communicate with glance. As a result, nova can now list glance images.
- BZ#1283674
Prior to this update, SELinux prevented dhclient, vnc, and redis from working. New rules have now been added to allow these software tools to run successfully.
openvswitch
- BZ#1266050
The Open vSwitch (openvswitch) package is now re-based to upstream version 2.4.0.
python-cinderclient
- BZ#1214230
With this update, a new feature adds pagination support for the Block Storage 'snapshots-list' and 'backups-list' commands. You can now use the limit, marker, and sort parameters to control the number of returned results, the starting element, and their order. Retrieving a limited number of results instead of the entire data set can be extremely useful on large deployments with thousands of snapshots and backups.
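A hedged CLI sketch using the 'cinder snapshot-list' and 'cinder backup-list' command forms; the exact flag spellings may vary by client version, and the marker ID is a placeholder:

cinder snapshot-list --limit 50 --sort created_at:desc
cinder backup-list --limit 50 --marker <last-backup-id>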
python-django-horizon
- BZ#1167563
- BZ#1100542
OpenStack dashboard tables summarize information about a large number of entities. This update adds a table enhancement that enables this information to be displayed within the table as a slide-down "drawer" that is activated when you click on a toggle switch within a row. The drawer appears as an additional row (with configurable height) and contains additional information about the entity in the row above it (e.g. additional entity details, metrics, graphs, etc.). Multiple drawers may be opened at one time.
- BZ#1166963
This update replaces the network topology view with a curvature-based graph, as the previous UI did not work well with larger numbers of nodes or networks. The new network topology map can handle more nodes, has a more polished look, and its node layout can be reorganized.
- BZ#1042947
This update adds support for volume migrations of the Block Storage (cinder) service. These are done in the 'Volumes' panel of the OpenStack dashboard (Project -> Compute -> Volumes, and Admin -> System Panel -> Volumes). You can perform this action on the 'Volumes' row in the table. The final patch in this series resolved the command action itself; it had previously failed due to incorrect parameters and parameter-count issues.
- BZ#1305905
The python-django-horizon packages have been upgraded to upstream version 8.0.1, which provides a number of bug fixes and enhancements over the previous version. Notably, this version contains localization updates, includes Italian localization, fixes job_binaries deletion, and adds support for accepting IPv6 in the VIP address for an LB pool.
- BZ#1279812
- BZ#1300735
With this release, the 'Metering' panel in Dashboard (horizon) has been disabled due to performance issues.
- BZ#1297757
Previously, no timeout was specified in horizon's systemd snippet for httpd, so the standard one-minute timeout was used when waiting for httpd to fully start up. In some cases, however, especially when running in a virtualized or a very loaded environment, the startup takes longer. Consequently, a failure from systemd sometimes occurred even if httpd was already running. With this update, the timeout has been set to two minutes, which resolves the problem.
python-glance-store
- BZ#1284845
Previously, when the Object Storage service was used as a backend store for the Image service, image data was stored in the Object Storage service as multiple 'chunks' of data. When using the Image service APIv2, there were circumstances in which upload operations would fail if the client sent a final zero-sized 'chunk' to the server. The failure involved a race condition between the operation to store a zero-sized 'chunk' and a cleanup delete of that 'chunk'. As a result, intermittent failures occurred while storing Image service images in the Object Storage service. With this update, the cleanup delete operations are retried rather than failing both them and the primary image upload task. As a result, Image service APIv2 handles this rare circumstance gracefully, so that the image upload does not fail.
- BZ#1229634
Previously, there was no secure way to remotely access an S3 backend in a private network. With this update, a new feature allows the Image service S3 driver to connect to an S3 backend on a different network in a secure way through an HTTP proxy.
python-glanceclient
- BZ#1314069
Previously, the Image service client could be configured to only allow uploading images in certain formats (for example, raw, ami, iso) to the Image service server. The client also allowed download of an image from the server only if it was in one of these formats. As a result of this restriction, users could no longer download images in other formats that had been previously uploaded. With this update, as the Image service server already validates image formats at the time they are imported, there is no need for the Image service client to verify image format when it is downloaded. As a result, the image format validation when an image is downloaded is now skipped, allowing the consumption of images in legitimate formats even if the client-side support for upload of images in those formats is no longer configured.
python-heatclient
- BZ#1234108
Previously, the output of the "heat resource-list --nested-depth ..." command contained a column called "parent_resource"; however, the output did not include the information required to run a subsequent "heat resource-show ..." command. With this update, the output of the "heat resource-list --nested-depth ..." command includes a column called "stack_name", which provides the values to use in a "heat resource-show [stack_name] [resource_name]" call.
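A brief usage sketch based on the columns described above; the stack name 'overcloud' and the depth value are placeholders:

heat resource-list --nested-depth 5 overcloud
# Take stack_name and resource_name from the output, then:
heat resource-show <stack_name> <resource_name>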
python-networking-odl
- BZ#1266156
The OpenDaylight OpenStack neutron driver has been split from the neutron project and moved to a new package, python-networking-odl. Operators still have the driver available for use as part of their Red Hat OpenStack Platform installations.
python-neutronclient
- BZ#1291739
The 'neutron router-gateway-set' command now supports the '--fixed-ip' option, which allows you to configure the fixed IP address and subnet that the router uses in the external network. This IP address is used by the OpenStack Networking service (openstack-neutron) for the software-level interfaces that connect the tenant networks to the external network.
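A hedged usage sketch; the router name, network name, and address values are placeholders, and the key=value syntax follows the usual neutronclient convention:

neutron router-gateway-set router1 public \
  --fixed-ip subnet_id=<external-subnet-id>,ip_address=203.0.113.10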
python-openstackclient
- BZ#1303038
With this release, the python-openstackclient package is now re-based to upstream version 1.7.2. This applies several fixes and enhancements, which include improved exception handling for 'find_resource'.
python-oslo-messaging
- BZ#1302391
Oslo Messaging used the "shuffle" strategy to select a RabbitMQ host from the list of RabbitMQ servers. When a node of the cluster running RabbitMQ was restarted, each OpenStack service connected to this server reconnected to a new RabbitMQ server. Unfortunately, this strategy does not handle dead RabbitMQ servers correctly; it can try to connect to the same dead server multiple times in a row. The strategy also leads to increased reconnection time, and sometimes it may lead to RPC operations timing out because no guarantee is provided on how long the reconnection process will take. With this update, Oslo Messaging uses the "round-robin" strategy to select a RabbitMQ host. This strategy provides the least achievable reconnection time and avoids RPC timeout when a node is restarted. It also guarantees that if K of N RabbitMQ hosts are alive, it will take at most N - K + 1 attempts to successfully reconnect to the RabbitMQ cluster.
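For example, in a three-node RabbitMQ cluster (N = 3) where only one broker is alive (K = 1), a client needs at most 3 - 1 + 1 = 3 connection attempts to reach the surviving host.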
- BZ#1312912
When the RabbitMQ service fails to deliver an AMQP message from one OpenStack service to another, it reconnects and retries delivery. The "rabbit_retry_backoff" option, whose default is 2 seconds, is supposed to control the pace of retries; however, retries were previously done every second irrespective of the configured value of this option. The consequence of this problem was excessive retries, for example, when an endpoint was not available. This problem has now been fixed, and the "rabbit_retry_backoff" option, as explicitly configured or with the default value of two seconds, properly controls message delivery retries.
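A minimal configuration sketch; the '[oslo_messaging_rabbit]' section name is an assumption based on where oslo.messaging RabbitMQ options usually live, and the value shown is the default stated above:

[oslo_messaging_rabbit]
# Seconds to wait between RabbitMQ delivery retries.
rabbit_retry_backoff = 2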
python-oslo-middleware
- BZ#1313875
With this release, oslo.middleware now supports SSL/TLS, which in turn allows OpenStack services to listen to HTTPS traffic and encrypt exchanges. In previous releases, OpenStack services could only listen to HTTP, and all exchanges were done in cleartext.
python-oslo-service
- BZ#1288528
sahara-image-elements
- BZ#1286276
In some base image contexts, iptables was not initialized prior to saving. This caused the 'iptables save' call in the 'disable-firewall' element to fail. This release adds the non-destructive command 'iptables -L', which successfully initializes iptables in all contexts, thereby ensuring successful image generation.
- BZ#1286856
In the Liberty release, the OpenStack versioning scheme is now based on the major release number (previously, it was based on year). This update adds an epoch to the current sahara-image-elements package to ensure that it upgrades the older version.
4.2. RHEA-2016:0604 - Red Hat OpenStack Platform 8 director Enhancement Advisory
instack-undercloud
- BZ#1212158
This update enables OpenStack notifications. Previously, external consumers of OpenStack notifications could not interface with a director-deployed cloud because notifications were not enabled. The director now enables notifications for external consumers.
- BZ#1223257
A misconfiguration of Ceilometer on the Undercloud caused hardware meters to not work correctly. This fix provides a valid default Ceilometer configuration. Now Ceilometer hardware meters work as expected.
- BZ#1296295
Running "openstack undercloud install" attempted to delete and recreate the Undercloud's neutron subnet even if the subnet required no changes. If an Overcloud was already deployed, the subnet delete attempt failed since the subnet contained allocated ports. This caused the "openstack undercloud install" command to fail. This fix changes this behavior to only attempt to delete and recreate the subnet if the "openstack undercloud install" command has a configuration change to apply to the subnet. If an Overcloud is already deployed, the same error message still occurs since the director cannot delete the subnet. This is expected behavior though since we do not recommend change the subnet's configuration with an Overcloud already deployed. However, in cases with no subnet configuration changes, the "openstack undercloud install" command no longer fails with this error message.
Running "openstack undercloud install" attempted to delete and recreate the Undercloud's neutron subnet even if the subnet required no changes. If an Overcloud was already deployed, the subnet delete attempt failed since the subnet contained allocated ports. This caused the "openstack undercloud install" command to fail. This fix changes this behavior to only attempt to delete and recreate the subnet if the "openstack undercloud install" command has a configuration change to apply to the subnet. If an Overcloud is already deployed, the same error message still occurs since the director cannot delete the subnet. This is expected behavior though since we do not recommend change the subnet's configuration with an Overcloud already deployed. However, in cases with no subnet configuration changes, the "openstack undercloud install" command no longer fails with this error message.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - BZ#1298189
- BZ#1315546
- BZ#1312889
This update removes the Tuskar API service from Red Hat OpenStack Platform director 8. Tuskar was installed and configured on the Undercloud, including an endpoint existing in the Keystone service catalog. The RPM is no longer installed, the service is not configured, and the endpoint is not created in the service catalog.
openstack-ironic-inspector
- BZ#1282580
The director includes new functionality to allow automatic profile matching. Users can specify automatic matching between nodes and deployment roles based on data available from the introspection step. Users now use ironic-inspector introspection rules and new python-tripleoclient commands to assign profiles to nodes.
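For illustration only, a rule of the kind this feature consumes might look like the following; the file name, CPU threshold, and profile value are hypothetical, and the rule format follows the ironic-inspector conventions of this release:
cat > compute-profile-rule.json <<'EOF'
[{
    "description": "Hypothetical rule: assign the compute profile to nodes with 8 or more CPUs",
    "conditions": [
        {"op": "ge", "field": "data://inventory.cpu.count", "value": 8}
    ],
    "actions": [
        {"action": "set-capability", "name": "profile", "value": "compute"}
    ]
}]
EOF
openstack baremetal introspection rule import compute-profile-rule.json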
- BZ#1270117
Previously, periodic iptables calls made by Ironic Inspector did not contain the -w option, which instructs iptables to wait for the xtables lock. As a consequence, periodic iptables updates occasionally failed. This update adds the -w option to the iptables calls, which prevents the periodic iptables updates from failing.
openstack-ironic-python-agent
- BZ#1283650
Log processing in the introspection ramdisk did not take into account non-Latin characters in logs. Consequently, the "logs" collector failed during introspection. With this update, log processing has been fixed to properly handle any encoding.
- BZ#1314642
The director uses a new ramdisk for inspection and deployment. This ramdisk included a new algorithm to pick the default root device for users not using root device hints. However, the chosen root device could change on redeployment, leading to failures. This fix reverts the ramdisk device logic to match OpenStack Platform director 7. Note that this does not mean the default root device is always the same, as device names are not reliable, and this behavior will change again in a future release. Make sure to use root device hints if your nodes have multiple hard drives, as sketched below.
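As a hedged sketch, a root device hint can be set through the Ironic CLI of this release; the node UUID and disk serial below are placeholders:
ironic node-update <node-uuid> add properties/root_device='{"serial": "<disk-serial>"}'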
openstack-tripleo-heat-templates
- BZ#1295830
Pacemaker used a 100s timeout for service resources. However, systemd requires an additional timeout period after the initial one to accommodate a SIGTERM followed by a SIGKILL. This fix increases the Pacemaker timeout to 200s to cover two full systemd timeout periods, which is enough for systemd to perform a SIGTERM and then a SIGKILL.
- BZ#1311005
The notify=true parameter was previously missing from the RabbitMQ Pacemaker resource. Consequently, RabbitMQ instances were unable to rejoin the RabbitMQ cluster. This update adds support for notify=true to the pacemaker resource agent for RabbitMQ, and adds notify=true to OpenStack director. As a result, RabbitMQ instances are now able to rejoin the RabbitMQ cluster.
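To verify the attribute on a deployed controller, a check along these lines should show it; the resource name 'rabbitmq-clone' is the usual director default but may differ in a given deployment:
sudo pcs resource show rabbitmq-clone | grep notify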
- BZ#1283632
The 'ceilometer' user lacked a role needed for some functionality, which caused some Ceilometer meters to function incorrectly. This fix adds the necessary role to the 'ceilometer' user. Now all Ceilometer meters work correctly.
- BZ#1299227
Prior to this update, the swift_device and swift_proxy_memcache URIs used for the swift ringbuilder and the swift proxy memcache server respectively were not properly formatted for IPv6 addresses, lacking the expected '[]' delimiting the IPv6 address. As a consequence, when deploying with IPv6 enabled for the overcloud, the deploy failed with "Error: Parameter name failed on Ring_object_device ...". Now, when IPv6 is enabled, the IP addresses used as part of the swift_device and swift_proxy_memcache URIs are correctly delimited with '[]'. As a result, deploying with IPv6 no longer fails on incorrect formatting for swift_device or swift_proxy_memcache.
- BZ#1238807
This enhancement enables the distribution of per-node hieradata, matching nodes by their UUID (as reported by 'dmidecode'). This allows you to scale CephStorage across nodes equipped with a different number or type of disks. As a result, CephStorage nodes can now be configured with non-homogeneous disk topologies by provisioning a different configuration hash for the ceph::profile::params::osds parameter, as sketched below.
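As an illustrative sketch only, a per-node hieradata file keyed on the node's dmidecode UUID could carry a node-specific OSD layout; the UUID, the file path, and the disk names here are all hypothetical:
cat > /etc/puppet/hieradata/32E87B4C-C4A7-418E-865B-191684A6883B.yaml <<'EOF'
ceph::profile::params::osds:
  /dev/sdb: {}
  /dev/sdc: {}
EOF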
- BZ#1242396
Previously, the os-collect-config utility only printed Puppet logs after Puppet had finished running. As a consequence, Puppet logs were not available for Puppet runs that were in progress. With this update, logs for Puppet runs are available even when a Puppet run is in progress. They can be found in the /var/run/heat-config/deployed/ directory.
- BZ#1266104
This update adds the neutron QoS (Quality of Service) extension to provide better control over tenant networking qualities and limits. Overclouds are now deployed with the Neutron QoS extension enabled.
- BZ#1320454
- BZ#1279615
This update allows you to enable the Neutron L2 population feature, which helps reduce the amount of broadcast traffic in tenant networks. Set the NeutronEnableL2Pop parameter in an environment file's 'parameter_defaults' section to enable it, as sketched below.
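A minimal sketch, assuming a hypothetical environment file name:
cat > enable-l2pop.yaml <<'EOF'
parameter_defaults:
  NeutronEnableL2Pop: 'True'
EOF
openstack overcloud deploy --templates -e enable-l2pop.yaml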
- BZ#1225163
The director now properly enables notifications for external consumers.
- BZ#1259003
The domain name for overcloud nodes defaulted to 'localdomain'. For example: 'overcloud-compute-0.localdomain'. This enhancement provides a parameter (CloudDomain) to customize the domain name. Create an environment file with the CloudDomain parameter included in the 'parameter_defaults' section, as sketched below. If no domain name is defined, the Heat templates default to 'localdomain'.
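For example, a minimal environment file of this shape sets the domain; the file name and domain value are placeholders:
cat > cloud-domain.yaml <<'EOF'
parameter_defaults:
  CloudDomain: example.com
EOF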
- BZ#1273303
The Director now supports the OpenStack Networking 'enable_isolated_metadata' option. This option allows access to instance metadata for VMs on isolated networks or behind external routers.
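A hedged sketch, assuming the corresponding Heat template parameter is named NeutronEnableIsolatedMetadata (the file name is also a placeholder):
cat > isolated-metadata.yaml <<'EOF'
parameter_defaults:
  NeutronEnableIsolatedMetadata: 'True'
EOF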
- BZ#1308422
Previously, '/v2.0' was missing from the end of the URL specified in the admin_auth_url setting in the [neutron] section of /etc/nova/nova.conf. This would prevent Nova from being able to boot instances because it could not connect to the Keystone catalog to query for the Neutron service endpoint to create and bind the port for instances. Now, '/v2.0' is correctly added to the end of the URL specified in the admin_auth_url setting, allowing instances to be started successfully after deploying an overcloud with the director.
- BZ#1298247
- BZ#1266219
The Director can now deploy the Block Storage service with a Dell EqualLogic or Dell Storage Center appliance as a back end. For more information, see https://access.redhat.com/documentation/en/red-hat-openstack-platform/version-8/dell-equallogic-back-end-guide/ and https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/dell-storage-center-back-end-guide/dell-storage-center-back-end-guide
os-cloud-config
- BZ#1288475
A bug in the Identity service's endpoint registration code failed to mark the Telemetry service as SSL-enabled. This prevented the Telemetry service endpoint from being registered as HTTPS. This update fixes the bug: the Identity service now correctly registers Telemetry, and Telemetry traffic is now encrypted as expected.
- BZ#1319878
When using Linux kernel mode for bridges and bonds (as opposed to Open vSwitch), the physical device was not detected for the VLAN interfaces. This, in turn, prevented the VLAN interfaces from working correctly. With this release, the os-net-config utility automatically detects the physical interface for a VLAN as long as the VLAN is a member of the physical bridge (that is, the VLAN must be in the 'members:' section of the bridge). As such, VLAN interfaces now work properly with both OVS bridges and Linux kernel bridges.
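As a hedged standalone sketch of this layout (outside director-managed NIC templates), the VLAN sits in the bridge's 'members:' section so its physical device can be detected; the device names, VLAN ID, and address are placeholders:
cat > /etc/os-net-config/config.yaml <<'EOF'
network_config:
  - type: linux_bridge
    name: br-tenant
    members:
      - type: interface
        name: nic2
      - type: vlan
        vlan_id: 201
        addresses:
          - ip_netmask: 192.0.2.10/24
EOF
os-net-config -c /etc/os-net-config/config.yaml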
- BZ#1316730
In previous releases, when VLAN interfaces were placed directly on a Linux kernel bond with no bridge, it was possible for the VLANs to start before the bond. When this occurred, the VLANs failed to start. With this release, the os-net-config utility now starts the physical network (namely, bridges first, then bonds and interfaces) before VLANs. This ensures that the VLANs have the interfaces necessary to start properly.
python-rdomanager-oscplugin
- BZ#1271250
In previous releases, a bug made it possible for failed nodes to be marked as available. Whenever this occurred, deployments failed because nodes were not in a proper state. This update backports an upstream patch to fix the bug.
python-tripleoclient
- BZ#1288544
Previously, bulk introspection only printed on-screen errors, but never returned a failure status code. This prevented introspection failures from being detected. This update changes the status code of errors to non-zero, which ensures that failed introspections can now be detected through their status codes.
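This makes scripted checks straightforward; for example, using the bulk introspection command of this release:
openstack baremetal introspection bulk start || echo "Introspection failed with status $?"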
- BZ#1261920
Previously, bulk introspection operated on nodes currently in maintenance mode. This could cause introspection to fail, or even break node maintenance (depending on the reason for node maintenance). With this release, bulk introspection now ignores nodes in maintenance mode.
- BZ#1246589
In older deployments using python-rdomanager-oscplugin (rather than python-tripleoclient) for Overcloud deployment, the dhcp_agents_per_network parameter for neutron was set to a minimum of 3, even for a non-HA single-Controller deployment. This fix takes the single-Controller case into account: the director now sets dhcp_agents_per_network to at most 3, and never more than the number of Controllers. If you deploy in HA with 3 or more Controller nodes, the dhcp_agents_per_network parameter in neutron.conf on those Controller nodes is set to '3'; if you deploy in non-HA with only 1 Controller, it is set to '1'.
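To confirm the rendered value on a Controller node, a check like the following should print '3' on a 3-Controller HA deployment and '1' on a single-Controller one:
sudo grep ^dhcp_agents_per_network /etc/neutron/neutron.conf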
rhel-osp-director
- BZ#1293979
Updating packages on the Undercloud left the Undercloud in an indeterminate state, meaning some Undercloud services were disabled after the package update and could not start again. As a workaround, run 'openstack undercloud install' to reconfigure all Undercloud services. After the command completes, the Undercloud services operate normally.
- BZ#1234601
- BZ#1249601
OpenStack Bare Metal (ironic) now supports deploying nodes in UEFI mode. This is due to requests from customers with servers that only support UEFI boot.
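A hedged example of flagging a node for UEFI boot with the Ironic CLI of this release; the node UUID is a placeholder, and note that this overwrites any existing capabilities value on the node:
ironic node-update <node-uuid> add properties/capabilities='boot_mode:uefi'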
- BZ#1236372
A misconfiguration of the health check for Nova EC2 API caused HAProxy to believe the API was down. This meant the API was unreachable through HAProxy. This fix corrects the health check to query the API service state correctly. Now the Nova EC2 API is reachable through HAProxy.
- BZ#1265180
The director requires the 'baremetal' flavor, even if unused. Without this flavor, the deployment fails with an error. Now the Undercloud installation automatically creates the 'baremetal' flavor. With the flavor in place, the director does not report the error.
- BZ#1318583
Previously, the os_tenant_name variable in the Ceilometer configuration was incorrectly set to the 'admin' tenant instead of the 'service' tenant. This caused the ceilometer-central-agent to fail with the error "ERROR ceilometer.agent.manager Skipping tenant, keystone issue: User 739a3abf8504498e91044d6d2a6830b1 is unauthorized for tenant d097e6c45c494c2cbef4071c2c273a58". Now, Ceilometer is correctly configured to use the 'service' tenant.
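On an overcloud node, the corrected value can be confirmed with something like the following, which should report 'service':
sudo grep os_tenant_name /etc/ceilometer/ceilometer.conf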
- BZ#1315467
Previously, after upgrading the undercloud, there was a missing restart of the openstack-nova-api service, which would cause upgrades of the overcloud to fail due to a timeout that would report the error "ERROR: Timed out waiting for a reply to message ID 84a44ca3ed724eda991ba689cc364852". Now, the openstack-nova-api service is correctly restarted as part of the undercloud upgrade process, allowing the overcloud upgrade process to proceed without encountering this timeout issue.
4.3. RHBA-2016:1063 - openstack-neutron Bug Fix Advisory
4.3.1. openstack-neutron
- BZ#1286302
Previously, using 'neutron-netns-cleanup' when manually taking down a node from an HA cluster would not properly clean up processes in the neutron L3-HA routers. Consequently, when the node was connected again to the cluster, and services were re-created, the processes would not properly respawn with the right connectivity. As a result, even if the processes were alive, they were disconnected; this sometimes led to a situation where no L3-HA router was able to take the 'ACTIVE' role. With this update, the 'neutron-netns-cleanup' scripts and related OCF resources have been fixed to kill the relevant keepalived processes and child processes. As a result, nodes can be taken off the cluster and back, and the resources will be properly cleaned up when taken off the cluster, and restored when taken back.
- BZ#1325806
Appendix A. Revision History
| Revision History | |
|---|---|
| Revision 8.0.0-0 | Wed Feb 3 2016 |