Moderate: Red Hat OpenShift Data Foundation 4.12.4 security and bug fix update

Related Vulnerabilities: CVE-2022-2795, CVE-2022-3172, CVE-2022-36227, CVE-2022-40023, CVE-2023-2491, CVE-2023-27535

Synopsis

Moderate: Red Hat OpenShift Data Foundation 4.12.4 security and bug fix update

Type/Severity

Security Advisory: Moderate

Topic

Updated images that fix one security issue and several bugs are now available for Red Hat OpenShift Data Foundation 4.12.4 on Red Hat Enterprise Linux 8 from the Red Hat Container Registry.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation provides highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multi-cloud data management service with an S3-compatible API.

Security Fix(es):

  • kube-apiserver: Aggregated API server can cause clients to be redirected (SSRF) (CVE-2022-3172)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bug fixes:

  • Previously, when a sub-directory was created, it always used its parent's non-projected `gid`/`uid` metadata to set up its own `gid`/`uid` metadata. If the journal logs were not flushed, it therefore picked up old, stale `gid`/`uid` values.

With this fix, the sub-directory uses the parent's projected `gid`/`uid` metadata, so sub-directories inherit the correct `gid`/`uid` metadata from their parent, as sketched below. (BZ#2182943)
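The following is a minimal, purely illustrative Go sketch of the behavior described above; it is not Ceph source code, and the `dirMeta` type and helper functions are hypothetical stand-ins. Here, "projected" stands for the parent's current effective `gid`/`uid`, while the stored values may be stale until the journal is flushed.

```go
package main

import "fmt"

// dirMeta is a hypothetical, simplified model of a directory's ownership
// metadata. storedGID/storedUID may be stale until the journal is flushed;
// projectedGID/projectedUID are the current, effective values.
type dirMeta struct {
	storedGID, storedUID       int
	projectedGID, projectedUID int
}

// mkSubdirOld models the previous behavior: the new sub-directory copied the
// parent's stored (possibly stale) gid/uid.
func mkSubdirOld(parent dirMeta) dirMeta {
	return dirMeta{
		storedGID: parent.storedGID, storedUID: parent.storedUID,
		projectedGID: parent.storedGID, projectedUID: parent.storedUID,
	}
}

// mkSubdirFixed models the fixed behavior: the sub-directory inherits the
// parent's projected gid/uid, so it matches what the parent currently uses.
func mkSubdirFixed(parent dirMeta) dirMeta {
	return dirMeta{
		storedGID: parent.projectedGID, storedUID: parent.projectedUID,
		projectedGID: parent.projectedGID, projectedUID: parent.projectedUID,
	}
}

func main() {
	// The parent was chowned to gid/uid 1000, but the journal has not been
	// flushed yet, so the stored values still read 0.
	parent := dirMeta{storedGID: 0, storedUID: 0, projectedGID: 1000, projectedUID: 1000}
	fmt.Println(mkSubdirOld(parent).projectedGID)   // 0: stale ownership
	fmt.Println(mkSubdirFixed(parent).projectedGID) // 1000: inherited correctly
}
```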

  • Previously, stale RADOS block device (RBD) images were left in the cluster because deleting the RBD image failed with a "numerical result is out of range" error. With this fix, the number of trash entries listed is increased in go-ceph, so stale RBD images are no longer left in the Ceph cluster. (BZ#2195989)
  • Previously, Multicloud Object Gateway (MCG) Key Management Service (KMS) encryption was enabled even when cluster-wide encryption was not enabled and only KMS encryption was enabled. This was because MCG encryption was enabled when any one of these conditions was true:
  • storagecluster.Spec.Encryption.Enable
  • storagecluster.Spec.Encryption.ClusterWide
  • storagecluster.Spec.Encryption.KeyManagementService.Enable

With this fix, MCG encryption is enabled only when the storagecluster spec has KMS enabled and any one of the following conditions is true (see the sketch below):

  • Encryption.Enable is true
  • Encryption.ClusterWide is true
  • MCG is in standalone mode

As a result, MCG is encrypted appropriately. (BZ#2192596)
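The fixed condition can be read as a simple boolean expression. The following is a minimal Go sketch under that reading; the struct fields mirror the storagecluster spec fields named in this advisory, but the types and the `standalone` flag are simplified, hypothetical stand-ins rather than the actual ocs-operator code.

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the StorageCluster spec fields named
// in this advisory; the real definitions live in the ocs-operator API.
type KeyManagementServiceSpec struct {
	Enable bool
}

type EncryptionSpec struct {
	Enable               bool
	ClusterWide          bool
	KeyManagementService KeyManagementServiceSpec
}

type StorageClusterSpec struct {
	Encryption EncryptionSpec
}

// mcgKMSEncryptionEnabled sketches the fixed behavior: MCG KMS encryption is
// enabled only when KMS is enabled AND at least one of Encryption.Enable,
// Encryption.ClusterWide, or MCG standalone mode applies. The standalone flag
// is passed in explicitly here for illustration.
func mcgKMSEncryptionEnabled(spec StorageClusterSpec, standalone bool) bool {
	if !spec.Encryption.KeyManagementService.Enable {
		return false
	}
	return spec.Encryption.Enable || spec.Encryption.ClusterWide || standalone
}

func main() {
	// KMS enabled, but no cluster-wide encryption and not standalone:
	// previously MCG encryption was turned on; with the fix it is not.
	spec := StorageClusterSpec{
		Encryption: EncryptionSpec{
			KeyManagementService: KeyManagementServiceSpec{Enable: true},
		},
	}
	fmt.Println(mcgKMSEncryptionEnabled(spec, false)) // false
	fmt.Println(mcgKMSEncryptionEnabled(spec, true))  // true: standalone MCG
}
```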

All users of Red Hat OpenShift Data Foundation are advised to upgrade to these updated images, which provide this security fix and these bug fixes.

Solution

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

Affected Products

  • Red Hat OpenShift Data Foundation 4 for RHEL 8 x86_64
  • Red Hat OpenShift Data Foundation for IBM Power, little endian 4 for RHEL 8 ppc64le
  • Red Hat OpenShift Data Foundation for IBM Z and LinuxONE 4 for RHEL 8 s390x

Fixes

  • BZ - 2127804 - CVE-2022-3172 kube-apiserver: Aggregated API server can cause clients to be redirected (SSRF)
  • BZ - 2182943 - [GSS] [Tracker for Ceph https://bugzilla.redhat.com/show_bug.cgi?id=2189936] FSGroup is not correctly set on subPath volume for CephFS CSI
  • BZ - 2188331 - [IBM Z ] DR operator is not available in the Operator Hub
  • BZ - 2192596 - [Backport-4.12.z][KMS][VAULT] Storage cluster remains in 'Progressing' state during deployment with storage class encryption, despite all pods being up and running.
  • BZ - 2195989 - timeout during waiting for condition. "error preparing volumesnapshots"
  • BZ - 2208477 - Update to RHCS 5.3z3 Ceph container image at ODF-4.12.4