Moderate: Red Hat Ceph Storage 6.1 security, enhancement, and bug fix update

Related Vulnerabilities: CVE-2018-14041, CVE-2018-20676, CVE-2018-20677, CVE-2023-43040

Synopsis

Moderate: Red Hat Ceph Storage 6.1 security, enhancement, and bug fix update

Type/Severity

Security Advisory: Moderate


Topic

An update is now available for Red Hat Ceph Storage 6.1 in the Red Hat
Ecosystem Catalog.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform
that combines the most stable version of the Ceph storage system with a
Ceph management platform, deployment utilities, and support services.

These new packages include numerous security fixes, enhancements, and bug fixes. Space precludes documenting all of these changes in this advisory.
Users are directed to the Red Hat Ceph Storage Release Notes for
information on the most significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6.1/html/release_notes/index

Solution

Before applying this update, make sure all previously released errata
relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/articles/2789521

and

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6

For supported configurations, refer to:

https://access.redhat.com/articles/1548993
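
The CVEs addressed by this advisory can also be cross-checked against Red
Hat's public security data. The following Python sketch queries the Red Hat
Security Data API (https://access.redhat.com/hydra/rest/securitydata/) for
each CVE fixed here and prints the advisories and packages that resolve
them. The endpoint URL and the response fields (threat_severity,
affected_release) are assumptions based on that API's published schema;
treat this as illustrative, not as part of the official update procedure:

    import json
    import urllib.request

    # CVEs addressed by this advisory (see the Fixes list below).
    CVES = [
        "CVE-2018-14041",
        "CVE-2018-20676",
        "CVE-2018-20677",
        "CVE-2023-43040",
    ]

    # Assumed endpoint of the public Red Hat Security Data API.
    API = "https://access.redhat.com/hydra/rest/securitydata/cve/{}.json"

    for cve in CVES:
        with urllib.request.urlopen(API.format(cve), timeout=30) as resp:
            data = json.load(resp)
        # Field names are assumptions; treat them as optional.
        severity = data.get("threat_severity", "unknown")
        print(f"{cve}: severity={severity}")
        for rel in data.get("affected_release", []):
            print(f"  fixed in {rel.get('product_name')} "
                  f"via {rel.get('advisory')} ({rel.get('package')})")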

Affected Products

  • Red Hat Enterprise Linux for x86_64 9 x86_64
  • Red Hat Enterprise Linux for IBM z Systems 9 s390x
  • Red Hat Enterprise Linux for Power, little endian 9 ppc64le

Fixes

  • BZ - 1601616 - CVE-2018-14041 bootstrap: Cross-site Scripting (XSS) in the data-target property of scrollspy
  • BZ - 1668082 - CVE-2018-20676 bootstrap: XSS in the tooltip data-viewport attribute
  • BZ - 1668089 - CVE-2018-20677 bootstrap: XSS in the affix configuration target property
  • BZ - 1960643 - [RGW][Notification][kafka]: on versioned bucket, deleting the delete marker has "eventTime":"0.000000" in event record
  • BZ - 2088172 - RGW: Lifecycle lock - failed to acquire lock on lc.0
  • BZ - 2114615 - pybind/mgr/volumes: investigate moving calls which may block on libcephfs into another thread
  • BZ - 2138216 - [Tracker for Bug 2104207] [MetroDR] Monitor crash - ceph_assert(0 == "how did we try and do stretch recovery while we have dead monitor buckets?")
  • BZ - 2141003 - [RFE] RBD mirroring (and related processes) needs a connection health metric to its peers
  • BZ - 2161569 - [rgw-ms][scale]: Observed slow sync and data inconsistencies on a multisite setup
  • BZ - 2166413 - [RGW][osp17]: http 401 Unauthorized error is not seen with swift client if the keystone server time is ahead of current time for some hours or days
  • BZ - 2166688 - [dashboard] block/rbd : tab:images : Disable promote option to images which are not mirroring enabled
  • BZ - 2170836 - [cee/sd][RGW] RGW ops log duplicates uri query string
  • BZ - 2172838 - [rgw/lc-transition]: Object transition to EC pools from replicated pool is not respected via bucket lifecycle transition policy
  • BZ - 2183926 - [rgw][rfe]: rgw-restore-bucket-index should be able to recover metadata for buckets in non-default realms 6.1z2
  • BZ - 2188557 - [RHCS 6.0 / 5.3][RGW log size quickly increasing since upgrading to RHEL 9]
  • BZ - 2203397 - [CEE/SD][ceph-volume]Even though there is enough space in the DB device, the OSDs are not being created after attaching the device
  • BZ - 2210944 - [GSS][ceph-dashboard] Size and Provisioned columns in ceph dashboard for RBD images is labelled incorrectly
  • BZ - 2211290 - [rbd-mirror] : snapshot schedules stopped : possibly due to hang in MirrorSnapshotScheduleHandler.shutdown which could be in wait_for_pending()
  • BZ - 2211477 - cephfs-journal-tool --rank with all option is confusing
  • BZ - 2212787 - [rgw-ms][archive]: Log trimming does not happen in the primary and secondary zones for a resharded bucket
  • BZ - 2214278 - [rbd-mirror] : primary 'demoted' snapshots piling up after consecutive planned failovers (relocation)
  • BZ - 2215392 - rbd-mirror daemon may skip deleting non-primary image when primary image is deleted
  • BZ - 2216230 - avoid ballooning client_mount_timeout by 10x (5 -> 50 minutes)
  • BZ - 2216855 - CVE-2023-43040 rgw: improperly verified POST keys
  • BZ - 2216920 - [Ceph-Dashboard] SSO error: AttributeError: 'str' object has no attribute
  • BZ - 2217817 - mgr/dashboard: empty grafana panels for performance of daemons
  • BZ - 2219465 - rgw: take upstream fixes for race handling zone trace during full sync
  • BZ - 2220922 - [Ceph Dashboard]: host list server side pagination
  • BZ - 2222720 - Ceph-volume doesn't allow using --osd-id while preparing OSDs
  • BZ - 2222726 - Disallow enabling snapshot-based mirroring on cloned images
  • BZ - 2223990 - [rgw] take fix consistency bug with OLH objects
  • BZ - 2224230 - client: wait rename to finish
  • BZ - 2224233 - client: do not send metrics until the MDS rank is ready
  • BZ - 2224239 - client: trigger to flush the buffer when making snapshot
  • BZ - 2224243 - mds: cap revoke and cap update's seqs mismatched
  • BZ - 2224407 - RadosGW API: incorrect bucket quota in response to HEAD /{bucket}/?usage
  • BZ - 2227045 - [RGW][The value of the content-length header is inconsistent with the body size for website-enabled buckets]
  • BZ - 2227842 - [rgw-ms][object-encryption][multipart upload]: TOOLING: On multisite setup where SSE-S3 configured, there is md5sum mismatch while downloading multipart object.
  • BZ - 2228004 - mds: do not send split_realms for CEPH_SNAP_OP_UPDATE msg
  • BZ - 2228242 - rgw 6.1z2: bring upstream updates to rgw-orphan-list and rgw-gap-list.
  • BZ - 2228357 - mds: MDLog::_recovery_thread: handle the errors gracefully
  • BZ - 2228875 - [RGW] revised fix for Notify timeout can induce a race condition as it attempts to resend the cache update
  • BZ - 2229179 - SignatureDoesNotMatch error on RGW Object Gateway Dashboard after upgrade to 6.1
  • BZ - 2229267 - Labeled Perf Counters "counter dump" asok command emits invalid JSON
  • BZ - 2231068 - ceph-exporter scrapes failing on multi-homed server
  • BZ - 2232087 - [cee/sd][cephadm] cephadm: Call fails if output too long
  • BZ - 2232640 - multisite upgrades to 6.1z1 may corrupt compressed+encrypted objects on replication
  • BZ - 2233131 - [backport for 6.1.z] (mds.1): 3 slow requests are blocked
  • BZ - 2233762 - exporter: crash of exporter daemons
  • BZ - 2236188 - [GSS][backport for 6.1.z] CephFS blocked requests with warning 1 MDSs behind on trimming
  • BZ - 2236385 - The command `ceph mds metadata` doesn't list information for the active MDS server
  • BZ - 2237376 - [IBM] [Ceph Dashboard]: Allow CORS for an unauthorized access
  • BZ - 2238174 - rgw: fix build of rgw-nfs unit tests
  • BZ - 2238623 - Failing to AddExport or UpdateExport over dbus calls
  • BZ - 2239697 - [Ceph Dashboard] Host Details section under the Expand Cluster review page is displaying blank space