Important: Red Hat Ceph Storage 5.3 Bug fix and security update

Related Vulnerabilities: CVE-2022-3650  

Synopsis

Important: Red Hat Ceph Storage 5.3 Bug fix and security update

Type/Severity

Security Advisory: Important

Topic

An update is now available for Red Hat Ceph Storage 5.3.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Security Fix(es):

  • Ceph: ceph-crash.service allows local ceph user to root exploit (CVE-2022-3650)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bug Fix(es):

These updated packages include various bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Ceph Storage Release Notes for information on the most significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.3/html/release_notes/index

All users of Red Hat Ceph Storage are advised to upgrade to these updated packages, which provide various bug fixes and security fixes.

Solution

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
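
For reference, a minimal sketch of applying the update on a cephadm-managed Red Hat Ceph Storage 5 cluster is shown below. The container image reference is illustrative and depends on your registry configuration; the article above remains the authoritative procedure:

  # Update the Ceph client packages and cephadm on each RHEL 8/9 host
  dnf update

  # Start a rolling upgrade of the containerized daemons managed by cephadm
  # (image shown is an example; use the image appropriate for your cluster)
  ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-5-rhel8:latest

  # Monitor upgrade progress and overall cluster health
  ceph orch upgrade status
  ceph -s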

Affected Products

  • Red Hat Enterprise Linux for x86_64 9 x86_64
  • Red Hat Enterprise Linux for x86_64 8 x86_64
  • Red Hat Ceph Storage (OSD) 5 for RHEL 8 x86_64
  • Red Hat Ceph Storage (MON) 5 for RHEL 8 x86_64

Fixes

  • BZ - 2008524 - (RHCS 5.3z1) MGR is not reporting the version label in the ceph_mon_metadata metric
  • BZ - 2040337 - [GSS][RFE][Include an additional task in the cephadm-preflight playbook to populate /etc/containers/registries.conf for disconnected installations]
  • BZ - 2064429 - [CEE/SD][ceph-volume] ceph-volume lvm batch not accepting the /dev/disk/by-path/ & /dev/disk/by-id/ for persistent naming
  • BZ - 2064441 - [CEE/SD][cephadm][RFE] cephadm should add the necessary firewall ports during iscsi deployment
  • BZ - 2073273 - make cephfs-top display scroll-able like top(1) and fix the blank screen for great number of clients
  • BZ - 2083468 - cephfs-top: multiple file system support
  • BZ - 2094822 - [CephFS] Clone operations are failing with Assertion Error
  • BZ - 2097680 - [cephadm-ansible] cephadm-preflight.yml should be improved for current ceph_origin=custom changes
  • BZ - 2099470 - [iscsi]- Adding/expanding iscsi gateways in gwcli to the existing is failed saying "Failed : /etc/ceph/iscsi-gateway.cfg on ceph-52-iscsifix-bcb6z****** does not match the local version. Correct and retry request"
  • BZ - 2103677 - [RFE] `address` parameter is mandatory when adding host using `ceph_orch_host` module
  • BZ - 2106849 - [CephFS-NFS] - haproxy.cfg failed to replace old NFS server IP with a new NFS Server during HA Failover.
  • BZ - 2107407 - [RHCS 5.3] pacific doesn't defer small writes for pre-pacific hdd osds
  • BZ - 2111573 - Unable to remove ingress service from a Host which is down.
  • BZ - 2118263 - NFS client unable to see newly created files when listing directory contents in a FS subvolume clone
  • BZ - 2118541 - bootstrap with apply-spec does not return fail exit code with a failure
  • BZ - 2119100 - (RHCS 5.3z1) heap command returning empty output
  • BZ - 2120491 - CephFS: mgr/volumes: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
  • BZ - 2120497 - cephfs-top: wrong/infinitely changing wsp values
  • BZ - 2120498 - mgr/snap_schedule assumes that the client snap dir is always ".snap"
  • BZ - 2122275 - pybind/mgr/volumes: add basic introspection
  • BZ - 2122284 - mgr/stats: change in structure of perf_stats o/p
  • BZ - 2124417 - mgr/stats: be resilient to offline MDS rank-0
  • BZ - 2125575 - [CephFS] mgr/volumes: display in-progress clones for a snapshot
  • BZ - 2125578 - [CephFS] mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
  • BZ - 2126163 - ceph-mds issues during upgrade from 5.1 to 5.2
  • BZ - 2127110 - [cee/sd][iscsigw] While removing iscsigw in rhcs5, getting hung
  • BZ - 2127442 - client: track (and forward to MDS) average read/write/metadata latency
  • BZ - 2128215 - [RGW-MS] RGW multisite sync is slow during brownfield execution
  • BZ - 2129996 - mds only stores damage for up to one dentry per dirfrag
  • BZ - 2130667 - [RHCS 5.3.z1] osd/scrub: "scrub a chunk" requests are sent to the wrong set of replicas
  • BZ - 2130845 - [cee/sd][ceph-dashboard] ceph-dashboard is showing ISCSI Gateways as down after following the documentation
  • BZ - 2130901 - Do not abort MDS on unknown messages
  • BZ - 2135723 - mgr/volumes: addition of human-readable flag to `fs volume info` command
  • BZ - 2136407 - mds: wait unlink to finish to avoid conflict when creating same dentries
  • BZ - 2136909 - CVE-2022-3650 Ceph: ceph-crash.service allows local ceph user to root exploit
  • BZ - 2141164 - [cee/sd][iscsi] Unable to remove the duplicate "host.containers.internal" GW entry from gwcli in RHCS 5
  • BZ - 2142624 - [RHCS 5.2][Prometheus][Most PG metrics are missing]
  • BZ - 2152053 - ceph orchestrator affected by ceph-volume inventory commands that hang and stay in D state
  • BZ - 2153774 - [CEE/sd][RGW][Swift API client] Intermittent HTTP 401 Unauthorized error with swift client after upgrading Ceph cluster to RHCS 4.3z1
  • BZ - 2157952 - [RHCS][RHEL 9][GSS][OCS 4.9] pod rook-ceph-rgw client.rgw.ocs.storagecluster.cephobjectstore.a crashed - thread_name:radosgw
  • BZ - 2158286 - [CEE][cephfs] cephfs-mirror service kept getting permission denied errors
  • BZ - 2158690 - cephfs-top: new options to sort and limit
  • BZ - 2159301 - mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
  • BZ - 2160209 - [RFE] Bucket Notifications Using SASL with SCRAM-SHA-256 Mechanism
  • BZ - 2160398 - [Ceph-Dashboard] Allow CORS if the origin ip is known
  • BZ - 2161478 - MDS: scan_stray_dir doesn't walk through all stray inode fragment
  • BZ - 2161481 - mds: md_log_replay thread (replay thread) can remain blocked
  • BZ - 2162135 - [cee][rgw] Upgrade to 4.3z1 with vault results in (AccessDenied) failures when accessing buckets.
  • BZ - 2164338 - Large Omap objects found in pool 'ocs-storagecluster-cephfilesystem-metadata'
  • BZ - 2164853 - SNMP gateway daemon incorrectly set in cephadm binary
  • BZ - 2165890 - [CEE/sd][ceph-rgw][Unable to rename large objects using aws cli or s3cmd after upgrading to RHCS 5.3]
  • BZ - 2166652 - ceph fs volume create command not found
  • BZ - 2166713 - [cee/sd][ceph-volume] limit filter is not working when multiple osd service spec are deployed and getting the warning "cephadm [INF] Refuse to add /dev/nvme0n1 due to limit policy of <x>"
  • BZ - 2167549 - cephadm removes config and keyring mid flight
  • BZ - 2168019 - [CephFS] cephfs-top not working
  • BZ - 2170812 - OSD prepare job fails with KeyError: 'KNAME'