Moderate: Red Hat Ceph Storage security, bug fix, and enhancement update

Related Vulnerabilities: CVE-2019-19337

Synopsis

Moderate: Red Hat Ceph Storage security, bug fix, and enhancement update

Type/Severity

Security Advisory: Moderate

Topic

An update is now available for Red Hat Ceph Storage 3.3 that runs on Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Security Fix(es):

  • ceph: denial of service in RGW daemon (CVE-2019-19337)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Bug Fix(es) and Enhancement(s):

For detailed information on changes in this release, see the Red Hat Ceph Storage 3.3 Release Notes available at:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3.3/html-single/release_notes/index

Solution

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258
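As an illustration of the standard procedure for a Red Hat Ceph Storage 3 node on RHEL 7, the update is typically applied by enabling the appropriate RHCS 3 repositories, updating packages, and restarting the affected daemons. The repository names below follow the usual RHEL 7 channel naming for RHCS 3; confirm the exact steps and repositories for your deployment against the article linked above.

```shell
# Enable the Red Hat Ceph Storage 3 repositories relevant to this node's
# role (MON, OSD, and/or Tools); verify names against your subscription:
subscription-manager repos --enable=rhel-7-server-rhceph-3-mon-rpms
subscription-manager repos --enable=rhel-7-server-rhceph-3-osd-rpms
subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms

# Apply the updated packages:
yum update

# Restart the daemon affected by CVE-2019-19337 (the RADOS Gateway) so
# the fixed binary is loaded; restart other Ceph daemons as appropriate
# for the node's role:
systemctl restart ceph-radosgw.target
```

On containerized deployments, packages and daemons are instead updated via the container images and ceph-ansible playbooks rather than yum on the host.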

Affected Products

  • Red Hat Ceph Storage 3 x86_64
  • Red Hat Ceph Storage MON 3 x86_64
  • Red Hat Ceph Storage OSD 3 x86_64
  • Red Hat Ceph Storage for Power 3 ppc64le
  • Red Hat Ceph Storage MON for Power 3 ppc64le
  • Red Hat Ceph Storage OSD for Power 3 ppc64le

Fixes

  • BZ - 1552210 - [ceph-ansible] [ceph-container] : failed to add new mgr with '--limit' option - trying to copy mgr keyring without generating
  • BZ - 1569689 - MDS rolling-upgrade process needs to be changed to follow new recommendations
  • BZ - 1603551 - OSP13 deploy fails pg count exceeds max
  • BZ - 1616159 - [ceph-ansible] [ceph-container] : switch from rpm to containerized - OSDs not coming up after the switch saying encrypted device still in use
  • BZ - 1622729 - remove warnings for unsupported variables
  • BZ - 1623580 - [RFE] Prevent customers from installing an OSD device on the same disk as the OS
  • BZ - 1638904 - lv-create.yml/lv-teardown.yml should fail if lv_vars.yaml has not been edited
  • BZ - 1640525 - [Ceph-Ansible] Missing fourth and fifth scenarios in osds.yml.sample
  • BZ - 1644611 - [RFE] Listing ceph-disk’s OSDs
  • BZ - 1646456 - [ceph-ansible] - ubuntu - playbook must fail if debian rhcs packages are not installed
  • BZ - 1654790 - ceph-validate : No clear error when osd_scenario is not set
  • BZ - 1664112 - Cache size is not created correctly in a hyperconverged installation when using the is_hci flag
  • BZ - 1665877 - RBD mirroring configuration issue with ceph-ansible
  • BZ - 1734513 - all users has access to read ceph manager client keyring files
  • BZ - 1744529 - fetching config overrides can result in crash due to unsafe observer calls
  • BZ - 1749097 - ceph-ansible filestore fails to start containerized OSD when using block device like /dev/loop3
  • BZ - 1749124 - Invalid bucket added to reshard list cannot be removed
  • BZ - 1749489 - [RFE] Support use of SSE-S3 headers in RGW with AES256 server side default encryption
  • BZ - 1749874 - [RHCS 3][RFE] Adding Placement Group id in Large omap log message
  • BZ - 1750115 - When listing of bucket entries, entries following an entry for which check_disk_state() = -ENOENT may not get listed
  • BZ - 1752163 - [RFE] tools/rados: allow list objects in a specific pg in a pool
  • BZ - 1753942 - [GSS] cephmetrics grafana dashboard do not show disk IOPS/Throughput in RHCS 3.3
  • BZ - 1754432 - [cee/sd][ceph-ansible] when running playbook to push new ceph.conf: ansible-playbook site.yml --tags='ceph_update_config' playbook fails on "The conditional check 'osd_socket_stat.rc == 0' failed" (for mon_socket_stat too)
  • BZ - 1757298 - [RGW]: Bucket rename creates a duplicate entry in bucket list
  • BZ - 1757400 - please backport speed improvement in chown command in switch to containers
  • BZ - 1765230 - [ceph-ansible]Ceph-mds -allow multimds task is failing
  • BZ - 1765652 - upgrade is broken when no mds is present in inventory
  • BZ - 1769760 - [ceph-ansible] - ceph_repository_type being validated unnecessarily in containerized scenario
  • BZ - 1777050 - STS crashes with uncaught exception when session token is not base64 encoded
  • BZ - 1779158 - [RGW]: Put object ACL fails due to missing content length
  • BZ - 1780688 - /etc/systemd/system/ceph-osd@.service contain the wrong OSD container names
  • BZ - 1781170 - CVE-2019-19337 ceph: denial of service in RGW daemon

CVEs

  • CVE-2019-19337

References

  • https://access.redhat.com/security/cve/CVE-2019-19337