Moderate: Red Hat Ceph Storage 1.3.3 security, bug fix, and enhancement update

Related Vulnerabilities: CVE-2016-7031

Synopsis

Moderate: Red Hat Ceph Storage 1.3.3 security, bug fix, and enhancement update

Type/Severity

Security Advisory: Moderate

Topic

Red Hat Ceph Storage 1.3.3, which fixes one security issue and multiple bugs and adds various enhancements, is now available for Red Hat Enterprise Linux 7.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

Security Fix(es):

  • A flaw was found in the Ceph RGW code that allows an anonymous user to list the contents of an RGW bucket, bypassing an ACL that should permit bucket listing only by authenticated users. (CVE-2016-7031)
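The fix above can be sanity-checked after updating: an unauthenticated GET against a bucket whose ACL grants read access only to authenticated users should return an AccessDenied error rather than an object listing. A minimal sketch of how to classify such a response (the helper function is ours for illustration, not part of Ceph, and the sample XML bodies are synthetic):

```python
import xml.etree.ElementTree as ET

# Namespace used by S3-compatible ListBucketResult documents, which RGW emits.
S3_NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def listing_exposed(status_code: int, body: str) -> bool:
    """Return True if an *unauthenticated* bucket GET leaked a listing.

    A patched RGW should answer 403 (AccessDenied) for a bucket whose ACL
    grants read only to authenticated users; a vulnerable one answers 200
    with a ListBucketResult document.
    """
    if status_code != 200:
        return False
    try:
        root = ET.fromstring(body)
    except ET.ParseError:
        return False
    return root.tag == S3_NS + "ListBucketResult"

# Synthetic example responses, for illustration only:
leaked = ('<?xml version="1.0"?>'
          '<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
          '<Name>bkt</Name></ListBucketResult>')
denied = '<?xml version="1.0"?><Error><Code>AccessDenied</Code></Error>'

print(listing_exposed(200, leaked))   # vulnerable behaviour
print(listing_exposed(403, denied))  # patched behaviour
```

In practice the response would come from an unauthenticated `GET http://<rgw-host>/<bucket>` request (host and bucket name are deployment-specific) issued without any AWS signature headers.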

For detailed information on changes in this release, see the Red Hat Ceph Storage 1.3.3 Release Notes linked from the References section.

Solution

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

Affected Products

  • Red Hat Enterprise Linux Server 7 x86_64
  • Red Hat Enterprise Linux Workstation 7 x86_64
  • Red Hat Enterprise Linux Desktop 7 x86_64
  • Red Hat Enterprise Linux for Scientific Computing 7 x86_64
  • Red Hat Ceph Storage 1.3 x86_64
  • Red Hat Ceph Storage Calamari 1.3 x86_64
  • Red Hat Ceph Storage MON 1.3 x86_64
  • Red Hat Ceph Storage OSD 1.3 x86_64

Fixes

  • BZ - 1193710 - ceph tell: broken error message / wrong hinting
  • BZ - 1273127 - Backport tracker 12738 - OSD reboots every few minutes with FAILED assert(clone_size.count(clone))
  • BZ - 1278524 - 1.3.1: ceph-deploy mon destroy prints " UnboundLocalError: local variable 'status_args' referenced before assignment"
  • BZ - 1284696 - config set with negative value results in "error setting 'filestore_merge_threshold' to '-40': (22) Invalid argument"
  • BZ - 1291632 - OSDMap Leak : OSD does not delete old OSD Maps in a timely fashion
  • BZ - 1299409 - OSD processes doesn't have a PID and hence sysvinit fails to restart the OSD processes
  • BZ - 1301706 - Need "orphans find" command to be listed in the manpage of radosgw-admin command
  • BZ - 1302721 - "ceph-deploy calamari connect" is not installing diamond package also failing to start salt-minion service
  • BZ - 1304533 - ceph-deploy 1.5.36 for 1.3.3
  • BZ - 1306842 - Adding a new OSD to a Ceph cluster with a CRUSH weight of 0 causes 'ceph df' to report invalid MAX AVAIL on pools
  • BZ - 1312587 - [RADOS]:- Stale cluster interference causes mon start failures
  • BZ - 1316268 - Set EncodingType in ListBucketResult
  • BZ - 1316287 - Possible QEMU deadlock after creating image snapshots
  • BZ - 1317427 - [RFE] ceph.log is used for 2 different purposes (ceph and ceph-deploy) , Rename ceph-deploy log file to ceph-deploy-${CLUSTER}.log
  • BZ - 1330279 - [RH Ceph 1.3.2Async / 0.94.5-12.el7cp] Few selinux denials from ceph-mon / ceph-osd
  • BZ - 1330643 - ceph df - %USED per pool is wrong
  • BZ - 1331523 - [RADOS]:- osd gets heavy weight due to reweight-by-utilization with max_change set to 1
  • BZ - 1331764 - OSDs are not selected properly while reweight-by-utilization
  • BZ - 1332470 - rados bench sometimes numbers not separated by blank
  • BZ - 1333907 - reweight-by-utilization:- While increasing the weight of the underutilized osds we should consider the least used first
  • BZ - 1334534 - collection split causes orphaned files in filestore causing inconsistent scrubs and crashes on pg removal
  • BZ - 1335269 - rebase ceph to 0.94.9
  • BZ - 1344134 - RGW Sync agent doesn't resync some objects after failover
  • BZ - 1347010 - [RFE] RGW - Let the default quota settings take effect during user creation
  • BZ - 1349484 - radosgw-admin region-map set is not setting the bucket quota
  • BZ - 1360444 - [backport] rpm: SELinux relabel does not work on systems using CIL
  • BZ - 1360467 - Newly added but never started OSDs cause osdmap update issues for Calamari
  • BZ - 1368402 - backport tracker : 15647 : osd: rados cppool omap to ec pool crashes osd
  • BZ - 1369013 - [RGW] Files incorrectly uploaded with swift api
  • BZ - 1372446 - CVE-2016-7031 ceph: RGW permits bucket listing when authenticated_users=read

CVEs

  • CVE-2016-7031

References