Commit e79323bd authored by Mikulas Patocka and committed by Linus Torvalds

user namespace: fix incorrect memory barriers

smp_read_barrier_depends() can be used if there is a data dependency between
the readers - i.e. if the read operation after the barrier uses an address
that was obtained from the read operation before the barrier.

In this file, there is only a control dependency, no data dependency, so the
use of smp_read_barrier_depends() is incorrect. The code could fail in the
following way:
* the cpu predicts that idx < extents is true and starts executing the
  body of the for loop
* the cpu fetches map->extent[0].first and map->extent[0].count
* the cpu fetches map->nr_extents
* the cpu verifies that idx < extents is true, so it commits the
  instructions in the body of the for loop

The problem is that in this scenario, the cpu read map->extent[0].first
and map->nr_extents in the wrong order. We need a full read memory barrier
to prevent it.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent c9eaa447
@@ -152,7 +152,7 @@ static u32 map_id_range_down(struct uid_gid_map *map, u32 id, u32 count)
 
 	/* Find the matching extent */
 	extents = map->nr_extents;
-	smp_read_barrier_depends();
+	smp_rmb();
 	for (idx = 0; idx < extents; idx++) {
 		first = map->extent[idx].first;
 		last = first + map->extent[idx].count - 1;
@@ -176,7 +176,7 @@ static u32 map_id_down(struct uid_gid_map *map, u32 id)
 
 	/* Find the matching extent */
 	extents = map->nr_extents;
-	smp_read_barrier_depends();
+	smp_rmb();
 	for (idx = 0; idx < extents; idx++) {
 		first = map->extent[idx].first;
 		last = first + map->extent[idx].count - 1;
@@ -199,7 +199,7 @@ static u32 map_id_up(struct uid_gid_map *map, u32 id)
 
 	/* Find the matching extent */
 	extents = map->nr_extents;
-	smp_read_barrier_depends();
+	smp_rmb();
 	for (idx = 0; idx < extents; idx++) {
 		first = map->extent[idx].lower_first;
 		last = first + map->extent[idx].count - 1;
@@ -615,9 +615,8 @@ static ssize_t map_write(struct file *file, const char __user *buf,
 	 * were written before the count of the extents.
 	 *
-	 * To achieve this smp_wmb() is used on guarantee the write
-	 * order and smp_read_barrier_depends() is guaranteed that we
-	 * don't have crazy architectures returning stale data.
-	 *
+	 * To achieve this smp_wmb() is used on guarantee the write
+	 * order and smp_rmb() is guaranteed that we don't have crazy
+	 * architectures returning stale data.
 	 */
 	mutex_lock(&id_map_mutex);