Commit 3ca6644e authored by Michael Ellerman, committed by Paul Mackerras

[POWERPC] Make IOMMU code safe for > 132 GB of memory

Currently the IOMMU code allocates a single page for the segment table, which
isn't safe if we have more than 132 GB of RAM.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
parent bd83fbde
@@ -310,8 +310,8 @@ static void cell_iommu_setup_hardware(struct cbe_iommu *iommu, unsigned long siz
 {
 	struct page *page;
 	int ret, i;
-	unsigned long reg, segments, pages_per_segment, ptab_size, n_pte_pages;
-	unsigned long xlate_base;
+	unsigned long reg, segments, pages_per_segment, ptab_size, stab_size,
+		n_pte_pages, xlate_base;
 	unsigned int virq;
 
 	if (cell_iommu_find_ioc(iommu->nid, &xlate_base))
@@ -328,7 +328,8 @@ static void cell_iommu_setup_hardware(struct cbe_iommu *iommu, unsigned long siz
 		__FUNCTION__, iommu->nid, segments, pages_per_segment);
 
 	/* set up the segment table */
-	page = alloc_pages_node(iommu->nid, GFP_KERNEL, 0);
+	stab_size = segments * sizeof(unsigned long);
+	page = alloc_pages_node(iommu->nid, GFP_KERNEL, get_order(stab_size));
 	BUG_ON(!page);
 	iommu->stab = page_address(page);
 	clear_page(iommu->stab);