x86: ioremap_nocache fix
This patch fixes a bug in ioremap_nocache(). ioremap_nocache() calls __ioremap() with flags != 0 to do the real work, which calls change_page_attr_addr() if phys_addr + size - 1 < (end_pfn_map << PAGE_SHIFT). But some pages between 0 and end_pfn_map << PAGE_SHIFT are not covered by the identity map, which makes change_page_attr_addr() fail.

This patch is based on the latest x86 git and has been tested on an x86_64 platform.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
commit 1c17f4d615
parent d2e626f45c
@@ -41,7 +41,14 @@ ioremap_change_attr(unsigned long phys_addr, unsigned long size,
 	if (phys_addr + size - 1 < (end_pfn_map << PAGE_SHIFT)) {
 		unsigned long npages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
 		unsigned long vaddr = (unsigned long) __va(phys_addr);
+		int level;
 
+		/*
+		 * If there is no identity map for this address,
+		 * change_page_attr_addr is unnecessary
+		 */
+		if (!lookup_address(vaddr, &level))
+			return err;
 		/*
 		 * Must use a address here and not struct page because the phys addr
 		 * can be a in hole between nodes and not have an memmap entry.