commit 3297e760
Author: Nicolas Pitre <nico@marvell.com>

    highmem: atomic highmem kmap page pinning

    Most ARM machines have a non-I/O-coherent cache, meaning that the
    dma_map_*() set of functions must clean and/or invalidate the affected
    memory manually before DMA occurs.  And because the majority of those
    machines have a VIVT cache, the cache maintenance operations must be
    performed using virtual addresses.
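
    For illustration only, such a maintenance pass amounts to walking the
    buffer's virtual mapping one cache line at a time; cache_maint_range(),
    clean_dcache_line() and invalidate_dcache_line() below are made-up
    names for this sketch, not the actual arch/arm helpers:

    	/*
    	 * Minimal sketch of VIVT cache maintenance: each cache line
    	 * covering the buffer is cleaned or invalidated through its
    	 * *virtual* address, so the mapping must stay valid for the
    	 * whole walk.
    	 */
    	static void cache_maint_range(void *vaddr, size_t size,
    				      enum dma_data_direction dir)
    	{
    		char *p = vaddr;
    		char *end = p + size;

    		for (; p < end; p += L1_CACHE_BYTES) {
    			if (dir == DMA_FROM_DEVICE)
    				invalidate_dcache_line(p);
    			else
    				clean_dcache_line(p);
    		}
    	}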
    
    When a highmem page is kunmap'd, its mapping (and cache) remains in place
    in case it is kmap'd again.  However, if dma_map_page() is then called
    with such a page, some cache maintenance on the remaining mapping must
    be performed.  In that case, page_address(page) is non-null and we can
    use that address to synchronize the cache.
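
    In other words, the highmem case in dma_map_page() boils down to
    something like the following sketch (simplified, not the actual
    arch/arm implementation; cache_maint_range() is the illustrative
    helper from above):

    	dma_addr_t dma_map_page(struct device *dev, struct page *page,
    				unsigned long offset, size_t size,
    				enum dma_data_direction dir)
    	{
    		void *vaddr = page_address(page); /* NULL if not mapped */

    		if (vaddr)
    			cache_maint_range(vaddr + offset, size, dir);

    		return page_to_phys(page) + offset;
    	}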
    
    It is unlikely but still possible for kmap() to race and recycle the
    virtual address obtained above, and use it for another page before the
    ongoing cache invalidation loop in dma_map_page() is done.  In that
    case, the new mapping could end up with dirty cache lines for another
    page, and the unsuspecting cache invalidation loop in dma_map_page()
    might simply discard those dirty cache lines, resulting in data loss.
    
    For example, let's consider this sequence of events:
    
    	- dma_map_page(..., DMA_FROM_DEVICE) is called on a highmem page.
    
    	-->	- vaddr = page_address(page) is non-null. In this case
    		it is likely that the page has valid cache lines
    		associated with vaddr. Remember that the cache is VIVT.
    
    		-->	for (i = vaddr; i < vaddr + PAGE_SIZE; i += 32)
    				invalidate_cache_line(i);
    
    	*** preemption occurs in the middle of the loop above ***
    
    	- kmap_high() is called for a different page.
    
    	-->	- last_pkmap_nr wraps to zero and flush_all_zero_pkmaps()
    		  is called.  The pkmap_count value for the page passed
    		  to dma_map_page() above happens to be 1, so the page
    		  is unmapped.  But prior to that, flush_cache_kmaps()
    		  cleared the cache for it.  So far so good.
    
    		- A fresh pkmap entry is assigned for this kmap request.
    		  Murphy's law says this pkmap entry will eventually
    		  happen to use the same vaddr as the one which used to
    		  belong to the other page being processed by
    		  dma_map_page() in the preempted thread above.
    
    	- The kmap_high() caller starts dirtying the cache using the
    	  just-assigned virtual mapping for its page.
    
    	*** the first thread is rescheduled ***
    
    			- The for(...) loop is resumed, but now cached
    			  data belonging to a different physical page is
    			  being discarded!
    
    And this is not only a preemption issue: ARM can be SMP as well, making
    the above scenario just as likely with two CPUs involved.  Hence the
    need for some kind of pkmap page pinning which can be used in any
    context, primarily for the benefit of dma_map_page() on ARM.
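
    With such a pinning primitive, the dma_map_page() path can hold the
    pkmap entry across the whole maintenance loop, along these lines (a
    sketch of the intended usage, not the actual arch/arm change):

    	void *vaddr = kmap_high_get(page); /* pins the entry, or NULL */

    	if (vaddr) {
    		cache_maint_range(vaddr + offset, size, dir);
    		kunmap_high(page);	/* drops the pin */
    	}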
    
    This provides the necessary interface, kmap_high_get(), to cope with
    the above issue when ARCH_NEEDS_KMAP_HIGH_GET is defined; otherwise the
    resulting code is unchanged.
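
    The new primitive is essentially an atomic page_address() combined
    with a reference bump on the pkmap entry.  Simplified from the patch
    (pkmap_count[] and PKMAP_NR() are existing highmem internals, and
    lock_kmap_any() takes the IRQ-safe lock variant when
    ARCH_NEEDS_KMAP_HIGH_GET is defined):

    	#ifdef ARCH_NEEDS_KMAP_HIGH_GET
    	void *kmap_high_get(struct page *page)
    	{
    		unsigned long vaddr, flags;

    		lock_kmap_any(flags);
    		vaddr = (unsigned long)page_address(page);
    		if (vaddr) {
    			BUG_ON(pkmap_count[PKMAP_NR(vaddr)] < 1);
    			pkmap_count[PKMAP_NR(vaddr)]++;	/* pin */
    		}
    		unlock_kmap_any(flags);
    		return (void *)vaddr;
    	}
    	#endif
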
    Signed-off-by: Nicolas Pitre <nico@marvell.com>
    Reviewed-by: MinChan Kim <minchan.kim@gmail.com>
    Acked-by: Andrew Morton <akpm@linux-foundation.org>