Commit a862f68a authored by Mike Rapoport, committed by Linus Torvalds

docs/core-api/mm: fix return value descriptions in mm/

Many kernel-doc comments in mm/ have the return value descriptions
either misformatted or omitted altogether, which makes the kernel-doc
script unhappy:

$ make V=1 htmldocs
...
./mm/util.c:36: info: Scanning doc for kstrdup
./mm/util.c:41: warning: No description found for return value of 'kstrdup'
./mm/util.c:57: info: Scanning doc for kstrdup_const
./mm/util.c:66: warning: No description found for return value of 'kstrdup_const'
./mm/util.c:75: info: Scanning doc for kstrndup
./mm/util.c:83: warning: No description found for return value of 'kstrndup'
...

Fixing the formatting and adding the missing return value descriptions
eliminates ~100 such warnings.

Link: http://lkml.kernel.org/r/1549549644-4903-4-git-send-email-rppt@linux.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent bc8ff3ca
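
For reference, kernel-doc expects the return value in a dedicated "Return:" section rather than in free-form "Returns ..." prose. The sketch below is purely illustrative (the function and its comment are invented for this example, not part of the patch); it shows the comment shape that keeps the kernel-doc script quiet:

    /**
     * example_lookup - find an example object by id (hypothetical function)
     * @id: identifier of the object to look up
     *
     * Free-form description goes here. Describing the result only as
     * "Returns the object ..." in this part is what triggers the
     * "No description found for return value" warning.
     *
     * Return: pointer to the found object on success, %NULL if @id is unknown.
     */
    struct example *example_lookup(unsigned long id);

The hunks below apply exactly this pattern across mm/.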
@@ -114,10 +114,9 @@ static DEVICE_ATTR(pools, 0444, show_pools, NULL);
  * @size: size of the blocks in this pool.
  * @align: alignment requirement for blocks; must be a power of two
  * @boundary: returned blocks won't cross this power of two boundary
- * Context: !in_interrupt()
+ * Context: not in_interrupt()
  *
- * Returns a dma allocation pool with the requested characteristics, or
- * null if one can't be created. Given one of these pools, dma_pool_alloc()
+ * Given one of these pools, dma_pool_alloc()
  * may be used to allocate memory. Such memory will all have "consistent"
  * DMA mappings, accessible by the device and its driver without using
  * cache flushing primitives. The actual size of blocks allocated may be
@@ -127,6 +126,9 @@ static DEVICE_ATTR(pools, 0444, show_pools, NULL);
  * cross that size boundary. This is useful for devices which have
  * addressing restrictions on individual DMA transfers, such as not crossing
  * boundaries of 4KBytes.
+ *
+ * Return: a dma allocation pool with the requested characteristics, or
+ * %NULL if one can't be created.
  */
 struct dma_pool *dma_pool_create(const char *name, struct device *dev,
         size_t size, size_t align, size_t boundary)
@@ -313,7 +315,7 @@ EXPORT_SYMBOL(dma_pool_destroy);
  * @mem_flags: GFP_* bitmask
  * @handle: pointer to dma address of block
  *
- * This returns the kernel virtual address of a currently unused block,
+ * Return: the kernel virtual address of a currently unused block,
  * and reports its dma address through the handle.
  * If such a memory block can't be allocated, %NULL is returned.
  */
@@ -498,6 +500,9 @@ static int dmam_pool_match(struct device *dev, void *res, void *match_data)
  *
  * Managed dma_pool_create(). DMA pool created with this function is
  * automatically destroyed on driver detach.
+ *
+ * Return: a managed dma allocation pool with the requested
+ * characteristics, or %NULL if one can't be created.
  */
 struct dma_pool *dmam_pool_create(const char *name, struct device *dev,
         size_t size, size_t align, size_t allocation)
......
@@ -392,6 +392,8 @@ static int filemap_check_and_keep_errors(struct address_space *mapping)
  * opposed to a regular memory cleansing writeback. The difference between
  * these two operations is that if a dirty page/buffer is encountered, it must
  * be waited upon, and not just skipped over.
+ *
+ * Return: %0 on success, negative error code otherwise.
  */
 int __filemap_fdatawrite_range(struct address_space *mapping, loff_t start,
         loff_t end, int sync_mode)
@@ -438,6 +440,8 @@ EXPORT_SYMBOL(filemap_fdatawrite_range);
  *
  * This is a mostly non-blocking flush. Not suitable for data-integrity
  * purposes - I/O may not be started against all dirty pages.
+ *
+ * Return: %0 on success, negative error code otherwise.
  */
 int filemap_flush(struct address_space *mapping)
 {
@@ -453,6 +457,9 @@ EXPORT_SYMBOL(filemap_flush);
  *
  * Find at least one page in the range supplied, usually used to check if
  * direct writing in this range will trigger a writeback.
+ *
+ * Return: %true if at least one page exists in the specified range,
+ * %false otherwise.
  */
 bool filemap_range_has_page(struct address_space *mapping,
         loff_t start_byte, loff_t end_byte)
@@ -529,6 +536,8 @@ static void __filemap_fdatawait_range(struct address_space *mapping,
  * Since the error status of the address space is cleared by this function,
  * callers are responsible for checking the return value and handling and/or
  * reporting the error.
+ *
+ * Return: error status of the address space.
  */
 int filemap_fdatawait_range(struct address_space *mapping, loff_t start_byte,
         loff_t end_byte)
@@ -551,6 +560,8 @@ EXPORT_SYMBOL(filemap_fdatawait_range);
  * Since the error status of the file is advanced by this function,
  * callers are responsible for checking the return value and handling and/or
  * reporting the error.
+ *
+ * Return: error status of the address space vs. the file->f_wb_err cursor.
  */
 int file_fdatawait_range(struct file *file, loff_t start_byte, loff_t end_byte)
 {
@@ -572,6 +583,8 @@ EXPORT_SYMBOL(file_fdatawait_range);
  * Use this function if callers don't handle errors themselves. Expected
  * call sites are system-wide / filesystem-wide data flushers: e.g. sync(2),
  * fsfreeze(8)
+ *
+ * Return: error status of the address space.
  */
 int filemap_fdatawait_keep_errors(struct address_space *mapping)
 {
@@ -623,6 +636,8 @@ EXPORT_SYMBOL(filemap_write_and_wait);
  *
  * Note that @lend is inclusive (describes the last byte to be written) so
  * that this function can be used to write to the very end-of-file (end = -1).
+ *
+ * Return: error status of the address space.
  */
 int filemap_write_and_wait_range(struct address_space *mapping,
         loff_t lstart, loff_t lend)
@@ -678,6 +693,8 @@ EXPORT_SYMBOL(__filemap_set_wb_err);
  * While we handle mapping->wb_err with atomic operations, the f_wb_err
  * value is protected by the f_lock since we must ensure that it reflects
  * the latest value swapped in for this file descriptor.
+ *
+ * Return: %0 on success, negative error code otherwise.
  */
 int file_check_and_advance_wb_err(struct file *file)
 {
@@ -720,6 +737,8 @@ EXPORT_SYMBOL(file_check_and_advance_wb_err);
  *
  * After writing out and waiting on the data, we check and advance the
  * f_wb_err cursor to the latest value, and return any errors detected there.
+ *
+ * Return: %0 on success, negative error code otherwise.
  */
 int file_write_and_wait_range(struct file *file, loff_t lstart, loff_t lend)
 {
@@ -753,6 +772,8 @@ EXPORT_SYMBOL(file_write_and_wait_range);
  * caller must do that.
  *
  * The remove + add is atomic. This function cannot fail.
+ *
+ * Return: %0
  */
 int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 {
@@ -867,6 +888,8 @@ static int __add_to_page_cache_locked(struct page *page,
  *
  * This function is used to add a page to the pagecache. It must be locked.
  * This function does not add the page to the LRU. The caller must do that.
+ *
+ * Return: %0 on success, negative error code otherwise.
  */
 int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
         pgoff_t offset, gfp_t gfp_mask)
@@ -1463,7 +1486,7 @@ EXPORT_SYMBOL(page_cache_prev_miss);
  * If the slot holds a shadow entry of a previously evicted page, or a
  * swap entry from shmem/tmpfs, it is returned.
  *
- * Otherwise, %NULL is returned.
+ * Return: the found page or shadow entry, %NULL if nothing is found.
  */
 struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
 {
@@ -1521,9 +1544,9 @@ EXPORT_SYMBOL(find_get_entry);
  * If the slot holds a shadow entry of a previously evicted page, or a
  * swap entry from shmem/tmpfs, it is returned.
  *
- * Otherwise, %NULL is returned.
- *
  * find_lock_entry() may sleep.
+ *
+ * Return: the found page or shadow entry, %NULL if nothing is found.
  */
 struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset)
 {
@@ -1563,12 +1586,14 @@ EXPORT_SYMBOL(find_lock_entry);
  * - FGP_CREAT: If page is not present then a new page is allocated using
  *   @gfp_mask and added to the page cache and the VM's LRU
  *   list. The page is returned locked and with an increased
- *   refcount. Otherwise, NULL is returned.
+ *   refcount.
  *
  * If FGP_LOCK or FGP_CREAT are specified then the function may sleep even
  * if the GFP flags specified for FGP_CREAT are atomic.
  *
  * If there is a page cache page, it is returned with an increased refcount.
+ *
+ * Return: the found page or %NULL otherwise.
  */
 struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
         int fgp_flags, gfp_t gfp_mask)
@@ -1656,8 +1681,7 @@ EXPORT_SYMBOL(pagecache_get_page);
  * Any shadow entries of evicted pages, or swap entries from
  * shmem/tmpfs, are included in the returned array.
  *
- * find_get_entries() returns the number of pages and shadow entries
- * which were found.
+ * Return: the number of pages and shadow entries which were found.
  */
 unsigned find_get_entries(struct address_space *mapping,
         pgoff_t start, unsigned int nr_entries,
@@ -1727,8 +1751,8 @@ unsigned find_get_entries(struct address_space *mapping,
  * indexes. There may be holes in the indices due to not-present pages.
  * We also update @start to index the next page for the traversal.
  *
- * find_get_pages_range() returns the number of pages which were found. If this
- * number is smaller than @nr_pages, the end of specified range has been
+ * Return: the number of pages which were found. If this number is
+ * smaller than @nr_pages, the end of specified range has been
  * reached.
  */
 unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
@@ -1801,7 +1825,7 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
  * find_get_pages_contig() works exactly like find_get_pages(), except
  * that the returned number of pages are guaranteed to be contiguous.
  *
- * find_get_pages_contig() returns the number of pages which were found.
+ * Return: the number of pages which were found.
  */
 unsigned find_get_pages_contig(struct address_space *mapping, pgoff_t index,
         unsigned int nr_pages, struct page **pages)
@@ -1862,6 +1886,8 @@ EXPORT_SYMBOL(find_get_pages_contig);
  *
  * Like find_get_pages, except we only return pages which are tagged with
  * @tag. We update @index to index the next page for the traversal.
+ *
+ * Return: the number of pages which were found.
  */
 unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
         pgoff_t end, xa_mark_t tag, unsigned int nr_pages,
@@ -1939,6 +1965,8 @@ EXPORT_SYMBOL(find_get_pages_range_tag);
  *
  * Like find_get_entries, except we only return entries which are tagged with
  * @tag.
+ *
+ * Return: the number of entries which were found.
  */
 unsigned find_get_entries_tag(struct address_space *mapping, pgoff_t start,
         xa_mark_t tag, unsigned int nr_entries,
@@ -2024,6 +2052,10 @@ static void shrink_readahead_size_eio(struct file *filp,
  *
  * This is really ugly. But the goto's actually try to clarify some
  * of the logic when it comes to error handling etc.
+ *
+ * Return:
+ * * total number of bytes copied, including those the were already @written
+ * * negative error code if nothing was copied
  */
 static ssize_t generic_file_buffered_read(struct kiocb *iocb,
         struct iov_iter *iter, ssize_t written)
@@ -2285,6 +2317,9 @@ static ssize_t generic_file_buffered_read(struct kiocb *iocb,
  *
  * This is the "read_iter()" routine for all filesystems
  * that can use the page cache directly.
+ * Return:
+ * * number of bytes copied, even for partial reads
+ * * negative error code if nothing was read
  */
 ssize_t
 generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
@@ -2352,6 +2387,8 @@ EXPORT_SYMBOL(generic_file_read_iter);
  *
  * This adds the requested page to the page cache if it isn't already there,
  * and schedules an I/O to read in its contents from disk.
+ *
+ * Return: %0 on success, negative error code otherwise.
  */
 static int page_cache_read(struct file *file, pgoff_t offset, gfp_t gfp_mask)
 {
@@ -2466,6 +2503,8 @@ static void do_async_mmap_readahead(struct vm_area_struct *vma,
  * has not been released.
  *
  * We never return with VM_FAULT_RETRY and a bit from VM_FAULT_ERROR set.
+ *
+ * Return: bitwise-OR of %VM_FAULT_ codes.
  */
 vm_fault_t filemap_fault(struct vm_fault *vmf)
 {
@@ -2851,6 +2890,8 @@ static struct page *do_read_cache_page(struct address_space *mapping,
  * not set, try to fill the page and wait for it to become unlocked.
  *
  * If the page does not get brought uptodate, return -EIO.
+ *
+ * Return: up to date page on success, ERR_PTR() on failure.
  */
 struct page *read_cache_page(struct address_space *mapping,
         pgoff_t index,
@@ -2871,6 +2912,8 @@ EXPORT_SYMBOL(read_cache_page);
  * any new page allocations done using the specified allocation flags.
  *
  * If the page does not get brought uptodate, return -EIO.
+ *
+ * Return: up to date page on success, ERR_PTR() on failure.
  */
 struct page *read_cache_page_gfp(struct address_space *mapping,
         pgoff_t index,
@@ -3254,6 +3297,10 @@ EXPORT_SYMBOL(generic_perform_write);
  * This function does *not* take care of syncing data in case of O_SYNC write.
  * A caller has to handle it. This is mainly due to the fact that we want to
  * avoid syncing under i_mutex.
+ *
+ * Return:
+ * * number of bytes written, even for truncated writes
+ * * negative error code if no data has been written at all
  */
 ssize_t __generic_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
 {
@@ -3338,6 +3385,10 @@ EXPORT_SYMBOL(__generic_file_write_iter);
  * This is a wrapper around __generic_file_write_iter() to be used by most
  * filesystems. It takes care of syncing the file in case of O_SYNC file
  * and acquires i_mutex as needed.
+ * Return:
+ * * negative error code if no data has been written at all of
+ *   vfs_fsync_range() failed for a synchronous write
+ * * number of bytes written, even for truncated writes
  */
 ssize_t generic_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
 {
@@ -3364,8 +3415,7 @@ EXPORT_SYMBOL(generic_file_write_iter);
  * @gfp_mask: memory allocation flags (and I/O mode)
  *
  * The address_space is to try to release any data against the page
- * (presumably at page->private). If the release was successful, return '1'.
- * Otherwise return zero.
+ * (presumably at page->private).
  *
  * This may also be called if PG_fscache is set on a page, indicating that the
  * page is known to the local caching routines.
@@ -3373,6 +3423,7 @@ EXPORT_SYMBOL(generic_file_write_iter);
  * The @gfp_mask argument specifies whether I/O may be performed to release
  * this page (__GFP_IO), and whether the call may block (__GFP_RECLAIM & __GFP_FS).
  *
+ * Return: %1 if the release was successful, otherwise return zero.
  */
 int try_to_release_page(struct page *page, gfp_t gfp_mask)
 {
......
@@ -1504,6 +1504,8 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
  * under mm->mmap_sem write-lock, so it can change vma->vm_flags.
  * Caller must set VM_MIXEDMAP on vma if it wants to call this
  * function from other places, for example from page-fault handler.
+ *
+ * Return: %0 on success, negative error code otherwise.
  */
 int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
         struct page *page)
@@ -1831,7 +1833,9 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
  * @size: size of map area
  * @prot: page protection flags for this mapping
  *
- *  Note: this is only safe if the mm semaphore is held when called.
+ * Note: this is only safe if the mm semaphore is held when called.
+ *
+ * Return: %0 on success, negative error code otherwise.
  */
 int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
         unsigned long pfn, unsigned long size, pgprot_t prot)
@@ -1904,6 +1908,8 @@ EXPORT_SYMBOL(remap_pfn_range);
  *
  * NOTE! Some drivers might want to tweak vma->vm_page_prot first to get
  * whatever write-combining details or similar.
+ *
+ * Return: %0 on success, negative error code otherwise.
  */
 int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len)
 {
@@ -2382,12 +2388,13 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
  *
  * This function handles all that is needed to finish a write page fault in a
  * shared mapping due to PTE being read-only once the mapped page is prepared.
- * It handles locking of PTE and modifying it. The function returns
- * VM_FAULT_WRITE on success, 0 when PTE got changed before we acquired PTE
- * lock.
+ * It handles locking of PTE and modifying it.
  *
  * The function expects the page to be locked or other protection against
  * concurrent faults / writeback (such as DAX radix tree locks).
+ *
+ * Return: %VM_FAULT_WRITE on success, %0 when PTE got changed before
+ * we acquired PTE lock.
  */
 vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf)
 {
@@ -3214,6 +3221,8 @@ static vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
  *
  * Target users are page handler itself and implementations of
  * vm_ops->map_pages.
+ *
+ * Return: %0 on success, %VM_FAULT_ code in case of error.
  */
 vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
         struct page *page)
@@ -3274,11 +3283,12 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
  * This function handles all that is needed to finish a page fault once the
  * page to fault in is prepared. It handles locking of PTEs, inserts PTE for
  * given page, adds reverse page mapping, handles memcg charges and LRU
- * addition. The function returns 0 on success, VM_FAULT_ code in case of
- * error.
+ * addition.
  *
  * The function expects the page to be locked and on success it consumes a
  * reference of a page being mapped (for the PTE which maps it).
+ *
+ * Return: %0 on success, %VM_FAULT_ code in case of error.
  */
 vm_fault_t finish_fault(struct vm_fault *vmf)
 {
@@ -4159,7 +4169,7 @@ EXPORT_SYMBOL(follow_pte_pmd);
  *
  * Only IO mappings and raw PFN mappings are allowed.
  *
- * Returns zero and the pfn at @pfn on success, -ve otherwise.
+ * Return: zero and the pfn at @pfn on success, -ve otherwise.
  */
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
         unsigned long *pfn)
@@ -4309,6 +4319,8 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
  * @gup_flags: flags modifying lookup behaviour
  *
  * The caller must hold a reference on @mm.
+ *
+ * Return: number of bytes copied from source to destination.
  */
 int access_remote_vm(struct mm_struct *mm, unsigned long addr,
         void *buf, int len, unsigned int gup_flags)
......
@@ -222,6 +222,8 @@ EXPORT_SYMBOL(mempool_init_node);
  *
  * Like mempool_create(), but initializes the pool in (i.e. embedded in another
  * structure).
+ *
+ * Return: %0 on success, negative error code otherwise.
  */
 int mempool_init(mempool_t *pool, int min_nr, mempool_alloc_t *alloc_fn,
         mempool_free_t *free_fn, void *pool_data)
@@ -245,6 +247,8 @@ EXPORT_SYMBOL(mempool_init);
  * functions. This function might sleep. Both the alloc_fn() and the free_fn()
  * functions might sleep - as long as the mempool_alloc() function is not called
  * from IRQ contexts.
+ *
+ * Return: pointer to the created memory pool object or %NULL on error.
  */
 mempool_t *mempool_create(int min_nr, mempool_alloc_t *alloc_fn,
         mempool_free_t *free_fn, void *pool_data)
@@ -289,6 +293,8 @@ EXPORT_SYMBOL(mempool_create_node);
  * Note, the caller must guarantee that no mempool_destroy is called
  * while this function is running. mempool_alloc() & mempool_free()
  * might be called (eg. from IRQ contexts) while this function executes.
+ *
+ * Return: %0 on success, negative error code otherwise.
  */
 int mempool_resize(mempool_t *pool, int new_min_nr)
 {
@@ -363,6 +369,8 @@ EXPORT_SYMBOL(mempool_resize);
  * *never* fails when called from process contexts. (it might
  * fail if called from an IRQ context.)
  * Note: using __GFP_ZERO is not supported.
+ *
+ * Return: pointer to the allocated element or %NULL on error.
  */
 void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
 {
......
@@ -270,7 +270,7 @@ static void wb_min_max_ratio(struct bdi_writeback *wb,
  * node_dirtyable_memory - number of dirtyable pages in a node
  * @pgdat: the node
  *
- * Returns the node's number of pages potentially available for dirty
+ * Return: the node's number of pages potentially available for dirty
  * page cache. This is the base value for the per-node dirty limits.
  */
 static unsigned long node_dirtyable_memory(struct pglist_data *pgdat)
@@ -355,7 +355,7 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
 /**
  * global_dirtyable_memory - number of globally dirtyable pages
  *
- * Returns the global number of pages potentially available for dirty
+ * Return: the global number of pages potentially available for dirty
  * page cache. This is the base value for the global dirty limits.
  */
 static unsigned long global_dirtyable_memory(void)
@@ -470,7 +470,7 @@ void global_dirty_limits(unsigned long *pbackground, unsigned long *pdirty)
  * node_dirty_limit - maximum number of dirty pages allowed in a node
  * @pgdat: the node
  *
- * Returns the maximum number of dirty pages allowed in a node, based
+ * Return: the maximum number of dirty pages allowed in a node, based
  * on the node's dirtyable memory.
  */
 static unsigned long node_dirty_limit(struct pglist_data *pgdat)
@@ -495,7 +495,7 @@ static unsigned long node_dirty_limit(struct pglist_data *pgdat)
  * node_dirty_ok - tells whether a node is within its dirty limits
  * @pgdat: the node to check
  *
- * Returns %true when the dirty pages in @pgdat are within the node's
+ * Return: %true when the dirty pages in @pgdat are within the node's
  * dirty limit, %false if the limit is exceeded.
  */
 bool node_dirty_ok(struct pglist_data *pgdat)
@@ -743,9 +743,6 @@ static void mdtc_calc_avail(struct dirty_throttle_control *mdtc,
  * __wb_calc_thresh - @wb's share of dirty throttling threshold
  * @dtc: dirty_throttle_context of interest
  *
- * Returns @wb's dirty limit in pages. The term "dirty" in the context of
- * dirty balancing includes all PG_dirty, PG_writeback and NFS unstable pages.
- *
  * Note that balance_dirty_pages() will only seriously take it as a hard limit
  * when sleeping max_pause per page is not enough to keep the dirty pages under
  * control. For example, when the device is completely stalled due to some error
@@ -759,6 +756,9 @@ static void mdtc_calc_avail(struct dirty_throttle_control *mdtc,
  *
  * The wb's share of dirty limit will be adapting to its throughput and
  * bounded by the bdi->min_ratio and/or bdi->max_ratio parameters, if set.
+ *
+ * Return: @wb's dirty limit in pages. The term "dirty" in the context of
+ * dirty balancing includes all PG_dirty, PG_writeback and NFS unstable pages.
  */
 static unsigned long __wb_calc_thresh(struct dirty_throttle_control *dtc)
 {
@@ -1918,7 +1918,9 @@ EXPORT_SYMBOL(balance_dirty_pages_ratelimited);
  * @wb: bdi_writeback of interest
  *
  * Determines whether background writeback should keep writing @wb or it's
- * clean enough. Returns %true if writeback should continue.
+ * clean enough.
+ *
+ * Return: %true if writeback should continue.
  */
 bool wb_over_bg_thresh(struct bdi_writeback *wb)
 {
@@ -2147,6 +2149,8 @@ EXPORT_SYMBOL(tag_pages_for_writeback);
  * lock/page writeback access order inversion - we should only ever lock
  * multiple pages in ascending page->index order, and looping back to the start
  * of the file violates that rule and causes deadlocks.
+ *
+ * Return: %0 on success, negative error code otherwise
  */
 int write_cache_pages(struct address_space *mapping,
         struct writeback_control *wbc, writepage_t writepage,
@@ -2305,6 +2309,8 @@ static int __writepage(struct page *page, struct writeback_control *wbc,
  *
  * This is a library function, which implements the writepages()
  * address_space_operation.
+ *
+ * Return: %0 on success, negative error code otherwise
  */
 int generic_writepages(struct address_space *mapping,
         struct writeback_control *wbc)
@@ -2351,6 +2357,8 @@ int do_writepages(struct address_space *mapping, struct writeback_control *wbc)
  *
  * Note that the mapping's AS_EIO/AS_ENOSPC flags will be cleared when this
  * function returns.
+ *
+ * Return: %0 on success, negative error code otherwise
  */
 int write_one_page(struct page *page)
 {
......
@@ -4816,6 +4816,8 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
  * This function is also limited by MAX_ORDER.
  *
  * Memory allocated by this function must be released by free_pages_exact().
+ *
+ * Return: pointer to the allocated area or %NULL in case of error.
  */
 void *alloc_pages_exact(size_t size, gfp_t gfp_mask)
 {
@@ -4836,6 +4838,8 @@ EXPORT_SYMBOL(alloc_pages_exact);
  *
  * Like alloc_pages_exact(), but try to allocate on node nid first before falling
  * back.
+ *
+ * Return: pointer to the allocated area or %NULL in case of error.
  */
 void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask)
 {
@@ -4869,11 +4873,13 @@ EXPORT_SYMBOL(free_pages_exact);
  * nr_free_zone_pages - count number of pages beyond high watermark
  * @offset: The zone index of the highest zone
  *
- * nr_free_zone_pages() counts the number of counts pages which are beyond the
+ * nr_free_zone_pages() counts the number of pages which are beyond the
  * high watermark within all zones at or below a given zone index. For each
  * zone, the number of pages is calculated as:
  *
  *     nr_free_zone_pages = managed_pages - high_pages
+ *
+ * Return: number of pages beyond high watermark.
  */
 static unsigned long nr_free_zone_pages(int offset)
 {
@@ -4900,6 +4906,9 @@ static unsigned long nr_free_zone_pages(int offset)
  *
  * nr_free_buffer_pages() counts the number of pages which are beyond the high
  * watermark within ZONE_DMA and ZONE_NORMAL.
+ *
+ * Return: number of pages beyond high watermark within ZONE_DMA and
+ * ZONE_NORMAL.
  */
 unsigned long nr_free_buffer_pages(void)
 {
@@ -4912,6 +4921,8 @@ EXPORT_SYMBOL_GPL(nr_free_buffer_pages);
  *
  * nr_free_pagecache_pages() counts the number of pages which are beyond the
  * high watermark within all zones.
+ *
+ * Return: number of pages beyond high watermark within all zones.
  */
 unsigned long nr_free_pagecache_pages(void)
 {
@@ -5358,7 +5369,8 @@ static int node_load[MAX_NUMNODES];
  * from each node to each node in the system), and should also prefer nodes
  * with no CPUs, since presumably they'll have very little allocation pressure
  * on them otherwise.
- * It returns -1 if no node is found.
+ *
+ * Return: node id of the found node or %NUMA_NO_NODE if no node is found.
  */
 static int find_next_best_node(int node, nodemask_t *used_node_mask)
 {
@@ -6269,7 +6281,7 @@ unsigned long __init __absent_pages_in_range(int nid,
  * @start_pfn: The start PFN to start searching for holes
  * @end_pfn: The end PFN to stop searching for holes
  *
- * It returns the number of pages frames in memory holes within a range.
+ * Return: the number of pages frames in memory holes within a range.
  */
 unsigned long __init absent_pages_in_range(unsigned long start_pfn,
         unsigned long end_pfn)
@@ -6826,7 +6838,7 @@ void __init setup_nr_node_ids(void)
  * model has fine enough granularity to avoid incorrect mapping for the
  * populated node map.
  *
- * Returns the determined alignment in pfn's. 0 if there is no alignment
+ * Return: the determined alignment in pfn's. 0 if there is no alignment
  * requirement (single node).
  */
 unsigned long __init node_map_pfn_alignment(void)
@@ -6881,7 +6893,7 @@ static unsigned long __init find_min_pfn_for_node(int nid)
 /**
  * find_min_pfn_with_active_regions - Find the minimum PFN registered
  *
- * It returns the minimum PFN based on information provided via
+ * Return: the minimum PFN based on information provided via
  * memblock_set_node().
  */
 unsigned long __init find_min_pfn_with_active_regions(void)
@@ -8174,7 +8186,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
  * pageblocks in the range. Once isolated, the pageblocks should not
  * be modified by others.
  *
- * Returns zero on success or negative error code. On success all
+ * Return: zero on success or negative error code. On success all
  * pages which PFN is in [start, end) are allocated for the caller and
  * need to be freed with free_contig_range().
  */
......
@@ -81,6 +81,8 @@ static void read_cache_pages_invalidate_pages(struct address_space *mapping,
  * @data: private data for the callback routine.
  *
  * Hides the details of the LRU cache etc from the filesystems.
+ *
+ * Returns: %0 on success, error return by @filler otherwise
  */
 int read_cache_pages(struct address_space *mapping, struct list_head *pages,
         int (*filler)(void *, struct page *), void *data)
......
@@ -1727,6 +1727,8 @@ static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list)
  * This could be made much more intelligent. For now, try to avoid using
  * high order pages for slabs. When the gfp() functions are more friendly
  * towards high-order requests, this should be changed.
+ *
+ * Return: number of left-over bytes in a slab
  */
 static size_t calculate_slab_order(struct kmem_cache *cachep,
         size_t size, slab_flags_t flags)
@@ -1975,6 +1977,8 @@ static bool set_on_slab_cache(struct kmem_cache *cachep,
  * %SLAB_HWCACHE_ALIGN - Align the objects in this cache to a hardware
  * cacheline. This can be beneficial if you're counting cycles as closely
  * as davem.
+ *
+ * Return: a pointer to the created cache or %NULL in case of error
  */
 int __kmem_cache_create(struct kmem_cache *cachep, slab_flags_t flags)
 {
@@ -3542,6 +3546,8 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
  *
  * Allocate an object from this cache. The flags are only relevant
  * if the cache has no available objects.
+ *
+ * Return: pointer to the new object or %NULL in case of error
  */
 void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 {
@@ -3631,6 +3637,8 @@ EXPORT_SYMBOL(kmem_cache_alloc_trace);
  * node, which can improve the performance for cpu bound structures.
  *
  * Fallback to other node is possible if __GFP_THISNODE is not set.
+ *
+ * Return: pointer to the new object or %NULL in case of error
  */
 void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid)
 {
@@ -3699,6 +3707,8 @@ EXPORT_SYMBOL(__kmalloc_node_track_caller);
  * @size: how many bytes of memory are required.
  * @flags: the type of memory to allocate (see kmalloc).
  * @caller: function caller for debug tracking of the caller
+ *
+ * Return: pointer to the allocated memory or %NULL in case of error
  */
 static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
         unsigned long caller)
@@ -4164,6 +4174,8 @@ void slabinfo_show_stats(struct seq_file *m, struct kmem_cache *cachep)
  * @buffer: user buffer
  * @count: data length
  * @ppos: unused
+ *
+ * Return: %0 on success, negative error code otherwise.
  */
 ssize_t slabinfo_write(struct file *file, const char __user *buffer,
         size_t count, loff_t *ppos)
@@ -4457,6 +4469,8 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
  * The caller must guarantee that objp points to a valid object previously
  * allocated with either kmalloc() or kmem_cache_alloc(). The object
  * must not be freed during the duration of the call.
+ *
+ * Return: size of the actual memory used by @objp in bytes
  */
 size_t ksize(const void *objp)
 {
......
@@ -939,6 +939,8 @@ EXPORT_SYMBOL(kmem_cache_destroy);
  *
  * Releases as many slabs as possible for a cache.
  * To help debugging, a zero exit status indicates all slabs were released.
+ *
+ * Return: %0 if all slabs were released, non-zero otherwise
  */
 int kmem_cache_shrink(struct kmem_cache *cachep)
 {
@@ -1528,6 +1530,8 @@ static __always_inline void *__do_krealloc(const void *p, size_t new_size,
  * This function is like krealloc() except it never frees the originally
  * allocated buffer. Use this if you don't want to free the buffer immediately
  * like, for example, with RCU.
+ *
+ * Return: pointer to the allocated memory or %NULL in case of error
  */
 void *__krealloc(const void *p, size_t new_size, gfp_t flags)
 {
@@ -1549,6 +1553,8 @@ EXPORT_SYMBOL(__krealloc);
  * lesser of the new and old sizes. If @p is %NULL, krealloc()
  * behaves exactly like kmalloc(). If @new_size is 0 and @p is not a
  * %NULL pointer, the object pointed to is freed.
+ *
+ * Return: pointer to the allocated memory or %NULL in case of error
  */
 void *krealloc(const void *p, size_t new_size, gfp_t flags)
 {
......
@@ -539,6 +539,8 @@ EXPORT_SYMBOL(truncate_inode_pages_final);
  * invalidate_mapping_pages() will not block on IO activity. It will not
  * invalidate pages which are dirty, locked, under writeback or mapped into
  * pagetables.
+ *
+ * Return: the number of the pages that were invalidated
  */
 unsigned long invalidate_mapping_pages(struct address_space *mapping,
         pgoff_t start, pgoff_t end)
@@ -664,7 +666,7 @@ static int do_launder_page(struct address_space *mapping, struct page *page)
  * Any pages which are found to be mapped into pagetables are unmapped prior to
  * invalidation.
  *
- * Returns -EBUSY if any pages could not be invalidated.
+ * Return: -EBUSY if any pages could not be invalidated.
  */
 int invalidate_inode_pages2_range(struct address_space *mapping,
         pgoff_t start, pgoff_t end)
@@ -761,7 +763,7 @@ EXPORT_SYMBOL_GPL(invalidate_inode_pages2_range);
  * Any pages which are found to be mapped into pagetables are unmapped prior to
  * invalidation.
  *
- * Returns -EBUSY if any pages could not be invalidated.
+ * Return: -EBUSY if any pages could not be invalidated.
  */
 int invalidate_inode_pages2(struct address_space *mapping)
 {
......
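A short sketch of how the documented -EBUSY result is usually propagated; the wrapper and the inode argument are assumptions for illustration, not code from this patch.

#include <linux/fs.h>
#include <linux/pagemap.h>

/* Hypothetical wrapper: drop all cached pages of @inode, or fail with -EBUSY. */
static int drop_cached_pages(struct inode *inode)
{
        return invalidate_inode_pages2(inode->i_mapping);
}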
...@@ -36,6 +36,8 @@ EXPORT_SYMBOL(kfree_const); ...@@ -36,6 +36,8 @@ EXPORT_SYMBOL(kfree_const);
* kstrdup - allocate space for and copy an existing string * kstrdup - allocate space for and copy an existing string
* @s: the string to duplicate * @s: the string to duplicate
* @gfp: the GFP mask used in the kmalloc() call when allocating memory * @gfp: the GFP mask used in the kmalloc() call when allocating memory
*
* Return: newly allocated copy of @s or %NULL in case of error
*/ */
char *kstrdup(const char *s, gfp_t gfp) char *kstrdup(const char *s, gfp_t gfp)
{ {
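For illustration (names are hypothetical), the %NULL check that the new Return: line documents:

#include <linux/slab.h>
#include <linux/string.h>

/* Hypothetical helper: duplicate @label; the caller frees the copy with kfree(). */
static char *copy_label(const char *label)
{
        char *copy = kstrdup(label, GFP_KERNEL);

        if (!copy)
                return NULL;    /* allocation failed */
        return copy;
}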
...@@ -58,9 +60,10 @@ EXPORT_SYMBOL(kstrdup); ...@@ -58,9 +60,10 @@ EXPORT_SYMBOL(kstrdup);
* @s: the string to duplicate * @s: the string to duplicate
* @gfp: the GFP mask used in the kmalloc() call when allocating memory * @gfp: the GFP mask used in the kmalloc() call when allocating memory
* *
* Function returns source string if it is in .rodata section otherwise it * Note: Strings allocated by kstrdup_const should be freed by kfree_const.
* fallbacks to kstrdup. *
* Strings allocated by kstrdup_const should be freed by kfree_const. * Return: source string if it is in .rodata section, otherwise
* a newly allocated copy made with kstrdup().
*/ */
const char *kstrdup_const(const char *s, gfp_t gfp) const char *kstrdup_const(const char *s, gfp_t gfp)
{ {
...@@ -78,6 +81,8 @@ EXPORT_SYMBOL(kstrdup_const); ...@@ -78,6 +81,8 @@ EXPORT_SYMBOL(kstrdup_const);
* @gfp: the GFP mask used in the kmalloc() call when allocating memory * @gfp: the GFP mask used in the kmalloc() call when allocating memory
* *
* Note: Use kmemdup_nul() instead if the size is known exactly. * Note: Use kmemdup_nul() instead if the size is known exactly.
*
* Return: newly allocated copy of @s or %NULL in case of error
*/ */
char *kstrndup(const char *s, size_t max, gfp_t gfp) char *kstrndup(const char *s, size_t max, gfp_t gfp)
{ {
...@@ -103,6 +108,8 @@ EXPORT_SYMBOL(kstrndup); ...@@ -103,6 +108,8 @@ EXPORT_SYMBOL(kstrndup);
* @src: memory region to duplicate * @src: memory region to duplicate
* @len: memory region length * @len: memory region length
* @gfp: GFP mask to use * @gfp: GFP mask to use
*
* Return: newly allocated copy of @src or %NULL in case of error
*/ */
void *kmemdup(const void *src, size_t len, gfp_t gfp) void *kmemdup(const void *src, size_t len, gfp_t gfp)
{ {
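A hedged example of kmemdup() on a fixed-size object; the structure and helper are invented for the example and are not part of the patch.

#include <linux/slab.h>
#include <linux/string.h>
#include <linux/types.h>

struct demo_cfg {               /* hypothetical structure */
        u32 flags;
        u32 timeout_ms;
};

/* Return a private copy of @src, or NULL if the allocation failed. */
static struct demo_cfg *demo_cfg_clone(const struct demo_cfg *src)
{
        return kmemdup(src, sizeof(*src), GFP_KERNEL);
}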
...@@ -120,6 +127,9 @@ EXPORT_SYMBOL(kmemdup); ...@@ -120,6 +127,9 @@ EXPORT_SYMBOL(kmemdup);
* @s: The data to stringify * @s: The data to stringify
* @len: The size of the data * @len: The size of the data
* @gfp: the GFP mask used in the kmalloc() call when allocating memory * @gfp: the GFP mask used in the kmalloc() call when allocating memory
*
* Return: newly allocated copy of @s with NUL-termination or %NULL in
* case of error
*/ */
char *kmemdup_nul(const char *s, size_t len, gfp_t gfp) char *kmemdup_nul(const char *s, size_t len, gfp_t gfp)
{ {
...@@ -143,7 +153,7 @@ EXPORT_SYMBOL(kmemdup_nul); ...@@ -143,7 +153,7 @@ EXPORT_SYMBOL(kmemdup_nul);
* @src: source address in user space * @src: source address in user space
* @len: number of bytes to copy * @len: number of bytes to copy
* *
* Returns an ERR_PTR() on failure. Result is physically * Return: an ERR_PTR() on failure. Result is physically
* contiguous, to be freed by kfree(). * contiguous, to be freed by kfree().
*/ */
void *memdup_user(const void __user *src, size_t len) void *memdup_user(const void __user *src, size_t len)
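Because memdup_user() reports failure through ERR_PTR() rather than %NULL, callers test the result with IS_ERR(); a minimal sketch with an invented handler name:

#include <linux/string.h>
#include <linux/slab.h>
#include <linux/err.h>

/* Hypothetical handler: copy a user buffer into the kernel and release it. */
static int handle_user_blob(const void __user *uptr, size_t len)
{
        void *buf = memdup_user(uptr, len);

        if (IS_ERR(buf))
                return PTR_ERR(buf);    /* -ENOMEM or -EFAULT */

        /* ... use @buf ... */
        kfree(buf);                     /* physically contiguous, so kfree() */
        return 0;
}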
...@@ -169,7 +179,7 @@ EXPORT_SYMBOL(memdup_user); ...@@ -169,7 +179,7 @@ EXPORT_SYMBOL(memdup_user);
* @src: source address in user space * @src: source address in user space
* @len: number of bytes to copy * @len: number of bytes to copy
* *
* Returns an ERR_PTR() on failure. Result may not be * Return: an ERR_PTR() on failure. Result may not be
* physically contiguous. Use kvfree() to free. * physically contiguous. Use kvfree() to free.
*/ */
void *vmemdup_user(const void __user *src, size_t len) void *vmemdup_user(const void __user *src, size_t len)
...@@ -193,6 +203,8 @@ EXPORT_SYMBOL(vmemdup_user); ...@@ -193,6 +203,8 @@ EXPORT_SYMBOL(vmemdup_user);
* strndup_user - duplicate an existing string from user space * strndup_user - duplicate an existing string from user space
* @s: The string to duplicate * @s: The string to duplicate
* @n: Maximum number of bytes to copy, including the trailing NUL. * @n: Maximum number of bytes to copy, including the trailing NUL.
*
* Return: newly allocated copy of @s or %NULL in case of error
*/ */
char *strndup_user(const char __user *s, long n) char *strndup_user(const char __user *s, long n)
{ {
...@@ -224,7 +236,7 @@ EXPORT_SYMBOL(strndup_user); ...@@ -224,7 +236,7 @@ EXPORT_SYMBOL(strndup_user);
* @src: source address in user space * @src: source address in user space
* @len: number of bytes to copy * @len: number of bytes to copy
* *
* Returns an ERR_PTR() on failure. * Return: an ERR_PTR() on failure.
*/ */
void *memdup_user_nul(const void __user *src, size_t len) void *memdup_user_nul(const void __user *src, size_t len)
{ {
...@@ -310,10 +322,6 @@ EXPORT_SYMBOL_GPL(__get_user_pages_fast); ...@@ -310,10 +322,6 @@ EXPORT_SYMBOL_GPL(__get_user_pages_fast);
* @pages: array that receives pointers to the pages pinned. * @pages: array that receives pointers to the pages pinned.
* Should be at least nr_pages long. * Should be at least nr_pages long.
* *
* Returns number of pages pinned. This may be fewer than the number
* requested. If nr_pages is 0 or negative, returns 0. If no pages
* were pinned, returns -errno.
*
* get_user_pages_fast provides equivalent functionality to get_user_pages, * get_user_pages_fast provides equivalent functionality to get_user_pages,
* operating on current and current->mm, with force=0 and vma=NULL. However * operating on current and current->mm, with force=0 and vma=NULL. However
* unlike get_user_pages, it must be called without mmap_sem held. * unlike get_user_pages, it must be called without mmap_sem held.
...@@ -325,6 +333,10 @@ EXPORT_SYMBOL_GPL(__get_user_pages_fast); ...@@ -325,6 +333,10 @@ EXPORT_SYMBOL_GPL(__get_user_pages_fast);
* pages have to be faulted in, it may turn out to be slightly slower so * pages have to be faulted in, it may turn out to be slightly slower so
* callers need to carefully consider what to use. On many architectures, * callers need to carefully consider what to use. On many architectures,
* get_user_pages_fast simply falls back to get_user_pages. * get_user_pages_fast simply falls back to get_user_pages.
*
* Return: number of pages pinned. This may be fewer than the number
* requested. If nr_pages is 0 or negative, returns 0. If no pages
* were pinned, returns -errno.
*/ */
int __weak get_user_pages_fast(unsigned long start, int __weak get_user_pages_fast(unsigned long start,
int nr_pages, int write, struct page **pages) int nr_pages, int write, struct page **pages)
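A cautious sketch of the return convention described above: fewer pages than requested may be pinned, and every pinned page must be released with put_page(). The helper name and parameters are assumptions for the example.

#include <linux/mm.h>

/* Hypothetical helper: pin @nr user pages starting at @uaddr for writing. */
static int pin_user_buffer(unsigned long uaddr, struct page **pages, int nr)
{
        int pinned, i;

        pinned = get_user_pages_fast(uaddr, nr, 1, pages);
        if (pinned < 0)
                return pinned;          /* nothing pinned: -errno */

        /* ... access pages[0..pinned-1], which may be fewer than @nr ... */

        for (i = 0; i < pinned; i++)
                put_page(pages[i]);
        return pinned;
}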
...@@ -386,6 +398,8 @@ EXPORT_SYMBOL(vm_mmap); ...@@ -386,6 +398,8 @@ EXPORT_SYMBOL(vm_mmap);
* *
* Please note that with any gfp flags other than GFP_KERNEL the allocation will * Please note that with any gfp flags other than GFP_KERNEL the allocation will
* not fall back to vmalloc. * not fall back to vmalloc.
*
* Return: pointer to the allocated memory or %NULL in case of failure
*/ */
void *kvmalloc_node(size_t size, gfp_t flags, int node) void *kvmalloc_node(size_t size, gfp_t flags, int node)
{ {
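A hedged sketch of the kvmalloc_node()/kvfree() pairing this Return: line implies; the table type and entry count are illustrative assumptions.

#include <linux/mm.h>
#include <linux/types.h>
#include <linux/numa.h>

/* Hypothetical helper: allocate a possibly large table, vmalloc fallback allowed. */
static u64 *alloc_big_table(size_t entries)
{
        /* real code should also guard entries * sizeof(*table) against overflow */
        u64 *table = kvmalloc_node(entries * sizeof(*table), GFP_KERNEL,
                                   NUMA_NO_NODE);

        if (!table)
                return NULL;
        /* ... fill the table; release it later with kvfree(table) ... */
        return table;
}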
...@@ -729,7 +743,8 @@ int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin) ...@@ -729,7 +743,8 @@ int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin)
* @buffer: the buffer to copy to. * @buffer: the buffer to copy to.
* @buflen: the length of the buffer. Larger cmdline values are truncated * @buflen: the length of the buffer. Larger cmdline values are truncated
* to this length. * to this length.
* Returns the size of the cmdline field copied. Note that the copy does *
* Return: the size of the cmdline field copied. Note that the copy does
* not guarantee an ending NULL byte. * not guarantee an ending NULL byte.
*/ */
int get_cmdline(struct task_struct *task, char *buffer, int buflen) int get_cmdline(struct task_struct *task, char *buffer, int buflen)
......
...@@ -844,7 +844,7 @@ static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off) ...@@ -844,7 +844,7 @@ static void *vmap_block_vaddr(unsigned long va_start, unsigned long pages_off)
* @order: how many 2^order pages should be occupied in newly allocated block * @order: how many 2^order pages should be occupied in newly allocated block
* @gfp_mask: flags for the page level allocator * @gfp_mask: flags for the page level allocator
* *
* Returns: virtual address in a newly allocated block or ERR_PTR(-errno) * Return: virtual address in a newly allocated block or ERR_PTR(-errno)
*/ */
static void *new_vmap_block(unsigned int order, gfp_t gfp_mask) static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
{ {
...@@ -1433,6 +1433,8 @@ struct vm_struct *__get_vm_area_caller(unsigned long size, unsigned long flags, ...@@ -1433,6 +1433,8 @@ struct vm_struct *__get_vm_area_caller(unsigned long size, unsigned long flags,
* Search an area of @size in the kernel virtual mapping area, * Search an area of @size in the kernel virtual mapping area,
* and reserve it for our purposes. Returns the area descriptor * and reserve it for our purposes. Returns the area descriptor
* on success or %NULL on failure. * on success or %NULL on failure.
*
* Return: the area descriptor on success or %NULL on failure.
*/ */
struct vm_struct *get_vm_area(unsigned long size, unsigned long flags) struct vm_struct *get_vm_area(unsigned long size, unsigned long flags)
{ {
...@@ -1455,6 +1457,8 @@ struct vm_struct *get_vm_area_caller(unsigned long size, unsigned long flags, ...@@ -1455,6 +1457,8 @@ struct vm_struct *get_vm_area_caller(unsigned long size, unsigned long flags,
* Search for the kernel VM area starting at @addr, and return it. * Search for the kernel VM area starting at @addr, and return it.
* It is up to the caller to do all required locking to keep the returned * It is up to the caller to do all required locking to keep the returned
* pointer valid. * pointer valid.
*
* Return: pointer to the found area or %NULL on failure
*/ */
struct vm_struct *find_vm_area(const void *addr) struct vm_struct *find_vm_area(const void *addr)
{ {
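For illustration only, a sketch that checks the documented %NULL result of find_vm_area() before touching the descriptor; the reporting function is hypothetical.

#include <linux/vmalloc.h>
#include <linux/printk.h>

static void report_vmalloc_size(const void *addr)
{
        struct vm_struct *area = find_vm_area(addr);

        if (!area) {
                pr_info("%p is not a vmalloc'ed address\n", addr);
                return;
        }
        pr_info("vmalloc area at %p spans %lu bytes\n", addr, area->size);
}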
...@@ -1474,6 +1478,8 @@ struct vm_struct *find_vm_area(const void *addr) ...@@ -1474,6 +1478,8 @@ struct vm_struct *find_vm_area(const void *addr)
* Search for the kernel VM area starting at @addr, and remove it. * Search for the kernel VM area starting at @addr, and remove it.
* This function returns the found VM area, but using it is NOT safe * This function returns the found VM area, but using it is NOT safe
* on SMP machines, except for its size or flags. * on SMP machines, except for its size or flags.
*
* Return: pointer to the found area or %NULL on failure
*/ */
struct vm_struct *remove_vm_area(const void *addr) struct vm_struct *remove_vm_area(const void *addr)
{ {
...@@ -1636,6 +1642,8 @@ EXPORT_SYMBOL(vunmap); ...@@ -1636,6 +1642,8 @@ EXPORT_SYMBOL(vunmap);
* *
* Maps @count pages from @pages into contiguous kernel virtual * Maps @count pages from @pages into contiguous kernel virtual
* space. * space.
*
* Return: the address of the area or %NULL on failure
*/ */
void *vmap(struct page **pages, unsigned int count, void *vmap(struct page **pages, unsigned int count,
unsigned long flags, pgprot_t prot) unsigned long flags, pgprot_t prot)
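A hedged sketch of mapping a handful of freshly allocated pages with vmap(), checking the %NULL result documented above; the page count and helper name are assumptions.

#include <linux/vmalloc.h>
#include <linux/gfp.h>
#include <linux/mm.h>

#define DEMO_NR_PAGES 4         /* arbitrary, for the example only */

static void *map_demo_pages(struct page **pages)
{
        void *vaddr;
        int i;

        for (i = 0; i < DEMO_NR_PAGES; i++) {
                pages[i] = alloc_page(GFP_KERNEL);
                if (!pages[i])
                        goto free_pages;
        }

        /* vmap() returns the contiguous kernel virtual address or NULL */
        vaddr = vmap(pages, DEMO_NR_PAGES, VM_MAP, PAGE_KERNEL);
        if (vaddr)
                return vaddr;   /* undo later with vunmap() plus __free_page() */

free_pages:
        while (i--)
                __free_page(pages[i]);
        return NULL;
}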
...@@ -1739,6 +1747,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask, ...@@ -1739,6 +1747,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
* Allocate enough pages to cover @size from the page level * Allocate enough pages to cover @size from the page level
* allocator with @gfp_mask flags. Map them into contiguous * allocator with @gfp_mask flags. Map them into contiguous
* kernel virtual space, using a pagetable protection of @prot. * kernel virtual space, using a pagetable protection of @prot.
*
* Return: the address of the area or %NULL on failure
*/ */
void *__vmalloc_node_range(unsigned long size, unsigned long align, void *__vmalloc_node_range(unsigned long size, unsigned long align,
unsigned long start, unsigned long end, gfp_t gfp_mask, unsigned long start, unsigned long end, gfp_t gfp_mask,
...@@ -1806,6 +1816,8 @@ EXPORT_SYMBOL_GPL(__vmalloc_node_range); ...@@ -1806,6 +1816,8 @@ EXPORT_SYMBOL_GPL(__vmalloc_node_range);
* *
* Any use of gfp flags outside of GFP_KERNEL should be discussed * Any use of gfp flags outside of GFP_KERNEL should be discussed
* with mm people. * with mm people.
*
* Return: pointer to the allocated memory or %NULL on error
*/ */
static void *__vmalloc_node(unsigned long size, unsigned long align, static void *__vmalloc_node(unsigned long size, unsigned long align,
gfp_t gfp_mask, pgprot_t prot, gfp_t gfp_mask, pgprot_t prot,
...@@ -1845,6 +1857,8 @@ void *__vmalloc_node_flags_caller(unsigned long size, int node, gfp_t flags, ...@@ -1845,6 +1857,8 @@ void *__vmalloc_node_flags_caller(unsigned long size, int node, gfp_t flags,
* *
* For tight control over page level allocator and protection flags * For tight control over page level allocator and protection flags
* use __vmalloc() instead. * use __vmalloc() instead.
*
* Return: pointer to the allocated memory or %NULL on error
*/ */
void *vmalloc(unsigned long size) void *vmalloc(unsigned long size)
{ {
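The plain vmalloc()/vfree() pattern the Return: line describes, as a short sketch; the 1 MiB size is arbitrary and the helper is hypothetical.

#include <linux/vmalloc.h>
#include <linux/sizes.h>

static void *alloc_scratch(void)
{
        /* virtually contiguous, possibly physically scattered memory */
        void *scratch = vmalloc(SZ_1M);

        if (!scratch)
                return NULL;
        /* ... use it, then release it with vfree(scratch) ... */
        return scratch;
}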
...@@ -1863,6 +1877,8 @@ EXPORT_SYMBOL(vmalloc); ...@@ -1863,6 +1877,8 @@ EXPORT_SYMBOL(vmalloc);
* *
* For tight control over page level allocator and protection flags * For tight control over page level allocator and protection flags
* use __vmalloc() instead. * use __vmalloc() instead.
*
* Return: pointer to the allocated memory or %NULL on error
*/ */
void *vzalloc(unsigned long size) void *vzalloc(unsigned long size)
{ {
...@@ -1877,6 +1893,8 @@ EXPORT_SYMBOL(vzalloc); ...@@ -1877,6 +1893,8 @@ EXPORT_SYMBOL(vzalloc);
* *
* The resulting memory area is zeroed so it can be mapped to userspace * The resulting memory area is zeroed so it can be mapped to userspace
* without leaking data. * without leaking data.
*
* Return: pointer to the allocated memory or %NULL on error
*/ */
void *vmalloc_user(unsigned long size) void *vmalloc_user(unsigned long size)
{ {
...@@ -1897,6 +1915,8 @@ EXPORT_SYMBOL(vmalloc_user); ...@@ -1897,6 +1915,8 @@ EXPORT_SYMBOL(vmalloc_user);
* *
* For tight control over page level allocator and protection flags * For tight control over page level allocator and protection flags
* use __vmalloc() instead. * use __vmalloc() instead.
*
* Return: pointer to the allocated memory or %NULL on error
*/ */
void *vmalloc_node(unsigned long size, int node) void *vmalloc_node(unsigned long size, int node)
{ {
...@@ -1916,6 +1936,8 @@ EXPORT_SYMBOL(vmalloc_node); ...@@ -1916,6 +1936,8 @@ EXPORT_SYMBOL(vmalloc_node);
* *
* For tight control over page level allocator and protection flags * For tight control over page level allocator and protection flags
* use __vmalloc_node() instead. * use __vmalloc_node() instead.
*
* Return: pointer to the allocated memory or %NULL on error
*/ */
void *vzalloc_node(unsigned long size, int node) void *vzalloc_node(unsigned long size, int node)
{ {
...@@ -1934,6 +1956,8 @@ EXPORT_SYMBOL(vzalloc_node); ...@@ -1934,6 +1956,8 @@ EXPORT_SYMBOL(vzalloc_node);
* *
* For tight control over page level allocator and protection flags * For tight control over page level allocator and protection flags
* use __vmalloc() instead. * use __vmalloc() instead.
*
* Return: pointer to the allocated memory or %NULL on error
*/ */
void *vmalloc_exec(unsigned long size) void *vmalloc_exec(unsigned long size)
{ {
...@@ -1959,6 +1983,8 @@ void *vmalloc_exec(unsigned long size) ...@@ -1959,6 +1983,8 @@ void *vmalloc_exec(unsigned long size)
* *
* Allocate enough 32bit PA addressable pages to cover @size from the * Allocate enough 32bit PA addressable pages to cover @size from the
* page level allocator and map them into contiguous kernel virtual space. * page level allocator and map them into contiguous kernel virtual space.
*
* Return: pointer to the allocated memory or %NULL on error
*/ */
void *vmalloc_32(unsigned long size) void *vmalloc_32(unsigned long size)
{ {
...@@ -1973,6 +1999,8 @@ EXPORT_SYMBOL(vmalloc_32); ...@@ -1973,6 +1999,8 @@ EXPORT_SYMBOL(vmalloc_32);
* *
* The resulting memory area is 32bit addressable and zeroed so it can be * The resulting memory area is 32bit addressable and zeroed so it can be
* mapped to userspace without leaking data. * mapped to userspace without leaking data.
*
* Return: pointer to the allocated memory or %NULL on error
*/ */
void *vmalloc_32_user(unsigned long size) void *vmalloc_32_user(unsigned long size)
{ {
...@@ -2070,10 +2098,6 @@ static int aligned_vwrite(char *buf, char *addr, unsigned long count) ...@@ -2070,10 +2098,6 @@ static int aligned_vwrite(char *buf, char *addr, unsigned long count)
* @addr: vm address. * @addr: vm address.
* @count: number of bytes to be read. * @count: number of bytes to be read.
* *
* Returns # of bytes which addr and buf should be increased.
* (same number to @count). Returns 0 if [addr...addr+count) doesn't
* includes any intersect with alive vmalloc area.
*
* This function checks that addr is a valid vmalloc'ed area, and * This function checks that addr is a valid vmalloc'ed area, and
* copies data from that area to a given buffer. If the given memory range * copies data from that area to a given buffer. If the given memory range
* of [addr...addr+count) includes some valid address, data is copied to * of [addr...addr+count) includes some valid address, data is copied to
...@@ -2087,6 +2111,10 @@ static int aligned_vwrite(char *buf, char *addr, unsigned long count) ...@@ -2087,6 +2111,10 @@ static int aligned_vwrite(char *buf, char *addr, unsigned long count)
* should know vmalloc() area is valid and can use memcpy(). * should know vmalloc() area is valid and can use memcpy().
* This is for routines which have to access vmalloc area without * This is for routines which have to access vmalloc area without
* any information, such as /dev/kmem. * any information, such as /dev/kmem.
*
* Return: number of bytes for which addr and buf should be increased
* (same number as @count) or %0 if [addr...addr+count) doesn't
* include any intersection with valid vmalloc area
*/ */
long vread(char *buf, char *addr, unsigned long count) long vread(char *buf, char *addr, unsigned long count)
{ {
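A hedged sketch of consuming vread()'s return value as documented above (0 means the range has no overlap with a live vmalloc area); the helper name is an assumption.

#include <linux/vmalloc.h>
#include <linux/slab.h>

/* Hypothetical helper: snapshot @count bytes from the vmalloc address @src. */
static char *read_vmalloc_area(char *src, unsigned long count)
{
        char *buf = kmalloc(count, GFP_KERNEL);

        if (!buf)
                return NULL;

        if (vread(buf, src, count) == 0) {      /* no live vmalloc area there */
                kfree(buf);
                return NULL;
        }
        return buf;
}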
...@@ -2149,11 +2177,6 @@ long vread(char *buf, char *addr, unsigned long count) ...@@ -2149,11 +2177,6 @@ long vread(char *buf, char *addr, unsigned long count)
* @addr: vm address. * @addr: vm address.
* @count: number of bytes to be read. * @count: number of bytes to be read.
* *
* Returns # of bytes which addr and buf should be incresed.
* (same number to @count).
* If [addr...addr+count) doesn't includes any intersect with valid
* vmalloc area, returns 0.
*
* This function checks that addr is a valid vmalloc'ed area, and * This function checks that addr is a valid vmalloc'ed area, and
* copies data from a buffer to the given addr. If the specified range of * copies data from a buffer to the given addr. If the specified range of
* [addr...addr+count) includes some valid address, data is copied from * [addr...addr+count) includes some valid address, data is copied from
...@@ -2167,6 +2190,10 @@ long vread(char *buf, char *addr, unsigned long count) ...@@ -2167,6 +2190,10 @@ long vread(char *buf, char *addr, unsigned long count)
* should know vmalloc() area is valid and can use memcpy(). * should know vmalloc() area is valid and can use memcpy().
* This is for routines which have to access vmalloc area without * This is for routines which have to access vmalloc area without
* any information, such as /dev/kmem. * any information, such as /dev/kmem.
*
* Return: number of bytes for which addr and buf should be
* increased (same number as @count) or %0 if [addr...addr+count)
* doesn't include any intersection with valid vmalloc area
*/ */
long vwrite(char *buf, char *addr, unsigned long count) long vwrite(char *buf, char *addr, unsigned long count)
{ {
......