Commit 3f502622 authored by Shinya Kuribayashi, committed by Artem Bityutskiy

UBI: fix s/then/than/ typos

Signed-off-by: Shinya Kuribayashi <shinya.kuribayashi.px@renesas.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
parent af7ad7a0
@@ -27,7 +27,7 @@ config MTD_UBI_WL_THRESHOLD
 	  The default value should be OK for SLC NAND flashes, NOR flashes and
 	  other flashes which have eraseblock life-cycle 100000 or more.
 	  However, in case of MLC NAND flashes which typically have eraseblock
-	  life-cycle less then 10000, the threshold should be lessened (e.g.,
+	  life-cycle less than 10000, the threshold should be lessened (e.g.,
 	  to 128 or 256, although it does not have to be power of 2).

 config MTD_UBI_BEB_RESERVE
...
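MTD_UBI_WL_THRESHOLD is an ordinary integer Kconfig option, so the lowered threshold the help text suggests for MLC NAND would simply be set in the kernel configuration. A hypothetical .config fragment (the values are purely illustrative):

CONFIG_MTD_UBI=y
CONFIG_MTD_UBI_WL_THRESHOLD=256
CONFIG_MTD_UBI_BEB_RESERVE=2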
@@ -65,7 +65,7 @@
  *
  * A: because when writing a sub-page, MTD still writes a full 2K page but the
  * bytes which are no relevant to the sub-page are 0xFF. So, basically, writing
- * 4x512 sub-pages is 4 times slower then writing one 2KiB NAND page. Thus, we
+ * 4x512 sub-pages is 4 times slower than writing one 2KiB NAND page. Thus, we
  * prefer to use sub-pages only for EV and VID headers.
  *
  * As it was noted above, the VID header may start at a non-aligned offset.
...
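The 4x factor in the comment above follows directly from the page-program count: since each sub-page write still programs the full 2KiB page, writing 2KiB as four separate 512-byte sub-page writes costs four program operations instead of one; with an illustrative program time of ~200 µs per operation that is roughly 800 µs versus 200 µs for a single full-page write. Hence UBI restricts sub-page writes to the small EC and VID headers.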
@@ -488,7 +488,7 @@ EXPORT_SYMBOL_GPL(ubi_leb_write);
  *
  * This function changes the contents of a logical eraseblock atomically. @buf
  * has to contain new logical eraseblock data, and @len - the length of the
- * data, which has to be aligned. The length may be shorter then the logical
+ * data, which has to be aligned. The length may be shorter than the logical
  * eraseblock size, ant the logical eraseblock may be appended to more times
  * later on. This function guarantees that in case of an unclean reboot the old
  * contents is preserved. Returns zero in case of success and a negative error
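As a rough illustration of the in-kernel API documented above (not part of this commit), an atomic LEB update could look like the sketch below; exact prototypes differ between kernel versions (for instance, the dtype argument was dropped in later kernels), and the helper name is made up:

#include <linux/err.h>
#include <linux/mtd/ubi.h>

/* Hypothetical helper: atomically replace the contents of LEB @lnum. */
static int example_atomic_update(int ubi_num, int vol_id, int lnum,
				 const void *buf, int len)
{
	struct ubi_volume_desc *desc;
	int err;

	desc = ubi_open_volume(ubi_num, vol_id, UBI_READWRITE);
	if (IS_ERR(desc))
		return PTR_ERR(desc);

	/* @len must be aligned, as the comment above requires */
	err = ubi_leb_change(desc, lnum, buf, len, UBI_UNKNOWN);

	ubi_close_volume(desc);
	return err;
}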
@@ -571,7 +571,7 @@ EXPORT_SYMBOL_GPL(ubi_leb_erase);
  *
  * This function un-maps logical eraseblock @lnum and schedules the
  * corresponding physical eraseblock for erasure, so that it will eventually be
- * physically erased in background. This operation is much faster then the
+ * physically erased in background. This operation is much faster than the
  * erase operation.
  *
  * Unlike erase, the un-map operation does not guarantee that the logical
@@ -590,7 +590,7 @@ EXPORT_SYMBOL_GPL(ubi_leb_erase);
  *
  * The main and obvious use-case of this function is when the contents of a
  * logical eraseblock has to be re-written. Then it is much more efficient to
- * first un-map it, then write new data, rather then first erase it, then write
+ * first un-map it, then write new data, rather than first erase it, then write
  * new data. Note, once new data has been written to the logical eraseblock,
  * UBI guarantees that the old contents has gone forever. In other words, if an
  * unclean reboot happens after the logical eraseblock has been un-mapped and
...
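The two comments above describe the un-map-then-write rewrite pattern; continuing the previous sketch (again illustrative, with the same caveats about version-specific prototypes), it would look roughly like this:

#include <linux/mtd/ubi.h>

/* Hypothetical helper: rewrite LEB @lnum with @len bytes from @buf. */
static int example_rewrite_leb(struct ubi_volume_desc *desc, int lnum,
			       const void *buf, int len)
{
	int err;

	/* Cheap: detaches the old PEB, which is erased later in background */
	err = ubi_leb_unmap(desc, lnum);
	if (err)
		return err;

	/* Writing maps a fresh PEB; the old contents is guaranteed gone */
	return ubi_leb_write(desc, lnum, buf, 0, len, UBI_UNKNOWN);
}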
@@ -231,7 +231,7 @@ static struct ubi_scan_volume *add_volume(struct ubi_scan_info *si, int vol_id,
  * case of success this function returns a positive value, in case of failure, a
  * negative error code is returned. The success return codes use the following
  * bits:
- * o bit 0 is cleared: the first PEB (described by @seb) is newer then the
+ * o bit 0 is cleared: the first PEB (described by @seb) is newer than the
  *   second PEB (described by @pnum and @vid_hdr);
  * o bit 0 is set: the second PEB is newer;
  * o bit 1 is cleared: no bit-flips were detected in the newer LEB;
@@ -452,7 +452,7 @@ int ubi_scan_add_used(struct ubi_device *ubi, struct ubi_scan_info *si,
 		if (cmp_res & 1) {
 			/*
-			 * This logical eraseblock is newer then the one
+			 * This logical eraseblock is newer than the one
 			 * found earlier.
 			 */
 			err = validate_vid_hdr(vid_hdr, sv, pnum);
...
@@ -350,7 +350,7 @@ static void prot_queue_add(struct ubi_device *ubi, struct ubi_wl_entry *e)
  * @max: highest possible erase counter
  *
  * This function looks for a wear leveling entry with erase counter closest to
- * @max and less then @max.
+ * @max and less than @max.
  */
 static struct ubi_wl_entry *find_wl_entry(struct rb_root *root, int max)
 {
...
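The search described in the comment above is a bounded rbtree walk. A simplified sketch, which is not the in-tree implementation and only assumes the ->ec counter and ->u.rb node of struct ubi_wl_entry:

#include <linux/rbtree.h>
#include "ubi.h"	/* struct ubi_wl_entry: ->ec erase counter, ->u.rb tree node (assumed) */

/* Hypothetical variant: entry with the largest erase counter still below @max. */
static struct ubi_wl_entry *find_entry_below(struct rb_root *root, int max)
{
	struct rb_node *p = root->rb_node;
	struct ubi_wl_entry *best = NULL;

	while (p) {
		struct ubi_wl_entry *e = rb_entry(p, struct ubi_wl_entry, u.rb);

		if (e->ec >= max) {
			p = p->rb_left;		/* counter too high, look at smaller ones */
		} else {
			best = e;		/* below @max; move right to get closer to it */
			p = p->rb_right;
		}
	}
	return best;				/* NULL if every entry has ec >= max */
}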