Commit a8dfb61d authored by Richard Weinberger

Merge tag 'nand/for-5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux into mtd/next

Raw NAND core changes:
* Stop using nand_release(); all drivers patched accordingly.
* Give more information about the ECC weakness when it does not match the
  chip's requirement.
* MAINTAINERS updates.
* Support emulated SLC mode on MLC NANDs.
* Support "constrained" controllers, adapt the core and ONFI/JEDEC
  table parsing and Micron's code.
* Take check_only into account.
* Add an invalid ECC mode to distinguish it from valid ones.
* Return an enum from of_get_nand_ecc_algo().
* Drop OOB_FIRST placement scheme.
* Introduce nand_extract_bits().
* Ensure a consistent bitflips numbering.
* BCH lib:
  - Allow easy bit swapping.
  - Rework the exported function names a little.
* Fix nand_gpio_waitrdy().
* Propagate CS selection to sub-operations.
* Add a NAND_NO_BBM_QUIRK flag.
* Give the possibility to verify that a read operation is supported.
* Add a helper to check supported operations.
* Avoid indirect access to ->data_buf().
* Rename the use_bufpoi variables.
* Fix comments about the use of bufpoi.
* Rename a NAND chip option.
* Reorder the nand_chip->options flags.
* Translate obscure bitfields into readable macros.
* Timings:
  - Fix default values.
  - Add mode information to the timings structure.

Raw NAND controller driver changes:
* Fix many error paths.
* Arasan
  - New driver
* Au1550nd:
  - Various cleanups
  - Migration to ->exec_op()
* brcmnand:
  - Misc cleanup.
  - Support v2.1-v2.2 controllers.
  - Remove the unused include of <linux/version.h>.
  - Correctly verify erased pages.
  - Fix Hamming OOB layout.
* Cadence
  - Make cadence_nand_attach_chip static.
* Cafe:
  - Set the NAND_NO_BBM_QUIRK flag
* cmx270:
  - Remove this controller driver.
* cs553x:
  - Misc cleanup
  - Migration to ->exec_op()
* Davinci:
  - Misc cleanup.
  - Migration to ->exec_op()
* Denali:
  - Add more delays before latching incoming data
* Diskonchip:
  - Misc cleanup
  - Migration to ->exec_op()
* Fsmc:
  - Change to non-atomic bit operations.
* GPMI:
  - Use nand_extract_bits()
  - Fix runtime PM imbalance.
* Ingenic:
  - Migration to exec_op()
  - Fix the RB gpio active-high property on qi,lb60
  - Make qi_lb60_ooblayout_ops static.
* Marvell:
  - Misc cleanup and small fixes
* Nandsim:
  - Fix the error paths, driver wide.
* Omap_elm:
  - Fix runtime PM imbalance.
* STM32_FMC2:
  - Misc cleanups (error cases, comments, timeout values, cosmetic
    changes).
parents 3d77e6a8 86f2b225
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/mtd/arasan,nand-controller.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Arasan NAND Flash Controller with ONFI 3.1 support device tree bindings
allOf:
  - $ref: "nand-controller.yaml"

maintainers:
  - Naga Sureshkumar Relli <naga.sureshkumar.relli@xilinx.com>

properties:
  compatible:
    oneOf:
      - items:
          - enum:
              - xlnx,zynqmp-nand-controller
          - enum:
              - arasan,nfc-v3p10

  reg:
    maxItems: 1

  clocks:
    items:
      - description: Controller clock
      - description: NAND bus clock

  clock-names:
    items:
      - const: controller
      - const: bus

  interrupts:
    maxItems: 1

  "#address-cells": true
  "#size-cells": true

required:
  - compatible
  - reg
  - clocks
  - clock-names
  - interrupts

additionalProperties: true

examples:
  - |
    nfc: nand-controller@ff100000 {
        compatible = "xlnx,zynqmp-nand-controller", "arasan,nfc-v3p10";
        reg = <0x0 0xff100000 0x0 0x1000>;
        clock-names = "controller", "bus";
        clocks = <&clk200>, <&clk100>;
        interrupt-parent = <&gic>;
        interrupts = <0 14 4>;
        #address-cells = <1>;
        #size-cells = <0>;
    };
...@@ -20,6 +20,8 @@ Required properties: ...@@ -20,6 +20,8 @@ Required properties:
"brcm,brcmnand" and an appropriate version compatibility "brcm,brcmnand" and an appropriate version compatibility
string, like "brcm,brcmnand-v7.0" string, like "brcm,brcmnand-v7.0"
Possible values: Possible values:
brcm,brcmnand-v2.1
brcm,brcmnand-v2.2
brcm,brcmnand-v4.0 brcm,brcmnand-v4.0
brcm,brcmnand-v5.0 brcm,brcmnand-v5.0
brcm,brcmnand-v6.0 brcm,brcmnand-v6.0
......
...@@ -61,6 +61,9 @@ Optional properties: ...@@ -61,6 +61,9 @@ Optional properties:
clobbered. clobbered.
- lock : Do not unlock the partition at initialization time (not supported on - lock : Do not unlock the partition at initialization time (not supported on
all devices) all devices)
- slc-mode: This parameter, if present, allows one to emulate SLC mode on a
partition attached to an MLC NAND thus making this partition immune to
paired-pages corruptions
Examples: Examples:
......
...@@ -276,8 +276,10 @@ unregisters the partitions in the MTD layer. ...@@ -276,8 +276,10 @@ unregisters the partitions in the MTD layer.
#ifdef MODULE #ifdef MODULE
static void __exit board_cleanup (void) static void __exit board_cleanup (void)
{ {
/* Release resources, unregister device */ /* Unregister device */
nand_release (mtd_to_nand(board_mtd)); WARN_ON(mtd_device_unregister(board_mtd));
/* Release resources */
nand_cleanup(mtd_to_nand(board_mtd));
/* unmap physical address */ /* unmap physical address */
iounmap(baseaddr); iounmap(baseaddr);
......
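The documentation hunk above captures the driver-wide conversion away from
nand_release(): unregister the MTD device first, then free the chip with
nand_cleanup(). A minimal sketch of the resulting remove() pattern; the
my_nand_remove() name and the drvdata layout are hypothetical, not taken
from any driver in this series:

#include <linux/mtd/mtd.h>
#include <linux/mtd/rawnand.h>
#include <linux/platform_device.h>

static int my_nand_remove(struct platform_device *pdev)
{
        /* Hypothetical driver: drvdata points at the nand_chip. */
        struct nand_chip *chip = platform_get_drvdata(pdev);
        int ret;

        /* Unregister the MTD device first; this step can fail. */
        ret = mtd_device_unregister(nand_to_mtd(chip));
        WARN_ON(ret);

        /* Then free the resources held by the NAND chip. */
        nand_cleanup(chip);

        return 0;
}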
...@@ -1284,6 +1284,13 @@ S: Supported ...@@ -1284,6 +1284,13 @@ S: Supported
W: http://www.aquantia.com W: http://www.aquantia.com
F: drivers/net/ethernet/aquantia/atlantic/aq_ptp* F: drivers/net/ethernet/aquantia/atlantic/aq_ptp*
ARASAN NAND CONTROLLER DRIVER
M: Naga Sureshkumar Relli <nagasure@xilinx.com>
L: linux-mtd@lists.infradead.org
S: Maintained
F: Documentation/devicetree/bindings/mtd/arasan,nand-controller.yaml
F: drivers/mtd/nand/raw/arasan-nand-controller.c
ARC FRAMEBUFFER DRIVER ARC FRAMEBUFFER DRIVER
M: Jaya Kumar <jayalk@intworks.biz> M: Jaya Kumar <jayalk@intworks.biz>
S: Maintained S: Maintained
...@@ -3741,9 +3748,8 @@ F: Documentation/devicetree/bindings/media/cdns,*.txt ...@@ -3741,9 +3748,8 @@ F: Documentation/devicetree/bindings/media/cdns,*.txt
F: drivers/media/platform/cadence/cdns-csi2* F: drivers/media/platform/cadence/cdns-csi2*
CADENCE NAND DRIVER CADENCE NAND DRIVER
M: Piotr Sroka <piotrs@cadence.com>
L: linux-mtd@lists.infradead.org L: linux-mtd@lists.infradead.org
S: Maintained S: Orphan
F: Documentation/devicetree/bindings/mtd/cadence-nand-controller.txt F: Documentation/devicetree/bindings/mtd/cadence-nand-controller.txt
F: drivers/mtd/nand/raw/cadence-nand-controller.c F: drivers/mtd/nand/raw/cadence-nand-controller.c
...@@ -10727,9 +10733,8 @@ F: Documentation/devicetree/bindings/i2c/i2c-mt7621.txt ...@@ -10727,9 +10733,8 @@ F: Documentation/devicetree/bindings/i2c/i2c-mt7621.txt
F: drivers/i2c/busses/i2c-mt7621.c F: drivers/i2c/busses/i2c-mt7621.c
MEDIATEK NAND CONTROLLER DRIVER MEDIATEK NAND CONTROLLER DRIVER
M: Xiaolei Li <xiaolei.li@mediatek.com>
L: linux-mtd@lists.infradead.org L: linux-mtd@lists.infradead.org
S: Maintained S: Orphan
F: Documentation/devicetree/bindings/mtd/mtk-nand.txt F: Documentation/devicetree/bindings/mtd/mtk-nand.txt
F: drivers/mtd/nand/raw/mtk_* F: drivers/mtd/nand/raw/mtk_*
......
...@@ -647,7 +647,7 @@ static int doc_ecc_bch_fix_data(struct docg3 *docg3, void *buf, u8 *hwecc) ...@@ -647,7 +647,7 @@ static int doc_ecc_bch_fix_data(struct docg3 *docg3, void *buf, u8 *hwecc)
for (i = 0; i < DOC_ECC_BCH_SIZE; i++) for (i = 0; i < DOC_ECC_BCH_SIZE; i++)
ecc[i] = bitrev8(hwecc[i]); ecc[i] = bitrev8(hwecc[i]);
numerrs = decode_bch(docg3->cascade->bch, NULL, numerrs = bch_decode(docg3->cascade->bch, NULL,
DOC_ECC_BCH_COVERED_BYTES, DOC_ECC_BCH_COVERED_BYTES,
NULL, ecc, NULL, errorpos); NULL, ecc, NULL, errorpos);
BUG_ON(numerrs == -EINVAL); BUG_ON(numerrs == -EINVAL);
...@@ -1984,8 +1984,8 @@ static int __init docg3_probe(struct platform_device *pdev) ...@@ -1984,8 +1984,8 @@ static int __init docg3_probe(struct platform_device *pdev)
return ret; return ret;
cascade->base = base; cascade->base = base;
mutex_init(&cascade->lock); mutex_init(&cascade->lock);
cascade->bch = init_bch(DOC_ECC_BCH_M, DOC_ECC_BCH_T, cascade->bch = bch_init(DOC_ECC_BCH_M, DOC_ECC_BCH_T,
DOC_ECC_BCH_PRIMPOLY); DOC_ECC_BCH_PRIMPOLY, false);
if (!cascade->bch) if (!cascade->bch)
return ret; return ret;
...@@ -2021,7 +2021,7 @@ static int __init docg3_probe(struct platform_device *pdev) ...@@ -2021,7 +2021,7 @@ static int __init docg3_probe(struct platform_device *pdev)
ret = -ENODEV; ret = -ENODEV;
dev_info(dev, "No supported DiskOnChip found\n"); dev_info(dev, "No supported DiskOnChip found\n");
err_probe: err_probe:
free_bch(cascade->bch); bch_free(cascade->bch);
for (floor = 0; floor < DOC_MAX_NBFLOORS; floor++) for (floor = 0; floor < DOC_MAX_NBFLOORS; floor++)
if (cascade->floors[floor]) if (cascade->floors[floor])
doc_release_device(cascade->floors[floor]); doc_release_device(cascade->floors[floor]);
...@@ -2045,7 +2045,7 @@ static int docg3_release(struct platform_device *pdev) ...@@ -2045,7 +2045,7 @@ static int docg3_release(struct platform_device *pdev)
if (cascade->floors[floor]) if (cascade->floors[floor])
doc_release_device(cascade->floors[floor]); doc_release_device(cascade->floors[floor]);
free_bch(docg3->cascade->bch); bch_free(docg3->cascade->bch);
return 0; return 0;
} }
......
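The docg3 hunks above follow the BCH library rename (init_bch/decode_bch/
free_bch become bch_init/bch_decode/bch_free) and the new swap_bits argument
to bch_init(). A minimal sketch of the renamed API, assuming the post-rename
signatures in <linux/bch.h>; example_bch_correct() and its parameters are
made up for illustration:

#include <linux/bch.h>
#include <linux/bitops.h>
#include <linux/errno.h>

/* Correct up to 4 bit errors over GF(2^14), like the docg3 code above. */
static int example_bch_correct(u8 *data, unsigned int len, u8 *read_ecc)
{
        struct bch_control *bch;
        unsigned int errloc[4];
        int nerr, i;

        /* swap_bits=false keeps the historical bit ordering. */
        bch = bch_init(14, 4, 0, false);
        if (!bch)
                return -EINVAL;

        /* Syndromes computed from data + received ECC (calc_ecc/syn NULL). */
        nerr = bch_decode(bch, data, len, read_ecc, NULL, NULL, errloc);

        /* Flip the reported bit errors that fall inside the data area. */
        for (i = 0; i < nerr; i++)
                if (errloc[i] < len * 8)
                        data[errloc[i] / 8] ^= BIT(errloc[i] % 8);

        bch_free(bch);
        return nerr;    /* corrected bit count, or < 0 if uncorrectable */
}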
...@@ -617,6 +617,19 @@ int add_mtd_device(struct mtd_info *mtd) ...@@ -617,6 +617,19 @@ int add_mtd_device(struct mtd_info *mtd)
!(mtd->flags & MTD_NO_ERASE))) !(mtd->flags & MTD_NO_ERASE)))
return -EINVAL; return -EINVAL;
/*
* MTD_SLC_ON_MLC_EMULATION can only be set on partitions, when the
* master is an MLC NAND and has a proper pairing scheme defined.
* We also reject masters that implement ->_writev() for now, because
* NAND controller drivers don't implement this hook, and adding the
* SLC -> MLC address/length conversion to this path is useless if we
* don't have a user.
*/
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION &&
(!mtd_is_partition(mtd) || master->type != MTD_MLCNANDFLASH ||
!master->pairing || master->_writev))
return -EINVAL;
mutex_lock(&mtd_table_mutex); mutex_lock(&mtd_table_mutex);
i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL); i = idr_alloc(&mtd_idr, mtd, 0, 0, GFP_KERNEL);
...@@ -632,6 +645,14 @@ int add_mtd_device(struct mtd_info *mtd) ...@@ -632,6 +645,14 @@ int add_mtd_device(struct mtd_info *mtd)
if (mtd->bitflip_threshold == 0) if (mtd->bitflip_threshold == 0)
mtd->bitflip_threshold = mtd->ecc_strength; mtd->bitflip_threshold = mtd->ecc_strength;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
int ngroups = mtd_pairing_groups(master);
mtd->erasesize /= ngroups;
mtd->size = (u64)mtd_div_by_eb(mtd->size, master) *
mtd->erasesize;
}
if (is_power_of_2(mtd->erasesize)) if (is_power_of_2(mtd->erasesize))
mtd->erasesize_shift = ffs(mtd->erasesize) - 1; mtd->erasesize_shift = ffs(mtd->erasesize) - 1;
else else
...@@ -1074,9 +1095,11 @@ int mtd_erase(struct mtd_info *mtd, struct erase_info *instr) ...@@ -1074,9 +1095,11 @@ int mtd_erase(struct mtd_info *mtd, struct erase_info *instr)
{ {
struct mtd_info *master = mtd_get_master(mtd); struct mtd_info *master = mtd_get_master(mtd);
u64 mst_ofs = mtd_get_master_ofs(mtd, 0); u64 mst_ofs = mtd_get_master_ofs(mtd, 0);
struct erase_info adjinstr;
int ret; int ret;
instr->fail_addr = MTD_FAIL_ADDR_UNKNOWN; instr->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
adjinstr = *instr;
if (!mtd->erasesize || !master->_erase) if (!mtd->erasesize || !master->_erase)
return -ENOTSUPP; return -ENOTSUPP;
...@@ -1091,12 +1114,27 @@ int mtd_erase(struct mtd_info *mtd, struct erase_info *instr) ...@@ -1091,12 +1114,27 @@ int mtd_erase(struct mtd_info *mtd, struct erase_info *instr)
ledtrig_mtd_activity(); ledtrig_mtd_activity();
instr->addr += mst_ofs; if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
ret = master->_erase(master, instr); adjinstr.addr = (loff_t)mtd_div_by_eb(instr->addr, mtd) *
if (instr->fail_addr != MTD_FAIL_ADDR_UNKNOWN) master->erasesize;
instr->fail_addr -= mst_ofs; adjinstr.len = ((u64)mtd_div_by_eb(instr->addr + instr->len, mtd) *
master->erasesize) -
adjinstr.addr;
}
adjinstr.addr += mst_ofs;
ret = master->_erase(master, &adjinstr);
if (adjinstr.fail_addr != MTD_FAIL_ADDR_UNKNOWN) {
instr->fail_addr = adjinstr.fail_addr - mst_ofs;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
instr->fail_addr = mtd_div_by_eb(instr->fail_addr,
master);
instr->fail_addr *= mtd->erasesize;
}
}
instr->addr -= mst_ofs;
return ret; return ret;
} }
EXPORT_SYMBOL_GPL(mtd_erase); EXPORT_SYMBOL_GPL(mtd_erase);
...@@ -1276,6 +1314,101 @@ static int mtd_check_oob_ops(struct mtd_info *mtd, loff_t offs, ...@@ -1276,6 +1314,101 @@ static int mtd_check_oob_ops(struct mtd_info *mtd, loff_t offs,
return 0; return 0;
} }
static int mtd_read_oob_std(struct mtd_info *mtd, loff_t from,
struct mtd_oob_ops *ops)
{
struct mtd_info *master = mtd_get_master(mtd);
int ret;
from = mtd_get_master_ofs(mtd, from);
if (master->_read_oob)
ret = master->_read_oob(master, from, ops);
else
ret = master->_read(master, from, ops->len, &ops->retlen,
ops->datbuf);
return ret;
}
static int mtd_write_oob_std(struct mtd_info *mtd, loff_t to,
struct mtd_oob_ops *ops)
{
struct mtd_info *master = mtd_get_master(mtd);
int ret;
to = mtd_get_master_ofs(mtd, to);
if (master->_write_oob)
ret = master->_write_oob(master, to, ops);
else
ret = master->_write(master, to, ops->len, &ops->retlen,
ops->datbuf);
return ret;
}
static int mtd_io_emulated_slc(struct mtd_info *mtd, loff_t start, bool read,
struct mtd_oob_ops *ops)
{
struct mtd_info *master = mtd_get_master(mtd);
int ngroups = mtd_pairing_groups(master);
int npairs = mtd_wunit_per_eb(master) / ngroups;
struct mtd_oob_ops adjops = *ops;
unsigned int wunit, oobavail;
struct mtd_pairing_info info;
int max_bitflips = 0;
u32 ebofs, pageofs;
loff_t base, pos;
ebofs = mtd_mod_by_eb(start, mtd);
base = (loff_t)mtd_div_by_eb(start, mtd) * master->erasesize;
info.group = 0;
info.pair = mtd_div_by_ws(ebofs, mtd);
pageofs = mtd_mod_by_ws(ebofs, mtd);
oobavail = mtd_oobavail(mtd, ops);
while (ops->retlen < ops->len || ops->oobretlen < ops->ooblen) {
int ret;
if (info.pair >= npairs) {
info.pair = 0;
base += master->erasesize;
}
wunit = mtd_pairing_info_to_wunit(master, &info);
pos = mtd_wunit_to_offset(mtd, base, wunit);
adjops.len = ops->len - ops->retlen;
if (adjops.len > mtd->writesize - pageofs)
adjops.len = mtd->writesize - pageofs;
adjops.ooblen = ops->ooblen - ops->oobretlen;
if (adjops.ooblen > oobavail - adjops.ooboffs)
adjops.ooblen = oobavail - adjops.ooboffs;
if (read) {
ret = mtd_read_oob_std(mtd, pos + pageofs, &adjops);
if (ret > 0)
max_bitflips = max(max_bitflips, ret);
} else {
ret = mtd_write_oob_std(mtd, pos + pageofs, &adjops);
}
if (ret < 0)
return ret;
max_bitflips = max(max_bitflips, ret);
ops->retlen += adjops.retlen;
ops->oobretlen += adjops.oobretlen;
adjops.datbuf += adjops.retlen;
adjops.oobbuf += adjops.oobretlen;
adjops.ooboffs = 0;
pageofs = 0;
info.pair++;
}
return max_bitflips;
}
int mtd_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops) int mtd_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops)
{ {
struct mtd_info *master = mtd_get_master(mtd); struct mtd_info *master = mtd_get_master(mtd);
...@@ -1294,12 +1427,10 @@ int mtd_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops) ...@@ -1294,12 +1427,10 @@ int mtd_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops)
if (!master->_read_oob && (!master->_read || ops->oobbuf)) if (!master->_read_oob && (!master->_read || ops->oobbuf))
return -EOPNOTSUPP; return -EOPNOTSUPP;
from = mtd_get_master_ofs(mtd, from); if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
if (master->_read_oob) ret_code = mtd_io_emulated_slc(mtd, from, true, ops);
ret_code = master->_read_oob(master, from, ops);
else else
ret_code = master->_read(master, from, ops->len, &ops->retlen, ret_code = mtd_read_oob_std(mtd, from, ops);
ops->datbuf);
mtd_update_ecc_stats(mtd, master, &old_stats); mtd_update_ecc_stats(mtd, master, &old_stats);
...@@ -1338,13 +1469,10 @@ int mtd_write_oob(struct mtd_info *mtd, loff_t to, ...@@ -1338,13 +1469,10 @@ int mtd_write_oob(struct mtd_info *mtd, loff_t to,
if (!master->_write_oob && (!master->_write || ops->oobbuf)) if (!master->_write_oob && (!master->_write || ops->oobbuf))
return -EOPNOTSUPP; return -EOPNOTSUPP;
to = mtd_get_master_ofs(mtd, to); if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
return mtd_io_emulated_slc(mtd, to, false, ops);
if (master->_write_oob) return mtd_write_oob_std(mtd, to, ops);
return master->_write_oob(master, to, ops);
else
return master->_write(master, to, ops->len, &ops->retlen,
ops->datbuf);
} }
EXPORT_SYMBOL_GPL(mtd_write_oob); EXPORT_SYMBOL_GPL(mtd_write_oob);
...@@ -1672,7 +1800,7 @@ EXPORT_SYMBOL_GPL(mtd_ooblayout_get_databytes); ...@@ -1672,7 +1800,7 @@ EXPORT_SYMBOL_GPL(mtd_ooblayout_get_databytes);
* @start: first ECC byte to set * @start: first ECC byte to set
* @nbytes: number of ECC bytes to set * @nbytes: number of ECC bytes to set
* *
* Works like mtd_ooblayout_get_bytes(), except it acts on free bytes. * Works like mtd_ooblayout_set_bytes(), except it acts on free bytes.
* *
* Returns zero on success, a negative error code otherwise. * Returns zero on success, a negative error code otherwise.
*/ */
...@@ -1817,6 +1945,12 @@ int mtd_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len) ...@@ -1817,6 +1945,12 @@ int mtd_lock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
return -EINVAL; return -EINVAL;
if (!len) if (!len)
return 0; return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
len = (u64)mtd_div_by_eb(len, mtd) * master->erasesize;
}
return master->_lock(master, mtd_get_master_ofs(mtd, ofs), len); return master->_lock(master, mtd_get_master_ofs(mtd, ofs), len);
} }
EXPORT_SYMBOL_GPL(mtd_lock); EXPORT_SYMBOL_GPL(mtd_lock);
...@@ -1831,6 +1965,12 @@ int mtd_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len) ...@@ -1831,6 +1965,12 @@ int mtd_unlock(struct mtd_info *mtd, loff_t ofs, uint64_t len)
return -EINVAL; return -EINVAL;
if (!len) if (!len)
return 0; return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
len = (u64)mtd_div_by_eb(len, mtd) * master->erasesize;
}
return master->_unlock(master, mtd_get_master_ofs(mtd, ofs), len); return master->_unlock(master, mtd_get_master_ofs(mtd, ofs), len);
} }
EXPORT_SYMBOL_GPL(mtd_unlock); EXPORT_SYMBOL_GPL(mtd_unlock);
...@@ -1845,6 +1985,12 @@ int mtd_is_locked(struct mtd_info *mtd, loff_t ofs, uint64_t len) ...@@ -1845,6 +1985,12 @@ int mtd_is_locked(struct mtd_info *mtd, loff_t ofs, uint64_t len)
return -EINVAL; return -EINVAL;
if (!len) if (!len)
return 0; return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION) {
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
len = (u64)mtd_div_by_eb(len, mtd) * master->erasesize;
}
return master->_is_locked(master, mtd_get_master_ofs(mtd, ofs), len); return master->_is_locked(master, mtd_get_master_ofs(mtd, ofs), len);
} }
EXPORT_SYMBOL_GPL(mtd_is_locked); EXPORT_SYMBOL_GPL(mtd_is_locked);
...@@ -1857,6 +2003,10 @@ int mtd_block_isreserved(struct mtd_info *mtd, loff_t ofs) ...@@ -1857,6 +2003,10 @@ int mtd_block_isreserved(struct mtd_info *mtd, loff_t ofs)
return -EINVAL; return -EINVAL;
if (!master->_block_isreserved) if (!master->_block_isreserved)
return 0; return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
return master->_block_isreserved(master, mtd_get_master_ofs(mtd, ofs)); return master->_block_isreserved(master, mtd_get_master_ofs(mtd, ofs));
} }
EXPORT_SYMBOL_GPL(mtd_block_isreserved); EXPORT_SYMBOL_GPL(mtd_block_isreserved);
...@@ -1869,6 +2019,10 @@ int mtd_block_isbad(struct mtd_info *mtd, loff_t ofs) ...@@ -1869,6 +2019,10 @@ int mtd_block_isbad(struct mtd_info *mtd, loff_t ofs)
return -EINVAL; return -EINVAL;
if (!master->_block_isbad) if (!master->_block_isbad)
return 0; return 0;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
return master->_block_isbad(master, mtd_get_master_ofs(mtd, ofs)); return master->_block_isbad(master, mtd_get_master_ofs(mtd, ofs));
} }
EXPORT_SYMBOL_GPL(mtd_block_isbad); EXPORT_SYMBOL_GPL(mtd_block_isbad);
...@@ -1885,6 +2039,9 @@ int mtd_block_markbad(struct mtd_info *mtd, loff_t ofs) ...@@ -1885,6 +2039,9 @@ int mtd_block_markbad(struct mtd_info *mtd, loff_t ofs)
if (!(mtd->flags & MTD_WRITEABLE)) if (!(mtd->flags & MTD_WRITEABLE))
return -EROFS; return -EROFS;
if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
ret = master->_block_markbad(master, mtd_get_master_ofs(mtd, ofs)); ret = master->_block_markbad(master, mtd_get_master_ofs(mtd, ofs));
if (ret) if (ret)
return ret; return ret;
......
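The mtdcore.c hunks above implement the emulated SLC mode: a partition
flagged with MTD_SLC_ON_MLC_EMULATION exposes only one pairing group per
erase block, so its erase size and total size shrink accordingly. A small
standalone sketch of that geometry conversion, using hypothetical numbers:

#include <stdio.h>

int main(void)
{
        unsigned int master_erasesize = 256 * 1024;  /* hypothetical 256 KiB MLC block */
        unsigned int ngroups = 2;                    /* mtd_pairing_groups(master) for MLC */
        unsigned long long part_size = 64ULL << 20;  /* hypothetical 64 MiB partition */

        /* add_mtd_device(): mtd->erasesize /= ngroups */
        unsigned int slc_erasesize = master_erasesize / ngroups;

        /* mtd->size = mtd_div_by_eb(mtd->size, master) * mtd->erasesize */
        unsigned long long slc_size =
                (part_size / master_erasesize) * slc_erasesize;

        /* Prints: emulated SLC: erasesize 128 KiB, size 32 MiB */
        printf("emulated SLC: erasesize %u KiB, size %llu MiB\n",
               slc_erasesize / 1024, slc_size >> 20);
        return 0;
}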
...@@ -35,9 +35,12 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent, ...@@ -35,9 +35,12 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
const struct mtd_partition *part, const struct mtd_partition *part,
int partno, uint64_t cur_offset) int partno, uint64_t cur_offset)
{ {
int wr_alignment = (parent->flags & MTD_NO_ERASE) ? parent->writesize : struct mtd_info *master = mtd_get_master(parent);
parent->erasesize; int wr_alignment = (parent->flags & MTD_NO_ERASE) ?
struct mtd_info *child, *master = mtd_get_master(parent); master->writesize : master->erasesize;
u64 parent_size = mtd_is_partition(parent) ?
parent->part.size : parent->size;
struct mtd_info *child;
u32 remainder; u32 remainder;
char *name; char *name;
u64 tmp; u64 tmp;
...@@ -56,8 +59,9 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent, ...@@ -56,8 +59,9 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
/* set up the MTD object for this partition */ /* set up the MTD object for this partition */
child->type = parent->type; child->type = parent->type;
child->part.flags = parent->flags & ~part->mask_flags; child->part.flags = parent->flags & ~part->mask_flags;
child->part.flags |= part->add_flags;
child->flags = child->part.flags; child->flags = child->part.flags;
child->size = part->size; child->part.size = part->size;
child->writesize = parent->writesize; child->writesize = parent->writesize;
child->writebufsize = parent->writebufsize; child->writebufsize = parent->writebufsize;
child->oobsize = parent->oobsize; child->oobsize = parent->oobsize;
...@@ -98,29 +102,29 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent, ...@@ -98,29 +102,29 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
} }
if (child->part.offset == MTDPART_OFS_RETAIN) { if (child->part.offset == MTDPART_OFS_RETAIN) {
child->part.offset = cur_offset; child->part.offset = cur_offset;
if (parent->size - child->part.offset >= child->size) { if (parent_size - child->part.offset >= child->part.size) {
child->size = parent->size - child->part.offset - child->part.size = parent_size - child->part.offset -
child->size; child->part.size;
} else { } else {
printk(KERN_ERR "mtd partition \"%s\" doesn't have enough space: %#llx < %#llx, disabled\n", printk(KERN_ERR "mtd partition \"%s\" doesn't have enough space: %#llx < %#llx, disabled\n",
part->name, parent->size - child->part.offset, part->name, parent_size - child->part.offset,
child->size); child->part.size);
/* register to preserve ordering */ /* register to preserve ordering */
goto out_register; goto out_register;
} }
} }
if (child->size == MTDPART_SIZ_FULL) if (child->part.size == MTDPART_SIZ_FULL)
child->size = parent->size - child->part.offset; child->part.size = parent_size - child->part.offset;
printk(KERN_NOTICE "0x%012llx-0x%012llx : \"%s\"\n", printk(KERN_NOTICE "0x%012llx-0x%012llx : \"%s\"\n",
child->part.offset, child->part.offset + child->size, child->part.offset, child->part.offset + child->part.size,
child->name); child->name);
/* let's do some sanity checks */ /* let's do some sanity checks */
if (child->part.offset >= parent->size) { if (child->part.offset >= parent_size) {
/* let's register it anyway to preserve ordering */ /* let's register it anyway to preserve ordering */
child->part.offset = 0; child->part.offset = 0;
child->size = 0; child->part.size = 0;
/* Initialize ->erasesize to make add_mtd_device() happy. */ /* Initialize ->erasesize to make add_mtd_device() happy. */
child->erasesize = parent->erasesize; child->erasesize = parent->erasesize;
...@@ -128,15 +132,16 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent, ...@@ -128,15 +132,16 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
part->name); part->name);
goto out_register; goto out_register;
} }
if (child->part.offset + child->size > parent->size) { if (child->part.offset + child->part.size > parent->size) {
child->size = parent->size - child->part.offset; child->part.size = parent_size - child->part.offset;
printk(KERN_WARNING"mtd: partition \"%s\" extends beyond the end of device \"%s\" -- size truncated to %#llx\n", printk(KERN_WARNING"mtd: partition \"%s\" extends beyond the end of device \"%s\" -- size truncated to %#llx\n",
part->name, parent->name, child->size); part->name, parent->name, child->part.size);
} }
if (parent->numeraseregions > 1) { if (parent->numeraseregions > 1) {
/* Deal with variable erase size stuff */ /* Deal with variable erase size stuff */
int i, max = parent->numeraseregions; int i, max = parent->numeraseregions;
u64 end = child->part.offset + child->size; u64 end = child->part.offset + child->part.size;
struct mtd_erase_region_info *regions = parent->eraseregions; struct mtd_erase_region_info *regions = parent->eraseregions;
/* Find the first erase regions which is part of this /* Find the first erase regions which is part of this
...@@ -156,7 +161,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent, ...@@ -156,7 +161,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
BUG_ON(child->erasesize == 0); BUG_ON(child->erasesize == 0);
} else { } else {
/* Single erase size */ /* Single erase size */
child->erasesize = parent->erasesize; child->erasesize = master->erasesize;
} }
/* /*
...@@ -178,7 +183,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent, ...@@ -178,7 +183,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
part->name); part->name);
} }
tmp = mtd_get_master_ofs(child, 0) + child->size; tmp = mtd_get_master_ofs(child, 0) + child->part.size;
remainder = do_div(tmp, wr_alignment); remainder = do_div(tmp, wr_alignment);
if ((child->flags & MTD_WRITEABLE) && remainder) { if ((child->flags & MTD_WRITEABLE) && remainder) {
child->flags &= ~MTD_WRITEABLE; child->flags &= ~MTD_WRITEABLE;
...@@ -186,6 +191,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent, ...@@ -186,6 +191,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
part->name); part->name);
} }
child->size = child->part.size;
child->ecc_step_size = parent->ecc_step_size; child->ecc_step_size = parent->ecc_step_size;
child->ecc_strength = parent->ecc_strength; child->ecc_strength = parent->ecc_strength;
child->bitflip_threshold = parent->bitflip_threshold; child->bitflip_threshold = parent->bitflip_threshold;
...@@ -193,7 +199,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent, ...@@ -193,7 +199,7 @@ static struct mtd_info *allocate_partition(struct mtd_info *parent,
if (master->_block_isbad) { if (master->_block_isbad) {
uint64_t offs = 0; uint64_t offs = 0;
while (offs < child->size) { while (offs < child->part.size) {
if (mtd_block_isreserved(child, offs)) if (mtd_block_isreserved(child, offs))
child->ecc_stats.bbtblocks++; child->ecc_stats.bbtblocks++;
else if (mtd_block_isbad(child, offs)) else if (mtd_block_isbad(child, offs))
...@@ -234,6 +240,8 @@ int mtd_add_partition(struct mtd_info *parent, const char *name, ...@@ -234,6 +240,8 @@ int mtd_add_partition(struct mtd_info *parent, const char *name,
long long offset, long long length) long long offset, long long length)
{ {
struct mtd_info *master = mtd_get_master(parent); struct mtd_info *master = mtd_get_master(parent);
u64 parent_size = mtd_is_partition(parent) ?
parent->part.size : parent->size;
struct mtd_partition part; struct mtd_partition part;
struct mtd_info *child; struct mtd_info *child;
int ret = 0; int ret = 0;
...@@ -244,7 +252,7 @@ int mtd_add_partition(struct mtd_info *parent, const char *name, ...@@ -244,7 +252,7 @@ int mtd_add_partition(struct mtd_info *parent, const char *name,
return -EINVAL; return -EINVAL;
if (length == MTDPART_SIZ_FULL) if (length == MTDPART_SIZ_FULL)
length = parent->size - offset; length = parent_size - offset;
if (length <= 0) if (length <= 0)
return -EINVAL; return -EINVAL;
...@@ -419,7 +427,7 @@ int add_mtd_partitions(struct mtd_info *parent, ...@@ -419,7 +427,7 @@ int add_mtd_partitions(struct mtd_info *parent,
/* Look for subpartitions */ /* Look for subpartitions */
parse_mtd_partitions(child, parts[i].types, NULL); parse_mtd_partitions(child, parts[i].types, NULL);
cur_offset = child->part.offset + child->size; cur_offset = child->part.offset + child->part.size;
} }
return 0; return 0;
......
...@@ -213,10 +213,6 @@ config MTD_NAND_MLC_LPC32XX ...@@ -213,10 +213,6 @@ config MTD_NAND_MLC_LPC32XX
Please check the actual NAND chip connected and its support Please check the actual NAND chip connected and its support
by the MLC NAND controller. by the MLC NAND controller.
config MTD_NAND_CM_X270
tristate "CM-X270 modules NAND controller"
depends on MACH_ARMCORE
config MTD_NAND_PASEMI config MTD_NAND_PASEMI
tristate "PA Semi PWRficient NAND controller" tristate "PA Semi PWRficient NAND controller"
depends on PPC_PASEMI depends on PPC_PASEMI
...@@ -457,6 +453,14 @@ config MTD_NAND_CADENCE ...@@ -457,6 +453,14 @@ config MTD_NAND_CADENCE
Enable the driver for NAND flash on platforms using a Cadence NAND Enable the driver for NAND flash on platforms using a Cadence NAND
controller. controller.
config MTD_NAND_ARASAN
tristate "Support for Arasan NAND flash controller"
depends on HAS_IOMEM && HAS_DMA
select BCH
help
Enables the driver for the Arasan NAND flash controller on
Zynq Ultrascale+ MPSoC.
comment "Misc" comment "Misc"
config MTD_SM_COMMON config MTD_SM_COMMON
......
...@@ -25,7 +25,6 @@ obj-$(CONFIG_MTD_NAND_GPIO) += gpio.o ...@@ -25,7 +25,6 @@ obj-$(CONFIG_MTD_NAND_GPIO) += gpio.o
omap2_nand-objs := omap2.o omap2_nand-objs := omap2.o
obj-$(CONFIG_MTD_NAND_OMAP2) += omap2_nand.o obj-$(CONFIG_MTD_NAND_OMAP2) += omap2_nand.o
obj-$(CONFIG_MTD_NAND_OMAP_BCH_BUILD) += omap_elm.o obj-$(CONFIG_MTD_NAND_OMAP_BCH_BUILD) += omap_elm.o
obj-$(CONFIG_MTD_NAND_CM_X270) += cmx270_nand.o
obj-$(CONFIG_MTD_NAND_MARVELL) += marvell_nand.o obj-$(CONFIG_MTD_NAND_MARVELL) += marvell_nand.o
obj-$(CONFIG_MTD_NAND_TMIO) += tmio_nand.o obj-$(CONFIG_MTD_NAND_TMIO) += tmio_nand.o
obj-$(CONFIG_MTD_NAND_PLATFORM) += plat_nand.o obj-$(CONFIG_MTD_NAND_PLATFORM) += plat_nand.o
...@@ -58,6 +57,7 @@ obj-$(CONFIG_MTD_NAND_TEGRA) += tegra_nand.o ...@@ -58,6 +57,7 @@ obj-$(CONFIG_MTD_NAND_TEGRA) += tegra_nand.o
obj-$(CONFIG_MTD_NAND_STM32_FMC2) += stm32_fmc2_nand.o obj-$(CONFIG_MTD_NAND_STM32_FMC2) += stm32_fmc2_nand.o
obj-$(CONFIG_MTD_NAND_MESON) += meson_nand.o obj-$(CONFIG_MTD_NAND_MESON) += meson_nand.o
obj-$(CONFIG_MTD_NAND_CADENCE) += cadence-nand-controller.o obj-$(CONFIG_MTD_NAND_CADENCE) += cadence-nand-controller.o
obj-$(CONFIG_MTD_NAND_ARASAN) += arasan-nand-controller.o
nand-objs := nand_base.o nand_legacy.o nand_bbt.o nand_timings.o nand_ids.o nand-objs := nand_base.o nand_legacy.o nand_bbt.o nand_timings.o nand_ids.o
nand-objs += nand_onfi.o nand-objs += nand_onfi.o
......
...@@ -387,12 +387,15 @@ static int gpio_nand_remove(struct platform_device *pdev) ...@@ -387,12 +387,15 @@ static int gpio_nand_remove(struct platform_device *pdev)
{ {
struct gpio_nand *priv = platform_get_drvdata(pdev); struct gpio_nand *priv = platform_get_drvdata(pdev);
struct mtd_info *mtd = nand_to_mtd(&priv->nand_chip); struct mtd_info *mtd = nand_to_mtd(&priv->nand_chip);
int ret;
/* Apply write protection */ /* Apply write protection */
gpiod_set_value(priv->gpiod_nwp, 1); gpiod_set_value(priv->gpiod_nwp, 1);
/* Unregister device */ /* Unregister device */
nand_release(mtd_to_nand(mtd)); ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(mtd_to_nand(mtd));
return 0; return 0;
} }
......
...@@ -1494,7 +1494,7 @@ static void atmel_nand_init(struct atmel_nand_controller *nc, ...@@ -1494,7 +1494,7 @@ static void atmel_nand_init(struct atmel_nand_controller *nc,
* suitable for DMA. * suitable for DMA.
*/ */
if (nc->dmac) if (nc->dmac)
chip->options |= NAND_USE_BOUNCE_BUFFER; chip->options |= NAND_USES_DMA;
/* Default to HW ECC if pmecc is available. */ /* Default to HW ECC if pmecc is available. */
if (nc->pmecc) if (nc->pmecc)
......
...@@ -60,8 +60,12 @@ static int bcm47xxnflash_probe(struct platform_device *pdev) ...@@ -60,8 +60,12 @@ static int bcm47xxnflash_probe(struct platform_device *pdev)
static int bcm47xxnflash_remove(struct platform_device *pdev) static int bcm47xxnflash_remove(struct platform_device *pdev)
{ {
struct bcm47xxnflash *nflash = platform_get_drvdata(pdev); struct bcm47xxnflash *nflash = platform_get_drvdata(pdev);
struct nand_chip *chip = &nflash->nand_chip;
int ret;
nand_release(&nflash->nand_chip); ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
return 0; return 0;
} }
......
...@@ -4,7 +4,6 @@ ...@@ -4,7 +4,6 @@
*/ */
#include <linux/clk.h> #include <linux/clk.h>
#include <linux/version.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/delay.h> #include <linux/delay.h>
...@@ -264,6 +263,7 @@ struct brcmnand_controller { ...@@ -264,6 +263,7 @@ struct brcmnand_controller {
const unsigned int *block_sizes; const unsigned int *block_sizes;
unsigned int max_page_size; unsigned int max_page_size;
const unsigned int *page_sizes; const unsigned int *page_sizes;
unsigned int page_size_shift;
unsigned int max_oob; unsigned int max_oob;
u32 features; u32 features;
...@@ -338,8 +338,38 @@ enum brcmnand_reg { ...@@ -338,8 +338,38 @@ enum brcmnand_reg {
BRCMNAND_FC_BASE, BRCMNAND_FC_BASE,
}; };
/* BRCMNAND v4.0 */ /* BRCMNAND v2.1-v2.2 */
static const u16 brcmnand_regs_v40[] = { static const u16 brcmnand_regs_v21[] = {
[BRCMNAND_CMD_START] = 0x04,
[BRCMNAND_CMD_EXT_ADDRESS] = 0x08,
[BRCMNAND_CMD_ADDRESS] = 0x0c,
[BRCMNAND_INTFC_STATUS] = 0x5c,
[BRCMNAND_CS_SELECT] = 0x14,
[BRCMNAND_CS_XOR] = 0x18,
[BRCMNAND_LL_OP] = 0,
[BRCMNAND_CS0_BASE] = 0x40,
[BRCMNAND_CS1_BASE] = 0,
[BRCMNAND_CORR_THRESHOLD] = 0,
[BRCMNAND_CORR_THRESHOLD_EXT] = 0,
[BRCMNAND_UNCORR_COUNT] = 0,
[BRCMNAND_CORR_COUNT] = 0,
[BRCMNAND_CORR_EXT_ADDR] = 0x60,
[BRCMNAND_CORR_ADDR] = 0x64,
[BRCMNAND_UNCORR_EXT_ADDR] = 0x68,
[BRCMNAND_UNCORR_ADDR] = 0x6c,
[BRCMNAND_SEMAPHORE] = 0x50,
[BRCMNAND_ID] = 0x54,
[BRCMNAND_ID_EXT] = 0,
[BRCMNAND_LL_RDATA] = 0,
[BRCMNAND_OOB_READ_BASE] = 0x20,
[BRCMNAND_OOB_READ_10_BASE] = 0,
[BRCMNAND_OOB_WRITE_BASE] = 0x30,
[BRCMNAND_OOB_WRITE_10_BASE] = 0,
[BRCMNAND_FC_BASE] = 0x200,
};
/* BRCMNAND v3.3-v4.0 */
static const u16 brcmnand_regs_v33[] = {
[BRCMNAND_CMD_START] = 0x04, [BRCMNAND_CMD_START] = 0x04,
[BRCMNAND_CMD_EXT_ADDRESS] = 0x08, [BRCMNAND_CMD_EXT_ADDRESS] = 0x08,
[BRCMNAND_CMD_ADDRESS] = 0x0c, [BRCMNAND_CMD_ADDRESS] = 0x0c,
...@@ -536,6 +566,9 @@ enum { ...@@ -536,6 +566,9 @@ enum {
CFG_BUS_WIDTH = BIT(CFG_BUS_WIDTH_SHIFT), CFG_BUS_WIDTH = BIT(CFG_BUS_WIDTH_SHIFT),
CFG_DEVICE_SIZE_SHIFT = 24, CFG_DEVICE_SIZE_SHIFT = 24,
/* Only for v2.1 */
CFG_PAGE_SIZE_SHIFT_v2_1 = 30,
/* Only for pre-v7.1 (with no CFG_EXT register) */ /* Only for pre-v7.1 (with no CFG_EXT register) */
CFG_PAGE_SIZE_SHIFT = 20, CFG_PAGE_SIZE_SHIFT = 20,
CFG_BLK_SIZE_SHIFT = 28, CFG_BLK_SIZE_SHIFT = 28,
...@@ -571,12 +604,16 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl) ...@@ -571,12 +604,16 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
{ {
static const unsigned int block_sizes_v6[] = { 8, 16, 128, 256, 512, 1024, 2048, 0 }; static const unsigned int block_sizes_v6[] = { 8, 16, 128, 256, 512, 1024, 2048, 0 };
static const unsigned int block_sizes_v4[] = { 16, 128, 8, 512, 256, 1024, 2048, 0 }; static const unsigned int block_sizes_v4[] = { 16, 128, 8, 512, 256, 1024, 2048, 0 };
static const unsigned int page_sizes[] = { 512, 2048, 4096, 8192, 0 }; static const unsigned int block_sizes_v2_2[] = { 16, 128, 8, 512, 256, 0 };
static const unsigned int block_sizes_v2_1[] = { 16, 128, 8, 512, 0 };
static const unsigned int page_sizes_v3_4[] = { 512, 2048, 4096, 8192, 0 };
static const unsigned int page_sizes_v2_2[] = { 512, 2048, 4096, 0 };
static const unsigned int page_sizes_v2_1[] = { 512, 2048, 0 };
ctrl->nand_version = nand_readreg(ctrl, 0) & 0xffff; ctrl->nand_version = nand_readreg(ctrl, 0) & 0xffff;
/* Only support v4.0+? */ /* Only support v2.1+ */
if (ctrl->nand_version < 0x0400) { if (ctrl->nand_version < 0x0201) {
dev_err(ctrl->dev, "version %#x not supported\n", dev_err(ctrl->dev, "version %#x not supported\n",
ctrl->nand_version); ctrl->nand_version);
return -ENODEV; return -ENODEV;
...@@ -591,8 +628,10 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl) ...@@ -591,8 +628,10 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
ctrl->reg_offsets = brcmnand_regs_v60; ctrl->reg_offsets = brcmnand_regs_v60;
else if (ctrl->nand_version >= 0x0500) else if (ctrl->nand_version >= 0x0500)
ctrl->reg_offsets = brcmnand_regs_v50; ctrl->reg_offsets = brcmnand_regs_v50;
else if (ctrl->nand_version >= 0x0400) else if (ctrl->nand_version >= 0x0303)
ctrl->reg_offsets = brcmnand_regs_v40; ctrl->reg_offsets = brcmnand_regs_v33;
else if (ctrl->nand_version >= 0x0201)
ctrl->reg_offsets = brcmnand_regs_v21;
/* Chip-select stride */ /* Chip-select stride */
if (ctrl->nand_version >= 0x0701) if (ctrl->nand_version >= 0x0701)
...@@ -606,8 +645,9 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl) ...@@ -606,8 +645,9 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
} else { } else {
ctrl->cs_offsets = brcmnand_cs_offsets; ctrl->cs_offsets = brcmnand_cs_offsets;
/* v5.0 and earlier has a different CS0 offset layout */ /* v3.3-5.0 have a different CS0 offset layout */
if (ctrl->nand_version <= 0x0500) if (ctrl->nand_version >= 0x0303 &&
ctrl->nand_version <= 0x0500)
ctrl->cs0_offsets = brcmnand_cs_offsets_cs0; ctrl->cs0_offsets = brcmnand_cs_offsets_cs0;
} }
...@@ -617,14 +657,32 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl) ...@@ -617,14 +657,32 @@ static int brcmnand_revision_init(struct brcmnand_controller *ctrl)
ctrl->max_page_size = 16 * 1024; ctrl->max_page_size = 16 * 1024;
ctrl->max_block_size = 2 * 1024 * 1024; ctrl->max_block_size = 2 * 1024 * 1024;
} else { } else {
ctrl->page_sizes = page_sizes; if (ctrl->nand_version >= 0x0304)
ctrl->page_sizes = page_sizes_v3_4;
else if (ctrl->nand_version >= 0x0202)
ctrl->page_sizes = page_sizes_v2_2;
else
ctrl->page_sizes = page_sizes_v2_1;
if (ctrl->nand_version >= 0x0202)
ctrl->page_size_shift = CFG_PAGE_SIZE_SHIFT;
else
ctrl->page_size_shift = CFG_PAGE_SIZE_SHIFT_v2_1;
if (ctrl->nand_version >= 0x0600) if (ctrl->nand_version >= 0x0600)
ctrl->block_sizes = block_sizes_v6; ctrl->block_sizes = block_sizes_v6;
else else if (ctrl->nand_version >= 0x0400)
ctrl->block_sizes = block_sizes_v4; ctrl->block_sizes = block_sizes_v4;
else if (ctrl->nand_version >= 0x0202)
ctrl->block_sizes = block_sizes_v2_2;
else
ctrl->block_sizes = block_sizes_v2_1;
if (ctrl->nand_version < 0x0400) { if (ctrl->nand_version < 0x0400) {
ctrl->max_page_size = 4096; if (ctrl->nand_version < 0x0202)
ctrl->max_page_size = 2048;
else
ctrl->max_page_size = 4096;
ctrl->max_block_size = 512 * 1024; ctrl->max_block_size = 512 * 1024;
} }
} }
...@@ -810,6 +868,9 @@ static void brcmnand_wr_corr_thresh(struct brcmnand_host *host, u8 val) ...@@ -810,6 +868,9 @@ static void brcmnand_wr_corr_thresh(struct brcmnand_host *host, u8 val)
enum brcmnand_reg reg = BRCMNAND_CORR_THRESHOLD; enum brcmnand_reg reg = BRCMNAND_CORR_THRESHOLD;
int cs = host->cs; int cs = host->cs;
if (!ctrl->reg_offsets[reg])
return;
if (ctrl->nand_version == 0x0702) if (ctrl->nand_version == 0x0702)
bits = 7; bits = 7;
else if (ctrl->nand_version >= 0x0600) else if (ctrl->nand_version >= 0x0600)
...@@ -868,8 +929,10 @@ static inline u32 brcmnand_spare_area_mask(struct brcmnand_controller *ctrl) ...@@ -868,8 +929,10 @@ static inline u32 brcmnand_spare_area_mask(struct brcmnand_controller *ctrl)
return GENMASK(7, 0); return GENMASK(7, 0);
else if (ctrl->nand_version >= 0x0600) else if (ctrl->nand_version >= 0x0600)
return GENMASK(6, 0); return GENMASK(6, 0);
else else if (ctrl->nand_version >= 0x0303)
return GENMASK(5, 0); return GENMASK(5, 0);
else
return GENMASK(4, 0);
} }
#define NAND_ACC_CONTROL_ECC_SHIFT 16 #define NAND_ACC_CONTROL_ECC_SHIFT 16
...@@ -1100,30 +1163,30 @@ static int brcmnand_hamming_ooblayout_free(struct mtd_info *mtd, int section, ...@@ -1100,30 +1163,30 @@ static int brcmnand_hamming_ooblayout_free(struct mtd_info *mtd, int section,
struct brcmnand_cfg *cfg = &host->hwcfg; struct brcmnand_cfg *cfg = &host->hwcfg;
int sas = cfg->spare_area_size << cfg->sector_size_1k; int sas = cfg->spare_area_size << cfg->sector_size_1k;
int sectors = cfg->page_size / (512 << cfg->sector_size_1k); int sectors = cfg->page_size / (512 << cfg->sector_size_1k);
u32 next;
if (section >= sectors * 2) if (section > sectors)
return -ERANGE; return -ERANGE;
oobregion->offset = (section / 2) * sas; next = (section * sas);
if (section < sectors)
next += 6;
if (section & 1) { if (section) {
oobregion->offset += 9; oobregion->offset = ((section - 1) * sas) + 9;
oobregion->length = 7;
} else { } else {
oobregion->length = 6; if (cfg->page_size > 512) {
/* Large page NAND uses first 2 bytes for BBI */
/* First sector of each page may have BBI */ oobregion->offset = 2;
if (!section) { } else {
/* /* Small page NAND uses last byte before ECC for BBI */
* Small-page NAND use byte 6 for BBI while large-page oobregion->offset = 0;
* NAND use byte 0. next--;
*/
if (cfg->page_size > 512)
oobregion->offset++;
oobregion->length--;
} }
} }
oobregion->length = next - oobregion->offset;
return 0; return 0;
} }
...@@ -2018,28 +2081,31 @@ static int brcmnand_read_by_pio(struct mtd_info *mtd, struct nand_chip *chip, ...@@ -2018,28 +2081,31 @@ static int brcmnand_read_by_pio(struct mtd_info *mtd, struct nand_chip *chip,
static int brcmstb_nand_verify_erased_page(struct mtd_info *mtd, static int brcmstb_nand_verify_erased_page(struct mtd_info *mtd,
struct nand_chip *chip, void *buf, u64 addr) struct nand_chip *chip, void *buf, u64 addr)
{ {
int i, sas; struct mtd_oob_region ecc;
void *oob = chip->oob_poi; int i;
int bitflips = 0; int bitflips = 0;
int page = addr >> chip->page_shift; int page = addr >> chip->page_shift;
int ret; int ret;
void *ecc_bytes;
void *ecc_chunk; void *ecc_chunk;
if (!buf) if (!buf)
buf = nand_get_data_buf(chip); buf = nand_get_data_buf(chip);
sas = mtd->oobsize / chip->ecc.steps;
/* read without ecc for verification */ /* read without ecc for verification */
ret = chip->ecc.read_page_raw(chip, buf, true, page); ret = chip->ecc.read_page_raw(chip, buf, true, page);
if (ret) if (ret)
return ret; return ret;
for (i = 0; i < chip->ecc.steps; i++, oob += sas) { for (i = 0; i < chip->ecc.steps; i++) {
ecc_chunk = buf + chip->ecc.size * i; ecc_chunk = buf + chip->ecc.size * i;
ret = nand_check_erased_ecc_chunk(ecc_chunk,
chip->ecc.size, mtd_ooblayout_ecc(mtd, i, &ecc);
oob, sas, NULL, 0, ecc_bytes = chip->oob_poi + ecc.offset;
ret = nand_check_erased_ecc_chunk(ecc_chunk, chip->ecc.size,
ecc_bytes, ecc.length,
NULL, 0,
chip->ecc.strength); chip->ecc.strength);
if (ret < 0) if (ret < 0)
return ret; return ret;
...@@ -2377,7 +2443,7 @@ static int brcmnand_set_cfg(struct brcmnand_host *host, ...@@ -2377,7 +2443,7 @@ static int brcmnand_set_cfg(struct brcmnand_host *host,
(!!(cfg->device_width == 16) << CFG_BUS_WIDTH_SHIFT) | (!!(cfg->device_width == 16) << CFG_BUS_WIDTH_SHIFT) |
(device_size << CFG_DEVICE_SIZE_SHIFT); (device_size << CFG_DEVICE_SIZE_SHIFT);
if (cfg_offs == cfg_ext_offs) { if (cfg_offs == cfg_ext_offs) {
tmp |= (page_size << CFG_PAGE_SIZE_SHIFT) | tmp |= (page_size << ctrl->page_size_shift) |
(block_size << CFG_BLK_SIZE_SHIFT); (block_size << CFG_BLK_SIZE_SHIFT);
nand_writereg(ctrl, cfg_offs, tmp); nand_writereg(ctrl, cfg_offs, tmp);
} else { } else {
...@@ -2389,9 +2455,11 @@ static int brcmnand_set_cfg(struct brcmnand_host *host, ...@@ -2389,9 +2455,11 @@ static int brcmnand_set_cfg(struct brcmnand_host *host,
tmp = nand_readreg(ctrl, acc_control_offs); tmp = nand_readreg(ctrl, acc_control_offs);
tmp &= ~brcmnand_ecc_level_mask(ctrl); tmp &= ~brcmnand_ecc_level_mask(ctrl);
tmp |= cfg->ecc_level << NAND_ACC_CONTROL_ECC_SHIFT;
tmp &= ~brcmnand_spare_area_mask(ctrl); tmp &= ~brcmnand_spare_area_mask(ctrl);
tmp |= cfg->spare_area_size; if (ctrl->nand_version >= 0x0302) {
tmp |= cfg->ecc_level << NAND_ACC_CONTROL_ECC_SHIFT;
tmp |= cfg->spare_area_size;
}
nand_writereg(ctrl, acc_control_offs, tmp); nand_writereg(ctrl, acc_control_offs, tmp);
brcmnand_set_sector_size_1k(host, cfg->sector_size_1k); brcmnand_set_sector_size_1k(host, cfg->sector_size_1k);
...@@ -2577,7 +2645,7 @@ static int brcmnand_attach_chip(struct nand_chip *chip) ...@@ -2577,7 +2645,7 @@ static int brcmnand_attach_chip(struct nand_chip *chip)
* to/from, and have nand_base pass us a bounce buffer instead, as * to/from, and have nand_base pass us a bounce buffer instead, as
* needed. * needed.
*/ */
chip->options |= NAND_USE_BOUNCE_BUFFER; chip->options |= NAND_USES_DMA;
if (chip->bbt_options & NAND_BBT_USE_FLASH) if (chip->bbt_options & NAND_BBT_USE_FLASH)
chip->bbt_options |= NAND_BBT_NO_OOB; chip->bbt_options |= NAND_BBT_NO_OOB;
...@@ -2764,6 +2832,8 @@ const struct dev_pm_ops brcmnand_pm_ops = { ...@@ -2764,6 +2832,8 @@ const struct dev_pm_ops brcmnand_pm_ops = {
EXPORT_SYMBOL_GPL(brcmnand_pm_ops); EXPORT_SYMBOL_GPL(brcmnand_pm_ops);
static const struct of_device_id brcmnand_of_match[] = { static const struct of_device_id brcmnand_of_match[] = {
{ .compatible = "brcm,brcmnand-v2.1" },
{ .compatible = "brcm,brcmnand-v2.2" },
{ .compatible = "brcm,brcmnand-v4.0" }, { .compatible = "brcm,brcmnand-v4.0" },
{ .compatible = "brcm,brcmnand-v5.0" }, { .compatible = "brcm,brcmnand-v5.0" },
{ .compatible = "brcm,brcmnand-v6.0" }, { .compatible = "brcm,brcmnand-v6.0" },
...@@ -3045,9 +3115,15 @@ int brcmnand_remove(struct platform_device *pdev) ...@@ -3045,9 +3115,15 @@ int brcmnand_remove(struct platform_device *pdev)
{ {
struct brcmnand_controller *ctrl = dev_get_drvdata(&pdev->dev); struct brcmnand_controller *ctrl = dev_get_drvdata(&pdev->dev);
struct brcmnand_host *host; struct brcmnand_host *host;
struct nand_chip *chip;
int ret;
list_for_each_entry(host, &ctrl->host_list, node) list_for_each_entry(host, &ctrl->host_list, node) {
nand_release(&host->chip); chip = &host->chip;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
}
clk_disable_unprepare(ctrl->clk); clk_disable_unprepare(ctrl->clk);
......
...@@ -2223,10 +2223,12 @@ static int cadence_nand_exec_op(struct nand_chip *chip, ...@@ -2223,10 +2223,12 @@ static int cadence_nand_exec_op(struct nand_chip *chip,
const struct nand_operation *op, const struct nand_operation *op,
bool check_only) bool check_only)
{ {
int status = cadence_nand_select_target(chip); if (!check_only) {
int status = cadence_nand_select_target(chip);
if (status) if (status)
return status; return status;
}
return nand_op_parser_exec_op(chip, &cadence_nand_op_parser, op, return nand_op_parser_exec_op(chip, &cadence_nand_op_parser, op,
check_only); check_only);
...@@ -2592,7 +2594,7 @@ cadence_nand_setup_data_interface(struct nand_chip *chip, int chipnr, ...@@ -2592,7 +2594,7 @@ cadence_nand_setup_data_interface(struct nand_chip *chip, int chipnr,
return 0; return 0;
} }
int cadence_nand_attach_chip(struct nand_chip *chip) static int cadence_nand_attach_chip(struct nand_chip *chip)
{ {
struct cdns_nand_ctrl *cdns_ctrl = to_cdns_nand_ctrl(chip->controller); struct cdns_nand_ctrl *cdns_ctrl = to_cdns_nand_ctrl(chip->controller);
struct cdns_nand_chip *cdns_chip = to_cdns_nand_chip(chip); struct cdns_nand_chip *cdns_chip = to_cdns_nand_chip(chip);
...@@ -2778,9 +2780,14 @@ static int cadence_nand_chip_init(struct cdns_nand_ctrl *cdns_ctrl, ...@@ -2778,9 +2780,14 @@ static int cadence_nand_chip_init(struct cdns_nand_ctrl *cdns_ctrl,
static void cadence_nand_chips_cleanup(struct cdns_nand_ctrl *cdns_ctrl) static void cadence_nand_chips_cleanup(struct cdns_nand_ctrl *cdns_ctrl)
{ {
struct cdns_nand_chip *entry, *temp; struct cdns_nand_chip *entry, *temp;
struct nand_chip *chip;
int ret;
list_for_each_entry_safe(entry, temp, &cdns_ctrl->chips, node) { list_for_each_entry_safe(entry, temp, &cdns_ctrl->chips, node) {
nand_release(&entry->chip); chip = &entry->chip;
ret = mtd_device_unregister(nand_to_mtd(chip));
WARN_ON(ret);
nand_cleanup(chip);
list_del(&entry->node); list_del(&entry->node);
} }
} }
......
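The Cadence change above is one instance of the core "take check_only into
account" work: ->exec_op() must not touch the hardware when it is only asked
whether an operation is supported. A hypothetical skeleton of that behavior;
my_nfc_exec_op() and MY_NFC_MAX_XFER are made-up names, not part of this
series:

#include <linux/errno.h>
#include <linux/mtd/rawnand.h>

#define MY_NFC_MAX_XFER 0x1000  /* made-up controller transfer limit */

static int my_nfc_exec_op(struct nand_chip *chip,
                          const struct nand_operation *op,
                          bool check_only)
{
        unsigned int i;

        /* Validation is common to both modes: refuse what we cannot do. */
        for (i = 0; i < op->ninstrs; i++) {
                const struct nand_op_instr *instr = &op->instrs[i];

                if ((instr->type == NAND_OP_DATA_IN_INSTR ||
                     instr->type == NAND_OP_DATA_OUT_INSTR) &&
                    instr->ctx.data.len > MY_NFC_MAX_XFER)
                        return -ENOTSUPP;
        }

        /* check_only: report support only, no chip select, no bus cycles. */
        if (check_only)
                return 0;

        /*
         * Only now select the die (op->cs) and issue the command, address
         * and data cycles, exactly like the Cadence fix above that moves
         * cadence_nand_select_target() under !check_only.
         */
        /* ... controller-specific execution of op->instrs ... */

        return 0;
}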
...@@ -546,11 +546,6 @@ static int cafe_nand_write_page_lowlevel(struct nand_chip *chip, ...@@ -546,11 +546,6 @@ static int cafe_nand_write_page_lowlevel(struct nand_chip *chip,
return nand_prog_page_end_op(chip); return nand_prog_page_end_op(chip);
} }
static int cafe_nand_block_bad(struct nand_chip *chip, loff_t ofs)
{
return 0;
}
/* F_2[X]/(X**6+X+1) */ /* F_2[X]/(X**6+X+1) */
static unsigned short gf64_mul(u8 a, u8 b) static unsigned short gf64_mul(u8 a, u8 b)
{ {
...@@ -718,10 +713,8 @@ static int cafe_nand_probe(struct pci_dev *pdev, ...@@ -718,10 +713,8 @@ static int cafe_nand_probe(struct pci_dev *pdev,
/* Enable the following for a flash based bad block table */ /* Enable the following for a flash based bad block table */
cafe->nand.bbt_options = NAND_BBT_USE_FLASH; cafe->nand.bbt_options = NAND_BBT_USE_FLASH;
if (skipbbt) { if (skipbbt)
cafe->nand.options |= NAND_SKIP_BBTSCAN; cafe->nand.options |= NAND_SKIP_BBTSCAN | NAND_NO_BBM_QUIRK;
cafe->nand.legacy.block_bad = cafe_nand_block_bad;
}
if (numtimings && numtimings != 3) { if (numtimings && numtimings != 3) {
dev_warn(&cafe->pdev->dev, "%d timing register values ignored; precisely three are required\n", numtimings); dev_warn(&cafe->pdev->dev, "%d timing register values ignored; precisely three are required\n", numtimings);
...@@ -814,11 +807,14 @@ static void cafe_nand_remove(struct pci_dev *pdev) ...@@ -814,11 +807,14 @@ static void cafe_nand_remove(struct pci_dev *pdev)
struct mtd_info *mtd = pci_get_drvdata(pdev); struct mtd_info *mtd = pci_get_drvdata(pdev);
struct nand_chip *chip = mtd_to_nand(mtd); struct nand_chip *chip = mtd_to_nand(mtd);
struct cafe_priv *cafe = nand_get_controller_data(chip); struct cafe_priv *cafe = nand_get_controller_data(chip);
int ret;
/* Disable NAND IRQ in global IRQ mask register */ /* Disable NAND IRQ in global IRQ mask register */
cafe_writel(cafe, ~1 & cafe_readl(cafe, GLOBAL_IRQ_MASK), GLOBAL_IRQ_MASK); cafe_writel(cafe, ~1 & cafe_readl(cafe, GLOBAL_IRQ_MASK), GLOBAL_IRQ_MASK);
free_irq(pdev->irq, mtd); free_irq(pdev->irq, mtd);
nand_release(chip); ret = mtd_device_unregister(mtd);
WARN_ON(ret);
nand_cleanup(chip);
free_rs(cafe->rs); free_rs(cafe->rs);
pci_iounmap(pdev, cafe->mmio); pci_iounmap(pdev, cafe->mmio);
dma_free_coherent(&cafe->pdev->dev, 2112, cafe->dmabuf, cafe->dmaaddr); dma_free_coherent(&cafe->pdev->dev, 2112, cafe->dmabuf, cafe->dmaaddr);
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2006 Compulab, Ltd.
* Mike Rapoport <mike@compulab.co.il>
*
* Derived from drivers/mtd/nand/h1910.c (removed in v3.10)
* Copyright (C) 2002 Marius Gröger (mag@sysgo.de)
* Copyright (c) 2001 Thomas Gleixner (gleixner@autronix.de)
*
* Overview:
* This is a device driver for the NAND flash device found on the
* CM-X270 board.
*/
#include <linux/mtd/rawnand.h>
#include <linux/mtd/partitions.h>
#include <linux/slab.h>
#include <linux/gpio.h>
#include <linux/module.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/mach-types.h>
#include <mach/pxa2xx-regs.h>
#define GPIO_NAND_CS (11)
#define GPIO_NAND_RB (89)
/* MTD structure for CM-X270 board */
static struct mtd_info *cmx270_nand_mtd;
/* remaped IO address of the device */
static void __iomem *cmx270_nand_io;
/*
* Define static partitions for flash device
*/
static const struct mtd_partition partition_info[] = {
[0] = {
.name = "cmx270-0",
.offset = 0,
.size = MTDPART_SIZ_FULL
}
};
#define NUM_PARTITIONS (ARRAY_SIZE(partition_info))
static u_char cmx270_read_byte(struct nand_chip *this)
{
return (readl(this->legacy.IO_ADDR_R) >> 16);
}
static void cmx270_write_buf(struct nand_chip *this, const u_char *buf,
int len)
{
int i;
for (i=0; i<len; i++)
writel((*buf++ << 16), this->legacy.IO_ADDR_W);
}
static void cmx270_read_buf(struct nand_chip *this, u_char *buf, int len)
{
int i;
for (i=0; i<len; i++)
*buf++ = readl(this->legacy.IO_ADDR_R) >> 16;
}
static inline void nand_cs_on(void)
{
gpio_set_value(GPIO_NAND_CS, 0);
}
static void nand_cs_off(void)
{
dsb();
gpio_set_value(GPIO_NAND_CS, 1);
}
/*
* hardware specific access to control-lines
*/
static void cmx270_hwcontrol(struct nand_chip *this, int dat,
unsigned int ctrl)
{
unsigned int nandaddr = (unsigned int)this->legacy.IO_ADDR_W;
dsb();
if (ctrl & NAND_CTRL_CHANGE) {
if ( ctrl & NAND_ALE )
nandaddr |= (1 << 3);
else
nandaddr &= ~(1 << 3);
if ( ctrl & NAND_CLE )
nandaddr |= (1 << 2);
else
nandaddr &= ~(1 << 2);
if ( ctrl & NAND_NCE )
nand_cs_on();
else
nand_cs_off();
}
dsb();
this->legacy.IO_ADDR_W = (void __iomem*)nandaddr;
if (dat != NAND_CMD_NONE)
writel((dat << 16), this->legacy.IO_ADDR_W);
dsb();
}
/*
* read device ready pin
*/
static int cmx270_device_ready(struct nand_chip *this)
{
dsb();
return (gpio_get_value(GPIO_NAND_RB));
}
/*
* Main initialization routine
*/
static int __init cmx270_init(void)
{
struct nand_chip *this;
int ret;
if (!(machine_is_armcore() && cpu_is_pxa27x()))
return -ENODEV;
ret = gpio_request(GPIO_NAND_CS, "NAND CS");
if (ret) {
pr_warn("CM-X270: failed to request NAND CS gpio\n");
return ret;
}
gpio_direction_output(GPIO_NAND_CS, 1);
ret = gpio_request(GPIO_NAND_RB, "NAND R/B");
if (ret) {
pr_warn("CM-X270: failed to request NAND R/B gpio\n");
goto err_gpio_request;
}
gpio_direction_input(GPIO_NAND_RB);
/* Allocate memory for MTD device structure and private data */
this = kzalloc(sizeof(struct nand_chip), GFP_KERNEL);
if (!this) {
ret = -ENOMEM;
goto err_kzalloc;
}
cmx270_nand_io = ioremap(PXA_CS1_PHYS, 12);
if (!cmx270_nand_io) {
pr_debug("Unable to ioremap NAND device\n");
ret = -EINVAL;
goto err_ioremap;
}
cmx270_nand_mtd = nand_to_mtd(this);
/* Link the private data with the MTD structure */
cmx270_nand_mtd->owner = THIS_MODULE;
/* insert callbacks */
this->legacy.IO_ADDR_R = cmx270_nand_io;
this->legacy.IO_ADDR_W = cmx270_nand_io;
this->legacy.cmd_ctrl = cmx270_hwcontrol;
this->legacy.dev_ready = cmx270_device_ready;
/* 15 us command delay time */
this->legacy.chip_delay = 20;
this->ecc.mode = NAND_ECC_SOFT;
this->ecc.algo = NAND_ECC_HAMMING;
/* read/write functions */
this->legacy.read_byte = cmx270_read_byte;
this->legacy.read_buf = cmx270_read_buf;
this->legacy.write_buf = cmx270_write_buf;
/* Scan to find existence of the device */
ret = nand_scan(this, 1);
if (ret) {
pr_notice("No NAND device\n");
goto err_scan;
}
/* Register the partitions */
ret = mtd_device_register(cmx270_nand_mtd, partition_info,
NUM_PARTITIONS);
if (ret)
goto err_scan;
/* Return happy */
return 0;
err_scan:
iounmap(cmx270_nand_io);
err_ioremap:
kfree(this);
err_kzalloc:
gpio_free(GPIO_NAND_RB);
err_gpio_request:
gpio_free(GPIO_NAND_CS);
return ret;
}
module_init(cmx270_init);
/*
* Clean up routine
*/
static void __exit cmx270_cleanup(void)
{
/* Release resources, unregister device */
nand_release(mtd_to_nand(cmx270_nand_mtd));
gpio_free(GPIO_NAND_RB);
gpio_free(GPIO_NAND_CS);
iounmap(cmx270_nand_io);
kfree(mtd_to_nand(cmx270_nand_mtd));
}
module_exit(cmx270_cleanup);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Mike Rapoport <mike@compulab.co.il>");
MODULE_DESCRIPTION("NAND flash driver for Compulab CM-X270 Module");
...@@ -21,9 +21,9 @@ ...@@ -21,9 +21,9 @@
#include <linux/mtd/rawnand.h> #include <linux/mtd/rawnand.h>
#include <linux/mtd/nand_ecc.h> #include <linux/mtd/nand_ecc.h>
#include <linux/mtd/partitions.h> #include <linux/mtd/partitions.h>
#include <linux/iopoll.h>
#include <asm/msr.h> #include <asm/msr.h>
#include <asm/io.h>
#define NR_CS553X_CONTROLLERS 4 #define NR_CS553X_CONTROLLERS 4
@@ -89,76 +89,151 @@
 #define CS_NAND_ECC_CLRECC (1<<1)
 #define CS_NAND_ECC_ENECC  (1<<0)

-static void cs553x_read_buf(struct nand_chip *this, u_char *buf, int len)
+struct cs553x_nand_controller {
+    struct nand_controller base;
+    struct nand_chip chip;
+    void __iomem *mmio;
+};
+
+static struct cs553x_nand_controller *
+to_cs553x(struct nand_controller *controller)
+{
+    return container_of(controller, struct cs553x_nand_controller, base);
+}
+
+static int cs553x_write_ctrl_byte(struct cs553x_nand_controller *cs553x,
+                                  u32 ctl, u8 data)
 {
+    u8 status;
+    int ret;
+
+    writeb(ctl, cs553x->mmio + MM_NAND_CTL);
+    writeb(data, cs553x->mmio + MM_NAND_IO);
+    ret = readb_poll_timeout_atomic(cs553x->mmio + MM_NAND_STS, status,
+                                    !(status & CS_NAND_CTLR_BUSY), 1,
+                                    100000);
+    if (ret)
+        return ret;
+
+    return 0;
+}
+
+static void cs553x_data_in(struct cs553x_nand_controller *cs553x, void *buf,
+                           unsigned int len)
+{
+    writeb(0, cs553x->mmio + MM_NAND_CTL);
     while (unlikely(len > 0x800)) {
-        memcpy_fromio(buf, this->legacy.IO_ADDR_R, 0x800);
+        memcpy_fromio(buf, cs553x->mmio, 0x800);
         buf += 0x800;
         len -= 0x800;
     }
-    memcpy_fromio(buf, this->legacy.IO_ADDR_R, len);
+    memcpy_fromio(buf, cs553x->mmio, len);
 }

-static void cs553x_write_buf(struct nand_chip *this, const u_char *buf, int len)
+static void cs553x_data_out(struct cs553x_nand_controller *cs553x,
+                            const void *buf, unsigned int len)
 {
+    writeb(0, cs553x->mmio + MM_NAND_CTL);
     while (unlikely(len > 0x800)) {
-        memcpy_toio(this->legacy.IO_ADDR_R, buf, 0x800);
+        memcpy_toio(cs553x->mmio, buf, 0x800);
         buf += 0x800;
         len -= 0x800;
     }
-    memcpy_toio(this->legacy.IO_ADDR_R, buf, len);
+    memcpy_toio(cs553x->mmio, buf, len);
 }

-static unsigned char cs553x_read_byte(struct nand_chip *this)
+static int cs553x_wait_ready(struct cs553x_nand_controller *cs553x,
+                             unsigned int timeout_ms)
 {
-    return readb(this->legacy.IO_ADDR_R);
+    u8 mask = CS_NAND_CTLR_BUSY | CS_NAND_STS_FLASH_RDY;
+    u8 status;
+
+    return readb_poll_timeout(cs553x->mmio + MM_NAND_STS, status,
+                              (status & mask) == CS_NAND_STS_FLASH_RDY, 100,
+                              timeout_ms * 1000);
 }

-static void cs553x_write_byte(struct nand_chip *this, u_char byte)
+static int cs553x_exec_instr(struct cs553x_nand_controller *cs553x,
+                             const struct nand_op_instr *instr)
 {
-    int i = 100000;
+    unsigned int i;
+    int ret = 0;
+
+    switch (instr->type) {
+    case NAND_OP_CMD_INSTR:
+        ret = cs553x_write_ctrl_byte(cs553x, CS_NAND_CTL_CLE,
+                                     instr->ctx.cmd.opcode);
+        break;
+
+    case NAND_OP_ADDR_INSTR:
+        for (i = 0; i < instr->ctx.addr.naddrs; i++) {
+            ret = cs553x_write_ctrl_byte(cs553x, CS_NAND_CTL_ALE,
+                                         instr->ctx.addr.addrs[i]);
+            if (ret)
+                break;
+        }
+        break;
+
+    case NAND_OP_DATA_IN_INSTR:
+        cs553x_data_in(cs553x, instr->ctx.data.buf.in,
+                       instr->ctx.data.len);
+        break;
+
+    case NAND_OP_DATA_OUT_INSTR:
+        cs553x_data_out(cs553x, instr->ctx.data.buf.out,
+                        instr->ctx.data.len);
+        break;

-    while (i && readb(this->legacy.IO_ADDR_R + MM_NAND_STS) & CS_NAND_CTLR_BUSY) {
-        udelay(1);
-        i--;
+    case NAND_OP_WAITRDY_INSTR:
+        ret = cs553x_wait_ready(cs553x, instr->ctx.waitrdy.timeout_ms);
+        break;
     }
-    writeb(byte, this->legacy.IO_ADDR_W + 0x801);
+
+    if (instr->delay_ns)
+        ndelay(instr->delay_ns);
+
+    return ret;
 }

-static void cs553x_hwcontrol(struct nand_chip *this, int cmd,
-                             unsigned int ctrl)
+static int cs553x_exec_op(struct nand_chip *this,
+                          const struct nand_operation *op,
+                          bool check_only)
 {
-    void __iomem *mmio_base = this->legacy.IO_ADDR_R;
-    if (ctrl & NAND_CTRL_CHANGE) {
-        unsigned char ctl = (ctrl & ~NAND_CTRL_CHANGE ) ^ 0x01;
-        writeb(ctl, mmio_base + MM_NAND_CTL);
+    struct cs553x_nand_controller *cs553x = to_cs553x(this->controller);
+    unsigned int i;
+    int ret;
+
+    if (check_only)
+        return true;
+
+    /* De-assert the CE pin */
+    writeb(0, cs553x->mmio + MM_NAND_CTL);
+    for (i = 0; i < op->ninstrs; i++) {
+        ret = cs553x_exec_instr(cs553x, &op->instrs[i]);
+        if (ret)
+            break;
     }
-    if (cmd != NAND_CMD_NONE)
-        cs553x_write_byte(this, cmd);
-}

-static int cs553x_device_ready(struct nand_chip *this)
-{
-    void __iomem *mmio_base = this->legacy.IO_ADDR_R;
-    unsigned char foo = readb(mmio_base + MM_NAND_STS);
-    return (foo & CS_NAND_STS_FLASH_RDY) && !(foo & CS_NAND_CTLR_BUSY);
+    /* Re-assert the CE pin. */
+    writeb(CS_NAND_CTL_CE, cs553x->mmio + MM_NAND_CTL);
+
+    return ret;
 }

 static void cs_enable_hwecc(struct nand_chip *this, int mode)
 {
-    void __iomem *mmio_base = this->legacy.IO_ADDR_R;
-    writeb(0x07, mmio_base + MM_NAND_ECC_CTL);
+    struct cs553x_nand_controller *cs553x = to_cs553x(this->controller);
+
+    writeb(0x07, cs553x->mmio + MM_NAND_ECC_CTL);
 }

 static int cs_calculate_ecc(struct nand_chip *this, const u_char *dat,
                             u_char *ecc_code)
 {
+    struct cs553x_nand_controller *cs553x = to_cs553x(this->controller);
     uint32_t ecc;
-    void __iomem *mmio_base = this->legacy.IO_ADDR_R;
-    ecc = readl(mmio_base + MM_NAND_STS);
+
+    ecc = readl(cs553x->mmio + MM_NAND_STS);
     ecc_code[1] = ecc >> 8;
     ecc_code[0] = ecc >> 16;
@@ -166,10 +241,15 @@ static int cs_calculate_ecc(struct nand_chip *this, const u_char *dat,
     return 0;
 }

-static struct mtd_info *cs553x_mtd[4];
+static struct cs553x_nand_controller *controllers[4];
+
+static const struct nand_controller_ops cs553x_nand_controller_ops = {
+    .exec_op = cs553x_exec_op,
+};

 static int __init cs553x_init_one(int cs, int mmio, unsigned long adr)
 {
+    struct cs553x_nand_controller *controller;
     int err = 0;
     struct nand_chip *this;
     struct mtd_info *new_mtd;
@@ -183,33 +263,29 @@ static int __init cs553x_init_one(int cs, int mmio, unsigned long adr)
     }

     /* Allocate memory for MTD device structure and private data */
-    this = kzalloc(sizeof(struct nand_chip), GFP_KERNEL);
-    if (!this) {
+    controller = kzalloc(sizeof(*controller), GFP_KERNEL);
+    if (!controller) {
         err = -ENOMEM;
         goto out;
     }

+    this = &controller->chip;
+    nand_controller_init(&controller->base);
+    controller->base.ops = &cs553x_nand_controller_ops;
+    this->controller = &controller->base;
+
     new_mtd = nand_to_mtd(this);

     /* Link the private data with the MTD structure */
     new_mtd->owner = THIS_MODULE;

     /* map physical address */
-    this->legacy.IO_ADDR_R = this->legacy.IO_ADDR_W = ioremap(adr, 4096);
-    if (!this->legacy.IO_ADDR_R) {
+    controller->mmio = ioremap(adr, 4096);
+    if (!controller->mmio) {
         pr_warn("ioremap cs553x NAND @0x%08lx failed\n", adr);
         err = -EIO;
         goto out_mtd;
     }

-    this->legacy.cmd_ctrl = cs553x_hwcontrol;
-    this->legacy.dev_ready = cs553x_device_ready;
-    this->legacy.read_byte = cs553x_read_byte;
-    this->legacy.read_buf = cs553x_read_buf;
-    this->legacy.write_buf = cs553x_write_buf;
-    this->legacy.chip_delay = 0;
-
     this->ecc.mode = NAND_ECC_HW;
     this->ecc.size = 256;
     this->ecc.bytes = 3;
@@ -232,15 +308,15 @@ static int __init cs553x_init_one(int cs, int mmio, unsigned long adr)
     if (err)
         goto out_free;

-    cs553x_mtd[cs] = new_mtd;
+    controllers[cs] = controller;
     goto out;

 out_free:
     kfree(new_mtd->name);
 out_ior:
-    iounmap(this->legacy.IO_ADDR_R);
+    iounmap(controller->mmio);
 out_mtd:
-    kfree(this);
+    kfree(controller);
 out:
     return err;
 }
@@ -295,9 +371,10 @@ static int __init cs553x_init(void)
     /* Register all devices together here. This means we can easily hack it to
        do mtdconcat etc. if we want to. */
     for (i = 0; i < NR_CS553X_CONTROLLERS; i++) {
-        if (cs553x_mtd[i]) {
+        if (controllers[i]) {
             /* If any devices registered, return success. Else the last error. */
-            mtd_device_register(cs553x_mtd[i], NULL, 0);
+            mtd_device_register(nand_to_mtd(&controllers[i]->chip),
+                                NULL, 0);
             err = 0;
         }
     }
@@ -312,26 +389,26 @@ static void __exit cs553x_cleanup(void)
     int i;

     for (i = 0; i < NR_CS553X_CONTROLLERS; i++) {
-        struct mtd_info *mtd = cs553x_mtd[i];
-        struct nand_chip *this;
-        void __iomem *mmio_base;
+        struct cs553x_nand_controller *controller = controllers[i];
+        struct nand_chip *this = &controller->chip;
+        struct mtd_info *mtd = nand_to_mtd(this);
+        int ret;

         if (!mtd)
             continue;

-        this = mtd_to_nand(mtd);
-        mmio_base = this->legacy.IO_ADDR_R;
-
         /* Release resources, unregister device */
-        nand_release(this);
+        ret = mtd_device_unregister(mtd);
+        WARN_ON(ret);
+        nand_cleanup(this);
         kfree(mtd->name);
-        cs553x_mtd[i] = NULL;
+        controllers[i] = NULL;

         /* unmap physical address */
-        iounmap(mmio_base);
+        iounmap(controller->mmio);

         /* Free the MTD device structure */
-        kfree(this);
+        kfree(controller);
     }
 }
......
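To illustrate the new interface (a sketch under assumptions, not code from this commit): instead of going through the removed cmd_ctrl()/read_byte() hooks one byte at a time, the core now hands ->exec_op() a whole operation described as an array of nand_op_instr entries, which cs553x_exec_op()/cs553x_exec_instr() walk. The field names below come from struct nand_op_instr as used in the diff; the READ ID operation itself is only a hypothetical example:

/* Hypothetical instruction list the core could pass to ->exec_op() */
u8 id[8];
struct nand_op_instr instrs[] = {
    { .type = NAND_OP_CMD_INSTR,     .ctx.cmd  = { .opcode = NAND_CMD_READID } },
    { .type = NAND_OP_ADDR_INSTR,    .ctx.addr = { .naddrs = 1, .addrs = (u8 []){ 0x00 } } },
    { .type = NAND_OP_DATA_IN_INSTR, .ctx.data = { .len = sizeof(id), .buf.in = id } },
};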
@@ -764,6 +764,7 @@ static int denali_write_page(struct nand_chip *chip, const u8 *buf,
 static int denali_setup_data_interface(struct nand_chip *chip, int chipnr,
                                        const struct nand_data_interface *conf)
 {
+    static const unsigned int data_setup_on_host = 10000;
     struct denali_controller *denali = to_denali_controller(chip);
     struct denali_chip_sel *sel;
     const struct nand_sdr_timings *timings;
@@ -796,15 +797,6 @@ static int denali_setup_data_interface(struct nand_chip *chip, int chipnr,
     sel = &to_denali_chip(chip)->sels[chipnr];

-    /* tREA -> ACC_CLKS */
-    acc_clks = DIV_ROUND_UP(timings->tREA_max, t_x);
-    acc_clks = min_t(int, acc_clks, ACC_CLKS__VALUE);
-
-    tmp = ioread32(denali->reg + ACC_CLKS);
-    tmp &= ~ACC_CLKS__VALUE;
-    tmp |= FIELD_PREP(ACC_CLKS__VALUE, acc_clks);
-    sel->acc_clks = tmp;
-
     /* tRHW -> RE_2_WE */
     re_2_we = DIV_ROUND_UP(timings->tRHW_min, t_x);
     re_2_we = min_t(int, re_2_we, RE_2_WE__VALUE);
@@ -862,14 +854,45 @@ static int denali_setup_data_interface(struct nand_chip *chip, int chipnr,
     tmp |= FIELD_PREP(RDWR_EN_HI_CNT__VALUE, rdwr_en_hi);
     sel->rdwr_en_hi_cnt = tmp;

-    /* tRP, tWP -> RDWR_EN_LO_CNT */
+    /*
+     * tREA -> ACC_CLKS
+     * tRP, tWP, tRHOH, tRC, tWC -> RDWR_EN_LO_CNT
+     */
+
+    /*
+     * Determine the minimum of acc_clks to meet the setup timing when
+     * capturing the incoming data.
+     *
+     * The delay on the chip side is well-defined as tREA, but we need to
+     * take additional delay into account. This includes a certain degree
+     * of unknowledge, such as signal propagation delays on the PCB and
+     * in the SoC, load capacity of the I/O pins, etc.
+     */
+    acc_clks = DIV_ROUND_UP(timings->tREA_max + data_setup_on_host, t_x);
+
+    /* Determine the minimum of rdwr_en_lo_cnt from RE#/WE# pulse width */
     rdwr_en_lo = DIV_ROUND_UP(max(timings->tRP_min, timings->tWP_min), t_x);
+
+    /* Extend rdwr_en_lo to meet the data hold timing */
+    rdwr_en_lo = max_t(int, rdwr_en_lo,
+                       acc_clks - timings->tRHOH_min / t_x);
+
+    /* Extend rdwr_en_lo to meet the requirement for RE#/WE# cycle time */
     rdwr_en_lo_hi = DIV_ROUND_UP(max(timings->tRC_min, timings->tWC_min),
                                  t_x);
     rdwr_en_lo_hi = max_t(int, rdwr_en_lo_hi, mult_x);
     rdwr_en_lo = max(rdwr_en_lo, rdwr_en_lo_hi - rdwr_en_hi);
     rdwr_en_lo = min_t(int, rdwr_en_lo, RDWR_EN_LO_CNT__VALUE);

+    /* Center the data latch timing for extra safety */
+    acc_clks = (acc_clks + rdwr_en_lo +
+                DIV_ROUND_UP(timings->tRHOH_min, t_x)) / 2;
+
+    acc_clks = min_t(int, acc_clks, ACC_CLKS__VALUE);
+    tmp = ioread32(denali->reg + ACC_CLKS);
+    tmp &= ~ACC_CLKS__VALUE;
+    tmp |= FIELD_PREP(ACC_CLKS__VALUE, acc_clks);
+    sel->acc_clks = tmp;
+
     tmp = ioread32(denali->reg + RDWR_EN_LO_CNT);
     tmp &= ~RDWR_EN_LO_CNT__VALUE;
     tmp |= FIELD_PREP(RDWR_EN_LO_CNT__VALUE, rdwr_en_lo);
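As a rough worked example of the new latch-point computation (the numbers are made up, they do not come from the patch; struct nand_sdr_timings fields are expressed in picoseconds), the standalone sketch below reproduces the arithmetic outside the driver:

/* Standalone illustration of the acc_clks / rdwr_en_lo math above */
#include <stdio.h>

#define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))
#define MAX(a, b)           ((a) > (b) ? (a) : (b))

int main(void)
{
    unsigned int t_x = 10000;                 /* assumed 10 ns NAND clock period */
    unsigned int data_setup_on_host = 10000;  /* extra host-side margin, as in the patch */
    unsigned int tREA_max = 20000, tRHOH_min = 15000;
    unsigned int tRP_min = 12000, tWP_min = 12000;

    /* setup time: chip tREA plus host-side margin */
    unsigned int acc_clks = DIV_ROUND_UP(tREA_max + data_setup_on_host, t_x); /* 3 */
    unsigned int rdwr_en_lo = DIV_ROUND_UP(MAX(tRP_min, tWP_min), t_x);       /* 2 */

    /* extend the RE#/WE# low time so the data is still valid when latched */
    if (rdwr_en_lo < acc_clks - tRHOH_min / t_x)
        rdwr_en_lo = acc_clks - tRHOH_min / t_x;

    /* center the latch point between data-valid and data-hold */
    acc_clks = (acc_clks + rdwr_en_lo + DIV_ROUND_UP(tRHOH_min, t_x)) / 2;    /* 3 */

    printf("acc_clks=%u rdwr_en_lo=%u\n", acc_clks, rdwr_en_lo);
    return 0;
}

The driver additionally extends rdwr_en_lo for the RE#/WE# cycle time and clamps both values to their register field widths; those steps are omitted here for brevity.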
@@ -1203,7 +1226,7 @@ int denali_chip_init(struct denali_controller *denali,
     mtd->name = "denali-nand";

     if (denali->dma_avail) {
-        chip->options |= NAND_USE_BOUNCE_BUFFER;
+        chip->options |= NAND_USES_DMA;
         chip->buf_align = 16;
     }
@@ -1336,10 +1359,17 @@ EXPORT_SYMBOL(denali_init);
 void denali_remove(struct denali_controller *denali)
 {
-    struct denali_chip *dchip;
+    struct denali_chip *dchip, *tmp;
+    struct nand_chip *chip;
+    int ret;

-    list_for_each_entry(dchip, &denali->chips, node)
-        nand_release(&dchip->chip);
+    list_for_each_entry_safe(dchip, tmp, &denali->chips, node) {
+        chip = &dchip->chip;
+        ret = mtd_device_unregister(nand_to_mtd(chip));
+        WARN_ON(ret);
+        nand_cleanup(chip);
+        list_del(&dchip->node);
+    }

     denali_disable_irq(denali);
 }
......
@@ -956,8 +956,13 @@ static int fsl_elbc_nand_remove(struct platform_device *pdev)
 {
     struct fsl_elbc_fcm_ctrl *elbc_fcm_ctrl = fsl_lbc_ctrl_dev->nand;
     struct fsl_elbc_mtd *priv = dev_get_drvdata(&pdev->dev);
+    struct nand_chip *chip = &priv->chip;
+    int ret;
+
+    ret = mtd_device_unregister(nand_to_mtd(chip));
+    WARN_ON(ret);
+    nand_cleanup(chip);

-    nand_release(&priv->chip);
     fsl_elbc_chip_remove(priv);

     mutex_lock(&fsl_elbc_nand_mutex);
......
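The same conversion repeats across the drivers in this pull: nand_release() was a wrapper that unregistered the MTD device and then ran nand_cleanup(), and callers now open-code the two steps so the return value of mtd_device_unregister() can at least be warned about. A generic remove path therefore ends up looking roughly like this (a sketch with a made-up host structure, not code from any one driver):

static int example_nand_remove(struct platform_device *pdev)
{
    /* my_nand_host is hypothetical; each driver uses its own private struct */
    struct my_nand_host *host = platform_get_drvdata(pdev);
    struct nand_chip *chip = &host->chip;
    int ret;

    /* step 1: detach the MTD device; may fail if it is still in use */
    ret = mtd_device_unregister(nand_to_mtd(chip));
    WARN_ON(ret);

    /* step 2: free everything nand_scan() allocated */
    nand_cleanup(chip);

    return 0;
}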
@@ -1093,8 +1093,13 @@ static int fsl_ifc_nand_probe(struct platform_device *dev)
 static int fsl_ifc_nand_remove(struct platform_device *dev)
 {
     struct fsl_ifc_mtd *priv = dev_get_drvdata(&dev->dev);
+    struct nand_chip *chip = &priv->chip;
+    int ret;
+
+    ret = mtd_device_unregister(nand_to_mtd(chip));
+    WARN_ON(ret);
+    nand_cleanup(chip);

-    nand_release(&priv->chip);
     fsl_ifc_chip_remove(priv);

     mutex_lock(&fsl_ifc_nand_mutex);
......
@@ -317,10 +317,13 @@ static int fun_probe(struct platform_device *ofdev)
 static int fun_remove(struct platform_device *ofdev)
 {
     struct fsl_upm_nand *fun = dev_get_drvdata(&ofdev->dev);
-    struct mtd_info *mtd = nand_to_mtd(&fun->chip);
-    int i;
+    struct nand_chip *chip = &fun->chip;
+    struct mtd_info *mtd = nand_to_mtd(chip);
+    int ret, i;

-    nand_release(&fun->chip);
+    ret = mtd_device_unregister(mtd);
+    WARN_ON(ret);
+    nand_cleanup(chip);

     kfree(mtd->name);

     for (i = 0; i < fun->mchip_count; i++) {
......
@@ -608,6 +608,9 @@ static int fsmc_exec_op(struct nand_chip *chip, const struct nand_operation *op,
     unsigned int op_id;
     int i;

+    if (check_only)
+        return 0;
+
     pr_debug("Executing operation [%d instructions]:\n", op->ninstrs);

     for (op_id = 0; op_id < op->ninstrs; op_id++) {
@@ -691,7 +694,7 @@ static int fsmc_read_page_hwecc(struct nand_chip *chip, u8 *buf,
     for (i = 0, s = 0; s < eccsteps; s++, i += eccbytes, p += eccsize) {
         nand_read_page_op(chip, page, s * eccsize, NULL, 0);
         chip->ecc.hwctl(chip, NAND_ECC_READ);
-        ret = nand_read_data_op(chip, p, eccsize, false);
+        ret = nand_read_data_op(chip, p, eccsize, false, false);
         if (ret)
             return ret;
@@ -809,11 +812,12 @@ static int fsmc_bch8_correct_data(struct nand_chip *chip, u8 *dat,
     i = 0;
     while (num_err--) {
-        change_bit(0, (unsigned long *)&err_idx[i]);
-        change_bit(1, (unsigned long *)&err_idx[i]);
+        err_idx[i] ^= 3;

         if (err_idx[i] < chip->ecc.size * 8) {
-            change_bit(err_idx[i], (unsigned long *)dat);
+            int err = err_idx[i];
+
+            dat[err >> 3] ^= BIT(err & 7);
             i++;
         }
     }
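The rework keeps the same error handling while dropping the atomic, long-aligned bitops: XOR-ing the index with 3 toggles its two least-significant bits exactly like the former change_bit(0)/change_bit(1) pair, and the open-coded byte access flips the faulty bit directly. With a hypothetical index (fragment only, reusing the function's dat buffer):

unsigned int err = 13;          /* made-up error index, 0b1101 */

err ^= 3;                       /* -> 14 (0b1110), same as change_bit(0) + change_bit(1) */
dat[err >> 3] ^= BIT(err & 7);  /* flip bit 6 of dat[1], i.e. dat[1] ^= 0x40 */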
@@ -1132,7 +1136,12 @@ static int fsmc_nand_remove(struct platform_device *pdev)
     struct fsmc_nand_data *host = platform_get_drvdata(pdev);

     if (host) {
-        nand_release(&host->nand);
+        struct nand_chip *chip = &host->nand;
+        int ret;
+
+        ret = mtd_device_unregister(nand_to_mtd(chip));
+        WARN_ON(ret);
+        nand_cleanup(chip);
         fsmc_nand_disable(host);

         if (host->mode == USE_DMA_ACCESS) {
......
@@ -190,8 +190,12 @@ gpio_nand_get_io_sync(struct platform_device *pdev)
 static int gpio_nand_remove(struct platform_device *pdev)
 {
     struct gpiomtd *gpiomtd = platform_get_drvdata(pdev);
+    struct nand_chip *chip = &gpiomtd->nand_chip;
+    int ret;

-    nand_release(&gpiomtd->nand_chip);
+    ret = mtd_device_unregister(nand_to_mtd(chip));
+    WARN_ON(ret);
+    nand_cleanup(chip);

     /* Enable write protection and disable the chip */
     if (gpiomtd->nwp && !IS_ERR(gpiomtd->nwp))
......
@@ -540,8 +540,10 @@ static int bch_set_geometry(struct gpmi_nand_data *this)
         return ret;

     ret = pm_runtime_get_sync(this->dev);
-    if (ret < 0)
+    if (ret < 0) {
+        pm_runtime_put_autosuspend(this->dev);
         return ret;
+    }

     /*
      * Due to erratum #2847 of the MX23, the BCH cannot be soft reset on this
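Both runtime-PM fixes in this pull (here and in omap_elm further down) address the same pitfall: pm_runtime_get_sync() increments the device's usage counter even when it returns an error, so a bare "return ret" on failure leaks a reference. The canonical shape of the error path is roughly the following sketch (not taken from either driver):

ret = pm_runtime_get_sync(dev);
if (ret < 0) {
    pm_runtime_put_noidle(dev);     /* drop the reference taken above */
    return ret;
}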
@@ -834,158 +836,6 @@ static bool prepare_data_dma(struct gpmi_nand_data *this, const void *buf,
     return false;
 }
/**
* gpmi_copy_bits - copy bits from one memory region to another
* @dst: destination buffer
* @dst_bit_off: bit offset we're starting to write at
* @src: source buffer
* @src_bit_off: bit offset we're starting to read from
* @nbits: number of bits to copy
*
* This functions copies bits from one memory region to another, and is used by
* the GPMI driver to copy ECC sections which are not guaranteed to be byte
* aligned.
*
* src and dst should not overlap.
*
*/
static void gpmi_copy_bits(u8 *dst, size_t dst_bit_off, const u8 *src,
size_t src_bit_off, size_t nbits)
{
size_t i;
size_t nbytes;
u32 src_buffer = 0;
size_t bits_in_src_buffer = 0;
if (!nbits)
return;
/*
* Move src and dst pointers to the closest byte pointer and store bit
* offsets within a byte.
*/
src += src_bit_off / 8;
src_bit_off %= 8;
dst += dst_bit_off / 8;
dst_bit_off %= 8;
/*
* Initialize the src_buffer value with bits available in the first
* byte of data so that we end up with a byte aligned src pointer.
*/
if (src_bit_off) {
src_buffer = src[0] >> src_bit_off;
if (nbits >= (8 - src_bit_off)) {
bits_in_src_buffer += 8 - src_bit_off;
} else {
src_buffer &= GENMASK(nbits - 1, 0);
bits_in_src_buffer += nbits;
}
nbits -= bits_in_src_buffer;
src++;
}
/* Calculate the number of bytes that can be copied from src to dst. */
nbytes = nbits / 8;
/* Try to align dst to a byte boundary. */
if (dst_bit_off) {
if (bits_in_src_buffer < (8 - dst_bit_off) && nbytes) {
src_buffer |= src[0] << bits_in_src_buffer;
bits_in_src_buffer += 8;
src++;
nbytes--;
}
if (bits_in_src_buffer >= (8 - dst_bit_off)) {
dst[0] &= GENMASK(dst_bit_off - 1, 0);
dst[0] |= src_buffer << dst_bit_off;
src_buffer >>= (8 - dst_bit_off);
bits_in_src_buffer -= (8 - dst_bit_off);
dst_bit_off = 0;
dst++;
if (bits_in_src_buffer > 7) {
bits_in_src_buffer -= 8;
dst[0] = src_buffer;
dst++;
src_buffer >>= 8;
}
}
}
if (!bits_in_src_buffer && !dst_bit_off) {
/*
* Both src and dst pointers are byte aligned, thus we can
* just use the optimized memcpy function.
*/
if (nbytes)
memcpy(dst, src, nbytes);
} else {
/*
* src buffer is not byte aligned, hence we have to copy each
* src byte to the src_buffer variable before extracting a byte
* to store in dst.
*/
for (i = 0; i < nbytes; i++) {
src_buffer |= src[i] << bits_in_src_buffer;
dst[i] = src_buffer;
src_buffer >>= 8;
}
}
/* Update dst and src pointers */
dst += nbytes;
src += nbytes;
/*
* nbits is the number of remaining bits. It should not exceed 8 as
* we've already copied as much bytes as possible.
*/
nbits %= 8;
/*
* If there's no more bits to copy to the destination and src buffer
* was already byte aligned, then we're done.
*/
if (!nbits && !bits_in_src_buffer)
return;
/* Copy the remaining bits to src_buffer */
if (nbits)
src_buffer |= (*src & GENMASK(nbits - 1, 0)) <<
bits_in_src_buffer;
bits_in_src_buffer += nbits;
/*
* In case there were not enough bits to get a byte aligned dst buffer
* prepare the src_buffer variable to match the dst organization (shift
* src_buffer by dst_bit_off and retrieve the least significant bits
* from dst).
*/
if (dst_bit_off)
src_buffer = (src_buffer << dst_bit_off) |
(*dst & GENMASK(dst_bit_off - 1, 0));
bits_in_src_buffer += dst_bit_off;
/*
* Keep most significant bits from dst if we end up with an unaligned
* number of bits.
*/
nbytes = bits_in_src_buffer / 8;
if (bits_in_src_buffer % 8) {
src_buffer |= (dst[nbytes] &
GENMASK(7, bits_in_src_buffer % 8)) <<
(nbytes * 8);
nbytes++;
}
/* Copy the remaining bytes to dst */
for (i = 0; i < nbytes; i++) {
dst[i] = src_buffer;
src_buffer >>= 8;
}
}
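The open-coded helper above is dropped in favour of the core's new nand_extract_bits(), introduced elsewhere in this pull. Assuming the prototype added by that series (same argument order and bit-granularity semantics as gpmi_copy_bits()), the conversion is mechanical:

/* Assumed prototype, see include/linux/mtd/rawnand.h in this series */
void nand_extract_bits(u8 *dst, unsigned int dst_off,
                       const u8 *src, unsigned int src_off,
                       unsigned int nbits);

/* A former call such as ... */
gpmi_copy_bits(buf, dst_bit_off, tmp_buf, src_bit_off, nbits);
/* ... simply becomes */
nand_extract_bits(buf, dst_bit_off, tmp_buf, src_bit_off, nbits);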
/* add our owner bbt descriptor */
static uint8_t scan_ff_pattern[] = { 0xff };
static struct nand_bbt_descr gpmi_bbt_descr = {
@@ -1713,7 +1563,7 @@ static int gpmi_ecc_write_oob(struct nand_chip *chip, int page)
  * inline (interleaved with payload DATA), and do not align data chunk on
  * byte boundaries.
  * We thus need to take care moving the payload data and ECC bits stored in the
- * page into the provided buffers, which is why we're using gpmi_copy_bits.
+ * page into the provided buffers, which is why we're using nand_extract_bits().
  *
  * See set_geometry_by_ecc_info inline comments to have a full description
  * of the layout used by the GPMI controller.
@@ -1762,9 +1612,8 @@ static int gpmi_ecc_read_page_raw(struct nand_chip *chip, uint8_t *buf,
     /* Extract interleaved payload data and ECC bits */
     for (step = 0; step < nfc_geo->ecc_chunk_count; step++) {
         if (buf)
-            gpmi_copy_bits(buf, step * eccsize * 8,
-                           tmp_buf, src_bit_off,
-                           eccsize * 8);
+            nand_extract_bits(buf, step * eccsize, tmp_buf,
+                              src_bit_off, eccsize * 8);
         src_bit_off += eccsize * 8;

         /* Align last ECC block to align a byte boundary */
@@ -1773,9 +1622,8 @@ static int gpmi_ecc_read_page_raw(struct nand_chip *chip, uint8_t *buf,
             eccbits += 8 - ((oob_bit_off + eccbits) % 8);

         if (oob_required)
-            gpmi_copy_bits(oob, oob_bit_off,
-                           tmp_buf, src_bit_off,
-                           eccbits);
+            nand_extract_bits(oob, oob_bit_off, tmp_buf,
+                              src_bit_off, eccbits);

         src_bit_off += eccbits;
         oob_bit_off += eccbits;
@@ -1800,7 +1648,7 @@ static int gpmi_ecc_read_page_raw(struct nand_chip *chip, uint8_t *buf,
  * inline (interleaved with payload DATA), and do not align data chunk on
  * byte boundaries.
  * We thus need to take care moving the OOB area at the right place in the
- * final page, which is why we're using gpmi_copy_bits.
+ * final page, which is why we're using nand_extract_bits().
  *
  * See set_geometry_by_ecc_info inline comments to have a full description
  * of the layout used by the GPMI controller.
@@ -1839,8 +1687,8 @@ static int gpmi_ecc_write_page_raw(struct nand_chip *chip, const uint8_t *buf,
     /* Interleave payload data and ECC bits */
     for (step = 0; step < nfc_geo->ecc_chunk_count; step++) {
         if (buf)
-            gpmi_copy_bits(tmp_buf, dst_bit_off,
-                           buf, step * eccsize * 8, eccsize * 8);
+            nand_extract_bits(tmp_buf, dst_bit_off, buf,
+                              step * eccsize * 8, eccsize * 8);
         dst_bit_off += eccsize * 8;

         /* Align last ECC block to align a byte boundary */
@@ -1849,8 +1697,8 @@ static int gpmi_ecc_write_page_raw(struct nand_chip *chip, const uint8_t *buf,
             eccbits += 8 - ((oob_bit_off + eccbits) % 8);

         if (oob_required)
-            gpmi_copy_bits(tmp_buf, dst_bit_off,
-                           oob, oob_bit_off, eccbits);
+            nand_extract_bits(tmp_buf, dst_bit_off, oob,
+                              oob_bit_off, eccbits);
         dst_bit_off += eccbits;
         oob_bit_off += eccbits;
@@ -2408,6 +2256,9 @@ static int gpmi_nfc_exec_op(struct nand_chip *chip,
     struct completion *completion;
     unsigned long to;

+    if (check_only)
+        return 0;
+
     this->ntransfers = 0;
     for (i = 0; i < GPMI_MAX_TRANSFERS; i++)
         this->transfers[i].direction = DMA_NONE;
@@ -2658,7 +2509,7 @@ static int gpmi_nand_probe(struct platform_device *pdev)
     ret = __gpmi_enable_clk(this, true);
     if (ret)
-        goto exit_nfc_init;
+        goto exit_acquire_resources;

     pm_runtime_set_autosuspend_delay(&pdev->dev, 500);
     pm_runtime_use_autosuspend(&pdev->dev);
@@ -2693,11 +2544,15 @@ static int gpmi_nand_remove(struct platform_device *pdev)
 static int gpmi_nand_remove(struct platform_device *pdev)
 {
     struct gpmi_nand_data *this = platform_get_drvdata(pdev);
+    struct nand_chip *chip = &this->nand;
+    int ret;

     pm_runtime_put_sync(&pdev->dev);
     pm_runtime_disable(&pdev->dev);

-    nand_release(&this->nand);
+    ret = mtd_device_unregister(nand_to_mtd(chip));
+    WARN_ON(ret);
+    nand_cleanup(chip);
     gpmi_free_dma_buffer(this);
     release_resources(this);
     return 0;
......
@@ -806,8 +806,12 @@ static int hisi_nfc_probe(struct platform_device *pdev)
 static int hisi_nfc_remove(struct platform_device *pdev)
 {
     struct hinfc_host *host = platform_get_drvdata(pdev);
+    struct nand_chip *chip = &host->chip;
+    int ret;

-    nand_release(&host->chip);
+    ret = mtd_device_unregister(nand_to_mtd(chip));
+    WARN_ON(ret);
+    nand_cleanup(chip);

     return 0;
 }
......
@@ -75,6 +75,9 @@ extern const struct nand_manufacturer_ops micron_nand_manuf_ops;
 extern const struct nand_manufacturer_ops samsung_nand_manuf_ops;
 extern const struct nand_manufacturer_ops toshiba_nand_manuf_ops;

+/* MLC pairing schemes */
+extern const struct mtd_pairing_scheme dist3_pairing_scheme;
+
 /* Core functions */
 const struct nand_manufacturer *nand_get_manufacturer(u8 id);
 int nand_bbm_get_next_page(struct nand_chip *chip, int page);
@@ -106,6 +109,15 @@ static inline bool nand_has_exec_op(struct nand_chip *chip)
     return true;
 }

+static inline int nand_check_op(struct nand_chip *chip,
+                                const struct nand_operation *op)
+{
+    if (!nand_has_exec_op(chip))
+        return 0;
+
+    return chip->controller->ops->exec_op(chip, op, true);
+}
+
 static inline int nand_exec_op(struct nand_chip *chip,
                                const struct nand_operation *op)
 {
......
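The new nand_check_op() helper lets callers ask a controller, through the check_only pass of ->exec_op(), whether an operation is supported at all before issuing it. A hypothetical caller (not from this commit) would use it like this:

static int example_issue_if_supported(struct nand_chip *chip,
                                      const struct nand_operation *op)
{
    int ret;

    /* dry run: nothing is sent to the chip, only feasibility is checked */
    ret = nand_check_op(chip, op);
    if (ret)
        return ret;

    return nand_exec_op(chip, op);
}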
@@ -826,8 +826,13 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
 static int lpc32xx_nand_remove(struct platform_device *pdev)
 {
     struct lpc32xx_nand_host *host = platform_get_drvdata(pdev);
+    struct nand_chip *chip = &host->nand_chip;
+    int ret;
+
+    ret = mtd_device_unregister(nand_to_mtd(chip));
+    WARN_ON(ret);
+    nand_cleanup(chip);

-    nand_release(&host->nand_chip);
     free_irq(host->irq, host);
     if (use_dma)
         dma_release_channel(host->dma_chan);
......
@@ -947,8 +947,12 @@ static int lpc32xx_nand_remove(struct platform_device *pdev)
 {
     uint32_t tmp;
     struct lpc32xx_nand_host *host = platform_get_drvdata(pdev);
+    struct nand_chip *chip = &host->nand_chip;
+    int ret;

-    nand_release(&host->nand_chip);
+    ret = mtd_device_unregister(nand_to_mtd(chip));
+    WARN_ON(ret);
+    nand_cleanup(chip);
     dma_release_channel(host->dma_chan);

     /* Force CE high */
......
@@ -411,6 +411,7 @@ static int elm_probe(struct platform_device *pdev)
     pm_runtime_enable(&pdev->dev);
     if (pm_runtime_get_sync(&pdev->dev) < 0) {
         ret = -EINVAL;
+        pm_runtime_put_sync(&pdev->dev);
         pm_runtime_disable(&pdev->dev);
         dev_err(&pdev->dev, "can't enable clock\n");
         return ret;
......