- 21 Jul, 2011 11 commits
-
-
Guennadi Liakhovetski authored
Calling mmc_request_done() under a spinlock with interrupts disabled leads to recursive spinlocking on the request retry path and to scheduling in atomic context. This patch fixes both problems by moving the mmc_request_done() call to the scheduler workqueue. Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Signed-off-by: Chris Ball <cjb@laptop.org>
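A minimal sketch of the approach, assuming a hypothetical driver called my_host (not the actual sh_mmcif code): completion is deferred to a workqueue so that mmc_request_done() runs in process context rather than under the interrupt-time spinlock.

```c
#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>

struct my_host {
	struct mmc_host		*mmc;
	struct mmc_request	*mrq;
	struct work_struct	done_work;	/* initialised with INIT_WORK() at probe time */
};

/* Runs from the scheduler workqueue: no spinlock held, may sleep, may retry. */
static void my_done_work(struct work_struct *work)
{
	struct my_host *host = container_of(work, struct my_host, done_work);

	mmc_request_done(host->mmc, host->mrq);
}

/* Called from the IRQ path instead of invoking mmc_request_done() directly. */
static void my_complete_request(struct my_host *host)
{
	schedule_work(&host->done_work);
}
```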
-
Manoj Iyer authored
Ricoh 1180:e823 does not recognize certain types of SD/MMC cards, as reported at http://launchpad.net/bugs/773524. Lowering the SD base clock frequency from 200MHz to 50MHz fixes this issue. This solution was suggested by Koji Matsumuro, Ricoh Company, Ltd. This change has no negative performance effect on standard SD cards, though it's quite possible that there will be one on UHS-1 cards. Signed-off-by: Manoj Iyer <manoj.iyer@canonical.com> Tested-by: Daniel Manrique <daniel.manrique@canonical.com> Cc: Koji Matsumuro <matsumur@nts.ricoh.co.jp> Cc: <stable@kernel.org> Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Adrian Hunter authored
In the case of an I/O error, the DMA will have been cleaned up in the MMC interrupt and the request structure pointer will be null. In that case, it is essential to check if the DMA is over before dereferencing host->mrq->data. Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com> Signed-off-by: Chris Ball <cjb@laptop.org>
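A hedged sketch of that check, using hypothetical host fields (dev, mrq, dma_ch, dma_dir) rather than the real omap_hsmmc structures: the DMA callback bails out when the error path has already torn the request down.

```c
#include <linux/dma-mapping.h>
#include <linux/mmc/core.h>

struct my_host {
	struct device		*dev;
	struct mmc_request	*mrq;		/* NULL once the error path cleaned up */
	int			dma_ch;		/* negative when no DMA is in flight */
	enum dma_data_direction	dma_dir;
};

static void my_dma_callback(void *param)
{
	struct my_host *host = param;

	/* After an I/O error the IRQ handler has already released the DMA
	 * and cleared host->mrq, so verify both before dereferencing. */
	if (host->dma_ch < 0 || !host->mrq || !host->mrq->data)
		return;

	dma_unmap_sg(host->dev, host->mrq->data->sg,
		     host->mrq->data->sg_len, host->dma_dir);
	/* ... complete the data phase ... */
}
```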
-
Andy Shevchenko authored
There are a few places with the same functionality. This patch creates two functions omap_hsmmc_set_bus_width() and omap_hsmmc_set_bus_mode() to do the job. Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko@nokia.com> Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Andy Shevchenko authored
There are two pieces of code which are similar, but not the same, and each of them contains a bug. The SYSCTL register should be read before writing to it in omap_hsmmc_context_restore() to retain the state of the reserved bits. Before setting the clock divisor and DTO bits, the value from the SYSCTL register should be masked properly; we were lucky to have no problems with the DTO bits so far, so make sure the DTO bits are cleared properly in omap_hsmmc_set_ios(). Additionally, get rid of msleep(1); the actual time is rarely higher than 30us on OMAP 3630. The resulting pieces of code are refactored into the omap_hsmmc_set_clock() function. Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko@nokia.com> Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com> Signed-off-by: Chris Ball <cjb@laptop.org>
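An illustrative read-modify-write of such a clock-control register; the MY_SYSCTL offset and field positions are placeholders rather than the real OMAP3 layout. Reserved bits are preserved, and only the divisor and data-timeout fields are replaced.

```c
#include <linux/bits.h>
#include <linux/io.h>

#define MY_SYSCTL	0x12c			/* placeholder register offset */
#define MY_CLKD_MASK	GENMASK(15, 6)		/* clock divisor field (example) */
#define MY_DTO_MASK	GENMASK(19, 16)		/* data timeout field (example) */

struct my_host {
	void __iomem *base;
};

static void my_set_clock(struct my_host *host, unsigned int clkd, unsigned int dto)
{
	u32 regval = readl(host->base + MY_SYSCTL);	/* keep reserved bits */

	regval &= ~(MY_CLKD_MASK | MY_DTO_MASK);	/* clear both fields first */
	regval |= (clkd << 6) & MY_CLKD_MASK;
	regval |= (dto << 16) & MY_DTO_MASK;
	writel(regval, host->base + MY_SYSCTL);
}
```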
-
Andy Shevchenko authored
There is similar code in two functions which enable the clock. Refactor this code to omap_hsmmc_start_clock(). Re-use omap_hsmmc_stop_clock() in omap_hsmmc_context_restore() as well. Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko@nokia.com> Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Andy Shevchenko authored
There are two places where the same calculations are done. Let's split them into a separate function. In addition, simplify by using the DIV_ROUND_UP kernel macro. Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko@nokia.com> Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com> Signed-off-by: Chris Ball <cjb@laptop.org>
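A hypothetical helper showing the DIV_ROUND_UP usage: pick the smallest divisor whose resulting clock does not exceed the requested rate.

```c
#include <linux/kernel.h>	/* DIV_ROUND_UP */

static unsigned int my_calc_divisor(unsigned long base_clk, unsigned long req_clk)
{
	unsigned int dsor = 1;

	if (req_clk)
		/* Rounding up keeps base_clk / dsor <= req_clk. */
		dsor = DIV_ROUND_UP(base_clk, req_clk);

	return dsor;
}
```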
-
Andy Shevchenko authored
Move the min and max frequency constants to the definition block in the source file. Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko@nokia.com> Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Adrian Hunter authored
CERR and BADA were in the wrong place, and there are only 32 bits, not 35. Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com> Reviewed-by: Venkatraman S <svenkatr@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Jaehoon Chung authored
We already check for ongoing async transfers when handling discard requests, but not in mmc_blk_issue_flush(). This patch fixes that omission. Tested with an SDHCI controller and eMMC4.41. Signed-off-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Per Forlin <per.forlin@linaro.org> Cc: <stable@kernel.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
Documentation about the background and the design of mmc non-blocking. Host driver guidelines to minimize request preparation overhead. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Chris Ball <cjb@laptop.org>
-
- 20 Jul, 2011 29 commits
-
-
Balaji T K authored
After the runtime PM conversion of the clock handling, the iclk node is no longer used. However, the fclk node is still used to get the clock rate. Signed-off-by: Balaji T K <balajitk@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Balaji T K authored
* Add runtime pm support to HSMMC host controller. * Use runtime pm API to enable/disable HSMMC clock. * Use runtime autosuspend APIs to enable auto suspend delay. Based on OMAP HSMMC runtime implementation by Kevin Hilman and Kishore Kadiyala. Signed-off-by: Balaji T K <balajitk@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Balaji T K authored
The lazy_disable framework in OMAP HSMMC manages multiple low-power states, and the card is powered off after an inactivity time of 8 seconds. Based on previous discussion on the list, card power (regulator) handling (when to power OFF/ON) should ideally be handled by the core layer. Remove the usage of lazy disable to allow the core layer _only_ to handle card power. With the removal of the lazy disable framework, MMC regulators are left ON until MMC_POWER_OFF via set_ios. Signed-off-by: Balaji T K <balajitk@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Philip Rakity authored
A non-default drive strength cannot be set automatically. It is a function of the board design and can only be set if there is a specific platform handler, which must take the board design into account; pass the necessary information to the platform code. For example: the card and host controller may both indicate that they support HIGH and LOW drive strength. There is no way to know which should be chosen without specific board knowledge: setting HIGH may lead to reflections, and setting LOW may not suffice. There is no mechanism (like Ethernet duplex or speed pulses) to determine what should be done automatically. If no platform handler is defined, use the default value. Signed-off-by: Philip Rakity <prakity@marvell.com> Reviewed-by: Arindam Nath <arindam.nath@amd.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
Change mmc_blk_issue_rw_rq() to become asynchronous. The execution flow looks like this: * The mmc-queue calls issue_rw_rq(), which sends the request to the host and returns back to the mmc-queue. * The mmc-queue calls issue_rw_rq() again with a new request. * This new request is prepared in issue_rw_rq(), then it waits for the active request to complete before pushing it to the host. * When the mmc-queue is empty it will call issue_rw_rq() with a NULL req to finish off the active request without starting a new request. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
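A pseudo-C outline of that flow; all names here are illustrative, the real logic lives in mmc_blk_issue_rw_rq(). The new request is prepared first, the previously active one is then waited for and finished, and only then is the new one handed to the host.

```c
#include <linux/blkdev.h>
#include <linux/completion.h>

struct my_queue {
	struct request		*active_req;	/* request currently on the bus */
	struct completion	active_done;	/* signalled by the host driver */
};

/* Hypothetical helpers standing in for the real prepare/start/finish code. */
void prepare_host_request(struct my_queue *mq, struct request *req);
void start_host_request(struct my_queue *mq, struct request *req);
void finish_block_request(struct request *req);

static void my_issue_rw_rq(struct my_queue *mq, struct request *new_req)
{
	if (new_req)
		prepare_host_request(mq, new_req);	/* sg mapping, cmd setup */

	if (mq->active_req) {
		wait_for_completion(&mq->active_done);	/* previous transfer */
		finish_block_request(mq->active_req);
	}

	if (new_req) {
		start_host_request(mq, new_req);	/* hand to the controller */
		mq->active_req = new_req;
	} else {
		mq->active_req = NULL;			/* queue drained */
	}
}
```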
-
Per Forlin authored
Add an additional mmc queue request instance to make way for two active block requests. One request may be active while the other request is being prepared. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
Break out code without functional changes. This simplifies the code and makes way for handling two parallel requests. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
Break out code from mmc_blk_issue_rw_rq to create a block request prepare function. This doesn't change any functionality. This helps when handling more than one active block request. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
The way the request data is organized in the mmc queue struct, it only allows processing of one request at a time. This patch adds a new struct to hold mmc queue request data such as sg list, request, blk request and bounce buffers, and updates any functions depending on the mmc queue struct. This prepares for using multiple active requests in one mmc queue. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
Add a test that measures how the mmc bandwidth depends on the number of sg elements in the sg list. The transfer size is fixed and the sg length goes from a few elements up to 512. The purpose is to measure the overhead caused by multiple sg elements. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
Add four tests for read and write performance with different transfer sizes, from 4k to 4M: * Read using blocking mmc request * Read using non-blocking mmc request * Write using blocking mmc request * Write using non-blocking mmc request The host driver must support pre_req() and post_req() in order to run the non-blocking test cases. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
Add a debugfs file "testlist" to print all available tests. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
pre_req() runs dma_map_sg() and prepares the dma descriptor for the next mmc data transfer. post_req() runs dma_unmap_sg(). If pre_req() is not called before mmci_request(), mmci_request() will prepare the cache and dma just like it did before. Using pre_req() and post_req() is optional for mmci. Signed-off-by: Per Forlin <per.forlin@linaro.org> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Per Forlin authored
pre_req() runs dma_map_sg(), post_req() runs dma_unmap_sg(). If pre_req() is not called before omap_hsmmc_request(), dma_map_sg() will be issued before starting the transfer. Using pre_req() is optional, but if pre_req() is issued, post_req() must be called as well. Signed-off-by: Per Forlin <per.forlin@linaro.org> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
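A hedged sketch of such a pre_req()/post_req() pair for a DMA-capable host, using the hook signatures from this era and the new host_cookie field to mark already-mapped data; everything prefixed my_ is hypothetical, not the omap_hsmmc code.

```c
#include <linux/dma-mapping.h>
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>

struct my_host {
	struct device *dev;
};

static void my_pre_req(struct mmc_host *mmc, struct mmc_request *mrq,
		       bool is_first_req)
{
	struct my_host *host = mmc_priv(mmc);
	struct mmc_data *data = mrq->data;
	int dir;

	if (!data || data->host_cookie)
		return;

	dir = (data->flags & MMC_DATA_WRITE) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;

	/* Map the sg list ahead of time, while another request may still
	 * be active on the bus, and mark the data as prepared. */
	if (dma_map_sg(host->dev, data->sg, data->sg_len, dir) > 0)
		data->host_cookie = 1;
}

static void my_post_req(struct mmc_host *mmc, struct mmc_request *mrq, int err)
{
	struct my_host *host = mmc_priv(mmc);
	struct mmc_data *data = mrq->data;
	int dir;

	if (!data || !data->host_cookie)
		return;

	dir = (data->flags & MMC_DATA_WRITE) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
	dma_unmap_sg(host->dev, data->sg, data->sg_len, dir);
	data->host_cookie = 0;		/* err is unused in this sketch */
}
```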
-
Per Forlin authored
Previously there has only been one function, mmc_wait_for_req(), to start and wait for a request. This patch adds: * mmc_start_req() - starts a request without waiting. If there is an ongoing request, wait for it to complete, then start the new one and return; do not wait for the new command to complete. This patch also adds new function members in struct mmc_host_ops, only called from core.c: * pre_req - asks the host driver to prepare for the next job * post_req - asks the host driver to clean up after a completed job The intention is to use pre_req() and post_req() to do cache maintenance while a request is active. pre_req() can be called while a request is active to minimize the latency before starting the next job. post_req() can be used after the next job is started to clean up the request; this minimizes the host driver's request-end latency. post_req() is typically used before ending the block request and handing over the buffer to the block layer. Also add a host-private member in mmc_data to be used by pre_req to mark the data; the host driver then checks this mark to see whether the data is prepared or not. Signed-off-by: Per Forlin <per.forlin@linaro.org> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Venkatraman S <svenkatr@ti.com> Tested-by: Sourav Poddar <sourav.poddar@ti.com> Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
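A simplified excerpt of the two new hooks as they looked around this series (see include/linux/mmc/host.h for the authoritative definitions); the struct name here is invented to keep the sketch self-contained.

```c
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>

struct my_host_ops_excerpt {
	/* Prepare the next transfer (e.g. dma_map_sg, descriptor setup)
	 * while a previous request may still be active on the bus. */
	void	(*pre_req)(struct mmc_host *host, struct mmc_request *req,
			   bool is_first_req);
	/* Clean up (e.g. dma_unmap_sg) once the transfer has completed;
	 * a non-zero err indicates the request did not complete normally
	 * or was never started. */
	void	(*post_req)(struct mmc_host *host, struct mmc_request *req,
			    int err);
};
```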
-
Madhusudhan Chikkature authored
Update the OMAP HSMMC entry from the MAINTAINERS file as I will no longer be able to maintain this driver. Signed-off-by: Madhusudhan Chikkature <madhu.cr@ti.com> [khilman@ti.com: change to Orphan rather than complete removal] Signed-off-by: Kevin Hilman <khilman@ti.com> Acked-by: Venkatraman S <svenkatr@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Nicolas Ferre authored
This driver has been used for years with this option enabled. Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Nicolas Ferre authored
Take care of slots while going to suspend state. Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Reviewed-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Adrian Hunter authored
Unless MMC_CAP_8_BIT_DATA is set, the bus width defaults to 4. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Major Lee authored
Hook up platform_8bit_width() to support an 8-bit bus width. Signed-off-by: Major Lee <major_lee@wistron.com> Signed-off-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Dirk Brandewie <dirk.brandewie@gmail.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
James Hogan authored
If an error occurs midway through a transaction (such as a missing CRC status response after the 2nd block written out of 3), then the FIFO may still contain data which will interfere with the next transaction. Therefore, after an error has been detected, reset the FIFO using the CTRL register. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Tested-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
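An illustrative error-path FIFO reset; the register offset and bit name are placeholders for the dw_mmc CTRL equivalents. The self-clearing reset bit is set and then polled until the controller has finished.

```c
#include <linux/bits.h>
#include <linux/delay.h>
#include <linux/io.h>

#define MY_CTRL			0x00	/* placeholder control register offset */
#define MY_CTRL_FIFO_RESET	BIT(1)	/* placeholder FIFO-reset bit */

struct my_host {
	void __iomem *regs;
};

static void my_reset_fifo(struct my_host *host)
{
	u32 ctrl = readl(host->regs + MY_CTRL);

	writel(ctrl | MY_CTRL_FIFO_RESET, host->regs + MY_CTRL);

	/* The reset bit clears itself once the FIFO pointers are reset. */
	while (readl(host->regs + MY_CTRL) & MY_CTRL_FIFO_RESET)
		udelay(1);
}
```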
-
James Hogan authored
When a data write isn't acknowledged by the card (so no CRC status token is detected after the data), the error -EIO is returned instead of the -ETIMEDOUT expected by mmc_test 15 - "Correct xfer_size at write (start failure)" and 17 "Correct xfer_size at write (midway failure)". In PIO mode the reported number of bytes transferred is also exaggerated since the last block actually failed. Handle the "Write no CRC" error specially, setting the error to -ETIMEDOUT and setting the bytes_xferred to 0. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Tested-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
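A sketch of the status decode described above; MY_INT_WRITE_NO_CRC is a placeholder for the controller's "write no CRC" interrupt bit, while error and bytes_xfered are the real struct mmc_data fields.

```c
#include <linux/bits.h>
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/mmc/core.h>

#define MY_INT_WRITE_NO_CRC	BIT(10)		/* placeholder status bit */

static void my_handle_write_no_crc(struct mmc_data *data, u32 status)
{
	if (status & MY_INT_WRITE_NO_CRC) {
		/* The card never acknowledged the data: report a timeout
		 * and count no bytes as transferred. */
		data->error = -ETIMEDOUT;
		data->bytes_xfered = 0;
	}
}
```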
-
James Hogan authored
Remove error messages for timeout and CRC failure, since the error code already indicates the problem. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Tested-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
James Hogan authored
There are several situations when dw_mci_submit_data_dma() decides to fall back to PIO mode instead of using DMA, due to a short (to avoid overhead) or "complex" (e.g. with unaligned buffers) transaction, even though host->use_dma is set. However dw_mci_stop_dma() decides whether to stop DMA or set the EVENT_XFER_COMPLETE event based on host->use_dma. When falling back to PIO mode this results in data timeout errors getting missed and the driver locking up. Therefore add host->using_dma to indicate whether the current transaction is using dma or not, and adjust dw_mci_stop_dma() to use that instead. Signed-off-by: James Hogan <james.hogan@imgtec.com> Acked-by: Will Newton <will.newton@imgtec.com> Tested-by: Jaehoon Chung <jh80.chung@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
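A conceptual fragment of the fix, assuming a host struct with use_dma, using_dma and pending_events members and placeholder constants: per-transfer DMA use is recorded when the data is submitted, and that flag (not the static capability) is consulted when stopping.

```c
#include <linux/bitops.h>
#include <linux/mmc/core.h>

#define MY_EVENT_XFER_COMPLETE	0	/* placeholder for dw_mmc's event bit */
#define MY_DMA_MIN_BYTES	64	/* placeholder threshold for PIO fallback */

struct my_host {
	bool		use_dma;	/* controller capability (static) */
	bool		using_dma;	/* this transfer actually uses DMA */
	unsigned long	pending_events;
};

static int my_submit_data_dma(struct my_host *host, struct mmc_data *data)
{
	host->using_dma = false;

	/* Fall back to PIO for short or otherwise unsuitable transfers. */
	if (!host->use_dma || data->blocks * data->blksz < MY_DMA_MIN_BYTES)
		return -EINVAL;

	host->using_dma = true;
	/* ... program the DMA engine ... */
	return 0;
}

static void my_stop_dma(struct my_host *host)
{
	if (host->using_dma) {		/* was: if (host->use_dma) */
		/* ... halt the DMA engine ... */
	} else {
		/* PIO path: just mark the transfer as complete. */
		set_bit(MY_EVENT_XFER_COMPLETE, &host->pending_events);
	}
}
```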
-
Wonil Choi authored
Signed-off-by: Wonil Choi <wonil22.choi@samsung.com> Signed-off-by: Minho Ban <mhban@samsung.com> Cc: Ben Dooks <ben-linux@fluff.org> Signed-off-by: Kukjin Kim <kgene.kim@samsung.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Adrian Hunter authored
In general, SDHC hardware timeout cannot be avoided. Accordingly, the maximum timeout is specified to limit the maximum discard size. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Adrian Hunter authored
Some host controllers will not operate without a hardware timeout that is limited in value. However large discards require large timeouts, so there needs to be a way to specify the maximum discard size. A host controller driver may now specify the maximum discard timeout possible so that max_discard_sectors can be calculated. However, for eMMC when the High Capacity Erase Group Size is not in use, the timeout calculation depends on clock rate which may change. For that case Preferred Erase Size is used instead. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Chris Ball <cjb@laptop.org>
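A rough, hypothetical sketch of the bound this enables (not the core's actual mmc_calc_max_discard() code): how many erase groups, and therefore sectors, a single discard can cover without exceeding the host's maximum hardware timeout.

```c
/* All parameters in consistent units: timeouts in ms, size in 512-byte sectors. */
static unsigned int my_calc_max_discard_sectors(unsigned int host_max_timeout_ms,
						unsigned int erase_group_timeout_ms,
						unsigned int erase_group_sectors)
{
	unsigned int max_groups;

	if (!erase_group_timeout_ms)
		return 0;	/* cannot bound the time, disallow discard */

	max_groups = host_max_timeout_ms / erase_group_timeout_ms;
	if (!max_groups)
		return 0;	/* even one erase group exceeds the timeout */

	return max_groups * erase_group_sectors;
}
```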
-
Shawn Guo authored
The ESDHC_FLAG_GPIO_FOR_CD_WP flag is only used for card detect (CD), so there is no need to mention WP in the flag name. Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Chris Ball <cjb@laptop.org>
-
Shawn Guo authored
The function esdhc_readl_le intends to clear the SDHCI_CARD_PRESENT bit when the card-detect GPIO reports that there is no card, but it does not actually clear the bit. This patch fixes that. Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Acked-by: Wolfram Sang <w.sang@pengutronix.de> Cc: <stable@kernel.org> Signed-off-by: Chris Ball <cjb@laptop.org>
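A sketch of the corrected read hook: when the card-detect GPIO reports no card, SDHCI_CARD_PRESENT is actually cleared in the returned value rather than only having its mask computed. The GPIO helper is hypothetical; the register and bit names come from the driver-local sdhci.h.

```c
#include <linux/io.h>
#include <linux/types.h>
#include "sdhci.h"	/* driver-local: struct sdhci_host, SDHCI_* definitions */

bool my_card_detected(struct sdhci_host *host);	/* hypothetical GPIO helper */

static u32 my_esdhc_readl_le(struct sdhci_host *host, int reg)
{
	u32 val = readl(host->ioaddr + reg);

	/* Actually clear the bit, instead of just computing the mask. */
	if (reg == SDHCI_PRESENT_STATE && !my_card_detected(host))
		val &= ~SDHCI_CARD_PRESENT;

	return val;
}
```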
-