1. 04 Sep, 2013 6 commits
    • dma: edma: Leave linked to Null slot instead of DUMMY slot · b267b3bc
      Joel Fernandes authored
      The Dummy slot has been used as a way to keep missed events from
      being reported as missing. This has been particularly troublesome
      for cases where we might want to temporarily pause all incoming events.
      
      For the EDMA DMAC, there is no way to do any such pausing of events,
      as the occurrence of the "next" event is not software controlled.
      Using "edma_pause" in IRQ handlers doesn't help, as by then the event
      in question from the slave has already been missed.
      
      Linking to a dummy slot is seen to absorb these events which we didn't
      want to miss. So we don't link to the dummy slot; instead we leave the
      channel linked to the Null set, allow an error condition to occur, and
      detect the channel that missed the event.
      
      Consider the case where we have a scatter-list like:
      SG1->SG2->SG3->SG4->SG5->SG6->Null
      
      For example, with a MAX_NR_SG of 2, earlier we were splitting this as:
      SG1->SG2->Null
      SG3->SG4->Null
      SG5->SG6->Null
      
      Now we split it as:
      SG1->SG2->Null
      SG3->SG4->Null
      SG5->SG6->Dummy
      
      This approach results in fewer unwanted interrupts occurring
      for the last split of the list. Unlike the Null slot, the Dummy
      slot has the property of not raising an error condition if
      events are missed. We are OK with this, as we're done processing
      the whole list once we reach the Dummy slot.
      Signed-off-by: Joel Fernandes <joelf@ti.com>
      [modified duplicate s-o-b & patch title]
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
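The splitting rule described in the commit above can be sketched as a small model. This is not the driver's actual code; `batch_link`, `LINK_NULL` and `LINK_DUMMY` are illustrative names for the sketch only.

```c
/* Illustrative model of how an SG list is split into batches of
 * MAX_NR_SG slots: intermediate batches end in the Null set, and
 * only the final batch ends in the Dummy slot. */
enum link_target { LINK_NULL, LINK_DUMMY };

/* Decide what the last slot of a given batch should link to.
 * batch_start is the zero-based index of the batch's first SG entry. */
enum link_target batch_link(int batch_start, int nr_sg, int max_nr_sg)
{
    int batch_end = batch_start + max_nr_sg;

    /* Intermediate batches end in Null so a missed event raises an
     * error we can detect; the final batch ends in Dummy because any
     * events arriving after the list completes can be ignored. */
    return batch_end >= nr_sg ? LINK_DUMMY : LINK_NULL;
}
```

For the 6-entry list with MAX_NR_SG of 2 above, the batches starting at entries 0 and 2 link to Null, while the batch starting at entry 4 links to Dummy.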
    • dma: edma: Find missed events and issue them · c5f47990
      Joel Fernandes authored
      In an effort to move to using scatter-gather lists of any size with
      EDMA, as discussed at [1], instead of placing limitations on the driver,
      we work through the limitations of the EDMAC hardware to find missed
      events and issue them.
      
      The sequence of events that requires this is:
      
      For the scenario where MAX slots for an EDMA channel is 3:
      
      SG1 -> SG2 -> SG3 -> SG4 -> SG5 -> SG6 -> Null
      
      The above SG list will have to be DMA'd in 2 sets:
      
      (1) SG1 -> SG2 -> SG3 -> Null
      (2) SG4 -> SG5 -> SG6 -> Null
      
      After (1) is successfully transferred, the events from the MMC controller
      do not stop coming and are missed by the time we have set up the transfer
      for (2). So here, we catch the missed events as an error condition and
      issue them manually.
      
      In the second part of the patch, we handle the NULL slot cases:
      for the crypto IP, we continue to receive events continuously even
      in the NULL slot, and the setup of the next set of SG elements
      happens after the error handler executes. This results in some
      recursion problems: we continuously receive error interrupts when
      we manually trigger an event from the error handler.
      
      We fix this by first detecting whether the channel is currently
      transferring from a NULL slot or not; that is where the edma_read_slot
      call in the error callback from the interrupt handler comes in. With
      this we can determine whether the setup of the next SG list has
      completed, and we manually trigger only in that case. If the setup has
      _not_ completed, we are still in the NULL slot, so we just set a missed
      flag and allow the manual triggering to happen in edma_execute, which
      will eventually be called. This fixes the above-mentioned race
      conditions seen with the crypto drivers.
      
      [1] http://marc.info/?l=linux-omap&m=137416733628831&w=2
      Signed-off-by: Joel Fernandes <joelf@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
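The error-callback decision in the commit above (trigger now vs. set a missed flag) can be sketched with a simplified model. `struct echan`, `slot_is_null` and `missed` are illustrative stand-ins for the driver's state, not its actual API.

```c
#include <stdbool.h>

struct echan {
    bool slot_is_null;  /* channel currently points at the Null set */
    bool missed;        /* deferred-trigger flag read by the execute path */
    int  triggers;      /* manual triggers issued from the callback */
};

/* Error callback: re-issue the missed event only if the next SG batch
 * has already been programmed; otherwise defer to avoid recursing. */
void error_callback(struct echan *c)
{
    if (c->slot_is_null) {
        /* Next batch not programmed yet: just note the miss; the
         * execute path will trigger once the slots are written. */
        c->missed = true;
    } else {
        c->triggers++;  /* setup is done, safe to trigger manually now */
    }
}
```

The key design point is that triggering while still in the NULL slot would immediately raise another error interrupt, which is the recursion the commit message describes.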
    • ARM: edma: Add function to manually trigger an EDMA channel · 96874b9a
      Joel Fernandes authored
      Manual trigger for events missed as a result of splitting a
      scatter-gather list and DMA'ing it in batches. Add a helper
      function to trigger a channel in case any such events are missed.
      Signed-off-by: Joel Fernandes <joelf@ti.com>
      Acked-by: Sekhar Nori <nsekhar@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
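As a rough model of what such a helper amounts to: manually triggering a channel means setting that channel's bit in an event-set register. The plain variable and function name below are hypothetical stand-ins for this sketch, not the kernel's actual register interface.

```c
#include <stdint.h>

static uint32_t esr;  /* stand-in for a 32-channel event-set register */

/* Re-issue a missed event by setting the channel's bit; real hardware
 * registers of this kind are typically write-1-to-set. */
void trigger_channel(unsigned int channel)
{
    esr |= 1u << (channel & 0x1f);
}
```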
    • dma: edma: Write out and handle MAX_NR_SG at a given time · 53407062
      Joel Fernandes authored
      Process SG elements in batches of MAX_NR_SG if there are more
      than MAX_NR_SG of them. Due to this, at any given time only that
      many slots will be used in the given channel, no matter how long
      the scatter list is. We keep track of how much has been written
      in order to process the next batch of elements in the scatter
      list and detect completion.
      
      For such intermediate transfer completions (one batch of MAX_NR_SG),
      make use of the pause and resume functions instead of start and stop
      while such an intermediate transfer is in progress or completed, as
      we do not want to clear any pending events.
      Signed-off-by: Joel Fernandes <joelf@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
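The batch bookkeeping described in the commit above can be sketched as follows; the struct and field names are illustrative, not the driver's actual ones.

```c
#include <stdbool.h>

struct edesc {
    int total;      /* total SG entries in the descriptor */
    int processed;  /* entries submitted so far */
};

/* Called per intermediate completion: advance by up to MAX_NR_SG
 * entries and report whether the whole scatter list is done. */
bool advance_batch(struct edesc *d, int max_nr_sg)
{
    int remaining = d->total - d->processed;

    d->processed += remaining < max_nr_sg ? remaining : max_nr_sg;
    return d->processed >= d->total;
}
```

A 5-entry list with MAX_NR_SG of 2 completes after three intermediate completions (2 + 2 + 1 entries), which is exactly when the driver would switch from resume to a final stop.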
    • dma: edma: Setup parameters to DMA MAX_NR_SG at a time · 6fbe24da
      Joel Fernandes authored
      Changes are made here to configure the existing parameters to support
      DMA'ing the SG elements out in batches as needed.
      
      Also allocate as many slots as needed by the SG list, but not more
      than MAX_NR_SG; these slots will then be reused accordingly.
      For example, if MAX_NR_SG=10 and the number of SG entries is 40, still
      only 10 slots will be allocated to DMA the entire SG list of 40 entries.
      
      Also enable TC interrupts for slots that are the last in the current
      iteration, or that fall on a MAX_NR_SG boundary.
      Signed-off-by: Joel Fernandes <joelf@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
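The two rules in the commit above (slot allocation capped at MAX_NR_SG, and TC interrupts on batch-ending entries) can be modeled with two small functions; the names are hypothetical, chosen for this sketch.

```c
#include <stdbool.h>

/* Slots to allocate for an SG list: the list length, capped at
 * MAX_NR_SG, since slots are reused batch after batch. */
int slots_needed(int nr_sg, int max_nr_sg)
{
    return nr_sg < max_nr_sg ? nr_sg : max_nr_sg;
}

/* Whether SG entry i (zero-based) should raise a transfer-complete
 * interrupt: it ends the whole list, or it ends a MAX_NR_SG batch. */
bool wants_tc_interrupt(int i, int nr_sg, int max_nr_sg)
{
    return i == nr_sg - 1 || (i + 1) % max_nr_sg == 0;
}
```

With MAX_NR_SG=10 and 40 entries, only 10 slots are allocated and entries 10, 20, 30 and 40 (1-based) interrupt, matching the example in the commit message.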
    • Merge branch 'topic/api_caps' into for-linus · bd127639
      Vinod Koul authored
  2. 03 Sep, 2013 1 commit
  3. 02 Sep, 2013 21 commits
  4. 28 Aug, 2013 2 commits
    • dma: pl330: Fix handling of TERMINATE_ALL while processing completed descriptors · 39ff8613
      Lars-Peter Clausen authored
      The pl330 DMA driver is broken in regard to handling a terminate-all request
      while it is processing the list of completed descriptors. This is most visible
      when calling dmaengine_terminate_all() from within a descriptor's callback for
      cyclic transfers. In this case the TERMINATE_ALL transfer will clear the
      work_list and stop the transfer. But after all callbacks for all completed
      descriptors have been handled, the descriptors will be re-enqueued into the
      (now empty) work_list. So the next time dma_async_issue_pending() is called
      for the channel, these descriptors will be transferred again, which will cause
      data corruption. Similar issues can occur if dmaengine_terminate_all() is not
      called from within the descriptor callback but runs on a different CPU at the
      same time as the completed-descriptor list is processed.
      
      This patch introduces a new per-channel list which will hold the completed
      descriptors. While processing the list, the channel's lock will be held to
      avoid racing against dmaengine_terminate_all(). The lock will be released
      when calling a descriptor's callback, though. Since the list of completed
      descriptors might be modified (e.g. by calling dmaengine_terminate_all()
      from the callback), we cannot use the normal list iterator macros. Instead
      we need to check on each loop iteration whether there are still items in
      the list. The driver's TERMINATE_ALL implementation is updated to move
      descriptors from both the work_list and the new completed_list back to the
      descriptor pool. This makes sure that none of the descriptors finds its way
      back into the work list, and also that we do not call any further completion
      callbacks after dmaengine_terminate_all() has been called.
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
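The iteration pattern described in the commit above can be sketched with a simplified list. The real driver uses the kernel's list_head and a spinlock; this model uses a plain singly linked list and marks where the lock would be taken and dropped.

```c
#include <stddef.h>

struct desc {
    struct desc *next;
    void (*callback)(struct desc *);
};

struct desc *completed_head;  /* per-channel completed-descriptor list */

void process_completed(void)
{
    /* lock would be taken here in the real driver */
    while (completed_head) {            /* re-check the head every pass: a
                                         * callback may have emptied the list */
        struct desc *d = completed_head;

        completed_head = d->next;       /* detach before dropping the lock */
        /* unlock, run the callback (it may call terminate-all), relock */
        if (d->callback)
            d->callback(d);
    }
    /* unlock here */
}
```

The design choice is that no cached iterator survives across a callback: each pass starts from the current head, so a terminate-all that empties the list simply ends the loop.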
    • dmaengine: Add hisilicon k3 DMA engine driver · 8e6152bc
      Zhangfei Gao authored
      Add a dmaengine driver for the hisilicon k3 platform, based on virt_dma.
      Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
      Tested-by: Kai Yang <jean.yangkai@huawei.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  5. 27 Aug, 2013 10 commits