  1. 06 Sep, 2019 2 commits
  2. 05 Sep, 2019 2 commits
  3. 04 Sep, 2019 2 commits
  4. 03 Sep, 2019 3 commits
  5. 02 Sep, 2019 2 commits
  6. 30 Aug, 2019 4 commits
  7. 29 Aug, 2019 2 commits
  8. 28 Aug, 2019 2 commits
  9. 26 Aug, 2019 1 commit
    • Add edit_note and spec for editing quick actions · a13abd67
      Patrick Derichs authored
      Call QuickActionsService on Note update

      Add support for notes that contain only commands after editing

      Return HTTP status Gone (410) if the note was deleted

      Temporary frontend addition so it does not fail when a note is deleted

      Move specs to shared examples

      Fix RuboCop style issue

      Delete note on frontend when status is 410

      Use guard clause for a note that was deleted

      Simplify condition for nil note

      This method should no longer be called with a nil note

      Refactor execute method to reduce complexity

      Move errors update to delete_note method

      The note is now deleted visually when it contains only commands
      after the update

      Add expectation

      Fix style issues

      Change action to fix tests

      Add tests for removeNote and update deleteNote expectations
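      A minimal sketch of the flow described in this commit, assuming a Rails-style controller action; the service and helper names (`Notes::UpdateService`, `note_serializer`, `note_params`, `find_note`) are illustrative assumptions, not the exact GitLab code:

```ruby
# Illustrative only: the service and helper names are assumptions.
class NotesController < ApplicationController
  def update
    # The update service re-runs quick actions on the edited body; a note
    # that contains only commands after editing ends up being destroyed.
    note = Notes::UpdateService
      .new(project, current_user, note_params)
      .execute(find_note)

    if note.destroyed?
      # The note no longer exists: respond with 410 Gone so the frontend
      # removes the note from the discussion instead of re-rendering it.
      head :gone
    else
      render json: note_serializer.represent(note)
    end
  end
end
```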
  10. 24 Aug, 2019 1 commit
  11. 23 Aug, 2019 4 commits
  12. 22 Aug, 2019 2 commits
  13. 20 Aug, 2019 2 commits
  14. 19 Aug, 2019 3 commits
  15. 17 Aug, 2019 2 commits
  16. 16 Aug, 2019 1 commit
  17. 14 Aug, 2019 4 commits
  18. 13 Aug, 2019 1 commit
    • Rework retry strategy for remote mirrors · 452bc36d
      Bob Van Landuyt authored
      **Prevention of running 2 simultaneous updates**
      
      Instead of using `RemoteMirror#update_status` and raising an error if
      an update is already running, we now use `Gitlab::ExclusiveLease` to
      prevent the same mirror from being updated at the same time.

      When we fail to obtain a lease in 3 tries, 30 seconds apart, we bail
      and reschedule. We reschedule faster for protected branches.

      If the mirror has already run since it was scheduled, the job is
      skipped.
      
      **Error handling: Remote side**
      
      When an update fails because of a `Gitlab::Git::CommandError`, we
      won't track this error in Sentry, since the cause could be on the
      remote side: for example, when branches have diverged.

      In this case, we retry 3 times, scheduled 1 or 5 minutes apart.

      In between, the mirror is marked as "to_retry" and the error is
      visible to the user when they visit the settings page.

      After 3 tries, we mark the mirror as failed and notify the user.

      We won't track this error in Sentry, as it's unlikely we can do
      anything about it.
      
      The next event would then trigger a new refresh.
      
      **Error handling: our side**
      
      If an unexpected error occurs, we mark the mirror as failed, but we'd
      still retry the job based on the regular Sidekiq retries with
      backoff, the same as before.

      The error would be reported in Sentry, since it's likely we need to
      do something about it.
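      A rough sketch of the lease-based guard described in this commit, assuming a plain Sidekiq worker; the `Gitlab::ExclusiveLease` calls (`new(key, timeout:)`, `try_obtain`, `cancel`) match the general shape of that API, while the `RemoteMirror` helpers, retry counts, and rescheduling details are illustrative assumptions:

```ruby
# Illustrative worker: the RemoteMirror helpers and the retry/backoff
# values are assumptions, not the exact GitLab implementation.
class RemoteMirrorUpdateWorker
  include Sidekiq::Worker

  LEASE_TIMEOUT = 30.minutes
  LEASE_TRIES = 3

  def perform(remote_mirror_id, scheduled_at)
    mirror = RemoteMirror.find(remote_mirror_id)

    # Skip the job entirely if the mirror already ran since it was scheduled.
    return if mirror.last_update_started_at &&
      mirror.last_update_started_at > scheduled_at

    lease_key = "remote_mirror_update:#{remote_mirror_id}"
    uuid = obtain_lease(lease_key)

    # Another update holds the lease: bail out and reschedule instead of
    # raising, as the old update_status-based guard did.
    return self.class.perform_in(1.minute, remote_mirror_id, scheduled_at) unless uuid

    begin
      mirror.update_repository
    rescue Gitlab::Git::CommandError
      # Likely a problem on the remote side (for example, diverged branches):
      # schedule a limited number of retries and don't report to Sentry.
      mirror.retry_count < 3 ? mirror.mark_to_retry! : mirror.mark_as_failed!
    ensure
      Gitlab::ExclusiveLease.cancel(lease_key, uuid)
    end
  end

  private

  # Try three times, 30 seconds apart, before giving up on the lease.
  def obtain_lease(key)
    LEASE_TRIES.times do
      uuid = Gitlab::ExclusiveLease.new(key, timeout: LEASE_TIMEOUT).try_obtain
      return uuid if uuid

      sleep(30)
    end

    nil
  end
end
```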