1. 26 Jan, 2008 (33 commits)
  2. 25 Jan, 2008 (7 commits)
    • ocfs2: clean up bh null checks · 2fe5c1d7
      Mark Fasheh authored
      If we know a buffer_head is non-NULL, then brelse() is unnecessary and
      put_bh() can be used instead. Likewise, an explicit NULL check is
      unnecessary when calling brelse(). This patch only covers buffer_head_io.c
      and resize.c, which recently added code exhibiting this pattern.
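
      A rough illustration of the rule (the helper names below are made up,
      not taken from buffer_head_io.c or resize.c):

          #include <linux/buffer_head.h>

          /* Redundant: brelse() already handles a NULL buffer_head. */
          static void release_verbose(struct buffer_head *bh)
          {
                  if (bh)
                          brelse(bh);
          }

          /* Equivalent and shorter when bh may still be NULL. */
          static void release_maybe_null(struct buffer_head *bh)
          {
                  brelse(bh);
          }

          /* When bh is known non-NULL, put_bh() drops the reference directly. */
          static void release_known_valid(struct buffer_head *bh)
          {
                  put_bh(bh);
          }
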
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
    • ocfs2: document access rules for blocked_lock_list · 7ec373cf
      Mark Fasheh authored
      ocfs2_super->blocked_lock_list and ocfs2_super->blocked_lock_count have some
      usage restrictions which aren't immediately obvious to anyone reading the
      code. It's a good idea to document this so that we avoid making costly
      mistakes in the future.
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
    • configfs: file.c fix possible recursive locking · 116ba5d5
      Joonwoo Park authored
      configfs_register_subsystem() with default_groups triggers a recursive
      locking report. It seems that mutex_lock_nested() is needed (a sketch of
      the idea follows the lockdep trace below).
      
      =============================================
      [ INFO: possible recursive locking detected ]
      2.6.24-rc6 #145
      ---------------------------------------------
      swapper/1 is trying to acquire lock:
       (&sb->s_type->i_mutex_key#3){--..}, at: [<c40c9a9e>] configfs_add_file+0x2e/0x70
      
      but task is already holding lock:
       (&sb->s_type->i_mutex_key#3){--..}, at: [<c40ca985>] configfs_register_subsystem+0x55/0x130
      
      other info that might help us debug this:
      1 lock held by swapper/1:
       #0:  (&sb->s_type->i_mutex_key#3){--..}, at: [<c40ca985>] configfs_register_subsystem+0x55/0x130
      
      stack backtrace:
      Pid: 1, comm: swapper Not tainted 2.6.24-rc6 #145
       [<c40053ba>] show_trace_log_lvl+0x1a/0x30
       [<c4005e82>] show_trace+0x12/0x20
       [<c400687e>] dump_stack+0x6e/0x80
       [<c404ec72>] __lock_acquire+0xe62/0x1120
       [<c404efb2>] lock_acquire+0x82/0xa0
       [<c43fda88>] mutex_lock_nested+0x98/0x2e0
       [<c40c9a9e>] configfs_add_file+0x2e/0x70
       [<c40c9b0c>] configfs_create_file+0x2c/0x40
       [<c40ca639>] configfs_attach_item+0x139/0x220
       [<c40ca734>] configfs_attach_group+0x14/0x140
       [<c40ca7e9>] configfs_attach_group+0xc9/0x140
       [<c40ca9f6>] configfs_register_subsystem+0xc6/0x130
       [<c45c8186>] init_netconsole+0x2b6/0x300
       [<c45a75f2>] kernel_init+0x142/0x320
       [<c4004fb3>] kernel_thread_helper+0x7/0x14
       =======================
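
      The general technique, sketched here under the assumption of a plain
      parent/child i_mutex nesting; this is not the exact configfs change, and
      the function name and the I_MUTEX_CHILD subclass choice are illustrative:

          #include <linux/fs.h>
          #include <linux/mutex.h>

          /*
           * Take the child directory's i_mutex with a distinct lockdep
           * subclass while the parent's i_mutex is held, so lockdep does
           * not flag the nesting as recursion on the same lock class.
           */
          static void lock_child_under_parent(struct inode *parent,
                                              struct inode *child)
          {
                  mutex_lock(&parent->i_mutex);
                  mutex_lock_nested(&child->i_mutex, I_MUTEX_CHILD);

                  /* ... create the default group's files here ... */

                  mutex_unlock(&child->i_mutex);
                  mutex_unlock(&parent->i_mutex);
          }
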
      Signed-off-by: Joonwoo Park <joonwpark81@gmail.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
    • configfs: dir.c fix possible recursive locking · ba611edf
      Joonwoo Park authored
      configfs_register_subsystem() with default_groups triggers a recursive
      locking report. It seems that mutex_lock_nested() is needed here as well.
      
      =============================================
      [ INFO: possible recursive locking detected ]
      2.6.24-rc6 #141
      ---------------------------------------------
      swapper/1 is trying to acquire lock:
       (&sb->s_type->i_mutex_key#3){--..}, at: [<c40ca76f>] configfs_attach_group+0x4f/0x190
      
      but task is already holding lock:
       (&sb->s_type->i_mutex_key#3){--..}, at: [<c40ca9d5>] configfs_register_subsystem+0x55/0x130
      
      other info that might help us debug this:
      1 lock held by swapper/1:
       #0:  (&sb->s_type->i_mutex_key#3){--..}, at: [<c40ca9d5>] configfs_register_subsystem+0x55/0x130
      
      stack backtrace:
      Pid: 1, comm: swapper Not tainted 2.6.24-rc6 #141
       [<c40053ba>] show_trace_log_lvl+0x1a/0x30
       [<c4005e82>] show_trace+0x12/0x20
       [<c400687e>] dump_stack+0x6e/0x80
       [<c404ec72>] __lock_acquire+0xe62/0x1120
       [<c404efb2>] lock_acquire+0x82/0xa0
       [<c43fdad8>] mutex_lock_nested+0x98/0x2e0
       [<c40ca76f>] configfs_attach_group+0x4f/0x190
       [<c40caa46>] configfs_register_subsystem+0xc6/0x130
       [<c45c8186>] init_netconsole+0x2b6/0x300
       [<c45a75f2>] kernel_init+0x142/0x320
       [<c4004fb3>] kernel_thread_helper+0x7/0x14
       =======================
      Signed-off-by: Joonwoo Park <joonwpark81@gmail.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
    • configfs: Remove EXPERIMENTAL · 02ac0499
      Joel Becker authored
      configfs has been alive and kicking for a while now.  It underpins some
      non-EXPERIMENTAL subsystems, such as OCFS2's cluster stack.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
    • ocfs2: bump version number · 0e5ae032
      Mark Fasheh authored
      Bump the printed version to 1.5.0. This helps us quickly identify which
      version of ocfs2 a bug reporter is running.
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
    • ocfs2/dlm: Clear joining_node on heartbeat node down · 2d4b1cbb
      Tao Ma authored
      Currently the dlm join process has two steps: query join and assert join.
      After query join, the already-joined node records the new node in its
      joining_node. If the joining node panics before the second step, the
      joined node fails to clear its joining_node flag, because that node isn't
      yet in the domain map. This causes at least two problems:
      1. All new join requests fail, so no new node can mount the volume.
      2. The joined node can't umount the volume, since during umount it has to
         wait for joining_node to become unknown; the umount therefore hangs.
      
      The solution is to clear the joining_node before we check the domain map.
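
      A hedged sketch of that ordering, using stand-in names (dlm_ctxt_sketch,
      node_down_handler, JOINING_NODE_UNKNOWN) rather than the actual ocfs2/dlm
      structures:

          #include <linux/spinlock.h>
          #include <linux/bitops.h>

          #define JOINING_NODE_UNKNOWN 255 /* hypothetical "no joiner" value */

          /* Stand-in for the dlm context, reduced to the fields discussed. */
          struct dlm_ctxt_sketch {
                  spinlock_t spinlock;
                  int joining_node;
                  unsigned long domain_map[BITS_TO_LONGS(256)];
          };

          static void node_down_handler(struct dlm_ctxt_sketch *dlm, int idx)
          {
                  spin_lock(&dlm->spinlock);

                  /*
                   * Clear joining_node before the domain_map test: a node
                   * that died between query join and assert join is recorded
                   * here but is not yet a domain member, so the early return
                   * below would otherwise leave the flag set, blocking new
                   * joins and hanging umount.
                   */
                  if (dlm->joining_node == idx)
                          dlm->joining_node = JOINING_NODE_UNKNOWN;

                  if (!test_bit(idx, dlm->domain_map)) {
                          spin_unlock(&dlm->spinlock);
                          return;
                  }

                  /* ... normal node-down processing for domain members ... */

                  spin_unlock(&dlm->spinlock);
          }
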
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>