- 27 May, 2020 27 commits
-
-
Darrick J. Wong authored
When writing to a delalloc region in the data fork, commit the new allocations (of the da reservation) as unwritten so that the mappings are only marked written once writeback completes successfully. This fixes the problem of stale data exposure if the system goes down during targeted writeback of a specific region of a file, as tested by generic/042. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com>
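For illustration only, a minimal sketch (not the XFS code) of the extent life cycle this change enforces; state names are made up:

    /*
     * A crash between block allocation and I/O completion now leaves the
     * range unwritten, which reads back as zeroes instead of stale disk
     * contents.
     */
    enum extent_state {
        EXT_DELALLOC,   /* space reserved in the data fork, no blocks yet */
        EXT_UNWRITTEN,  /* blocks allocated at writeback time, data not yet safe */
        EXT_WRITTEN,    /* set only after writeback I/O completes successfully */
    };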
-
Darrick J. Wong authored
Refactor xfs_iomap_prealloc_size to be the function that dynamically computes the per-file preallocation size by moving the allocsize= case to the caller. Break up the huge comment preceding the function to annotate the relevant parts of the code, and remove the impossible check_writeio case. Suggested-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
Darrick J. Wong authored
When we're estimating a new speculative preallocation length for an extending write, we should walk backwards through the extent list to determine the number of blocks that are physically and logically contiguous with the write offset, and use that as an input to the preallocation size computation. This way, preallocation length is truly measured by the effectiveness of the allocator in giving us contiguous allocations without being influenced by the state of a given extent. This fixes both the problem where ZERO_RANGE within EOF can reduce preallocation, and the unnecessary shrinkage of preallocation when delalloc extents are turned into unwritten extents. This was found as a regression in xfs/014 after changing delalloc writes to create unwritten extents during writeback. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com>
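For intuition, a minimal userspace sketch of the backwards walk (the extent record and field names below are illustrative, not the XFS ones); the accumulated count is what feeds the preallocation size computation, regardless of whether the individual extents are delalloc, unwritten, or written:

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative extent record: logical start, physical start, length in blocks. */
    struct ext {
        uint64_t lblk;
        uint64_t pblk;
        uint64_t len;
    };

    /*
     * Count how many blocks ending at extents[idx] are both logically and
     * physically contiguous, walking backwards through the sorted extent list.
     */
    uint64_t contiguous_blocks(const struct ext *extents, size_t idx)
    {
        uint64_t blocks = extents[idx].len;

        while (idx > 0) {
            const struct ext *prev = &extents[idx - 1];
            const struct ext *cur = &extents[idx];

            /* Stop at the first logical or physical discontinuity. */
            if (prev->lblk + prev->len != cur->lblk ||
                prev->pblk + prev->len != cur->pblk)
                break;

            blocks += prev->len;
            idx--;
        }
        return blocks;
    }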
-
Darrick J. Wong authored
During writeback, it's possible for the quota block reservation in xfs_iomap_write_unwritten to fail with EDQUOT because we hit the quota limit. This causes writeback errors for data that was already written to disk, when it's not even guaranteed that the bmbt will expand to exceed the quota limit. Irritatingly, this condition is reported to userspace as EIO by fsync, which is confusing. We wrote the data, so allow the reservation. That might put us slightly above the hard limit, but it's better than losing data after a write. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Darrick J. Wong authored
The perag structure already has a pointer to the xfs_mount, so we don't need to pass that separately and can drop it. Having done that, move iter_flags so that the argument order is the same between xfs_inode_walk and xfs_inode_walk_ag. The latter will make things less confusing for a future patch that enables background scanning work to be done in parallel. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
Darrick J. Wong authored
We're not very consistent about function names for the incore inode iteration function. Turn them all into xfs_inode_walk* variants. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Darrick J. Wong authored
Move the xfs_inode_ag_iterator function to be nearer xfs_inode_ag_walk so that we don't have to scroll back and forth to figure out how the incore inode walking function works. No functional changes. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
Darrick J. Wong authored
This is a boolean variable, so use the bool type. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
Darrick J. Wong authored
There are a number of predicate functions that help the incore inode walking code decide if we really want to apply the iteration function to the inode. These are boolean decisions, so change the return types to boolean to match. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
Darrick J. Wong authored
Refactor the two eofb-matching logics into a single helper so that we don't repeat ourselves. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Darrick J. Wong authored
This is now a pointless wrapper, so kill it. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
Darrick J. Wong authored
The incore inode walk code passes a flags argument and a pointer from the xfs_inode_ag_iterator caller all the way to the iteration function. We can reduce the function complexity by passing flags through the private pointer. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com>
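As a hedged illustration of the pattern (made-up names, not the actual XFS signatures): the caller packs the flags into the private argument it already passes, so the walk machinery itself no longer needs a flags parameter at every level.

    #include <stdbool.h>

    /* Illustrative: the private data a caller hands to the inode walk. */
    struct walk_args {
        unsigned int    flags;  /* what used to be a separate parameter */
        void            *data;  /* caller's original private pointer */
    };

    typedef int (*walk_fn)(int inode_id, void *priv);

    /* The walk itself knows nothing about flags. */
    int inode_walk(walk_fn fn, void *priv)
    {
        for (int ino = 0; ino < 16; ino++) {
            int error = fn(ino, priv);
            if (error)
                return error;
        }
        return 0;
    }

    /* The iteration function unpacks flags from the private pointer. */
    int do_one_inode(int inode_id, void *priv)
    {
        struct walk_args *args = priv;
        bool sync = args->flags & 0x1;  /* e.g. a "sync" behaviour flag */

        (void)inode_id;
        (void)sync;
        /* ... act on the inode ... */
        return 0;
    }

    int main(void)
    {
        struct walk_args args = { .flags = 0x1, .data = NULL };

        return inode_walk(do_one_inode, &args);
    }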
-
Darrick J. Wong authored
Combine xfs_inode_ag_iterator_flags and xfs_inode_ag_iterator_tag into a single wrapper function since there's only one caller of the _flags variant. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
Darrick J. Wong authored
Not used by anyone, so get rid of it. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
Darrick J. Wong authored
Use XFS_ICI_NO_TAG instead of -1 when appropriate. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
Darrick J. Wong authored
Move xfs_fs_eofblocks_from_user into the only file that actually uses it, so that we don't have this function cluttering up the header file. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
Eric Sandeen authored
The only grace period which can be set in the kernel today is for id 0, i.e. the default grace period for all users. However, setting an individual grace period is useful; for example:

 Alice has a soft quota of 100 inodes and a hard quota of 200 inodes.
 Alice uses 150 inodes and enters a short grace period.
 Alice really needs to use those 150 inodes past the grace period.
 The administrator extends Alice's grace period until next Monday.

vfs quota users such as ext4 can do this today with setquota -T. To enable this for XFS, we simply move the timelimit assignment out from under the (id == 0) test; the default setting remains under (id == 0). Note that this is now consistent with how we set warnings. (Userspace requires updates to enable this as well; xfs_quota needs to parse new options, and setquota needs to set appropriate field flags.) Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
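A hedged sketch of the control-flow change (illustrative types and field names, not the actual xfs_qm_scall_setqlim() code): the grace period is now applied to the specific id unconditionally, while the filesystem-wide default is still taken only from id 0.

    struct dq   { long long itimelimit; };
    struct defq { long long itimelimit; };

    void set_inode_grace(struct dq *d, struct defq *def,
                         unsigned int id, long long grace)
    {
        d->itimelimit = grace;          /* previously only done when id == 0 */
        if (id == 0)
            def->itimelimit = grace;    /* default still comes from id 0 */
    }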
-
Eric Sandeen authored
Move timers and warnings out of xfs_quotainfo and into xfs_def_quota so that we can utilize them on a per-type basis, rather than enforcing them based on the values found in the first enabled quota type. Signed-off-by: Eric Sandeen <sandeen@redhat.com> [zlang: new way to get defquota in xfs_qm_init_timelimits] [zlang: remove redundant defq assign] Signed-off-by: Zorro Lang <zlang@redhat.com> Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Eric Sandeen authored
xfs_get_defquota() currently takes an xfs_dquot, and from that obtains the type of default quota we should get (user/group/project). But early in init, we don't have access to a fully set up quota, so that's not possible. The next patch needs to set up default quota timers early, so switch xfs_get_defquota to take an explicit type and add a helper function to obtain that type from an xfs_dquot for the existing callers. Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Eric Sandeen authored
Pass xfs_dquot rather than xfs_disk_dquot to xfs_qm_adjust_dqtimers; this makes it symmetric with xfs_qm_adjust_dqlimits and will help the next patch. Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Allison Collins <allison.henderson@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Eric Sandeen authored
There is a fair bit of whitespace damage in the quota code, so fix up enough of it that subsequent patches are restricted to functional changes, to aid review. Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Allison Collins <allison.henderson@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Eric Sandeen authored
XFS project quota treats project hierarchies as "mini filesystems", so the intent is to return -ENOSPC rather than -EDQUOT when a quota reservation fails, but this behavior is not consistent. The only place we decide between returning -EDQUOT and -ENOSPC based on quota type is in xfs_trans_dqresv(). That decision is currently controlled by whether or not the XFS_QMOPT_ENOSPC flag gets passed into the quota reservation. However, its use is not consistent; paths such as xfs_create() and xfs_symlink() don't set the flag, so a reservation failure returns -EDQUOT rather than -ENOSPC for these sorts of operations, even for project quota:

 # mkdir mnt/project
 # xfs_quota -x -c "project -s -p mnt/project 42" mnt
 # xfs_quota -x -c 'limit -p isoft=2 ihard=3 42' mnt
 # touch mnt/project/file{1,2,3}
 touch: cannot touch ‘mnt/project/file3’: Disk quota exceeded

We can make this consistent by not requiring the flag to be set at the top of the callchain; instead we can simply test whether we are reserving a project quota with XFS_QM_ISPDQ in xfs_trans_dqresv and, if so, return -ENOSPC for that failure. This removes the need for XFS_QMOPT_ENOSPC altogether and simplifies the code a fair bit. Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
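A hedged sketch of the decision this centralizes (illustrative types, not the xfs_trans_dqresv() code): the error code is chosen from the quota type at the single point of failure, rather than from a flag threaded in by every caller.

    #include <errno.h>

    enum quota_type { QUOTA_USER, QUOTA_GROUP, QUOTA_PROJECT };

    /*
     * Project quotas model directory trees as "mini filesystems", so a failed
     * reservation reports -ENOSPC; user and group quotas report -EDQUOT.
     */
    int quota_reservation_error(enum quota_type type)
    {
        return type == QUOTA_PROJECT ? -ENOSPC : -EDQUOT;
    }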
-
Eric Sandeen authored
Long ago, group & project quota were mutually exclusive, and so when we turned on XFS_QMOPT_ENOSPC ("return ENOSPC if project quota is exceeded") when project quota was enabled, we only needed to disable it again for user quota. When group & project quota got separated, this got missed, and as a result, if project quota is enabled and group quota is exceeded, the error code is incorrectly returned as ENOSPC rather than EDQUOT. Fix this by stripping XFS_QMOPT_ENOSPC out of the flags for group quota when we try to reserve the space. Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Dave Chinner authored
It's a global atomic counter, and we are hitting it at a rate of half a million transactions a second, so it's bouncing the counter cacheline all over the place on large machines. We don't actually need it anymore - it used to be required because the VFS freeze code could not track/prevent filesystem transactions that were running, but that problem no longer exists. Hence to remove the counter, we simply have to ensure that nothing calls xfs_sync_sb() while we are trying to quiesce the filesystem. That only happens if the log worker is still running when we call xfs_quiesce_attr(). The log worker is cancelled at the end of xfs_quiesce_attr() by calling xfs_log_quiesce(), so just call it early here and then we can remove the counter altogether.

Concurrent create, 50 million inodes, identical 16p/16GB virtual machines on different physical hosts. Machine A has twice the CPU cores per socket of machine B:

             unpatched    patched
 machine A:    3m16s       2m00s
 machine B:    4m04s       4m05s

Create rates:

             unpatched      patched
 machine A:  282k+/-31k    468k+/-21k
 machine B:  231k+/-8k     233k+/-11k

Concurrent rm of same 50 million inodes:

             unpatched    patched
 machine A:    6m42s       2m33s
 machine B:    4m47s       4m47s

The transaction rate on the fast machine went from just under 300k/sec to 700k/sec, which indicates just how much of a bottleneck this atomic counter was. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
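For intuition, a toy pthreads sketch (not kernel code) of the pattern being removed: every thread incrementing one shared atomic bounces its cacheline between cores, whereas uncontended per-thread counters do not; the patch goes further and removes the shared counter entirely.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define NTHREADS 8
    #define ITERS    (1 << 22)

    _Atomic long shared_counter;                /* bounces between cores */
    long per_thread[NTHREADS][8];               /* one 64-byte line per thread (assumption) */

    void *hot_shared(void *arg)
    {
        (void)arg;
        for (long i = 0; i < ITERS; i++)
            atomic_fetch_add_explicit(&shared_counter, 1, memory_order_relaxed);
        return NULL;
    }

    void *hot_private(void *arg)
    {
        long idx = (long)arg;
        for (long i = 0; i < ITERS; i++)
            per_thread[idx][0]++;
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NTHREADS];

        /* Swap hot_shared for hot_private to compare the two behaviours. */
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, hot_shared, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        printf("shared counter: %ld\n", (long)shared_counter);
        return 0;
    }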
-
Dave Chinner authored
Seeing massive cpu usage from xfs_agino_range() on one machine; instruction level profiles look similar to another machine running the same workload, only one machine is consuming 10x as much CPU as the other and going much slower. The only real difference between the two machines is core count per socket. Both are running identical 16p/16GB virtual machine configurations.

Machine A:
  25.83%  [k] xfs_agino_range
  12.68%  [k] __xfs_dir3_data_check
   6.95%  [k] xfs_verify_ino
   6.78%  [k] xfs_dir2_data_entry_tag_p
   3.56%  [k] xfs_buf_find
   2.31%  [k] xfs_verify_dir_ino
   2.02%  [k] xfs_dabuf_map.constprop.0
   1.65%  [k] xfs_ag_block_count

and takes around 13 minutes to remove 50 million inodes.

Machine B:
  13.90%  [k] __pv_queued_spin_lock_slowpath
   3.76%  [k] do_raw_spin_lock
   2.83%  [k] xfs_dir3_leaf_check_int
   2.75%  [k] xfs_agino_range
   2.51%  [k] __raw_callee_save___pv_queued_spin_unlock
   2.18%  [k] __xfs_dir3_data_check
   2.02%  [k] xfs_log_commit_cil

and takes around 5m30s to remove 50 million inodes.

Suspect is cacheline contention on m_sectbb_log, which is used in one of the macros in xfs_agino_range. This is a read-only variable but shares a cacheline with m_active_trans, which is a global atomic that gets bounced all around the machine. The workload is trying to run hundreds of thousands of transactions per second and hence cacheline contention will be occurring on this atomic counter. Hence xfs_agino_range() is likely just an innocent bystander as the cache coherency protocol fights over the cacheline between CPU cores and sockets.

On machine A, this rearrangement of the struct xfs_mount results in the profile changing to:
   9.77%  [kernel]  [k] xfs_agino_range
   6.27%  [kernel]  [k] __xfs_dir3_data_check
   5.31%  [kernel]  [k] __pv_queued_spin_lock_slowpath
   4.54%  [kernel]  [k] xfs_buf_find
   3.79%  [kernel]  [k] do_raw_spin_lock
   3.39%  [kernel]  [k] xfs_verify_ino
   2.73%  [kernel]  [k] __raw_callee_save___pv_queued_spin_unlock

Vastly less CPU usage in xfs_agino_range(), but still 3x the amount of machine B and still runs substantially slower than it should.

Current rm -rf of 50 million files:

             vanilla    patched
 machine A:   13m20s     6m42s
 machine B:    5m30s     5m02s

It's an improvement, hence indicating that separation and further optimisation of read-only global filesystem data is worthwhile, but it clearly isn't the underlying issue causing this specific performance degradation. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
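A hedged sketch of the layout idea (illustrative fields, not struct xfs_mount): keep read-mostly fields grouped on their own cacheline so that a frequently written atomic elsewhere in the structure cannot keep invalidating them.

    #include <stdatomic.h>

    #define CACHELINE 64    /* assumption: 64-byte cachelines */

    struct mount_like {
        /* Read-mostly fields, grouped and cacheline-aligned together. */
        unsigned int    sectbb_log __attribute__((aligned(CACHELINE)));
        unsigned int    blkbb_log;
        unsigned int    agblocks;

        /* Frequently written fields live on a different cacheline. */
        _Atomic long    active_trans __attribute__((aligned(CACHELINE)));
    };

    _Static_assert(sizeof(struct mount_like) >= 2 * CACHELINE,
                   "hot and read-mostly fields share a cacheline");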
-
Dave Chinner authored
Shaokun Zhang reported that XFS was using substantial CPU time in percpu_counter_sum(), called from xfs_mod_ifree(), when running a single threaded benchmark on a high CPU count (128p) machine. The issue is that the filesystem is empty when the benchmark runs, so inode allocation is running with a very low inode free count. With the percpu counter batching, this means comparisons when the counter is less than 128 * 256 = 32768 use the slow path of adding up all the counters across the CPUs, and this is expensive on high CPU count machines. The summing in xfs_mod_ifree() is only used to fire an assert if an underrun occurs. The error is ignored by the higher level code. Hence this is really just debug code and we don't need to run it on production kernels, nor do we need such debug checks to return error values just to trigger an assert. Finally, xfs_mod_icount/xfs_mod_ifree are only called from xfs_trans_unreserve_and_mod_sb(), so get rid of them and just directly call the percpu_counter_add/percpu_counter_compare functions. The compare functions are now run only on debug builds as they are internal to ASSERT() checks and so only compiled in when ASSERTs are active (CONFIG_XFS_DEBUG=y or CONFIG_XFS_WARN=y). Reported-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
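A simplified userspace sketch of the batching tradeoff described above (not the kernel's percpu_counter implementation): fast adds touch only a per-CPU delta that is folded into the shared count every BATCH updates, and an exact comparison near the threshold has to sum every per-CPU delta, which is what gets expensive on a 128-CPU machine.

    #include <stdatomic.h>

    #define NCPUS 128
    #define BATCH 256

    struct pcp_counter {
        _Atomic long    count;          /* approximate global value */
        long            delta[NCPUS];   /* per-CPU contributions */
    };

    /* Fast path: only fold into the shared count every BATCH updates. */
    void pcp_add(struct pcp_counter *c, int cpu, long amount)
    {
        c->delta[cpu] += amount;
        if (c->delta[cpu] >= BATCH || c->delta[cpu] <= -BATCH) {
            atomic_fetch_add(&c->count, c->delta[cpu]);
            c->delta[cpu] = 0;
        }
    }

    /* Slow path: an exact sum must visit every CPU's delta. */
    long pcp_sum(struct pcp_counter *c)
    {
        long sum = atomic_load(&c->count);

        for (int cpu = 0; cpu < NCPUS; cpu++)
            sum += c->delta[cpu];
        return sum;
    }

    /*
     * A comparison can trust the approximate count only when the value is
     * farther from the threshold than the worst-case per-CPU error
     * (NCPUS * BATCH = 32768 here); otherwise it falls back to pcp_sum().
     */
    int pcp_compare(struct pcp_counter *c, long rhs)
    {
        long approx = atomic_load(&c->count);

        if (approx - rhs > (long)NCPUS * BATCH)
            return 1;
        if (rhs - approx > (long)NCPUS * BATCH)
            return -1;
        approx = pcp_sum(c);
        return (approx > rhs) - (approx < rhs);
    }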
-
Dave Chinner authored
The error handling in xfs_trans_unreserve_and_mod_sb() is largely incorrect - rolling back the changes in the transaction if only one counter underruns makes all the other counters incorrect. We still allow the change to proceed and commit the transaction, except now we have multiple incorrect counters instead of a single underflow. Further, we don't actually report the error to the caller, so this is completely silent except on debug kernels that will assert on failure before we even get to the rollback code. Hence this error handling is broken, untested, and largely unnecessary complexity. Just remove it. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
- 19 May, 2020 13 commits
-
-
Christoph Hellwig authored
Move freeing the dynamically allocated attr and COW fork, as well as zeroing the pointers where actually needed, into the callers, and just pass the xfs_ifork structure to xfs_idestroy_fork. Also simplify the kmem_free calls by not checking for NULL first. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Both the data and attr fork have a format that is stored in the legacy icdinode. Move it into the xfs_ifork structure instead, where it uses up padding. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
There are three extent counters per inode, one for each of the forks. Two are in the legacy icdinode and one is directly in struct xfs_inode. Switch to a single counter in the xfs_ifork structure, where it uses up padding at the end of the structure. This simplifies various bits of code that just want the number of extents counter and can now directly dereference it. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
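A hedged illustration of the layout argument (made-up field names, not the real xfs_ifork): on a typical LP64 ABI a fork structure ending in a small field leaves tail padding that a 32-bit extent counter can occupy without growing the struct at all.

    #include <stdint.h>
    #include <assert.h>

    struct ifork_before {
        void        *data;      /* extent/inline data pointer */
        int64_t     bytes;
        int8_t      format;     /* 7 bytes of tail padding follow on 64-bit */
    };

    struct ifork_after {
        void        *data;
        int64_t     bytes;
        int8_t      format;
        uint32_t    nextents;   /* fits into what was padding */
    };

    int main(void)
    {
        /* On typical LP64 ABIs both structs are 24 bytes. */
        assert(sizeof(struct ifork_before) == sizeof(struct ifork_after));
        return 0;
    }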
-
Christoph Hellwig authored
xfs_ifree only needs to free inline data in the data fork, as we've already taken care of the attr fork before (and in fact freed the fork structure). Just open code the freeing of the inline data. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Just checking di_forkoff directly is a little easier to follow. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Darrick J. Wong authored
XFS_IFORK_Q is supposed to be a predicate, not a function returning a value. Its usage in xchk_bmap_check_rmaps is incorrect, but that function only cares about whether or not the "size" of the data is zero. Convert that logic to use a proper boolean, and teach the caller to skip the call entirely if the end result would be that we'd do nothing anyway. This avoids a crash later in this series. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> [hch: generalized the NULL ifor check] Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Reviewed-by: Brian Foster <bfoster@redhat.com>
-
Christoph Hellwig authored
Now that we fully verify the inode forks before they are added to the inode cache, the crash reported in https://bugzilla.kernel.org/show_bug.cgi?id=204031 can't happen anymore, as we'll never let an inode whose nextents counts are inconsistent with the presence of an in-core attr fork leak into the inactivation code path. So remove the workaround that tried to handle the case, and just return an error and warn if the fork is not present. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
We don't call xfs_bmapi_read for the COW fork anymore, so remove the special casing. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
Call the data/attr local fork verifiers as soon as we are ready for them. This keeps them close to the code setting up the forks, and avoids a few branches later on. Also open code xfs_inode_verify_forks in the only remaining caller. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
The split between xfs_inode_verify_forks and the two helpers implementing the actual functionality is a little strange. Reshuffle it so that xfs_inode_verify_forks checks whether the data and attr forks are actually in local format and only calls the low-level helpers if that is the case. Handle the actual error reporting in the low-level handlers to streamline the caller. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
xfs_ifork_ops adds up to two indirect calls per inode read and flush, despite there being just a single instance in the kernel. In xfsprogs, phase6 of xfs_repair overrides the verify_dir method to deal with inodes that do not have a valid parent, but that can be fixed pretty easily by ensuring they always have a valid-looking parent. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
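A hedged sketch of the devirtualization (illustrative names, not the XFS declarations): with only one ops instance in the tree, the indirect call through a function-pointer table can become a direct, inlinable call.

    /* Before: every verification goes through an ops table. */
    struct ifork_ops {
        int (*verify_dir)(const void *fork, unsigned int size);
    };

    int default_verify_dir(const void *fork, unsigned int size)
    {
        (void)fork; (void)size;
        return 0;
    }

    const struct ifork_ops default_ifork_ops = {
        .verify_dir = default_verify_dir,
    };

    int verify_fork_indirect(const struct ifork_ops *ops,
                             const void *fork, unsigned int size)
    {
        return ops->verify_dir(fork, size);     /* indirect call */
    }

    /* After: with a single implementation, just call it directly. */
    int verify_fork_direct(const void *fork, unsigned int size)
    {
        return default_verify_dir(fork, size);  /* direct, inlinable call */
    }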
-
Christoph Hellwig authored
There is not much point in the xfs_iread function, as it has a single caller and not a whole lot of code. Move it into the only caller, and trim down the overdocumentation to just documenting the important "why" instead of a lot of redundant "what". Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Christoph Hellwig authored
i_delayed_blks is set to 0 in xfs_inode_alloc and can't have anything assigned to it until the inode is visible to the VFS. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
-