[ale] Ubuntu Linux Defrag EXT4

Greg Freemyer greg.freemyer at gmail.com
Mon Sep 13 18:28:42 EDT 2010


On Mon, Sep 13, 2010 at 4:17 PM, Pat Regan <thehead at patshead.com> wrote:
> On Mon, 13 Sep 2010 13:58:58 -0400
> Greg Freemyer <greg.freemyer at gmail.com> wrote:
>
>> So if you have a vanilla ext4 setup, you should be fine to run it
>> online.  If you have something more exotic like ext4 without a
>> journal, then I would personally be very hesitant.
>
> Has anyone mentioned that no one on this list running ext2/3/4 on
> their desktop is likely to see any boost in performance from defragging?
>
> The only ways you're likely to get noticeable fragmentation are running
> a very busy mail server or if you run for a long time with your drive
> full (and then probably only if you lower the reserved block
> percentage).
>
> Since we're talking about ext4 defragmentation...  Does anyone know how
> smart that defragger is?  The NTFS defragger that ships with Windows
> just packs all the files right up against each other at the front of the
> drive (at least it used to).  That's pretty brain dead because as soon
> as you append to any of those files you immediately start getting more
> fragmented again.
>
> Pat

Pat, I clearly know too much about the e4defrag tool, so stop reading
now if you don't want lots of detail.  The main reason I know it so
well is I'm part of a project that is using the EXT4_IOC_MOVE_EXT
ioctl for other purposes.  But I monitor the ext4 mailing list for
discussion about that ioctl to see if anything pertinent to my project
pops up.

===
The ext4 defrag'er depends on the ext4 FS kernel driver to implement
the smarts of file layout.

The user-space code is simple: it uses fallocate() to create a donor
file of the same size as the file being defrag'ed.

(If the file is sparse, it calls fallocate() once per contiguous block
range, so the post-defrag file will have the same sparse
characteristics as the pre-defrag file.)

Since fallocate() allocates a large number of blocks at one time, ext4
is likely to satisfy the request with large extents at e4defrag time.
(i.e., the max ext4 extent is 128 MB, so optimally fallocate() for a
1 GB file would cause exactly 8 extents to be allocated when called by
e4defrag.)

e4defrag then compares the number of extents in the new file to the
number in the target file.

If the new file has fewer extents, it calls ioctl(EXT4_IOC_MOVE_EXT) to
replace the old data blocks with the new ones from the donor file.  The
current userspace implementation does this all or nothing.

(I think userspace should proceed 128 MB at a time, so that even a very
large file could be defrag'ed without needing a large amount of free
space, but that's a future optimization.  I've thought about submitting
a patch for that, but I've been too lazy to do it.)

As you imply, ext4 is pretty good at keeping files defrag'ed in the
first place, but if you have a file like a log file that slowly grows,
or a sparse file like the virtual disk for a VM that is growing
randomly at internal block ranges, I can see it happening.  Especially
if the partition is low on disk space.

FYI: there are patches under discussion for both the kernel and user-space
portions of ext4 / e4defrag to group associated files together
on the disk.  One proposed implementation was to feed e4defrag a group
of files that you want laid out sequentially.  It would then fallocate
a single large file big enough to provide sequential data blocks for
all the files, one after another, then use EXT4_IOC_MOVE_EXT to migrate
those data blocks out of the single large donor file into the smaller
individual files.

Thus if KDE startup was your big concern and you knew the order in
which the executables and libraries would be loaded, you could lay
them all out sequentially on disk.  For some reason I don't understand,
that concept has not yet gotten a positive response from the kernel
defrag devel guys.

Greg
