[ale] ISCSI array on virtual machine

Jeff Jansen bamakojeff at gmail.com
Mon May 2 10:45:13 EDT 2016


It started out on Debian 5 (Lenny) and has tracked with Debian stable.  I'm
not sure which kernels those were.  The file system was EXT4.

Jeff

On Thu, Apr 28, 2016 at 4:56 PM, Ed Cashin <ecashin at noserose.net> wrote:

> What filesystem and kernel versions were you using during that time?  If
> all those transitions were made using the same filesystem, I'm impressed by
> the quality of the filesystem code.
>
> In the past I have seen transitions like that tickle latent bugs in the
> filesystem code (or device mapper code or md code or block layer or virtual
> memory subsystem).  I usually create a fresh filesystem and rsync
> contents.  Partly it's to get the free defragmentation, but also it's for
> fear of bugs.
>
>
>
> On Thu, Apr 28, 2016 at 4:31 PM, Jeff Jansen <bamakojeff at gmail.com> wrote:
>
>> Using LVM costs you almost nothing and it offers tremendous advantages.
>> Even if you don't need those advantages now, having them available for free
>> (or nearly) is a great asset as a sysadmin.
>>
>> Over five years we had a backup system that began as a single hard drive
>> in a single machine, which became a RAID array in a single machine, which
>> became two RAID arrays in two machines connected by DRBD and a crossover
>> cable, which became multiple RAID arrays in multiple HA machines across a
>> WAN.
>>
>> Having LVM as part of the underlying architecture made all those changes,
>> while not "easy," much easier than it would have been without it.  If you
>> use LVM but then never change partitions on the 8 TB drive, you'll never
>> know it's there, and it will never cause you any trouble.  But if you do
>> ever decide to make changes, you will be immensely grateful for the
>> possibilities it opens up for you.
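The grow operations Jeff is alluding to come down to a couple of commands. A minimal sketch, assuming an ext4 filesystem on a logical volume named "home" in a volume group named "vg0" (both names hypothetical):

```shell
# Grow the logical volume by 500 GiB and resize the filesystem in one
# step; -r (--resizefs) invokes resize2fs for ext4, online, while mounted.
lvextend -r -L +500G /dev/vg0/home

# Or consume all remaining free space in the volume group:
lvextend -r -l +100%FREE /dev/vg0/home
```

Both forms work on a mounted filesystem, which is much of what makes LVM nearly free to keep around.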
>>
>> HTH
>>
>> Jeff
>>
>> On Thu, Apr 28, 2016 at 10:34 AM, Todor Fassl <fassl.tod at gmail.com>
>> wrote:
>>
>>> But, Jim, what I'm asking is why bother carving it up at all? What
>>> benefit is there in that?
>>>
>>> I get that if you use LVM and ext4 file systems, you can resize the
>>> partitions. But if I made the whole 8T one big partition, I'd never have
>>> any reason to resize it.
>>>
>>> PS: One thing I forgot to mention that may be of critical importance is
>>> that quotas are enforced on this drive.
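Since quotas are in play: both ext4 and XFS support per-user quotas on a single big partition, so quotas alone don't force a split. A rough sketch for ext4 (the mount point /home and the user name are assumptions, not from the thread):

```shell
# Enable quota support (or put usrquota,grpquota in /etc/fstab)
mount -o remount,usrquota,grpquota /home
quotacheck -cug /home      # build the aquota.user/aquota.group files
quotaon /home

# Give user "alice" a 50 GiB soft / 60 GiB hard block limit
# (setquota takes block limits in 1 KiB units)
setquota -u alice 52428800 62914560 0 0 /home
```

XFS tracks quotas internally (no quotacheck step); there you would mount with uquota and manage limits through xfs_quota instead.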
>>>
>>>
>>>
>>>
>>>
>>> On 04/28/2016 06:16 AM, Jim Kinney wrote:
>>>
>>>> I have a large drive array for my department. I use LVM to carve it up.
>>>> I
>>>> leave a huge chunk unallocated so I can extend logical partitions as
>>>> required. That dodges the need to shrink existing partitions and allows
>>>> XFS
>>>> as filesystem.
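Jim's scheme in concrete terms, with hypothetical names (volume group "vg0", volume "dept", mounted at /srv/dept), and XFS per his choice:

```shell
vgs vg0                         # the VFree column shows the unallocated chunk
lvextend -L +1T /dev/vg0/dept   # grow the LV into that free space
xfs_growfs /srv/dept            # XFS grows online; takes the mount point
```

Since XFS cannot shrink, leaving headroom in the VG and only ever growing is what makes this combination workable.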
>>>> On Apr 28, 2016 3:27 AM, "Todor Fassl" <fassl.tod at gmail.com> wrote:
>>>>
>>>> With respect to your question about using LVM ... I guess that was sort
>>>>> of
>>>>> my original question. If I just allocate the whole 8T to one big
>>>>> partition,
>>>>> I'd have no reason to use LVM. But I can see the need to use LVM if I
>>>>> continue with the scheme where I split the drive into partitions for
>>>>> faculty, grads, and staff.
>>>>>
>>>>> On 04/27/2016 02:27 PM, Jim Kinney wrote:
>>>>>
>>>>> If you need de-dup, ZFS is the only choice and be ready to throw a lot
>>>>>> of RAM into the server so it can do its job. I was looking at dedupe
>>>>>> on 80TB and the RAM hit was 250GB.
>>>>>> XFS vs EXT4.
>>>>>> XFS is the better choice.
>>>>>> XFS does everything EXT4 does except shrink. It was designed for (then
>>>>>> very) large files (video) and works quite well with smaller files.
>>>>>> It's
>>>>>> as fast as EXT4 but will handle larger files and many, many more of
>>>>>> them. I want to say exabytes, but I'm not certain. Petabytes are OK
>>>>>> filesystem sizes with XFS right now. I have no experience with a
>>>>>> filesystem of that size but I expect there to be some level of
>>>>>> metadata
>>>>>> performance hit.
>>>>>> If there's the slightest chance of a need to shrink a partition (You
>>>>>> _are_ using LVM, right?) then XFS will bite you and require
>>>>>> relocation,
>>>>>> tear down, rebuild, relocation. Not a fun process.
>>>>>> A while back, an install onto a 24 TB RAID6 array refused to budge
>>>>>> using EXT4. While EXT4 is supposed to address that kind of size, it
>>>>>> had
>>>>>> bugs and unimplemented plans for expansion features that were
>>>>>> blockers.
>>>>>> I used XFS instead and never looked back. XFS has a very complete
>>>>>> toolset for maintenance/repair needs.
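Jim's dedupe figure lines up with the usual back-of-envelope math: ZFS keeps roughly 320 bytes of dedup-table (DDT) entry per block, so at the default 128 KiB recordsize an 80 TiB pool of unique blocks needs on the order of 200 GiB of RAM for the DDT alone. The 320-byte figure and the all-blocks-unique assumption are rough rules of thumb, not exact:

```shell
# Back-of-envelope ZFS dedup RAM estimate
pool_bytes=$((80 * 1024 * 1024 * 1024 * 1024))  # 80 TiB pool
recordsize=$((128 * 1024))                      # default 128 KiB records
ddt_entry=320                                   # approx. bytes per DDT entry
blocks=$(( pool_bytes / recordsize ))
echo "$(( blocks * ddt_entry / 1024 / 1024 / 1024 )) GiB"   # prints "200 GiB"
```

For an existing pool, `zdb -S <pool>` simulates dedup against the real data and gives a far better estimate than this arithmetic.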
>>>>>> On Wed, 2016-04-27 at 13:54 -0500, Todor Fassl wrote:
>>>>>>
>>>>>> I need to setup a new file server on a virtual machine with an
>>>>>>> attached
>>>>>>> ISCSI array. Two things I am obsessing over -- 1. Which file system
>>>>>>> to
>>>>>>> use and 2. Partitioning scheme.
>>>>>>>
>>>>>>> The ISCSI array is attached to an Ubuntu 16.04 virtual machine. To
>>>>>>> tell
>>>>>>> you the truth, I don't even know how that is done. I do not manage
>>>>>>> the
>>>>>>> VMware cluster.  In fact, I think the Dell technician actually did
>>>>>>> that
>>>>>>> for us. It looks like a normal 8T hard drive on /dev/sdb to the
>>>>>>> virtual
>>>>>>> machine. The ISCSI array is configured for RAID6 so from what I
>>>>>>> understand, all I have to do is choose a file system appropriate for
>>>>>>> my
>>>>>>> end users' needs. Even though the array looks like a single hard
>>>>>>> drive,
>>>>>>> I don't have to worry about software RAID or anything like that.
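Tying this back to the LVM discussion upthread: since the array presents as a plain disk, putting LVM on it costs only a few commands at setup time. A minimal sketch, assuming the device really is /dev/sdb and using hypothetical names "vg_nas" and "home":

```shell
pvcreate /dev/sdb               # mark the iSCSI-backed disk as an LVM PV
vgcreate vg_nas /dev/sdb        # one volume group spanning the 8 TB
lvcreate -L 6T -n home vg_nas   # leave ~2 TB unallocated for later growth
mkfs.ext4 /dev/vg_nas/home      # or mkfs.xfs, per the thread's debate
```

The unallocated space is the trick Jim described: any logical volume can later be extended into it without disturbing the others.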
>>>>>>>
>>>>>>> Googling shows me no clear advantage to ext4, xfs, or zfs. I haven't
>>>>>>> been able to find a page that says any one of those is an obvious
>>>>>>> choice
>>>>>>> in my situation. I have about 150 end-users with nfs mounted home
>>>>>>> directories. We also have a handful of people using Windows so the
>>>>>>> file
>>>>>>> server will have samba installed. It's a pretty good mix of large
>>>>>>> files
>>>>>>> and small files since different users are doing drastically
>>>>>>> different
>>>>>>> things. There are users who never do anything but read email and
>>>>>>> browse
>>>>>>> the web and others doing fluid dynamic simulations on small
>>>>>>> supercomputers.
>>>>>>>
>>>>>>> The second thing I've been going back and forth on in my own mind is
>>>>>>> whether
>>>>>>> to do away with separate partitions for faculty, staff, and grad
>>>>>>> students. My co-worker says that's probably an artifact of the days
>>>>>>> when
>>>>>>> partition sizes were limited. That was before my time here. The last
>>>>>>> 2
>>>>>>> times we rebuilt our file server, we just maintained the
>>>>>>> partitioning
>>>>>>> scheme and just made the sizes proportionally larger. But sometimes the
>>>>>>> faculty
>>>>>>> partition got filled up while there was still plenty of space left
>>>>>>> on
>>>>>>> the grad partition. Or it might be the other way around. If we
>>>>>>> munged
>>>>>>> them all together, that wouldn't happen. The only downside I see to
>>>>>>> merging them is losing the isolation: with separate partitions, if the
>>>>>>> faculty partition gets hosed, the grad partition wouldn't be affected.
>>>>>>> But that seems like a pretty
>>>>>>> arbitrary
>>>>>>> choice. We could just assign users randomly to one partition or
>>>>>>> another.
>>>>>>> When you're setting up a NAS for use by a lot of users, is it
>>>>>>> considered
>>>>>>> best practice to split it up to limit the damage from a messed up
>>>>>>> file
>>>>>>> system? I mean, hopefully, that never happens anyway, right?
>>>>>>>
>>>>>>> Right now, I've got it configured as one gigantic 8T ext4 partition.
>>>>>>> But
>>>>>>> we won't be going live with it until the end of May so I have plenty
>>>>>>> of
>>>>>>> time to completely rebuild it.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Ale mailing list
>>>>>>> Ale at ale.org
>>>>>>> http://mail.ale.org/mailman/listinfo/ale
>>>>>>> See JOBS, ANNOUNCE and SCHOOLS lists at
>>>>>>> http://mail.ale.org/mailman/listinfo
>>>>>>>
>>>>>>>
>>>>>> --
>>>>> Todd
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>> --
>>> Todd
>>>
>>
>>
>>
>>
>
>
> --
>   Ed Cashin <ecashin at noserose.net>
>
>
>

