[ale] Stupid question time - VG/PV size limits?

Michael Trausch mike at trausch.us
Thu Mar 5 12:39:54 EST 2015


I would expect that on RHEL5 the limit is that of an unsigned 64-bit integer. But you're going to want a FS whose own limits are just as large... I recommend btrfs. It effectively replaces LVM/LVM2: it gives you multiple devices, self-healing, faster reconstruction than mdraid, fast atomic snapshots, and so on.
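As a rough sense of scale (assuming the commonly cited 2**63-byte, i.e. 8 EiB, practical per-device ceiling on 64-bit kernels; the exact figure depends on kernel and tool versions), a 20 TB PV uses a vanishing fraction of that space:

```python
# Rough scale comparison: a 20 TB PV vs. a 64-bit size ceiling.
# 8 EiB (2**63 bytes) is an assumed, commonly cited practical
# per-device limit on 64-bit kernels, not a figure from this thread.
TB = 1000 ** 4                 # the array vendor's "20 TB" is decimal
pv_bytes = 20 * TB
ceiling = 2 ** 63              # 8 EiB

print(pv_bytes < ceiling)      # nowhere near the limit
print(pv_bytes / ceiling)      # fraction of the ceiling in use
```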

Sent from my iPad

> On Mar 5, 2015, at 10:56 AM, Lightner, Jeff <JLightner at dsservices.com> wrote:
> 
> BASIC QUESTION:
> Is there a SIZE limit for Volume Group (VG) or Physical Volume (PV) in LVM2?
>  
> Mainly I’m interested in this as it relates to RHEL5 (or derivatives such as CentOS5) but any general Linux limitations known would be appreciated.
>  
> DETAILS:
> After much searching I can’t find anything that explicitly states SIZE limits for a VG or a PV in lvm2 on RHEL5 (or Linux in general).   I did find notes on QUANTITY limits (or the lack thereof) for lvm2 — lvm1 did have such QUANTITY limits — but nothing on SIZE.
>  
> Note: 
> My question is aimed at SIZE limits. NOT at performance.  
>  
> I already know RAID6 is not optimal for databases.   I opted for RAID6 to maximize available storage for a test/dev environment.    We run RHEL5 because the Production environment is RHEL5 and that isn’t going to change any time soon, so we need to ensure the test/dev environments match.
>  
> The 20 TB RAID 6 LUN is from a Dell MD1220 disk array.
>  
> Many of the limits in RHEL5’s limits document show 16 TB (supported; 1 EB theoretical), but it has no discussion at all of lvm2 limits specifically.
>  
> Ultimately I successfully did the following:
> -Created  that single 20 TB LUN (RAID6)
> -The OS did discover it and set it as a /dev/sd* device, /dev/sdaer (we have many other LUNs from a separate SAN array).
> -Used parted and changed it from msdos to GPT partition table.
> -Created a single partition using all space from the drive.
> -Created a udev rules file to create the partition as a new device, /dev/dmd1200s1 (instead of /dev/sdaer1), to ensure we have a persistent name after reboots.  (I chose dmd* to avoid confusion with dm-* device-mapper devices and md* meta disks.)
> -Did a pvcreate of the partition device /dev/dmd1200s1
> -Did a vgcreate using only that PV
> -Did lvcreates of 12 logical volumes (LVs) from that VG, the largest of which are 6.3 TB.
> -Did the mkfs.ext4 on the 12 LVs.   (6.3 TB is well within the 16 TB supported limit RHEL5 lists for ext4).
> -Created mountpoint directories then mounted the 12 LVs on same (via fstab entries and “mount -a”).
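The udev step above might look something like the sketch below. This is illustrative only: the RESULT serial is a placeholder (read the real value with /sbin/scsi_id against the LUN), and RHEL5-era udev rule syntax is assumed.

```
# /etc/udev/rules.d/90-dmd1200.rules -- illustrative sketch only.
# The RESULT value below is a hypothetical placeholder; obtain the
# real serial with /sbin/scsi_id on the LUN. Matches the LUN's first
# partition by serial and names it dmd1200s1 so the name persists
# across reboots regardless of sd* probe order.
KERNEL=="sd*1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -s %p", \
    RESULT=="<serial-of-the-md1220-lun>", NAME="dmd1200s1"
```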
>  
> So far we’ve not put any data on the filesystems, but given that the above worked it seems likely this shouldn’t be an issue (other than performance).  Still, I’d feel better if I could find something that explicitly states lvm2 is valid for the 20 TB VG/PV it is using in this case.
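A rough arithmetic check of the layout described above (assuming lvm2's default 4 MiB physical extent size, and treating the 20 TB as binary TiB for a worst case — both assumptions, not figures from this thread):

```python
# Sanity arithmetic for the 20 TB PV / 6.3 TB LV layout above.
TiB = 1024 ** 4
MiB = 1024 ** 2

pv = 20 * TiB                 # worst case: treat "20 TB" as 20 TiB
extent = 4 * MiB              # vgcreate's default extent size (-s 4M)
print(pv // extent)           # 5242880 extents; lvm2, unlike lvm1,
                              # imposes no fixed extent-count cap

largest_lv = 6.3 * TiB
print(largest_lv < 16 * TiB)  # within the 16 TB supported limit
                              # RHEL5 documents for ext4
```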
>  
> From my reading it appears some of the limits I saw for things like ext4 were related to the tools in e2fsprogs rather than the filesystem itself.   Since the LVM tools worked flawlessly I suspect there is no similar issue for lvm2 itself or its tools, but is there any place that definitively states size limits (or the absence of them) for VG and PV in LVM2?
>  
> RHEL5, by the way, has a 2.6.x kernel.   I did find some discussion of 2.4-kernel limitations related to lvm1/lvm2, but those don’t apply.   I also found discussion of LVM 2 on HP-UX, but since LVM has been on HP-UX longer than it has been in Linux I’ve never been certain how (or whether) they relate to each other, so I wouldn’t think that would be helpful (especially since HP-UX only runs on Itanium these days and formerly ran on PA-RISC).
> CONFIDENTIALITY NOTICE: This e-mail may contain privileged or confidential information and is for the sole use of the intended recipient(s). If you are not the intended recipient, any disclosure, copying, distribution, or use of the contents of this information is prohibited and may be unlawful. If you have received this electronic transmission in error, please reply immediately to the sender that you have received the message in error, and delete it. Thank you
> 
> 
> 
> _______________________________________________
> Ale mailing list
> Ale at ale.org
> http://mail.ale.org/mailman/listinfo/ale
> See JOBS, ANNOUNCE and SCHOOLS lists at
> http://mail.ale.org/mailman/listinfo