[ale] Question about Virtual Nics and Speed

JD jdp at algoloma.com
Wed Jun 5 16:01:35 EDT 2013


Could be anything: disk, bridge, networking, devices.

* What do you mean by "he was only transferring about 300MB per second"?  Is that
disk throughput or network throughput?
You realize that 300MB/s is ... 2.4Gbps, correct? That would be pretty awesome for a
1 GigE NIC.  The bits/bytes distinction is important.

Where I've worked
 - B is bytes
 - b is bits
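
With those units, quick math on the reported number:

  300 MB/s x 8 bits/byte = 2400 Mbps = 2.4 Gbps

A single GigE link tops out at 1000 Mbps / 8 = 125 MB/s in theory, more like
110-118 MB/s in practice after protocol overhead. So a true 300MB/s over one
1 GigE NIC isn't physically possible; the number has to be coming from somewhere
else (rsync's summary figure, disk cache, or a bits/bytes mix-up).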

Is the 300MB/s true throughput or rsync "speed up" throughput?  I did an scp
transfer test between 2 GigE systems at home - both are KVM hosts connected with
true Intel PRO/1000 NICs.  49MB/s (392 Mbps) was the result. That was between 2
WD Black drives. Between other systems, without encryption, I see about 70Mbps
throughput.
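
If you want to take disks and ssh encryption out of the measurement entirely,
iperf will show raw TCP throughput - assuming it is installed on both ends
(the hostname below is only a placeholder):

  # on the receiving box
  iperf -s

  # on the sending box
  iperf -c otherhost -t 30

Compare that number against what rsync/scp report and you'll know whether the
network or the disks/encryption are the bottleneck.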

Assuming this is still an issue for your environment, what does your system
monitoring show for performance of the hostOS and guests? Anything funny in
those graphs over the last week, month, or 6 months?
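
If you don't have graphing in place, sar from the sysstat package (assuming it
is installed and collecting) keeps recent history, and vmstat gives a live view:

  sar -n DEV     # per-interface network throughput
  sar -d         # block device activity
  vmstat 5 5     # CPU, memory, swap, and IO every 5 seconds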

Have you ever gotten 300MB/s before?

I would attack this issue just like it was a physical machine.
* Is the disk performance an issue? (A few quick command checks for these items
follow the list.)
* If the transfer is between 2 VMs, are those VMs reading/writing to the same
physical disks?
* Are transfers going through a VM switch only or through real networking
devices?  Are those commercial-grade with enough backplane to support sustained
transfers?  Is the network busy with other traffic or is this a dedicated bulk
data network?
* Is the physical NIC performing well?  Some cheap GigE NICs only get
200-300Mbps. Average throughput for industry standard NICs is 650Mbps-800Mbps.
* Is the DB swapping? Does it have enough RAM?
* Does a file-to-file transfer work faster - is this just a DB issue or a
general networking/transfer issue? I've never seen over 800Mbps over copper.
* Is there a reason you didn't choose virtio for the guest NICs?
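
A few quick, concrete checks for the items above - device and file names are
only examples, so adjust them for your hosts:

  # negotiated speed/duplex on the physical NIC
  ethtool em1

  # which interfaces are attached to the bridge
  brctl show

  # rough sequential read speed of the underlying disk (run as root)
  hdparm -t /dev/sda

  # rough sequential write speed, bypassing the page cache
  # (point it at the filesystem the VM images actually live on)
  dd if=/dev/zero of=/var/lib/libvirt/images/ddtest bs=1M count=1024 oflag=direct
  rm /var/lib/libvirt/images/ddtest

  # is anything swapping or short on RAM?
  free -m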

All my VMs have the same 500 qlen on the vnet devices at the hostOS, but inside
the guests see a normal GigE port qlen. I'm using virtio. Either way, qlen is
just the transmit queue length in packets, not a link speed.
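
If any guests are still on e1000 and you're managing them with libvirt,
switching the NIC model to virtio is the first thing I'd try. A minimal sketch
of the interface stanza, using the br0 bridge from your output (virsh edit the
guest, then power it off and back on):

  <interface type='bridge'>
    <source bridge='br0'/>
    <model type='virtio'/>
  </interface>

If you want to experiment with the tap device's queue depth on the host side,
it can be bumped with:

  ip link set dev vnet1 txqueuelen 1000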

Anyway ... lots of things to check. Lots of facts to be gathered.  I hope one of
these ideas will help you find the solution that works.


On 06/05/2013 02:29 PM, Chuck Payne wrote:
> Guys,
> 
> I am currently running a couple of Kernel Virtual Machine (KVM) servers.
> Today my db admin told me that he noticed the speed isn't great on an
> rsync he was doing. I looked and noticed that he was only transferring
> about 300MB per second. So I started digging into why that is;
> everything should be at about 1Gig in speed.
> 
> I took a look at the network devices on the server with ip
> 
> ip link show
> 
> My physical nic shows that it is a 1gig
> 
> em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>     link/ether d4:ae:52:a5:fe:b1 brd ff:ff:ff:ff:ff:ff
> 
> The bridge doesn't show any speed
> 
> br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
>     link/ether d4:ae:52:a5:fe:b1 brd ff:ff:ff:ff:ff:ff
> 
> But my virtual nic only shows that it is 500 mb
> 
> 65: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> state UNKNOWN qlen 500
>     link/ether fe:54:00:47:bd:09 brd ff:ff:ff:ff:ff:ff
> 
> I am using e1000 for my virtual nics
> 
> -device e1000
> 
> Is there a setting I need to check?  Is it controlled by brctl? Or is 500MB
> the best I will get?
> 
> --
> Terror PUP a.k.a
> Chuck "PUP" Payne

