Virtio queues

Virtio Block and virtio-net users regularly ask the same question: is it by design that the only way to achieve more than 1 Gbps is to use multiqueue support? The short answer is that a single virtio queue pair is serviced by a single vCPU, so multiqueue is the standard way to scale past it. The rest of this page collects the background and the configuration recipes.

The purpose of VIRTIO is to ensure that virtual environments and guests have a straightforward, efficient, standard, and extensible mechanism for virtual devices, rather than boutique per-environment or per-OS mechanisms. At the top level (called virtio) is the virtual queue interface that conceptually attaches front-end drivers to back-end drivers. Drivers can use zero or more queues, depending on their need: the virtio network driver uses two virtual queues (one for receive and one for transmit), while the classic virtio block driver uses only one. The guest and the VMM can each put a data buffer in a queue and notify the other side about the new data; the buffers are added to virtual queues in memory, each device has a predefined number of queues, the structure of the data is dependent on the device type, and a queue's data footprint is at least 4 KB of memory together with its queuing mechanism.

The multiqueue virtio-net driver series is an updated version of Krishna Kumar's work to let virtio-net use multiple rx/tx queues for packet reception and transmission. On the block side, virtio-blk historically set a queue depth of 64; a later patch makes it possible to specify the queue size between 256 and 1024, and with a sufficiently fast device a queue depth of 256 can double the IOPS that can be sustained. The corresponding OpenStack work is tracked in the blueprint https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-set-queue-sizes, where it was noted that the tx/rx queue-size options are not really networking settings but are specific to the virtio framework, so letting end users update these values is undesirable: configuring them correctly requires knowledge of the underlying hardware.
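As a sketch of how the block-side tunables surface in practice, here is a fragment of a QEMU invocation; it assumes a QEMU recent enough to expose the num-queues and queue-size properties on virtio-blk-pci, and the file name is illustrative:

    qemu-system-x86_64 \
      -drive file=disk.qcow2,if=none,id=drive0 \
      -device virtio-blk-pci,drive=drive0,num-queues=4,queue-size=256

The queue size must be a power of two, matching the virtio ring layout.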
in some cases queues shall be disabled or enabled dynamically; in that case a reconfiguration is triggered from the shared device state. Queue handling also underpins the standby/failover mechanism used for live migration with SR-IOV: to support it, the hypervisor needs to enable the VIRTIO_NET_F_STANDBY feature on the virtio-net interface and assign the same MAC address to both the virtio-net and VF interfaces. Bugs in queue state tracking surface as fatal errors such as "Guest moved used index" (bug #1558175), which kills the VM.

(Benchmark figure: read, write, and 70/30 read/write IOPS comparing x-data-plane against virtio-scsi with num-queues=2 (vectors=3) and num-queues=4 (vectors=5).)

For virtio-net, the queue sizes were long hard coded to 256, which might not be suitable for all types of applications, but the number of queues is configurable: insert a value from 1 to 256 for N to specify the number of queues in the libvirt <driver queues='N'/> element. A typical small setup has 2 vCPUs per guest and adds the driver element to the network interfaces with queues set to 2. That example configures two queues for the VNIC, which requires six MSI-X vectors (2 x 2 + 2),
where the formula for the number of MSI-X vectors results from: N vectors for the TX (transmit) queues, N for the RX (receive) queues, one for configuration changes, and one for the control virtqueue. On the Windows side, the NetKVM driver now supports the virtio multi-queue feature as well (BZ#1170106). Inside a Linux guest, check the channel configuration for the virtio devices with: ethtool -l eth0. When driving the device from DPDK's testpmd instead, the number of --rxq/--txq queues must be equal to the number of queues defined when the guest machine was launched. The queue-pair count must also be tracked consistently inside the guest driver; virtio-net's is_xdp_raw_buffer_queue(), for example, depends on the old XDP queue pairs number to do the correct detection. Nor is multi-queue unique to virtio-net: the Cadence GEM device now supports multiple priority queues through the num-priority-queues property.
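To make the arithmetic concrete, N = 4 queue pairs needs 4 + 4 + 2 = 10 vectors. A minimal sketch of the matching QEMU arguments (IDs are illustrative; the tap backend must be created with the same queue count):

    -netdev tap,id=hn0,queues=4,vhost=on \
    -device virtio-net-pci,netdev=hn0,mq=on,vectors=10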
Lower in the stack, DPDK's virtio-user sets its queues up through the vhost backend: virtio_user_queue_setup calls virtio_user_create_queue, which ultimately issues an ioctl telling the kernel vhost to create the queue; dev->ops->send_request invokes vhost_kernel_ioctl to configure the shared memory; and virtio_user_queue_setup then calls virtio_user_kick_queue to configure the queue's vring.

Two practical notes on exposing multiple queues through management layers. In libvirt, the queues attribute is available for a disk device target configured to use the "virtio" bus and "pci" or "ccw" address types; in OpenNebula the equivalent attribute is only available with MODEL = 'virtio'. As for how many queues to give a VM, a common rule of thumb (heard, for example, on the VFIO Discord) is to add queues up to the number of threads you are serving to the VM; CloudStack issue #3101 ("kvm: Set amount of queues for Virtio SCSI driver to vCPU of Instance") applies the same idea to storage. Once launched, use the following commands to configure eth0 with multiple queues.
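A sketch of the in-guest commands (the interface name eth0 is an assumption, and the combined count cannot exceed what the host side configured):

    # show supported and currently active channel (queue) counts
    ethtool -l eth0
    # set the number of combined channels to 2, matching queues='2' on the host
    ethtool -L eth0 combined 2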
For libvirt-configured VMs, libvirt version >= 1.2.17 is required for multiqueue support, and then one can simply add <driver queues='4'/> inside the vhostuser interface chunk in the libvirt XML, where 4 is the number of queues (a full interface sketch follows below). Vhost/virtio in the Data Plane Development Kit (DPDK) is a semi-virtualized device abstract interface specification: the vhost backend and the virtio front-end share virtqueues, and the standard test case measures the DPDK vhost PMD's capability to sustain maximum TCP bandwidth through a virtio-net device. To check that traffic actually spreads across queues, a kernel patch allows the virtio-net driver to report traffic distribution to inbound/outbound queues through ethtool -S. Some appliances expose the same flexibility at runtime: on the Cisco ASAv, for example, you can add and remove Virtio virtual interfaces while the system is active.
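A minimal sketch of such a vhost-user interface definition (the socket path is illustrative; the MAC address matches the example used later on this page):

    <interface type='vhostuser'>
      <mac address='52:54:00:00:00:01'/>
      <source type='unix' path='/tmp/vhost-user0' mode='client'/>
      <model type='virtio'/>
      <driver queues='4'/>
    </interface>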
In single queue virtio-net, the scale of the protocol stack in a guest is restricted, as the network performance does not scale as the number of vCPUs increases. Multiqueue virtio is an approach that enables the processing of packet sending and receiving to scale with the number of available vCPUs of the guest, by creating multiple TX and RX queues so that several vCPUs can move traffic in parallel. Commercial platforms such as F5 LineRate likewise support multiple send and receive queues on Virtio network interfaces. Multiqueue needs a backend that can be opened once per queue; the tap device supports this through its multi_queue mode, as sketched below.

The queue interface is also a friendly entry point for learning virtio driver development. From an IRC exchange on the subject: "To get started with virtio I'd like to write a simple driver for the entropy device, which appears as a PCI device when running QEMU with -device virtio-rng-pci." "Why entropy?" "Because it's the easiest. The driver itself may be simple, but integrating it within the system probably isn't."
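A sketch of creating such a tap device by hand (device name illustrative); each queue is a separately opened file descriptor:

    ip tuntap add dev tap0 mode tap multi_queue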
On the wire, discovery is plain PCI. Any PCI device with PCI Vendor ID 0x1AF4 and PCI Device ID 0x1000 through 0x107F inclusive is a virtio device (legacy devices used the narrower 0x1000 through 0x103F range). A Virtio device can be implemented as any kind of PCI device: a Conventional PCI device or a PCI Express device. It behaves as a regular PCI device: capabilities are exposed through the Base Address Registers (BARs), the driver writes the device status into the device_status field (see section 2.1 of the specification), and the num_queues field specifies the maximum number of virtqueues supported by the device. Virtio devices use normal bus mechanisms of interrupts and DMA, which should be familiar to any device driver author. PCI is not the only transport: the specification separately covers virtio over MMIO and, on s390, virtio-ccw (including virtio-ccw revision 2), and it defines feature negotiation, configuration, and queues from both the driver's and the device's points of view, plus the device-specific configuration for each device type. The VirtIO network driver, vtnet(4), is used by FreeBSD systems running on hypervisors including bhyve(4) and Linux's KVM; it recently gained support for multiple queues, along with a significant cleanup and a few additional features.

The specification itself has a history. VIRTIO 0.9.5 was an independent, informal community document (led by Rusty Russell), implemented by QEMU, lguest, Linux, FreeBSD, and VirtualBox; VIRTIO 1.0 is a formal OASIS committee specification with the same implementors plus Xen. The VIRTIO 1.0 standard refines the hardware specification so that implementors have a clear reference, and later revisions add the packed virtqueue format, used when reading and writing descriptors to/from the ring.
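From inside a guest this makes virtio devices easy to spot; the output will look something like the following (illustrative):

    $ lspci -nn | grep -i 1af4
    00:03.0 Ethernet controller [0200]: Red Hat, Inc. Virtio network device [1af4:1000]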
Pick the appropriate device model for your requirements, tune the bridge, and, if needed, enable experimental zero-copy transmit via a module option in /etc/modprobe.d. For context on scale: the infrastructure used by SWITCHengines is composed of about 100 servers, each using two 10 Gb/s network ports, and the service nodes run under libvirt/QEMU/KVM with virtio-net network devices. A recurring operator question is: "I am using OpenStack Icehouse with Open vSwitch and a neutron flat network; how do I enable multi-queue virtio-net so that when Nova creates and launches a VM instance, the guest XML configuration is generated with a number of queues set?"

Interrupt distribution is the other recurring complaint: IRQs are not balanced evenly when using VirtIO multi-queue. In one Proxmox forum thread ("VirtIO multi-queue - balancing?", started by David Herselman, May 16, 2018), queues 2 and 5 receive 200% to 300% more interrupts than the others, and CPU utilisation in the virtual router is subsequently also polarised. Proper multi-queue support mitigates this by optimizing RX interrupt affinity and TX queue selection in order to make a specific queue private to a specific vCPU. Follow-ups from similar threads show the usual troubleshooting loop: one user running KVM nested under ESXi found everything worked once hardware-assisted virtualization was enabled for the nested guest; another adjusted the MTU to 9000 for each VM and still saw the same 5 Gbps as before, and running the ethtool channel command with 2 queues brought no improvement either, which tends to point at a single hot flow or IRQ placement rather than queue count.
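A quick way to see the per-queue interrupt counts from inside the guest (the device name virtio0 is illustrative):

    grep virtio0 /proc/interrupts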
To enable vhost-user ports to map the VM's memory into their process address space, pass the appropriate memory parameters to QEMU. vhost-user ports access a virtio-net device's virtual rings and packet buffers by mapping the VM's physical memory on hugetlbfs, so QEMU must allocate the VM's memory on hugetlbfs; the goal is to connect guests' virtio-net devices, with their vhost-user backend, to OVS dpdkvhostuser ports and be able to run any kind of network traffic between them. The reason multiple queues pay off here is simple: vhost/virtio is a CPU-intensive workload, meaning the CPU is the bottleneck, and using multiple queues with multiple CPUs can yield close to linear scaling. A typical invocation is sketched below.
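This sketch assembles the QEMU fragments quoted on this page; the -numa/-mem-prealloc pair and the chardev socket path are standard OVS-DPDK additions rather than quotes from the original, and mq=on,vectors=6 follows the 2N+2 rule for queues=2:

    -object memory-backend-file,id=mem,size=512M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -chardev socket,id=char0,path=/tmp/vhost-user0 \
    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mq=on,vectors=6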
Queues matter just as much on the storage side. VirtIO is paravirtualized storage, so it is more performant than the default emulated SATA controller. virtio-blk, however, uses a small set of commands that need to be known to both the virtio-blk driver and the virtual machine monitor, so introducing a new command requires updating both the driver and the monitor; virtio-scsi is designed as its next-generation replacement, offering multiple LUNs per PCI adapter (which solves virtio-blk's 1:1 scalability problem) and SCSI passthrough for tape drives, CD burning, and the like, and its emulation has been extended with Block Limits vital product data (VPD) and WRITE SAME support. The virtio-scsi controller takes a "num_queues" attribute (deliberately the same name QEMU uses); per the specification's device-operation section, the driver then queues requests to an arbitrary request queue. In the guest, blk-mq compatible drivers inform blk-mq how many parallel hardware queues a device supports as part of the hardware dispatch queue registration, and blk-mq-based drivers bypass the previous Linux I/O scheduler; with a single virtqueue, a single vq lock is taken in both the submit and complete paths, which is exactly what multiple virtqueues avoid. The corresponding QEMU command line for eight queues:

    -device virtio-scsi-pci,id=scsi0,num_queues=8,bus=pci.0,addr=0x3

Measurements shared by one user put multi-queue virtio-scsi at roughly the same sequential read/write performance as single-queue virtio-scsi, with better random reads and somewhat worse random writes. Remember the queue-size rules from the block patch above: the queue size specified by the user should be a power of 2, and the minimum value is chosen based on the current sizes of the virtio queues. Two operational caveats: switching an existing Windows guest's boot disk straight to virtio-scsi ends in INACCESSIBLE_BOOT_DEVICE or repair mode, so either select it during installation or attach a dummy virtio-scsi disk first, install the drivers, then switch the main disk; and queue-handling bugs can have security impact, as with the virtio_net_load function in hw/net/virtio-net.c in QEMU 1.x, which allowed remote attackers to cause a denial of service or possibly execute arbitrary code via vectors in which the value of curr_queues is greater than max_queues, which triggers an out-of-bounds write.
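The same controller expressed in libvirt XML (a sketch; the queues attribute maps to QEMU's num_queues):

    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver queues='8'/>
    </controller>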
How does a request actually travel through a queue? In the vhost-user storage model (for example, Virtio-SCSI devices exposed directly by the SPDK vhost target), the master sends I/O by allocating proper buffers in shared memory, filling in the request data, and putting the guest addresses of those buffers into the virtqueues; the slave consumes them and completes back through the used ring. On the initiator side, SPDK's Virtio-Block driver allows creating SPDK bdevs from Virtio-Block devices: one command creates a Virtio-Block device named VirtioBlk0 from a vhost-user socket /tmp/vhost, with optional vq-count and vq-size params specifying the number of request queues and the queue depth to be used. vhost-scsi works the same way and is how one can drive, say, an Intel P3700 NVMe SSD from a KVM guest.

Offloads ride the same queues: a patch set adds feature negotiation of checksum and TSO between the userspace vhost backend and the vanilla Linux virtio guest, and adds support for these offload features in the vhost library. The machinery also extends beyond VMs: the virtual device virtio-user was originally introduced with the vhost-user backend as a high-performance solution for IPC (inter-process communication) and user-space container networking; virtio-user with a vhost-kernel backend covers the exceptional path, such as KNI, which exchanges packets with the kernel networking stack; and virtiofs, a new use of FUSE for mounting host directories into virtual machine guests, uses the same queues, with Kata containers as the primary use case.
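A sketch of the SPDK side, assuming a recent SPDK where this RPC is named bdev_virtio_attach_controller (treat the exact flags as an assumption; the socket path and bdev name come from the text above):

    # create bdev VirtioBlk0 from the vhost-user socket /tmp/vhost
    ./scripts/rpc.py bdev_virtio_attach_controller \
        -b VirtioBlk0 -t user -a /tmp/vhost -d blk \
        --vq-count 4 --vq-size 512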
All the virtio class drivers depend on virtio and virtio_ring, which means that adding them to any other kernel package would require that package to depend on virtio-modules (the Xen-specific drivers avoid the issue only because xenbus unfortunately has to be built in). Inside virtio-net, the per-CPU virtnet_stats were split into receive and transmit stats maintained per receive_queue and send_queue, which is what makes the per-queue ethtool -S reporting possible. Changing the number of active queue pairs at runtime needs care: commit 55257d72bd1c51f25106350f4983ec19f62ed1fa ("virtio-net: fill only rx queues which are being used") only does the napi enabling during open for curr_queue_pairs, and a follow-up stable fix, "virtio-net: correctly handle cpu hotplug notifier during resuming", addressed the CPU hotplug path. There is also a backend compatibility trap: when the backend doesn't support multi-queue but QEMU and the guest do, and queues=2 is passed on the command line, only some of the queues get initialized. That is perfectly fine from the Vhost-user spec perspective, but it has been confirmed not to work with rte_vhost, because
rte_vhost waits for all queues to be fully initialized before it allows the entire device to be started. To use more queues, the driver must negotiate the VIRTIO_NET_F_MQ feature, initialize up to max_virtqueue_pairs of each of the transmit and receive queues, and then execute the control-virtqueue command that sets the active queue-pair count. Conversely, once VIRTIO_NET_F_MQ is cleared by the backend, the guest virtio driver starts complaining about the dependent feature flags too.

Multiple queues are also how receive-side scaling reaches virtual machines. RSS is often used in real hardware to distribute flows to multiple queues (Myricom libpcap, PF_RING/DNA/ZC, Intel X710), and it also works with virtio and vmxnet3 in the virtual world: tell the hypervisor to add multiple queues per virtual NIC (libvirt: virtio device, queues=N), then tell the guest to use them (Linux: ethtool -L). Note that UDP RSS takes effect only if you are injecting traffic with varying source UDP ports, and for tunneled packets (e.g., VxLAN) uniform distribution requires inner RSS, i.e., using the inner header for the RSS hash or varying the outer UDP source port. The hardware ancestry here is VMDq: the [RFC] Virtual Machine Device Queues (VMDq) support on KVM describes a network adapter that presents multiple pairs of tx/rx queues and renders an L2 sorting mechanism based on MAC addresses and VLAN tags for each tx/rx queue pair; together with Data Center Bridging (DCB), incoming traffic can be divided into up to 128 RX queues, and DCB is also supported for transmitting packets. Snabb Switch goes as far as creating a 1:1 mapping between Virtio-net descriptors and VMDq hardware receive descriptors.
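To confirm that flows really are spreading, read the per-queue counters added by the ethtool -S patch mentioned earlier (stat names vary by kernel version; eth0 is illustrative):

    ethtool -S eth0 | grep -E '(rx|tx)_queue_[0-9]+_packets'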
On the guest side the implementation stack is PCI transport, virtio device, virtio-net, and the virtqueues; the standard Linux kernel driver for this vNIC is called virtio-net, and it depends on virtio-ring and virtio-pci. Queue-related work items in this area have included configuration of interrupt-driven receive queues, ensuring virtio ring endianness, dynamic interrupt coalescing, an rte_vhost rework for starting/stopping particular queues, and SPDK vhost-FS (9PFS with BlobFS). During reconfiguration, other queues are polled as soon as they're initialized; a simple check is to launch virtio-pmd with 1 queue first, then change the number to 2 queues in testpmd, and verify that after the queue number changes the virtio-pmd can use 2 queues to RX/TX packets normally. In the kernel's block layer, blk_mq_virtio_map_queues() provides a default queue mapping for a virtio device, taking the tagset @set, the virtio device @vdev associated with it, and @first_vec, the first interrupt vector to use for queues (usually 0); the function assumes the virtio device has at least as many available vectors as queues.

Virtqueues carry more than network and block traffic. virtio-input basically sends Linux evdev events over virtio: the events themselves are sent over virtio queues and look exactly like evdev events, except that the struct fields are always in little-endian byte order, while the device capability bits (available via ioctl for evdev) live in the virtio device config space. virtio-vsock provides a way to tell the vhost module about the ioeventfd and the irqfd for the device as well as the guest memory regions assigned for the virtual queues. The proposed Signal Distribution Module (SDM) virtio device routes signals among processors: each remote processor (or slave) uses its own SDM slave device and updates only the related virtio queues, so each SDM device has its own pair of virtio queues (gh_vq and hg_vq). In ACRN, virtio-net is the para-virtualization solution for networking: the device model emulates virtual NICs for the UOS following the virtio specification, virtio_linkup() links a virtio_base to its constants, the virtio device, the PCI emulation, and its queues, and an audio rework increased the audio virtio queues from 2 to 4. For remote processors, rpmsg_vdev_init() initializes the RPMsg virtio queues and shared buffers; the shm address can be anything, and if none is given the function takes shared memory from the system shared memory pools.
Management layers still have catching up to do. To take full advantage of multiqueue, openvswitch might want to become macvtap aware, and it also needs to incorporate support for opening tap fds multiple times so it can create as many queues; per-queue QoS additionally requires plumbing in openvswitch to instantiate QoS flows per queue via Linux tc, on top of the virtio-net multiqueue work itself. A guest-facing refinement, "only configure enabled virt queues", delays calling config_queue until the driver has explicitly enabled the queue, which indicates both that the rest of the configuration contents are valid and that the driver wants this queue to be used; this leaves the other virtio queues alive as far as QEMU is concerned.

To recap the motivation: the scale of the protocol stack in the guest is restricted because of single-queue virtio-net. Network performance does not scale as the number of vCPUs increases, and the guest cannot transmit or retrieve packets in parallel, as virtio-net has only one TX and one RX queue and the drivers must be synchronized before sending and receiving packets. (For the host half of the picture, see "QEMU Internals: vhost architecture", which explains how vhost provides in-kernel virtio devices for KVM.) When multiqueue is enabled, the driver typically sets the number of queues equal to the number of guest vCPUs. To use multi-queue virtio-net, enable support in the guest by adding the following to the guest XML configuration, where the value of N is from 1 to 256, as the kernel supports up to 256 queues for a multi-queue tap device.
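The standard snippet (the source network name is illustrative):

    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
      <driver name='vhost' queues='N'/>
    </interface>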
A quick hotplug regression test for multiqueue, driven from the QEMU monitor: boot a guest with 2 queues; use netdev_del and device_del in the monitor to delete the virtio-net device; check in the guest and in the monitor (info network) that the device is removed; then use netdev_add tap,id=hn1,queues=2,vhost=on and device_add to hot-add a device back into the guest. Expect no crash. Defaults remain an open question: if QEMU is configured for 64 queues, is that a reasonable default for the guest? In libvirt's virtio-net multiqueue support the number of queues is made equal to the number of vCPUs, and Bug 1396578 is an RFE to backport the virtio-net multi-queue enablement-by-default patch. On the storage side, the outstanding vhost-scsi work is to update vhost-scsi to implement the latest virtio-scsi device specification, ensure vhost-scsi I/O still works, and design the libvirt integration for the LIO QEMU SCSI target.

A closing thought from the bufferbloat fight: virtio is a virtualization standard for network and disk device drivers where just the guest's device driver "knows" it is running in a virtual environment and cooperates with the hypervisor, but its packets still traverse the host's software queues. TCP Small Queues (TSQ, 2014-08-07) attacks that layer: its goal is to reduce the number of TCP packets in the xmit queues (qdisc and device queues), reducing the RTT and cwnd bias that are part of the bufferbloat problem.
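The same recipe as monitor commands (IDs are illustrative; vectors follows the 2N+2 rule):

    (qemu) netdev_del hn0
    (qemu) device_del net0
    (qemu) info network
    (qemu) netdev_add tap,id=hn1,queues=2,vhost=on
    (qemu) device_add virtio-net-pci,netdev=hn1,mq=on,vectors=6,id=net1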