
OVS-DPDK on OpenStack Newton

In this tutorial we give detailed instructions and debugging information for deploying a DPDK-enabled OVS in an OpenStack Newton environment on Ubuntu 16.04.

First and foremost, you must have a working OpenStack Newton environment with OVS networking.

Secondly, you need a DPDK-enabled OVS built and running on your system.

The easy way to do that is to just download and configure the official package.

Following these instructions:

https://software.intel.com/en-us/articles/using-open-vswitch-with-dpdk-on-ubuntu

sudo apt-get install openvswitch-switch-dpdk
sudo update-alternatives --set ovs-vswitchd /usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk
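To confirm that the DPDK-enabled binary is now the active alternative, you can check it like this (the exact paths may differ slightly on your system):

update-alternatives --display ovs-vswitchd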

This installs an OVS build with DPDK support. We still need to add some parameters to the configuration files and enable DPDK.

However, before that we need to build DPDK and reserve some hugepages so that it can run successfully.

The easiest way I have found to do so is to download the DPDK source from dpdk.org and then run <DPDK-dir>/tools/dpdk-setup.sh.

Then select the option to reserve hugepages and enter the number (for us it was 4096 x 2MB hugepages).
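If you prefer a non-interactive way, the same reservation can be done directly through the kernel interfaces; a minimal sketch, assuming 2MB hugepages and the usual hugetlbfs mount point:

# reserve 4096 x 2MB hugepages at runtime
sysctl -w vm.nr_hugepages=4096
# make sure hugetlbfs is mounted (usually at /dev/hugepages)
mount | grep huge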

Now it is time to configure the OVS files.

In /etc/default/openvswitch-switch, an example configuration would be:

DPDK_OPTS='--dpdk -c 0x3 -n 4 --socket-mem 512 --vhost-owner libvirt-qemu:kvm --vhost-perm 0660'

SIDENOTE: The vhost-perm parameter is very important; getting it wrong may lead to a permission denied error from KVM when binding the vhost-user port to the VM.

So one more thing needs to be configured, in /etc/libvirt/qemu.conf.

You need to set:

user = "root"
group = "root"

Then, while OVS is running, execute this command:

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
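On the OVS 2.6 series that ships with Newton, the rest of the EAL options can also be set through the database instead of DPDK_OPTS; a sketch with example values, adjust the core mask and per-NUMA-node memory to your own machine:

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x3
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="512,0"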

And then restart the OVS service:

service openvswitch-switch restart

Check the logs to verify that it started successfully.

You should see something like this in /var/log/openvswitch/ovs-vswitchd.log:

dpdk|INFO|DPDK Enabled, initializing
dpdk|INFO|No vhost-sock-dir provided - defaulting to /var/run/openvswitch
dpdk|INFO|EAL ARGS: ovs-vswitchd --socket-mem 1024,0 -c 0x00000001
dpdk|INFO|DPDK pdump packet capture enabled
ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports recirculation
ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label stack length probed as 3

A common cause of failure is not reserving sufficient hugepages or not filling in the configuration files correctly.
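A quick way to check that the hugepages are really there and still free at runtime (HugePages_Free must be large enough to cover the socket-mem you requested):

grep Huge /proc/meminfo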

From this point on, only the OpenStack part remains to be configured.

Now comes the first tricky part. Most guides state that you need to configure the [OVS] section in ml2_conf.ini, like this:

[OVS]
datapath_type=netdev
vhostuser_socket_dir=/var/run/openvswitch

What they fail to state is that in the Newton release you also need to change /etc/neutron/plugins/ml2/openvswitch_agent.ini, which overrides ml2_conf.ini.
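So, on a Newton compute node, put the same settings into the agent file; a sketch assuming the stock file layout, followed by an agent restart:

# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[OVS]
datapath_type = netdev
vhostuser_socket_dir = /var/run/openvswitch

service neutron-openvswitch-agent restart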

Once OVS is correctly configured with DPDK support, vhost-user interfaces are completely transparent to the guest. However, guests must request large pages. This can be done through flavors. For example:

openstack flavor set m1.large --property hw:mem_page_size=large
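To confirm that the property was applied, the flavor can be inspected (m1.large is just the flavor from the example above):

openstack flavor show m1.large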

At last we are ready to set up and boot a VM with a DPDK vhost-user port.

We can boot it on an already created network, or create a new one.
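For instance, booting on an existing network could look like the sketch below, where the image, network and VM names are placeholders you would replace with your own:

openstack network list
openstack server create --flavor m1.large --image <Image_Name> --nic net-id=<Network_Id> <VM_Name>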

SR-IOV in OpenStack – Various Tips, Hacks and Setups

Single Root I/O Virtualization (SR-IOV) in networking is a very useful and powerful feature for virtualized network deployments.

SR-IOV is a specification that allows a PCI device, for example a NIC or a graphics card, to share access to its resources among multiple PCI hardware functions:

the Physical Function (PF), i.e. the real physical device, from which one or more Virtual Functions (VFs) are generated.

Suppose we have one NIC and we want to share its resources among various Virtual Machines, or, in NFV terms, among the various VNFCs of a VNF.

We can split the PF into numerous VFs and distribute each one to a different VM.

The forwarding of packets is done at L2, where each packet is forwarded to the VF with the matching MAC address.
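For reference, on most NICs the VFs are created on the host through sysfs; a minimal sketch, with p2p1 standing in for your PF interface:

# how many VFs the NIC supports
cat /sys/class/net/p2p1/device/sriov_totalvfs
# create 4 VFs on the PF
echo 4 > /sys/class/net/p2p1/device/sriov_numvfs
# the new VFs show up as extra PCI devices
lspci | grep -i "virtual function"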

The purpose of this post is to share a few tips and hacks we came across during our general activities related to SRIOV.

A very good tutorial for SR-IOV setup: https://samamusingworld.wordpress.com/2015/01/10/sriov-pci-passthrough-feature-with-openstack/

 

SR-IOV VF Mirroring

Let’s say you want to send the same flows and packets to 2 VMs simultaneously.

If you enter ip link show you should see something like this:

p2p1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether a0:36:9f:68:fc:f4 brd ff:ff:ff:ff:ff:ff
vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
vf 1 MAC fa:16:3e:c0:d8:11, spoof checking on, link-state auto
vf 2 MAC fa:16:3e:a1:43:57, spoof checking on, link-state auto
vf 3 MAC fa:16:3e:aa:33:59, spoof checking on, link-state auto

In order to perform our mirroring and send all traffic both ways, we need to change the MAC address both on the VM and on the VF, and disable the spoof check.

Let's give vf 2 the MAC address of vf 3.

On the VM:

ifconfig eth0 down
ifconfig eth0 hw ether fa:16:3e:aa:33:59
ifconfig eth0 up

On the host, on the PF (p2p1 in our example):

ip link set p2p1 down
ip link set p2p1 vf 2 mac fa:16:3e:aa:33:59
ip link set p2p1 vf 2 spoofchk off
ip link set p2p1 up

After that we have 2 VFs with the same MAC.

But it will still not work. What you have to do is change vf 2's MAC once more to something that only resembles the target MAC (differing in the last digit):

ip link set p2p1 vf 2 mac fa:16:3e:aa:33:58

After these changes, in the experiments we performed we managed to mirror the traffic to 2 different VFs.
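To verify, you can re-check the VF table on the host and capture on both guests at the same time; the interface names below are just the ones from our example:

ip link show p2p1
# on each of the two VMs
tcpdump -i eth0 -n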

 

SR-IOV OpenStack setup with flat networking – no VLAN

In OpenStack the default setup and most tutorials use VLAN networking, meaning the forwarding is done based on MAC address and VLAN.

In one of our tests we had trouble creating traffic that matched both rules, so we investigated the no-VLAN option.

Even though the setup of SR-IOV over flat networking in OpenStack is pretty simple, we did not find any tutorial, or even a note, underlining its simplicity.

The steps are pretty straightforward :

neutron net-create --provider:physical_network=physnet1 --provider:network_type=flat <Network_Name>
neutron subnet-create <Network_Name> <CIDR> --name <Subnet_Name> --allocation-pool start=<start_ip>,end=<end_ip>
neutron port-create <Network_Id> --binding:vnic_type direct

And launch a VM with the port you have just created:

nova boot --flavor <Flavor_Id> --image <Image_Id> --nic port-id=<Port_Id> <VM_Name>
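Once the VM is up, you can check from inside the guest that the VF arrived as a PCI network device (on an Intel NIC it typically shows up as an "Ethernet Controller Virtual Function"):

lspci | grep -i ethernet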