
OVS – DPDK on Openstack Newton

In this tutorial we provide detailed instructions and debugging information for deploying a DPDK-enabled OVS in an OpenStack Newton environment on Ubuntu 16.04.

First and foremost you must have a working Openstack Newton environment with OVS networking.

Secondly, you need a DPDK-enabled OVS built and running on your system.

The easy way to do that is to download and configure the official package, following these instructions:

https://software.intel.com/en-us/articles/using-open-vswitch-with-dpdk-on-ubuntu

sudo apt-get install openvswitch-switch-dpdk
sudo update-alternatives --set ovs-vswitchd /usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk

This installs OVS with DPDK included, but we still need to add some parameters to the configuration files and then enable DPDK.

Before that, however, we need to build DPDK and reserve some hugepages in order for it to run successfully.

The easiest way I have found is to download the DPDK source from dpdk.org and then run ./<DPDK-dir>/tools/dpdk-setup.sh

Then select the hugepage reservation option and enter the desired number (for us it was 4096 x 2MB hugepages).
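If you prefer not to use the interactive script, the same reservation can also be done directly through the kernel interfaces; a minimal sketch, assuming 2MB pages and the same count as above:

# reserve the hugepages at runtime (not persistent across reboots)
sudo sysctl -w vm.nr_hugepages=4096
# verify the reservation
grep HugePages_Total /proc/meminfo
# mount hugetlbfs if it is not already mounted
sudo mkdir -p /dev/hugepages
sudo mount -t hugetlbfs nodev /dev/hugepages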

Now it is time to configure the OVS files.

In /etc/default/openvswitch-switch, an example configuration would be:

DPDK_OPTS='--dpdk -c 0x3 -n 4 --socket-mem 512 --vhost-owner libvirt-qemu:kvm --vhost-perm 0660'

SIDENOTE: The vhost-perm parameter is very important; getting it wrong may lead to a permission denied error in KVM when binding the port to the VM.

So one more thing needs to be configured, in /etc/libvirt/qemu.conf.

You need to set:

user = "root"
group = "root"

Then, with OVS running, execute this command:

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
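Depending on your OVS version, the EAL arguments can also be set through the database instead of DPDK_OPTS; a sketch using the same core mask and socket memory as above:

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x3
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=512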

And then restart the OVS service:

service openvswitch-switch restart

Check the logs to verify successful execution.

You should see something like this in /var/log/openvswitch/ovs-vswitchd.log:

dpdk|INFO|DPDK Enabled, initializing
dpdk|INFO|No vhost-sock-dir provided - defaulting to /var/run/openvswitch
dpdk|INFO|EAL ARGS: ovs-vswitchd --socket-mem 1024,0 -c 0x00000001
dpdk|INFO|DPDK pdump packet capture enabled
ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports recirculation
ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label stack length probed as 3

A common cause of failure is not reserving sufficient hugepages or not filling in the configuration files correctly.
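A quick way to confirm the hugepage reservation and the hugetlbfs mount before starting OVS:

grep Huge /proc/meminfo
mount | grep huge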

From then on, the OpenStack part remains to be configured.

Now comes the first tricky part. Most guides state that you need to configure the [OVS] section in ml2_conf.ini, like this:

[OVS]
datapath_type=netdev
vhostuser_socket_dir=/var/run/openvswitch

What they fail to state is that in the Newton release you also need to change /etc/neutron/plugins/ml2/openvswitch_agent.ini, which overrides ml2_conf.ini.
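So, on the compute nodes the same settings should go into the [OVS] section of that file; a sketch of how we would set it (path as on Ubuntu 16.04):

# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[OVS]
datapath_type=netdev
vhostuser_socket_dir=/var/run/openvswitch

Then restart the neutron Open vSwitch agent (typically neutron-openvswitch-agent on Ubuntu).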

Once OVS is correctly configured with DPDK support, vhost-user interfaces are completely transparent to the guest. However, guests must request large pages. This can be done through flavors. For example:

openstack flavor set m1.large --property hw:mem_page_size=large

At last we are ready to set up and boot a VM with a DPDK port.

We can boot it on an already created network, or create a new one.
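For example, a minimal sketch of creating a new network and booting on it (the network, subnet, image and VM names are placeholders, not part of the original setup):

openstack network create dpdk-net
openstack subnet create --network dpdk-net --subnet-range 10.0.10.0/24 dpdk-subnet
openstack server create --flavor m1.large --image xenial-cloud --nic net-id=<network_id> dpdk-vm1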

SR-IOV in Openstack – Various Tips, Hacks and Setups

Single Root I/O Virtualization (SR-IOV) in networking is a very useful and powerful feature for virtualized network deployments.

SR-IOV is a specification that allows a PCI device, for example a NIC or a graphics card, to share access to its resources among various PCI hardware functions:

a Physical Function (PF), i.e. the real physical device, from which one or more Virtual Functions (VFs) are generated.

Suppose we have one NIC and we want to share its resources among various virtual machines or, in NFV terms, among the various VNFCs of a VNF.

We can split the PF into numerous VFs and distribute each one to a different VM.
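On most Linux systems the VFs are created through sysfs; a minimal sketch, assuming the PF is p2p1 and the driver supports the sriov_numvfs interface:

# create 4 VFs on the physical function
echo 4 | sudo tee /sys/class/net/p2p1/device/sriov_numvfs
# list the PF and its VFs
ip link show p2p1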

The routing and forwarding of packets is done at L2, where each packet is forwarded to the VF with the matching MAC address.

The purpose of this post is to share a few tips and hacks we came across during our general activities related to SRIOV.

A very good tutorial for SRIOV setup : https://samamusingworld.wordpress.com/2015/01/10/sriov-pci-passthrough-feature-with-openstack/

 

SRIOV VF Mirroring

Let’s say you want to send the same flows and packets to 2 VMs simultaneously.

If you enter ip link show you should see something like this:

p2p1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether a0:36:9f:68:fc:f4 brd ff:ff:ff:ff:ff:ff
vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
vf 1 MAC fa:16:3e:c0:d8:11, spoof checking on, link-state auto
vf 2 MAC fa:16:3e:a1:43:57, spoof checking on, link-state auto
vf 3 MAC fa:16:3e:aa:33:59, spoof checking on, link-state auto

In order to perform the mirroring and send all traffic both ways, we need to change the MAC address both on the VM and on the VF, and disable the spoof check.

Let's change vf 2 to use vf 3's MAC.

On the VM:

ifconfig eth0 down
ifconfig eth0 hw ether fa:16:3e:aa:33:59
ifconfig eth0 up

On the host, for vf 2 on the PF (p2p1 from the listing above):

ip link set p2p1 down
ip link set p2p1 vf 2 mac fa:16:3e:aa:33:59
ip link set p2p1 vf 2 spoofchk off
ip link set p2p1 up

After that we have 2 VFs with the same MAC.

But it will still not work. What you have to do is change vf 2's MAC again to something resembling the last MAC:

ip link set eth0 vf 2 mac fa:16:3e:aa:33:58

After these changes, in the experiments we performed, we managed to mirror the traffic on 2 different VFs.
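A simple way to verify this is to generate traffic towards the shared MAC and capture on both guests at the same time; a sketch, assuming the interface inside each VM is eth0:

# run on each of the two VMs simultaneously
sudo tcpdump -i eth0 -n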

 

SRIOV Openstack setup with flat networking – no VLAN

In OpenStack the default setup and most tutorials use VLAN networking, meaning the forwarding is done based on MAC and VLAN.

In one of our tests we had trouble creating traffic that matched both rules, so we investigated the no-VLAN option.

Even though the setup of SRIOV over flat networking in Openstack is pretty simple, we did not find any tutorial, or a note underlining its simplicity.

The steps are pretty straightforward :

neutron net-create --provider:physical_network=physnet1 --provider:network_type=flat <Network_Name>
neutron subnet-create <Network_Name> <CIDR> --name <Subnet_Name> --allocation-pool start=<start_ip>,end=<end_ip>
neutron port-create <Network_Id> --binding:vnic_type direct

And launch a VM with the port you have just created:

nova boot --flavor <Flavor_Id> --image <Image_Id> --nic port-id=<Port_Id> <VM_Name>

 

Launching Docker containers in OpenStack

In this post we will show how to configure a compute node in OpenStack to launch Docker containers. We assume that you already have a working OpenStack installation. The configuration we describe below worked for OpenStack Juno, while the controller and compute nodes were running Ubuntu 14.04 LTS.

Install Docker and Docker driver for OpenStack on the compute node

Installing Docker on Ubuntu (Docker only works on 64-bit Ubuntu):

 sudo sh -c "curl https://get.docker.io/gpg | apt-key add -"
 sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
 sudo apt-get update
 sudo apt-get install lxc-docker

Add nova to the docker group and restart the compute service to pick up the change:

sudo usermod -aG docker nova
sudo service nova-compute restart

Install the nova-docker driver:

sudo apt-get install python-pip
sudo apt-get install python-dev
git clone https://github.com/stackforge/nova-docker
cd nova-docker
git checkout stable/juno
sudo python setup.py install

 

Configuring the compute and the controller nodes for Docker

Nova configuration in the compute node

With admin privileges, edit the configuration file /etc/nova/nova.conf to include the following options:

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver
vif_plugging_is_fatal=False
vif_plugging_timeout=0

Create the directory /etc/nova/rootwrap.d, if it does not already exist, and inside that directory create a file “docker.filters” with the following content:

# nova-rootwrap command filters for setting up the network in the docker driver
# This file should be owned by (and only writeable by) root
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

Glance configuration in the controller node

Glance also needs to be configured to support the Docker container format, in /etc/glance/glance-api.conf (on the controller):

[DEFAULT]
container_formats = ami,ari,aki,bare,ovf,docker

And then restart the Glance service:

service glance-api restart
service glance-registry restart

Now, back on the compute node, do the following:

sudo chmod 666 /var/run/docker.sock
sudo chmod 777 /var/run/libvirt/libvirt-sock
service nova-compute restart
sudo service docker restart

And edit the /etc/nova/nova-compute.conf:

[DEFAULT]
compute_driver=novadocker.virt.docker.DockerDriver
#[libvirt]
#virt_type=kvm

 

Launching Docker containers

Example: A minimal container that runs an HTTP server (thttpd) on port 80

In the compute node:

docker pull larsks/thttpd
docker save larsks/thttpd | glance image-create --is-public True --container-format docker --disk-format raw --name larsks/thttpd

In the controller node, first source your OpenStack RC file and then boot your docker instance:

source demo-openrc.sh
nova boot --image "larsks/thttpd" --flavor m1.small --nic net-id=fa234617-3ec6-481c-a17e-89bd54fce60b --availability_zone=nova:node3 docker-test-vm

Now, check the instances in OpenStack Horizon to see that our newly created instance is running. Then assign a floating IP address through Horizon and try http://<assigned_ip>/ to see if it is working.
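For a quick check from any machine that can reach the floating IP:

curl http://<assigned_ip>/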

That’s all!

Opendaylight (Helium) – Openstack (Juno) integration for NFVI implementation

In the frame of the EU-funded ICT T-NOVA project, Medianetlab will host one of the project's pilot sites. The project has presented an initial reference demonstrator architecture based on the integration of OpenStack and OpenDaylight, constituting the Virtualised Infrastructure Manager for the Network Function Virtualisation Infrastructure (NFVI). The first take on the project's T-NOVA high-level architecture is presented in the public deliverable D2.21.

Reference NFVI-PoP architecture

Medianetlab provides lessons learned and guidelines for deploying ODL and OpenStack from scratch over Ubuntu using GRE networking. The details of the deployment setup and the configurations used are available here.