Tag Archive for: NFV

Project SONATA announces the 3.0 release

SONATA announces the launch of an upgraded release of its integrated service platform, SONATA 3.0, which includes all the software components developed, integrated, tested and qualified within the time frame of the project.

Read our press release here

SONATA Service Platform Release 2.0

The SONATA H2020 project, in which NCSRD participates, officially announces the launch of its release 2.0, the third official software release delivered since the project started in July 2015.

SONATA is developing a Network Functions Virtualisation (NFV) service framework that provides a programming model and development tool chain for virtualized services, fully integrated with a DevOps-enabled service platform and orchestration system. SONATA results include:

  • SONATA Service Platform, which accommodates Communication Service Providers’ needs in the new and much more challenging 5G environment.
  • SONATA Network Service Software Development Kit (SDK) that provides an invaluable set of tools for assisting developers in the development and testing of NFV-based services.

These SONATA building blocks complement each other in the context of the next generation of mobile networks and telecommunication standards, referred to as 5G, focusing on optimal use of the available network and cloud infrastructure.

 

SONATA 2.0 Release main improvements

SONATA Service Platform

  • Tool for automatic and complete installation of the Service Platform.
  • New security policy (HTTPS and user registration).
  • New modules in the Gatekeeper that provide monetization capability and business intelligence support.
  • Function Lifecycle Managers and improved Specific Managers infrastructure in the MANO Framework.
  • Initial support for container-based VIMs, multi-PoP service deployment, Service Function Chaining configuration, and VNF image pre-deployment and management in the Infrastructure Abstraction Layer.
  • Extended monitoring functionality.
  • Improved Catalogues and Repositories.
  • Use of the platform's GUI by Service Platform managers to manage VIMs.

SONATA SDK

  • Automated versioning of development setups.
  • Improved validation functionality for detecting bugs in the developed network services/functions.
  • Updated SDK tools to support multiple platforms and authentication/security options.
  • Extended SDK emulator, a key asset that provides a very rapid testing environment for deploying developed services.
  • Additional debugging and monitoring functionality for easy inspection and visualisation of network interfaces and links.

OVS – DPDK on Openstack Newton

In this tutorial we show detailed instructions and debugging information for deploying a DPDK-enabled OVS on an Openstack Newton environment running on Ubuntu 16.04.

First and foremost, you must have a working Openstack Newton environment with OVS networking.

Secondly, you need to have a DPDK-enabled OVS built and running on your system.

The easy way to do that is to simply download and configure the official package, following these instructions:

https://software.intel.com/en-us/articles/using-open-vswitch-with-dpdk-on-ubuntu

sudo apt-get install openvswitch-switch-dpdk
sudo update-alternatives --set ovs-vswitchd /usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk
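
Since the DPDK-enabled binary is selected via update-alternatives, you can verify which ovs-vswitchd is active with:

update-alternatives --display ovs-vswitchd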

This installs OVS with DPDK support, but we also need to add some parameters to the configuration files and enable DPDK.

However, before that we need to build DPDK and reserve some hugepages in order to make it run successfully.

The easiest way we have found to do so is to download the DPDK source from dpdk.org and then run ./<DPDK-dir>/tools/dpdk-setup.sh

Then select the reserve hugepages option and enter the number (for us it was 4096 x 2MB hugepages).
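
If you prefer not to use the interactive script, a minimal sketch of the equivalent manual steps (assuming 2MB hugepages and the standard sysfs layout; this reservation does not persist across reboots) is:

# Reserve 4096 x 2MB hugepages on the running system
echo 4096 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Mount hugetlbfs if it is not already mounted
sudo mkdir -p /dev/hugepages
sudo mount -t hugetlbfs none /dev/hugepages

# Verify the reservation
grep Huge /proc/meminfo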

Now it is time to configure the OVS files. In /etc/default/openvswitch-switch, an example configuration would be:

DPDK_OPTS='--dpdk -c 0x3 -n 4 --socket-mem 512 --vhost-owner libvirt-qemu:kvm --vhost-perm 0660'

SIDENOTE: The vhost-perm parameter is very important; without it you may get a permission denied error in KVM when binding the port to the VM.

In addition, one more thing needs to be configured in /etc/libvirt/qemu.conf

You need to set:

user = "root"
group = "root"
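
After editing qemu.conf, restart libvirt so that the change takes effect (on Ubuntu 16.04 the service is typically named libvirt-bin):

sudo service libvirt-bin restart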

Then, while OVS is running, execute this command:

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

And then restart the ovs service.

service openvswitch-switch restart

Check the logs to verify successful execution. You should see something like this in /var/log/openvswitch/ovs-vswitchd.log:

dpdk|INFO|DPDK Enabled, initializing
dpdk|INFO|No vhost-sock-dir provided - defaulting to /var/run/openvswitch
dpdk|INFO|EAL ARGS: ovs-vswitchd --socket-mem 1024,0 -c 0x00000001
dpdk|INFO|DPDK pdump packet capture enabled
ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports recirculation
ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label stack length probed as 3

A common cause of failure is not reserving sufficient hugepages or not filling in the configuration files correctly.

From this point on, only the Openstack part remains to be configured.

Now comes the first tricky part. Most guides state that you need to configure the [OVS] section in ml2_conf.ini, like this:

[OVS]
datapath_type=netdev
vhostuser_socket_dir=/var/run/openvswitch

What they fail to state is that in the Newton release you also need to change /etc/neutron/plugins/ml2/openvswitch_agent.ini, which overrides ml2_conf.ini.
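
As a sketch, the corresponding section in /etc/neutron/plugins/ml2/openvswitch_agent.ini carries the same keys as the example above (restart the neutron-openvswitch-agent service afterwards):

[OVS]
datapath_type=netdev
vhostuser_socket_dir=/var/run/openvswitch

service neutron-openvswitch-agent restart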

Once OVS is correctly configured with DPDK support, vhost-user interfaces are completely transparent to the guest. However, guests must request large pages. This can be done through flavors. For example:

openstack flavor set m1.large --property hw:mem_page_size=large
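
To confirm that the property was applied, the flavor can be inspected (m1.large is just the flavor used in the example above):

openstack flavor show m1.large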

At last we are ready to set up and boot a DPDK-port-enabled VM.

We can boot it on an already created network, or create a new one, as in the sketch below.
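
As a rough sketch (the network, subnet and VM names below are just examples), creating a new network and booting the VM on it looks like this:

# Create an example network and subnet for the DPDK-enabled VM
neutron net-create dpdk-net
neutron subnet-create dpdk-net 10.0.10.0/24 --name dpdk-subnet

# Boot the VM with the hugepage-enabled flavor configured earlier
nova boot --flavor m1.large --image <Image_Id> --nic net-id=<Network_Id> dpdk-vm

Once the instance is active, ovs-vsctl show should list a vhost-user port (with a name starting with vhu) on the integration bridge.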

MediaNet Lab in the first open-call experiments of SoftFIRE project

The proposal of the MediaNet Lab in the first open call of the SoftFIRE project has been accepted, and MediaNet Lab will therefore participate in the first experiments of the SoftFIRE project. SoftFIRE aims at creating an ecosystem of organizations around the technological field of NFV/SDN and its evolution towards 5G developments. In order to achieve this goal, SoftFIRE plans to aggregate as many organizations and people as possible around an enabling platform. The SoftFIRE platform consists of a federated testbed that comprises very different experimental frameworks and is available for experimenting with new services, applications and functional extensions of the platform.

The aim of the MediaNet Lab experiments is the development and validation of the necessary extensions to the current SoftFIRE federated testbed, in order to enhance it with the capability to execute experiments involving satellite communication systems.

More details will follow in due time, once the design of the experiment and the expected results have been fully defined.

 

VITAL NFV Manager

In the framework of the EC-funded research project VITAL, MediaNet Lab presented at the EC premises, during the second review meeting, v2.0 of the VITAL NFV Manager: a lightweight NFV Orchestrator capable of composing, deploying, instantiating and managing a Virtual Satellite Network (i.e. a Network Service which includes, beyond the terrestrial VNFs, also virtualised Satellite Network Functions). V2.0 of the NFV Manager has been upgraded with a graphical user interface, which provides resource monitoring per NFVI-PoP of the NFV infrastructure and per instantiated Network Service/Virtual Satellite Network.

The NFV Manager is compatible with the Openstack CCP and the Opendaylight Platform, and has been developed utilizing the Meteor Framework, Javascript and Python. For the needs of the satellite components, the OpenSAND emulator platform has been integrated.

The following figures present some screenshots from the monitoring dashboard of the NFV Manager and the NS/VSN composition tool.

NFV Manager Dashboard


NS/VSN Composition by NFV Manager


A basic version of the NFV Manager is planned to be released as open source.

 

SR-IOV in Openstack – Various Tips, Hacks and Setups

Single Root I/O Virtualization (SR-IOV) in networking is a very useful and powerful feature for virtualized network deployments.

SR-IOV is a specification that allows a PCI device, for example a NIC or a graphics card, to share access to its resources among various PCI hardware functions: the Physical Function (PF), i.e. the real physical device, from which one or more Virtual Functions (VFs) are generated.

Suppose we have one NIC and we want to share its resources among various Virtual Machines or, in NFV terms, among the various VNFCs of a VNF.

We can split the PF into numerous VFs and distribute each one to a different VM.

The routing and forwarding of the packets is done through L2 switching, where packets are forwarded to the VF with the matching MAC address.
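
As a quick sketch of how VFs are created in the first place (assuming the NIC driver exposes the sriov_numvfs interface, and using p2p1, the interface name that appears in the output further below):

# Create 4 VFs on the physical function p2p1
echo 4 | sudo tee /sys/class/net/p2p1/device/sriov_numvfs

# Verify that the VFs are visible
lspci | grep -i "virtual function"
ip link show p2p1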

The purpose of this post is to share a few tips and hacks we came across during our general activities related to SRIOV.

A very good tutorial for SR-IOV setup: https://samamusingworld.wordpress.com/2015/01/10/sriov-pci-passthrough-feature-with-openstack/

 

SRIOV VF Mirroring

Let’s say you want to send the same flows and packets to 2 VMs simultaneously.

If you run ip link show, you should see something like this:

p2p1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether a0:36:9f:68:fc:f4 brd ff:ff:ff:ff:ff:ff
vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
vf 1 MAC fa:16:3e:c0:d8:11, spoof checking on, link-state auto
vf 2 MAC fa:16:3e:a1:43:57, spoof checking on, link-state auto
vf 3 MAC fa:16:3e:aa:33:59, spoof checking on, link-state auto

In order to perform our mirroring and send all traffic both ways we need to change the MAC address both on the VM and on the VF and disable the spoof check.

Let's give vf 2 the MAC address of vf 3.

On the VM:

ifconfig eth0 down
ifconfig eth0 hw ether fa:16:3e:aa:33:59
ifconfig eth0 up

On the host – VF:

ip link set eth0 down
ip link set eth0 vf 2 mac fa:16:3e:aa:33:59
ip link set eth0 vf 2 spoofchk off
ip link set eth0 up

After that we have 2 VFs with the same MAC.

But it will still not work. What you have to do is change the vf 2 MAC again to something resembling the latest MAC:

ip link set eth0 vf 2 mac fa:16:3e:aa:33:58

After these changes, through the experiments we performed, we managed to mirror the traffic on 2 different VFs.
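
A simple way to verify the mirroring (assuming, as above, that eth0 is the VF-backed interface inside each VM) is to capture traffic on both VMs while generating traffic towards the shared MAC:

# Run inside each of the two VMs; both captures should show the same packets
sudo tcpdump -i eth0 -e -n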

 

SRIOV Openstack setup with flat networking – no VLAN

In Openstack, the default setup and various tutorials use VLAN networking, meaning the routing is done through MAC and VLAN.

In one of our tests we had trouble creating traffic matching both rules, so we investigated the no-VLAN option.

Even though the setup of SR-IOV over flat networking in Openstack is pretty simple, we did not find any tutorial or note underlining its simplicity.

The steps are pretty straightforward :

neutron net-create --provider:physical_network=physnet1 --provider:network_type=flat <Network_Name>
neutron subnet-create <Network_Name> <CIDR> --name <Subnet_Name> --allocation-pool start=<start_ip>,end=<end_ip>
neutron port-create <Network_Id> --binding:vnic_type direct

And launch the VM with the port you have just created:

nova boot --flavor <Flavor_Id> --image <Image_Id> --nic port-id=<Port_Id> <VM_Name>
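
To check that the port was actually bound as an SR-IOV port, you can inspect its binding attributes and look at the binding:vif_type and binding:vnic_type fields (with the sriovnicswitch mechanism driver the vif_type for direct ports is typically hw_veb, though this depends on the driver):

neutron port-show <Port_Id>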

 

Opendaylight (Helium) – Openstack (Juno) integration for NFVI implementation

In the frame of the EU funded ICT T-NOVA Project, Medianetlab will host one of the project’s pilot sites. The project has presented an initial reference demonstrator architecture based on the integration of Openstack and Opendaylight constituting the Virtualised Infrastructure Manager for the Network Function VIrtualisation Infrastructure (NFVI. The first take on T-NOVA high-level architecture of the project is presented in the public deliverable D2.21.

Reference NFVI-PoP architecture

Medianetlab provides lessons learned and guidelines for deploying ODL and Openstack from scratch over Ubuntu using GRE networking. The details of the deployment setup and the configurations used are available here.

 

 

FP7 ICT T-NOVA Kick-off meeting

The FP7 ICT T-NOVA kick-off meeting was organised and hosted at the NCSR Demokritos premises by MediaNet Lab. Project T-NOVA officially started on 1/1/2014 and its duration is 3 years. The project is coordinated by NCSRD (Dr. A. Kourtis).

With the aim of promoting the NFV concept, T-NOVA introduces a novel enabling framework, allowing operators not only to deploy virtualized Network Functions (NFs) for their own needs, but also to offer them to their customers, as value-added services. Virtual network appliances (gateways, proxies, firewalls, transcoders, analyzers etc.) can be provided on-demand as-a-Service, eliminating the need to acquire, install and maintain specialized hardware at customers’ premises.

For these purposes, T-NOVA will design and implement a management/orchestration platform for the automated provision, configuration, monitoring and optimization of Network Functions-as-a-Service (NFaaS) over virtualised Network/IT infrastructures. T-NOVA leverages and enhances cloud management architectures for the elastic provision and (re-) allocation of IT resources assigned to the hosting of Network Functions. It also exploits and extends Software Defined Networking platforms for efficient management of the network infrastructure.

Furthermore, in order to facilitate the involvement of diverse actors in the NFV scene and attract new market entrants, T-NOVA establishes a “NFV Marketplace”, in which network services and Functions by several developers can be published and brokered/traded. Via the Marketplace, customers can browse and select the services and virtual appliances which best match their needs, as well as negotiate the associated SLAs and be charged under various billing models. A novel business case for NFV is thus introduced and promoted.

T-NOVA is an Integrated Project co-funded by the European Commission / 7th Framework Programme, Grant Agreement no. 619520. Its duration is 36 months (January 2014 – December 2016).

Medianetlab entering SDN and NFV Era

Medianet Lab will lead and coordinate the FP7 Integrated Project T-NOVA, which was submitted in ICT Call 11 (FP7-ICT-2013-11) under Objective ICT-2013.1.1: Future Networks.

T-NOVA project plans to exploit the emerging concept of Network Functions Virtualisation (NFV), migrating network functions originally performed by hardware elements to virtualised infrastructures, deployed as software components. T-NOVA allows operators not only to deploy virtualized Network Functions (NFs) for their own needs, but also to offer them to their customers, as value-added services. T-NOVA leverages and enhances cloud management architectures for the elastic provision and (re-) allocation of IT resources assigned to the hosting of Network Functions. It also exploits and extends Software Defined Networking (SDN) platforms for efficient management of the network infrastructure. Furthermore T-NOVA establishes a “NFV Marketplace”, in which network services and Functions by several developers can be published and brokered/traded. Via the Marketplace, customers can browse and select the services and virtual appliances which best match their needs, as well as negotiate the associated SLAs and be charged under various billing models. A novel business case for NFV is thus introduced and promoted.

Read more