OpenDaylight (Helium) – OpenStack (Juno) integration for NFVI implementation

In the frame of the EU-funded ICT T-NOVA project, Medianetlab will host one of the project's pilot sites. The project has presented an initial reference demonstrator architecture based on the integration of OpenStack and OpenDaylight, which together constitute the Virtualised Infrastructure Manager for the Network Function Virtualisation Infrastructure (NFVI). The first take on the T-NOVA high-level architecture is presented in the public deliverable D2.21.

Reference NFVI-PoP architecture

Medianetlab provides lessons learned and guidelines for deploying ODL and OpenStack from scratch over Ubuntu using GRE networking. The details of the deployment setup and the configurations used are available here.


OpenStack Juno – OpenDaylight Helium SR2 integration over Ubuntu 14.04 (LTS) using GRE Tunnels

This guide describes in detail the steps needed to integrate OpenStack Juno (using the Neutron ML2 networking plugin) with OpenDaylight Helium SR2 using GRE tunnels. Be careful to replace every <SOMETHING HERE> placeholder with the appropriate value.

It is also important to know that one OpenDaylight instance manages only one OpenStack deployment.

The guide consists of 10 sections.

  1. Prerequisites
  2. Erase all instances, networks, routers and ports in the Controller Node
  3. Configure OpenvSwitches in Network and Compute Nodes
  4. Configure ml2_conf.ini in all Nodes
  5. Configure Neutron Database in the Controller Node
  6. Create Initial Networks in the Controller Node
  7. Launch Instances in the Controller Node
  8. Verify Everything
  9. Troubleshooting
  10. Resources

If you need help, you can contact me at chsakkas@iit.demokritos.gr

1. Prerequisites

You must have a working OpenStack Juno deployment on Ubuntu 14.04 (LTS). To install it, use the official guide provided by the OpenStack community, available here. It is mandatory to install everything up to Chapter 6 (Network Component with Neutron). Installing Chapter 7 (Dashboard – Horizon) is also recommended.

The networks required are:

  • Management Network 10.0.0.0/24
  • Tunnel Network 10.0.1.0/24
  • External Network 203.0.113.0/24

The OpenStack nodes required for this guide are:

  • Controller node: Management Network (External Network if you want public access to the controller)
  • Network node: Management Network, External Network
  • Compute node 1: Management Network, Tunnel Network
  • Compute node 2: Management Network, Tunnel Network

If you have followed the official guide, you should have them already.

Additionally, you must have OpenDaylight Helium SR2 installed on the Management Network. You MUST install it on a separate machine.

We want OpenDaylight to communicate using OpenFlow 1.3.

Edit etc/custom.properties and uncomment the line ovsdb.of.version=1.3

Then start OpenDaylight and connect to the console.
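A minimal sketch, assuming the stock Helium SR2 Karaf distribution unpacked in your home directory (the directory name may differ on your system):

    cd distribution-karaf-0.2.2-Helium-SR2
    # starting Karaf in the foreground also drops you into the OpenDaylight console
    ./bin/karaf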

Now you are connected to OpenDaylight’s console. Install all the required features.
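The feature set below is the one commonly used for Neutron/OVSDB integration on Helium; treat it as a starting point rather than a definitive list:

    feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core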

Wait for the feature installation to finish.

To verify that everything is working, use the following command. An empty network list should be returned.
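For example, assuming the default admin/admin credentials and the northbound API listening on port 8080:

    # an empty network list in the JSON response means the Neutron northbound is up and unused
    curl -u admin:admin http://<OPENDAYLIGHT IP>:8080/controller/nb/v2/neutron/networks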

If you want to monitor OpenDaylight, there are two log files you can follow.
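A sketch for following the main Karaf log from the distribution directory (the second file, the HTTP access log under logs/, varies with the distribution):

    tail -f data/log/karaf.log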

 

2. Erase all instances, networks, routers and ports in the Controller Node

You must delete all existing instances, networks, routers and ports from all tenants. The default installation has the admin and demo tenants.

If you want, you can do it from the Horizon dashboard, or use the following commands.
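A minimal sketch using the OpenStack CLI, assuming the admin-openrc.sh credentials file from the official guide; substitute the IDs returned by the corresponding list commands:

    source admin-openrc.sh
    # instances
    nova list
    nova delete <INSTANCE ID>
    # routers: detach gateway and interfaces before deleting
    neutron router-gateway-clear <ROUTER ID>
    neutron router-interface-delete <ROUTER ID> <SUBNET ID>
    neutron router-delete <ROUTER ID>
    # ports, subnets and networks
    neutron port-delete <PORT ID>
    neutron subnet-delete <SUBNET ID>
    neutron net-delete <NETWORK ID>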

Do the same with the demo-openrc.sh credentials.

If some ports cannot be deleted do the following:
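Ports that still have a device owner (for example router interface or DHCP ports) resist a plain port-delete; one approach, assuming the stubborn port belongs to a router interface, is to detach it first:

    # remove the owning router interface, then the port can be deleted
    neutron router-interface-delete <ROUTER ID> <SUBNET ID>
    neutron port-delete <PORT ID>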

Verify that everything is empty.
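All of the following should now return empty lists (check with both the admin and demo credentials):

    nova list
    neutron port-list
    neutron router-list
    neutron net-list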

Stop the neutron-server service for the duration of the configuration.
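On Ubuntu 14.04 this is done with the service command:

    service neutron-server stop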

A message saying that the neutron-server is stopped should appear. If it does not, run the command again to make sure the service is stopped.

 

3. Configure OpenvSwitches in Network and Compute Nodes

The Neutron plugin agent on every node must be removed (or stopped and disabled), because only OpenDaylight will be controlling the Open vSwitch instances.
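A sketch for Ubuntu 14.04, assuming the agent was installed as the neutron-plugin-openvswitch-agent package:

    service neutron-plugin-openvswitch-agent stop
    # keep it from starting again at boot (Upstart override) ...
    echo "manual" > /etc/init/neutron-plugin-openvswitch-agent.override
    # ... or remove the package entirely
    # apt-get purge neutron-plugin-openvswitch-agent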

Clear the Open vSwitch database and start it again.
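For example, assuming the default Ubuntu package layout where the OVSDB file lives at /etc/openvswitch/conf.db:

    service openvswitch-switch stop
    # deleting conf.db wipes all bridges, ports and manager settings
    rm /etc/openvswitch/conf.db
    service openvswitch-switch start
    ovs-vsctl show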

The last command must return an empty Open vSwitch configuration. You should see only the <OPENVSWITCH ID> and the version.

Use the following command to configure tunnel end-points.
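A sketch, where <OPENVSWITCH ID> is the UUID printed by ovs-vsctl show and <TUNNEL INTERFACE IP> is this node's address on the 10.0.1.0/24 tunnel network:

    ovs-vsctl set Open_vSwitch <OPENVSWITCH ID> other_config={'local_ip'='<TUNNEL INTERFACE IP>'}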

Nothing will appear if this command is entered correctly. To verify the configuration you can use:
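    # the local_ip configured above should appear in the other_config column
    ovs-vsctl list Open_vSwitch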

ONLY NETWORK NODE SECTION START

Create the bridge br-ex that is needed for the external network for OpenStack.
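For example, assuming eth2 is the physical interface attached to the external network (the interface name is site-specific):

    ovs-vsctl add-br br-ex
    ovs-vsctl add-port br-ex eth2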

ONLY NETWORK NODE SECTION END

Connect every Open vSwitch instance to the OpenDaylight controller.
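A sketch, where <OPENDAYLIGHT IP> is the controller's address on the Management Network and 6640 is the standard OVSDB manager port:

    ovs-vsctl set-manager tcp:<OPENDAYLIGHT IP>:6640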

If everything went OK, you can see 4 switches in OpenDaylight: 3 br-int and 1 br-ex.

 

4. Configure ml2_conf.ini in all Nodes

Controller Node

Edit /etc/neutron/plugins/ml2/ml2_conf.ini and put in the following configuration.
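A sketch of the relevant sections, assuming GRE tenant networks and OpenDaylight's northbound API on port 8080 with the default admin/admin credentials:

    [ml2]
    type_drivers = flat,gre
    tenant_network_types = gre
    mechanism_drivers = opendaylight

    [ml2_type_gre]
    tunnel_id_ranges = 1:1000

    [securitygroup]
    enable_security_group = True
    enable_ipset = True

    [ml2_odl]
    username = admin
    password = admin
    url = http://<OPENDAYLIGHT IP>:8080/controller/nb/v2/neutron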

Network Node

Edit /etc/neutron/plugins/ml2/ml2_conf.ini and put in the following configuration.
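Under the same assumptions as the controller sketch above, the same [ml2], [ml2_type_gre], [securitygroup] and [ml2_odl] sections apply here; in particular, mechanism_drivers and the [ml2_odl] url must point at the same OpenDaylight instance.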

Compute Nodes

Edit /etc/neutron/plugins/ml2/ml2_conf.ini and put in the following configuration.
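Again, presumably the same sections as in the controller sketch above; all nodes must agree on the opendaylight mechanism driver and the GRE tenant network type.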

 

5. Configure Neutron Database in the Controller Node

Reset the neutron database so that it can be configured with OpenDaylight.
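A minimal sketch, assuming the MySQL database and the neutron database user created by the official install guide (replace <NEUTRON DB PASSWORD> with your own):

    mysql -u root -p
    mysql> drop database neutron;
    mysql> create database neutron;
    mysql> grant all privileges on neutron.* to 'neutron'@'localhost' identified by '<NEUTRON DB PASSWORD>';
    mysql> grant all privileges on neutron.* to 'neutron'@'%' identified by '<NEUTRON DB PASSWORD>';
    mysql> exit

    # rebuild the schema for Juno
    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron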

If everything completes without errors, you can start the neutron-server.
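On Ubuntu 14.04:

    service neutron-server start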

 

6. Create Initial Networks in the Controller Node
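
The networks can be created with the standard Neutron CLI. A minimal sketch following the initial-networks step of the official Juno guide, using example addresses for the external allocation pool and the demo tenant subnet (adjust them to your site):

    source admin-openrc.sh
    # external network and subnet (admin tenant)
    neutron net-create ext-net --router:external True \
      --provider:physical_network external --provider:network_type flat
    neutron subnet-create ext-net --name ext-subnet \
      --allocation-pool start=203.0.113.101,end=203.0.113.200 \
      --disable-dhcp --gateway 203.0.113.1 203.0.113.0/24

    source demo-openrc.sh
    # tenant network, subnet and router (demo tenant)
    neutron net-create demo-net
    neutron subnet-create demo-net --name demo-subnet \
      --gateway 192.168.1.1 192.168.1.0/24
    neutron router-create demo-router
    neutron router-interface-add demo-router demo-subnet
    neutron router-gateway-set demo-router ext-net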

 

7. Launch Instances in the Controller Node

Get preferred <HYPERVISOR NAME> from the command below.
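For example (admin credentials required):

    nova hypervisor-list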

Get demo <NETWORK ID> from the command below.
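For example, using the demo credentials:

    source demo-openrc.sh
    neutron net-list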

Get <IMAGE NAME> from the command below.
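For example:

    nova image-list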

Launch the instances!
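A sketch of the boot command; the m1.tiny flavor and the nova:<HYPERVISOR NAME> availability-zone trick (which pins each VM to a specific host so that tunnels between hosts are exercised) are assumptions you may adapt:

    nova boot --flavor m1.tiny --image <IMAGE NAME> --nic net-id=<NETWORK ID> \
      --availability-zone nova:<HYPERVISOR NAME> <INSTANCE NAME>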

 

8. Verify Everything

If everything works correctly, you will be able to ping every VM.

Also, you should be able to see the GRE tunnels in the output of ovs-vsctl show on each node.
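For example, on each compute and network node (the exact port names are generated by OpenDaylight and will differ per deployment):

    # br-int should carry interfaces of type gre pointing at the other nodes' tunnel IPs
    ovs-vsctl show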

 

9. Troubleshooting

If networking between VMs is not working after a while, try restarting Open vSwitch on the Network and Compute Nodes.
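On Ubuntu 14.04:

    service openvswitch-switch restart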


10. Resources

 

Acknowledgments

This work was done in the frame of the FP7 T-NOVA EU project.