OpenStack Juno – OpenDaylight Helium SR2 integration over Ubuntu 14.04 (LTS) using GRE Tunnels
This guide describes in detail the steps needed to integrate OpenStack Juno (using the Neutron ML2 networking plugin) with OpenDaylight Helium SR2 over GRE tunnels. Be careful to replace <SOMETHING HERE> placeholders with the appropriate values.
Also, it is important to know that one OpenDaylight instance manages only one OpenStack deployment.
The guide consists of 10 sections.
- Prerequisites
- Erase all instances, networks, routers and ports in the Controller Node
- Configure OpenvSwitches in Network and Compute Nodes
- Configure ml2_conf.ini in all Nodes
- Configure Neutron Database in the Controller Node
- Create Initial Networks in the Controller Node
- Launch Instances in the Controller Node
- Verify Everything
- Troubleshooting
- Resources
If you need help, you can contact me at chsakkas@iit.demokritos.gr
1. Prerequisites
You must have a working OpenStack Juno deployment on Ubuntu 14.04 (LTS). To install it, use the official guide provided by the OpenStack community (see Resources). It is mandatory to complete everything up to and including Chapter 6 (Networking component with Neutron). Installing Chapter 7 (the Horizon dashboard) is recommended.
The networks required are:
- Management network 10.0.0.0/24
- Tunnel Network 10.0.1.0/24
- External Network 203.0.113.0/24
The OpenStack nodes required for this guide are:
- Controller node: Management Network (External Network, if you want public access to the controller)
- Network node: Management Network, External Network
- Compute node 1: Management Network, Tunnel Network
- Compute node 2: Management Network, Tunnel Network
If you have followed the official document you should have them already.
Additionally, you must have OpenDaylight Helium SR2 installed on the Management Network. You MUST install it on a separate machine.
apt-get install openjdk-7-jdk
wget https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/distribution-karaf/0.2.2-Helium-SR2/distribution-karaf-0.2.2-Helium-SR2.zip
unzip distribution-karaf-0.2.2-Helium-SR2.zip
cd distribution-karaf-0.2.2-Helium-SR2
We want OpenDaylight to speak OpenFlow 1.3.
Edit etc/custom.properties and uncomment the line ovsdb.of.version=1.3.
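If you prefer to script this step, a one-liner along these lines should work (assuming the line ships commented out with a leading #, as in the default file):

# hedged sketch: uncomment ovsdb.of.version=1.3 in etc/custom.properties
sed -i 's/^#[ ]*ovsdb.of.version=1.3/ovsdb.of.version=1.3/' etc/custom.properties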
Then start OpenDaylight and connect to the console.
./bin/start
# wait a few seconds for the controller to start
./bin/client
Now you are connected to OpenDaylight’s console. Install all the required features.
feature:install odl-base-all odl-aaa-authn odl-restconf odl-nsf-all odl-adsal-northbound odl-mdsal-apidocs odl-ovsdb-openstack odl-ovsdb-northbound odl-dlux-core
Wait for the feature installation to finish.
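If you want to double-check from the same console, feature:list -i prints only the installed features; for example:

feature:list -i | grep ovsdb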
To verify that everything is working, use the following command. An empty network list should be returned.
curl -u admin:admin http://<OPENDAYLIGHT MANAGEMENT IP>:8080/controller/nb/v2/neutron/networks
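On a fresh controller the response should be an empty list, something like:

{"networks" : [ ]}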
If you want to monitor OpenDaylight, there are two log files.
tail -f data/log/karaf.log
tail -f logs/web_access_log_2015-02.txt
2. Erase all instances, networks, routers and ports in the Controller Node
You must delete all existing instances, networks, routers and ports from all tenants. The default installation has the admin and demo tenants.
If you want, you can do it from the Horizon dashboard, or use the following commands.
source admin-openrc
nova list
nova delete <INSTANCE ID>
neutron port-list
neutron port-delete <PORT ID>
neutron router-list
neutron router-gateway-clear <ROUTER ID>
neutron router-delete <ROUTER ID>
neutron net-list
neutron net-delete <NETWORK ID>
Repeat the same steps with demo-openrc.
If some ports cannot be deleted, remove them directly from the database:
mysql -uroot -p
use neutron;
delete from ports;
exit
Verify that everything is empty.
nova list
neutron port-list
neutron router-list
neutron net-list
Stop the neutron-server service for the duration of the configuration.
service neutron-server stop
A message saying that neutron-server has stopped should appear. If not, run the command again to make sure it is stopped.
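On Ubuntu 14.04 neutron-server is managed by Upstart, so you can also query its state directly (the exact wording of the output may vary):

service neutron-server status
# expected something like: neutron-server stop/waiting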
3. Configure OpenvSwitches in Network and Compute Nodes
The Neutron Open vSwitch agent on every node must be removed (or stopped and disabled), because only OpenDaylight will be controlling the switches.
apt-get purge neutron-plugin-openvswitch-agent
Clear the Open vSwitch database and start the service again.
service openvswitch-switch stop
rm -rf /var/log/openvswitch/*
rm -rf /etc/openvswitch/conf.db
service openvswitch-switch start
ovs-vsctl show
The last command must return an empty Open vSwitch configuration: you should see only the <OPENVSWITCH ID> and the version.
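For reference, the expected output looks roughly like this (your UUID and version will differ; 2.0.x is what Ubuntu 14.04 typically ships):

<OPENVSWITCH ID>
    ovs_version: "2.0.2"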
Use the following command to configure tunnel end-points.
ovs-vsctl set Open_vSwitch <OPENVSWITCH ID> other_config={'local_ip'='<TUNNEL INTERFACE IP>'}
Nothing will appear if this command is entered correctly. To verify the configuration you can use:
ovs-vsctl list Open_vSwitch
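In the output, check that the other_config column contains the tunnel IP you just set, e.g.:

other_config        : {local_ip="<TUNNEL INTERFACE IP>"}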
ONLY NETWORK NODE SECTION START
Create the br-ex bridge that is needed for the OpenStack external network.
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex <INTERFACE NAME OF EXTERNAL NETWORK>
ONLY NETWORK NODE SECTION END
Connect every Open vSwitch to the OpenDaylight controller.
ovs-vsctl set-manager tcp:<OPENDAYLIGHT MANAGEMENT IP>:6640
If everything went OK, you can see 4 switches in OpenDaylight: 3 br-int (one per node) and 1 br-ex.
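On each node, ovs-vsctl show should now also report the manager connection; the relevant lines look roughly like this (OpenDaylight creates br-int automatically, and the OpenFlow controller entry appears once the bridge is managed):

Manager "tcp:<OPENDAYLIGHT MANAGEMENT IP>:6640"
    is_connected: true
Bridge br-int
    Controller "tcp:<OPENDAYLIGHT MANAGEMENT IP>:6633"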
4. Configure ml2_conf.ini in all Nodes
Controller Node
Edit /etc/neutron/plugins/ml2/ml2_conf.ini and put the following configuration.
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = opendaylight

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ml2_odl]
password = admin
username = admin
url = http://<OPENDAYLIGHT MANAGEMENT IP>:8080/controller/nb/v2/neutron
Network Node
Edit /etc/neutron/plugins/ml2/ml2_conf.ini and put the following configuration.
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = opendaylight

[ml2_type_flat]
flat_networks = external

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
local_ip = <TUNNEL INTERFACE IP>
enable_tunneling = True
bridge_mappings = external:br-ex

[agent]
tunnel_types = gre

[ml2_odl]
password = admin
username = admin
url = http://<OPENDAYLIGHT MANAGEMENT IP>:8080/controller/nb/v2/neutron
Compute Nodes
Edit /etc/neutron/plugins/ml2/ml2_conf.ini and put the following configuration.
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = opendaylight

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
local_ip = <TUNNEL INTERFACE IP>
enable_tunneling = True

[agent]
tunnel_types = gre

[ml2_odl]
password = admin
username = admin
url = http://<OPENDAYLIGHT MANAGEMENT IP>:8080/controller/nb/v2/neutron
5. Configure Neutron Database in the Controller Node
Reset the Neutron database so that it is set up cleanly for OpenDaylight.
mysql -uroot -p
drop database neutron;
create database neutron;
grant all privileges on neutron.* to 'neutron'@'localhost' identified by '<YOUR NEUTRON PASSWORD>';
grant all privileges on neutron.* to 'neutron'@'%' identified by '<YOUR NEUTRON PASSWORD>';
exit
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron
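As an optional sanity check, confirm that the migration actually created the Neutron tables:

mysql -uroot -p -e 'show tables;' neutron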
If everything completed without errors, you can start neutron-server.
service neutron-server start
6. Create Initial Networks in the Controller Node
source admin-openrc
neutron net-create ext-net --router:external True --provider:physical_network external --provider:network_type flat
neutron subnet-create ext-net --name ext-subnet --allocation-pool start=<FLOATING EXTERNAL IP START>,end=<FLOATING EXTERNAL IP END> --disable-dhcp --gateway <EXTERNAL NETWORK GATEWAY> <EXTERNAL NETWORK CIDR>
source demo-openrc
neutron net-create demo-net
neutron subnet-create demo-net --name demo-subnet --gateway 192.168.6.1 192.168.6.0/24
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
neutron router-gateway-set demo-router ext-net
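You can verify the result with the commands below; if the integration is working, the network list returned by the OpenDaylight REST call from Section 1 should no longer be empty:

neutron net-list
neutron router-list
curl -u admin:admin http://<OPENDAYLIGHT MANAGEMENT IP>:8080/controller/nb/v2/neutron/networks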
7. Launch Instances in the Controller Node
Get your preferred <HYPERVISOR NAME> from the command below.
source admin-openrc
nova hypervisor-list
Get demo <NETWORK ID> from the command below.
source demo-openrc
neutron net-list
Get <IMAGE NAME> from the command below.
nova image-list
Launch the instances!
nova boot --flavor m1.tiny --image <IMAGE NAME> --nic net-id=<NETWORK ID> test1 --availability_zone=nova:<HYPERVISOR NAME>
nova boot --flavor m1.tiny --image <IMAGE NAME> --nic net-id=<NETWORK ID> test2 --availability_zone=nova:<HYPERVISOR NAME>
nova boot --flavor m1.tiny --image <IMAGE NAME> --nic net-id=<NETWORK ID> test3 --availability_zone=nova:<HYPERVISOR NAME>
nova boot --flavor m1.tiny --image <IMAGE NAME> --nic net-id=<NETWORK ID> test4 --availability_zone=nova:<HYPERVISOR NAME>
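Watch the instances boot; all four should eventually reach the ACTIVE state:

nova list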
8. Verify Everything
If everything works correctly, you will be able to ping every VM.
You should also be able to see the GRE tunnels in the output of ovs-vsctl show on each node.
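For reference, on a Compute node each GRE tunnel shows up as a port on br-int, roughly like this (port names and IPs will differ):

Port "gre-<SOMETHING>"
    Interface "gre-<SOMETHING>"
        type: gre
        options: {key=flow, local_ip="<THIS NODE TUNNEL IP>", remote_ip="<REMOTE NODE TUNNEL IP>"}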
9. Troubleshooting
If networking between VMs stops working after a while, try restarting Open vSwitch on the Network and Compute Nodes.
service openvswitch-switch restart
10. Resources
- http://docs.openstack.org/juno/install-guide/install/apt/openstack-install-guide-apt-juno.pdf
- https://wiki.opendaylight.org/view/OpenStack_and_OpenDaylight
- http://www.opendaylight.org/software/downloads
Acknowledgments
This work was done in the frame of the FP7 T-NOVA EU project.