Comparing SDN Controllers: Open Daylight and ONOS

Overview

Over the last few months, we have been testing the SDN controllers Open Daylight Helium SR3 (mostly via the Brocade Vyatta Controller v1.2) and ONOS Cardinal v1.2.  In this initial article we will start to compare the controllers, focusing on scale, specifically the number of switches that can be handled, using OpenFlow 1.0 and 1.3 switches emulated via IXIA as well as physical Pica8 switches.

Note: In the latest version of ONOS (Cardinal) v1.2, there is an issue with ONOS handling emulated OpenFlow v1.3 from IXIA, so all scale testing in ONOS was done using OpenFlow 1.0.  Also, in ONOS the term “node” refers to a copy of ONOS (we run two nodes in our tests), while in Open Daylight the term “node” refers to an OpenFlow switch.

User Interface

One of the main differences between ONOS and Open Daylight/BVC is in the controls and information available directly from the Graphical User Interface (GUI).

ONOS

The ONOS GUI has multiple panes including Summary, Node(s) and Controls.

ONOS GUI With 300 Switches

The ONOS GUI displays end hosts in a well-defined fashion; you can see them spanning out from the switches.

ONOS 96 Hosts Visible

Open Daylight

The default Open Daylight GUI is defined by the features installed and can include features such as a pane to display Nodes, a Yang UI and a Yang Visualizer.

Open Daylight GUI

When attempting to display end hosts, the Open Daylight GUI is not as clean as ONOS, as the hosts are interlaced with the switches.

Open Daylight with 400 Nodes

In the above screen capture of Open Daylight you can see both Nodes (Switches) and Hosts.

Brocade Vyatta Controller

The GUI for the Brocade Vyatta Controller (BVC) is cleaner than the default Open Daylight GUI and in this screenshot has extra modules for the Vyatta vRouter 5600 EMS and the “Path Explorer” application.

Brocade Vyatta Controller GUI (BVC 1.2 with the 5600 EMS app)

The current way of displaying Hosts and Switches in Open Daylight/BVC is not easy to work with, nor does it scale well.

Scale

In scale testing, we started with 100 switches and scaled up to 400 switches, each switch holding 12 hosts.  While Open Daylight (via BVC) was able to scale to 400 switches, ONOS stopped functioning before reaching 400.

Here is BVC with 400 Switches, 800 links and multiple (96 out of 4800) hosts sending traffic to each other.

BVC with 400 Switches and Multiple Hosts

BVC 400 Switches Building

BVC 400 Switches Installing Hosts

Here is ONOS when it has reached capacity and is no longer able to handle the number of switches/links/hosts that are being sent to it.

ONOS With 400 Switches

The above screenshot shows two ONOS nodes with 400 switches, 800 links and 0 hosts (we are attempting to send traffic between 48 hosts).  While the devices (switches) are in the database, the hosts are not, and the GUI has become unstable and no longer shows any information.

Thoughts

Both ONOS and Open Daylight are solid products when acting as SDN controllers with multiple southbound and northbound interfaces.  The testing done here focuses only on OpenFlow and specifically on scale.  The Brocade version of Open Daylight is well packaged and has some nice extras such as the EMS application, which ties in the Brocade Vyatta vRouter 5600.  ONOS continues to focus on providing tools and information in its GUI, and 300 switches is a perfectly reasonable number, certainly more than anyone should put on one or two controllers.

Using the Brocade Vyatta Controller EMS App

As part of our continuing work for NetDEF, we continue to install, set up and test the latest SDN controller tools from companies such as Brocade.

The Brocade Vyatta Controller (BVC) v1.2 is based on OpenDaylight Helium SR2 (the current release is SR3) and comes with certain features enabled that are not enabled by default in OpenDaylight, such as OpenFlow support and a GUI.

For our testing we utilized BVC, a pair of Brocade Vyatta vRouter 5600s, and the Brocade vRouter 5600 EMS App for BVC.

In the diagram you can see that we have attached a 5600 to each of the OpenFlow switches.

We followed the installation documentation from Brocade without issue, first installing BVC 1.2, then adding the Path Explorer app and finally the 5600 EMS app.

Once installed, you can log into the controller on port 9000 and you should see the BVC toolbar on the left-hand side.

Note the addition of the Vyatta vRouter 5600 EMS link.  Clicking on the link brings up the EMS display.

Here you can see that we have already added the two vRouter 5600s shown in our diagram.  The main configuration change we needed to make on the vRouters was to enable NETCONF and add the netconf-ssh port of 830 to the configuration.
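
Enabling the service itself is a small change; the sketch below shows the general shape of the commands (exact option names, including the knob for the netconf-ssh port, vary by vRouter release, so treat these as assumptions rather than our exact configuration):

set service netconf
set service ssh
commit
save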

We then selected the two vRouters (R1 and R2) and clicked on the “Create Tunnel” box and waited for about two minutes while the system built the tunnels.

Looking at the vRouters, we saw that new tunnel configuration had been added.
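
On a Vyatta-family CLI, the relevant piece looks roughly like the following sketch (the peer address is a placeholder and the 5600’s configuration tree may differ slightly; the generated pre-shared-secret is the part worth noting):

set security vpn ipsec site-to-site peer 10.0.0.2 authentication mode pre-shared-secret
set security vpn ipsec site-to-site peer 10.0.0.2 authentication pre-shared-secret vyatta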

Obviously, the pre-shared-secret being “vyatta” is a bit concerning, but since we are aware of it, we can fix it manually.

The EMS app does exactly what it says it will do: configure tunnels between multiple vRouters.

One thing to note is that the EMS app is still limited; for example, it does not allow us to choose which interfaces are used for the tunnels or to set our own pre-shared-secret.  We found the EMS app useful for generating baseline configurations for the vRouters that can then be modified to fit your needs.

Also, just like the standard OpenDaylight dlux GUI, the Brocade GUI appears to still use the D3 force-layout (gravity) JavaScript code to display the network topology, which is pretty but can be hard to work with.

Thank you as always to Brocade and especially Lisa Caywood for being our contact and providing the software (BVC, vRouter 5600’s) necessary to do the testing.

Installing and Using Distributed ONOS

In our work with SDN controllers we concentrate on a few, including ON.Lab’s ONOS, the OpenDaylight Project and Ryu.  In this post we will discuss setting up a distributed ONOS cluster.

First things first, the ONOS wiki pages give a good overview of the requirements:

  • Ubuntu Server 14.04 LTS 64-bit
  • 2GB or more RAM
  • 2 or more processors
  • Oracle Java 8 JRE and Maven installed

For our setup, we used Ubuntu 14.04.2 (the latest at the time) installed as an OpenSSH server, and we installed Java and Maven following the instructions on the ONOS wiki:
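
At the time, the Ubuntu side of that boiled down to roughly the following (the webupd8team PPA was the common route for Oracle Java 8 on Ubuntu 14.04 back then; verify against the current wiki before copying):

sudo apt-get install software-properties-common -y
sudo add-apt-repository ppa:webupd8team/java -y
sudo apt-get update
sudo apt-get install oracle-java8-installer oracle-java8-set-default -y
sudo apt-get install maven -y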

We then installed the required packages on an OS X VM:

  • Java 8 JDK
  • Apache Maven (3.0 and later)
  • git
  • Apache Karaf (3.0.3 and later)

Once the dependencies were installed, we were able to grab the code from the ONOS GitHub page and build it with the “mvn clean install” command, following the directions shown on the ONOS wiki.
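
A minimal sketch of the fetch-and-build steps (repository URL as used for ONOS at the time):

git clone https://github.com/opennetworkinglab/onos.git
cd onos
mvn clean install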

Once built, the process to install the software and get it working properly is not perfectly documented; we ran into websocket failures that turned out to be caused by the ONOS systems not being set up properly.  All of the information is on the ONOS wiki, but it is spread across different pages.  Here is what we found worked:
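
As a rough sketch, we used the packaging and install helpers that ship with the ONOS source tree; the variable names and flags below are from memory and should be checked against the wiki rather than taken as the exact commands:

# Run on the build machine; ONOS ends up in /opt/onos on each target server
. ~/onos/tools/dev/bash_profile
export OC1=192.168.21.129
export OC2=192.168.21.130
onos-package            # package the mvn build into an installable onos.tar.gz
onos-install -f $OC1    # push and install on the first node
onos-install -f $OC2    # push and install on the second node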

Once the install finished, we were able to access the ONOS installations at http://192.168.21.129:8181/onos/ui/index.html and http://192.168.21.130:8181/onos/ui/index.html

Distributed ONOS

Once up, we needed to modify a few files on each server and restart the servers:

  • /opt/onos/config/cluster.json
  • /opt/onos/config/tablets.json

Since ONOS will restart automatically, you simply need to do a system:shutdown.

By default ONOS uses 192.168.56.* as the IP address block, so we had to modify the files to match our setup.
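
A quick way to do that across both files (a sketch; adjust the addresses to your own subnet):

sudo sed -i 's/192\.168\.56\./192.168.21./g' /opt/onos/config/cluster.json /opt/onos/config/tablets.json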

On the switches, we set the controllers:
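
A sketch of the command, using our controller addresses, a bridge named br0 and the standard OpenFlow port:

ovs-vsctl set-controller br0 tcp:192.168.21.129:6633 tcp:192.168.21.130:6633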

And we were then able to ping between our physical hosts connected to ports 1 and 2 on each switch.

Using The Brocade Vyatta Controller – Part 1

As part of NetDEF, I’ve been working with different SDN controllers, including the Brocade Vyatta Controller v1.1.1 (BVC), the OpenDaylight Controller (Helium release) and the ONOS v1.0 Controller.  Of the three, the Brocade Controller has been the most user-friendly and straightforward.

To install the Brocade Vyatta Controller, simply sign up, download, read the quick guide and follow the instructions.  As Lisa Caywood points out in her blog post, there is even a nice video “Install Brocade Vyatta Controller” with links to the files needed to install BVC.

The Setup

BVC Test Setup Diagram

For my testing, I used an Ubuntu 14.04 Server VM with 6 GB RAM and a 32 GB disk to run BVC.  For OpenFlow switches I used a pair of Pica8 3290s running PicOS v2.5 in crossflow mode.  For end hosts I used four Linux VMs and eight IXIA ports.  The BVC was connected to the switches via the management network.

Testing

My first test was to ping between VM1-VM4, which showed the correct information in the BVC topology screen:

BVC 4 Hosts

Next, I installed the BVC Path Explorer (the installation was simple and went as shown in the documentation).  I added a few paths, including one that crossed switches, and everything worked as expected.

BVC Path Explorer

Once I had everything working as expected with four hosts, I added a few more (about 84).

BVC Zoom Out 80 Hosts

The BVC had no issue adding all of the hosts and letting us interact with them.

BVC Close up 80 Nodes

I also did some testing using Postman (a Chrome REST API plugin).  Thanks to Keith Burns, who pointed this tool out to me.

The output of the GET topology call comes back neatly formatted in JSON.

The OpenDaylight inventory API call returns the node inventory, showing some of my hosts.
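
The same calls can be made from the command line with curl; a sketch assuming the OpenDaylight defaults of port 8181 and admin/admin credentials (BVC’s port and credentials may differ):

curl -u admin:admin http://<controller-ip>:8181/restconf/operational/network-topology:network-topology
curl -u admin:admin http://<controller-ip>:8181/restconf/operational/opendaylight-inventory:nodes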

While I am just starting my testing and plan to do more extensive work utilizing the Vyatta vRouter connector, IXIA OpenFlow tester and other tools/add-ons, I am impressed with the release of BVC 1.1.1.  The software and tools appear to be reasonably stable while the documentation is clear and professional.

Can an OpenFlow Switch Replace a Tap Aggregator?

I had an interesting conversation during lunch about the future of network taps and aggregators now that OpenFlow switches can do many of the same types of operations.  In my testing I have used Pica8 switches to replicate traffic, lots of traffic, using static OpenFlow commands.  For example, here is a design where I take 10G of traffic and mirror it across five ports.

OpenFlow Tap Aggregation 1 to 5

Here is the configuration I use on a Pica8 3922 to replicate a 10G stream to five ports.

This configuration assumes that you have already configured the switch to run in OpenFlow mode.

First we set up a new bridge, br0, and add the interfaces (1-6) to it.

# Add Bridge br0 - for PCAP Replication - 1st Port
##############################
# te-1/1/1 is input; te-1/1/2, te-1/1/3, te-1/1/4, te-1/1/5, te-1/1/6 are output
#-------------------------------------------------------------------------------------
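# Note: $VSCTL is assumed to point at the PicOS ovs-vsctl binary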
$VSCTL add-br br0 -- set bridge br0 datapath_type=pica8 other-config=datapath-id=100
$VSCTL add-port br0 te-1/1/1 -- set interface te-1/1/1 type=pica8
$VSCTL add-port br0 te-1/1/2 -- set interface te-1/1/2 type=pica8
$VSCTL add-port br0 te-1/1/3 -- set interface te-1/1/3 type=pica8
$VSCTL add-port br0 te-1/1/4 -- set interface te-1/1/4 type=pica8
$VSCTL add-port br0 te-1/1/5 -- set interface te-1/1/5 type=pica8
$VSCTL add-port br0 te-1/1/6 -- set interface te-1/1/6 type=pica8

Next we remove the default flow so that we can program the ports specifically.

# Remove Default Flow (not treating this as HUB!)
ovs-ofctl del-flows br0
# Add replication flow 1 -> 2,3,4,5,6
ovs-ofctl add-flow br0 in_port=1,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=output:2,3,4,5,6

Finally, we drop all of the ingress traffic on the ports that the mirrored traffic is going out of.

# Drop ingress traffic from mirror ports
ovs-ofctl add-flow br0 in_port=2,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br0 in_port=3,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br0 in_port=4,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br0 in_port=5,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br0 in_port=6,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop

This same configuration can be extended to include more ports, repeated to mirror different traffic to other ports, etc.
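
For example, a second stream entering port 7 could be mirrored out ports 8 and 9 with the same pattern (ports 7-9 are hypothetical here, not part of the setup above):

# Replicate a second input stream on port 7 out ports 8 and 9
ovs-ofctl add-flow br0 in_port=7,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=output:8,9
# Drop ingress traffic from the new mirror ports
ovs-ofctl add-flow br0 in_port=8,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br0 in_port=9,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop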

In the next post, I will cover filtering traffic to better control egress data.

Testing Vyatta 6.5 R1 Under VMWare – Preliminary Results

Testing Vyatta Subscription Edition 6.5 R1

We here at SDN Testing have been working with our parent company, Router Analysis, Inc., to test Vyatta Subscription Edition 6.5 R1 under VMWare.  Testing of Vyatta on hardware can be found on the Router Analysis site.

For the VMWare setup we ran VMWare Hypervisor v5.1 on our spec setup as built by IXSystems:

  • SuperMicro X9SAE-V
  • Intel i7-3770 / 32 GB ECC RAM
  • Four Intel I340-T2 NICs (8 ports total)
  • Intel 520 Series 240 GB SSD

We set up one of the Intel I340-T2 NICs using VMDirectPath I/O passthrough and used it for the upstream ports to the network.  The other three NICs were set up using VMWare's default vSwitch configuration, with each port in its own vSwitch.  Those six ports were connected to another system generating packets, while two VMs were created on the local machine to bring the total number of tenants to eight.

Vyatta was given a VM with 2 vCPUs and 4 GB of RAM.

The following diagram shows the setup.

From previous tests, which will be included in the full report, we knew that each vSwitch port can forward 23% line-rate IMIX traffic (tri-modal: 7 x 64 bytes, 4 x 512 bytes, 1 x 1518 bytes) when 8 ports are in use.

Therefore each tenant was configured to send 234 Mbps of traffic (roughly 23% of a 1 Gbps port) outbound through the two uplink ports, for a total of about 1.9 Gbps across the eight tenants.

The next steps were to configure features:

uRPF, ACLs and QoS (Shaping and Policing)
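
As a rough illustration of the kind of per-tenant configuration involved (Vyatta 6.5-style syntax from memory; the interface, names and values here are placeholders, not our exact test configuration):

set interfaces ethernet eth1 ip source-validation strict
set firewall name TENANT-IN rule 10 action accept
set firewall name TENANT-IN rule 10 protocol tcp
set interfaces ethernet eth1 firewall in name TENANT-IN
set traffic-policy shaper TENANT-SHAPE bandwidth 250mbit
set interfaces ethernet eth1 traffic-policy out TENANT-SHAPE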

There was no impact from the features; the traffic limitation appeared to have nothing to do with Vyatta and more to do with the VMWare setup.  We were able to send IMIX traffic without issue out the two uplink interfaces at 1.9 Gbps total.

From our testing, we have concluded that Vyatta Subscription Edition 6.5 R1 behaves as expected when used as a multi-tenant virtual router, easily supporting the traffic and features needed in that role.

Note: We are planning to test the different vSwitches available for VMWare in the future; if we find one that behaves better, we will re-run these tests.

Evaluating Midokura’s MidoNet Solution

This article is the first in a set of articles that will walk through the evaluation of Midokura’s MidoNet product.  In this first article we will discuss Midokura’s solution: what it is made of, how it works, and what expectations have been set with regard to performance and the problem it aims to solve.

An Overview of Midokura:

SDN startup Midokura launched this week at the OpenStack conference in San Diego and has gained a lot of attention in the last week or two.  If you are looking for a good write-up on Midokura’s MidoNet solution, check out Shamus McGillicuddy’s article on TechTarget called “Midokura network virtualization”.

Ivan Pepelnjak did a nice technical write-up back in August titled “Midokura’s Midonet Layer 2-4 Virtual Network Solution”.

The Testing View:

In order to build a complete test plan we must first understand the parts of the solution.

By reading articles and looking at the Midokura website, we came up with a good idea of what Midokura was doing.

Midokura MidoNet offers a way for companies to solve complex network scaling issues using commodity PC hardware.  MidoNet creates a fully-meshed overlay network on top of an existing IP network.

MidoNet is a collection of different components:

  • The MidoNet Agent, which runs on each node, fully processing packets as they enter the network and making sure they are delivered to the right host.
  • A routing component to handle L3 packets (something Nicira’s solution does not offer)
  • A fast, distributed database to keep all of the flow, forwarding, filtering and other data needed to create the virtual topology.
  • Tools to keep everything in synchronization and flowing.

Once we finished the paper evaluation, we reached out to Midokura to get more specific information.

Discussion with Midokura:

We were lucky to be able to grab about 30 minutes of Ben Cherian’s (CSO of Midokura) time on Friday night, only two days before the 2012 OpenStack conference.  As our ultimate goal is to test the MidoNet product, we need a good understanding of MidoNet.  From our discussion we learned the following:

Lab tests have shown a fully utilized 10GE interface driven by a single core of a multi-core processor.

In MidoNet, Midokura uses some well-known open-source products.  For routing, Midokura uses Quagga; for the database, Apache Cassandra.  The underlying vSwitch is Open vSwitch, and Apache ZooKeeper is used to keep everything in sync.

GRE was given as an example of the virtual network overlay, as it has been tested and is known to work.  Other protocols could be, and probably are, supported but have not had enough lab time to be called supported.

Evaluating MidoNet Part 1:

As we break down MidoNet’s design we can surmise that as new flows enter the network, the lookup on the first packet should take the longest. Once the first packets have been processed and a flow created, other packets matching the same flow should have a lower latency.

As all inbound packets must pass through the edge, the test plan should look at the time it takes for new packet flows to be inspected, looked up and installed into the flow table. This can be done by measuring the latency of packets crossing the system and comparing the first packet to the later ones once the latency has stabilized. Initially no features should be enabled.
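
A crude first pass at this (just a sketch; the destination address is hypothetical) is to compare the first RTT, which includes the edge lookup and flow installation, against the steady-state RTTs that follow:

# The first reply includes the flow setup time; later replies should settle lower
ping -c 20 10.1.1.20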

Once a base measurement has been taken and verified, features such as packet filtering should be enabled.  The features should be applied and extended; for example, packet filtering would start with simple IP destination address filtering.  IP source address, source port, destination port, window size and other knobs should then be enabled, with measurements taken to see how each new addition affects the lookup time and possibly the forwarding performance.

After the base and IP filtering tests are complete you can evaluate the results to build more specific test cases that push MidoNet.

In the next article we will talk more about the MidoNet design, what we know about its underlying components, and how to test scale.

The Vyatta Cloud Router Story

Vyatta and their approach to Cloud Routers

A few weeks ago I had the pleasure of speaking with Scott Sneddon, Cloud Solutions Architect at Vyatta Inc.  I’ve known Scott since the late 1990’s when he and I both worked for Exodus Communications.

Vyatta is one of the few full-featured software-based routing vendors in the market today.  Their product is a mix of open-source and proprietary software, combined to create a router that can not only live in the cloud but will, in the future, be able to use hardware such as Intel’s Sandy Bridge (and later-generation Ivy Bridge) processors as network processors.

Network Processors are key to hardware-forwarding routers such as the Juniper T series and the Cisco Carrier Routing System, allowing them to perform forwarding and features at line rate, something that routers using software-based forwarding struggle with.  To get a better picture of software vs hardware forwarding you can read Router Analysis’ Enterprise Edge Router Upgrade Guide, where I discuss the Cisco 7200, which uses a software forwarding engine, and compare it with higher-performance routers with hardware forwarding capabilities.

Vyatta offers a full-featured router solution by including VPN, firewall and other features normally found in hardware-locked solutions in their software product.  I feel that Vyatta has a jump on other vendors in the true Virtual Data Center space.  One of the most important parts of the Virtual Data Center is the router and its ability to perform equal to or better than the hardware-based router it is replacing.  Using software forwarding alone, Vyatta claims to be able to handle up to 2 Mpps, which depending on packet size can easily be multiple gigabits of traffic.  In testing, Vyatta is seeing up to 11 Mpps using an Intel Sandy Bridge processor as a network processor.

A quick note about integrated firewalls: While software firewalls contained within the same hardware as the routers, switches and/or hosts are very useful, they are not a replacement for hardware firewalls.  In security (which I do not claim to be an expert at) the separation of networks using physical links is key.  There is some great information available in this thread on the Cisco support forums where they are discussing the ASA 1000V.

Vyatta keeps a tight relationship with the OpenSource community by hosting Vyatta.org where you can find free versions of Vyatta’s Core Software along with community support, documentation and forums.

SDN Testing, the software defined side of Router Analysis plans to put the Vyatta product through rigorous testing in the coming weeks.

What is SDN and What are we Testing?

SDN stands for “Software Defined Network”, a simple name with thousands of different meanings.  As defined by Wikipedia, “SDN separates the control plane from the data plane in network switches and routers. Under SDN, the control plane is implemented in software in servers separate from the network equipment and the data plane is implemented in commodity network equipment.”

The most important aspect of SDN is the separation of the control and data planes.  This decoupling allows end users (Service Providers, Enterprises) to use commodity hardware to build and expand their networks.

Some of the major players in the SDN space are Vyatta, Cisco, Big Switch and Nicira.  Nicira was recently purchased by VMWare and is being merged into VMWare’s core product offerings.

Vyatta offers a Quagga-based software router with firewall and VPN support.  I recently talked with Vyatta and came away impressed with their vision and commitment to the Open Source community.  Vyatta is currently the top player when it comes to software-defined routers.

Cisco offers many of the parts needed to create SDNs but some parts have not been released yet.  Cisco has released the Nexus 1000v software switch, the ASA 1000v software firewall and has announced the CSR 1000v IOS XE based software router.  Once Cisco gets the full solution out, they have a chance to leapfrog over the competition due to their history and ability to re-use their current software features.

Big Switch offers an SDN controller.  Currently they are offering Floodlight, an Open Source version of their product, with promises of a commercial version coming soon.

The last company I will cover is Nicira.  Nicira provides what they call NVP, the Nicira Virtualization Platform.  They combine their software with the Open Source Open vSwitch to provide a fully software-controlled and software-forwarded network.

Testing the Virtual Datacenter

One of the goals of SDN Testing is to test SDN concepts such as the Virtual Datacenter Replacement.  A true Virtual Data Center consists of a few pieces:

  • A software-based router with the ability to connect multiple uplinks and use a routing protocol to load-balance between them.
  • A software-based switch with industry-standard features and a good control interface.
  • A software-based firewall with the ability to safely separate the front-end and back-end systems.
  • The front-end and back-end systems themselves.