Comparing SDN Controllers: OpenDaylight and ONOS


Over the last few months, we have been testing the SDN controllers OpenDaylight Helium SR3 (mostly via the Brocade Vyatta Controller v1.2) and ONOS Cardinal v1.2.  In this initial article we begin comparing the controllers, focusing on scale, specifically the number of switches each can handle, using both OpenFlow 1.0 and 1.3 switches emulated via IXIA as well as physical Pica8 switches.

Note: In the latest version of ONOS (Cardinal v1.2), there is an issue with ONOS handling emulated OpenFlow 1.3 from IXIA, so all ONOS scale testing was done using OpenFlow 1.0.  Also, in ONOS the term “node” refers to a copy of ONOS (we run two nodes in our tests), while in OpenDaylight the term “node” refers to an OpenFlow switch.

User Interface

One of the main differences between ONOS and OpenDaylight/BVC is in the controls and information available directly from the Graphical User Interface (GUI).


The ONOS GUI has multiple panes including Summary, Node(s) and Controls.

ONOS 1.2

ONOS GUI With 300 Switches

The ONOS GUI displays end hosts in a well-defined fashion; you can see them fanning out from the switches.


ONOS 96 Hosts Visible

OpenDaylight

The default OpenDaylight GUI is defined by the features installed and can include panes such as Nodes, the Yang UI and the Yang Visualizer.
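For reference, in a stock OpenDaylight Helium install these GUI panes only appear once the corresponding Karaf features are installed. A typical sequence looks like the following (feature names are from the Helium era and may differ in other releases):

```shell
# From the OpenDaylight Karaf console -- Helium-era feature names,
# verify against your distribution with "feature:list".
feature:install odl-restconf odl-l2switch-switch odl-dlux-core
```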

OpenDaylight

OpenDaylight GUI

When displaying end hosts, the OpenDaylight GUI is not as clean as ONOS's, as the hosts are interlaced with the switches.

ODL Nodes

OpenDaylight with 400 Nodes

In the above screen capture of OpenDaylight, you can see both nodes (switches) and hosts.

Brocade Vyatta Controller

The GUI for the Brocade Vyatta Controller (BVC) is cleaner than the default OpenDaylight GUI and, in this screenshot, includes extra modules for the Vyatta vRouter 5600 EMS and the “Path Explorer” application.

BVC 1.2 With EMS 5600

Brocade Vyatta Controller GUI

The current way of displaying hosts and switches in OpenDaylight/BVC is neither easy to work with nor does it scale well.


In scale testing, we started with 100 switches and scaled up to 400, with each switch holding 12 hosts.  While OpenDaylight (via BVC) was able to scale to 400 switches, ONOS stopped functioning before reaching 400.

Here is BVC with 400 switches, 800 links and multiple hosts (96 of the 4,800) sending traffic to each other.

BVC with 400 Switches

BVC with 400 Switches and Multiple Hosts

BVC 400 Switches Building

BVC 400 Switches Installing Hosts

Here is ONOS when it has reached capacity and can no longer handle the number of switches, links and hosts being sent to it.

ONOS With 400 Switches

The above screenshot shows two ONOS nodes with 400 switches, 800 links and 0 hosts (we are attempting to send traffic between 48 hosts).  While the devices (switches) are in the database, the hosts are not, and the GUI has become unstable and no longer shows any information.


Both ONOS and OpenDaylight are solid products when acting as SDN controllers with multiple southbound and northbound interfaces.  The testing done here focuses only on OpenFlow and specifically on scale.  The Brocade version of OpenDaylight is well packaged and has some nice extras, such as the EMS application, which ties in the Brocade Vyatta vRouter 5600.  ONOS continues to focus on providing tools and information in its GUI, and 300 switches is a perfectly reasonable number, certainly more than anyone should put on one or two controllers.

Using the Brocade Vyatta Controller EMS App

As part of our continuing work for NetDEF, we continue to install, setup and test the latest SDN controller tools from companies such as Brocade.

The Brocade Vyatta Controller (BVC) v1.2 is based on OpenDaylight Helium SR2 (the current release is SR3) and comes with certain features enabled that are not enabled by default in OpenDaylight, such as OpenFlow and a GUI.

For our testing we utilized BVC, a pair of Brocade Vyatta vRouter 5600s, and the Brocade vRouter 5600 EMS App for BVC.

In the diagram you can see that we have attached a 5600 to each of the OpenFlow switches.

We followed the installation documentation from Brocade without issue, first installing BVC 1.2, then adding the Path Explorer app and finally the 5600 EMS app.

Once installed, you can log into the controller on port 9000, where you should see the following toolbar on the left-hand side.


Note the addition of the Vyatta vRouter 5600 EMS link.  Clicking on the link gives you the following display:

Screenshot 2015-04-27 09.05.39

Here you can see that we have already added the two vRouter 5600s shown in our diagram.  The main configuration change we needed to make on the vRouters was to enable NETCONF and add the NETCONF-over-SSH port of 830 to the configuration.
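The NETCONF enablement amounts to a couple of configuration commands on each vRouter. The sketch below is from memory of the 5600 CLI; the exact node names, particularly the one carrying the port 830 setting, are a guess and should be verified against the vRouter documentation:

```shell
configure
set service netconf            # enable NETCONF
set service ssh                # NETCONF is carried over SSH
# node name below is a guess -- the intent is NETCONF-over-SSH on port 830
set service netconf ssh port 830
commit
```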

We then selected the two vRouters (R1 and R2) and clicked on the “Create Tunnel” box and waited for about two minutes while the system built the tunnels.

Screenshot 2015-04-26 19.09.57


Screenshot 2015-04-26 19.16.38


Looking at the vRouters, we saw the following configuration had been added.

Obviously, the pre-shared secret being “vyatta” is a bit concerning, but since we are aware of it, we can fix it manually.

The EMS app does exactly what it says it will do: configure tunnels between multiple vRouters.

One thing to note is that the EMS app is still limited; for example, it does not allow us to choose which interfaces are used for the tunnel endpoints or to configure a pre-shared secret.  We found the EMS app useful for creating baseline vRouter configurations that can then be modified to fit your needs.

Also, just like the standard OpenDaylight dlux GUI, the Brocade GUI appears to still use the D3 force-layout (gravity) JavaScript code to display the network topology, which is pretty but can be hard to work with.

Thank you as always to Brocade and especially Lisa Caywood for being our contact and providing the software (BVC, vRouter 5600’s) necessary to do the testing.

Installing and Using Distributed ONOS

In our work with SDN controllers we concentrate on a few, including ON.Lab's ONOS, the OpenDaylight Project and RYU.  In this post we discuss setting up a distributed ONOS cluster.

First things first, the ONOS wiki pages give a good overview of the requirements:

  • Ubuntu Server 14.04 LTS 64-bit
  • 2GB or more RAM
  • 2 or more processors
  • Oracle Java 8 JRE and Maven installed

For our setup, we used Ubuntu (the latest at the time) installed as an OpenSSH server, and we installed Java and Maven following the instructions on the ONOS wiki.

We then installed the required packages on an OS X VM:

  • Java 8 JDK
  • Apache Maven (3.0 and later)
  • git
  • Apache Karaf (3.0.3 and later)

Once the dependencies were installed, we were able to grab the code from the ONOS GitHub page and build it with “mvn clean install”, following the directions on the ONOS wiki.
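The fetch-and-build steps can be sketched as follows (the repository URL is the public ONOS GitHub location; the branch or tag to check out may vary by release):

```shell
# Clone ONOS and build it with Maven, per the ONOS wiki directions.
git clone https://github.com/opennetworkinglab/onos.git
cd onos
mvn clean install
```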

Once built, the process to install the software and get it working properly is not perfectly documented.  We ended up hitting websocket failures that turned out to be caused by the ONOS systems not being set up properly; all of the information is on the ONOS wiki, but spread across different pages.  Here is what we found worked:

Once the install finished, we were able to access the ONOS installations at their respective addresses.

Distributed ONOS

Once up, we needed to modify a few files on each server, including /opt/onos/config/tablets.json, and then restart the servers.

Since ONOS will restart automatically, you simply need to do a system:shutdown.

By default, ONOS uses 192.168.56.* as the IP address block, so we had to modify the addresses to match our setup.
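For instance, the rewrite can be done in place with sed; the 10.0.1.* prefix below is a placeholder for our actual subnet:

```shell
# Rewrite the default 192.168.56.* cluster addresses to our own subnet.
# 10.0.1. is a placeholder prefix -- substitute your own node addresses.
sed -i 's/192\.168\.56\./10.0.1./g' /opt/onos/config/tablets.json
```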

On the switches, we set the controllers.
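On a PicaOS switch in OpenFlow mode, pointing a bridge at both ONOS nodes looks roughly like this (the bridge name and addresses are placeholders for our setup; 6633 is the classic OpenFlow listen port):

```shell
# Register both ONOS nodes as controllers so either can manage the switch.
ovs-vsctl set-controller br0 tcp:10.0.1.11:6633 tcp:10.0.1.12:6633
```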

And we were then able to ping between our physical hosts connected to ports 1 and 2 on each switch.

Using The Brocade Vyatta Controller – Part 1

As part of NetDEF, I’ve been working with different SDN controllers, including the Brocade Vyatta Controller v1.1.1 (BVC), the OpenDaylight Controller (Helium release) and the ONOS v1.0 controller.  Of the three, the Brocade controller has been the most user-friendly and straightforward.

To install the Brocade Vyatta Controller, simply sign up, download, read the quick guide and follow the instructions.  As Lisa Caywood points out in her blog post, there is even a nice video “Install Brocade Vyatta Controller” with links to the files needed to install BVC.

The Setup


For my testing, I used an Ubuntu 14.04 Server VM with 6 GB RAM and a 32 GB disk to run BVC.  For OpenFlow switches I used a pair of Pica8 3290s running PicaOS v2.5 in crossflow mode.  For end hosts I used four Linux VMs and eight IXIA ports.  The BVC was connected to the switches via the management network.


My first test was to ping between VMs 1 through 4, which showed the correct information in the BVC topology screen:

BVC 4 Hosts

Next, I installed the BVC Path Explorer (the installation was simple and went as shown in the documentation).  I added a few paths, including one that crossed switches and everything worked as expected.

BVC Path Explorer

Once I had everything working as expected with four hosts, I added a few more (about 84).

BVC Zoom Out 80 Hosts

The BVC had no issue adding all of the hosts and allowing me to interact with them.

BVC Close up 80 Nodes

I also did some testing using Postman (a Chrome REST API plugin).  Thanks to Keith Burns, who pointed this tool out to me.

Screenshot 2015-01-31 17.21.29

Above is the output of the GET topology command, neatly formatted in JSON.
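The same topology query can also be issued from the command line.  On a stock OpenDaylight Helium install, the RESTCONF call is roughly the following (port 8181 and the admin/admin credentials are Helium defaults and may differ on BVC):

```shell
# Fetch the operational network topology as JSON via RESTCONF.
curl -u admin:admin \
  "http://<controller-ip>:8181/restconf/operational/network-topology:network-topology"
```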

Screenshot 2015-01-31 17.24.50


Above is the output of the OpenDaylight inventory API call, showing some of my hosts.

While I am just starting my testing and plan to do more extensive work utilizing the Vyatta vRouter connector, the IXIA OpenFlow tester and other tools and add-ons, I am impressed with the release of BVC 1.1.1.  The software and tools appear to be reasonably stable, while the documentation is clear and professional.

OpenFlow Tap Aggregation Part 2 : Multiple Source / Destinations

In the last post, we covered configuring a Pica8 3922 as a One-to-Many port replicator.

For this post, we will not repeat the initial setup steps, just the extra details.

Here is a design where we take four ports and bridge them and then mirror the resulting traffic out four more ports.


# Add Bridge br20 - for TAP Span - 1st Port
# Bridged : te-1/1/21, te-1/1/22, te-1/1/23, te-1/1/24
# Output : te-1/1/25, te-1/1/26, te-1/1/27, te-1/1/28
# $VSCTL is the path to the PicaOS ovs-vsctl binary
$VSCTL add-br br20 -- set bridge br20 datapath_type=pica8 other-config=datapath-id=120
$VSCTL add-port br20 te-1/1/21 -- set interface te-1/1/21 type=pica8
$VSCTL add-port br20 te-1/1/22 -- set interface te-1/1/22 type=pica8
$VSCTL add-port br20 te-1/1/23 -- set interface te-1/1/23 type=pica8
$VSCTL add-port br20 te-1/1/24 -- set interface te-1/1/24 type=pica8
$VSCTL add-port br20 te-1/1/25 -- set interface te-1/1/25 type=pica8
$VSCTL add-port br20 te-1/1/26 -- set interface te-1/1/26 type=pica8
$VSCTL add-port br20 te-1/1/27 -- set interface te-1/1/27 type=pica8
$VSCTL add-port br20 te-1/1/28 -- set interface te-1/1/28 type=pica8
# Remove Default Flow (not treating this as HUB!)
ovs-ofctl del-flows br20
# Add replication flow from each bridged port to each of the other ports in the group
ovs-ofctl add-flow br20 in_port=21,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=output:22,23,24,25,26,27,28
ovs-ofctl add-flow br20 in_port=22,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=output:21,23,24,25,26,27,28
ovs-ofctl add-flow br20 in_port=23,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=output:21,22,24,25,26,27,28
ovs-ofctl add-flow br20 in_port=24,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=output:21,22,23,25,26,27,28
# Drop ingress traffic from mirror ports
ovs-ofctl add-flow br20 in_port=25,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br20 in_port=26,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br20 in_port=27,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br20 in_port=28,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop

Can an OpenFlow Switch Replace a Tap Aggregator?

I had an interesting conversation during lunch about the future of network taps and aggregators now that OpenFlow switches can perform many of the same operations.  In my testing I have used Pica8 switches to replicate traffic, lots of traffic, using static OpenFlow commands.  For example, here is a design where I take 10G of traffic and mirror it across 5 ports.


OpenFlow Tap Aggregation 1 to 5

Here is the configuration I use on a Pica8 3922 to replicate a 10G stream to 5 ports.

This configuration assumes that you have already configured the switch to run in OpenFlow mode.

First we set up a new bridge, br0, and add interfaces 1 through 6 to it.

# Add Bridge br0 - for PCAP Replication - 1st Port
# te-1/1/1 is input; te-1/1/2 through te-1/1/6 are output
# $VSCTL is the path to the PicaOS ovs-vsctl binary
$VSCTL add-br br0 -- set bridge br0 datapath_type=pica8 other-config=datapath-id=100
$VSCTL add-port br0 te-1/1/1 -- set interface te-1/1/1 type=pica8
$VSCTL add-port br0 te-1/1/2 -- set interface te-1/1/2 type=pica8
$VSCTL add-port br0 te-1/1/3 -- set interface te-1/1/3 type=pica8
$VSCTL add-port br0 te-1/1/4 -- set interface te-1/1/4 type=pica8
$VSCTL add-port br0 te-1/1/5 -- set interface te-1/1/5 type=pica8
$VSCTL add-port br0 te-1/1/6 -- set interface te-1/1/6 type=pica8

Next we remove the default flow so that we can program the ports specifically.

# Remove Default Flow (not treating this as HUB!)
ovs-ofctl del-flows br0
# Add replication flow 1 -> 2,3,4,5,6
ovs-ofctl add-flow br0 in_port=1,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=output:2,3,4,5,6

Finally, we drop all ingress traffic on the ports that the mirrored traffic exits.

# Drop ingress traffic from mirror ports
ovs-ofctl add-flow br0 in_port=2,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br0 in_port=3,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br0 in_port=4,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br0 in_port=5,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br0 in_port=6,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop

This same configuration can be extended to include more ports, repeated to mirror different traffic to other ports, etc.

In the next post, I will cover filtering traffic to better control egress data.

Network Function Virtualization and SDN

Network Function Virtualization (NFV) is a call to action that looks to aggregate resources across networking, compute and storage.

A collaboration of many of the large players in the telecommunications space, the original NFV call to action focused on reducing the amount of proprietary hardware necessary to launch and operate services.  The solution, as discussed in the paper, already exists: high-end, virtualized computing resources.

With the addition of SDN, the combination of virtualized computing resources with storage and programmable switches allows Network Operators to add to their solutions portfolios while avoiding vendor lock-in and forced upgrade cycles, bringing software-led infrastructure to life.

Values of the NFV concept

Some of the values of the NFV concept are speed, agility and cost reduction.  By centralizing designs around commodity server hardware, network operators can:

  • Do a single PoP/Site design based on commodity compute hardware.
    • Avoiding designs involving one-off installs of appliances that have different power, cooling and space needs simplifies planning.
  • Utilize resources more effectively.
    • Virtualization allows providers to allocate only the resources needed by each feature/function.
  • Deploy network functions without having to send engineers to each site.
    • “Truck Rolls” are costly both from a time and money standpoint.

Generic processors can be repurposed.

Another key point brought up in the NFV call to action paper is the availability of network optimizations in current generation CPUs.

The throughput and functionality of software-based routers, switches and other high-touch packet-processing devices is constrained by having to send each packet through the processor.  With tools such as Intel’s DPDK and Linux NAPI, Intel’s Sandy Bridge and Ivy Bridge processors can be optimized to provide high-speed forwarding paths, allowing much higher throughput rates.

Programmable switches can move packets for lower costs

While not called out in the NFV call to action, programmable switches are a gateway to NFV nirvana.

OpenFlow, a standard for programming data paths into networking equipment, provides one way to separate the control and data planes.  OpenFlow allows for the use of commodity hardware for control-plane decision making and inexpensive programmable switches for packet forwarding.  Other benefits of separating the control and data planes are:

  • The addition of new features and data sources without changing hardware.
    • The availability of northbound APIs in OpenFlow controllers allows you to plug in features as necessary.
  • Ease of scaling the control plane.
    • Since the control plane runs on commodity hardware and can be virtualized, you can add or subtract resources as needed.
  • Programmable hardware allows for better accuracy in the planning of future features.
    • Programmable hardware allows the end user to design and deploy a feature or function themselves, rather than relying on the roadmap of the equipment provider.


Network Operators have realized that solutions to their current issues are available today.  The combination of key parts of both SDN and virtualization allows Network Operators to deploy features and functions in a timely and cost-effective manner.  By using commodity hardware, costs can be managed and resources allocated effectively.

Testing Vyatta 6.5 R1 Under VMWare – Preliminary Results

Testing Vyatta Subscription Edition 6.5 R1

We here at SDN Testing have been working with Router Analysis, Inc., our parent company, testing Vyatta Subscription Edition 6.5 R1 under VMware.  Testing of Vyatta on hardware is covered on Router Analysis.

For the VMware setup we ran VMware Hypervisor v5.1 on our spec setup as built by IXSystems:

  • SuperMicro X9SAE-V
  • Intel i7-3770 / 32 GB ECC RAM
  • Four Intel I340-T2 NICs (8 ports total)
  • Intel 520 Series 240 GB SSD

We set up one of the Intel I340-T2 NICs using VMDirectPath passthrough and used it for the upstream ports to the network.  The other three NICs were set up using VMware's default vSwitch configuration, with each port in its own vSwitch.  Those six ports were connected to another system generating packets, while two VMs were created on the local machine to bring the total to eight.

Vyatta was given a VM with two vCPUs and 4 GB of RAM.

The following diagram shows the setup.

From previous tests, which will be included in the full report, we knew that each vSwitch port can forward 23% line-rate IMIX traffic (tri-modal: 7 x 64 bytes, 4 x 512 bytes, 1 x 1518 bytes) when 8 ports are in use.

Therefore each tenant was configured to send 234 Mbps of traffic outbound through the two uplink ports, for a total of roughly 1.9 Gbps.
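As a quick sanity check on the arithmetic, eight tenants at 234 Mbps each gives:

```shell
# 8 tenants x 234 Mbps each, expressed in Gbps.
awk 'BEGIN { printf "%.1f Gbps\n", 8 * 234 / 1000 }'   # -> 1.9 Gbps
```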

The next steps were to configure features: uRPF, ACLs and QoS (shaping and policing).

There was no impact from the features; the traffic limitation appeared to have nothing to do with Vyatta and more to do with the VMware setup.  We were able to send IMIX traffic without issue out the two uplink interfaces at 1.9 Gbps total.

From our testing, we have concluded that Vyatta Subscription Edition 6.5 R1 behaves as expected when used as a multi-tenant virtual router, easily supporting the traffic and features needed in that role.

Note: We are planning to test the different vSwitches available for VMware in the future; if we find one that behaves better, we will re-run these tests.

What is Commodity Hardware?

Commodity hardware is not the old machine you found under a desk.

You will often see “runs on commodity hardware” on an SDN product website.  What does it really mean?

Sometimes when Router Analysis gets a new piece of SDN software to test, the results come out much better or worse than what the vendor claims.  The most recent occurrence of lower performance happened as we started testing Vyatta’s R6.5 in our lab.

When testing, you need an initial benchmark that tells you what effect features and other configuration changes have on the system.  For Vyatta we took a simple approach: fire up the LiveCD image and run traffic across two GbE ports.

Router Analysis previously interviewed Vyatta spokesman Scott Sneddon and found that the performance of a Vyatta system could be as high as 2 Mpps (2 million packets per second) at 64 bytes.  Our first run produced 1.78 Mpps.  After a short Twitter discussion, it was determined that this number was expected for the hardware we are using (Intel i7-3770 CPU, 32 GB RAM, Intel C216 chipset and Intel Ethernet interface cards).

We decided to do a little testing to see what the bottleneck was and how we could reduce it.  As we debugged the issues and changed the hardware setup, we got the Vyatta system up to 2 Mpps.  Adding another pair of ports and doing more tweaking got us to 2.9 Mpps, then, early this morning, 3.6 Mpps.  That is a really good number for a software-based forwarding router.  For comparison, the Cisco 7200 VXR with an NPE-G2 can do 2 Mpps, and the 7200 VXR is a very specifically designed system with a proprietary OS built for one thing: routing.

Besting the Cisco 7200 with PC hardware is pretty good, but doing it with PC hardware that is forwarding at almost twice the pps and, at the time of this writing, four times the bandwidth (we have tested up to 6 Gbps on the Vyatta; the 7200 supports 1.8 Gbps) is impressive.

Cost-wise, the test system we are using is not that expensive.  Custom built for us by IXSystems, the system is based on a Supermicro motherboard with the Intel C216 chipset, a quad-core Intel i7-3770 CPU and six PCIe slots.  Since we outfit the machines with 32 GB of ECC RAM, it is easy to take the difference in memory cost and apply it towards the necessary network cards.  Fully configured, the system would come out well below $2,000 US (or less if you source the NICs on the used market).

What is commodity hardware?  It’s still new hardware, and it’s still good quality.  If you want to just throw together something from parts you find around the office, you can.  But if you intend to have a reliable and functional system, you need to do the hardware design properly.