Installing and Using Distributed ONOS

In our work with SDN controllers we concentrate on a few, including ON.Lab's ONOS, the OpenDaylight Project and Ryu. In this post we will discuss setting up a distributed ONOS cluster.

First things first, the ONOS wiki pages give a good overview of the requirements:

  • Ubuntu Server 14.04 LTS 64-bit
  • 2GB or more RAM
  • 2 or more processors
  • Oracle Java 8 JRE and Maven installed

For our setup, we used Ubuntu 14.04.2 (the latest at the time), installed as an OpenSSH server, and we installed Java and Maven following the instructions on the ONOS wiki.
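
On the Ubuntu servers, the wiki's approach at the time boiled down to pulling Oracle Java 8 from the webupd8team PPA and Maven from the stock repositories. Roughly (that PPA has since been retired, so treat this as a historical sketch rather than copy-paste instructions):

# Oracle Java 8 via the webupd8team PPA (the method the ONOS wiki pointed to at the time)
sudo apt-get install -y software-properties-common
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
sudo apt-get install -y oracle-java8-installer oracle-java8-set-default
# Maven (and git) from the standard Ubuntu 14.04 repositories
sudo apt-get install -y maven git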

We then installed the required packages on an OS X VM:

  • Java 8 JDK
  • Apache Maven (3.0 or later)
  • git
  • Apache Karaf (3.0.3 or later)

Once the dependencies were installed we were able to grab the code from the ONOS GitHub page and build it with the “mvn clean install” command, following the directions shown on the ONOS wiki.
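
For reference, the whole build boils down to a clone and a Maven run; the sketch below assumes the opennetworkinglab/onos repository on GitHub and the Java 8/Maven environment described above:

# Clone the ONOS source and build every module
git clone https://github.com/opennetworkinglab/onos.git
cd onos
# The first build downloads a large number of Maven dependencies, so give it time
mvn clean install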

Once built, the process to install the software and get it working properly is not perfectly documented. We ended up suffering websocket failures that turned out to be caused by the ONOS systems not being set up properly; all of the information is on the ONOS wiki, but spread across different pages. Here is what we found worked:

Once the install finished, we were able to access the ONOS installations at http://192.168.21.129:8181/onos/ui/index.html and http://192.168.21.130:8181/onos/ui/index.html

Distributed ONOS

Once up, we needed to modify a few files on each server and restart the servers:

/opt/onos/config/cluster.json

and /opt/onos/config/tablets.json

Since ONOS will restart automatically, you simply need to do a system:shutdown.
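
One way to do that is through the Karaf console on each node (8101 is Karaf's default SSH console port; the user and password depend on how your install was set up):

# Open the Karaf console on the node (default console port 8101)
ssh -p 8101 karaf@192.168.21.129
# ...then, at the console prompt, shut the instance down; the ONOS service
# wrapper brings it back up with the new configuration
system:shutdown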

By default ONOS uses 192.168.56.* as the IP address block, so we had to modify the configuration to match our setup.
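
For illustration, here is roughly what the cluster definition looked like for the two nodes above. The JSON layout is only a sketch; the exact field names (and the tablets.json partition layout) vary between ONOS releases, so use the files generated by your install as the template and just swap in your node IPs:

# Sketch of /opt/onos/config/cluster.json for a two-node cluster on 192.168.21.0/24
sudo tee /opt/onos/config/cluster.json > /dev/null <<'EOF'
{
    "ipPrefix": "192.168.21.*",
    "nodes": [
        { "id": "192.168.21.129", "ip": "192.168.21.129", "tcpPort": 9876 },
        { "id": "192.168.21.130", "ip": "192.168.21.130", "tcpPort": 9876 }
    ]
}
EOF
# tablets.json references the same node entries plus the partition layout;
# edit the generated copy in the same way rather than writing it from scratch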

On the switches we set the controllers to point at both ONOS instances, as shown below.
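
On an Open vSwitch-style switch, that looks something like the following (the bridge name br0 is a placeholder and 6633 was the common OpenFlow controller port at the time; adjust both to your environment):

# Point the bridge at both ONOS instances so either one can manage it
ovs-vsctl set-controller br0 tcp:192.168.21.129:6633 tcp:192.168.21.130:6633
# Check that the controllers show as connected
ovs-vsctl show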

And we were then able to ping between our physical hosts connected to ports 1 and 2 on each switch.

OpenFlow Tap Aggregation Part 2 : Multiple Source / Destinations

In the last post, we covered configuring a Pica8 3922 as a One-to-Many port replicator.

For this post, we will not repeat the initial setup steps, just the extra details.

Here is a design where we take four ports, bridge them, and then mirror the resulting traffic out four more ports.

[Figure: OpenFlowBridge]

# Add Bridge br20 - for TAP Span - 1st Port
#######################################################################################
# Bridged : te-1/1/21, te-1/1/22, te-1/1/23, te-1/1/24
# Output : te-1/1/25, te-1/1/26, te-1/1/27, te-1/1/28
#-------------------------------------------------------------------------------------
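# NOTE: $VSCTL is the ovs-vsctl wrapper defined in the initial setup from Part 1 and
# not repeated here; if you are following along standalone, something along the lines
# of VSCTL="ovs-vsctl" (or the full PicOS path to the binary) is assumed.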
$VSCTL add-br br20 -- set bridge br20 datapath_type=pica8 other-config=datapath-id=120
$VSCTL add-port br20 te-1/1/21 -- set interface te-1/1/21 type=pica8
$VSCTL add-port br20 te-1/1/22 -- set interface te-1/1/22 type=pica8
$VSCTL add-port br20 te-1/1/23 -- set interface te-1/1/23 type=pica8
$VSCTL add-port br20 te-1/1/24 -- set interface te-1/1/24 type=pica8
$VSCTL add-port br20 te-1/1/25 -- set interface te-1/1/25 type=pica8
$VSCTL add-port br20 te-1/1/26 -- set interface te-1/1/26 type=pica8
$VSCTL add-port br20 te-1/1/27 -- set interface te-1/1/27 type=pica8
$VSCTL add-port br20 te-1/1/28 -- set interface te-1/1/28 type=pica8
# Remove Default Flow (not treating this as HUB!)
ovs-ofctl del-flows br20
# Add replication flow from each bridged port to each of the other ports in the group
ovs-ofctl add-flow br20 in_port=21,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=output:22,23,24,25,26,27,28
ovs-ofctl add-flow br20 in_port=22,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=output:21,23,24,25,26,27,28
ovs-ofctl add-flow br20 in_port=23,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=output:21,22,24,25,26,27,28
ovs-ofctl add-flow br20 in_port=24,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=output:21,22,23,25,26,27,28
# Drop ingress traffic from mirror ports
ovs-ofctl add-flow br20 in_port=25,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br20 in_port=26,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br20 in_port=27,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
ovs-ofctl add-flow br20 in_port=28,dl_dst="*",dl_src="*",dl_type="*",dl_vlan_pcp="*",dl_vlan="*",actions=drop
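
To sanity-check the result, you can dump the flow table and port counters. These are standard ovs-ofctl commands, though the exact output format depends on the PicOS release:

# Confirm the replication and drop flows are installed and watch their packet counters
ovs-ofctl dump-flows br20
# Per-port counters give a quick view of which taps are actually receiving traffic
ovs-ofctl dump-ports br20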

Network Function Virtualization and SDN

Network Function Virtualization, or NFV for short, is a call to action that looks to aggregate resources across networking, compute and storage.

A collaboration of many of the large players in the telecommunications space, the original NFV call to action focused on lowering the amount of proprietary hardware necessary to launch and operate services. The solution, as discussed in the paper, already exists: high-end, virtualized computing resources.

With the addition of SDN, the combination of virtualized computing resources, storage and programmable switches allows Network Operators to add to their solution portfolios while avoiding vendor lock-in and forced upgrade cycles, bringing Software-led Infrastructure to life.

Values of the NFV concept.

Some of the values of the NFV concept are speed, agility and cost reduction. By centralizing designs around commodity server hardware, network operators can:

  • Do a single PoP/Site design based on commodity compute hardware.
    • Avoiding designs involving one-off installs of appliances that have different power, cooling and space needs simplifies planning.
  • Utilize resources more effectively.
    • Virtualization allows providers to allocate only the resources needed by each feature/function.
  • Deploy network functions without having to send engineers to each site.
    • “Truck Rolls” are costly from both a time and money standpoint.

Generic processors can be repurposed.

Another key point brought up in the NFV call to action paper is the availability of network optimizations in current generation CPUs.

The throughput and functionality of software-based routers, switches and other high-touch packet processing devices are constrained by having to send each packet through the processor. With tools such as Intel's DPDK and the Linux kernel's NAPI, Intel's Sandy Bridge and Ivy Bridge processors can be optimized to provide high-speed forwarding paths, allowing for much higher throughput rates.

Programmable switches can move packets for lower costs

While not called out in the NFV call to action, programmable switches are a gateway to NFV nirvana.

OpenFlow, a standard for programming data paths into networking equipment, provides one way to separate the control and data planes. OpenFlow allows for the use of commodity hardware for control-plane decision making and inexpensive programmable switches for packet forwarding. Other benefits of separating the control and data planes are:

  • The addition of new features and data sources without changing hardware.
    • The availability of northbound APIs in OpenFlow controllers allows you to plug in features as necessary (see the example after this list).
  • Ease of scaling the control plane.
    • Since the control plane runs on commodity hardware and can be virtualized, you can add or subtract resources as needed.
  • Programmable hardware allows for better accuracy in the planning of future features.
    • Programmable hardware allows the end user to design and deploy a feature or function themselves, rather than relying on the roadmap of the equipment provider.
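
As a concrete illustration of that pluggability, a forwarding rule can be installed through a controller's northbound REST interface without touching the switch vendor's software. The call below is modelled on ONOS's /onos/v1/flows endpoint; the device ID, credentials and JSON layout are placeholders and will differ between controllers and releases:

# Install a rule that drops everything arriving on port 1 of one switch
# (onos:rocks are the stock ONOS demo credentials; substitute your own)
curl -u onos:rocks -X POST -H "Content-Type: application/json" \
     -d '{
           "priority": 40000,
           "isPermanent": true,
           "deviceId": "of:0000000000000001",
           "selector":  { "criteria": [ { "type": "IN_PORT", "port": 1 } ] },
           "treatment": { "instructions": [] }
         }' \
     http://192.168.21.129:8181/onos/v1/flows/of:0000000000000001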

Summary

Network Operators have realized that solutions to their current issues are available today. The combination of key parts of both SDN and Virtualization allows Network Operators to deploy features and functions in a timely and cost-effective manner. By using commodity hardware, costs can be managed and resources allocated effectively.

What is Commodity Hardware?

Commodity Hardware is not the old machine you found under a desk.

You will often see “runs on commodity hardware” on an SDN product website. What does it really mean?

Sometimes when Router Analysis gets a new piece of SDN software to test, the results come out much better or worse than what the vendor claims. The most recent occurrence of lower performance happened as we started testing Vyatta's R6.5 in our lab.

When testing, you need an initial benchmark that tells you what effect features and other configuration changes have on the system. For Vyatta we took a simple approach: fire up the LiveCD image and run traffic across two GbE ports.

Router Analysis previously interviewed Vyatta spokesman Scott Sneddon and found out that the performance of a Vyatta system could be as high as 2Mpps (2 million packets per second) at 64 bytes. Our first run produced 1.78Mpps. After a short Twitter discussion, it was determined that the number was expected with the hardware we are using (Intel i7-3770 CPU, 32GB RAM, Intel C216 chipset and Intel Ethernet interface cards).

We decided to do a little testing to see what the bottleneck was and how we could reduce it. As we debugged the issues and changed the hardware setup, we got the Vyatta system up to 2Mpps. Adding another pair of ports and doing more tweaking got us to 2.9Mpps, then early this morning 3.6Mpps. That is a really good number for a software-based forwarding router. To compare, the Cisco 7200 VXR with an NPE-G2 can do 2Mpps, and the 7200 VXR is a purpose-built system with a proprietary OS designed for one thing: routing.

Besting the Cisco 7200 with PC hardware is pretty good, but doing it with PC hardware that is forwarding at almost 2x the pps and, at the time of this writing, more than 3x the bandwidth (we have tested up to 6Gbps on the Vyatta; the 7200 supports 1.8Gbps) is impressive.

Cost-wise, the test system we are using is not that expensive. Custom built for us by iXsystems, the system is based on a Supermicro motherboard with the Intel C216 chipset, a quad-core Intel i7-3770 CPU and six PCIe slots. Since we outfit the machines with 32GB of ECC RAM, it is easy to take the difference in memory cost and apply it towards the necessary network cards. Fully configured, the system would come out well below $2000 US (or less if you source the NICs on the used market).

What is commodity hardware?  It’s still new hardware and it’s still good quality.  If you want to just throw together something from parts you find around the office, you can.  But if you intend to have a reliable and functional system, you need to do the hardware design properly.

Network Hardware and SDN

How Does SDN Fit Into The Virtual Data Center?

One thing that needs to be cleared up is the definition of Network or Networking Hardware. In the ONF's definition of SDN, they discuss the decoupling of the Control and Data Planes, with the Data Plane being defined as Network Hardware. Here is where things can get confusing.

What is Network Hardware?

Wikipedia says the following:

“Networking hardware or networking equipment typically refers to devices facilitating the use of a computer network. Typically, this includes gateways, routers, network bridges, switches, hubs, and repeaters. Also, hybrid network devices such as multilayer switches, protocol converters, bridge routers, proxy servers, firewalls, network address translators, multiplexers, network interface controllers, wireless network interface controllers, modems, ISDN terminal adapters, line drivers, wireless access points, networking cables and other related hardware.”

Essentially, anything that is not an end system is network hardware. The current reality of SDN is that it tends to mean Programmable Switches when it says Network Hardware. Switches are generally built on fabrics that allow ports to transmit traffic to and from other ports.

If we were to be true to the current SDN message, we would only look at Programmable Switches. The reality is that there are other ways to create a Data Plane, i.e. Network Hardware. One of these ways is using a Network Processor (which I covered in the earlier article on Vyatta).

A Router is, at its core, a computer with a few (or many) interfaces and a Network Processor. We can easily see this by looking at the design of Juniper Networks routers.

This is one of the interesting things I see in the SDN space: companies that can take advantage of generic hardware and add value. These companies will create more tools for architects and operators to use when pushing packets.

We plan to cover as many SDN-related topics as possible here at SDN Testing, both those that exist today and those that will come in the future.

The Virtual Data Center Reality

Virtual Data Centers start to become reality.

Previously posted on Router Analysis

With the recent announcement of the CSR 1000v from Cisco, there are now two commercial Virtual Data Center stories, Cisco and Vyatta (three if we count the VMware vCNS products and use one of the other vendors' products for a router).

What is a Virtual Data Center? There will be a lot of different answers, but in my view it consists of the following:

  • A pair of redundant Routers with multiple provider uplinks
  • A pair of redundant Firewalls
  • A pair of Load Balancers
  • Front and Backend Servers

In my previous life designing and building Internet Data Centers, we would have built this entire setup out of separate parts taking up an entire rack or two. Now it could be done in a single blade server with multiple redundant power supplies, or a pair of highly spec'd servers.

Now, I want to be clear here: I don't think that software-based firewalls are up to the task of replacing hardware-based ones. I think most security companies/consultants would agree that there is a danger in hosting both your servers and your firewall on the same shared hardware. You could design the setup so that the ASA is only hosted on its own blade(s), but there is still the inherent risk of a misconfiguration or privilege-escalation hack allowing someone to bypass the firewall.

Sadly, the way around the security issue is to put a physical firewall inline. This can be easily done, so it's mainly just a CapEx issue.

For routers, at this time we only have Cisco and Vyatta commercially. Both offer strong products, but Cisco's CSR 1000v is more feature-rich, supporting many protocols and features that come from reusing Cisco's existing code base.

In the coming weeks and months I am going to be writing about the products available in the space and what limitations still need to be overcome. I will be working with Cisco, Vyatta, VMware and others to try to compile as much information as possible.

Summary: The Virtual Data Center is here. It's not perfect, but all of the parts exist from multiple vendors. The world of Virtualization just got a lot more interesting.

What are your thoughts?