*Draft* An Intro to OpenFlow@ORBIT
This page is meant to get you up and running quickly with OpenFlow-related experiments/development on the ORBIT testbeds.
Sections
I. A simple OpenFlow Network
II. More complex examples
III. Installation
I. A simple OpenFlow Network
We begin with a simple setup of a Mininet network controlled by a controller (Floodlight) running on a separate Sandbox node, which looks like this:
   node1-1              node1-2
  [Mininet]------------[Floodlight]
           network link
1.1 Some prerequisites - Using the prepackaged node image
To make things easier, we provide images pre-installed with several potentially useful packages, including:
- Floodlight : A development-friendly controller platform
- mininet : OpenFlow network prototyping/emulation tool
- cbench : Controller benchmarking tool
- liboftrace : OpenFlow message parser/analyzer for pcap files
- Wireshark+OF dissector : Wireshark with a plugin for OpenFlow messages
This makes things easy since you can image multiple nodes with the same image, and pick and choose what to run where.
The image is named of-pkg.ndz. omf can be used to image nodes with it:
$ omf load -i of-pkg.ndz
The nodes will be off after they are imaged. Turn them on:
$ omf tell -a on
Once on, you can log into them as root using their names, e.g. node1-1.
1.1.1 node/Sandbox layout
When you log onto a Sandbox, you are logged into the console machine, from which you can use omf and the like to image, log into, and manage the nodes.
Each node (save those on Sandbox 4) has two interfaces. The first, eth1, connects to the control network used from the console to manage the nodes, and is assigned an IP address of the form 10.1x.y.z, where x is the sandbox number and y and z are the node numbers; e.g. if your node is named node1-2 and is part of Sandbox 8, it will be 10.18.1.2. Do not take down this interface or change its address - you will lose your connection to the node. The second, eth0, is down by default and is open to any kind of use. Both are gigabit links and can be used for experimentation, but in general the second one should be used unless there are specific circumstances.
1.1.2 managing/configuring nodes
This is done by using SSH to log into the nodes as root. Logging into each node individually is fine, but it becomes cumbersome when you have many nodes that all need the exact same setup. In that case, commands can also be issued via SSH from the console, without manually logging into each node (and ending up with a dozen terminal windows):
user@console.sb8:~$ ssh -o StrictHostKeyChecking="no" root@node1-1 "command_to_run_1;command_to_run_2"
This runs command_to_run_1 and command_to_run_2 on node1-1 as if you had logged in and issued them at the shell. Each command is delimited by a semicolon, and the full string is surrounded by double quotes. The -o StrictHostKeyChecking="no" option stops SSH from checking host keys and is optional.
This can be used in a script to run from the console to quickly set up many nodes. We use it in some of the following examples to make it easier to show what is happening where.
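For example, a minimal sketch of such a script, run from the console; the node names and addresses here mirror the two-node example in Section 1.2 and are only placeholders:
#!/bin/bash
# bring up eth0 on each node with an address in the same block
i=1
for node in node1-1 node1-2; do
    ssh -o StrictHostKeyChecking="no" root@$node "ifconfig eth0 inet 192.168.1.$i up"
    i=$((i+1))
done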
1.1.3 Installing your own tools
If you are interested in learning more about these packages or installing them yourself, refer to Section III for a summary of each, quick setup instructions, and links to more information.
1.2 Running the network
As a two-node example, we image the nodes on Sandbox8, as explained in Section 1.1. One is used for the controller, and the other, the Mininet network.
- Bring up and assign addresses to eth0 of the nodes. Both should be in the same IP block. If done from console, the commands look like this:
$ ssh root@node1-1 "ifconfig eth0 inet 192.168.1.1 up"
$ ssh root@node1-2 "ifconfig eth0 inet 192.168.1.2 up"
The nodes should now be able to ping each other via eth0:
$ ssh root@node1-1 "ping -c 1 192.168.1.2"
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_req=1 ttl=64 time=0.614 ms

--- 192.168.1.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms
- Start the controller on one node. We arbitrarily pick node1-1. From a shell on node1-1, launch Floodlight:
# cd floodlight
# java -jar target/floodlight.jar
After you give it a few seconds, Floodlight should be listening on port 6633 on all interfaces available on the node (eth0, eth1, and lo). If you want, you can start up tcpdump or something similar in a separate terminal on node1-1 to begin capturing control messages:
# tcpdump -i eth0 port 6633
Alternatively, you can start tcpdump so that it writes to a .pcap file for later analysis with wireshark with the OpenFlow plugin, or with ofstats or oftrace, which are part of liboftrace:
# tcpdump -w outfile.pcap -i eth0 port 6633
- Launch Mininet. From another shell on node1-2:
# mn --topo=single,2 --controller=remote,ip=192.168.1.1
This will give you a virtual network of two hosts and one switch pointed to the running Floodlight instance on node1-1. Once at the prompt, try pinging one host from the other:
mininet> h1 ping h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=8.19 ms
64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=0.164 ms
64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.025 ms
64 bytes from 10.0.0.2: icmp_req=4 ttl=64 time=0.024 ms
^C
--- 10.0.0.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.024/2.101/8.193/3.517 ms
Notice how the first ping takes much longer. This is due to the flow installation process triggered by the first ping (specifically, by the ARPs sent by the hosts) when the switch suffers a flow table miss. At the same time, you should see (lots of) packets being captured by tcpdump in node1-1's terminal:
root@node1-1:~/floodlight# tcpdump -i eth0 port 6633
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
20:18:30.188181 IP 192.168.1.2.41631 > 192.168.1.1.6633: Flags [S], seq 3242563912, win 14600, options [mss 1460,sackOK,TS val 699854 ecr 0,nop,wscale 4], length 0
20:18:30.188321 IP 192.168.1.1.6633 > 192.168.1.2.41631: Flags [S.], seq 2665849071, ack 3242563913, win 14480, options [mss 1460,sackOK,TS val 700809 ecr 699854,nop,wscale 4], length 0
20:18:30.188466 IP 192.168.1.2.41631 > 192.168.1.1.6633: Flags [.], ack 1, win 913, options [nop,nop,TS val 699854 ecr 700809], length 0
20:18:30.188618 IP 192.168.1.2.41631 > 192.168.1.1.6633: Flags [F.], seq 1, ack 1, win 913, options [nop,nop,TS val 699854 ecr 700809], length 0
20:18:30.190310 IP 192.168.1.1.6633 > 192.168.1.2.41631: Flags [.], ack 2, win 905, options [nop,nop,TS val 700810 ecr 699854], length 0
20:18:30.224204 IP 192.168.1.1.6633 > 192.168.1.2.41631: Flags [P.], seq 1:9, ack 2, win 905, options [nop,nop,TS val 700818 ecr 699854], length 8
20:18:30.224426 IP 192.168.1.2.41631 > 192.168.1.1.6633: Flags [R], seq 3242563914, win 0, length 0
20:18:30.402564 IP 192.168.1.2.41632 > 192.168.1.1.6633: Flags [S], seq 1611313095, win 14600, options [mss 1460,sackOK,TS val 699908 ecr 0,nop,wscale 4], length 0
20:18:30.402585 IP 192.168.1.1.6633 > 192.168.1.2.41632: Flags [S.], seq 367168075, ack 1611313096, win 14480, options [mss 1460,sackOK,TS val 700863 ecr 699908,nop,wscale 4], length 0
...
1.2.1 Using Wireshark
In the above example, tcpdump can be replaced by wireshark. Wireshark is "friendlier" in that it has a GUI, and an OpenFlow dissector plugin is available for it. In order to use Wireshark, you must enable X11 forwarding from your workstation to the node with the -X or -Y flag for ssh, e.g.:
ssh -X -l root node1-1
1.2.2 Using Open vSwitch directly
Mininet's datapaths are backed by OVS. Therefore, if you have a Mininet install, you get OVS for "free". You can use OVS directly for your data plane.
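For example, a minimal sketch using the ovs-vsctl utility to build a bridge by hand and point it at the Floodlight instance from Section 1.2; the bridge name br0 and the data-plane interface eth2 are placeholders:
# create a bridge, attach a data-plane interface, and point the bridge at the controller
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth2
ovs-vsctl set-controller br0 tcp:192.168.1.1:6633
ovs-vsctl show    # verify the bridge, port, and controller settings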
II. More complex examples
It is possible to run multiple controller instances (for whatever reason), or different logical components, together in the same network. This section shows two examples of more complex SDN network setups: multiple controller instances, and slicing with FlowVisor, a network hypervisor.
Sections
- 2.1 Multiple Controllers
- 2.2 With FlowVisor (Network virtualization/slicing)
2.1 Multiple Controllers
You may have multiple controllers in the same logical space of the control plane for various reasons - special applications, fail-over, distributed control planes, etc.
- 2.1.1 On multiple hosts
- 2.1.2 On the same host
2.1.1 On multiple hosts
If each controller is running on its own host (machine, VM, etc.), there is little to change; if you have hosts A, B, and C with a Floodlight instance running on each, switches can be pointed to targets A:6633, B:6633, C:6633, or any combination thereof (a switch can be pointed to multiple controllers), as sketched below.
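As a sketch of pointing one switch at several controllers at once, assuming an Open vSwitch bridge br0 on the data-plane node (see Section 1.2.2) and hypothetical controller addresses 192.168.1.1, 192.168.1.2, and 192.168.1.3 standing in for A, B, and C:
# list every controller target; the switch keeps a control channel to each
ovs-vsctl set-controller br0 tcp:192.168.1.1:6633 tcp:192.168.1.2:6633 tcp:192.168.1.3:6633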
2.1.2 On the same host
The Floodlight configuration file
Multiple instances of Floodlight may be run on the same host, as long as each controller listens on a separate set of sockets. In this case, all controllers would be on the same IP address(es), so you must change the ports they are listening on. These ports include the OpenFlow control port (TCP 6633), REST API (TCP 8080), and debug (TCP 6655).
In Floodlight, these values can be changed by modifying the file floodlightdefault.properties, located in src/main/resources/ of the Floodlight sources. It (currently) looks like this:
floodlight.modules=\
net.floodlightcontroller.jython.JythonDebugInterface,\
net.floodlightcontroller.counter.CounterStore,\
net.floodlightcontroller.storage.memory.MemoryStorageSource,\
net.floodlightcontroller.core.internal.FloodlightProvider,\
net.floodlightcontroller.threadpool.ThreadPool,\
net.floodlightcontroller.devicemanager.internal.DeviceManagerImpl,\
net.floodlightcontroller.devicemanager.internal.DefaultEntityClassifier,\
net.floodlightcontroller.staticflowentry.StaticFlowEntryPusher,\
net.floodlightcontroller.firewall.Firewall,\
net.floodlightcontroller.forwarding.Forwarding,\
net.floodlightcontroller.linkdiscovery.internal.LinkDiscoveryManager,\
net.floodlightcontroller.topology.TopologyManager,\
net.floodlightcontroller.flowcache.FlowReconcileManager,\
net.floodlightcontroller.debugcounter.DebugCounter,\
net.floodlightcontroller.debugevent.DebugEvent,\
net.floodlightcontroller.perfmon.PktInProcessingTime,\
net.floodlightcontroller.ui.web.StaticWebRoutable,\
net.floodlightcontroller.loadbalancer.LoadBalancer,\
org.sdnplatform.sync.internal.SyncManager,\
org.sdnplatform.sync.internal.SyncTorture,\
net.floodlightcontroller.devicemanager.internal.DefaultEntityClassifier
org.sdnplatform.sync.internal.SyncManager.authScheme=CHALLENGE_RESPONSE
org.sdnplatform.sync.internal.SyncManager.keyStorePath=/etc/floodlight/auth_credentials.jceks
org.sdnplatform.sync.internal.SyncManager.dbPath=/var/lib/floodlight/
Several entries can be added to this list to tweak TCP port values. Unfortunately, these entries may change fairly frequently due to active development.
- net.floodlightcontroller.restserver.RestApiServer.port = 8080
- net.floodlightcontroller.core.internal.FloodlightProvider.openflowport = 6633
- net.floodlightcontroller.jython.JythonDebugInterface.port = 6655
Each entry should be on its own line, with no stray spaces or blank lines in between. For example, to change the port that Floodlight listens on for switches from the default of 6633 to 6634, append:
net.floodlightcontroller.core.internal.FloodlightProvider.openflowport = 6634
to the .properties file. Then point Floodlight to the configuration file with the -cf flag:
java -jar target/floodlight.jar -cf src/main/resources/floodlightdefault.properties
The file specified after -cf will be read in, and the values in it used to configure the controller instance. You should be able to confirm the change:
# netstat -nlp | grep 6634
...
tcp6       0      0 :::6634          :::*             LISTEN      2029/java
...
Launching multiple controllers
Each instance of the controller run on the same host can be pointed to its own .properties file with the -cf flag, each with different port values. Begin by making as many copies of the default .properties file as you will have controllers. Following a similar example to the one above, you can have one host A and three Floodlight instances 1, 2, and 3, configured as below:
                                   1      2      3
FloodlightProvider.openflowport    6633   6634   6635
RestApiServer.port                 8080   8081   8082
JythonDebugInterface.port          6655   6656   6657
No ports should be shared by the three instances, or they will throw errors at startup and exit shortly after. With a .properties file for each instance under resources/ (named 1.properties, 2.properties, and 3.properties for this example), you can launch the controllers in a loop, for example:
for i in `seq 1 3`; do
    java -jar target/floodlight.jar -cf src/main/resources/$i.properties 1>/dev/null 2>&1 &
done
This should launch three backgrounded instances of Floodlight.
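To sanity-check that all three instances came up and are listening where expected (ports as in the table above), you can do something like:
# each port should show a LISTEN entry owned by a java process
netstat -nlp | egrep ':(6633|6634|6635)'
jobs    # lists the three backgrounded java processes (if run from the same shell)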
2.2 With FlowVisor (Network virtualization/slicing)
A more typical case you might encounter is a network that is sliced, or virtualized.
- 2.2.1 A brief intro to network virtualization
- 2.2.2 Virtualization with multiple hosts
- 2.2.3 On the same host
2.2.1 A brief intro to network virtualization
A virtualized network is organized as below:
[controller 1]  [controller 2]  [controller 3]
        \             |             /
         \            |            /
         [network hypervisor]-[policies]
                      |
                  [network]
A network hypervisor like FlowVisor sits between the control and data planes, intercepting and rewriting the contents of the OpenFlow control channel to one or more controllers running independently of one another. Ultimately, the network hypervisor gives each controller the illusion that it is the only controller in the network. It accomplishes this by:
- Rewriting the topology information conveyed by OpenFlow (in the form of PORT_STATs and PacketIns triggered by LLDP messages) before it reaches each controller, allowing it to only work on a subset, or slice, of the network, and
- Mapping the PacketIns/PacketOuts to and from each controller to the proper sets of switches and switch ports.
How the re-writing occurs depends on a set of admin-defined policies.
2.2.2 Virtualization with multiple hosts
We begin by introducing a simple example of a virtualized topology:
[Floodlight 1]   [Floodlight 2]
         \           /
         [FlowVisor]
              |
          [Mininet]
Each component above will be run on a separate node. Since we need more than two nodes, you may want to reserve either Sandbox 4 or Sandbox 9. The components can also be run on the same node, with the caveats discussed in the next section, 2.2.3.
Here, Mininet will be used to emulate a three-switch, three-host data plane:
h1   h2   h3
|    |    |
s1---s2---s3
This data plane will be sliced so that one Floodlight instance controls switches s1 and s2, and the other controls s3, as sketched below.
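A rough sketch of the pieces involved (not a full walkthrough): the data plane can be emulated with Mininet's linear topology, and the slicing policy pushed to FlowVisor with its fvctl tool. The addresses below are hypothetical (FlowVisor at 192.168.1.2, the Floodlight instances at 192.168.1.3 and 192.168.1.4), and the exact fvctl syntax varies between FlowVisor versions, so treat this as illustrative only:
# on the Mininet node: three switches in a line, one host each, controller = FlowVisor
mn --topo=linear,3 --controller=remote,ip=192.168.1.2 --mac

# on the FlowVisor node: one slice per controller...
fvctl add-slice slice1 tcp:192.168.1.3:6633 admin@slice1
fvctl add-slice slice2 tcp:192.168.1.4:6633 admin@slice2

# ...then give slice1 full rights (permission 7) over s1 and s2, and slice2 over s3
# (the DPIDs are Mininet's defaults for s1, s2, s3)
fvctl add-flowspace fs-s1 00:00:00:00:00:00:00:01 1 any slice1=7
fvctl add-flowspace fs-s2 00:00:00:00:00:00:00:02 1 any slice1=7
fvctl add-flowspace fs-s3 00:00:00:00:00:00:00:03 1 any slice2=7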
2.2.3 On the same host
As with the case of multiple controllers on the same VM/host, you must make sure that neither FlowVisor nor the controllers listen on the same sets of ports. For the controllers, this can be handled as described in Section 2.1.2; by default, FlowVisor and Floodlight conflict on ports 6633 and 8080.
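One way to avoid the clash, sketched here, is to leave FlowVisor on its defaults and move Floodlight off 6633/8080 using the same properties entries described in Section 2.1.2 (the file name fl-sliced.properties is just an example):
# copy the default config and change the conflicting ports
cp src/main/resources/floodlightdefault.properties src/main/resources/fl-sliced.properties
echo "net.floodlightcontroller.core.internal.FloodlightProvider.openflowport = 6634" >> src/main/resources/fl-sliced.properties
echo "net.floodlightcontroller.restserver.RestApiServer.port = 8081" >> src/main/resources/fl-sliced.properties
java -jar target/floodlight.jar -cf src/main/resources/fl-sliced.properties
The slices defined in FlowVisor would then point at port 6634 (and so on for any further instances) rather than 6633.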
III. Installation
The following are the installation steps and basic usage for the software found on the image. For more information, refer to their respective pages; Floodlight and Mininet in particular have very thorough docs.
Quick links:
3.1 Floodlight
3.2 Mininet
3.3 CBench
3.4 liboftrace
3.5 Wireshark
Note: the following examples are for Ubuntu, since that is what is used at WINLAB. A quick search will often bring up hints/steps for CentOS/RHEL and OS X, but for the most part you will have to experiment a bit.
3.1 Floodlight
docs: http://docs.projectfloodlight.org/display/floodlightcontroller/Floodlight+Documentation
For the most part, the following is a repetition of some of the things there. Truth be told, if you plan to modify or develop on Floodlight, it is better to just install it on a local machine where you can use Eclipse (either that, or you can try to X11 forward, but that doesn't always go well).
dependencies
sudo apt-get install git-core build-essential default-jdk ant python-dev eclipse
installation
The following fetches and builds the latest stable release:
git clone git://github.com/floodlight/floodlight.git
cd floodlight
git checkout fl-last-passed-build
ant
To import as a project on Eclipse, run the following while in the same directory:
ant eclipse
run
Assuming everything worked out:
java -jar target/floodlight.jar
from the floodlight/ directory launches Floodlight. It will output a bunch of messages while it searches for, loads, and initializes modules. You can refer to the output attached below for what it should look like - there may be warnings, but they should be harmless.
This command runs Floodlight in the foreground, so you should either launch it in a terminal multiplexer like screen or tmux, or tack 1>logfile 2>&1 & onto the end to background it and capture its output in a log file. The former is probably the better option.
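For example, either of the following works (the screen session name and log file name are arbitrary):
# run Floodlight inside a detached screen session named "fl"
screen -dmS fl java -jar target/floodlight.jar

# or background it directly, capturing its output in a log file
java -jar target/floodlight.jar 1>logfile 2>&1 &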
development
Tutorials and other information can be found here: http://docs.projectfloodlight.org/display/floodlightcontroller/For+Developers
3.2 Mininet
website: http://mininet.org/
It is highly recommended to run through the docs, especially the following:
- FAQs: https://github.com/mininet/mininet/wiki/FAQ
- Getting Started: http://mininet.org/download/
- Sample Workflow: http://mininet.org/sample-workflow/
- Walkthrough: http://mininet.org/walkthrough/
If you post to the mailing list before reading the FAQs, you will likely just be asked whether you have checked them.
installation/build
The VM is the recommended way to run Mininet on your machine.
The following is for a native install (as on the node image).
The method differs for different versions of Ubuntu; the following is for 12.04 (for other versions, refer to this page). It also takes care of the dependencies.
sudo apt-get install mininet/precise-backports
Then disable ovs-controller:
sudo service openvswitch-controller stop
sudo update-rc.d openvswitch-controller disable
You may also need to start Open vSwitch:
sudo service openvswitch-switch start
You can verify that it works with the following:
sudo mn --test pingall
This sets up a 2-host, 1-switch topology and pings between the hosts. The output looks similar to this:
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1)
*** Configuring hosts
h1 h2
*** Starting controller
*** Starting 1 switches
s1
*** Ping: testing ping reachability
h1 -> h2
h2 -> h1
*** Results: 0% dropped (0/2 lost)
*** Stopping 2 hosts
h1 h2
*** Stopping 1 switches
s1 ...
*** Stopping 1 controllers
c0
*** Done
completed in 0.460 seconds
run
There are many flags and options associated with launching Mininet. mn --help
will display them.
For example, to start the same topology as the pingall test, but with a controller running separately from Mininet:
# mn --topo=single,2 --controller=remote,ip=10.18.1.1 --mac
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1)
*** Configuring hosts
h1 h2
*** Starting controller
*** Starting 1 switches
s1
*** Starting CLI:
mininet>
- --topo=single,2 : one switch with two hosts
- --controller=remote,ip=10.18.1.1 : controller at 10.18.1.1
- --mac : non-random MAC addresses
Some useful ones are:
- controller external to Mininet, at IP addr and port p:
--controller=remote,ip=[addr],port=[p]
- non-random host MAC addresses (starting at 00:00:00:00:00:01 for h1)
--mac
usage
You can see the available commands by typing ? at the Mininet prompt. exit quits Mininet.
Some basic examples:
- display topology:
mininet> net
c0
s1 lo:  s1-eth1:h1-eth0 s1-eth2:h2-eth0
h1 h1-eth0:s1-eth1
h2 h2-eth0:s1-eth2
- display host network info:
mininet> h1 ifconfig
h1-eth0   Link encap:Ethernet  HWaddr 00:00:00:00:00:01
          inet addr:10.0.0.1  Bcast:10.255.255.255  Mask:255.0.0.0
          inet6 addr: fe80::200:ff:fe00:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:135 errors:0 dropped:124 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8906 (8.9 KB)  TX bytes:558 (558.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
- ping host 1 from host 2
mininet> h2 ping h1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_req=1 ttl=64 time=10.0 ms
^C
--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 10.026/10.026/10.026/0.000 ms
scripting
Mininet has a Python API, whose docs can be found online: http://mininet.org/api/
Examples can also be found here: https://github.com/mininet/mininet/tree/master/examples
Once you have written a script, you can run it with Python:
python mn_script.py
3.3 Cbench
website: http://docs.projectfloodlight.org/display/floodlightcontroller/Cbench+(New)
dependencies
sudo apt-get install autoconf automake libtool libsnmp-dev libpcap-dev
installation/build
git clone git://gitosis.stanford.edu/openflow.git
cd openflow; git checkout -b mybranch origin/release/1.0.0
git clone git://gitosis.stanford.edu/oflops.git
git submodule init && git submodule update
wget http://hyperrealm.com/libconfig/libconfig-1.4.9.tar.gz
tar -xvzf libconfig-1.4.9.tar.gz
cd libconfig-1.4.9
./configure
sudo make && sudo make install
cd ../oflops/
sh ./boot.sh ; ./configure --with-openflow-src-dir=${OF_PATH}/openflow/
make install
run
Run from the cbench directory under oflops:
cd cbench
cbench -c localhost -p 6633 -m 10000 -l 10 -s 16 -M 1000 -t
- -c localhost : controller at loopback
- -p 6633 : controller listening on port 6633
- -m 10000 : 10000 ms (10 sec) per test
- -l 10 : 10 loops (trials) per test
- -s 16 : 16 emulated switches
- -M 1000 : 1000 unique MAC addresses (hosts) per switch
- -t : throughput testing
For the complete list, use the -h flag.
The output for the above command looks like this:
cbench: controller benchmarking tool
   running in mode 'throughput'
   connecting to controller at localhost:6633
   faking 16 switches offset 1 :: 3 tests each; 10000 ms per test
   with 10 unique source MACs per switch
   learning destination mac addresses before the test
   starting test with 0 ms delay after features_reply
   ignoring first 1 "warmup" and last 0 "cooldown" loops
   connection delay of 0ms per 1 switch(es)
   debugging info is off
16:53:14.384 16 switches: flows/sec: 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 total = 0.028796 per ms
16:53:24.485 16 switches: flows/sec: 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 total = 0.031999 per ms
16:53:34.590 16 switches: flows/sec: 24 24 24 24 24 24 24 24 24 24 24 24 24 24 24 24 total = 0.038380 per ms
RESULT: 16 switches 2 tests min/max/avg/stdev = 32.00/38.38/35.19/3.19 responses/s
3.4 liboftrace (ofdump/ofstats)
docs:
https://github.com/capveg/oftrace/blob/master/README
http://www.openflow.org/wk/index.php/Liboftrace
dependencies
sudo apt-get install libpcap-dev swig libssl-dev
installation/build
git clone git://github.com/capveg/oftrace.git
cd oftrace
./boot.sh
./configure --with-openflow-src-dir=${OF_PATH}/openflow/
make && make install
run
There are two tools pre-packaged with liboftrace (as per a mailing-list entry):
- ofstats: a program which calculates the controller processing delay, i.e., the difference in time between a packet_in message and the corresponding packet_out or flow_mod message.
- ofdump: a program that simply lists OpenFlow message types with timestamps by switch/controller pair.
Both have the same syntax:
[ofstats|ofdump] [pcap file] [controller IP] [OF port]
Without the controller IP and port arguments, they default to localhost:6633.
For example, with a pcap file named sample.pcap from a tcpdump
session sniffing for traffic from a controller at 192.168.1.5, port 6637:
ofdump:
# ofdump sample.pcap 192.168.1.5 6637
DBG: tracking NEW stream : 192.168.1.5:6637-> 192.168.1.6:47598
DBG: tracking NEW stream : 192.168.1.6:47598-> 192.168.1.5:6637
FROM 192.168.1.5:6637 TO 192.168.1.6:47598 OFP_TYPE 0 LEN 8 TIME 0.000000
FROM 192.168.1.6:47598 TO 192.168.1.5:6637 OFP_TYPE 0 LEN 8 TIME 0.026077
FROM 192.168.1.5:6637 TO 192.168.1.6:47598 OFP_TYPE 5 LEN 8 TIME 0.029839
FROM 192.168.1.6:47598 TO 192.168.1.5:6637 OFP_TYPE 6 LEN 128 TIME 0.1070415
...
FROM 192.168.1.6:47598 TO 192.168.1.5:6637 OFP_TYPE 10 LEN 60 TIME 0.2038485
--- 2 sessions: 0 0
FROM 192.168.1.5:6637 TO 192.168.1.6:47598 OFP_TYPE 13 LEN 24 TIME 0.2038523
FROM 192.168.1.6:47598 TO 192.168.1.5:6637 OFP_TYPE 10 LEN 60 TIME 0.2038573
FROM 192.168.1.5:6637 TO 192.168.1.6:47598 OFP_TYPE 13 LEN 24 TIME 0.2038614
FROM 192.168.1.6:47598 TO 192.168.1.5:6637 OFP_TYPE 10 LEN 60 TIME 0.2038663
FROM 192.168.1.5:6637 TO 192.168.1.6:47598 OFP_TYPE 13 LEN 24 TIME 0.2038704
Total OpenFlow Messages: 20015
ofstats:
# ofstats sample.pcap 192.168.1.5 6637
Reading from pcap file 1.pcap for controller 192.168.1.5 on port 6637
DBG: tracking NEW stream : 192.168.1.5:6637-> 192.168.1.6:47598
DBG: tracking NEW stream : 192.168.1.6:47598-> 192.168.1.5:6637
0.008088 secs_to_resp buf_id=333 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
0.000454 secs_to_resp buf_id=334 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 2 queued
0.000437 secs_to_resp buf_id=335 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
0.000534 secs_to_resp buf_id=336 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
0.000273 secs_to_resp buf_id=337 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 2 queued
0.000486 secs_to_resp buf_id=338 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 2 queued
0.000379 secs_to_resp buf_id=339 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
0.000275 secs_to_resp buf_id=340 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
...
0.000135 secs_to_resp buf_id=10330 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
0.000132 secs_to_resp buf_id=10331 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
0.000131 secs_to_resp buf_id=10332 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
Since the outputs are dumped to stdout, it is probably best to redirect them to a file for parsing later, like so:
# ofstats sample.pcap 192.168.1.5 6637 > outfile
3.5 Wireshark
website(wireshark): http://www.wireshark.org
docs(plugin): https://bitbucket.org/barnstorm/of-dissector
dependencies
wireshark(source):
sudo apt-get install libpcap-dev bison flex libgtk2.0-dev build-essential
plugin:
sudo apt-get install scons mercurial
installation/build
You need the source for Wireshark to build the plugin. At the time of this writing Wireshark is at v.1.10.
wget http://wiresharkdownloads.riverbed.com/wireshark/src/wireshark-1.10.0.tar.bz2
tar -xjf wireshark-1.10.0.tar.bz2
cd wireshark-1.10.0/
./configure
The above is sufficient for building the plugin. Installing Wireshark itself from source, e.g. with make; make install, can take a while, so you may choose to install the binary package instead, i.e. do:
apt-get install wireshark
If you decide to build from source, also install libwiretap1.
Next fetch and build the plugin:
hg clone https://bitbucket.org/barnstorm/of-dissector
cd of-dissector/
export WIRESHARK=${WS_ROOT}/wireshark-1.10.0/
cd src
scons install
cp openflow.so /usr/lib/wireshark/libwireshark1/plugins/
Where ${WS_ROOT} is the directory you untarred the Wireshark source to. The plugin directory may also differ depending on whether you installed Wireshark from source or not; if you did, the path will be something similar to /usr/local/lib/wireshark/plugins/1.10.0/.
run
Run Wireshark as root:
sudo wireshark
You should see openflow.so in the list of plugins if you go to Help > About Wireshark > Plugins.