Changes between Initial Version and Version 1 of Documentation/fSDN/bMininet


Timestamp:
Dec 7, 2014, 10:29:16 PM
Author:
ssugrim
     1=== Getting started with Mininet ===
     2
     3This page documents the usage and installation of Mininet, as well as its interaction with OEDL.
     4The main source is the mininet homepage located at http://mininet.org/.
     5
     6==== 0. Installation ==== #install
     7
     8While Mininet can be installed via an apt package, the version in the repository is quite old and lacks some important features, so we will instead install from source. References can be found [http://mininet.org/download/ here].
     9
     10 1. Install build prerequisite packages:
     11    {{{
     12    apt-get install build-essential git-core
     13    }}}
     14 2. Clone the Repository:
     15    {{{
     16    git clone git://github.com/mininet/mininet
     17    }}}
     18 3. According to the note located [https://github.com/mininet/mininet/blob/master/INSTALL here], the default install comes with more packages than we need, so instead we will call the install script with the ''' -nv ''' flag:
     19    {{{
     20    mininet/util/install.sh -nv
     21    }}}
     22This will install the core Mininet dependencies with only Open vSwitch support. Once installed, you can try the pingall test to ensure the installation succeeded:
     23{{{
     24root@node1-1:~# sudo mn --test pingall
     25*** Creating network
     26*** Adding controller
     27*** Adding hosts:
     28h1 h2
     29*** Adding switches:
     30s1
     31*** Adding links:
     32(h1, s1) (h2, s1)
     33*** Configuring hosts
     34h1 h2
     35*** Starting controller
     36c0
     37*** Starting 1 switches
     38s1
     39*** Waiting for switches to connect
     40s1
     41*** Ping: testing ping reachability
     42h1 -> h2
     43h2 -> h1
     44*** Results: 0% dropped (2/2 received)
     45*** Stopping 1 controllers
     46c0
     47*** Stopping 1 switches
     48s1 ..
     49*** Stopping 2 links
     50
     51*** Stopping 2 hosts
     52h1 h2
     53*** Done
     54completed in 0.976 seconds
     55}}}
     56
     57
     58==== 1. Running the network ==== #nw_setup
     59
     60As a two-node example, we image the nodes on Sandbox 8, as explained in [#imaging Section 1.1]. One node is used for the controller, and the other for the Mininet network.
     61
     62 1. ''Bring up and assign addresses to eth0 of the nodes''. Both should be in the same IP block. If done from console, the commands look like this:
     63 {{{
     64$ ssh root@node1-1 "ifconfig eth0 inet 192.168.1.1 up"
     65$ ssh root@node1-2 "ifconfig eth0 inet 192.168.1.2 up"
     66 }}} 
     67 The nodes should now be able to ping each other via eth0:
     68 {{{
     69$ ssh root@node1-1 "ping -c 1 192.168.1.2"
     70PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
     7164 bytes from 192.168.1.2: icmp_req=1 ttl=64 time=0.614 ms
     72
     73--- 192.168.1.2 ping statistics ---
     741 packets transmitted, 1 received, 0% packet loss, time 0ms
     75rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms
     76 }}}
     77
     78 2. ''Start the controller on one node''. We arbitrarily pick node1-1. From a shell on node1-1, launch Floodlight:
     79 {{{
     80# cd floodlight
     81# java -jar target/floodlight.jar
     82 }}}
     83 After you give it a few seconds, Floodlight should be listening on port 6633 on all interfaces available on the node (eth0, eth1, and lo). If you want, you can start up `tcpdump` or something similar in a separate terminal on node1-1 to begin capturing control messages:
     84 {{{
     85# tcpdump -i lo port 6633
     86 }}}
     87 Alternatively, you can start `tcpdump` to write to a .pcap file for later analysis with `wireshark` and the !OpenFlow plugin:
     88 {{{
     89# tcpdump -w outfile.pcap -i lo port 6633
     90 }}}
     91 3. ''Launch Mininet''. From another shell on node1-2:
     92 {{{
     93# mn --topo=single,2 --controller=remote,ip=192.168.1.1
     94 }}}
     95 This will give you a virtual network of two hosts and one switch pointed to the running Floodlight instance on node1-1. Once at the prompt, try pinging one host from the other:
     96 {{{
     97mininet> h1 ping h2
     98PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
     9964 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=8.19 ms
     10064 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=0.164 ms
     10164 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.025 ms
     10264 bytes from 10.0.0.2: icmp_req=4 ttl=64 time=0.024 ms
     103^C
     104--- 10.0.0.2 ping statistics ---
     1054 packets transmitted, 4 received, 0% packet loss, time 2999ms
     106rtt min/avg/max/mdev = 0.024/2.101/8.193/3.517 ms
     107 }}}
     108Notice how the first ping takes much longer. This is due to the flow installation process triggered by the first ping (specifically, by the ARPs sent by the hosts) when the switch suffers a flow table miss. At the same time, you should see (lots of) packets being captured by `tcpdump` in node1-1's terminal:
     109 {{{
     110root@node1-1:~/floodlight# tcpdump -i eth0 port 6633
     111tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
     112listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
     11320:18:30.188181 IP 192.168.1.2.41631 > 192.168.1.1.6633: Flags [S], seq 3242563912, win 14600, options [mss 1460,sackOK,TS val 699854 ecr 0,nop,wscale 4], length 0
     11420:18:30.188321 IP 192.168.1.1.6633 > 192.168.1.2.41631: Flags [S.], seq 2665849071, ack 3242563913, win 14480, options [mss 1460,sackOK,TS val 700809 ecr 699854,nop,wscale 4], length 0
     11520:18:30.188466 IP 192.168.1.2.41631 > 192.168.1.1.6633: Flags [.], ack 1, win 913, options [nop,nop,TS val 699854 ecr 700809], length 0
     11620:18:30.188618 IP 192.168.1.2.41631 > 192.168.1.1.6633: Flags [F.], seq 1, ack 1, win 913, options [nop,nop,TS val 699854 ecr 700809], length 0
     11720:18:30.190310 IP 192.168.1.1.6633 > 192.168.1.2.41631: Flags [.], ack 2, win 905, options [nop,nop,TS val 700810 ecr 699854], length 0
     11820:18:30.224204 IP 192.168.1.1.6633 > 192.168.1.2.41631: Flags [P.], seq 1:9, ack 2, win 905, options [nop,nop,TS val 700818 ecr 699854], length 8
     11920:18:30.224426 IP 192.168.1.2.41631 > 192.168.1.1.6633: Flags [R], seq 3242563914, win 0, length 0
     12020:18:30.402564 IP 192.168.1.2.41632 > 192.168.1.1.6633: Flags [S], seq 1611313095, win 14600, options [mss 1460,sackOK,TS val 699908 ecr 0,nop,wscale 4], length 0
     12120:18:30.402585 IP 192.168.1.1.6633 > 192.168.1.2.41632: Flags [S.], seq 367168075, ack 1611313096, win 14480, options [mss 1460,sackOK,TS val 700863 ecr 699908,nop,wscale 4], length 0
     122...
     123 }}}
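The first-ping delay can be illustrated with a toy model of reactive flow installation. This is purely illustrative Python; the constants are made up, not measurements of Floodlight:

```python
# Toy model of reactive forwarding: the first packet of a flow misses the
# switch's flow table and pays a controller round trip (packet-in, then
# flow-mod); later packets match the installed entry.

CONTROLLER_RTT_MS = 8.0   # assumed cost of the packet-in/flow-mod round trip
FASTPATH_MS = 0.03        # assumed cost of a flow-table hit

flow_table = set()

def forward(flow):
    """Return the latency for one packet of `flow`, installing it on a miss."""
    if flow in flow_table:
        return FASTPATH_MS
    flow_table.add(flow)  # the controller pushes a flow entry on a miss
    return FASTPATH_MS + CONTROLLER_RTT_MS

latencies = [forward(("h1", "h2")) for _ in range(4)]
```

The pattern matches the ping output above: one slow first packet, then fast replies once the entry is installed.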
     124
     125=== 1.2.1 Using Wireshark ===
     126In the above example, `tcpdump` can be replaced by `wireshark`. Wireshark is "friendlier" in that it has a GUI and an !OpenFlow dissector plugin is available for it. In order to use Wireshark, you must enable X11 forwarding from your workstation to the node, with the -X or -Y flag for `ssh` e.g.:
     127{{{
     128ssh -X -l root node1-1
     129}}}
     130
     131=== 1.2.2 Using OpenVswitch directly ===
     132Mininet's datapaths are backed by OVS. Therefore, if you have a Mininet install, you get OVS for "free". You can use OVS directly for your data plane.
     133
     134----
     135= II More complex examples = #II
     136It is possible to run multiple controller instances, or several different logical components, together in the same network.
     137This section shows two examples of more complex SDN setups: multiple controller instances, and network slicing with !FlowVisor, a network hypervisor.
     138
     139=== Sections ===
     140 [#multi 2.1 Multiple Controllers ][[BR]]
     141 [#fv 2.2 Network virtualization/slicing ][[BR]]
     142== 2.1 Multiple Controllers == #multi
     143You may have multiple controllers in the same logical space of the control plane for various reasons - special applications, fail-over, distributed control planes, etc.
     144 
     145 * 2.1.1 On multiple hosts
     146 * 2.1.2 On the same host
     147
     148=== 2.1.1 On multiple hosts ===
     149If each controller is running on its own host (machine, VM, etc.), there is little to change; if you have hosts A, B, and C, with a Floodlight instance running on each, switches can be pointed to targets A:6633, B:6633, C:6633, or any combination thereof (a switch can be pointed to multiple controllers). 
     150
     151=== 2.1.2 On the same host === #s2_1_2
     152==== The Floodlight configuration file ====
     153Multiple instances of Floodlight may be run on the same host, as long as each controller listens on a separate set of sockets. In this case, all controllers would be on the same IP address(es), so you must change the ports they are listening on. These ports include the !OpenFlow control port (TCP 6633), REST API (TCP 8080), and debug (TCP 6655).
     154
     155In Floodlight, these values can be changed by modifying the file floodlightdefault.properties, located in src/main/resources/ of the Floodlight sources. It currently looks like this:
     156{{{
     157floodlight.modules=\
     158net.floodlightcontroller.jython.JythonDebugInterface,\
     159net.floodlightcontroller.counter.CounterStore,\
     160net.floodlightcontroller.storage.memory.MemoryStorageSource,\
     161net.floodlightcontroller.core.internal.FloodlightProvider,\
     162net.floodlightcontroller.threadpool.ThreadPool,\
     163net.floodlightcontroller.devicemanager.internal.DeviceManagerImpl,\
     164net.floodlightcontroller.devicemanager.internal.DefaultEntityClassifier,\
     165net.floodlightcontroller.staticflowentry.StaticFlowEntryPusher,\
     166net.floodlightcontroller.firewall.Firewall,\
     167net.floodlightcontroller.forwarding.Forwarding,\
     168net.floodlightcontroller.linkdiscovery.internal.LinkDiscoveryManager,\
     169net.floodlightcontroller.topology.TopologyManager,\
     170net.floodlightcontroller.flowcache.FlowReconcileManager,\
     171net.floodlightcontroller.debugcounter.DebugCounter,\
     172net.floodlightcontroller.debugevent.DebugEvent,\
     173net.floodlightcontroller.perfmon.PktInProcessingTime,\
     174net.floodlightcontroller.ui.web.StaticWebRoutable,\
     175net.floodlightcontroller.loadbalancer.LoadBalancer,\
     176org.sdnplatform.sync.internal.SyncManager,\
     177org.sdnplatform.sync.internal.SyncTorture,\
     178net.floodlightcontroller.devicemanager.internal.DefaultEntityClassifier
     179org.sdnplatform.sync.internal.SyncManager.authScheme=CHALLENGE_RESPONSE
     180org.sdnplatform.sync.internal.SyncManager.keyStorePath=/etc/floodlight/auth_credentials.jceks
     181org.sdnplatform.sync.internal.SyncManager.dbPath=/var/lib/floodlight/
     182}}}
     183
     184Several entries can be added to this list to tweak TCP port values. Unfortunately, these entries may change fairly frequently due to active development.
     185 * net.floodlightcontroller.restserver.!RestApiServer.port = 8080
     186 * net.floodlightcontroller.core.internal.!FloodlightProvider.openflowport = 6633
     187 * net.floodlightcontroller.jython.!JythonDebugInterface.port = 6655
     188
     189Each entry should be on its own line, with no blank lines in between. For example, to change the port on which Floodlight listens for switches from the default of 6633 to 6634, append:
     190{{{
     191net.floodlightcontroller.core.internal.FloodlightProvider.openflowport = 6634
     192}}} 
     193to the .properties file. Then, point Floodlight to the configuration file with the `-cf` flag:
     194{{{
     195java -jar target/floodlight.jar -cf src/main/resources/floodlightdefault.properties
     196}}}
     197The file specified after -cf will be read in, and the values in it used to configure the controller instance. You should be able to confirm the change:
     198{{{
     199# netstat -nlp | grep 6634
     200...
     201tcp6       0      0 :::6634                 :::*                    LISTEN      2029/java       
     202...
     203}}}
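The override works because the file is read as simple key = value pairs. Below is a minimal sketch of that parsing, assuming Java-style .properties semantics in which the last assignment of a key wins; the property key is the real one from above, but the two-line fragment is abbreviated for illustration:

```python
def parse_properties(text):
    """Minimal Java-style .properties reader: fold '\' line continuations,
    split on the first '=', and let later assignments of a key win."""
    props = {}
    for line in text.replace("\\\n", "").splitlines():
        if "=" in line and not line.lstrip().startswith("#"):
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

# Hypothetical fragment: a shortened module list plus an appended override.
sample = (
    "floodlight.modules=\\\n"
    "net.floodlightcontroller.core.internal.FloodlightProvider,\\\n"
    "net.floodlightcontroller.threadpool.ThreadPool\n"
    "net.floodlightcontroller.core.internal.FloodlightProvider.openflowport = 6633\n"
    "net.floodlightcontroller.core.internal.FloodlightProvider.openflowport = 6634\n"
)
props = parse_properties(sample)
```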
     204
     205==== Launching multiple controllers ====
     206Each instance of the controller run on the same host can be pointed to its own .properties file with the `-cf` flag, each with different port values. Begin by making as many copies of the default .properties file as you will have controllers. Continuing with an example similar to the earlier one, you can have one host A and three Floodlight instances 1, 2, and 3, configured as below:
     207
     208 || || 1 || 2 || 3 || 
     209 || !FloodlightProvider.openflowport || 6633  || 6634  || 6635  ||
     210 || !RestApiServer.port || 8080 || 8081 || 8082 ||
     211 || !JythonDebugInterface.port || 6655 || 6656 || 6657 ||
     212
     213No ports should be shared by the three instances, or else they will probably throw errors at startup and exit shortly after.
     214With a .properties file for each instance under resources/ (named 1.properties, 2.properties, and 3.properties for this example), you can launch the controllers in a loop, for example:
     215{{{
     216for i in `seq 1 3`; do
     217   java -jar target/floodlight.jar -cf src/main/resources/$i.properties 1>/dev/null 2>&1 &
     218done
     219}}}
     220This should launch three backgrounded instances of Floodlight.
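The port table above can also be generated programmatically. A sketch follows; the property keys are the ones listed in Section 2.1.2, while shifting each default port by the instance number is just this example's convention:

```python
# Build a port map per controller instance from the table above.
BASE_PORTS = {
    "net.floodlightcontroller.core.internal.FloodlightProvider.openflowport": 6633,
    "net.floodlightcontroller.restserver.RestApiServer.port": 8080,
    "net.floodlightcontroller.jython.JythonDebugInterface.port": 6655,
}

def instance_ports(i):
    """Ports for instance i (1-based): every default shifted by i - 1."""
    return {key: base + (i - 1) for key, base in BASE_PORTS.items()}

configs = {i: instance_ports(i) for i in (1, 2, 3)}

# Sanity check: no port may be shared between instances.
all_ports = [port for cfg in configs.values() for port in cfg.values()]
assert len(all_ports) == len(set(all_ports))

# Each map could then be written out as the lines of N.properties.
lines_for_2 = ["%s = %d" % kv for kv in sorted(configs[2].items())]
```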
     221
     222== 2.2 Network virtualization/slicing == #fv
     223A more typical case you might encounter is a network that is sliced, or virtualized.
     224
     225 * 2.2.1 A brief intro to network virtualization
     226 * 2.2.2 Virtualization with multiple hosts
     227 * 2.2.3 On the same host
     228
     229=== 2.2.1 A brief intro to network virtualization ===
     230A virtualized network is organized as below:
     231{{{
     232[controller 1] [controller 2] [controller 3]
     233             \       |       /
     234              \      |      /
     235            [network hypervisor]-[policies]
     236                     |
     237                  [network]
     238}}}
     239A network hypervisor like [http://onlab.us/flowvisor.html FlowVisor] sits between the control and data planes, intercepting and re-writing the contents of the !OpenFlow control channel to one or more controllers running independently of one another. Ultimately, the network hypervisor provides each controller with the illusion that it is the only controller in the network. It accomplishes this by
     240
     241 1. Rewriting the topology information conveyed by !OpenFlow (in the form of PORT_STATs and !PacketIns triggered by LLDP messages) before it reaches each controller, allowing it to only work on a subset, or slice, of the network, and
     242 2. Mapping the !PacketIns/!PacketOuts to and from each controller to the proper sets of switches and switch ports.   
     243
     244How the re-writing occurs depends on a set of admin-defined policies.
     245
     246
     247
     248=== 2.2.2 Virtualization with multiple hosts ===
     249We begin by introducing a simple example of a virtualized topology:
     250{{{
     251[Floodlight 1] [Floodlight 2]
     252           \    /
     253         [FlowVisor]
     254              |
     255          [Mininet]   
     256}}}
     257Each component above will be run on a separate node. Since we need more than two nodes, you may want to reserve either Sandbox 4 or 9. The components can also be run on the same node, with the caveats discussed in the next section, 2.2.3.
     258
     259Here, Mininet will be used to emulate a three-switch, three-host data plane:
     260{{{
     261h1   h2   h3
     262 |    |    |
     263s1---s2---s3
     264}}}
     265This data plane will be sliced so that one Floodlight instance will control switches s1 and s2, and the other, s3. 
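This slicing can be sketched as a toy policy table. It is illustrative only: real !FlowVisor policies are flow-space rules rather than a per-switch map, and the controller names below are hypothetical stand-ins for the two Floodlight instances:

```python
# Toy FlowVisor-style policy: each switch is owned by exactly one slice,
# so packet-ins are delivered only to the owning controller, and each
# controller's view of the topology is restricted to its own switches.
policy = {"s1": "fl1", "s2": "fl1", "s3": "fl2"}

def route_packet_in(switch):
    """Controller that receives packet-ins from `switch`."""
    return policy[switch]

def visible_switches(controller):
    """The slice of the data plane this controller is allowed to see."""
    return sorted(s for s, owner in policy.items() if owner == controller)
```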
     266
     267
     268
     269=== 2.2.3 On the same host ===
     270As with the case of multiple controllers on the same VM/host, you must make sure that !FlowVisor and the controllers do not listen on the same sets of ports. For the controllers, this can be avoided as described in [#s2_1_2 Section 2.1.2]. !FlowVisor and Floodlight conflict on ports 6633 and 8080.
     271
     272----
     273= III Installation = #III
     274The following are the installation steps and basic usage for the software found on the image. For more information, refer to their respective pages; Floodlight and Mininet in particular have very thorough docs.
     275
     276Quick links: [[BR]]
     277 [#floodlight 3.1 Floodlight][[BR]]
     278 [#mn 3.2 Mininet][[BR]]
     279 [#cbench 3.3 CBench] [[BR]]
     280 [#loft 3.4 liboftrace] [[BR]]
     281 [#ws 3.5 Wireshark] [[BR]]
     282 [#fvisor 3.6 FlowVisor] [[BR]]
     283
     284Note: the following examples are for Ubuntu, since that is what is used at WINLAB. A quick search will often bring up hints/steps for CentOS/RHEL and OS X, but for the most part, you will have to experiment a bit.
     285 
     286== 3.1 Floodlight == #floodlight
     287docs: http://docs.projectfloodlight.org/display/floodlightcontroller/Floodlight+Documentation [[BR]]
     288
     289For the most part, the following is a repetition of some of the material there. Truth be told, if you plan to modify or develop on Floodlight, it is better to install it on a local machine where you can use Eclipse (either that, or you can try X11 forwarding, but that doesn't always go well).
     290=== dependencies ===
     291{{{
     292sudo apt-get install git-core build-essential default-jdk ant python-dev eclipse
     293}}}
     294=== installation ===
     295The following fetches and builds the latest stable release:
     296{{{
     297git clone git://github.com/floodlight/floodlight.git
     298cd floodlight
     299git checkout fl-last-passed-build
     300ant
     301}}}
     302To import as a project on Eclipse, run the following while in the same directory:
     303{{{
     304ant eclipse
     305}}}
     306=== run ===
     307Assuming everything worked out:
     308{{{
     309java -jar target/floodlight.jar
     310}}}
     311from the floodlight/ directory launches Floodlight. It will output a bunch of messages while it searches for, loads, and initializes modules. You can refer to the output attached below for what it should look like - there may be warnings, but they should be harmless. 
     312
     313This command also launches in the foreground, so you can either launch it in a terminal multiplexer like `screen` or `tmux`, or with a `1>logfile 2>&1 &` tacked to the end. The former is probably recommended.   
     314=== development ===
     315Tutorials and other information can be found here: http://docs.projectfloodlight.org/display/floodlightcontroller/For+Developers
     316
     317== 3.2 Mininet == #mn
     318website: http://mininet.org/ [[BR]]
     319It is highly recommended to run through the docs, especially the following:
     320 * FAQs: https://github.com/mininet/mininet/wiki/FAQ
     321 * Getting Started: http://mininet.org/download/
     322 * Sample Workflow: http://mininet.org/sample-workflow/
     323 * Walkthrough: http://mininet.org/walkthrough/   
     324
     325If you post to the list before reading the FAQ, you will likely just be asked whether you have checked it.
     326 
     327=== installation/build ===
     328The [https://github.com/mininet/mininet/downloads/ VM] is the recommended way to run Mininet on your machine. [[BR]]
     329The following is for a native install (as on the node image).
     330
     331The method differs for different versions of Ubuntu. The following is for 12.04. For others, refer to [http://www.projectfloodlight.org/getting-started/ this] page.
     332The following also takes care of the dependencies.
     333{{{
     334sudo apt-get install mininet/precise-backports
     335}}}
     336Then disable `ovs-controller`:
     337{{{
     338sudo service openvswitch-controller stop
     339sudo update-rc.d openvswitch-controller disable
     340}}}
     341You may also need to start Open vSwitch:
     342{{{
     343sudo service openvswitch-switch start
     344}}}
     345You can verify that it works with the following:
     346{{{
     347sudo mn --test pingall
     348}}}
     349This sets up a 2-host, 1-switch topology and pings between the hosts. The output looks similar to this:
     350{{{
     351*** Creating network
     352*** Adding controller
     353*** Adding hosts:
     354h1 h2
     355*** Adding switches:
     356s1
     357*** Adding links:
     358(h1, s1) (h2, s1)
     359*** Configuring hosts
     360h1 h2
     361*** Starting controller
     362*** Starting 1 switches
     363s1
     364*** Ping: testing ping reachability
     365h1 -> h2
     366h2 -> h1
     367*** Results: 0% dropped (0/2 lost)
     368*** Stopping 2 hosts
     369h1 h2
     370*** Stopping 1 switches
     371s1 ...
     372*** Stopping 1 controllers
     373c0
     374*** Done
     375completed in 0.460 seconds
     376}}}
     377=== run ===
     378There are many flags and options associated with launching Mininet. `mn --help` will display them. [[BR]]
     379For example, to start the same topology as the pingall test, but with a controller running separately from Mininet:
     380{{{
     381# mn --topo=single,2 --controller=remote,ip=10.18.1.1 --mac
     382*** Creating network
     383*** Adding controller
     384*** Adding hosts:
     385h1 h2
     386*** Adding switches:
     387s1
     388*** Adding links:
     389(h1, s1) (h2, s1)
     390*** Configuring hosts
     391h1 h2
     392*** Starting controller
     393*** Starting 1 switches
     394s1
     395*** Starting CLI:
     396mininet>
     397}}}
     398 * --topo=single,2 : one switch with two hosts
     399 * --controller=remote,ip=10.18.1.1 : controller at 10.18.1.1 
     400 * --mac : non-random MAC addresses
     401Some useful ones are:
     402 * controller external to Mininet, at IP addr and port p:
     403{{{
     404--controller=remote,ip=[addr],port=[p]
     405}}}
     406 * non-random host MAC addresses (starting at 00:00:00:00:00:01 for h1)
     407{{{
     408--mac
     409}}}
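The `--mac` numbering scheme is simple enough to sketch: host hN is assigned its index written out as a 48-bit, colon-separated hex address.

```python
def default_mac(host_index):
    """MAC that --mac assigns to host h<host_index>: the index written
    as a 48-bit, colon-separated hex address (h1 -> 00:00:00:00:00:01)."""
    if not 0 < host_index < 2 ** 48:
        raise ValueError("host index out of range for a 48-bit MAC")
    digits = "%012x" % host_index
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2))
```

Predictable MACs make it much easier to read controller logs and `tcpdump` captures than the random addresses assigned by default.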
     410=== usage ===
     411You can list the available commands by typing `?` at the prompt. `exit` quits Mininet. [[BR]]
     412Some basic examples:
     413 * display topology:
     414{{{
     415mininet> net
     416c0
     417s1 lo:  s1-eth1:h1-eth0 s1-eth2:h2-eth0
     418h1 h1-eth0:s1-eth1
     419h2 h2-eth0:s1-eth2
     420}}}
     421 * display host network info:
     422{{{
     423mininet> h1 ifconfig
     424h1-eth0   Link encap:Ethernet  HWaddr 00:00:00:00:00:01 
     425          inet addr:10.0.0.1  Bcast:10.255.255.255  Mask:255.0.0.0
     426          inet6 addr: fe80::200:ff:fe00:1/64 Scope:Link
     427          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
     428          RX packets:135 errors:0 dropped:124 overruns:0 frame:0
     429          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
     430          collisions:0 txqueuelen:1000
     431          RX bytes:8906 (8.9 KB)  TX bytes:558 (558.0 B)
     432
     433lo        Link encap:Local Loopback 
     434          inet addr:127.0.0.1  Mask:255.0.0.0
     435          inet6 addr: ::1/128 Scope:Host
     436          UP LOOPBACK RUNNING  MTU:16436  Metric:1
     437          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
     438          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
     439          collisions:0 txqueuelen:0
     440          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
     441}}}
     442 * ping host 1 from host 2
     443{{{
     444mininet> h2 ping h1
     445PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
     44664 bytes from 10.0.0.1: icmp_req=1 ttl=64 time=10.0 ms
     447^C
     448--- 10.0.0.1 ping statistics ---
     4491 packets transmitted, 1 received, 0% packet loss, time 0ms
     450rtt min/avg/max/mdev = 10.026/10.026/10.026/0.000 ms
     451}}}
     452=== scripting ===
     453Mininet has a Python API, whose docs can be found online: http://mininet.org/api/ [[BR]]
     454Examples can also be found here: https://github.com/mininet/mininet/tree/master/examples
     455
     456Once you have written a script, you can run it with Python:
     457{{{
     458python mn_script.py
     459}}}
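For reference, a minimal script along those lines, using the documented `Mininet` and `SingleSwitchTopo` classes. It is a sketch, and it needs a Mininet install plus root privileges, so run it on the node itself:

```python
#!/usr/bin/env python
"""Minimal Mininet script: the same 1-switch, 2-host topology as the
pingall test, built with the Python API."""

def run():
    # Imports live here so the file only needs Mininet when actually run.
    from mininet.net import Mininet
    from mininet.topo import SingleSwitchTopo
    from mininet.log import setLogLevel

    setLogLevel('info')
    net = Mininet(topo=SingleSwitchTopo(k=2))  # one switch, two hosts
    net.start()
    net.pingAll()   # same reachability check as `mn --test pingall`
    net.stop()

if __name__ == '__main__':
    run()
```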
     460
     461== 3.3 Cbench == #cbench
     462website: http://docs.projectfloodlight.org/display/floodlightcontroller/Cbench+(New)
     463
     464=== dependencies ===
     465{{{
     466sudo apt-get install autoconf automake libtool libsnmp-dev libpcap-dev
     467}}}
     468=== installation/build ===
     469{{{
     470git clone git://gitosis.stanford.edu/openflow.git
     471cd openflow; git checkout -b mybranch origin/release/1.0.0
     472git clone git://gitosis.stanford.edu/oflops.git
     473(cd oflops && git submodule init && git submodule update)
     474wget http://hyperrealm.com/libconfig/libconfig-1.4.9.tar.gz
     475tar -xvzf libconfig-1.4.9.tar.gz
     476cd libconfig-1.4.9
     477./configure
     478sudo make && sudo make install
     479cd ../oflops/
     480sh ./boot.sh ; ./configure --with-openflow-src-dir=${OF_PATH}/openflow/
     481make install
     482}}}
     483where OF_PATH is the directory you cloned the !OpenFlow repository into.
     484
     485=== run ===
     486Run from the cbench directory under oflops:
     487{{{
     488cd cbench
     489cbench -c localhost -p 6633 -m 10000 -l 10 -s 16 -M 1000 -t
     490}}}
     491 * -c localhost : controller at loopback
     492 * -p 6633 : controller listening at port 6633
     493 * -m 10000 : 10000 ms (10 sec) per test
     494 * -l 10 : 10 loops(trials) per test
     495 * -s 16 : 16 emulated switches
     496 * -M 1000 : 1000 unique MAC addresses(hosts) per switch
     497 * -t : throughput testing
     498For the complete list, use the `-h` flag.
     499
     500The output for the above command will look similar to this:
     501{{{
     502cbench: controller benchmarking tool
     503   running in mode 'throughput'
     504   connecting to controller at localhost:6633
     505   faking 16 switches offset 1 :: 3 tests each; 10000 ms per test
     506   with 10 unique source MACs per switch
     507   learning destination mac addresses before the test
     508   starting test with 0 ms delay after features_reply
     509   ignoring first 1 "warmup" and last 0 "cooldown" loops
     510   connection delay of 0ms per 1 switch(es)
     511   debugging info is off
     51216:53:14.384 16  switches: flows/sec:  18  18  18  18  18  18  18  18  18  18  18  18  18  18  18  18   total = 0.028796 per ms
     51316:53:24.485 16  switches: flows/sec:  20  20  20  20  20  20  20  20  20  20  20  20  20  20  20  20   total = 0.031999 per ms
     51416:53:34.590 16  switches: flows/sec:  24  24  24  24  24  24  24  24  24  24  24  24  24  24  24  24   total = 0.038380 per ms
     515RESULT: 16 switches 2 tests min/max/avg/stdev = 32.00/38.38/35.19/3.19 responses/s
     516}}}
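The RESULT line aggregates the per-loop totals after dropping the warmup loop. Under that assumption (and assuming a population standard deviation, which matches the figures above), the summary numbers can be reproduced from the per-loop lines:

```python
import math

# Per-loop totals from the run above, in responses per millisecond.
totals_per_ms = [0.028796, 0.031999, 0.038380]
WARMUP = 1  # the run above ignored the first loop as warmup

per_sec = [t * 1000 for t in totals_per_ms[WARMUP:]]
lo, hi = min(per_sec), max(per_sec)
avg = sum(per_sec) / len(per_sec)
stdev = math.sqrt(sum((x - avg) ** 2 for x in per_sec) / len(per_sec))
# To two decimals, lo/hi/avg/stdev match the RESULT line:
# 32.00/38.38/35.19/3.19 responses/s
```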
     517
     518== 3.4 liboftrace (ofdump/ofstats) == #loft
     519docs: [[BR]]
     520 https://github.com/capveg/oftrace/blob/master/README [[BR]]
     521 http://www.openflow.org/wk/index.php/Liboftrace
     522
     523=== dependencies ===
     524{{{
     525sudo apt-get install libpcap-dev swig libssl-dev
     526}}}
     527=== installation/build ===
     528{{{
     529git clone git://github.com/capveg/oftrace.git
     530cd oftrace
     531./boot.sh
     532./configure --with-openflow-src-dir=${OF_PATH}/openflow/
     533make && make install
     534}}}
     535=== run ===
     536There are two tools pre-packaged with liboftrace (as per a [https://mailman.stanford.edu/pipermail/openflow-discuss/2009-April/000133.html mailing-list entry]):
     537 1. ofstats: a program which calculates the controller processing delay, i.e., the difference in time between a packet_in message and the corresponding packet_out or flow_mod message.
     538 1. ofdump: a program that simply lists !OpenFlow message types with timestamps by switch/controller pair.
     539Both have the same syntax:
     540{{{
     541[ofstats|ofdump] [controller IP] [OF port]
     542}}}
     543Without the arguments it defaults to localhost:6633.
     544
     545For example, with a pcap file named sample.pcap from a `tcpdump` session sniffing for traffic from a controller at 192.168.1.5, port 6637: [[BR]]
     546ofdump:
     547{{{
     548# ofdump sample.pcap 192.168.1.5 6637
     549DBG: tracking NEW stream : 192.168.1.5:6637-> 192.168.1.6:47598
     550DBG: tracking NEW stream : 192.168.1.6:47598-> 192.168.1.5:6637
     551FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 0      LEN 8   TIME 0.000000
     552FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 0      LEN 8   TIME 0.026077
     553FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 5      LEN 8   TIME 0.029839
     554FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 6      LEN 128 TIME 0.1070415
     555
     556...
     557
     558FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 10     LEN 60  TIME 0.2038485
     559 --- 2 sessions:  0 0
     560FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 13     LEN 24  TIME 0.2038523
     561FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 10     LEN 60  TIME 0.2038573
     562FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 13     LEN 24  TIME 0.2038614
     563FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 10     LEN 60  TIME 0.2038663
     564FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 13     LEN 24  TIME 0.2038704
     565Total OpenFlow Messages: 20015
     566}}}
     567ofstats:
     568{{{
     569# ofstats sample.pcap 192.168.1.5 6637 
     570Reading from pcap file 1.pcap for controller 192.168.1.5 on port 6637
     571DBG: tracking NEW stream : 192.168.1.5:6637-> 192.168.1.6:47598
     572DBG: tracking NEW stream : 192.168.1.6:47598-> 192.168.1.5:6637
     5730.008088        secs_to_resp buf_id=333 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
     5740.000454        secs_to_resp buf_id=334 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 2 queued
     5750.000437        secs_to_resp buf_id=335 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
     5760.000534        secs_to_resp buf_id=336 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
     5770.000273        secs_to_resp buf_id=337 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 2 queued
     5780.000486        secs_to_resp buf_id=338 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 2 queued
     5790.000379        secs_to_resp buf_id=339 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
     5800.000275        secs_to_resp buf_id=340 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
     581...
     5820.000135        secs_to_resp buf_id=10330 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
     5830.000132        secs_to_resp buf_id=10331 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
     5840.000131        secs_to_resp buf_id=10332 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
     585}}}
     586
     587Since the output is dumped to stdout, it is probably best to redirect it to a file for parsing later, like so:
     588{{{
     589# ofstats sample.pcap 192.168.1.5 6637 > outfile
     590}}}
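A sketch of parsing such a file, keyed on the `secs_to_resp` column of the sample output above (the `...` in the sample lines below stands in for the flow description fields, which the parser ignores):

```python
def response_times(lines):
    """Extract secs_to_resp values (controller processing delay, in
    seconds) from ofstats output lines."""
    times = []
    for line in lines:
        fields = line.split()
        if len(fields) > 1 and fields[1] == "secs_to_resp":
            times.append(float(fields[0]))
    return times

# Abbreviated sample lines in the format shown above.
sample = [
    "DBG: tracking NEW stream : 192.168.1.5:6637-> 192.168.1.6:47598",
    "0.008088    secs_to_resp buf_id=333 in flow ... - packet_out - 0 queued",
    "0.000454    secs_to_resp buf_id=334 in flow ... - packet_out - 2 queued",
]
times = response_times(sample)
avg_delay = sum(times) / len(times)
```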
     591
     592== 3.5 Wireshark == #ws
     593website(wireshark): http://www.wireshark.org [[BR]]
     594docs(plugin): https://bitbucket.org/barnstorm/of-dissector
     595
     596=== dependencies ===
     597wireshark(source):
     598{{{
     599sudo apt-get install libpcap-dev bison flex libgtk2.0-dev build-essential
     600}}}
     601plugin:
     602{{{
     603sudo apt-get install scons mercurial
     604}}}
     605
     606=== installation/build ===
     607You need the Wireshark source to build the plugin. At the time of this writing, Wireshark is at v1.10. 
     608{{{
     609wget http://wiresharkdownloads.riverbed.com/wireshark/src/wireshark-1.10.0.tar.bz2
     610tar -xjf wireshark-1.10.0.tar.bz2
     611cd wireshark-1.10.0/
     612./configure
     613}}}
     614
     615The above is sufficient for building the plugin. Installing Wireshark from source (e.g. with `make; make install`) can take a while, so you may choose to install the binary instead:
     616{{{
     617apt-get install wireshark
     618}}}
     619If you decide to build from source, also install `libwiretap1`.
     620
     621Next fetch and build the plugin:
     622{{{
     623hg clone https://bitbucket.org/barnstorm/of-dissector
     624cd of-dissector/
     625export WIRESHARK=${WS_ROOT}/wireshark-1.10.0/
     626cd src
     627scons install
     628cp openflow.so /usr/lib/wireshark/libwireshark1/plugins/
     629}}}
     630where ${WS_ROOT} is the directory you untarred the Wireshark source into. The plugin directory may also differ depending on whether you installed Wireshark from source - if you did, the path will be something similar to /usr/local/lib/wireshark/plugins/1.10.0/
     631
     632=== run ===
     633Run Wireshark as root:
     634{{{
     635sudo wireshark
     636}}}
     637
     638You should see openflow.so in the list of plugins if you go to Help > About Wireshark > plugins.
     639
     640=== 3.6 FlowVisor === #fvisor
     641website: http://onlab.us/flowvisor.html