Changes between Version 15 and Version 16 of Internal/OpenFlow/ofTopology


Timestamp: Jul 17, 2012, 5:48:02 AM
Author: akoshibe
= Building Network Topologies =
This page aims to be a compilation of notes on things to consider, and techniques to use, when setting up topologies for network experiments. We consider the case where nodes share a broadcast domain at L2 and can hear all traffic.

The first section describes methods for simulating point-to-point links between two nodes sharing a single switch. The second section describes possible ways to describe a desired topology to an automated system so that it can build the topology for a user.

== Contents ==
 I. [#p2p Simulating Point-to-Point Links] [[BR]]
 II. [#tdescr Topology Description Methods]

== Prerequisites ==
We assume that you have a setup similar to [http://www.orbit-lab.org/wiki/Documentation/OpenFlow SB9]: several nodes connected to shared switch(es). For NetFPGA-based topology setups, you need nodes with NetFPGA drivers and bitfiles. For software-based switching, we use !OpenvSwitch or Linux bridging. For the !OpenFlow methods, you also need an !OpenFlow controller that allows you to push flows to your software-defined switch. You should have access to the switch that the nodes share as well, since you need to configure VLANs and the like. The following links describe setup and use of these components (internal links):

 * [http://www.orbit-lab.org/wiki/Documentation/OpenFlow/vSwitchImage OpenVswitch] - A software-defined virtual switch with !OpenFlow support; no special hardware required.
 * As for the !OpenFlow controller, there is a [http://www.orbit-lab.org/wiki/Internal/OpenFlow/Controllers collection] to choose from.

The nodes used for the topologies on this page are NetFPGA cubes running Ubuntu 10.10 (kernel: 2.6.35-30-generic). Command syntax may vary with your distribution.

= I. Simulating Point-to-Point Links = #p2p
This section introduces topology setup using the simplest case of a single link between two nodes.

== 1.1 Overview ==
In general, a topology describes the restrictions on traffic flow between multiple nodes. We build a topology by first partitioning the shared switch so that the nodes are isolated from each other, and then introducing relays that can move traffic between these partitions in a controlled manner. The way the traffic is relayed produces the topology. Our base topology, in which all nodes can reach each other, is a fully connected graph:
{{{
(A)   A - B
       \ /
        C
}}}

We build the following topology from the one shown above to demonstrate our methods:
{{{
(B)
     A-[r]-B
}}}
Here A and B are nodes 'trapped' in their partitions, and [r] (C in Fig. A) is a relay node that straddles the partitioning on the shared switch. We call A and B ''end nodes'' and [r] a ''network node'' joining two links. From this logic it follows that the partition is the ''link'' (despite it actually being a logical setup on the shared switch rather than a wire). Given that the partitions have identifiers, such as IP address blocks, nodes on the same link must share an identifier, and a relay must know the identifiers of all the partitions it connects.

We cover a handful of ways to realize the setup in Fig. B.
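As a concrete illustration of partition identifiers, here is one hypothetical IP addressing plan for Fig. B (interface names and addresses are our own examples, not prescribed elsewhere on this page):

```shell
# Hypothetical addressing for Fig. B: one IP block per partition (link).

# On end node A (link 1):
sudo ip addr add 192.168.1.2/24 dev eth0

# On end node B (link 2):
sudo ip addr add 192.168.2.2/24 dev eth0

# On relay [r], which straddles both partitions:
sudo ip addr add 192.168.1.1/24 dev eth0
sudo ip addr add 192.168.2.1/24 dev eth1
```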
== Contents ==
 1.2 [#pre1 Some Considerations] [[BR]]
 1.3 [#basic Non-OpenFlow Methods] [[BR]]
  1.3.1 [#KernIP Kernel IP routing] (Layer 3) [[BR]]
  1.3.2 [#brctl Linux Bridge] (Layer 2) [[BR]]
  1.3.3 [#filt Packet filters] (pending) (Layers 2-4)
 1.4 [#of OpenFlow Methods] (Layers 1-4)
  1.4.1 [#OVS OpenvSwitch] [[BR]]
  1.4.2 [#nfpga NetFPGA OpenFlow switch] [[BR]]
  1.4.3 [#pt Prototyping] - With Mininet (pending) [[BR]]
 1.5 [#morals1 Summary]

== 1.2 Some Considerations == #pre1
The techniques used to partition the broadcast domain will heavily depend on two things:
 1. the type of experiment
     
Note that virtual interfaces are a workaround for being restricted to one physical interface. Setups where nodes have multiple physical interfaces (e.g. using NetFPGAs) will not require the above configs, unless you want more interfaces than you physically have. For nodes with multiple physical interfaces, the steps describing 'eth0.xxx' can be replaced by the name of each unique interface. Keep in mind, however, that if an interface is connected to a switchport configured as a trunk, it must also be made VLAN-aware even if it does not hold multiple virtual interfaces.

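For reference, creating 'eth0.xxx' virtual interfaces on a single-NIC node might look like the following sketch (VLAN IDs and addresses are illustrative; the `vlan` package provides `vconfig` on Ubuntu 10.10):

```shell
# Load the 802.1Q module and create two VLAN virtual interfaces on eth0.
sudo modprobe 8021q
sudo vconfig add eth0 101        # creates eth0.101
sudo vconfig add eth0 102        # creates eth0.102
sudo ifconfig eth0.101 192.168.1.1 netmask 255.255.255.0 up
sudo ifconfig eth0.102 192.168.2.1 netmask 255.255.255.0 up
```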
== 1.3 Non-OpenFlow Methods == #basic
These methods should work on any *nix machine, so they can serve as "sanity checks" for the system you are using as the network node.

=== 1.3.1 Kernel IP routing === #KernIP
Kernel IP routing has the fewest requirements: no extra packages are needed if you have multiple Ethernet ports on your node. As its name indicates, it works strictly at Layer 3. Partitioning occurs across IP blocks; you need one block per link. It can be combined with VLANs and/or virtual interfaces if you are limited in the number of physical interfaces on your relay.

     
Do this for each remote subnet that the node should be able to communicate with. Once all of the nodes are configured, you should be able to ping end-to-end.

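A sketch of these kernel-routing steps, with hypothetical subnets and gateway addresses (substitute your own):

```shell
# On the relay: allow the kernel to forward packets between interfaces.
sudo sysctl -w net.ipv4.ip_forward=1

# On end node A (on 192.168.1.0/24): reach B's subnet via the relay's
# address on A's own link.
sudo route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.1.1

# On end node B (on 192.168.2.0/24): the mirror-image route back to A.
sudo route add -net 192.168.1.0 netmask 255.255.255.0 gw 192.168.2.1
```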
=== 1.3.2 Linux Bridge === #brctl
In terms of implementation, this is probably the simplest method.

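A minimal sketch of such a bridge on the relay, assuming two hypothetical VLAN virtual interfaces eth0.101 and eth0.102 (names illustrative):

```shell
# Create a bridge and enslave both VLAN interfaces; the kernel then
# relays frames between the two partitions at L2.
sudo brctl addbr br0
sudo brctl addif br0 eth0.101
sudo brctl addif br0 eth0.102
sudo ifconfig br0 up
```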
     
The Linux Foundation keeps a page that may be useful for troubleshooting: http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge

== 1.4 !OpenFlow Methods == #of
This section assumes that you have all of the !OpenFlow components (e.g. OVS, NetFPGA drivers) set up and working, and that you have several choices of controller. The controller used primarily in this section is the Big Switch Networks (BSN) controller.
=== 1.4.1 !OpenvSwitch === #OVS
!OpenvSwitch (OVS) is a software-defined switch with !OpenFlow support, complete with its own implementation of a controller. It can be, and throughout this page is assumed to be, built as a kernel module.

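As an illustrative sketch (not the exact procedure from the full steps), an OVS bridge joining two partitions and pointing at a remote !OpenFlow controller can be set up with ovs-vsctl; interface names and the controller address are placeholders:

```shell
# Build an OVS bridge over two VLAN interfaces and attach it to an
# external OpenFlow controller (address/port are placeholders).
sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-port br0 eth0.101
sudo ovs-vsctl add-port br0 eth0.102
sudo ovs-vsctl set-controller br0 tcp:10.10.0.1:6633
```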
     
}}}

=== 1.4.2 NetFPGA !OpenFlow switch === #nfpga
This method is probably the most involved and difficult to get right, although in theory it would be the best of both worlds: the programmatic flexibility of the OVS switch and the speed of a hardware-implemented device.

     
This set of flows basically implements VLAN stitching based on source MAC address. Unlike with the Linux bridge, you cannot see the VLAN-stripped packets on the virtual interface (tap0 on the NetFPGA, br0 on the bridge); they will already have the proper tag, since the processing presumably occurs in the FPGA and not in the kernel.

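For illustration, VLAN stitching keyed on source MAC can be expressed in ovs-ofctl flow syntax as below (MACs, VLAN IDs, and port numbers are placeholders; the actual flows on this page were pushed through the controller rather than this tool):

```shell
# Packets from A's MAC arriving on VLAN 101 are retagged to VLAN 102 and
# sent out port 2; the reverse rule stitches the other direction.
sudo ovs-ofctl add-flow br0 "dl_src=00:11:22:33:44:55,dl_vlan=101,actions=mod_vlan_vid:102,output:2"
sudo ovs-ofctl add-flow br0 "dl_src=66:77:88:99:aa:bb,dl_vlan=102,actions=mod_vlan_vid:101,output:1"
```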
== 1.5 Morals of the story == #morals1
For quick setup of a network topology using nodes sharing a medium, point-to-point links should be defined at as low a layer as possible. Short of actually wiring up the topology with cables, the next best thing (better, even, because of its flexibility) is to carve up the shared switch into VLANs. This lets you restrict the broadcast domain however you want, without hard-wiring everything, even when you are restricted to just one physical interface per node.

     
== 2.1 Some Considerations == #pre2
The minimum requirements for a format are:
 * ability to describe a graph (node-edge relations)
 * ability to add descriptions to nodes and edges
 * ability to be parsed systematically

Additional useful properties are:
 * human readability/writability
 * support in various programming languages (notably Ruby)

Some things that we would like to be able to describe are:
 * node behavior (a switch, host, router, service...)
 * edge weight (throughput, connectivity, delay...)
 * dynamic behavior (changes across time)

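As a minimal sketch of these requirements in code (all names hypothetical; this page does not prescribe an implementation), a format's parser could target a structure that holds node behavior and edge attributes:

```python
# Minimal, hypothetical in-memory topology description: nodes carry a
# 'behavior' attribute, edges carry arbitrary descriptions (e.g. weight).

class Topology:
    def __init__(self):
        self.nodes = {}   # name -> attribute dict (e.g. behavior)
        self.edges = {}   # frozenset({a, b}) -> attribute dict

    def add_node(self, name, **attrs):
        self.nodes[name] = attrs

    def add_edge(self, a, b, **attrs):
        self.edges[frozenset((a, b))] = attrs

    def neighbors(self, name):
        return sorted(n for e in self.edges for n in e
                      if name in e and n != name)

# Describe Fig. B: A-[r]-B, with subnets as link identifiers.
topo = Topology()
topo.add_node("A", behavior="host")
topo.add_node("B", behavior="host")
topo.add_node("r", behavior="switch")
topo.add_edge("A", "r", subnet="192.168.1.0/24")
topo.add_edge("r", "B", subnet="192.168.2.0/24")

print(topo.neighbors("r"))  # -> ['A', 'B']
```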
== 2.2 Some Formats == #fmts
We focus on the formats described in [https://gephi.org/users/supported-graph-formats/ this] link.

=== 2.2.1 GEXF === #gexf
link: http://gexf.net/format/index.html [[BR]]
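To illustrate that GEXF meets the requirements above, here is a minimal GEXF document for the A-[r]-B topology, parsed with Python's standard library (the document content is our own example, not taken from this page):

```python
# Parse a minimal GEXF (1.2draft) graph with the standard library only.
import xml.etree.ElementTree as ET

GEXF = """<?xml version="1.0" encoding="UTF-8"?>
<gexf xmlns="http://www.gexf.net/1.2draft" version="1.2">
  <graph defaultedgetype="undirected">
    <nodes>
      <node id="A" label="A"/>
      <node id="r" label="r"/>
      <node id="B" label="B"/>
    </nodes>
    <edges>
      <edge id="0" source="A" target="r"/>
      <edge id="1" source="r" target="B"/>
    </edges>
  </graph>
</gexf>"""

NS = {"g": "http://www.gexf.net/1.2draft"}
root = ET.fromstring(GEXF)
nodes = [n.get("id") for n in root.findall(".//g:node", NS)]
edges = [(e.get("source"), e.get("target"))
         for e in root.findall(".//g:edge", NS)]
print(nodes)   # ['A', 'r', 'B']
print(edges)   # [('A', 'r'), ('r', 'B')]
```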