[[TOC(heading=Tutorial TOC, Tutorial, Tutorial/Testbed, Tutorial/HowtoWriteScripts, Tutorial/HelloWorld, Tutorial/CollectMeasurements, Tutorial/AnalyzeResults, depth=2)]]

[wiki:Tutorial Back]

= Testbed Overview =

The '''ORBIT Radio Grid Testbed''' is operated as a shared service that allows a number of projects to conduct wireless network experiments on-site or remotely. Although only one experiment can run on the testbed at a time, automating the use of the testbed allows each experiment to run quickly, saving its results to a database for later analysis. In other words, ORBIT may be viewed as a set of services into which one inputs an experimental definition and receives the experimental results as output, as illustrated in Figure 1 below.

The experimental definition is a script that interfaces to the ORBIT Services. These services can reboot each of the nodes in the 20-by-20 grid; load an operating system, any modified system software, and application software onto each node; and then set the relevant parameters for the experiment on each grid node and on each non-grid node needed to add controlled interference or to monitor traffic and interference. The script also specifies the filtering and collection of the experimental data and generates a database schema to support subsequent analysis of that data.

[[Image(architecture-50.png)]] [[BR]]Figure 1. Experiment Support Architecture [[BR]]

== 1. Hardware Components ==

The ORBIT grid, illustrated in Figure 2, consists of a multiply interconnected, 20-by-20 grid of '''ORBIT Radio Nodes''', some non-grid nodes that control R/F spectrum measurements and interference sources, and front-end, application, and back-end servers. These servers support the various ORBIT services.

[[Image(hardware-50.png)]] [[BR]]Figure 2. ORBIT Hardware.

Each '''ORBIT Radio Node''' is a PC with a 1 GHz VIA C3 processor, 512 MB of RAM, a 20 GB local disk, two 100BaseT Ethernet ports, two 802.11a/b/g cards, and a Chassis Manager that controls the node (see Figure 3). The Chassis Manager has a 10BaseT Ethernet port. The two 100BaseT Ethernet ports carry Data and Control traffic: the Data port is available to the experimenter, while the Control port is used to load and control the ORBIT node and to collect measurements.

[[Image(node-50.png)]] [[BR]]Figure 3. ORBIT node. [[BR]]

== 2. Software Components ==

=== 2.1. Experiment Control ===

The main component of the Experiment Management Service is the '''Node Handler''', which functions as an Experiment Controller. It multicasts commands to the nodes at the appropriate times and keeps track of their execution. The '''Node Agent''' software component resides on each node, where it listens for and executes commands from the '''Node Handler''' and reports information back to it. Together, these two components give the user control over the testbed and enable the automated collection of experimental results. Because the '''Node Handler''' uses a rule-based approach to monitoring and controlling experiments, occasional feedback from experimenters may be required to fine-tune its operation. Figure 4 illustrates the execution of an experiment from the user's point of view.

Finally, using the '''Node Handler''' (via a dedicated experiment called ''imageNodes'', see here), the user can quickly load hard disk images onto the nodes of his/her experiment. This ''imaging process'' allows different groups of nodes to run different OS images. It relies on a scalable multicast protocol and the operation of a disk-loading ''Frisbee'' server from M. Hibler et al. ([http://www.cs.utah.edu/flux/papers/frisbee-usenix03-base.html link]).

[[Image(OMF-User-View.png)]] [[BR]]Figure 4. Execution of an Experiment from a User's point of view.

=== 2.2. Measurement & Result Collection ===

The '''Collection Server''' collects experimental results; one instance of the '''Collection Server''' runs per experiment. A Berkeley database is used for scalability, and an SQL database is used for persistent data archiving. One multicast channel per experiment provides logical segregation of data as well as scalability.

Besides the ORBIT Services, other ORBIT-developed software includes '''Libmac''' and the '''ORBIT Measurement Framework'''. Each provides procedures that may be called from user-developed applications.

'''Libmac''' is a user-space C library that allows applications to inject and capture MAC-layer frames, manipulate wireless interface parameters at both aggregate and per-frame levels, and communicate wireless interface parameters over the air on a per-frame level. A hypothetical usage sketch follows.
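The sketch below illustrates how an application might use a library of this kind to set an aggregate-level parameter and inject a frame with a per-frame parameter. All `libmac_*` names, types, and signatures here are assumptions made for illustration only, not the actual Libmac API; consult the Libmac documentation for the real interface.

{{{
#!c
/*
 * Hypothetical sketch of frame injection with a Libmac-style library.
 * NOTE: every libmac_* name below is an assumption for illustration;
 * these are NOT the actual Libmac functions.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Assumed handle type and entry points (stand-ins for real headers). */
typedef struct libmac_handle libmac_handle;
extern libmac_handle *libmac_open(const char *ifname);
extern int  libmac_set_txpower(libmac_handle *h, int dbm);      /* aggregate-level parameter */
extern int  libmac_inject(libmac_handle *h, const void *frame,
                          size_t len, int rate_mbps);           /* per-frame parameter */
extern void libmac_close(libmac_handle *h);

int main(void)
{
    libmac_handle *h = libmac_open("ath0");   /* one of the node's 802.11 cards */
    if (h == NULL) {
        fprintf(stderr, "cannot open wireless interface\n");
        return EXIT_FAILURE;
    }

    /* Aggregate-level setting: applies to all subsequent frames. */
    libmac_set_txpower(h, 15);

    /* Per-frame setting: this particular frame is sent at 11 Mbit/s. */
    unsigned char frame[64];
    memset(frame, 0xAA, sizeof frame);        /* dummy MAC-layer payload */
    if (libmac_inject(h, frame, sizeof frame, 11) != 0)
        fprintf(stderr, "injection failed\n");

    libmac_close(h);
    return EXIT_SUCCESS;
}
}}}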
The '''ORBIT Measurement Framework (OML)''' has a client/server architecture, illustrated in Figure 5 below. '''OML''' uses IP multicast to send data to the measurement server as it is collected. '''OML''' includes a client application programming interface (API). A developer can use this client API through a web interface to define the measurement points and parameters for his or her application. Such definitions are saved as an XML-based configuration file, and source code for the measurement client is automatically generated, containing application-specific methods that handle type-safe data collection. This source code can be compiled and linked with the application; a hypothetical sketch of the resulting workflow appears at the end of this section.

[[Image(oml-50.png)]] [[BR]]Figure 5. '''OML''' component architecture.

User-developed software includes the script for the experiment, the application(s), and any modifications to system software. The user-developed script for the experiment contains three static components: the configuration of the nodes, the configuration of the application software, and the configuration of the '''ORBIT Measurement Framework (OML)'''. The application is one or more programs that implement the intended behavior of the active nodes in the experiment. Modified system software may include modified Linux components or custom device drivers. Each of these types of user-developed software is covered in more detail in the Testbed Experiments section of this tutorial.
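To make the measurement workflow described above concrete, the sketch below shows how an application might call into OML-generated client code: the generated functions are initialized, each sample is reported at a type-safe measurement point, and OML forwards the data to the Collection Server over the experiment's multicast channel. The `oml_*` names and signatures are assumptions for illustration only; the actual functions are generated from the XML measurement definition created through the web interface.

{{{
#!c
/*
 * Hypothetical sketch of an application linked against OML-generated
 * measurement-client code. The oml_* names are assumptions for
 * illustration; the real functions are generated per application.
 */
#include <stdio.h>

/* Assumed to be provided by the generated measurement-client source. */
extern void oml_init(int *argc, char **argv);            /* joins the experiment's multicast channel */
extern void oml_mp_rtt(unsigned int seq, double rtt_ms); /* one generated, type-safe measurement point */
extern void oml_close(void);

int main(int argc, char **argv)
{
    oml_init(&argc, argv);   /* configure collection from the experiment's settings */

    /* Application logic: report each sample at a measurement point;
     * OML sends it to the Collection Server via IP multicast. */
    for (unsigned int seq = 0; seq < 10; seq++) {
        double rtt_ms = 1.0 + 0.1 * seq;   /* stand-in for a real measurement */
        oml_mp_rtt(seq, rtt_ms);
    }

    oml_close();   /* flush any buffered measurements before exit */
    return 0;
}
}}}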