The ORBIT Radio Grid Testbed is operated as a shared service to allow a number of projects to conduct wireless network experiments on-site or remotely. Although only one experiment can run on the testbed at a time, automating the use of the testbed allows each one to run quickly, saving the results to a database for later analysis.
In other words, ORBIT may be viewed as a set of services that take an experiment definition as input and return the experimental results as output, as illustrated in Figure 1 below. The experiment definition is a script that interfaces to the ORBIT Services. These services can reboot each of the nodes in the 20x20 grid; load an operating system, any modified system software, and application software onto each node; and then set the relevant parameters for the experiment in each grid node and in each non-grid node needed to add controlled interference or to monitor traffic and interference. The script also specifies the filtering and collection of the experimental data and generates a database schema to support subsequent analysis of that data.
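The experiment-as-a-function view above can be sketched in a few lines of Python. This is purely illustrative: the class and function names are invented for this sketch and are not the actual ORBIT Services API (real experiment definitions are scripts interpreted by the Node Handler).

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these names are NOT the real ORBIT Services API.
@dataclass
class ExperimentDefinition:
    nodes: list                                  # grid coordinates, e.g. [(1, 1), (1, 2)]
    image: str                                   # OS/software image to load on each node
    params: dict = field(default_factory=dict)   # per-experiment settings

def run_experiment(defn):
    """Mimic the service pipeline: reboot, image, configure, run, collect."""
    results = {}
    for node in defn.nodes:
        # 1. reboot the node, 2. load the image, 3. set parameters (all simulated here)
        state = {"image": defn.image, **defn.params}
        # 4. run the experiment and collect per-node measurements
        results[node] = state
    return results  # in ORBIT, results land in a database for later analysis

defn = ExperimentDefinition(nodes=[(1, 1), (1, 2)], image="baseline.ndz",
                            params={"channel": 6})
print(run_experiment(defn)[(1, 1)]["channel"])  # → 6
```

The point of the sketch is only the shape of the workflow: a declarative definition goes in, the services drive every node through the same reboot/image/configure steps, and measurements come out keyed for later database analysis.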
1. Hardware Components
The ORBIT grid, as illustrated in Figure 2, consists of a multiply interconnected 20-by-20 grid of ORBIT Radio Nodes, some non-grid nodes that control RF spectrum measurements and interference sources, and front-end, application, and back-end servers. These servers support the various ORBIT services.
Figure 2. ORBIT Hardware.
Each ORBIT Radio Node is a PC with a 1 GHz VIA C3 processor, 512 MB of RAM, a 20 GB local disk, two 100BaseT Ethernet ports, two 802.11a/b/g cards, and a Chassis Manager to control the node (see Figure 3). The Chassis Manager has a 10BaseT Ethernet port. The two 100BaseT Ethernet ports are for Data and Control: the Data ports are available to the experimenter, while the Control port is used to load and control the ORBIT node and to collect measurements.
2. Software Components
2.1. Experiment Control
The main component of the Experiment Management Service is the Node Handler, which functions as an Experiment Controller. It multicasts commands to the nodes at the appropriate times and keeps track of their execution. The Node Agent software component resides on each node, where it listens for and executes the commands from the Node Handler; it also reports information back to the Node Handler. Together, these two components give the user control over the testbed and enable the automated collection of experimental results. Because the Node Handler uses a rule-based approach to monitoring and controlling experiments, occasional feedback from experimenters may be required to fine-tune its operation. Figure 4 illustrates the execution of an experiment from the user's point of view.
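The Node Handler / Node Agent split described above is a simple controller/agent pattern: one controller multicasts a command, every agent executes it locally and reports status back. A minimal Python sketch of that pattern (simulated in-process; the real components communicate over the control network and are not this API):

```python
# Illustrative sketch of the Node Handler / Node Agent pattern; not the real OMF code.
class NodeAgent:
    """Runs on each node: executes commands and reports back."""
    def __init__(self, name):
        self.name = name
        self.log = []

    def handle(self, command):
        self.log.append(command)           # execute the command (simulated)
        return (self.name, command, "OK")  # report status to the Node Handler

class NodeHandler:
    """Experiment Controller: multicasts commands and tracks their execution."""
    def __init__(self, agents):
        self.agents = agents
        self.reports = []

    def multicast(self, command):
        # In ORBIT this is an actual multicast on the control network;
        # here we just deliver the command to every registered agent.
        for agent in self.agents:
            self.reports.append(agent.handle(command))

handler = NodeHandler([NodeAgent("node1-1"), NodeAgent("node1-2")])
handler.multicast("start_app")
print(len(handler.reports))  # → 2
```

The report list is what lets the controller "keep track of their execution": any node that fails to report can be retried or flagged without stopping the rest of the experiment.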
Finally, using the Node Handler (via a dedicated imageNodes experiment, which will be described later), the user can quickly load hard disk images onto the nodes of his/her experiment. This imaging process allows different groups of nodes to run different OS images. It relies on a scalable multicast protocol and on a disk-loading Frisbee server from M. Hibler et al. (link). Similarly, the user can also use the Node Handler to save the image of a node's disk into an archive file.
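The reason multicast imaging scales is that the disk image is split into independently addressable chunks, so every node can join the same stream and accept chunks in whatever order they arrive. A toy Python sketch of that one idea (a drastic simplification, not the actual Frisbee protocol, which also handles chunk requests, loss recovery, and compression):

```python
import hashlib

# Frisbee-style idea in miniature: split an image into offset-keyed chunks so
# that receivers can accept them in any order and still reassemble the image.
# This is a simplification, not the actual Frisbee protocol.
def make_chunks(image: bytes, size: int) -> dict:
    return {off: image[off:off + size] for off in range(0, len(image), size)}

def reassemble(chunks: dict) -> bytes:
    # Chunks are keyed by offset, so arrival order does not matter.
    return b"".join(chunks[off] for off in sorted(chunks))

image = b"pretend this is a disk image " * 10
chunks = make_chunks(image, 64)
# Simulate out-of-order multicast delivery by reversing the chunk stream:
restored = reassemble(dict(reversed(list(chunks.items()))))
print(hashlib.sha256(restored).hexdigest() == hashlib.sha256(image).hexdigest())  # → True
```

Because no node depends on receiving chunks in sequence, one server can feed an arbitrary number of nodes at once, which is what makes imaging a whole group of grid nodes fast.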
The user can perform all of these actions on the testbed(s) via the generic orbit command, which is the access point for controlling the Node Handler, the experiment, and the nodes on the testbed(s).
# To see a list of available commands
orbit help

# To see the usage and more help about a particular command, e.g. the "load" command
orbit help load
2.2. Measurement & Result Collection
The ORBIT Measurement Framework & Library (OML) is responsible for collecting the experimental results. It is based on a client/server architecture as illustrated in Figure 5 below.
One instance of an OML Collection Server is started by the Node Handler for a particular experiment execution. This server will listen and collect experimental results from the various nodes involved in the experiment. It uses an SQL database for persistent data archiving of these results.
On each experimental node, one OML Collection Client is associated with each experimental application. The details and "How-To" of such an association will be presented in a later part of this tutorial. In the context of this introduction to the testbed, the client-side measurement collection can be viewed as follows: the application forwards any required measurements or outputs to the OML Collection Client; this client optionally applies some filtering/processing to these measurements, and then sends them to the OML Collection Server (currently over one multicast channel per experiment, for logical segregation of data and for scalability).
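The client-side flow just described — application injects raw samples, the collection client filters them, only the filtered values travel to the server — can be sketched as follows. The class and method names here are invented for illustration and are not the OML API; the filter shown (a windowed average) stands in for whatever processing an experimenter configures.

```python
# Sketch of the client-side OML flow: the application hands raw measurements to
# a collection client, which applies a filter (here, a windowed average) before
# forwarding to the collection server. Names are illustrative, not the OML API.
class OMLClientSketch:
    def __init__(self, server, window=4):
        self.server = server    # stand-in for the OML Collection Server
        self.window = window    # number of raw samples reduced per aggregate
        self.buffer = []

    def inject(self, sample):
        self.buffer.append(sample)
        if len(self.buffer) == self.window:
            # Filter step: reduce the window to one aggregate value, then send
            # it on (in ORBIT, over the experiment's multicast channel).
            self.server.append(sum(self.buffer) / self.window)
            self.buffer.clear()

server_store = []
client = OMLClientSketch(server_store, window=4)
for rssi in [10, 12, 14, 16, 20, 20, 20, 20]:   # raw per-packet measurements
    client.inject(rssi)
print(server_store)  # → [13.0, 20.0]
```

Filtering at the client is the design choice that keeps the framework scalable: eight raw samples crossed the application boundary, but only two aggregated values had to traverse the network to the server.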
There are two alternative methods for users to interface their experimental applications with the OML Collection Clients and to define the requested measurement points and parameters. These methods and measurement definitions will be presented in detail later in this tutorial.
Finally, the ORBIT platform also provides the Libmac library. Libmac is a user-space C library that allows applications to inject and capture MAC-layer frames, manipulate wireless interface parameters at both the aggregate and per-frame levels, and communicate wireless interface parameters over the air on a per-frame basis. Users can interface their experimental applications with Libmac to collect MAC-layer measurements from their experiments. Another section of this documentation provides more information on Libmac and its operation.