Changes between Version 17 and Version 18 of Other/Summer/2023/RobotTestbed


Timestamp:
Jul 20, 2023, 3:22:15 PM
Author:
katrinacelario

  • Other/Summer/2023/RobotTestbed

    v17 v18  
    6 6 **Group Members:** Jeremy Hui, Katrina Celario, Julia Rodriguez, Laura Liu, Jose Rubio, Michael Cai
    7 7
    8 == Project Summary ==
    9 The main purpose of the project is to focus on the Internet of Things (IoT) and its transformative potential when intertwined with Machine Learning (ML). To explore this subject, the group continues the work of the ''!SenseScape Testbed'', an IoT experimentation platform for indoor environments containing a variety of sensors, location-tracking nodes, and robots. This testbed enables IoT applications, such as but not limited to, human activity and speech recognition and indoor mobility tracking. In addition, this project advocates for energy efficiency, occupant comfort, and context representation. The ''!SenseScape Testbed'' provides an adaptable environment for labelling and testing advanced ML algorithms centered around IoT.
     8 == Project Overview ==
     9 The project focuses on the Internet of Things (IoT) and its transformative potential when intertwined with Machine Learning (ML). To explore this subject, the group continues the work of the '''''!SenseScape Testbed''''', an IoT experimentation platform for indoor environments containing a variety of sensors, location-tracking nodes, and robots. The testbed enables IoT applications such as, but not limited to, human activity and speech recognition and indoor mobility tracking. In addition, the project advocates for energy efficiency, occupant comfort, and context representation. The ''!SenseScape Testbed'' provides an adaptable environment for labeling and testing advanced ML algorithms centered on IoT.
    10 10
    11 == Project Overview ==
     11 == Project Goals ==
     12 '''Previous Group's Work''': **https://dl.acm.org/doi/abs/10.1145/3583120.3589838**
    12 13
    13 Based on the future research section, there seem to be two main goals: adding a web-based reservation system for remote access to the robot and automating the activity labeling process using the natural language descriptions of the data provided in video format.
    14  
    15 For the remote reservation/experimentation features, we need to create a user-friendly webpage that not only makes it easier for the user to execute commands, but also keeps them from changing things that they shouldn’t have access to. The remote access to the LoCoBot can be achieved through ROS/SSH as long as all machines involved are connected to the same network (in our case this could be a VPN).
     14 Based on the future research section, there are two main goals the group wants to accomplish.
    16 15
    17 The video auto-labeling can be done using neural networks (ex: CNN and LSTM) in an encoder-decoder architecture for both feature extraction and the language model. For activities that cannot be classified within a specific amount of certainty, the auto-labeling tool could save the time stamp and/or video clip and notify the user that it requires manual labeling. After being fully trained, the network would simply choose the label with the highest probability and could possibly mark that data as “uncertain”. 
     16 The first goal is to create a website that includes both real-time information on the sensors and a reservation system for remote access to the robot. For each sensor, the website should display its name, whether it is online, the most recent time it was seen gathering data, and the continuous data streaming in. For the remote reservation/experimentation features, the website must be user-friendly, making it easy for the user to execute commands while restricting access to settings they should not be able to change. The group aims to allow remote access to the !LoCoBot through ROS (Robot Operating System) and SSH (Secure Shell) as long as all machines involved are connected to the same network (in this case, a VPN).
     17 
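The ROS/SSH access pattern described above can be sketched as follows. This is a minimal illustration assuming a ROS 1 setup; the VPN addresses and the `locobot` username are placeholders, not the testbed's real values:

```python
# Sketch: what a remote machine needs in order to talk to the LoCoBot's
# ROS master once both are on the same network/VPN.
# All IPs, hostnames, and usernames below are illustrative placeholders.

def ros_remote_env(master_ip: str, local_ip: str, port: int = 11311) -> dict:
    """Return the ROS 1 environment variables a remote client must export."""
    return {
        "ROS_MASTER_URI": f"http://{master_ip}:{port}",  # roscore running on the robot
        "ROS_IP": local_ip,                              # address peers use to reach this machine
    }

def ssh_command(user: str, host: str, remote_cmd: str) -> list:
    """Build the argv for an SSH call that runs one command on the robot."""
    return ["ssh", f"{user}@{host}", remote_cmd]

# Example: point at a hypothetical robot at 10.8.0.2 from a client at 10.8.0.15.
env = ros_remote_env("10.8.0.2", "10.8.0.15")
cmd = ssh_command("locobot", "10.8.0.2", "rostopic list")
```

A reservation webpage could generate exactly these environment variables and SSH invocations server-side, so users never touch the robot's shell configuration directly.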
    18 18
     19 The second goal is automating the labeling of activity within the environment using the natural language descriptions of the data provided in video format. The video auto-labeling can be done with neural networks (e.g., a CNN and an LSTM) in an encoder-decoder architecture covering both feature extraction and the language model. For activities that cannot be classified with a specified degree of certainty, the auto-labeling tool could save the time stamp and/or video clip and notify the user that it requires manual labeling. Once fully trained, the network would simply choose the label with the highest probability and could mark that data as “uncertain”. The ultimate aim is to connect this video data to the sensor data, in hopes of bridging the gap from sensor to text.
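The confidence gate described above — pick the most probable label, but flag low-confidence clips for manual review — can be sketched as below. The label names and the 0.7 threshold are illustrative assumptions, not values from the project:

```python
# Sketch of the auto-labeling confidence gate: after the captioning network
# scores each candidate activity label, keep the top label only when its
# probability clears a threshold; otherwise mark the clip "uncertain" so a
# human can label it. Threshold and labels are placeholder values.

def auto_label(probs: dict, threshold: float = 0.7):
    """Return (label, needs_review) for one clip's class probabilities."""
    label, p = max(probs.items(), key=lambda kv: kv[1])
    if p < threshold:
        # Low confidence: save the timestamp/clip and request manual labeling.
        return label, True
    return label, False

print(auto_label({"walking": 0.91, "sitting": 0.06, "talking": 0.03}))
# → ('walking', False)
```

In a full pipeline, clips flagged `needs_review` would be queued with their timestamps for manual labeling, while confident labels flow straight into the sensor-to-text dataset.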
    19 20
    20 * **Web-based remote reservation/experimentation**
    21     → remote access to robot through ROS/SSH
    22     → create a user-friendly webpage that not only makes it easier for the user to execute commands, but also keeps them from accessing/changing things that they shouldn’t 
    23 
    24 
    25 * **Automatic labeling using video/captioning tools**
    26     → Neural network models for captioning involve two main elements:
    27         → Feature Extraction
    28 
    29         → Language Model
    30 21
    31 22 == Progress Overview ==