
Robotic IoT SmartSpace Testbed

    WINLAB Summer Internship 2023

    Group Members: Jeremy Hui (UG), Katrina Celario (UG), Matthew Grimalovsky (UG), Julia Rodriguez (UG), Logan Pasternak (UG), Laura Liu (HS), Jose Rubio (HS), Michael Cai (HS), Hedaya Walter (GR), Sonya Yuan Sun (GR)

    Project Overview

    The main purpose of the project is to explore the Internet of Things (IoT) and its transformative potential when intertwined with Machine Learning (ML). To do so, the group continues the work of the SenseScape Testbed, an IoT experimentation platform for indoor environments containing a variety of sensors, location-tracking nodes, and robots. The testbed enables IoT applications such as, but not limited to, human activity recognition, speech recognition, and indoor mobility tracking. In addition, the project advocates for energy efficiency, occupant comfort, and context representation. The SenseScape Testbed provides an adaptable environment for labeling and testing advanced ML algorithms centered around IoT.

    Hardware

    This project is centered on a custom multi-modal sensor board called the MAESTRO, designed by the previous group. The MAESTRO perceives different types of data from its environment (listed below) and is connected to a Raspberry Pi, a single-board computer running Raspberry Pi OS Lite (Legacy). In addition, the group is leaning towards using a Raspberry Pi Camera Module for the camera.

    (Images: MAESTRO board, Raspberry Pi, and camera module)

    The attachments on the MAESTRO include (a minimal read sketch follows this list):

    ADXL345: 3-axis accelerometer that measures the acceleration experienced by the sensor

    BME680: measures temperature, humidity, pressure, and gas resistance

    TCS3472: measures color and light intensity and converts them into digital values

    MLX90393: measures the magnetic field along the x, y, and z axes

    ZRE200GE: detects human presence or motion by sensing the infrared radiation emitted by warm bodies

    MAX9814: amplifies audio signals captured by the connected microphone

    NCS36000: PIR detector controller that conditions and drives the PIR sensor

    MCP3008: converts analog sensor signals into digital values the Raspberry Pi can read

    LMV3xx: low-voltage operational amplifier that allows the full voltage range of the power supply to be utilized
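    As a rough illustration of how the Raspberry Pi might read one of these attachments, below is a minimal sketch that polls the BME680 over I2C. It assumes the Adafruit CircuitPython BME680 library; the actual MAESTRO wiring, addresses, and driver code may differ.

{{{#!python
# Minimal sketch: polling a BME680 over I2C from the Raspberry Pi.
# Assumes the adafruit-circuitpython-bme680 library; the MAESTRO's
# actual wiring and driver code may differ.
import time

import board
import adafruit_bme680

i2c = board.I2C()  # uses the Pi's default SCL/SDA pins
bme680 = adafruit_bme680.Adafruit_BME680_I2C(i2c)

while True:
    print(f"temp={bme680.temperature:.1f} C  humidity={bme680.humidity:.1f} %  "
          f"pressure={bme680.pressure:.1f} hPa  gas={bme680.gas} ohm")
    time.sleep(1.0)
}}}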

    Project Goals

    Previous Group's Work: https://dl.acm.org/doi/abs/10.1145/3583120.3589838

    Based on the future-research section of that paper, there are two main goals the group wants to accomplish.

    The first goal is to create a website that includes both real-time information on the sensors and a reservation system for remote access to the robot. For the sensors, the website should display each sensor's name, whether it is online, the most recent time it was seen gathering data, and the continuous data streaming in. For the remote reservation/experimentation features, the website must be user-friendly: it should make it easy for the user to execute commands while restricting them from changing things they should not have access to. The group aims to allow remote access to the LoCoBot through ROS (Robot Operating System) and SSH (Secure Shell), as long as all machines involved are connected to the same virtual private network (VPN).
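    As a rough sketch of the real-time sensor portion of such a website, the snippet below shows what a status endpoint could look like, assuming Flask. The sensor registry, names, and online window are hypothetical placeholders, not the group's actual backend.

{{{#!python
# Hypothetical sketch of a sensor-status endpoint, assuming Flask.
# The registry and last-seen bookkeeping are placeholders; a real
# backend would be fed by the MAESTROs' data stream.
import time

from flask import Flask, jsonify

app = Flask(__name__)

# name -> UNIX timestamp of the last report received from that MAESTRO
last_seen = {"maestro-01": time.time(), "maestro-02": time.time() - 600}

ONLINE_WINDOW_S = 120  # consider a sensor online if it reported recently

@app.route("/sensors")
def sensors():
    now = time.time()
    return jsonify([
        {
            "name": name,
            "online": (now - ts) < ONLINE_WINDOW_S,
            "last_seen": ts,
        }
        for name, ts in last_seen.items()
    ])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
}}}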

    The second goal is to automate the labeling of activity within the environment using natural language descriptions of video data. The video auto-labeling can be done by training neural networks (e.g., a CNN and an LSTM) in an encoder-decoder architecture that handles both feature extraction and the language model. For activities that cannot be classified with a specified level of certainty, the auto-labeling tool could save the timestamp and/or video clip and notify the user that manual labeling is required. Once the network is fully trained, it would simply choose the label with the highest probability and possibly mark that data as "uncertain" (a minimal sketch of this decision rule follows). The ultimate aim is to connect this video data to the sensor data, in hopes of bridging the gap from sensor to text.
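    The "uncertain" decision rule described above can be illustrated with a short sketch. The snippet below assumes a PyTorch classifier head producing logits over a label set; the labels and threshold are hypothetical placeholders.

{{{#!python
# Hypothetical sketch of the auto-labeler's decision rule, assuming a
# PyTorch classifier head. Labels and threshold are placeholders.
import torch
import torch.nn.functional as F

LABELS = ["turn on light", "walk in/out of room", "sit at desk"]  # example set
CONFIDENCE_THRESHOLD = 0.8

def label_clip(logits: torch.Tensor) -> dict:
    """Pick the most probable label, or flag the clip for manual labeling."""
    probs = F.softmax(logits, dim=-1)
    confidence, idx = probs.max(dim=-1)
    result = {"label": LABELS[int(idx)], "confidence": float(confidence)}
    if confidence < CONFIDENCE_THRESHOLD:
        # Save the timestamp/clip elsewhere and notify the user.
        result["uncertain"] = True
    return result

# Example: logits from a (hypothetical) trained network for one clip
print(label_clip(torch.tensor([2.5, 0.3, 0.1])))
}}}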

    The Project's Three Phases

    The progression of this project relies on three milestones, each with its own specific goals; each phase builds on the last.

    Phase One:

    For the first phase, the group wants the MAESTROs to recognize a predetermined set of activities in an office environment, in this case WINLAB. The plan is to place the MAESTROs in a grid-like coordinate system, considering both the location of outlets and the predetermined activities that will be conducted. In addition to the MAESTROs, multiple cameras will capture continuous video of human activity; this video data will be used for the automatic labeling. Phase one is the foundation for the milestones that follow.

    Phase Two:

    For the second phase, the group is looking for the MAESTROs to communicate with each other about what is happening in their immediate space using zero-shot or few-shot recognition.

    zero-shot: ability of a large language model to perform a task or generate responses for which it has not been explicitly trained.

    few-shot: ability of a large language model to recognize or classify new objects or categories with only a few labeled examples, or "shots".
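    As an illustration of zero-shot recognition, the sketch below classifies a hypothetical textual summary of sensor activity against candidate activity labels using the Hugging Face zero-shot pipeline. The model choice, summary text, and labels are assumptions, not the group's actual setup.

{{{#!python
# Hypothetical sketch of zero-shot activity recognition over a text summary,
# using the Hugging Face transformers zero-shot pipeline. Model and labels
# are placeholders.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

summary = "The light level rose sharply and footsteps were heard near the door."
labels = ["turn on light", "walk in/out of room", "sit at desk"]

result = classifier(summary, candidate_labels=labels)
print(result["labels"][0], result["scores"][0])  # best-matching activity
}}}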

    Phase Three

    For the third and final phase, the group wants the MAESTROs to communicate with each other to create a narrative of the activity in the given space. The model could be queried about the "memory" of the space and would give descriptions whose detail varies with the desired scope of the answer (1 hour vs. 1 year). As seen in this phase, the large language model is the core of the project; however, the MAESTROs must be deployed first.

    Progress Overview

    WEEK ONE

    Week 1 Presentation

    • Familiarized with the project topics by reading relevant research papers
    • Set up ROS (Robot Operating System) on an Ubuntu distro
    • Ran elementary Python scripts on the robot (talker/listener; a minimal talker sketch follows this list)

    • Interacted with the robot at CoRE
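    The talker/listener scripts are the standard ROS tutorial examples; for reference, a minimal rospy talker looks like the following.

{{{#!python
# Minimal rospy "talker", in the spirit of the standard ROS tutorial script;
# the topic name and rate follow that tutorial.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rospy.init_node("talker", anonymous=True)
    rate = rospy.Rate(10)  # 10 Hz
    while not rospy.is_shutdown():
        msg = "hello world %s" % rospy.get_time()
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
}}}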

    WEEK TWO

    Week 2 Presentation

    • Started to build the website
    • Learned the aspects of a Raspberry Pi by starting with a blank SD card:

    → Installed Raspberry Pi operating system and ROS

    → Connected to a ZeroTier network

    → Downloaded packages necessary for the data collecting python scripts

    WEEK THREE

    Week 3 Presentation

    • Explored the possibility of including Unity in the project

    → Created a 3D avatar in Unity that mimics a live webcam feed

    Digital Twin Week 3

    • Researched the equipment needed for the experiments
    • Started to clone the original SD card, which contained all the Python scripts, packages, and ROS, onto blank SD cards

    → The cloned cards were inserted into 25 Raspberry Pis

    → Each Pi was given a unique name

    • Made significant progress on the website aspect

    Website Week 3

    WEEK FOUR

    Week 4 Presentation

    • Measured dimensions of WINLAB
    • Added backend email/appointment system for website
    • Created a VR environment for possible future use

    WEEK FIVE

    Week 5 Presentation

    • Automated the process of connecting to and checking the Wi-Fi connection (pinging; see the sketch after this list)
    • Created a tunnel between the Raspberry Pi and the database

    • Successfully created backend email sender & appointment form on website
    • Added foundation for interactive grid on website
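    A minimal sketch of the automated connectivity check is shown below; it assumes the system `ping` binary, and the target host and retry policy are placeholders rather than the group's actual choices.

{{{#!python
# Hypothetical sketch of the automated connectivity check, assuming the
# Linux `ping` binary; host and retry policy are placeholders.
import subprocess
import time

HOST = "8.8.8.8"  # any reliably reachable host

def is_connected(host: str = HOST) -> bool:
    """Return True if a single ping to `host` succeeds."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ).returncode == 0

while not is_connected():
    print("No connectivity, retrying...")
    time.sleep(5)
print("Wi-Fi link is up.")
}}}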

    WEEK SIX

    Week 6 Presentation

    • Continued exploring Unity

    → Began connecting Unity and ROS (SLAM for robot navigation)

    • Set up a remote desktop on an ORBIT node for Unity

    WEEK SEVEN

    Week 7 Presentation

    • Set up PTP (Precision Time Protocol) on the Pis

    → The Pis do not have hardware timestamping, so native IEEE 1588 support is unavailable and each Pi must be aimed at a boundary server

    → Solution: use software emulation

    → Built a kernel that allows ptp4l to run (see the sketch below)
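    For reference, a minimal sketch of launching ptp4l in software-timestamping mode from Python is shown below. The `-S` (software time stamping) and `-m` (log to stdout) flags are real ptp4l options; the interface name is a placeholder.

{{{#!python
# Hypothetical sketch of starting ptp4l in software-timestamping mode,
# since the Pis lack hardware timestamping. The interface name is a
# placeholder for whichever NIC the Pi uses.
import subprocess

proc = subprocess.Popen(
    ["ptp4l", "-i", "eth0", "-S", "-m"],
    stdout=subprocess.PIPE,
    text=True,
)
for line in proc.stdout:  # stream synchronization status messages
    print(line, end="")
}}}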

    • Looked into using Raspberry Pi cameras for camera data

    → Successfully captured videos and viewed them using VLC (a minimal capture sketch follows)
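    A minimal capture sketch is shown below, assuming the legacy picamera library (consistent with the legacy Raspberry Pi OS the group installed); the filename, resolution, and duration are placeholders. The resulting .h264 file can be opened in VLC.

{{{#!python
# Hypothetical sketch of capturing a short clip with the Pi camera,
# assuming the legacy picamera library. Filename, resolution, and
# duration are placeholders.
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.start_recording("clip.h264")
    camera.wait_recording(10)  # record for 10 seconds
    camera.stop_recording()
}}}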

    • Created the coordinate system for the test room

    → Discussed the predetermined activities and based the layout on them and on outlet placement

    → Example activities: turn on a light, walk in/out of the room

    WEEK EIGHT

    WEEK NINE

    WEEK TEN
