== **Self-Driving Vehicular Project** ==

**Team**: Aaron Cruz [UG], Arya Shetty [UG], Brandon Cheng [UG], Tommy Chu [UG], Vineal Sunkara [UG], Divya Krishna [HS], Siddarth Malhotra [HS]

**Advisors**: Ivan Seskar and Jennifer Shane

----
**Project Description & Goals**:\\
Build and train miniature autonomous cars to drive in a miniature city.\\
RASCAL (Robotic Autonomous Scale Car for Adaptive Learning): using the car's sensors, offload image and control data to a server node. The node runs a neural network that trains the vehicle to drive on its own from the image data it sees through its camera.\\
[[Image(gitlab.png, 500px)]]
[[Image(SDC Week 2.png, 690px)]]\\
[https://gitlab.orbit-lab.org/self-driving-car-2023/ GitLab]

**Technologies**: ROS (Robot Operating System), PyTorch

----
**Week 1**:\\
[https://docs.google.com/presentation/d/1_K0UlIok6UAKQ4FFww5lPMrha5UKXG6zL87LEalwGbc/edit? Week 1 Slides]

Progress:
- Familiarized ourselves with last summer's work: [https://gitlab.orbit-lab.org/self-driving-car-2023/upcar GitLab], the RASCAL setup, and the [https://www.orbit-lab.org/attachment/wiki/Other/Summer/2023/OH2023/P10a.jpg software architecture]
- Debugged an issue with RASCAL's pure pursuit controller

----
**Week 2**:\\
[https://docs.google.com/presentation/d/1-SxxC-grfdlDhFj4ZKn8_SqMMATA9l2CJ__-rpB7mYw/edit?usp=sharing Week 2 Slides]

Progress:
- Set up X11 forwarding to run GUI applications over SSH
- Performed visual odometry using the RealSense camera and [https://wiki.ros.org/rtabmap_ros/TutorialsOldInterface/StereoHandHeldMapping#Bring-up_example_with_RealSense_cameras rtabmap]
- Streamlined the data pipeline that processes bag data (car camera + control data) into .mp4 video
- Detected ArUco markers in a given image using Python and the OpenCV libraries
- Set up the intersection server (a node with a GPU)
- Developed a PyTorch MNIST model
- Trained the "yellow thing" neural network
- Lined up a perspective drawing with the camera to determine its FOV

----
**Week 3**:\\
[https://docs.google.com/presentation/d/1t6Z2rI3c8sEDKmA4Er4-tidhTmySG4w02xafc-y60Hs/edit#slide=id.g2e30c923967_0_17 Week 3 Slides]

Progress:
- Created a web display "assassin" that eliminates the web server when ROS closes
- Tested the "yellow thing" model, with great results
- Set up SSHFS
- Calibrated the RealSense camera
- Added a "snap picture" button to the web display for convenience
- Developed a Python script to detect an ArUco marker and estimate the camera's position
- Tested point cloud mapping with rtabmap
- Attempted sensor fusion of encoder odometry and visual odometry
- Used data augmentation to artificially generate new camera perspectives from existing images

----
**Week 4**:\\
[https://docs.google.com/presentation/d/1wfkr_YybtKg2IlEaGD3EdoBfMPnrObRBXZ5UF_F5z-k/edit?usp=sharing Week 4 Slides]

Progress:
- Refined ArUco marker detection for more accurate car pose estimation
- Trained the model on video instead of still images
- Improved the data pipeline from the car's sensors to the server
- Refined data augmentation to simulate new camera perspectives
- Added more data visualization (Replayer) to display the steering curve, path, and images on the web server

----
**Week 5**:\\
[https://docs.google.com/presentation/d/1ib4H_31kK6K8KJLAAKhDrqawSpKOKh3EucpwhRRhLlc/edit?usp=sharing Week 5 Slides]

Progress:
- ArUco marker detection now updates the car's position in the XY plane; finished the self-calibration system
- Addressed normalization and cropping problems
- Introduced Grad-CAM heat maps
- Resolved a Python version mismatch issue
- Visualized training data bias with histograms
- Smoothed the data to reduce inconsistency in the training set
- Built the simulation camera, which skews the closest image to simulate a new view
- Improved the web display: search commands and controller keybinds

----
**Week 6**:\\
[Week 6 Slides]

Progress:

----
**Week 7**:\\
[Week 7 Slides]

Progress:

----
**Week 8**:\\
[Week 8 Slides]

Progress:

----
**Week 9**:\\
[Week 9 Slides]

Progress:

----
**Week 10**:\\
[Week 10 Slides]

Progress:

----
**Additional Resources:**
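For reference, the pure pursuit controller debugged in Week 1 follows a standard geometric steering law. Below is a minimal sketch of that law for a bicycle-model car; the function name and parameters are illustrative and are not taken from RASCAL's actual code:

```python
import math

def pure_pursuit_steering(target_x, target_y, wheelbase):
    """Steering angle (radians) that drives a bicycle-model car toward a
    lookahead point given in the vehicle frame (x forward, y to the left).

    Standard pure-pursuit law: curvature k = 2*y / L^2, where L is the
    distance to the lookahead point; the bicycle model then gives the
    steering angle as atan(k * wheelbase).
    """
    lookahead_sq = target_x ** 2 + target_y ** 2
    if lookahead_sq == 0:
        return 0.0  # already at the target; no steering needed
    curvature = 2.0 * target_y / lookahead_sq
    return math.atan(curvature * wheelbase)

# A point straight ahead needs no steering; a point to the left
# needs a positive (leftward) steering angle.
print(pure_pursuit_steering(1.0, 0.0, 0.2))       # 0.0
print(pure_pursuit_steering(1.0, 0.5, 0.2) > 0)   # True
```

Because the lookahead distance appears squared in the denominator, a shorter lookahead makes the controller turn more aggressively toward the same lateral offset, which is a common source of oscillation when tuning.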
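The Week 5 data smoothing step can be illustrated with a simple moving average over per-frame steering commands. This is only a sketch of the general technique; the window size and the exact smoothing method used in the actual pipeline are assumptions here:

```python
def moving_average(values, window=5):
    """Smooth a list of noisy per-frame values (e.g. steering commands)
    with a centered moving average. `window` is an assumed parameter,
    not the pipeline's actual setting.

    Windows are clipped at the ends of the list, so the output has the
    same length as the input.
    """
    if window < 1:
        raise ValueError("window must be >= 1")
    smoothed = []
    for i in range(len(values)):
        lo = max(0, i - window // 2)
        hi = min(len(values), i + window // 2 + 1)
        chunk = values[lo:hi]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

noisy_steering = [0.0, 0.9, 0.1, 1.0, 0.0]
print(moving_average(noisy_steering, window=3))
```

Smoothing like this reduces frame-to-frame jitter in the training labels, at the cost of slightly lagging sharp, genuine steering changes.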