
Version 17 (modified by NicholasMeegan1, 4 years ago)

Smart Intersection

    Smart Intersection - daily traffic flow

    Bryan Zhu, Kevin Zhang, Nicholas Meegan

    Project Website

    https://bzz3ru.wixsite.com/smartintersection

    Project Objective

    The goal of this project is to create a method for estimating vehicle count/traffic flow statistics for one intersection in New York City. As an example, record videos of the northbound traffic on Amsterdam Avenue as vehicles enter the 120th St./Amsterdam Ave. intersection. Using the YOLOv3 deep learning model, detect and count vehicles as they approach/enter the intersection from the south, making sure that there is no double counting. Use 180-second video fragments (approximately two traffic light cycles), and repeat up to half a dozen times a day, for a number of workweek/weekend days, at the same times each day. Compare the vehicle count (traffic flow) as a function of the time of day. Utilize NVIDIA DeepStream deployed on COSMOS GPU compute servers to run the model. The method should be generalizable/expandable to any direction of vehicle movement when appropriate camera views are available.
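    The double-counting concern above is commonly handled by drawing a virtual count line across the approach lane and counting each tracked vehicle only once, when it first crosses that line. A minimal sketch, assuming an upstream tracker assigns persistent integer IDs; the class and names are illustrative, not the project's actual code:

```python
# Sketch of a line-crossing counter that avoids double counting.
# Assumes an upstream tracker assigns a persistent integer ID to each
# vehicle; all names here are illustrative assumptions.

class LineCrossingCounter:
    def __init__(self, line_y):
        self.line_y = line_y   # y coordinate of the virtual count line
        self.last_y = {}       # track_id -> last seen y position
        self.counted = set()   # track IDs already counted
        self.count = 0

    def update(self, track_id, y):
        """Call once per frame per tracked vehicle (image y grows downward)."""
        prev = self.last_y.get(track_id)
        # Northbound traffic moves up in the image: y decreases over time.
        if (prev is not None and prev > self.line_y >= y
                and track_id not in self.counted):
            self.counted.add(track_id)
            self.count += 1
        self.last_y[track_id] = y
        return self.count
```

    A vehicle that lingers near the line, or is detected again after crossing, is ignored because its track ID is already in the counted set.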

    Reading Material

    Week 1 Activities

    • Get an ORBIT/COSMOS account and become familiar with the testbed procedures
    • Learn about YOLOv3 deep learning models for object detection
    • Read about NVIDIA DeepStream
    • Explore the image (set of computing tools) available on COSMOS, which uses DeepStream and can deploy YOLOv3
    • Record and save 6 videos during one day (to be repeated when the method is debugged and fully functional)
    • Brainstorm about vehicle counting/traffic flow estimation methodology
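    Since the recording plan calls for six clips per day at the same times across multiple days, a consistent naming scheme makes it easy to group clips by time of day later. A small sketch; the recording slots and naming convention are illustrative assumptions:

```python
# Sketch: generate consistent output filenames for the daily recording
# schedule, so clips from different days can be compared by time of day.
# Recording times and the naming convention are illustrative assumptions.

from datetime import date, time

RECORD_TIMES = [time(7, 30), time(9, 0), time(12, 0),
                time(15, 0), time(17, 30), time(20, 0)]  # 6 slots per day

def clip_name(day: date, slot: time, direction: str = "northbound") -> str:
    """e.g. amsterdam_120_northbound_2020-06-15_0730.mp4"""
    return (f"amsterdam_120_{direction}_{day.isoformat()}"
            f"_{slot.strftime('%H%M')}.mp4")

def daily_clips(day: date):
    return [clip_name(day, t) for t in RECORD_TIMES]
```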

    Week 1 Weekly Meeting Presentation: https://docs.google.com/presentation/d/1Sf9hzpo3WQsEPwbhKfic2xWCskH1EViD-3SNb_foouA/edit?usp=sharing

    Week 2 Activities

    • Understand the concepts of object detection in 3D Point Cloud
    • Gain an understanding of NVIDIA’s DeepStream SDK
    • Get comfortable deploying YOLOv3 on the COSMOS testbed
    • Use existing datasets to play around with DeepStream and YOLOv3
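    Deploying YOLOv3 on DeepStream typically means pointing the reference application (deepstream-app) at a pipeline config file. A minimal, illustrative fragment in the style of the DeepStream YOLO sample (paths are assumptions; the real config files ship with the sample):

```ini
# Illustrative deepstream-app config fragment (not the project's actual file).
[source0]
enable=1
type=3                     # 3 = multi-URI (file) source
uri=file:///path/to/clip.mp4
num-sources=1

[primary-gie]
enable=1
# Points at the YOLO sample's inference config (model, labels, bbox parser).
config-file=config_infer_primary_yoloV3.txt

[sink0]
enable=1
type=2                     # 2 = on-screen (EGL) display sink
```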

    Week 2 Weekly Meeting Presentation: https://docs.google.com/presentation/d/1Cl8MbsSU3ZAq5lpRuE0eVBSwnIX4jTUAci5uAgP7Vt8/edit?usp=sharing

    Week 2 Team Meeting Presentation: https://docs.google.com/presentation/d/1O2yCze4fmVOAFGCi0u6WTZq8VTFhLc_J448skeygguw/edit?usp=sharing

    Week 3 Activities

    • Investigate existing RGB-D (RGB + depth map) object detectors whose models we can immediately put to use for inference
    • Look into existing 3D Point Cloud object detection implementations
    • Learn how to run DeepStream's YOLOv3 implementation
    • Investigate DeepStream Python bindings for use with YOLO
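    For the RGB-D reading above: a 2D detection can be lifted into 3D by sampling the depth map inside the box and back-projecting through the pinhole camera model. A sketch under that assumed model; the intrinsics in the test are made up, real values come from camera calibration:

```python
# Sketch: back-project a pixel with known depth to a 3D camera-space point
# using the pinhole model. Intrinsics (fx, fy, cx, cy) are assumptions here;
# real values come from camera calibration.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pixel (u, v) with depth z (meters) -> (x, y, z) in camera coords."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```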

    Week 3 Weekly Meeting Presentation: https://docs.google.com/presentation/d/13vqiw0kkyT0_XPzPv3NiowIvxc22SapfxM_ZbAKn1Cc/edit?usp=sharing

    Week 3 Team Meeting Presentation: https://docs.google.com/presentation/d/1jwq6h05mw1vHt6_C1Br4MM_0LQDZRlIEoSvEasJ5Rsg/edit?usp=sharing

    Week 4 Activities

    • Investigate YOLOv4 and its use with TensorRT
    • Look into extracting and processing data from DeepStream's inference outputs
    • Look into the DeepStream tracker as a component to build on
    • Build a presentation slide set to inform the intern class about DeepStream and YOLOv3
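    On output processing: deepstream-app can dump per-frame bounding boxes as KITTI-format text files, which are easy to post-process. A minimal parser sketch that keeps only the class label and 2D box (field positions follow the KITTI label layout; the vehicle classes of interest are an assumption):

```python
# Sketch: parse KITTI-format detection dumps. deepstream-app can write
# per-frame bounding boxes as KITTI text files (one object per line);
# this minimal parser keeps only the class label and the 2D box corners.

def parse_kitti_line(line: str):
    f = line.split()
    label = f[0]
    # KITTI columns 4..7 are the 2D box: left, top, right, bottom (pixels).
    left, top, right, bottom = map(float, f[4:8])
    return {"label": label, "bbox": (left, top, right, bottom)}

def count_labels(lines, wanted=("car", "truck", "bus")):
    """Tally detections per vehicle class of interest (classes assumed)."""
    counts = {w: 0 for w in wanted}
    for ln in lines:
        obj = parse_kitti_line(ln)
        if obj["label"].lower() in counts:
            counts[obj["label"].lower()] += 1
    return counts
```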

    DeepStream and YOLOv3 Overview + Demonstration Presentation Slides: https://docs.google.com/presentation/d/1HxFIeoxCXxvbDuS0BVnAreUocIz04w508vAwigs4EFs/edit?usp=sharing

    Video recording of the DeepStream and YOLOv3 Overview + Demonstration Presentation: https://drive.google.com/file/d/13fkoHQgZHS0HY7QQ2-tXj8-ZI1u2jp4N/view


    Week 4 Weekly Meeting Presentation: https://docs.google.com/presentation/d/1R50VqBbzwy0204_ZUZyb323N7cR4mwhgdUcYBD7yOGE/edit?usp=sharing

    Week 4 Team Meeting Presentation: https://docs.google.com/presentation/d/1fVNlO-hJczEXf4CQadozM4927N1Ghm42NzTR0tRnLDw/edit?usp=sharing

    Week 5 Activities

    • Keep trying to get YOLOv4 running as a DeepStream app
    • Augment DeepStream YOLO inference output with bounding box class confidence scores
    • Begin setup of a pub/sub system for inference output
    • Investigate ways of recombining inference output with input video stream (NTP/OpenCV)
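    For the last bullet: when the sender and receiver are NTP-synced, each frame and each inference message can carry a capture timestamp, and the two streams can be recombined by nearest-timestamp matching within a tolerance. A sketch; the tolerance (roughly one frame at 30 fps) and names are illustrative assumptions:

```python
# Sketch: pair inference messages with video frames by nearest timestamp,
# assuming both machines are NTP-synced and each frame/message carries a
# capture timestamp in seconds. Tolerance and names are assumptions.

import bisect

def match_frames(frame_ts, inference_ts, tolerance=0.04):
    """Return {inference index: frame index} for matches within tolerance.
    frame_ts must be sorted ascending (frames arrive in order)."""
    matches = {}
    for i, t in enumerate(inference_ts):
        j = bisect.bisect_left(frame_ts, t)
        best = None
        # Only the neighbors around the insertion point can be nearest.
        for k in (j - 1, j):
            if 0 <= k < len(frame_ts):
                if best is None or abs(frame_ts[k] - t) < abs(frame_ts[best] - t):
                    best = k
        if best is not None and abs(frame_ts[best] - t) <= tolerance:
            matches[i] = best
    return matches
```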

    Week 5 Weekly Meeting Presentation: https://docs.google.com/presentation/d/1lUB_HH3MoxlQUo5O9BN-lFMwreRaF0qPwgySyKkmjIc/edit?usp=sharing

    Week 5 Team Meeting Presentation: https://docs.google.com/presentation/d/16W6Lp8ouqKu9JPEil2aFbliPtWatrIM56krBP5P6zMc/edit?usp=sharing

    Week 6 Activities

    • Implement a publisher within the DeepStream app using high-level C bindings for ZeroMQ provided by CZMQ
    • Attempt to run the DeepStream app on a live video stream
    • Investigate ways to sync video frames and inferred bounding boxes on different machines synced via NTP (Network Time Protocol)
    • Continue working with OpenCV in order to add bounding box information to the input video stream
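    The publisher above was written in C with CZMQ; the shape of the messages matters more than the language. One plausible wire format is a topic string plus a JSON payload carrying the NTP-based capture timestamp, so a subscriber can match boxes back to frames. A Python sketch of that format; the schema is an illustrative assumption, not the project's actual CZMQ message layout:

```python
# Sketch: a candidate wire format for per-frame inference results: a topic
# string plus a JSON payload with the capture timestamp and the boxes.
# The schema is an illustrative assumption.

import json

TOPIC = "detections"

def encode(frame_ts, boxes):
    """boxes: list of (label, left, top, right, bottom, confidence)."""
    payload = {"ts": frame_ts,
               "objects": [{"label": l, "bbox": [x1, y1, x2, y2], "conf": c}
                           for (l, x1, y1, x2, y2, c) in boxes]}
    return TOPIC, json.dumps(payload).encode()

def decode(topic, raw):
    assert topic == TOPIC
    return json.loads(raw.decode())
```

    With ZeroMQ PUB/SUB, publishing the topic as the first message frame lets subscribers filter by prefix on the receiving side.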

    Week 6 Weekly Meeting Presentation: https://docs.google.com/presentation/d/1ZGjBbafobl9CBx4RCXkqRufrvhkJglMhrGW5I7ZSo-w/edit?usp=sharing

    Week 6 Team Meeting Presentation: https://docs.google.com/presentation/d/1YTUnloztvLmBBdHV84G-aswjYQllsdTe1CvybC8uxu0/edit?usp=sharing

    Week 7 Activities

    • Start implementing the subscriber class (download the ZMQPP ZeroMQ library and set up the baseline/barebones necessities)
    • Continue developing the publisher class in ZeroMQ
    • Continue working on overlaying bounding boxes on the video stream using multi-threading and synchronization primitives (mutexes, condition variables)
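    The mutex/condition-variable pattern mentioned above usually takes the form of a bounded queue between a decoding thread and a drawing thread. A Python sketch of that pattern (the class and sizes are illustrative assumptions):

```python
# Sketch: a bounded frame/metadata queue guarded by a mutex and condition
# variable. One thread pushes decoded frames, another pops them to draw
# bounding boxes. Names and the queue bound are illustrative assumptions.

import threading
from collections import deque

class FrameQueue:
    def __init__(self, maxlen=30):
        self._q = deque()
        self._maxlen = maxlen
        self._cv = threading.Condition()   # mutex + condition variable

    def put(self, item):
        with self._cv:
            while len(self._q) >= self._maxlen:
                self._cv.wait()            # block until a slot frees up
            self._q.append(item)
            self._cv.notify_all()

    def get(self):
        with self._cv:
            while not self._q:
                self._cv.wait()            # block until an item arrives
            item = self._q.popleft()
            self._cv.notify_all()
            return item
```

    The bound applies backpressure: if drawing falls behind, the decoder blocks in put() instead of growing memory without limit.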

    Week 7 Weekly Meeting Presentation: https://docs.google.com/presentation/d/1u7kZZMSVsy16dc8yXuwwEzmmHIA-N70FDFVEi0LD8Wg/edit?usp=sharing

    Week 7 Team Meeting Presentation: https://docs.google.com/presentation/d/1TjGrt2WbLwWloTiuHCarV99iBq0aVN0XHchZYNPv9Us/edit?usp=sharing
