== Real-Time Cyber Physical Systems Application on !MobilityFirst ==
[[TOC(Other/Summer/2015*, depth=3)]]
=== Github Repo ===
https://github.com/MrHohn/opencv-CPS
=== Introduction ===
Most Cyber-Physical Systems (http://en.wikipedia.org/wiki/Cyber-physical_system) are characterized by stringent network requirements in terms of both latency (i.e. response times under 100 ms) and scalability (i.e. trillion-order scalability for CPS devices/objects).
MobilityFirst, through its name-based Virtual Network Architecture, which provides Application Specific Routing (ASR) to services and applications, can be exploited to help these applications meet their requirements.
The goal of the project is to implement a CPS Application based on computer vision (i.e. object recognition) that will be used in conjunction with MobilityFirst’s Virtual Network to showcase the benefits of the network service.
=== Preliminary Goal ===
[[Image(preliminary-goal.JPG, 800px)]]
=== Outline of the Project ===
[[Image(outline.JPG, 800px)]]
=== Tasks ===
* '''Part A: Cyber Physical System (Higher Priority Tasks)'''
  * Get familiar with the available camera system
  * Implement an application that transmits video over the network in a standard format. Requirements:
    * Control of the frames per second transmitted over the network
    * Potentially start with transmitting single frames (i.e. still pictures)
  * Implement a server application for object recognition. Standard libraries are available. References:
    * http://dl.acm.org/citation.cfm?id=2742663
    * http://synrg.csl.illinois.edu/papers/overlay.pdf
  * Collect a training set of objects/buildings
  * Implement a simple graphical interface to display processing results
* '''Part B: MobilityFirst'''
  * Get familiar with MobilityFirst and its prototype
  * Run basic experiments on ORBIT
  * Get familiar with the MobilityFirst Network API
  * Test basic applications using MobilityFirst's VN with ASR
* '''Integration'''
  * Port the CPS application to the MobilityFirst API. Two options:
    * Through an IP-to-MF proxy and vice versa
    * Replace the network logic with a native MF implementation (preferred option)
  * Test on an ORBIT MF topology
  * Run on top of the MobilityFirst VN with ASR
=== Image Processing ===
We use the SURF (Speeded-Up Robust Features) algorithm for object identification and matching.
The core idea of SURF can be summarized as follows:
 1. Use the Hessian matrix and scale space to calculate the key points of an image
 2. Use SURFDescriptorExtractor to compute the feature vectors and complete the related calculations
 3. Use BruteForce to match feature vectors between two images
 4. Use drawMatches to draw the matched key points
From the image-processing perspective, each picture consists of n × m pixels, and for each pixel a corresponding determinant value of the Hessian matrix can be computed. Given a point p = (x, y) in an image [1], the Hessian matrix H(p, σ) at point p and scale σ is defined as follows:
[[Image(formula1.png)]][[BR]]
where [[Image(formula2.png)]] etc. are the second-order partial derivatives of the grayscale image at scale σ.
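As a minimal illustration of the determinant computation, the second-order derivatives at a single pixel can be approximated with central finite differences. (SURF itself approximates the Gaussian second derivatives with box filters over an integral image; the function below is a hypothetical pure-Python analogue that ignores the scale σ.)

```python
def hessian_determinant(img, x, y):
    """Approximate det(H) at pixel (x, y) of a grayscale image given as
    a list of rows, using central finite differences for Lxx, Lyy, Lxy."""
    L = lambda i, j: float(img[j][i])  # intensity at column i, row j
    Lxx = L(x + 1, y) - 2 * L(x, y) + L(x - 1, y)
    Lyy = L(x, y + 1) - 2 * L(x, y) + L(x, y - 1)
    Lxy = (L(x + 1, y + 1) - L(x + 1, y - 1)
           - L(x - 1, y + 1) + L(x - 1, y - 1)) / 4.0
    return Lxx * Lyy - Lxy ** 2

# A bright dot on a dark background yields a strong blob response:
dot = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
# hessian_determinant(dot, 1, 1) -> 324.0  (Lxx = Lyy = -18, Lxy = 0)
```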
After the determinant value of each pixel has been calculated, a scale space is constructed from repeatedly smoothed (Gaussian-filtered) versions of the image. The scale space can be regarded as an image pyramid built from multiple layers of the image at distinct scales. The determinant value of each pixel is then compared with those of its 26 neighbours: 9 in the scale above, 9 in the scale below, and 8 surrounding it in the same scale. A pixel whose value is the maximum among these neighbours becomes a key point.

From a circular region around each key point, its dominant orientation is determined; a square region aligned with this orientation is then used to extract the SURF descriptor, i.e. the key point's feature vector. Once all feature vectors of both images have been computed in this way, the BruteForce algorithm matches them by exhaustively computing the Euclidean distance between every pair of vectors; the two feature vectors with the shortest distance form a matched pair. Finally, drawMatches is used to draw the matched key points.
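The brute-force matching step amounts to an exhaustive nearest-neighbour search under Euclidean distance. A minimal sketch (a hypothetical helper operating on plain lists rather than OpenCV descriptor matrices):

```python
import math

def brute_force_match(desc_a, desc_b):
    """For each feature vector in desc_a, return the index of its nearest
    neighbour in desc_b under Euclidean distance (exhaustive search)."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return [min(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
            for d in desc_a]

# Example: the first vector is closest to desc_b[1], the second to desc_b[0].
pairs = brute_force_match([[0.0, 0.0], [5.0, 5.0]],
                          [[4.0, 4.0], [1.0, 0.0]])
# pairs == [1, 0]
```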
=== Weekly Summary ===
'''Week 1'''
 * Familiarized with MobilityFirst
 * Completed the ORBIT tutorials
'''Week 2'''
 * Explored object-recognition algorithms
 * Developed the image processing with basic functionality
 * Implemented camera sampling apps with an integrated webcam and mjpg-streamer
'''Week 3*'''
 * Further development of the image-processing and camera apps
'''Week 4*'''
 * Completed the integration of the client and server parts; the system supports real-time video transmission and image matching on Linux
'''Week 5-6*'''
 * Implemented video transmission over MobilityFirst, replacing the TCP/IP protocol
 * Added a message-distribution API to the MobilityFirst code so that each connecting socket is assigned a thread number as an identifier
 * Revised the image-processing code to support multiple clients while optimizing memory usage
 * Started to learn the concept of fog computing

''* To be updated''
=== Future Goals ===
 * Incorporate Google Glass for live object recognition and information display
 * Build the CPS project on the Android platform
 * Client-side low-resolution object verification for coherent information display
=== Team ===
[[Image(Karthic.png, 200px)]] [[Image(Wuyang.png, 222px)]] [[Image(IMG_5284.JPG, 218px)]] [[Image(Shan.png, 210px)]] [[Image(Avi.png, 222px)]] [[BR]]
{{{
#!html
Karthikeyan Ganesan &nbsp; Wuyang Zhang &nbsp; Zihong Zheng &nbsp; Shantanu Ghosh &nbsp; Avi Cooper
}}}