Changes between Version 90 and Version 91 of Other/Summer/2023/Inference


Timestamp: Jul 30, 2023, 9:16:09 PM
Author: tm897
Comment:
Legend: Unmodified | Added | Removed | Modified
  • Other/Summer/2023/Inference

== Overview
As machine learning models become increasingly advanced and complex, running them on less powerful devices is becoming increasingly difficult, especially when latency matters. However, it is also inefficient to run everything in the cloud, as that creates too much network traffic. A viable solution to this problem is edge computing, where we use the edge (the networks between user devices and the cloud) for computation.

1. Trained a small and a large neural network (DenseNet and MobileNetV2) on the CIFAR10 dataset
2. Performed PCA and SVM experiments to familiarize ourselves with PyTorch
3. Loaded the MNIST database (images) onto an ORBIT node
4. Connected two nodes with a client-server architecture and extracted timing and accuracy measurements
5. Compared the performance of both neural networks on the CIFAR10 dataset
  * Established a connection between two nodes
  * Communicated test data between nodes to compare the accuracy and delay of our NN models
6. Worked with professors/mentors and read papers to understand the concepts of early exit, split computing, the accuracy/latency tradeoff, and distributed deep neural networks over the edge cloud
7. Split the ResNet-18 NN across two different devices using split computing and ran an inference across a network
8. Used the Network Time Protocol (NTP) and sent data in "packages" (chunks) to collect latency and delay data
9. Explored different research questions with the data collected: __________
10. Limited CPU power in the terminal to imitate mobile devices
11. Implemented different confidence-based threshold values for deciding when to send data to the edge and server for inference
  * Generated graphs for threshold vs. latency, accuracy vs. latency, ____ (LINK TO GRAPHS)
12. Retrained the neural network to achieve 88% accuracy and collected new graphs (LINK)
13. Introduced a delay in inference as well as data transfer to simulate a queue
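The split-computing idea in step 7 can be sketched in plain Python: a model is treated as an ordered list of layers, a head runs on the device up to a chosen split point, and the intermediate activation is handed to the server for the tail. The toy scaling "layers" below stand in for ResNet-18 blocks; all names are illustrative, not the project's actual code.

```python
# Minimal sketch of split computing. A "model" is a list of layer
# functions; the head runs on the device, the tail on the server.
# The scaling layers are toy stand-ins for ResNet-18 blocks.

def make_layer(weight):
    # Toy "layer": scale every element of the activation vector.
    return lambda x: [weight * v for v in x]

model = [make_layer(w) for w in (2.0, 0.5, 3.0, 1.0)]

def run_head(x, split_point):
    # Device side: run layers up to the split point.
    for layer in model[:split_point]:
        x = layer(x)
    return x  # intermediate activation, sent over the network

def run_tail(x, split_point):
    # Server side: finish the remaining layers.
    for layer in model[split_point:]:
        x = layer(x)
    return x

inp = [1.0, 2.0]
intermediate = run_head(inp, split_point=2)     # computed on the device
output = run_tail(intermediate, split_point=2)  # computed on the server
full = run_tail(inp, split_point=0)             # whole model in one place
assert output == full  # splitting does not change the result
```

Moving the split point trades device computation against the size of the intermediate activation that must cross the network, which is the latency knob the steps above explore.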
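Step 8's chunked transfer and delay measurement can be illustrated with a local socket pair standing in for the two ORBIT nodes. This is a hedged sketch: the project used NTP-synchronized clocks across two machines, while here a single monotonic clock on one host times the round trip; the 4096-byte chunk size is an arbitrary example.

```python
import socket
import time

def chunk(data, size):
    # Split the payload into fixed-size "packages", as in step 8.
    return [data[i:i + size] for i in range(0, len(data), size)]

# socket.socketpair() stands in for the client/server link between nodes.
sender, receiver = socket.socketpair()
payload = bytes(range(256)) * 40  # 10240-byte stand-in for test data

t0 = time.monotonic()
for c in chunk(payload, 4096):
    sender.sendall(c)
sender.shutdown(socket.SHUT_WR)  # signal end of transmission

received = b""
while True:
    part = receiver.recv(4096)
    if not part:  # empty read: sender closed its end
        break
    received += part
latency = time.monotonic() - t0  # one-clock stand-in for NTP timestamps

assert received == payload
```

With NTP-synced clocks, each chunk can instead carry a send timestamp so the receiver computes per-chunk one-way delay rather than a single round-trip total.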
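The confidence thresholding in step 11 amounts to an early-exit rule: keep the on-device prediction when the small model's softmax confidence clears the threshold, otherwise offload to the larger model on the edge/server. A minimal sketch in plain Python, with made-up logits and a placeholder `server_predict` callback (both illustrative, not the project's data):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def infer(logits_small, server_predict, threshold):
    # Early exit: trust the on-device prediction when its confidence
    # clears the threshold; otherwise offload to the server model.
    probs = softmax(logits_small)
    confidence = max(probs)
    if confidence >= threshold:
        return probs.index(confidence), "device"
    return server_predict(), "server"

# Illustrative cases: one confident, one not (threshold 0.9 is arbitrary).
pred1, where1 = infer([4.0, 0.1, 0.2], lambda: 1, threshold=0.9)  # confident
pred2, where2 = infer([1.0, 0.9, 0.8], lambda: 2, threshold=0.9)  # offloaded
```

Sweeping the threshold then traces out the accuracy/latency tradeoff: a higher threshold sends more samples to the server (higher accuracy, higher delay), which is what the threshold-vs-latency graphs above plot.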

== Week 1
     
== Week 4
**Summary**
  * (Removed) Created and trained 'small' and 'large' neural networks
  * (Removed) Compared their performances on the CIFAR10 dataset
  * (Added) Compared performances of both neural networks on the CIFAR10 dataset
  * Established a connection between two nodes
  * Communicated test data between nodes to compare the accuracy and delay of our NN models