Changes between Version 4 and Version 5 of Other/Summer/2025/r3


Timestamp: Jun 23, 2025, 6:04:50 PM
Author: jam1092

  • Other/Summer/2025/r3

||= Learning to Help (L2H) =||= Feature Extraction for Distributed Systems (PCA) =||
|| The recently proposed Learning to Help (L2H) model trains a server model given a fixed local (client) model. This differs from the Learning to Defer (L2D) framework, which trains the client for a fixed (expert) server. L2H demonstrates its applicability in a number of scenarios of practical interest in which access to the server may be limited by cost, availability, or policy. || Implement a distributed feature extraction method, specifically Principal Component Analysis (PCA), on the Orbit testbed. Enable multiple nodes in the Orbit network to collaboratively learn the eigenvectors that reduce the dimensionality of new data samples. This compressed data is then fed into a pre-trained machine learning model for inference. The central idea is that collaboration among nodes can speed up learning these eigenvectors, improving the efficiency of the learning and inference tasks. ||
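The collaborative eigenvector-learning idea in the PCA column can be sketched in a few lines. This is a minimal single-process simulation, assuming each node shares only its sample count, feature sums, and scatter matrix; the three-node split, shapes, and names are illustrative, not the testbed implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data shards, one per Orbit node (real shards come from the testbed).
shards = [rng.normal(size=(100, 8)) for _ in range(3)]

# Each node reports cheap local statistics instead of shipping raw samples.
stats = [(X.shape[0], X.sum(axis=0), X.T @ X) for X in shards]

# Aggregation step: recover the global mean and covariance from the sums.
n = sum(cnt for cnt, _, _ in stats)
mean = sum(s for _, s, _ in stats) / n
cov = sum(S for _, _, S in stats) / n - np.outer(mean, mean)

# The top-k eigenvectors form the shared projection every node can reuse.
k = 3
eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
components = eigvecs[:, ::-1][:, :k]            # top-k, descending order

# Compress a new sample before handing it to the pre-trained model.
x_new = rng.normal(size=8)
z = (x_new - mean) @ components                 # shape (3,)
```

Because scatter matrices add exactly, this aggregation reproduces the covariance of the pooled data, so the eigenvectors match what centralized PCA would find on the combined dataset.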
     
[https://docs.google.com/presentation/d/1CoeVqW_KkwaUJu5cP5dy_3ICegnhyhhBIlqUHjwtDFQ/edit?usp=sharing Week 3 Slides]

[https://docs.google.com/presentation/d/1pnY_DhRFG13IRgjqnUoJyrN_iZFM_V6ZZzkz9gru-JI/edit?usp=sharing Week 3 Content Slides]

- Officially split into sub-groups \\ - L2H (Joshua & Madhav) \\ - PCA (Aayan, Nihal, & Hasan)
     
=== Week 4 ===
[https://docs.google.com/presentation/d/1m7BtaOoItuCyNeE5cEjFde8bFQstZb9ObNr1WhVC1B0/edit?usp=sharing Week 4 Slides]

[https://docs.google.com/presentation/d/17FawluRSIRMfZgn4Bd5y3A1g9PfSynj2MYy2c-dy5uQ/edit?usp=sharing Week 4 Content Slides]

Created the Project Wiki you are currently viewing.
||= L2H =||= PCA =||
|| - Tested PTP on one node and got negative latencies \\ - Switched to a monotonic clock \\ - Adjusted cost \\ - Ran L2H across 2 nodes \\ - Graphed results of L2H || - Made distributed code \\ - Implemented PCA on one node \\ - Implemented kPCA on one node ||
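The negative-latency fix in the L2H column comes down to the choice of clock: wall-clock time (`time.time()`) can be stepped backwards by NTP/PTP adjustments, while `time.monotonic()` never decreases. A minimal sketch of single-node round-trip timing, where `request_fn` is a hypothetical stand-in for the actual send/receive call:

```python
import time

def measure_rtt(request_fn):
    """Time one round trip using the monotonic clock.

    time.time() can jump backwards when PTP/NTP steps the system clock,
    which is one way to observe "negative latencies"; time.monotonic()
    is guaranteed non-decreasing, so this difference is always >= 0.
    """
    start = time.monotonic()
    request_fn()                       # e.g. send to the other node, wait for the reply
    return time.monotonic() - start

# Illustrative stand-in for a network round trip.
rtt = measure_rtt(lambda: time.sleep(0.01))
```

Note that monotonic timestamps are only comparable within one machine, so this measures round trips on a single node rather than one-way latency between nodes.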
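For the single-node kPCA entry, the kernel trick replaces the covariance eigenproblem with an eigendecomposition of the centered kernel matrix. A plain-NumPy sketch with an RBF kernel; the dataset, `gamma`, and `k` are illustrative:

```python
import numpy as np

def rbf_kpca(X, k=2, gamma=0.5):
    """Project X onto its top-k kernel principal components (RBF kernel)."""
    # Pairwise squared distances -> RBF kernel matrix.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-gamma * sq)
    # Center the kernel matrix in feature space.
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Top-k eigenpairs of the centered kernel give the projections directly.
    vals, vecs = np.linalg.eigh(Kc)                 # ascending order
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]
    return vecs * np.sqrt(np.maximum(vals, 0.0))    # (n, k) projected points

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))    # illustrative single-node dataset
Z = rbf_kpca(X)                 # (50, 2) nonlinear features
```

Unlike linear PCA, there is no explicit eigenvector to ship to other nodes; projecting a new sample requires kernel evaluations against the training points, which is one reason to prototype it on a single node first.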
=== Week 5 ===
[https://docs.google.com/presentation/d/10QSHvUPM6LBKI4O6dKvgsKKWQAxWdv9wpkmh8T4fspk/edit?usp=sharing Week 5 Slides]

||= L2H =||= PCA =||