
Privacy Leakage Study and Protection for Virtual Reality Devices

Advisor: Dr. Yingying (Jennifer) Chen

Mentors: Changming Li (GR), Honglu Li (GR), Tianfang Zhang (GR)

Team: Dirk Catpo Risco (GR), Brody Vallier (HS), Emily Yao (HS)

Final Poster

Project Overview

Augmented reality/virtual reality (AR/VR) is used for many purposes, ranging from communication and tourism to healthcare. Accessing the built-in motion sensors does not require user permission, as most VR applications need this information in order to function. However, this introduces a privacy vulnerability: zero-permission motion sensors can be used to infer live speech, which is a problem when that speech may include sensitive information.

Project Goal

The purpose of this project is to extract motion data from an AR/VR device's inertial measurement unit (IMU) and then feed this data into a large language model (LLM) to predict what the user is doing.
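The pipeline described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the CSV layout (timestamp, ax, ay, az) and the file name are assumptions, not the project's actual logging format.

```python
# Hypothetical sketch: load logged IMU samples and package them as a text
# prompt for an LLM. The CSV layout (t, ax, ay, az) is an assumption.
import csv

def load_imu_csv(path):
    """Read (timestamp, ax, ay, az) rows from a simple CSV log."""
    samples = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            t, ax, ay, az = map(float, row)
            samples.append((t, ax, ay, az))
    return samples

def build_prompt(samples, max_rows=50):
    """Format a slice of the motion data as a fixed classification prompt."""
    lines = [f"{t:.3f},{ax:.4f},{ay:.4f},{az:.4f}"
             for t, ax, ay, az in samples[:max_rows]]
    return ("The following rows are accelerometer readings (time,ax,ay,az) "
            "from a VR headset.\nClassify the user's activity.\n"
            + "\n".join(lines))
```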

Weekly Updates

Week 1

Week 1 Presentation

Progress

  • Read research paper [1] regarding an eavesdropping attack called Face-Mic

Next Week Goals

  • We plan to meet with our mentors and get more information on the duties and expectations of our project

Week 2

Week 2 Presentation

Progress

  • Read research paper [2] regarding LLMs comprehending the physical world
  • Built a connection between the research paper and the privacy concerns of AR/VR devices

Next Week Goals

  • Get familiar with AR/VR device:
    • Meta Quest
    • How to use device
    • Configure settings on host computer
  • Extract motion data from IMU
    • Connect to the motion sensor application programming interface (API) to access data
    • Develop a data processing method

Week 3

Week 3 Presentation

Progress

  • Set up the host computer and Android Studio environment
  • Started extracting data from the inertial measurement unit (IMU)
  • Recorded and ran trials of varying head motions

Next Week Goals

  • Run more tests to collect more data
  • Design more motions for data collection
    • Different head motions
      • Rotational
      • Linear
  • Combinations of head motions
    • Looking left and then right
    • Looking up and then down

Week 4

Week 4 Presentation

Progress

  • Designed and collected more motion data
    • Looking up then back middle
    • Looking right then back middle
    • Moving head around in a shape
    • Moving head back and forward
  • Used MATLAB noise-removal functions to clean the plotted signals
    • Original
    • Smooth
    • Lowpass
    • Findpeaks
  • Created a 3D visualization of acceleration to show time and position
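The MATLAB cleaning steps above (smooth, lowpass, findpeaks) have close Python analogues; the sketch below uses SciPy. The 200 Hz sample rate and 5 Hz cutoff are assumptions, not values taken from the project.

```python
# Python analogues of MATLAB's smooth(), lowpass(), and findpeaks(),
# sketched with SciPy. Sample rate and cutoff are assumed values.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def moving_average(x, window=5):
    """Rough equivalent of MATLAB smooth(): centered moving average."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def lowpass(x, cutoff_hz=5.0, fs_hz=200.0, order=4):
    """Zero-phase Butterworth low-pass, similar in spirit to MATLAB lowpass()."""
    b, a = butter(order, cutoff_hz / (fs_hz / 2), btype="low")
    return filtfilt(b, a, x)

def peak_indices(x, min_height=0.5):
    """Like findpeaks(): indices of local maxima above a threshold."""
    peaks, _ = find_peaks(x, height=min_height)
    return peaks
```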

Next Week Goals

  • Find a way to get hand motion data using Android Studio
  • Work on fixed prompts to get accurate LLM results using ChatGPT 4o and ChatGPT 4

Week 5

Week 5 Presentation

Progress

  • Enabled hand motion data collection using the VR device
    • Utilized Android Studio and VrApi to access the VR controller interface and extract motion data
  • Conducted additional motion experiments to gather comprehensive data sets
    • Motion with both head and hand activities
  • Implemented 3D plots to visualize and analyze hand motion data for accuracy

Next Week Goals

  • Utilize motion research paper [3] to model more motion activities
  • Start building a CNN model that can recognize activity based on motion data

Week 6

Week 6 Presentation

Progress

  • Made a list of different motion data to capture and train a convolutional neural network (CNN)
    • Researched previous work based on raw motion data and CNNs
  • Specifics of the motion data
    • Samples: 250 per motion
    • Users: Dirk, Brody, and Emily
    • Motions: front raise, side raise, head right, head left, head up, and head down
  • Designed a prompt for the LLM and examined the output results
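A fixed prompt for this task can be built from the pieces listed above. The label set mirrors the six motions named in this week's plan, but the template's structure (role, data description, label list, data) is an assumed design, not the project's actual prompt.

```python
# Hypothetical fixed-prompt template for the activity-recognition task.
# Label names come from the wiki; the wording is an assumption.
MOTION_LABELS = ["front raise", "side raise", "head right",
                 "head left", "head up", "head down"]

PROMPT_TEMPLATE = """You are an expert in IMU-based human activity recognition.
The data below is a sequence of accelerometer readings (ax, ay, az)
sampled from a VR headset and its controllers.
Choose exactly one label from: {labels}.
Answer with the label only.

Data:
{data}"""

def make_prompt(readings):
    """readings: iterable of (ax, ay, az) tuples -> complete prompt string."""
    data = "\n".join(f"{ax:.3f},{ay:.3f},{az:.3f}" for ax, ay, az in readings)
    return PROMPT_TEMPLATE.format(labels=", ".join(MOTION_LABELS), data=data)
```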

Next Week Goals

  • Start getting more motion data
  • Start using LLM to analyze the collected data
    • Use the designed prompts
    • Design and try more prompt structures and compare LLM responses

Week 7

Week 7 Presentation

Progress

  • Collected 250 samples for each of the six motions to enlarge the dataset for the CNN task
  • Designed prompts with specific parts for LLMs to establish activity recognition tasks
  • Tested prompt with different LLMs
    • Inaccurate results from ChatGPT 4o using side raise motion
    • Inaccurate results from Gemini Advanced using side raise motion

Next Week Goals

  • Improve the prompt design to get more accurate prediction results from LLMs
  • Begin developing a CNN using samples collected from the six motion patterns

Week 8

Week 8 Presentation

Progress

  • Built a 1-dimensional (1D) convolutional neural network (CNN) for activity recognition
    • 1D CNN Architecture
    • Training and Validation Graph
    • Confusion Matrix
  • Improved the prompt design to get more accurate prediction results from the LLM
    • Previous prompt
    • New Prompt
  • Measured the LLM's accuracy in classifying the six motions with the new prompt
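A 1D CNN of the kind described above can be sketched in PyTorch. The layer sizes, kernel widths, and the 250-sample-by-6-channel input window are assumptions; the wiki does not record the exact architecture.

```python
# Minimal 1D CNN for six-way motion classification, sketched in PyTorch.
# All layer sizes and the input window length are assumed values.
import torch
import torch.nn as nn

class MotionCNN(nn.Module):
    def __init__(self, in_channels=6, num_classes=6, window=250):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),          # 250 -> 125 time steps
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),          # 125 -> 62 time steps
        )
        self.classifier = nn.Linear(64 * (window // 4), num_classes)

    def forward(self, x):             # x: (batch, channels, time)
        h = self.features(x)
        return self.classifier(h.flatten(1))
```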

Next Week Goals

  • Use MATLAB to convert time-domain motion data into the frequency domain for richer data representation
  • Improve LLM results (78.18%) to be more similar to CNN results (98.22%) by improving fixed prompt
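The time-to-frequency conversion planned above is done in MATLAB in the project; a NumPy equivalent is sketched below. The 200 Hz sample rate is an assumption.

```python
# NumPy sketch of converting a time-domain motion signal to the
# frequency domain (the project plans to use MATLAB for this step).
import numpy as np

def to_frequency_domain(x, fs_hz=200.0):
    """Return (frequencies, magnitude spectrum) of a 1D motion signal."""
    spectrum = np.fft.rfft(x - np.mean(x))          # remove DC offset first
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_hz)
    return freqs, np.abs(spectrum)
```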

Week 9

Week 9 Presentation

Progress

  • Built threat models for this AR/VR human activity recognition (HAR) project
  • Conducted feature extraction methods and used a support vector machine (SVM) model to select effective features
  • Improved the LLM fixed-prompt accuracy from the previous 78.18% to 90.6%
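The feature-extraction-plus-SVM step above can be sketched with scikit-learn. Which statistical features the project actually selected is not recorded here; the set below (mean, standard deviation, min, max, RMS per channel) is an assumption.

```python
# Sketch of statistical feature extraction feeding an SVM classifier.
# The feature set is an assumed choice, not the project's confirmed one.
import numpy as np
from sklearn.svm import SVC

def extract_features(window):
    """window: (time, channels) array -> flat statistical feature vector."""
    feats = [
        window.mean(axis=0),
        window.std(axis=0),
        window.min(axis=0),
        window.max(axis=0),
        np.sqrt((window ** 2).mean(axis=0)),   # RMS per channel
    ]
    return np.concatenate(feats)

def train_svm(windows, labels):
    """Fit an RBF-kernel SVM on per-window feature vectors."""
    X = np.stack([extract_features(w) for w in windows])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```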

Next Week Goals

  • Improve the LLM fixed prompt to surpass 90.6% by adding statistical features derived from the SVM results (99.33%) into the prompt

Week 10

Week 10 Presentation

Progress

  • Placeholder

Next Week Goals

  • Placeholder

Links to Presentations

Week 1 Week 2 Week 3 Week 4 Week 5 Week 6 Week 7 Week 8 Week 9 Week 10 Final Presentation

References

[1] Shi, C., Xu, X., Zhang, T., Walker, P., Wu, Y., Liu, J., Saxena, N., Chen, Y. and Yu, J., 2021, October. Face-Mic: inferring live speech and speaker identity via subtle facial dynamics captured by AR/VR motion sensors. In Proceedings of the 27th Annual International Conference on Mobile Computing and Networking (pp. 478-490).

[2] Xu, H., Han, L., Yang, Q., Li, M. and Srivastava, M., 2024, February. Penetrative AI: Making LLMs comprehend the physical world. In Proceedings of the 25th International Workshop on Mobile Computing Systems and Applications (pp. 1-7).

[3] Garcia, M., Ronfard, R. and Cani, M.P., 2019, October. Spatial motion doodles: Sketching animation in VR using hand gestures and Laban motion analysis. In Proceedings of the 12th ACM SIGGRAPH Conference on Motion, Interaction and Games (pp. 1-10).

[4] Moya Rueda, F., et al., 2018. Convolutional neural networks for human activity recognition using body-worn sensors. Informatics, 5(2). MDPI.

[5] Brownlee, J., 2020, 27 August. 1D convolutional neural network models for human activity recognition. MachineLearningMastery.com. machinelearningmastery.com/cnn-models-for-human-activity-recognition-time-series-classification/#comments.
