= Privacy Leakage Study and Protection for Virtual Reality Devices

{{{#!html

Advisor: Dr. Yingying (Jennifer) Chen

Mentors: Changming Li<sup>GR</sup>, Honglu Li<sup>GR</sup>, Tianfang Zhang<sup>GR</sup>

Team: Dirk Catpo Risco<sup>GR</sup>, Brody Vallier<sup>HS</sup>, Emily Yao<sup>HS</sup>

Final Poster
}}}

== Project Overview
Augmented reality/virtual reality (AR/VR) is used for many purposes, ranging from communication and tourism all the way to healthcare. Accessing the built-in motion sensors does not require user permissions, since most VR applications need this information in order to function. However, this introduces privacy vulnerabilities: zero-permission motion sensors can be used to infer live speech, which is a problem when that speech includes sensitive information.

== Project Goal
The purpose of this project is to extract motion data from an AR/VR device's inertial measurement unit (IMU) and then input this data to a large language model (LLM) to predict what the user is doing.

== Weekly Updates

=== Week 1
**[https://docs.google.com/presentation/d/1VZrjZfJtpEVUlbCmo1g7Ra4WfqlPO7QHdEdDePhxg54/edit#slide=id.g20dd9abe089_0_0, Week 1 Presentation]**

**Progress**
* Read research paper ![1] regarding an eavesdropping attack called Face-Mic

**Next Week Goals**
* Meet with our mentors to get more information on the duties and expectations of our project

=== Week 2
**[https://docs.google.com/presentation/d/1ks7pjpWpulA2y2GsImNmkVBnvmLo06NcEL-JPvpfuwg/edit#slide=id.g20de48f9cd7_0_70, Week 2 Presentation]**

**Progress**
* Read research paper ![2] regarding LLMs comprehending the physical world
* Built a connection between the research paper and the privacy concerns of AR/VR devices

**Next Week Goals**
* Get familiar with the AR/VR device:
  * Meta Quest
  * How to use the device
  * Configure settings on the host computer
* Extract motion data from the IMU
  * Connect to the motion sensor application programming interface (API) to access data
  * Develop a data processing method

=== Week 3
**[https://docs.google.com/presentation/d/1HyIgimaEBOFLwYhSPUpR2JkvApu04FP9c8duMg6zeYc/edit#slide=id.g20de48f9cd7_0_70, Week 3 Presentation]**

**Progress**
* Set up the host computer and Android Studio environment
* Started extracting data from the inertial measurement unit (IMU)
  * [[Image(Week_3_Extracting_Data_IMU.png, width=500, height=400)]]
* Recorded and ran trials of varying head motions
  * [[Image(Week_3_Head_Forward.gif, width=250, height=400)]][[Image(Week_3_Head_Rotation.gif, width=250, height=400)]]
  * [[Image(Week_3_Acc_Graphs.png, width=500, height=400)]][[Image(Week_3_Gyro_Graphs.png, width=500, height=400)]]

**Next Week Goals**
* Run more tests to collect more data
* Design more motions for data collection
  * Different head motions
    * Rotational
    * Linear
  * Combinations of head motions
    * Looking left and then right
    * Looking up and then down

=== Week 4
**[https://docs.google.com/presentation/d/1KRUhHDRpHhzC8x8RDTdJcpnMpVKybfsiIKIJ-DGMdBU/edit#slide=id.g20de48f9cd7_0_70, Week 4 Presentation]**

**Progress**
* Designed and collected more motion data
  * Looking up, then back to the middle
  * Looking right, then back to the middle
  * Moving the head around in a shape
  * Moving the head back and forward
* Used MATLAB noise-removal functions to clean the signals (see the filtering sketch after this list)
  * Original
    * [[Image(Week_4_Original_Graph.png, width=500, height=150)]]
  * Smooth
    * [[Image(Week_4_Original_Smooth_Graph.png, width=500, height=150)]]
  * Lowpass
    * [[Image(Week_4_Original_Smooth_Lowpass_Graph.png, width=500, height=150)]]
  * Findpeaks
    * [[Image(Week_4_Original_Smooth_Lowpass_Findpeaks_Graph.png, width=500, height=150)]]
* 3D visual of acceleration to show time and position
  * [[Image(Week_4_3D_XYZ_Graph.png, width=500, height=400)]][[Image(Week_4_3D_XY_Graph.png, width=500, height=400)]]
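
The Week 4 cleaning steps (smooth, lowpass, findpeaks) were done with MATLAB's built-in functions; the snippet below is a minimal Python/SciPy sketch of an equivalent pipeline. The file name, column layout, sampling rate, and filter parameters are assumptions for illustration only, not the exact values used in the project.

{{{#!python
# Sketch of the Week 4 cleaning pipeline (smooth -> lowpass -> findpeaks) in SciPy.
# File name, column names, sampling rate, and filter settings are illustrative only.
import numpy as np
import pandas as pd
from scipy.signal import butter, filtfilt, find_peaks

FS = 100.0  # assumed IMU sampling rate in Hz

df = pd.read_csv("head_motion.csv")      # hypothetical export with columns t, ax, ay, az
acc_z = df["az"].to_numpy()

# 1) Smooth: moving average, comparable to MATLAB's smooth()
win = 5
smoothed = np.convolve(acc_z, np.ones(win) / win, mode="same")

# 2) Lowpass: 4th-order Butterworth with a 5 Hz cutoff, comparable to MATLAB's lowpass()
b, a = butter(N=4, Wn=5.0 / (FS / 2), btype="low")
filtered = filtfilt(b, a, smoothed)

# 3) Findpeaks: locate motion peaks, comparable to MATLAB's findpeaks()
peaks, _ = find_peaks(filtered, height=0.5, distance=int(0.5 * FS))
print(f"detected {len(peaks)} peaks at sample indices {peaks}")
}}}
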
**Next Week Goals**
* Find a way to get hand motion data using Android Studio
* Work on fixed prompts to get accurate LLM results using ChatGPT 4o and ChatGPT 4

=== Week 5
**[https://docs.google.com/presentation/d/1Ub-aKYOBnlRYxKw9RfWrc-aVimDxZ3YlwLOPu0zaYlc/edit#slide=id.g2e889759ea4_0_30, Week 5 Presentation]**

**Progress**
* Enabled hand motion data collection using the VR device
  * Utilized Android Studio and !VrApi to access the VR controller interface and extract motion data
* Conducted additional motion experiments to gather more comprehensive data sets
  * Motions with both head and hand activities
  * [[Image(Week_5_Three_Motions.gif, width=250, height=400)]]
* Implemented 3D plots to visualize and analyze hand motion data for accuracy
  * [[Image(Week_5_Left_3D_Graph.png, width=500, height=400)]][[Image(Week_5_Head_3D_Graph.png, width=500, height=400)]][[Image(Week_5_Right_3D_Graph.png, width=500, height=400)]]

**Next Week Goals**
* Utilize motion research paper ![3] to model more motion activities
* Start building a CNN model that can recognize activity based on motion data

=== Week 6
**[https://docs.google.com/presentation/d/1RKGL3x1_bact6RlaXQpRFMf6e8vnYp15zUTWYiKm1UY/edit#slide=id.g2e889759ea4_0_30, Week 6 Presentation]**

**Progress**
* Made a list of different motion data to capture and train a convolutional neural network (CNN)
* Researched previous work on activity recognition from raw motion data and CNNs
* Specifics of the motion data
  * Samples: 250 per motion
  * Users: Dirk, Brody, and Emily
  * Motions: front raise, side raise, head right, head left, head up, and head down
  * [[Image(Week_6_Front_Raise.gif, width=250, height=400)]][[Image(Week_6_Side_Raise.gif, width=250, height=400)]][[Image(Week_6_Head_Right.gif, width=250, height=400)]][[Image(Week_6_Head_Left.gif, width=250, height=400)]][[Image(Week_6_Head_Up.gif, width=250, height=400)]][[Image(Week_6_Head_Down.gif, width=250, height=400)]]
* Designed a prompt for the LLM and reviewed the output results
  * [[Image(Week_6_LLM_Output.png, width=500, height=400)]]

**Next Week Goals**
* Start collecting more motion data
* Start using the LLM to analyze the collected data
  * Use the designed prompts
  * Design and try more prompt structures and compare LLM responses

=== Week 7
**[https://docs.google.com/presentation/d/1mz0NYl02uq7MzdPm-ePGp7_0iRGWYou3dY3CDbdslGI/edit#slide=id.g2e889759ea4_0_30, Week 7 Presentation]**

**Progress**
* Collected 250 samples from each of the six motions to enlarge the dataset for the CNN task
* Designed prompts with specific parts for LLMs to establish the activity recognition task
  * [[Image(Week_7_Prompt.png, width=600, height=250)]]
* Tested the prompt with different LLMs
  * Inaccurate results from ChatGPT 4o using the side raise motion
    * [[Image(Week_7_ChatGPT_Output.png, width=800, height=400)]][[Image(Week_6_Side_Raise.gif, width=250, height=400)]]
  * Inaccurate results from Gemini Advanced using the side raise motion
    * [[Image(Week_7_Gemini_Output.png, width=800, height=400)]][[Image(Week_6_Side_Raise.gif, width=250, height=400)]]

**Next Week Goals**
* Improve the prompt design to get more accurate prediction results from LLMs
* Begin developing a CNN using the samples collected from the six motion patterns

=== Week 8
**[https://docs.google.com/presentation/d/110vsm3g8x1BW4J_OO6c98H_GSFWvOW5-4_shkLrZxmg/edit#slide=id.g2e889759ea4_0_30, Week 8 Presentation]**

**Progress**
* Built a 1-dimensional (1D) convolutional neural network (CNN) for activity recognition (see the sketch after this list)
* Continued improving the prompt design to get more accurate prediction results from the LLM
* Set the CNN as a baseline for evaluating the LLM's performance
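
As a rough illustration of the Week 8 classifier, here is a minimal 1D CNN sketch in PyTorch for six-class activity recognition on fixed-length IMU windows. The assumed input shape (6 channels for the 3-axis accelerometer plus 3-axis gyroscope, 200 samples per window) and all layer sizes are illustrative; the project's actual architecture and hyperparameters may differ.

{{{#!python
# Minimal 1D-CNN sketch for six-class activity recognition on IMU windows.
# Assumed input: (batch, 6 channels = acc xyz + gyro xyz, 200 samples per window).
# Layer sizes and hyperparameters are illustrative, not the project's exact model.
import torch
import torch.nn as nn

class ActivityCNN(nn.Module):
    def __init__(self, n_channels: int = 6, n_classes: int = 6, window: int = 200):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (window // 4), 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One illustrative training step on random data standing in for real IMU windows.
model = ActivityCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 6, 200)      # batch of 8 windows
y = torch.randint(0, 6, (8,))   # labels: front raise, side raise, head right/left/up/down
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"dummy training loss: {loss.item():.3f}")
}}}
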
**Next Week Goals**
* Use MATLAB to convert the time-domain motion data into the frequency domain for a richer data representation (a Python sketch of this conversion follows this list)
* Improve the LLM results (78.18%) toward the CNN results (98.22%) by refining the fixed prompt
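
One of the goals above is converting the time-domain IMU signals into the frequency domain. The project plans to do this in MATLAB; the short NumPy sketch below only illustrates the idea on a synthetic accelerometer window with an assumed 100 Hz sampling rate.

{{{#!python
# Sketch of converting a time-domain IMU window into a frequency-domain
# representation (magnitude spectrum). Sampling rate and signal are assumptions.
import numpy as np

FS = 100.0                       # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / FS)  # 2-second window
acc_z = 0.8 * np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.random.randn(t.size)  # stand-in signal

# Real FFT of the window, with magnitudes and the matching frequency bins
spectrum = np.abs(np.fft.rfft(acc_z)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / FS)

dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"dominant motion frequency: {dominant:.2f} Hz")

# The (freqs, spectrum) pair is one more representation that could be given to
# the LLM prompt or the CNN alongside the raw time-domain samples.
}}}
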

=== Week 9
**[https://docs.google.com/presentation/d/1KE8qQEEzUupf4TrK-Y2B2nJP1qJqPZkHbaJRBsWVK1E/edit#slide=id.g2e889759ea4_0_30, Week 9 Presentation]**

**Progress**
* Placeholder

**Next Week Goals**
* Placeholder

=== Week 10
**[https://docs.google.com/presentation/d/1VZrjZfJtpEVUlbCmo1g7Ra4WfqlPO7QHdEdDePhxg54/edit#slide=id.g20dd9abe089_0_0, Week 10 Presentation]**

**Progress**
* Placeholder

**Next Week Goals**
* Placeholder

== Links to Presentations
{{{#!html

Week 1 Week 2 Week 3 Week 4 Week 5 Week 6 Week 7 Week 8 Week 9 Week 10 Final Presentation

}}}

== References
{{{#!html

[1] Shi, C., Xu, X., Zhang, T., Walker, P., Wu, Y., Liu, J., Saxena, N., Chen, Y. and Yu, J., 2021, October. Face-Mic: inferring live speech and speaker identity via subtle facial dynamics captured by AR/VR motion sensors. In Proceedings of the 27th Annual International Conference on Mobile Computing and Networking (pp. 478-490).

[2] Xu, H., Han, L., Yang, Q., Li, M. and Srivastava, M., 2024, February. Penetrative AI: Making LLMs comprehend the physical world. In Proceedings of the 25th International Workshop on Mobile Computing Systems and Applications (pp. 1-7).

[3] Garcia, M., Ronfard, R. and Cani, M.P., 2019, October. Spatial motion doodles: Sketching animation in vr using hand gestures and laban motion analysis. In Proceedings of the 12th ACM SIGGRAPH Conference on Motion, Interaction and Games (pp. 1-10).

[4] Moya Rueda, F., et al., 2018. Convolutional neural networks for human activity recognition using body-worn sensors. Informatics, 5(2). MDPI.

[5] Brownlee, J., 2020, August. 1D convolutional neural network models for human activity recognition. MachineLearningMastery.com. machinelearningmastery.com/cnn-models-for-human-activity-recognition-time-series-classification/#comments.

}}}