Carnegie Mellon University


October 10, 2023

CMU’s Synergy Lab presents multiple papers on ubiquitous sensing at UbiComp

Researchers from Carnegie Mellon's Systems, Networking, and Energy Efficiency (Synergy) Lab will present several multi-year studies of their work on ubiquitous sensing at this week's ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).

The papers introduce several innovative systems and explain how the data they collect can be turned into useful insights, all while preserving the privacy of the individuals being monitored.

Led by School of Computer Science Associate Professor Yuvraj Agarwal, the Synergy Lab is focused on developing more energy-efficient computing in buildings, improving the security and privacy of Internet of Things (IoT) devices, and advancing mobile systems.

Mites: Design and Deployment of a General-Purpose Sensing Infrastructure for Buildings
Sudershan Boovaraghavan, Chen Chen, Anurag Maravi, Mike Czapik, Yang Zhang, Chris Harrison, Yuvraj Agarwal

There is increasing interest in deploying building-scale, general-purpose, and high-fidelity sensing to drive emerging smart building applications. However, the real-world deployment of such systems has been challenging due to the lack of key system and architectural primitives needed to support these emerging applications. Most existing sensing systems are purpose-built, consisting of hardware that senses a limited set of environmental factors, typically at low fidelity for short-term deployment. Furthermore, prior systems with high-fidelity sensing and machine learning fail to scale effectively and offer few protections, if any, for privacy and security. For these reasons, IoT deployments in buildings are generally short-lived or executed only as a proof of concept.

To address these issues, Agarwal and fellow researchers developed Mites, a scalable, end-to-end hardware-software system for supporting and managing distributed general-purpose sensors in buildings. Their design includes robust protections for privacy and security, along with the features essential for scalable data management and machine learning to support diverse applications in buildings.

“All the sensor data processing happens on the edge, so the raw data collected never leaves the sensor,” said Sudershan Boovaraghavan, a Ph.D. student in the Software and Societal Systems Department. “Only heavily featurized data is sent from the Mites devices securely to on-campus servers, enabling us to ensure building occupants’ privacy.”
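To make the edge-featurization idea concrete, here is a minimal sketch of the pattern Boovaraghavan describes: raw samples are reduced to summary features on the device, and only those features leave it. The window size, feature set and transport shown are illustrative assumptions, not the actual Mites firmware.

```python
# Minimal sketch of on-device featurization, assuming a fixed-size
# sample window and a small statistical/spectral feature set.
import json
import numpy as np

WINDOW = 256  # hypothetical number of raw samples per featurization window

def featurize(window: np.ndarray) -> dict:
    """Reduce a raw sample window to coarse summary features.

    The raw samples never leave this function; only the aggregate
    statistics below would be transmitted off the device.
    """
    spectrum = np.abs(np.fft.rfft(window))
    return {
        "mean": float(window.mean()),
        "std": float(window.std()),
        "min": float(window.min()),
        "max": float(window.max()),
        # Coarse spectral shape rather than the full spectrum.
        "spectral_centroid": float(
            (spectrum * np.arange(len(spectrum))).sum() / (spectrum.sum() + 1e-9)
        ),
    }

def process_sensor_stream(samples: np.ndarray):
    """Featurize fixed-size windows and discard the raw data."""
    for start in range(0, len(samples) - WINDOW + 1, WINDOW):
        features = featurize(samples[start : start + WINDOW])
        # In a real deployment this payload would be encrypted and sent
        # to on-premises servers; here we just print it.
        print(json.dumps(features))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    process_sensor_stream(rng.normal(size=4 * WINDOW))  # simulated raw signal
```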

Researchers deployed the Mites system and 314 Mites devices in CMU’s Tata Consultancy Services Hall. Their comprehensive evaluation of the system used a series of microbenchmarks and end-to-end tests to show how they achieved their design goals. The study’s authors also included five proof-of-concept applications in different domains, including building management and maintenance, occupancy modeling and activity monitoring, to demonstrate the extensibility of the Mites system to support compelling IoT applications. Finally, the researchers recounted the real-world challenges faced and lessons learned over the five-year journey of iteratively designing, developing and deploying their stack to create an unprecedented smart building testbed on the Carnegie Mellon campus.

Overview of the Mites system and deployment. Each room has a Mites device on the wall and in the ceiling, with larger rooms and shared areas having multiple devices in the ceiling. Each device sends an encrypted stream of featurized data for 12 sensor dimensions to the Mites software backend, which provides several key features supporting large-scale data collection and APIs for application development.
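As one hedged illustration of what building on the backend’s application APIs might look like, the sketch below polls the latest featurized readings for a single device. The endpoint, device name and response format are assumptions made for illustration; the paper documents the actual APIs.

```python
# Hypothetical client for a Mites-style backend API. The URL, device ID
# and JSON shape are illustrative assumptions, not the real interface.
import time
import requests

BASE_URL = "https://mites.example.internal/api"  # hypothetical endpoint
DEVICE_ID = "tcs-hall-room-201-ceiling"          # hypothetical device name

def poll_features(device_id: str, interval_s: float = 5.0):
    """Poll the latest featurized readings for one device.

    Each response is assumed to carry one feature vector per sensor
    dimension (the deployment streams 12 dimensions per device).
    """
    while True:
        resp = requests.get(f"{BASE_URL}/devices/{device_id}/features", timeout=10)
        resp.raise_for_status()
        for dimension, features in resp.json().items():
            print(dimension, features)
        time.sleep(interval_s)

if __name__ == "__main__":
    poll_features(DEVICE_ID)
```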


 

VAX: Using Existing Video and Audio-based Activity Recognition Models to Bootstrap Privacy-Sensitive Sensors
Prasoon Patidar, Mayank Goel, Yuvraj Agarwal

The use of audio and video for Human Activity Recognition (HAR) is common, given the richness of the data and the availability of machine learning models pretrained on large sets of labeled training data. However, audio and video sensors also raise significant consumer privacy concerns. Researchers have thus explored alternative data sources and gathering methods that are less privacy-invasive, such as mmWave Doppler radars, IMUs and motion sensors. The key limitation of these approaches, however, is that most of them do not readily generalize across environments and require significant training effort in every home or location to recognize activities accurately. Recent work has proposed cross-modality transfer learning approaches to alleviate the lack of labeled training data, with some success.

In this paper, the researchers generalized this concept to create a novel system called VAX (Video/Audio to ‘X’), in which training labels acquired from existing video and audio models are used to train ML models for a wide range of ‘X’ privacy-sensitive sensors. Notably, once VAX has trained the ML models for the privacy-sensitive sensors, which requires little to no user involvement, the audio and video sensors can be removed to protect the user’s privacy.
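The core bootstrapping loop can be sketched in a few lines: a pretrained audio/video model supplies pseudo-labels that train a classifier over the privacy-sensitive ‘X’ sensor features, after which the camera and microphone are no longer needed. Everything below is illustrative, including the synthetic data and the stubbed A/V model; the real VAX pipeline is considerably more involved.

```python
# Structural sketch of cross-modality bootstrapping with synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
N, ACTIVITIES = 500, 4

# Stand-ins for time-aligned recordings of the same activities.
av_features = rng.normal(size=(N, 32))   # audio/video features
x_features = rng.normal(size=(N, 8))     # privacy-sensitive 'X' sensor features
true_labels = rng.integers(0, ACTIVITIES, size=N)

def pretrained_av_model(features: np.ndarray) -> np.ndarray:
    """Stub for an off-the-shelf A/V activity recognizer.

    Returns noisy copies of the true labels to simulate an imperfect
    pretrained model; in VAX this would be a real pretrained model.
    """
    noisy = true_labels.copy()
    flip = rng.random(len(noisy)) < 0.1  # simulate ~10% label noise
    noisy[flip] = rng.integers(0, ACTIVITIES, size=flip.sum())
    return noisy

# 1. Bootstrap phase: the A/V model provides pseudo-labels used to train
#    a model over the 'X' sensor features, with no manual annotation.
pseudo_labels = pretrained_av_model(av_features)
x_model = RandomForestClassifier(n_estimators=100, random_state=0)
x_model.fit(x_features, pseudo_labels)

# 2. The cameras and microphones can now be removed; the 'X'-only model
#    handles recognition from here on. (Scored on the training windows
#    purely to show the flow, not as a meaningful evaluation.)
print("X-sensor model accuracy:", accuracy_score(true_labels, x_model.predict(x_features)))
```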

The study’s authors built and deployed VAX in 10 participants’ homes while they performed 17 common activities of daily living. The researchers’ evaluation showed that VAX can use its onboard camera and microphone to detect approximately 15 of the 17 activities with an average accuracy of 90%. For the activities that can be detected using a camera and a microphone, VAX trained a per-home model for the privacy-sensitive sensors. These models achieved an average accuracy of 79%, significantly better than a baseline supervised-learning approach of using one labeled instance per activity in each home, while reducing the user burden of providing activity labels by eight times.

TAO: Context Detection from Daily Activity Patterns Using Temporal Analysis and Ontology
Sudershan Boovaraghavan, Prasoon Patidar, Yuvraj Agarwal

Translating fine-grained activity detections, such as a phone ringing or talking interspersed with silence and walking, into semantically meaningful and richer contextual information, like detecting that someone was exercising while having a 20-minute phone call, is essential to enabling a range of health care and human-computer interaction applications. Prior work has proposed building knowledge maps of activity patterns but has had limited success in capturing complex, real-world context patterns.

To move this idea forward, Agarwal, Boovaraghavan and Prasoon Patidar, a Ph.D. student in S3D, present TAO, a hybrid system that leverages both an ontological approach and a novel temporal clustering approach to detect high-level contexts from human activities. TAO characterizes sequential activities that happen one after the other as well as activities that are interleaved or occur in parallel, detecting a richer set of contexts more accurately than prior work.

“TAO enables us to take activities like typing on a keyboard, using a mouse or writing and categorize them into a higher level description, such as working in the office,” said Patidar. “This approach provides the information needed to inform applications without relaying tons of granular information.”
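Patidar’s example can be made concrete with a toy sketch: a hand-written mini-ontology maps each low-level activity to a candidate context, and activities falling within the same time window vote on a context label. TAO’s actual OWL-based ontologies and temporal clustering are far richer; the dictionary, window length and voting rule here are simplifying assumptions.

```python
# Toy context detection: ontology lookup plus fixed-window grouping.
from collections import Counter

# Hypothetical mini-ontology: the context each activity suggests.
ONTOLOGY = {
    "typing": "working_in_office",
    "using_mouse": "working_in_office",
    "writing": "working_in_office",
    "talking": "phone_call",
    "phone_ringing": "phone_call",
    "walking": "exercising",
}

WINDOW_S = 300  # group activities occurring within a 5-minute window

def summarize(window):
    """Label a window with the most common candidate context."""
    votes = Counter(ONTOLOGY.get(activity, "unknown") for _, activity in window)
    return (window[0][0], window[-1][0], votes.most_common(1)[0][0])

def detect_contexts(events):
    """events: list of (timestamp_s, activity) sorted by time."""
    contexts, window = [], []
    for ts, activity in events:
        if window and ts - window[0][0] > WINDOW_S:
            contexts.append(summarize(window))
            window = []
        window.append((ts, activity))
    if window:
        contexts.append(summarize(window))
    return contexts

events = [(0, "typing"), (40, "using_mouse"), (90, "writing"),
          (400, "phone_ringing"), (420, "talking"), (480, "talking")]
for start, end, context in detect_contexts(events):
    print(f"{start:>4}s-{end:>4}s: {context}")
```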

The researchers evaluated TAO on real-world activity datasets, showing that their system achieves, on average, 87% and 80% accuracy for context detection on the CASAS and ExtraSensory datasets, respectively. They also deployed and evaluated TAO in a real-world setting with eight participants using the system for three hours each, demonstrating TAO’s ability to capture semantically meaningful contexts in the real world.

To showcase the usefulness of contexts, the study’s authors prototyped wellness applications that assessed productivity and stress, showing that the wellness metrics calculated using contexts provided by TAO were much closer to the ground truth (within 1.1%) than those from the baseline approach (within 30% on average).

 

Overview of TAO’s system architecture. TAO leverages OWL-based ontologies and temporal clustering approaches to identify contexts from the stream of activities produced by Human Activity Recognition (HAR) systems. The contexts detected by TAO are then sent to the wellness application, which infers productivity and stress.


 

While the three systems presented in these papers work independently of one another, the researchers said there is potential to use the features of TAO and VAX to further strengthen the Mites end-to-end hardware-software system. Each system has the potential to offer building managers and occupants valuable insights and functionality, all while preserving the privacy of individuals working in offices and areas outfitted with the sensors.

“Among many other capabilities, the Mites stack enables users to build custom machine learning models that convert low-level featurized sensor data into activities and inferences they are interested in. However, this is still a manual process, requiring the users to give labels or examples for those activities,” said Agarwal. “VAX can help reduce the labeling effort for privacy-preserving modalities and helps expand the scope of our sensors by training the model using audio and video. Last but not least, TAO enables us to convert these activities into an even higher-level semantic abstraction of context, ultimately supporting smart building applications towards improving occupant wellness and productivity.”
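Agarwal’s description suggests a natural composition of the three systems: Mites features feed an activity model that VAX-style bootstrapping could train without manual labels, and TAO lifts those activities into contexts for wellness applications. The sketch below wires placeholder stand-ins together to show the shape of that pipeline; none of the function names or data correspond to real APIs from the papers.

```python
# Hypothetical end-to-end composition of the three systems; every
# function here is a placeholder stand-in, not a published API.
def mites_feature_stream():
    """Placeholder: yields (timestamp, feature_vector) from Mites devices."""
    yield from [(0, [0.1, 0.9]), (40, [0.2, 0.8]), (400, [0.9, 0.1])]

def activity_model(features):
    """Placeholder for an activity classifier that VAX-style
    bootstrapping could train without manual labels."""
    return "typing" if features[1] > 0.5 else "talking"

def context_of(activities):
    """Placeholder for TAO's ontology + temporal clustering step."""
    return "working_in_office" if activities.count("typing") >= 2 else "phone_call"

activities = [activity_model(f) for _, f in mites_feature_stream()]
print("Detected context:", context_of(activities))  # feeds a wellness app
```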

To learn more about the work taking place at Carnegie Mellon’s Synergy Lab, visit https://www.synergylabs.org.