All Work
-
‘Blend: A Unified Data Discovery System’
“Data discovery is an iterative and incremental process that necessitates the execution of multiple data discovery queries to identify the desired tables from large and diverse data lakes. Current methodologies concentrate on single discovery tasks such as join, correlation, or union discovery. However, in practice, a series of these approaches and their corresponding index structures are necessary. … This paper presents BLEND, a comprehensive data discovery system that empowers users to develop ad-hoc discovery tasks without the need to develop new algorithms or build a new index structure.” Find the paper and full list of authors at ArXiv.
-
‘Detecting Receptivity for mHealth Interventions’
“Just-In-Time Adaptive Interventions (JITAI) have the potential to provide effective support for health behavior by delivering the right type and amount of intervention at the right time. … Previous research has explored the association of context and user-specific traits on receptivity and built machine-learning models to detect receptivity after the study was completed. However, for effective intervention delivery, JITAI systems need to make in-the-moment decisions about a user’s receptivity. In this study, we deployed machine-learning models in a chatbot-based digital coach to predict receptivity for physical-activity interventions.” Find the paper and full list of authors in SIGMOBILE.
-
‘Modeling Self-Propagating Malware With Epidemiological Models’
“Self-propagating malware (SPM) is responsible for large financial losses and major data breaches with devastating social impacts that cannot be overstated. Well-known campaigns such as WannaCry and Colonial Pipeline have been able to propagate rapidly on the Internet and cause widespread service disruptions. To date, the propagation behavior of SPM is still not well understood. … Here, we address this gap by performing a comprehensive analysis of a newly proposed epidemiological-inspired model for SPM propagation, the Susceptible-Infected-Infected Dormant-Recovered (SIIDR) model.” Find the paper and full list of authors at Applied Network Science.
-
‘Latent Space Symmetry Discovery’
“Equivariant neural networks require explicit knowledge of the symmetry group. Automatic symmetry discovery methods aim to relax this constraint and learn invariance and equivariance from data. However, existing symmetry discovery methods are limited to linear symmetries in their search space and cannot handle the complexity of symmetries in real-world, often high-dimensional data. We propose a novel generative model, Latent LieGAN (LaLiGAN), which can discover nonlinear symmetries from data. It learns a mapping from data to a latent space where the symmetries become linear and simultaneously discovers symmetries in the latent space.” Find the paper and list of authors at ArXiv.
-
‘ICML 2023 Topological Deep Learning Challenge: Design and Results’
“This paper presents the computational challenge on topological deep learning that was hosted within the ICML 2023 Workshop on Topology and Geometry in Machine Learning. The competition asked participants to provide open-source implementations of topological neural networks from the literature by contributing to the python packages TopoNetX (data processing) and TopoModelX (deep learning). The challenge attracted twenty-eight qualifying submissions in its two-month duration. This paper describes the design of the challenge and summarizes its main findings.” Find the paper and full list of authors at ArXiv.
-
‘Chameleon: Increasing Label-Only Membership Leakage With Adaptive Poisoning’
“The integration of machine learning (ML) in numerous critical applications introduces a range of privacy concerns for individuals who provide their datasets for model training. One such privacy risk is Membership Inference (MI), in which an attacker seeks to determine whether a particular data sample was included in the training dataset. … MI attacks capitalize on access to the model’s predicted confidence scores to successfully perform membership inference, and employ data poisoning to further enhance their effectiveness. … We show that existing label-only MI attacks are ineffective at inferring membership.” Find the paper and full list of authors at ArXiv.
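The score-based attacks that Chameleon improves upon reduce, at their simplest, to thresholding the model’s confidence on a sample’s true label. The sketch below illustrates that classic baseline only, not Chameleon’s adaptive-poisoning attack; the function name and threshold value are illustrative assumptions:

```python
import numpy as np

def confidence_mi_attack(true_label_confidences, threshold=0.9):
    """Baseline confidence-thresholding membership inference:
    flag a sample as a training-set member when the model's
    confidence on its true label exceeds the threshold.
    Members were memorized during training, so they tend to
    receive higher confidence than held-out samples."""
    return true_label_confidences > threshold

# Toy illustration with hypothetical per-sample confidences
member_conf = np.array([0.98, 0.95, 0.99])      # seen during training
nonmember_conf = np.array([0.60, 0.85, 0.70])   # held out
print(confidence_mi_attack(member_conf))     # flags members
print(confidence_mi_attack(nonmember_conf))  # passes non-members
```

Label-only settings remove access to these confidence scores entirely, which is why the abstract reports that naive label-only attacks fail and motivates the paper’s poisoning-based approach.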
-
Panel discussion for CSCW ’23: ‘Getting Data for CSCW Research’
“This panel will bring together a group of scholars from diverse methodological backgrounds to discuss critical aspects of data collection for CSCW research. This discussion will consider the rapidly evolving ethical, practical, and data access challenges, examine the solutions our community is currently deploying and envision how to ensure vibrant CSCW research going forward.” Find the full list of panelists in the Companion Publication of the 2023 Conference on Computer Supported Cooperative Work and Social Computing.
-
‘Lower Bounds on Anonymous Whistleblowing’
“Anonymous transfer … allows a sender to leak a message anonymously by participating in a public non-anonymous discussion where everyone knows who said what. … The work of [ACM22] presented a lower bound on anonymous transfer, ruling out constructions with strong anonymity guarantees. … They also provided a (heuristic) upper bound, giving a scheme with weak anonymity guarantees. … In this work, we present improved lower bounds on anonymous transfer, that rule out both of the above possibilities.” Find the paper and full list of authors at the Cryptology ePrint Archive.
-
‘Relaxed Octahedral Group Convolution for Learning Symmetry Breaking in 3D Physical Systems’
“Deep equivariant models use symmetries to improve sample efficiency and generalization. However, the assumption of perfect symmetry in many of these models can sometimes be restrictive, especially when the data does not perfectly align with such symmetries. Thus, we introduce relaxed octahedral group convolution for modeling 3D physical systems in this paper. This flexible convolution technique provably allows the model to both maintain the highest level of equivariance that is consistent with data and discover the subtle symmetry-breaking factors in the physical systems.” Find the paper and full list of authors at ArXiv.
-
‘An Example of (Too Much) Hyper-Parameter Tuning in Suicide Ideation Detection’
“This work starts with the TWISCO baseline, a benchmark of suicide-related content from Twitter. We find that hyper-parameter tuning can improve this baseline by 9%. We examined 576 combinations of hyper-parameters: learning rate, batch size, epochs and date range of training data. Reasonable settings of learning rate and batch size produce better results than poor settings.” Find the paper and full list of authors in the Proceedings of the International AAAI Conference on Web and Social Media.
-
‘Unified Concept Editing in Diffusion Models’
“Text-to-image models suffer from various safety issues that may limit their suitability for deployment. Previous methods have separately addressed individual issues of bias, copyright and offensive content in text-to-image models. However, in the real world, all of these issues appear simultaneously in the same model. We present a method that tackles all issues with a single approach. Our method, Unified Concept Editing (UCE), edits the model without training using a closed-form solution and scales seamlessly to concurrent edits on text-conditional diffusion models.” Find the paper and full list of authors at ArXiv.
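The closed-form editing mentioned here belongs to the broader family of key-value weight edits, which admit an explicit least-squares solution. The sketch below shows that general idea for a single linear layer under illustrative assumptions (hypothetical shapes and regularizer `lam`); the exact UCE objective and the cross-attention weights it edits are described in the paper:

```python
import numpy as np

def closed_form_edit(W, K_edit, V_target, K_preserve, lam=1.0):
    """Generic closed-form key-value weight edit: find W' that maps
    edit keys K_edit to target values V_target while keeping outputs
    on K_preserve close to the original W's outputs.

    Minimizes  sum_i ||W' k_i - v_i||^2 + lam * sum_j ||W' k_j - W k_j||^2,
    whose stationarity condition gives a linear system solved below.
    Shapes: W (d_out, d_in), K_edit (n_e, d_in), V_target (n_e, d_out),
    K_preserve (n_p, d_in)."""
    A = K_edit.T @ K_edit + lam * (K_preserve.T @ K_preserve)
    B = V_target.T @ K_edit + lam * W @ (K_preserve.T @ K_preserve)
    # W' = B @ inv(A); solve the symmetric system instead of inverting
    return np.linalg.solve(A, B.T).T

# Toy edit: remap key e1 to a new value while preserving behavior on e2
W = np.eye(2)
W_edited = closed_form_edit(W,
                            K_edit=np.array([[1.0, 0.0]]),
                            V_target=np.array([[3.0, 4.0]]),
                            K_preserve=np.array([[0.0, 1.0]]))
print(W_edited @ np.array([1.0, 0.0]))  # moved to the target value
print(W_edited @ np.array([0.0, 1.0]))  # unchanged, as preserved
```

Because the update is a single linear solve rather than gradient training, many edits can be batched into `K_edit`/`V_target` at once, which is what lets this style of method scale to concurrent edits.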
-
‘A Function Interpretation Benchmark for Evaluating Interpretability Methods’
“Labeling neural network submodules with human-legible descriptions is useful for many downstream tasks: such descriptions can surface failures, guide interventions, and perhaps even explain important model behaviors. To date, most mechanistic descriptions of trained networks have involved small models, narrowly delimited phenomena, and large amounts of human labor. Labeling all human-interpretable sub-computations in models of increasing size and complexity will almost certainly require tools that can generate and validate descriptions automatically. … This paper introduces FIND (Function INterpretation and Description), a benchmark suite for evaluating the building blocks of automated interpretability methods.” Find the paper and authors list at ArXiv.
-
‘The Arrangement of Marks Impacts Afforded Messages: Ordering, Partitioning, Spacing and Coloring in Bar Charts’
“Data visualizations present a massive number of potential messages to an observer. … The message that a viewer tends to notice — the message that a visualization ‘affords’ — is strongly affected by how values are arranged in a chart, e.g., how the values are colored or positioned. … We present a set of empirical evaluations of how different messages … are afforded by variations in ordering, partitioning, spacing and coloring of values, within the ubiquitous case study of bar graphs. In doing so, we introduce a quantitative method that is easily scalable, reviewable and replicable.” Find the paper and…
-
‘Mechanic Maker 2.0: Reinforcement Learning for Evaluating Generated Rules’
“Automated game design (AGD), the study of automatically generating game rules, has a long history in technical games research. AGD approaches generally rely on approximations of human play, either objective functions or AI agents. Despite this, the majority of these approximators are static, meaning they do not reflect human players’ ability to learn and improve in a game. In this paper, we investigate the application of Reinforcement Learning (RL) as an approximator for human play for rule generation.” Find the paper and full list of authors at ArXiv.
-
‘E(2)-Equivariant Graph Planning for Navigation’
“Learning for robot navigation presents a critical and challenging task. The scarcity and costliness of real-world datasets necessitate efficient learning approaches. In this letter, we exploit Euclidean symmetry in planning for 2D navigation, which originates from Euclidean transformations between reference frames and enables parameter sharing. To address the challenges of unstructured environments, we formulate the navigation problem as planning on a geometric graph and develop an equivariant message passing network to perform value iteration. Furthermore, to handle multi-camera input, we propose a learnable equivariant layer to lift features to a desired space.” Find the paper and authors list at ArXiv.
-
‘GME: GPU-based Microarchitectural Extensions To Accelerate Homomorphic Encryption’
“Fully Homomorphic Encryption (FHE) enables the processing of encrypted data without decrypting it. … Despite its promise of strong data privacy and security guarantees, FHE introduces a slowdown of up to five orders of magnitude as compared to the same computation using plaintext data. This overhead is presently a major barrier to the commercial adoption of FHE. In this work, we leverage GPUs to accelerate FHE, capitalizing on a well-established GPU ecosystem available in the cloud.” Find the paper and full list of authors at ArXiv.
-
‘Dropout Attacks’
“Dropout is a common operator in deep learning, aiming to prevent overfitting by randomly dropping neurons during training. This paper introduces a new family of poisoning attacks against neural networks named DROPOUTATTACK. DROPOUTATTACK attacks the dropout operator by manipulating the selection of neurons to drop instead of selecting them uniformly at random. We design, implement, and evaluate four DROPOUTATTACK variants that cover a broad range of scenarios. These attacks can slow or stop training, destroy prediction accuracy of target classes, and sabotage either precision or recall of a target class.” Find the paper and full list of authors at ArXiv.
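As a rough illustration of the idea, a dropout mask can be chosen adversarially rather than uniformly at random. The sketch below (hypothetical function names, not the paper’s implementation) contrasts standard dropout with a poisoned variant that always drops the highest-activation neurons, starving the network of its most informative units:

```python
import numpy as np

def uniform_dropout_mask(activations, p, rng):
    """Standard dropout: keep each neuron independently with prob 1-p."""
    return rng.random(activations.shape) >= p

def adversarial_dropout_mask(activations, p):
    """Hypothetical poisoned dropout: deterministically drop the
    fraction p of neurons with the LARGEST activations."""
    k = int(p * activations.size)
    mask = np.ones(activations.shape, dtype=bool)
    if k > 0:
        top = np.argsort(activations)[-k:]  # indices of the k largest
        mask[top] = False
    return mask

acts = np.array([0.1, 2.0, 0.3, 5.0, 0.7])
mask = adversarial_dropout_mask(acts, p=0.4)
print(acts * mask)  # zeroes out the two largest activations
```

Because the masked outputs look like ordinary dropout from the outside, such manipulation is hard to distinguish from benign randomness during training.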
-
‘O(k)-Equivariant Dimensionality Reduction on Stiefel Manifolds’
“Many real-world datasets live on high-dimensional Stiefel and Grassmannian manifolds, Vk(ℝN) and Gr(k,ℝN) respectively, and benefit from projection onto lower-dimensional Stiefel (respectively, Grassmannian) manifolds. In this work, we propose an algorithm called Principal Stiefel Coordinates (PSC) to reduce data dimensionality from Vk(ℝN) to Vk(ℝn) in an O(k)-equivariant manner (k≤n≪N).” Find the paper and full list of authors at ArXiv.
-
‘“It’s a Fair Game”, or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents’
“The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, primarily model-centered, does not provide insight into users’ perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users are constantly faced with trade-offs between privacy, utility, and convenience when using LLM-based CAs.” Find the paper and list of authors at ArXiv.
-
‘Talk2Care: Facilitating Asynchronous Patient-Provider Communication With Large-Language-Model’
“Despite the plethora of telehealth applications to assist home-based older adults and healthcare providers, basic messaging and phone calls are still the most common communication methods, which suffer from limited availability, information loss, and process inefficiencies. One promising solution to facilitate patient-provider communication is to leverage large language models (LLMs) with their powerful natural conversation and summarization capability. However, there is a limited understanding of LLMs’ role during the communication. … We built an LLM-powered communication system, Talk2Care, and designed interactive components for both [older adults and healthcare providers].” Find the paper and full list of authors at ArXiv.
-
‘Metrics and Methods for Robustness Evaluation of Neural Networks With Generative Models’
“Recent studies have shown that modern deep neural network classifiers are easy to fool, assuming that an adversary is able to slightly modify their inputs. Many papers have proposed adversarial attacks, defenses and methods to measure robustness to such adversarial perturbations. However, most commonly considered adversarial examples are based on perturbations in the input space of the neural network that are unlikely to arise naturally. … In this paper, we propose several metrics to measure robustness of classifiers to natural adversarial examples, and methods to evaluate them.” Find the paper and full list of authors at ArXiv.
-
‘Predicting GPU Failures With High Precision Under Deep Learning Workloads’
“Graphics processing units (GPUs) are the de facto standard for processing deep learning (DL) tasks. In large-scale GPU clusters, GPU failures are inevitable and may cause severe consequences. For example, GPU failures disrupt distributed training, crash inference services, and result in service level agreement violations. In this paper, we study the problem of predicting GPU failures using machine learning (ML) models to mitigate their damages.” Find the paper and full list of authors in the Proceedings of the 16th ACM International Conference on Systems and Storage.
-
Machine learning hardware at a billionth of the power cost
With a DARPA Young Faculty Award, Aatmesh Shrivastava designs machine learning hardware that uses less power — by a factor of billions.
-
Kwong Chan receives Outstanding Paper award
Kwong Chan, an academic specialist in marketing and executive director of the DATA Initiative, received an Emerald Literati Award for Outstanding Paper for his article “How Fakes Make It Through: The Role of Review Features Versus Consumer Characteristics,” published in the Journal of Consumer Marketing.
-
‘Building Better Human-Agent Teams: Tradeoffs in Helpfulness and Humanness in Voice’
“We manipulate the helpfulness and voice type of a voice-only agent teammate to examine subjective and objective outcomes in twenty teams with one agent and at least three humans during a problem solving task. Our results show that agent helpfulness, but not the humanness of the agent’s voice, significantly alters perceptions of agent intelligence and trust in agent teammates, as well as affects team performance. Additionally, we find that the humanness of an agent’s voice negatively interacts with agent helpfulness to flip its effect on perceived anthropomorphism and perceived animacy.” Find the paper and full list of authors at ArXiv.
-
‘NNSmith: Generating Diverse and Valid Test Cases for Deep Learning Compilers’
“Deep-learning (DL) compilers such as TVM and TensorRT are increasingly being used to optimize deep neural network (DNN) models to meet performance, resource utilization and other requirements. Bugs in these compilers can result in models whose semantics differ from the original ones, producing incorrect results that corrupt the correctness of downstream applications. … In this work, we propose a new fuzz testing approach for finding bugs in deep-learning compilers.” Find the paper and full list of authors in the Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems.
-
‘Symmetric Models for Visual Force Policy Learning’
“While it is generally acknowledged that force feedback is beneficial to robotic control, applications of policy learning to robotic manipulation typically only leverage visual feedback. Recently, symmetric neural models have been used to significantly improve the sample efficiency and performance of policy learning across a variety of robotic manipulation domains. This paper explores an application of symmetric policy learning to visual-force problems. We present Symmetric Visual Force Learning (SVFL), a novel method for robotic control which leverages visual and force feedback.” Find the paper and full list of authors at ArXiv.
-
‘LLM-Powered Conversational Voice Assistants: Interaction Patterns, Opportunities, Challenges, and Design Guidelines’
“Conventional Voice Assistants (VAs) rely on traditional language models to discern user intent and respond to their queries, leading to interactions that often lack a broader contextual understanding, an area in which Large Language Models (LLMs) excel. However, current LLMs are largely designed for text-based interactions, thus making it unclear how user interactions will evolve if their modality is changed to voice. In this work, we investigate whether LLMs can enrich VA interactions via an exploratory study … with varied constraints, stakes and objectivity.” Find the paper and full list of authors at ArXiv.
-
‘Rethinking Human-AI Collaboration in Complex Medical Decision Making: A Case Study in Sepsis Diagnosis’
“Today’s AI systems for medical decision support often succeed on benchmark datasets in research papers but fail in real-world deployment. This work focuses on the decision making of sepsis, an acute life-threatening systematic infection. … Our aim is to explore the design requirements for AI systems that can support clinical experts in making better decisions for the early diagnosis of sepsis. … We argue that a human-centered AI system needs to support human experts in the intermediate stages of a medical decision-making process, … instead of focusing only on the final decision.” Find the paper and full list of authors…