Yiwei Lu

Publications

(* denotes equal contribution)

BridgePure: Revealing the Fragility of Black-box Data Protection
Yihan Wang*, Yiwei Lu*, Xiaoshan Gao, Gautam Kamath, Yaoliang Yu

We show that black-box data protections can be substantially bypassed when a small set of unprotected in-distribution data is available. This small set can be used to train a diffusion bridge model that effectively removes the protection from any previously unseen data within the same distribution.

Alignment Calibration: Machine Unlearning for Contrastive Learning under Auditing
ICML 2024 NextGenAISafety Workshop (Oral) / arXiv

We address machine unlearning for contrastive learning pretraining schemes via a novel method called Alignment Calibration. We also propose new auditing tools for data owners to easily validate the effect of unlearning.

On the Robustness of Neural Networks Quantization against Data Poisoning Attacks
Yiwei Lu, Yihan Wang, Guojun Zhang, Yaoliang Yu
ICML 2024 NextGenAISafety Workshop / paper

We find that neural network quantization offers improved robustness against different data poisoning attacks.

Disguised Copyright Infringement of Latent Diffusion Models
Yiwei Lu*, Matthew Y.R. Yang*, Zuoqiu Liu*, Gautam Kamath, Yaoliang Yu
ICML 2024 / ICML 2024 Generative AI and Law Workshop / arXiv

We reveal the threat of disguised copyright infringement of latent diffusion models, where one constructs a disguise that looks drastically different from the copyrighted sample yet still induces the effect of training latent diffusion models on it.

Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
IEEE SaTML 2024 / arXiv

We study indiscriminate data poisoning attacks against pre-trained feature extractors for fine-tuning and transfer learning tasks and propose feature targeted attacks to address optimization difficulty under constraints.

Understanding Neural Network Binarization with Forward and Backward Proximal Quantizers
NeurIPS 2023 / paper

We propose forward-backward proximal quantizers for understanding approximate gradients in neural network quantization and provide a new tool for designing new algorithms.

f-MICL: Understanding and Generalizing InfoNCE-based Contrastive Learning
Transactions on Machine Learning Research / paper

We propose a general and novel loss function for contrastive learning based on f-mutual information. Additionally, we propose an f-Gaussian similarity function with better interpretability and empirical performance.

CM-GAN: Stabilizing GAN Training with Consistency Models
ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling / paper

We propose CM-GAN, which combines the main strengths of diffusion models and GANs while mitigating their major drawbacks.

Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
ICML 2023 / paper

We find that (1) existing indiscriminate attacks are not well designed (or optimized), and we reduce the performance gap with a new attack; and (2) data poisoning attacks face intrinsic barriers: when the poisoning fraction is smaller than an (easy-to-calculate) threshold, no attack succeeds.

Indiscriminate Data Poisoning Attacks on Neural Networks
Transactions on Machine Learning Research (also appeared in NeurIPS 2022 ML Safety Workshop and Trustworthy and Socially Responsible Machine Learning (TSRML) Workshop) / paper / code

We find that neural networks are surprisingly hard to (indiscriminate) poison and give better attacks.

f-mutual Information Contrastive Learning
NeurIPS 2021 Workshop on Self-Supervised Learning (Contributed Talk) / paper / poster / talk

We propose a general and novel loss function for contrastive learning based on f-mutual information.

Few-shot Scene-adaptive Anomaly Detection
ECCV, 2020 (Spotlight) / arXiv / code

We propose a more realistic problem setting for anomaly detection in surveillance videos and solve it using a meta-learning based algorithm.

Similarity Learning via Kernel Preserving Embedding
AAAI, 2019 / PDF

We propose a novel similarity learning framework that minimizes the reconstruction error of kernel matrices, rather than the reconstruction error of the original data adopted by existing work.

Semantic Segmentation in Compressed Videos
Yiwei Lu*, Ang Li*, Yang Wang
MMSP, 2019 / PDF

We propose a ConvLSTM-based model to perform semantic segmentation directly on compressed videos, which significantly speeds up training and inference.

Thesis

Trustworthy Machine Learning with Data in the Wild - Yiwei Lu, Ph.D. thesis, Cheriton School of Computer Science, University of Waterloo, 2025.

Anomaly Detection in Surveillance Videos using Deep Learning - Yiwei Lu, M.Sc. thesis, Department of Computer Science, University of Manitoba, June 2020.