Sotabase
Micah Goldblum
Career
· Assistant Professor, Columbia University (2024–)
· Postdoctoral Researcher, New York University (2021–2024)
· Ph.D. in Mathematics, University of Maryland (2014–2020)
· B.Sc. in Mathematics, University of Maryland (2010–2014)
Publications (220)
· Baseline Defenses for Adversarial Attacks Against Aligned Language Models
  arXiv.org · 2023 · 600 citations
· Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models
  Computer Vision and Pattern Recognition · 2022 · 431 citations
· SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training
  arXiv.org · 2021 · 429 citations
· Universal Guidance for Diffusion Models
  2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) · 2023 · 394 citations
· Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery
  Neural Information Processing Systems · 2023 · 368 citations
· Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise
  Neural Information Processing Systems · 2022 · 367 citations
· A Cookbook of Self-Supervised Learning
  arXiv.org · 2023 · 366 citations
· Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
  IEEE Transactions on Pattern Analysis and Machine Intelligence · 2020 · 360 citations
· The Intrinsic Dimension of Images and Its Impact on Learning
  International Conference on Learning Representations · 2021 · 352 citations
· Adversarially Robust Distillation
  AAAI Conference on Artificial Intelligence · 2019 · 248 citations
· LiveBench: A Challenging, Contamination-Free LLM Benchmark
  arXiv.org · 2024 · 243 citations
· Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text
  International Conference on Machine Learning · 2024 · 222 citations
· Understanding and Mitigating Copying in Diffusion Models
  Neural Information Processing Systems · 2023 · 204 citations
· Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
  International Conference on Machine Learning · 2020 · 192 citations
· On the Reliability of Watermarks for Large Language Models
  International Conference on Learning Representations · 2023 · 184 citations
· Adversarial Examples Make Strong Poisons
  Neural Information Processing Systems · 2021 · 157 citations
· Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
  Neural Information Processing Systems · 2021 · 155 citations
· LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition
  International Conference on Learning Representations · 2021 · 153 citations
· Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
  IEEE International Conference on Acoustics, Speech, and Signal Processing · 2020 · 143 citations
· Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification
  International Conference on Machine Learning · 2022 · 112 citations
Micah Goldblum | Researcher Profile | Sotabase