Sotabase
Emily Wenger
Asst. Prof., Duke
Publications (20)
Fawkes: Protecting Privacy against Unauthorized Deep Learning Models
USENIX Security Symposium · 2020 · 270 citations
GLAZE: Protecting Artists from Style Mimicry by Text-to-Image Models
USENIX Security Symposium · 2023 · 249 citations
Backdoor Attacks Against Deep Learning Systems in the Physical World
Computer Vision and Pattern Recognition · 2020 · 241 citations
Gotta Catch'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks
Conference on Computer and Communications Security · 2019 · 79 citations
Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks
USENIX Security Symposium · 2020 · 55 citations
"Hello, It's Me": Deep Learning-based Speech Synthesis Attacks in the Real World
Conference on Computer and Communications Security · 2021 · 51 citations
SALSA: Attacking Lattice Cryptography with Transformers
IACR Cryptology ePrint Archive · 2022 · 50 citations
Piracy Resistant Watermarks for Deep Neural Networks
2019 · 46 citations
Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks
arXiv.org · 2020 · 33 citations
Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models
arXiv.org · 2020 · 26 citations
Finding Naturally Occurring Physical Backdoors in Image Datasets
Neural Information Processing Systems · 2022 · 24 citations
SalsaPicante: A Machine Learning Attack on LWE with Binary Secrets
IACR Cryptology ePrint Archive · 2023 · 24 citations
SoK: Anti-Facial Recognition Technology
IEEE Symposium on Security and Privacy · 2021 · 19 citations
Data Isotopes for Data Provenance in DNNs
Proceedings on Privacy Enhancing Technologies · 2022 · 17 citations
SALSA VERDE: a machine learning attack on Learning With Errors with sparse small secrets
IACR Cryptology ePrint Archive · 2023 · 16 citations
Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models
Conference on Computer and Communications Security · 2022 · 15 citations
Assessing Privacy Risks from Feature Vector Reconstruction Attacks
arXiv.org · 2022 · 7 citations
Natural Backdoor Datasets
arXiv.org · 2022 · 6 citations
Using Honeypots to Catch Adversarial Attacks on Neural Networks
2019 · 3 citations
An Embarrassingly Simple Key Prompt Protection Mechanism for Large Language Models
Emily Wenger | Researcher Profile | Sotabase