
Address: 371 Fairfield Way
Unit 4155, Storrs, CT 06269

Office: ITE-265
Email: yuan.hong AT uconn.edu

Short Bio

Dr. Yuan Hong is an Associate Professor and Collins Aerospace Professor in the School of Computing at the University of Connecticut (UConn) and is affiliated with the Connecticut Advanced Computing Center (CACC). Prior to joining UConn in 2022, he was an Assistant Professor in Computer Science and the Cybersecurity Program Director at Illinois Institute of Technology. He received his Ph.D. degree from Rutgers University, his M.Sc. degree from Concordia University, Montreal, Canada, and his B.Sc. degree from Beijing Institute of Technology. He is a recipient of the NSF CAREER Award (2021), the Cisco Research Award (2022, 2023), and the CCS Distinguished Paper Award (2024), and was a finalist for the Meta Research Award (2021). He also received a National Physics Olympiad Prize in China. His research spans Security, Privacy, and Trustworthy Machine Learning, with a focus on areas such as differential privacy, secure computation, applied cryptography, and adversarial attacks and provable defenses in machine learning, computer vision, (large) language models, and cyber-physical systems (CPS). His research has been published in top-tier computer science conferences (e.g., S&P, CCS, USENIX Security, NDSS, SIGMOD, VLDB, NeurIPS, CVPR, ECCV, EMNLP, KDD, AAAI) as well as top interdisciplinary journals (e.g., multiple IEEE/ACM Transactions, T-ITS, TR-C). He is a Senior Member of the ACM and IEEE.

We are always looking for postdocs, Ph.D. students, visiting scholars/students, and undergraduate researchers. Please email your application materials to Dr. Yuan Hong if you are interested in our research.

UConn@CSRankings: Security & Crypto (29th), Overall (63rd)

News

  • [Recent Conference TPC] USENIX Security'26, S&P'26, NDSS'26, CCS'25, USENIX Security'25, NDSS'25
  • [06/2025] Our work on rectifying the privacy and efficacy measurements for machine unlearning (RULI) is accepted to USENIX Security'25 (Acceptance Rate: TBD). Congrats to Nima, Shenao and the team!
  • [05/2025] Congrats to Shuya for receiving the Taylor L. Booth Graduate Fellowship (SoC's highest honor for Ph.D. students). Also, congrats to other students who received the Predoctoral Fellowships!
  • [04/2025] Hanbin will continue to do research internship in LLM Security at TikTok in Summer 2025. Congrats!
  • [03/2025] Our work on certifying adapters (robustness) is accepted to IJCNN'25. Congrats to the team!
  • [02/2025] Shenao will do research internship at VISA Research in Summer 2025. Congrats!
  • [02/2025] Two works are accepted to CODASPY'25 (Acceptance Rate: 31/148=20.9%). Congrats to Shuya, Bingyu and the team!
  • [12/2024] Xinyu has successfully defended her doctoral dissertation. She will join Alibaba Group as a Researcher in LLMs and Robustness. Congrats!
  • [12/2024] Our work on information-theoretic robust and privacy-preserving representations learning is accepted to AAAI'25 (Acceptance Rate: 3032/12957=23.4%). Congrats to Ben, Leila and the team!
  • [10/2024] Our work on distributed backdoor attacks and certified defenses on FedGL received the CCS'24 Distinguished Paper Award. Congrats to all the co-authors!
  • [09/2024] Our provably robust watermark for FedGL is accepted to NeurIPS'24 (Acceptance Rate: 25.8%). Congrats to Yuxin and the team!
  • [08/2024] Media report for our CodeBreaker (USENIX Security'24): Researchers Highlight How Poisoned LLMs Can Suggest Vulnerable Code.
  • [07/2024] Congrats to Shenao for receiving the USENIX Security'24 Student Travel Award. Thanks for the generous support!
  • [07/2024] Our optimization-based attack (breaking SOTA poisoning defenses for federated learning) is accepted to CIKM'24 (Acceptance Rate: 347/1531=23%). Congrats to Yuxin and the team!
  • [07/2024] Our certified black-box attack (breaking SOTA defenses with provable confidence and limited resources) is accepted to CCS'24 (Acceptance Rate: 331/1964=16.9%). Congrats to Hanbin, Xinyu and the team!
  • [07/2024] Our certified defenses for distributed backdoor attacks on federated graph learning are accepted to CCS'24 (Acceptance Rate: 331/1964=16.9%). Congrats to Yuxin and the team!
  • [06/2024] Our DP data streaming mechanism under the delay-allowed framework is accepted to NDSS'25 (Acceptance Rate: 211/1311=16.1%). Congrats to Xiaochen, Shuya and the team!
  • [06/2024] Our LLM-assisted backdoor attack on LLM-based code generation/completion models is accepted to USENIX Security'24 (Acceptance Rate: 417/2276=~18%). Congrats to Shenao, Hanbin and the team!

Selected Recent Publications


Teaching

  • Principles of Databases: Fall 25
  • Cybersecurity Lab: Fall 23, Spring 24, Fall 24
  • Computer Security: Spring 23, Spring 25
  • CSE Design Project: 2022-2023, 2025-2026
  • Cryptography: Spring 21, Spring 20
  • Data Privacy and Security: Fall 21, Fall 20, Spring 19, Spring 18
  • Database Organization: Spring 22, Fall 19, Fall 18, Fall 17
  • Doctoral Seminar: Spring 18
  • Earlier Teaching: Cybercrime, Forensics, Computer Network