
Draft:Thomas Ristenpart

From Wikipedia, the free encyclopedia




Thomas Ristenpart
Alma mater: University of California, San Diego (PhD)
Scientific career
Fields: Computer security
Institutions: Cornell Tech
Doctoral advisor: Mihir Bellare

Thomas Ristenpart is a professor of computer security at Cornell Tech.

Biography

Ristenpart received his B.S. in computer science and engineering from the University of California, Davis in 2003, and his M.S. there under Matt Bishop in 2005. He then moved to the University of California, San Diego, where he received his Ph.D. in computer science under Mihir Bellare.

Research

Ristenpart's research spans many areas of computer security. Three of his papers are among the most highly cited computer security papers of all time.[1] In cryptography, Ristenpart developed Honey Encryption, a technique that encrypts data so that decryption with an incorrect key yields plausible-looking fake data rather than an obvious error.[2] He also developed typo-tolerant password authentication,[3] which allows users to log in even if they have slightly mistyped their password.
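
The following minimal Python sketch illustrates the general idea behind typo-tolerant password checking: the server tries the submitted password plus a small set of common typo corrections against the stored hash. It is an illustration only, not Ristenpart's actual scheme; the specific correctors and the helper names (hash_password, typo_tolerant_verify) are assumptions.

import hashlib, os

def hash_password(password: str, salt: bytes) -> bytes:
    # Standard salted password hashing (PBKDF2-HMAC-SHA256).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def correctors(pw: str):
    yield pw                              # exact submission
    yield pw.swapcase()                   # caps lock accidentally on
    if pw:
        yield pw[0].swapcase() + pw[1:]   # wrong case on first character
        yield pw[:-1]                     # extra trailing character

def typo_tolerant_verify(submitted: str, salt: bytes, stored: bytes) -> bool:
    # Accept the login if the submission or any corrected variant matches.
    return any(hash_password(c, salt) == stored for c in correctors(submitted))

salt = os.urandom(16)
stored = hash_password("CorrectHorse", salt)
print(typo_tolerant_verify("cORRECThORSE", salt, stored))  # True: caps-lock typo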

In his cloud security work, Ristenpart showed that users of Microsoft's Azure and Amazon's EC2 services could arrange for their virtual machines to be placed on the same physical server as another customer's virtual machine, and then mount side-channel attacks to learn information about that customer's data.

More recently, Ristenpart has studied the privacy and security of machine learning. He was among the first researchers to show that machine learning models can leak details about their training datasets.[4] For example, he showed that if a model is trained on images of people's faces, it is possible to reconstruct recognizable images of people in the training dataset.
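
A minimal sketch of the general model-inversion idea follows: starting from a blank input, gradient ascent finds the input a classifier considers most typical of a target class. It illustrates the concept on a toy linear softmax model, not the attack from the paper; the model, its weights, and the invert helper are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_features, n_classes = 64, 4            # e.g. tiny 8x8 "face" images

# Pretend these weights came from a trained face-recognition model.
W = rng.normal(size=(n_classes, n_features))
b = rng.normal(size=n_classes)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def confidence(x, target):
    return softmax(W @ x + b)[target]

def invert(target, steps=500, lr=0.1):
    x = np.zeros(n_features)                      # start from a blank image
    one_hot = np.eye(n_classes)[target]
    for _ in range(steps):
        p = softmax(W @ x + b)
        grad = W.T @ (one_hot - p)                # gradient of log-confidence w.r.t. x
        x = np.clip(x + lr * grad, 0.0, 1.0)      # keep pixel values in a valid range
    return x

reconstruction = invert(target=2)
print(round(confidence(reconstruction, 2), 3))    # confidence approaches 1.0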

Ristenpart also showed that it is possible to "steal" a machine learning model, reverse-engineering how it works simply by querying it.[5] Once stolen, the model can in turn be used to infer the proprietary data that was used to train it.
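
The sketch below shows model extraction in its simplest form: an attacker treats the victim model as a black box, labels a batch of queries with its predictions, and fits a surrogate that mimics its behavior. This is a generic illustration rather than the exact equation-solving attack in the paper, and the toy victim model and data are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# The "victim": a proprietary model the attacker can only query for predictions.
X_priv = rng.normal(size=(200, 5))
y_priv = (X_priv[:, 0] + X_priv[:, 1] > 0).astype(int)   # stand-in for proprietary training data
victim = LogisticRegression().fit(X_priv, y_priv)

# Attacker: send queries, collect the victim's answers, train a copy.
queries = rng.normal(size=(1000, 5))
answers = victim.predict(queries)
stolen = LogisticRegression().fit(queries, answers)

# Measure how often the stolen model agrees with the victim on fresh inputs.
test = rng.normal(size=(2000, 5))
agreement = np.mean(stolen.predict(test) == victim.predict(test))
print(f"agreement with victim: {agreement:.2%}")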

Ristenpart was Program Chair of the USENIX Security Symposium in 2017, of Crypto in 2020, and of the IEEE Symposium on Security and Privacy in 2022 and 2023.

Awards

Ristenpart received Best Paper Awards at USENIX Security 2014, ACM CHI 2018, USENIX Security 2020, CSCW 2020, CHI 2022, and USENIX Security 2023, and Test of Time Awards for his papers at CCS 2009 and CCS 2012. (https://tech.cornell.edu/news/cornell-tech-faculty-win-test-of-time-award-at-ccs-2022/)

References

  1. ^ Rieck, Konrad. "Top-100 Security Papers". www.mlsec.org. Retrieved 2024-06-10.
  2. ^ ""Honey Encryption" Will Bamboozle Attackers with Fake Secrets". MIT Technology Review. Retrieved 2024-06-10.
  3. ^ "Password Autocorrect Without Compromising Security". threatpost.com. 2016-06-06. Retrieved 2024-06-10.
  4. ^ "Artificial intelligence may put private data at risk | Cornell Chronicle". news.cornell.edu. Retrieved 2024-06-10.
  5. ^ "Stealing an AI algorithm and its underlying data is a "high-school level exercise"". Quartz. 2016-09-22. Retrieved 2024-06-10.

External links

https://rist.tech.cornell.edu/