Keynote speakers

We are thrilled to announce three keynote speakers for EthiCS 2023.

Dr. Yan Shoshitaishvili

Title

Reconciling the Hacker Spirit with Ethical Cybersecurity Education

Abstract

Hackers are the dragons of the digital world. We are shrouded in mystery, steeped in intrigue, and imbued with a sort of romance. But unlike the dragons of times long gone, hackers are real, and it is critical to the continued security of our society that we create more ethical hackers in a scalable way. Yet an ethical hacker is still a hacker, and for them to operate effectively on the open seas of cyberspace, whether in academia, industry, or government, we must teach students to understand and adopt the hacker mindset while maintaining their ethical integrity and instincts.

This talk will convey my thoughts on the teaching of ethical hackers, derived from my experiences teaching cybersecurity at various stages of students' academic careers, leading cybersecurity competitions, and leading real-world vulnerability research efforts. I don't claim to have the answers, but I hope that the thoughts shared here will make good starting points for deeper discussion.

Bio

Yan Shoshitaishvili is an Assistant Professor at Arizona State University, where he pursues the parallel passions of cybersecurity research, real-world impact, and education. His research focuses on automated program analysis and vulnerability detection techniques. Aside from publishing dozens of research papers in top academic venues, Yan led Shellphish's participation in the DARPA Cyber Grand Challenge, building a fully autonomous hacking system that took third place in the competition.

Underpinning much of his research is angr, the open-source program analysis framework created by Yan and his collaborators. This framework has powered hundreds of research papers, helped find thousands of security bugs, and continues to be used in research labs and companies around the world.
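For a rough flavour of what driving angr looks like in practice, here is a minimal sketch against angr's public Python API; the binary path and the find/avoid addresses are illustrative placeholders, not taken from any particular project.

```python
# Minimal angr sketch: symbolically explore a binary for an input that
# reaches a "success" address while avoiding a "failure" address.
# "./example_binary" and the addresses are illustrative placeholders.
import angr

proj = angr.Project("./example_binary", auto_load_libs=False)

# Begin symbolic execution at the program entry point.
state = proj.factory.entry_state()
simgr = proj.factory.simulation_manager(state)

# Explore until some state reaches `find`, pruning states that hit `avoid`.
simgr.explore(find=0x401234, avoid=0x401250)

if simgr.found:
    # Concretize the stdin bytes that steer execution to the target address.
    print(simgr.found[0].posix.dumps(0))
```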

When he is not doing research, Yan participates in the enthusiast and educational cybersecurity communities. He is a Captain Emeritus of Shellphish, one of the oldest ethical hacking groups in the world, and a founder of the Order of the Overflow, with whom he ran DEF CON CTF, the “world championship” of cybersecurity competitions, from 2018 through 2021. Now, he helps demystify the hacking scene as a co-host of the CTF RadiOOO podcast and forge connections between the government and the hacking community through his participation on CISA’s Technical Advisory Council. In order to inspire students to pursue cybersecurity (and, ultimately, compete at DEF CON!), Yan created pwn.college, an open practice-makes-perfect learning platform that is revolutionizing cybersecurity education for aspiring hackers around the world.

Dr. Tariq Elahi

Title

Ensuring Safety and Facilitating Research on the Live Tor Network

Abstract

Tor is an anonymous communication network: it provides real people real protections in the real world. Tor is a distributed network operated by volunteers spread around the globe. Unlike ordinary networks, which employ network telemetry to monitor network health, the Tor network is unmonitored*. The main reason is that while Tor can resist local network adversaries (such as ISPs, governments, and random strangers on the Internet), global adversaries are outside its threat model, and network-wide monitoring and reporting of traffic patterns, statistics, and more could provide exactly such a global view, potentially leading to privacy breaches for Tor users.

The security and privacy research community has a good working relationship with the Tor network; indeed, the design of Tor has been strengthened over time by the community's research contributions. Often, the attacks and defences proposed by researchers need to be tested on the live Tor network to evaluate their effectiveness. The challenge is to balance the needs of research (and Tor's long-term health and security) against the safety of Tor users, relay operators, and other stakeholders. In this talk I will provide an overview of how the community and Tor have tried to tackle this challenge through the creation of the Tor Research Safety Board, the availability of privacy-preserving telemetry tools, and the process of doing empirical research on the Tor network.
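To give a concrete flavour of what "privacy-preserving telemetry" can mean, here is a minimal differential-privacy sketch, not the Tor Project's actual tooling; the counter value, epsilon, and sensitivity are illustrative. The idea is that a relay adds calibrated Laplace noise to a statistic before publishing it:

```python
import random

def noisy_count(true_count, epsilon=0.1, sensitivity=1.0):
    """Return true_count plus Laplace(sensitivity / epsilon) noise."""
    scale = sensitivity / epsilon
    # A Laplace(0, scale) sample is the difference of two exponentials.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

# A relay publishes the noisy value instead of its exact counter, bounding
# what any observer can infer about individual users or connections.
print(noisy_count(41780))
```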

Bio

Dr. Tariq Elahi is an Assistant Professor in Security and Privacy at the University of Edinburgh, where he heads the Networks & Systems Security & Privacy Lab. He has made significant contributions to the fields of anonymous communication networks (ACNs), censorship resistance systems (CRSs), and privacy-preserving data analytics. Over his academic career he has developed a research agenda for the systematic design and analysis of privacy building blocks, systems, and networks. His work has identified system weaknesses and produced designs that have subsequently been adopted in the real world.

Dr. Elahi is a Co-Investigator in the UK's National Research Excellence Centre for Protecting Citizens Online (REPHRAIN) and is PI on a New Investigator Award, both investigating the engineering and practical challenges of embedding privacy into network infrastructures. He has also been actively involved in international collaborations as a member of large EU-funded projects such as PRIME and, most recently, PANORAMIX. He was co-chair of the annual Hot Topics in Privacy Enhancing Technologies Workshop, co-located with PETS (HotPETS '17, '18), and is the founder and chair of the Tor Research Safety Board, a panel of privacy experts that vets empirical studies on the live Tor network. Prior to taking up his current post, Dr. Elahi was a postdoctoral fellow in the COSIC group at KU Leuven, Belgium. He obtained his PhD from the University of Waterloo.

Fatemehsadat Mireshghallah

Title

How Much Can We Trust Large Language Models?

Abstract

Large Language Models (LLMs, e.g., GPT-3, OPT, TNLG, …) have shown remarkably high performance on standard benchmarks, owing to their high parameter counts, extremely large training datasets, and significant compute. Although the high parameter count in these models leads to more expressiveness, it can also lead to higher memorization, which, coupled with large, unvetted, web-scraped datasets, can cause negative societal and ethical impacts such as the leakage of private, sensitive information and the generation of harmful text. In this talk, we will go over how these issues affect the trustworthiness of LLMs, zoom in on how we can measure the leakage and memorization of these models, and discuss how to mitigate them through differentially private training. Finally, we will discuss what it would actually mean for LLMs to be privacy-preserving, and what the future research directions are for making large models trustworthy.
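As a hedged illustration of what measuring memorization can look like in practice (a minimal sketch using the public Hugging Face transformers API; the model choice and candidate strings are placeholders, not the specific method of the talk), one common probe compares the model's perplexity on a suspected training string against a similarly formatted control:

```python
# If a model assigns much lower perplexity to a candidate string than to a
# matched control, that is evidence the candidate was memorized verbatim.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean per-token cross-entropy
    return torch.exp(loss).item()

candidate = "John Doe's SSN is 078-05-1120"  # suspected memorized string
control = "Jane Roe's SSN is 512-33-9876"    # matched, never-seen control

print(perplexity(candidate), perplexity(control))
```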

Bio

Fatemehsadat Mireshghallah is a Ph.D. candidate in the Computer Science and Engineering Department at UC San Diego. Her research focuses on understanding learning and memorization patterns in large language models, probing them for safety issues (such as bias), and providing tools to limit the leakage of private information in such models. She received the National Center for Women & IT (NCWIT) Collegiate Award in 2020 for her work on privacy-preserving inference, was a finalist for the Qualcomm Innovation Fellowship in 2021, and received the 2022 Rising Star in Adversarial ML award.