Speakers
Jeff Dean
Keynote Speaker
Exciting Directions in Machine Learning for Computer Systems
Jeff Dean joined Google in mid-1999 and is currently Google's Chief Scientist, focusing on AI advances for Google DeepMind and Google Research. His areas of focus include machine learning and AI, and applications of AI to problems that help billions of people in societally beneficial ways. Jeff co-founded Google Brain and is a co-designer and implementer of TensorFlow, MapReduce, Bigtable, and Spanner. He has been involved in several ML for Systems projects, including Learned Index Structures and ML for chip floorplanning.
Richard Ho
Hardware
Navigating Scaling and Efficiency Challenges of ML Systems
Richard is Head of Hardware at OpenAI, working to co-optimize ML models and the massive compute hardware they run on. Richard was one of the early engineers working on Google TPUs and helped lead the team through TPUv5. Before Google, Richard was part of the D. E. Shaw Research team that built the Anton 1 and Anton 2 molecular dynamics simulation supercomputers, both of which won the Gordon Bell Prize. Richard started his career as co-founder and Chief Architect of 0-In Design Automation, a pioneer in formal verification tools for chip design that was acquired by Mentor Graphics/Siemens. Richard has a Ph.D. in Computer Science from Stanford University and an M.Eng. and B.Sc. from the University of Manchester, UK.
Tim Kraska
Retrospective
TBD
Tim Kraska is an associate professor of electrical engineering and computer science at MIT and director of applied science for Amazon Web Services. His work focuses on Learned Systems: ML for Systems and Systems for ML.
Natasha Jaques
Special Topic: MARL
TBD
Natasha Jaques is an Assistant Professor at the University of Washington Paul G. Allen School of Computer Science & Engineering, where she leads the Social RL Lab. She is also a Senior Research Scientist at Google DeepMind. Her work develops algorithms for Social Reinforcement Learning, which combine insights from social learning and multi-agent training to improve AI agents' learning, generalization, coordination, and human-AI interaction. She is a pioneer in using RL for fine-tuning language models and learning from human feedback.
Ahmed El-Kishky
Special Topic: CodeGen
OpenAI o1: Competing in the International Olympiad in Informatics
Ahmed El-Kishky is a Research Lead at OpenAI, where he focuses on advancing language models and improving AI reasoning through reinforcement learning. He was instrumental in developing OpenAI o1, a model built for complex problem-solving, and led the creation of OpenAI o1-IOI, which competed in prestigious programming competitions such as the International Olympiad in Informatics. Ahmed earned his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign, where his research centered on scalable machine learning algorithms and natural language processing.