- University students will compete for cash prizes in a competition to securely advance LLMs that code.
- Before applying to the Amazon Trusted AI Challenge, please review the rules to ensure you are eligible.
- Find answers to some of our frequently asked questions about the Amazon Trusted AI Challenge.
Frequently asked questions
What is the Amazon Trusted AI Challenge?
The Amazon Trusted AI Challenge is an annual university competition dedicated to accelerating the field of artificial intelligence (AI). It was created to recognize and support students from around the globe who are shaping the future of AI. Student teams work on the latest challenges in the field and build innovative solutions.
How does the Trusted AI Challenge support research?
The Amazon Trusted AI Challenge is a testbed for university students to experiment with and advance AI at scale. Participating teams compete to develop innovative, effective solutions to a specific challenge. Teams receive several forms of support, including stipends, AWS credits, and consultation and mentoring from the Amazon Trusted AI Challenge team.
What is the goal of the Trusted AI Challenge?
The goal of the Trusted AI Challenge is to make AI safer and more responsible for all, with this year's focus on preventing AI from assisting with writing malicious code or writing code that contains security vulnerabilities. The ultimate goal of the competition is to identify ways for large language model (LLM) creators to anticipate and mitigate safety risks and to implement appropriate measures that make models secure.
Who can apply to participate in the Trusted AI Challenge?
The Amazon Trusted AI Challenge is open to full-time students (undergraduate or graduate) with some exceptions (see Challenge Rules). Proof of enrollment will be required to participate.
What are the prizes for winning the Trusted AI Challenge?
Based on the evaluation at the finals event, $700,000 in cash prizes will be distributed among the top four performing teams.
We are focusing on advancing the capabilities of coding LLMs, exploring new techniques to automatically identify possible vulnerabilities and effectively secure these models.
Rohit Prasad, Senior Vice President and Head Scientist, Amazon AGI
Related content
- June 13, 2024: The fight against hallucination in retrieval-augmented-generation models starts with a method for accurately assessing it.
- February 15, 2024: In addition to its practical implications, recent work on “meaning representations” could shed light on some old philosophical questions.
- May 03, 2023: Generative AI raises new challenges in defining, measuring, and mitigating concerns about fairness, toxicity, and intellectual property, among other things. But work has started on the solutions.