An anonymous reader quotes a report from The Register: This year’s DEF CON AI Village has invited hackers to show up, dive in, and find bugs and biases in large language models (LLMs) built by OpenAI, Google, Anthropic, and others. The collaborative event, which AI Village organizers describe as “the largest red teaming exercise ever for any group of AI models,” will host “thousands” of people, including “hundreds of students from overlooked institutions and communities,” all of whom will be tasked with finding flaws in the LLMs that power today’s chatbots and generative AI. Think: traditional bugs in code, but also problems more specific to machine learning, such as bias, hallucinations, and jailbreaks — all of which ethics and security professionals are now having to grapple with as these technologies scale. DEF CON is set to run from August 10 to 13 this year in Las Vegas, USA.
For those participating in the red teaming this summer, the AI Village will provide laptops and timed access to LLMs from various vendors. Currently this includes models from Anthropic, Google, Hugging Face, Nvidia, OpenAI, and Stability. The village people’s announcement also mentions this is “with participation from Microsoft,” so perhaps hackers will get a go at Bing. We’ve asked for clarification about this. Red teams will also have access to an evaluation platform developed by Scale AI. There will be a capture-the-flag-style point system to promote the testing of “a wide range of harms,” according to the AI Village. Whoever gets the most points wins a high-end Nvidia GPU. The event is also supported by the White House Office of Science and Technology Policy; the US National Science Foundation’s Computer and Information Science and Engineering (CISE) Directorate; and the Congressional AI Caucus.