A few months back, we wrote about the Animal-A.I. Olympics, a then-in-development competition that aimed to test top artificial intelligence agents by putting them through cognition tests designed for animals. It was intended to be open to anyone who wanted to create an A.I. they thought would be able to pass a battery of tests, each meant to measure some aspect of bot intelligence. Jump forward to the present day, and the contest has officially launched — with its creators releasing Version 1.0 of the test environment and announcing the official rules for entrants, increased prize money, and other crucial information.
“For prizes, we now have $32,000 equivalent value, with $20,000 total in cash and travel reimbursement for the top three entries and the most biologically plausible entry,” Matthew Crosby, a postdoctoral A.I. researcher working on the project, told Digital Trends. “We are also giving out $10,000 worth of AWS credits half-way through — $500 to each of the top 20 entries — that can be used during the second half of the competition.”
As far as tests go, the team has bumped the number up from the originally planned 100 to a massive 300, split across 10 categories. All tests are pass or fail, meaning a maximum score of 300 for the contest. The categories are arranged in order of increasing difficulty, starting with simple food retrieval in empty environments and ending with problems that require more complex causal reasoning to solve. However, participants won’t know in advance what the tests involve — meaning they’ll have to create as general-purpose an A.I. as possible.
Submissions open on July 8, and the contest itself will run through November 2019. During that time, participants can enter multiple times, giving them plenty of opportunities to improve their scores. The final results will be presented at the NeurIPS 2019 conference in December.
“Given the wide variety of tasks, every interesting idea has the potential to win at least some prize,” Crosby continued. “We encourage everyone to just download the environment and play around with it to see what they can come up with. We also encourage them to submit their entry even in the early stages so that they can see how well it’s doing on the tasks. Given the variety of tasks, even a simple agent might solve some tasks that others are struggling at.”
Any further advice? “Try to make an agent that behaves like an animal,” Crosby said. “It should always want to get the most food it can. It should be keen to explore its environment when food is not readily available, and [be] able to make intelligent decisions when faced with multiple possibilities.”
And, just like that, your July Fourth holiday got filled up with homework!