Interview with Gabriel Synnaeve, researcher at Facebook

Gabriel Synnaeve, project leader of the CherryPi Bot and researcher at Facebook

Before the round robin started, we took the liberty of asking Gabriel Synnaeve some questions about CherryPi, the AI behind it, and the upcoming tournament. Due to the Christmas holidays, work and whatnot, publishing the article was slightly delayed. But as CherryPi is still in the tournament, it is not yet too late.

By hiring Dan Gant, author of PurpleWave, and Vegard Mella, author of tscmoo, not to mention yourself, Facebook has acquired a lot of know-how. Could you tell us a little bit more about the team and the distribution of responsibilities?

We are working with Dan and Vegard as they have useful expertise in rule-based bots, a deep knowledge of StarCraft in particular, and a willingness to learn machine learning. They are working on making the bot better, both as a baseline to compare the learning approaches against and as a teacher to learn from. I am doing machine learning research, and coordinating the engineering and research needs of working on a StarCraft bot.

What can you tell us about CherryPi? What kind of learning does it use? (Is there a project page? Any kind of information would be very welcome.)

CherryPi is a rule-based bot; the code can be found on AIIDE's website. The SSCAIT version is very similar (with only a few bugfixes and some opening variations). The only learning it currently does is using a bandit algorithm (UCB1) to pick openings out of a list; several existing bots have similar functionality.
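For readers unfamiliar with bandit algorithms, here is a minimal sketch of how a UCB1 bandit could pick an opening from a fixed list and update its estimates after each game. The `OpeningSelector` class and the opening names are purely illustrative assumptions, not CherryPi's actual code.

```python
import math
import random


class OpeningSelector:
    """Illustrative UCB1 bandit over a fixed list of openings."""

    def __init__(self, openings):
        self.openings = list(openings)
        self.plays = {o: 0 for o in self.openings}  # times each opening was used
        self.wins = {o: 0 for o in self.openings}   # wins recorded per opening
        self.total_plays = 0

    def pick(self):
        # Try every opening once before applying the UCB1 formula.
        untried = [o for o in self.openings if self.plays[o] == 0]
        if untried:
            return random.choice(untried)

        # UCB1 score: empirical win rate plus an exploration bonus
        # that shrinks as an opening is tried more often.
        def ucb(o):
            mean = self.wins[o] / self.plays[o]
            bonus = math.sqrt(2 * math.log(self.total_plays) / self.plays[o])
            return mean + bonus

        return max(self.openings, key=ucb)

    def record(self, opening, won):
        self.plays[opening] += 1
        self.total_plays += 1
        if won:
            self.wins[opening] += 1


# Usage: typically one selector per opponent, updated after each game.
selector = OpeningSelector(["9-pool", "2-hatch-muta", "3-hatch-hydra"])
opening = selector.pick()
# ... play the game with `opening` ...
selector.record(opening, won=True)
```

The exploration bonus is what keeps the bot from locking onto one opening too early: an opening that has been played rarely gets a larger bonus, so it is occasionally retried even if its win rate so far is poor.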

After participating in AIIDE 2017, where CherryPi placed 6th, it caught some flak from Wired.com. Do you think that criticism was merited?

The headline is unfortunate, but the content of the article accurately reflects the state of AI research on StarCraft. We did not "sneak" into the competition, as we announced that we were from Facebook; we wanted to see how our baseline bot fared, and we rushed the submission and introduced bugs at the last minute. Between being very conservative with no real shot at winning, and tweaking our rules until the last minute (even at the risk of introducing bugs), we chose the latter.

What do you expect CherryPi to achieve, both result- and AI-wise, in SSCAIT 2017?

Only Vegard and Dan are working actively on SSCAIT. As I said, it is a somewhat debugged version (with slight tweaks) of the AIIDE bot. We are not aiming for a specific spot, particularly since the bracket format introduces quite some randomness into the results, but we are confident that we will play "less bugged" strategies than at AIIDE, as this time we tested a bit on the SSCAIT ladder beforehand. It still has some blind spots (rules...), so we are looking forward to some beautiful and some goofy games.

What do you think of the competitors?

I love them, I think it is awesome that there is such an active community, with a few different approaches, and several high quality bots.

Where are Google and Alibaba, with their bots?

We do not know.

Why did you choose Broodwar, and not Starcraft II, as a platform for CherryPi?

Because we had already started working on CherryPi and on our research when the SC2 API was released in August.

When will you, or anyone else, reliably beat top human players at Starcraft Broodwar?

It is very hard to make such a prediction.

Any final shoutouts, comments, juicy bits of gossip?

Huge thanks to the whole StarCraft team at Facebook, and to the StarCraft AI community for the emulation and good times.

Gabriel, thank you for your time.


Written by Paul Paradies for SSCAIT

Published on: 2018-02-10