On 8 December 2007, the Silicon Minds challenge was launched at the Machine Learning and Games workshop of the NIPS conference in the picturesque Canadian ski resort of Whistler.
The contest is to be judged on three criteria:
The submission must include the following:
To this end, over the last week we have had fun creating the attached mini sample submission, based on the board game Go, demonstrating:
Disclaimer: The mini-game sample itself actually uses a rule-based AI and should not be seen as an example of AI or game quality; it is more a sample of documentation quality and of the submission files required.
What we do hope is that you will find the sample a useful reference on how to document your code (specifically the XGoLibrary project), how to comment code you have taken from other sources, and how to write the design summary.
On coding standards, if you are starting afresh for the competition, we suggest you take a look at the MSDN section ".NET Framework General Reference - Design Guidelines for Class Library Developers" for guidance. In summary, this is what we would like to see in the code:
Don't forget, though, that 60% of the score is based on game AI... we hope you enjoy the contest, and we are really looking forward to seeing your submitted games.
Last but not least a quick special thanks to Joaquin Quiñonero Candela for working so hard organising the challenge.
Update 1: We have noticed that the PDF still had some review comments; they have now been removed. Also, rendering during scoring was accidentally reset to constant black (rather than an alpha value proportional to the expected probability of the territory outcome). Finally, we slightly refined the passing algorithm of the AI: if the human player passes, the AI uses the Monte Carlo scorer to estimate whether or not each piece's colour is already determined with a probability of at least 80%. If so, the AI passes as well.
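The refined passing rule can be sketched as follows. This is an illustrative Python sketch, not the C# code from the sample; the `ownership_samples` format (one vertex-to-colour map per Monte Carlo playout) and the function name are assumptions made for illustration only.

```python
def should_pass(ownership_samples, threshold=0.8):
    """Decide whether the AI should answer a human pass with a pass.

    ownership_samples: one dict per Monte Carlo playout, mapping each
    vertex to the colour ('B' or 'W') that owned it at the end of that
    playout. The AI passes only if every vertex's outcome is already
    determined with probability >= threshold (80% in the sample).
    """
    if not ownership_samples:
        return False  # no playout evidence yet; keep playing
    n = len(ownership_samples)
    for v in ownership_samples[0]:
        black = sum(1 for sample in ownership_samples if sample[v] == 'B')
        p_black = black / n
        # A vertex is "determined" if one colour dominates the playouts.
        if max(p_black, 1.0 - p_black) < threshold:
            return False  # at least one vertex is still contested
    return True
```

In this sketch the threshold is a parameter, so the 80% figure from the update is just the default rather than a hard-coded constant.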
Update 2: We have noticed a couple of further bugs and easy improvements: the rule-based AI would start to fill in its own eyes if the eye was either on the side of the board or in one of the four corners. This was not intended and has now been fixed. Also, the Monte Carlo sampling can be sped up significantly by only checking for move validity and "eye-ness" after a random move has been drawn from the list of empty vertices. Checking a vertex for being empty is a lot faster than checking whether a move on that vertex is valid.
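The sampling speed-up can be illustrated with a short Python sketch (the actual sample is written in C#, and the `is_legal` and `is_own_eye` predicates here are hypothetical stand-ins for the engine's checks): shuffle the cheap-to-maintain list of empty vertices, then run the expensive checks on one candidate at a time rather than filtering the whole board up front.

```python
import random

def random_playout_move(empty_vertices, is_legal, is_own_eye, rng=None):
    """Pick a random legal, non-eye-filling move for a Monte Carlo playout.

    Instead of running the expensive legality/eye checks over every
    vertex and then sampling, we shuffle the (cheap) list of empty
    vertices and test candidates one at a time, stopping at the first
    acceptable move. Returns None (i.e. pass) if no such move exists.
    """
    rng = rng or random.Random()
    candidates = list(empty_vertices)
    rng.shuffle(candidates)
    for v in candidates:
        # Only now pay for the expensive checks, on a single vertex.
        if is_legal(v) and not is_own_eye(v):
            return v
    return None
```

On average only a handful of candidates are examined before an acceptable move is found, which is where the speed-up over checking every empty vertex comes from.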