I just completed my 12-week internship in the Applied Games Group. I was working with Ralf, Thore and Phil on applying model-based reinforcement learning to race car driving. This is one of the most interesting projects I've worked on. Next to my workstation I had an Xbox 360 development kit hooked up to a widescreen television, which gave some of the other interns at Microsoft Research the impression that I was just playing video games all day. While I did play quite a bit, the point of my work was to get the cars in Project Gotham Racing 3 to learn to drive themselves through experience.
I implemented a simplified version of the Adaptive Modeling and Planning System (AMPS) described in my Ph.D. thesis (available here). We found that AMPS learns to drive competently after a single training lap, while the other standard algorithms I implemented for comparison performed far worse. The strength of AMPS comes partly from its ability to generalize from limited experience through dynamic state abstraction. Most racing games on the market today ship with AI opponents whose behavior is hard-coded, but I think opponents that learn, adapt to your driving style, and exploit your personal weaknesses could make for some very interesting gameplay.
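To give a flavor of why state abstraction helps a model-based learner generalize from limited experience, here is a minimal toy sketch (not the actual AMPS algorithm, and nothing from PGR3): a car on a one-dimensional "track" whose continuous position is mapped into a handful of coarse abstract states, so every observed transition informs the whole region it falls in. The learner builds a transition and reward model over the abstract states and plans with value iteration. All names and numbers here are illustrative assumptions.

```python
from collections import defaultdict

# Toy sketch of model-based RL with a fixed state abstraction.
# Not AMPS itself: AMPS refines its abstraction dynamically; here the
# abstraction is a fixed coarse binning, just to show the generalization idea.

N_ABSTRACT = 5          # number of abstract states (coarse position bins)
ACTIONS = [-1, 0, 1]    # toy actions: drift back, coast, accelerate forward

def abstract(pos):
    """Map a continuous position in [0, 1) to a coarse abstract state."""
    return min(int(pos * N_ABSTRACT), N_ABSTRACT - 1)

class ModelBasedLearner:
    def __init__(self):
        # Empirical model over *abstract* states: transition counts
        # and average rewards per (state, action) pair.
        self.counts = defaultdict(lambda: defaultdict(int))
        self.reward_sum = defaultdict(float)
        self.reward_n = defaultdict(int)

    def observe(self, s, a, r, s2):
        """Record one experienced transition (s, a) -> s2 with reward r."""
        self.counts[(s, a)][s2] += 1
        self.reward_sum[(s, a)] += r
        self.reward_n[(s, a)] += 1

    def plan(self, gamma=0.9, sweeps=50):
        """Value iteration on the learned abstract model."""
        V = [0.0] * N_ABSTRACT
        for _ in range(sweeps):
            for s in range(N_ABSTRACT):
                qs = []
                for a in ACTIONS:
                    n = self.reward_n.get((s, a), 0)
                    if n == 0:
                        continue  # never tried this action here
                    r = self.reward_sum[(s, a)] / n
                    total = sum(self.counts[(s, a)].values())
                    exp_v = sum(c / total * V[s2]
                                for s2, c in self.counts[(s, a)].items())
                    qs.append(r + gamma * exp_v)
                if qs:
                    V[s] = max(qs)
        return V

# Gather a small, deterministic batch of experience: reward equals the
# new position, so progress along the track is what the learner values.
learner = ModelBasedLearner()
for i in range(10):
    pos = i / 10.0 + 0.05
    for a in ACTIONS:
        new_pos = min(max(pos + 0.2 * a, 0.0), 0.999)
        r = new_pos
        learner.observe(abstract(pos), a, r, abstract(new_pos))

V = learner.plan()
```

Because the model lives over five abstract states rather than a continuum of positions, thirty observed transitions are enough for planning to prefer states farther along the track; the real trick in AMPS, which this sketch omits, is choosing and refining that abstraction automatically as experience accumulates.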