There's a sentiment I've seen echoed on various forums regarding Flight Simulator, particularly with regard to testing-- especially beta testing. This sentiment is captured over at Avsim by a poster who claims "...nobody outside the MS building is a beta tester. You are a marketing tool and an extra set of eyes." (part of a longer thread). The poster was previously involved in testing a console title.
Now, I'm *not* a tester. I'm an artist. We make bugs, we don't find them. :) But I do happen to sit next to someone who is a tester, so I feel fully qualified to shoot my mouth off about the testing process here in the ACES game studio.
I imagine that console testing might very well happen as Mr._Al describes. I don't know. My experience is with PC titles, and even there I can tell you with confidence that testing a title like Flight Simulator is a different kettle of fish from testing Half-Life 2.
PC titles have to worry about a broad spectrum of configurations (how video cards, sound cards, memory, CPUs, etc. interact), whereas consoles don't. That means configuration testing is a part of the external testing process-- an important part. Even Microsoft doesn't have an infinite number of machines to test various configurations on-- we use a representative sample (which is still a lot of machines) instead. Having a wide variety of configurations in use in the "wild" (external test), as it were, drastically increases the odds that you'll catch a major crashing or hanging bug. In a sense that might be considered an "extra set of eyes," but that feedback and input (via crash logs and written reports) are critical to a successful release, and they're not the only way external beta testers affect final quality.
Simulations like Flight Simulator are different from other PC titles. Flight Sim, for example, is a pretty big sandbox-- fly anywhere (except the poles...), at any time of day, in a variety of aircraft technologies. Look at a game/sim like Forza, for example: yes, it's pretty realistic. Tons of cars, tons of detail. Yes, it edges into the simulation space. But...can you drive from Lillestrøm to Sørumsand? Umm... nope. You're locked into the (very detailed) areas provided. The limitations of Forza and the breadth of Flight Simulator require different testing models and different areas of expertise. We have a great group of full-time testers. They come from a variety of backgrounds, but they are certainly subject matter experts in their areas, and yes, a bunch of 'em are pilots. That being said, we bring various outside experts onto our Beta as necessary and useful: meteorologists, air traffic controllers, pilots, and the like. Sometimes those experts are also part of our existing user community. They are listened to.
There's also the 3rd party. The mod/add-on community for Flightsim is quite large and contains both professional and amateur components, some of which have large (passionate and vocal) followings. While the team's focus has to be on the core product, we invest a significant amount of time and resources into supporting backwards compatibility. Part of that back-compat testing is done in house, to be sure, but we also invite a representative sample of add-on developers from multiple areas, along with "hardcore" users, to our Betas to provide feedback and input precisely on those areas.
All those people have ideas and feedback. The vast majority of bugs logged are found by our internal teams (they'd better be-- finding bugs is their full-time job), but the Flight Sim Beta has the highest number of "valid" (usually meaning "non-duplicated") bugs found through an external process. That ain't 'cause the code's crappy. It's 'cause the Beta testers are both informed and passionate about what they do. They care about what they're doing and give great feedback. Which makes for a better product.
That being said, the rap sometimes laid on the testers is that they "didn't do their job-- just look at this bug (fill in your own example here)!" To which I'd say:
"Hold on there, Sparky! Not so fast!"
I've mentioned before that there are "bugs" and there are incomplete features, and that one is often confused with the other. I'm not walking some fine semantic line when I say this; it's just the nature of the beast. In part, it's why we have version numbers: we build upon what's gone before and take advantage of new opportunities. This is not meant to excuse poor or incomplete implementation: sometimes we're guilty of that. Usually it's because time and resources don't allow for more, and we decided that something is better than nothing. That choice isn't always the right one, but on the whole I think we've done okay.
Let me bring this post back to where I started. Here at ACES, our Beta testers (and our full-time testers too!) have a greater-than-average impact on the development of our products. Their suggestions and input have changed feature implementations and increased quality. Not every suggestion leads to an immediate implementation, but don't assume that because something's not in the product, it wasn't asked for or remarked upon. (For that matter, don't assume that the team doesn't argue feature set back and forth as well. We do.) Our Beta testers are more than a "marketing tool" (we ask them to sign an NDA and frankly would prefer they not mention they've tested the product at all), and certainly more than "an extra set of eyes" (although when you've got a whole planet to deliver, you need every set of eyes you can get!).
There's a lively comment or two (including one from the person originally quoted) in the comments section of this post, and a follow-up from me with a little more exposition; please check 'em out...