I was just talking with my friend Glen about yesterday's post on AI bots that have to "see" in order to know where things are. If you can't be bothered reading the previous post, here are the basics:
Bots shouldn't have intimate awareness of object locations. In Joint Ops, if there's a solid object (e.g. a wall) between you and a bot, it can't see you. If there's a soft object (e.g. smoke, a bush, a curtain, a net) between you, it knows exactly where you are and can shoot with utmost accuracy. Additionally, as soon as a bot knows you exist, it tends to know how fast you're moving and in what direction, so it will rarely miss with a leading shot.
The first-order approximation to a groovy solution is to have the bot render the world in three colours: black for background cruft, green for friendly players/objects, and red for enemies. It can then use this cheaty segmentation to classify objects and react appropriately.
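To make the first-order idea concrete, here's a minimal sketch of that cheaty segmentation: the bot's renderer tags every pixel as background (black), friendly (green), or enemy (red), and "classification" collapses into a colour lookup. The function names and the tiny 3×3 frame are my own invention, not anything from an actual game engine.

```python
# Three-colour "cheaty segmentation": each pixel's colour IS its label.
BACKGROUND = (0, 0, 0)      # black: static world cruft
FRIENDLY = (0, 255, 0)      # green: friendly players/objects
ENEMY = (255, 0, 0)         # red: enemies

def classify_pixel(rgb):
    """Map a rendered pixel straight to a tactical label."""
    return {BACKGROUND: "background",
            FRIENDLY: "friendly",
            ENEMY: "enemy"}.get(rgb, "background")

def find_enemy_pixels(frame):
    """Return (row, col) coordinates of every enemy pixel in the frame."""
    return [(r, c)
            for r, row in enumerate(frame)
            for c, rgb in enumerate(row)
            if classify_pixel(rgb) == "enemy"]

# A tiny 3x3 "rendered" frame with one enemy and one friendly pixel.
frame = [
    [BACKGROUND, BACKGROUND, BACKGROUND],
    [BACKGROUND, ENEMY,      FRIENDLY],
    [BACKGROUND, BACKGROUND, BACKGROUND],
]
print(find_enemy_pixels(frame))  # → [(1, 1)]
```

The appeal is obvious: no image processing at all, just a lookup. The downside, of course, is that camouflage and lighting can't possibly matter, because the renderer has already done all the thinking.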
The awesome solution is to have the bot build up a "mental map" of the static world by moving around and seeing how things move (i.e. close things move faster). Once it has an image of the static world, it can detect anomalies: it can predict what it should see at any particular time/place/direction, and if there's a difference between the prediction and what it really sees, that difference is probably a dynamic object. Then it can classify them however it pleases.
The idea that struck me while chatting with Glen is a second-order approximation to the solution, where we skip the stage of building a mental map of the static world, and just render the actual static world straight from the game data. Then we can render the real world, replete with dynamic objects, and the differences can be collated/classified/etc. as per the final solution.
This solution pleases me in that 1) it doesn't require nearly as much AI, nor does it require a period of time for the bot to learn the world, and 2) it still allows camouflage to play a part. If you're wearing a speckly green uniform, and you stand amongst a bunch of speckly green grass, you're not going to stand out much. Especially if the bot works at low resolution and low colour depth.
At low resolution, with minimal effects (only the ones that matter, like smoke — bots don't care about eye candy), rendering the two frames, diffing the images, and doing some basic image segmentation/classification ought to allow someone to run a couple of such bots on a client machine, even while playing the game themselves! The bottleneck is entirely in the classification part, which can probably be handled by cheating a bit, as long as it feels like the bots are seeing and thinking instead of just cheating. I suggest that if a bot sees a reasonably sized blob of pixels that don't belong, there's nothing wrong with it being handed a hint by the game and being told which team that blob belongs to.
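The segmentation step above can be done with a plain flood fill over the diff mask: group 4-connected "doesn't belong" pixels into blobs, and throw away blobs too small to be a player (noise, flickering effects). This is a sketch under my own assumptions — `min_size` and the connectivity choice are arbitrary tuning knobs, and the "hint" (asking the game which team a blob is) would be a separate, cheaty lookup once a blob survives.

```python
from collections import deque

def find_blobs(mask, min_size=3):
    """Group True cells of a 2D boolean mask into 4-connected blobs.

    Blobs smaller than min_size are discarded as noise; anything
    bigger is a candidate dynamic object worth asking the game about.
    """
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first flood fill from this unvisited pixel.
                blob, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(blob) >= min_size:
                    blobs.append(blob)
    return blobs

# A 2x2 blob (a candidate enemy) plus one stray noise pixel.
mask = [
    [True,  True,  False, False, False],
    [True,  True,  False, False, True],
    [False, False, False, False, False],
]
print(len(find_blobs(mask)))  # → 1 (the stray pixel is dropped)
```

At low resolution this is cheap enough to run per-bot per-frame, and the knobs are where the personality lives: a sharper `threshold` and smaller `min_size` makes for an eagle-eyed bot, looser values make one that genuinely loses you in the grass.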
I am well pleased. Now someone must implement this bot, that can't see through smoke or bushes, is hindered by camouflage, and gets completely confused when you turn out the lights.