I've been doing some theoretical research on learning algorithms; theoretical as in not code-heavy, more about understanding the ideas behind existing learning approaches, the fields they're used in, and how to put them to use in games. Genetic algorithms and neural networks, while very interesting, seem generally useless for games. They're good at all those "impossible to explain" problems like learning to interpret handwriting, learning to interpret images, or learning to climb as fast and securely as possible. Very useful for robots (oh how I wish I had money), but game worlds have very few such problems.

More interesting for games is something known as reinforcement learning, which in very short terms is like training an animal: you give it treats when it does things you like (like rolling over), and reprimands (hrm) when it does bad things (like chewing on your shoes). Also interesting are drive-based actions, where the agent has a key set of drives (good ones are hunger and curiosity) that make it act based on how high each drive's value is, and expectation/prediction learning, where the agent learns to expect a certain outcome from a certain action. These two combined make for some really interesting things.

Imagine this: a creature is somewhat hungry, so he needs to find food. He notices a button, and knows from previous experience that something happens when you push a button. His hunger drive isn't overly strong yet (a strong drive would restrict him to more reliable actions), so he decides to push the button. Food appears. He eats the food because he was somewhat hungry. Now only his curiosity remains, so he presses the button again, but doesn't eat the food. After a couple of presses he concludes that food appears with 100% probability when this button is pushed. Now the button is boring and will no longer satisfy his curiosity, but it's noted as a reliable source of food.
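To make that concrete, here's a minimal sketch of how the two ideas could fit together in code. Everything in it is my own invention for illustration (the drive values, the `push_button`/`wander` actions, the decay constants); the point is just that the agent picks the action with the highest drive-weighted score, and that curiosity fades as an action's outcome becomes predictable:

```python
class Creature:
    def __init__(self):
        self.hunger = 0.4      # drives in [0, 1]; higher means more urgent
        self.curiosity = 0.8
        self.beliefs = {}      # expectation model: action -> (tries, times_food_appeared)

    def food_chance(self, action):
        tries, fed = self.beliefs.get(action, (0, 0))
        return fed / tries if tries else 0.5      # untried action: a 50/50 guess

    def novelty(self, action):
        tries, _ = self.beliefs.get(action, (0, 0))
        return 1.0 / (1.0 + tries)                # fades as evidence piles up

    def score(self, action):
        # hunger favours reliable food sources, curiosity favours the unknown
        return self.hunger * self.food_chance(action) + self.curiosity * self.novelty(action)

    def step(self, actions, world):
        self.hunger = min(1.0, self.hunger + 0.05)   # drives build back up over time
        action = max(actions, key=self.score)        # act on the most pressing option
        food_appeared = world(action)                # try it and observe the outcome
        tries, fed = self.beliefs.get(action, (0, 0))
        self.beliefs[action] = (tries + 1, fed + int(food_appeared))
        if food_appeared and self.hunger > 0.3:
            self.hunger = 0.0                        # eat only if hungry enough
        # a predictable outcome no longer satisfies curiosity
        self.curiosity = max(0.0, self.curiosity - (1.0 - self.novelty(action)) * 0.2)
        return action

# the button from the example: pushing it always produces food
creature = Creature()
for _ in range(10):
    print(creature.step(["push_button", "wander"], lambda a: a == "push_button"))
```

Run it and the creature presses the button, eats, keeps pressing while its curiosity drains, and from then on only bothers with the button when hunger builds back up; it has been filed away as a reliable food source.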
That was a very simple example of a problem (how to satisfy curiosity and hunger). For a long time I've been trying to figure out how to describe the problem of strategy in a game, and it's hard for several reasons. For one, it's hard to determine what is part of a strategy and what is not. Second, it's tough to tell whether the strategy was bad, or whether it was just badly executed. The latter especially is what usually makes AI programmers want to hardcode strategies into their agents rather than let them try things out: there's a high probability the agents will learn the wrong lesson, because they can't be reasoned with; you can't easily tell them *why* it went wrong. That *why* is the interesting part: how to detect where a strategy went wrong and adapt it based on that, or just try again.
Mauve did something cool with his bots a long time ago. He made them test different ways of fighting a specific player, and depending on the rate of success, they would come to prefer that fighting technique against that particular player. It was very simple, just three different hardcoded ways of fighting to choose between, but it's still an example of learning AI. The question is which things should be considered learnable, and which things to use as input to the learning system. You have to conserve CPU, and AI programmer sanity.
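For what it's worth, that kind of per-player preference learning is tiny to implement. Here's a rough sketch of how I imagine it working; the style names and the occasional-random-pick selection are my guesses, not how Mauve actually wrote it:

```python
import random
from collections import defaultdict

# three hardcoded fighting styles (the names are made up for illustration)
STYLES = ["rush", "circle_strafe", "keep_distance"]

class StylePicker:
    """Keeps per-player win/loss tallies; usually picks the best style, sometimes explores."""
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        # player -> style -> [wins, fights]
        self.record = defaultdict(lambda: {s: [0, 0] for s in STYLES})

    def choose(self, player):
        stats = self.record[player]
        if random.random() < self.epsilon:          # occasionally re-test the other styles
            return random.choice(STYLES)
        def win_rate(style):
            wins, fights = stats[style]
            return wins / fights if fights else 1.0  # untried styles look promising
        return max(STYLES, key=win_rate)

    def report(self, player, style, won):
        stats = self.record[player][style]
        stats[0] += int(won)
        stats[1] += 1

picker = StylePicker()
style = picker.choose("PlayerOne")
# ... fight using `style`, then feed the result back in:
picker.report("PlayerOne", style, won=True)
```

The occasional random pick matters: without it the bot locks onto whichever style happened to win first and never notices when that player adapts.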
I'm personally most interested in squad-based AI, be it an encounter with an enemy squad in a single-player game like Halo, Half-Life, or any Ubisoft shooter, among many others, or a highly teamwork-reliant online game like, say, Counter-Strike (in which a single round really is much like a squad encounter in one of the above games). I haven't yet built anything usable in the field of a learning squad, but I have a couple of ideas.