Capture the Flag AI
November 15th, 2006
This last quarter I was taking an AI (artificial intelligence) class. Our final project was to create an AI player for a simple capture the flag game. At the end of the quarter all the teams would compete in a tournament. Whoever won the tournament got an A on their AI and bragging rights!
My friend Jonathan, the same one who went to Florida with my roommate Eric and me, was my partner, and we put in many hours on it. Unfortunately we were unable to polish it up completely. We had no idea how well we would do in the tournament, since there were a lot of really smart people in the class who spent a lot of time on theirs.
We found out the standings today: we got 4th place! Check it out!
The rest of this post is the paper we wrote about our player.
Beanzarth
When we designed Beanzarth, our capture the flag AI program, our goal was to create a solid base that would be able to play whatever intricate strategies we came up with. We had high goals for it. It was going to preprocess the entire board, dividing it up into sections with different attributes, so we could choose the stealthiest paths and find chokepoints to defend. It would attempt to confuse the enemy by changing strategies mid-game if one did not work out. By using A* along with a hierarchical strategy, we succeeded in building a powerful and flexible base.
Although we could have written our AI without searching, we felt that using A* would give us the flexibility and power to do what we wanted, as well as reinforce our knowledge. There aren't a whole lot of peculiarities about our implementation. To use our A*, one would instantiate the class with the current state and then pass the desired state as an argument to the search method. We used the traditional "open" and "closed" lists, tracking the unexplored frontier and the states already expanded. While we looked into more advanced heuristics in an attempt to get a more natural and useful path, we eventually decided to stay with Euclidean distance.
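To make the search loop concrete, here is a rough sketch of A* over a simple grid, using an "open" priority queue and a "closed" set with the Euclidean heuristic. It is written in Java purely for illustration; the class and method names (AStar, search, and so on) are invented for this example, and our real search ran over full game states rather than single positions.

import java.util.*;

// A minimal A* sketch over a grid of (x, y) positions.
public class AStar {
    static final int[][] DIRS = {{0, 1}, {0, -1}, {1, 0}, {-1, 0}};

    // Euclidean distance heuristic, as described above.
    static double heuristic(int x, int y, int gx, int gy) {
        return Math.hypot(x - gx, y - gy);
    }

    // Returns the number of steps on the cheapest path from (sx, sy) to
    // (gx, gy), or -1 if unreachable. blocked[x][y] marks impassable tiles.
    static int search(boolean[][] blocked, int sx, int sy, int gx, int gy) {
        int w = blocked.length, h = blocked[0].length;
        double[][] g = new double[w][h];
        for (double[] row : g) Arrays.fill(row, Double.MAX_VALUE);
        g[sx][sy] = 0;

        // "Open" list: frontier states ordered by f = g + h.
        PriorityQueue<double[]> open =
            new PriorityQueue<>(Comparator.comparingDouble(n -> n[0]));
        open.add(new double[]{heuristic(sx, sy, gx, gy), sx, sy});
        // "Closed" list: states we have already expanded.
        boolean[][] closed = new boolean[w][h];

        while (!open.isEmpty()) {
            double[] cur = open.poll();
            int x = (int) cur[1], y = (int) cur[2];
            if (closed[x][y]) continue;
            closed[x][y] = true;
            if (x == gx && y == gy) return (int) g[x][y];

            for (int[] d : DIRS) {
                int nx = x + d[0], ny = y + d[1];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (blocked[nx][ny] || closed[nx][ny]) continue;
                double ng = g[x][y] + 1;            // every move costs 1
                if (ng < g[nx][ny]) {
                    g[nx][ny] = ng;
                    open.add(new double[]{ng + heuristic(nx, ny, gx, gy), nx, ny});
                }
            }
        }
        return -1; // goal unreachable
    }
}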
To determine every possible successor state, we move each bot to every position it can reach. We implemented this by cycling through the cardinal directions and moving the bot in each direction, assuming the move is legal and the bot has not already been there. At each position, a new successor state is created unless a duplicate successor state already exists. To improve performance, we decided to eliminate moves that require moving in opposite directions (for example, going around a single water tile: up, right, right, down). We felt that this trivial case was not worth the effort or the computation. See successorHelper in State for more information.
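The sketch below shows the general idea behind successorHelper, again in illustrative Java. The Successors class and the passable grid are stand-ins, our real version produced whole successor States for every bot, and our pruning of opposite-direction moves may have been broader than the single-step backtracking check shown here.

import java.util.*;

// A sketch of one-step successor generation for a single bot.
public class Successors {
    // dx/dy for the four cardinal directions: up, down, left, right.
    static final int[] DX = {0, 0, -1, 1};
    static final int[] DY = {-1, 1, 0, 0};

    // Generate the legal one-step moves for a bot at (x, y). lastDir is the
    // index of the previous move, or -1; we skip its reverse so the search
    // never wastes time doubling back over its own path.
    static List<int[]> successors(boolean[][] passable, Set<Long> visited,
                                  int x, int y, int lastDir) {
        List<int[]> result = new ArrayList<>();
        for (int dir = 0; dir < 4; dir++) {
            if (lastDir >= 0 && isOpposite(dir, lastDir)) continue; // prune backtracking
            int nx = x + DX[dir], ny = y + DY[dir];
            if (nx < 0 || ny < 0 || nx >= passable.length || ny >= passable[0].length)
                continue;                        // off the board
            if (!passable[nx][ny]) continue;     // illegal move (e.g. water)
            long key = ((long) nx << 32) | (ny & 0xffffffffL);
            if (visited.contains(key)) continue; // bot has already been here
            result.add(new int[]{nx, ny, dir});
        }
        return result;
    }

    static boolean isOpposite(int a, int b) {
        // up(0)/down(1) and left(2)/right(3) are the opposite pairs.
        return (a / 2 == b / 2) && (a != b);
    }
}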
Beanzarth was originally planned to be goal oriented; however, due to time constraints we were not able to fully implement this idea. Instead we set up a system of six possible strategies that our AI could play and created logic for each of them: shooting, evading, targeting, returning the flag, camping, and reaching the flag. These strategies are evaluated in order, and if one of them returns a move, that move is played and the rest are ignored. For instance, if none of the bots can shoot an enemy but one is targeted, then that bot will try to evade its pursuer. The shooting strategy is fairly trivial: if one of our bots has a target and can still see it, shoot the enemy (see kill in Strategy for more details). Evade is more complex. If one of our bots is targeted, our program finds all of the possible successor states and then uses A* with an evasion heuristic to determine the best move that will place our bot out of the enemy's sights (see evade in Strategy for more details). If neither of the previous strategies is viable, the bots will try to target an enemy. If there is more than one possible target, the enemy closest to our flag is targeted (see target in Strategy for more details).
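The hierarchy itself is simple to express. The sketch below shows its general shape: each strategy either returns a move or declines, and the first one that returns a move wins. The Strategy interface and the Move and GameState types are illustrative; our real Strategy class implemented these behaviors as methods (kill, evade, target, and so on) rather than separate classes.

import java.util.*;

// Illustrative types standing in for our real game representation.
class Move { /* direction, shooting flag, etc. */ }
class GameState { /* bot positions, flags, visible enemies, ... */ }

// Each strategy returns a move, or null if it does not apply this turn.
interface Strategy {
    Move decide(GameState state);
}

class Beanzarth {
    // Evaluated in priority order: shoot, evade, target, return flag, camp, get flag.
    private final List<Strategy> strategies;

    Beanzarth(List<Strategy> strategies) {
        this.strategies = strategies;
    }

    Move chooseMove(GameState state) {
        for (Strategy s : strategies) {
            Move m = s.decide(state);
            if (m != null) return m; // first viable strategy wins; the rest are ignored
        }
        return null; // no strategy produced a move
    }
}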
The last two strategies we used are almost identical in implementation: "BringFlagBack" and "GetFlag." BringFlagBack tests whether we have the flag and, if so, generates a new state with the flag carrier's position at the position of our flag. It passes this to A*. GetFlag does the same thing, except its goal state places both of our flag-runners' goal positions at the enemy flag. At the time we turned it in, we had two slightly different implementations of this. One used a single runner until it died and then switched to the other. The second attempted to coordinate the two runners, but was ditched at the last minute because it was less effective.
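In sketch form, both strategies amount to copying the current state, moving a bot's goal position onto a flag, and handing the result to A*. The FlagState, Position, and FlagStrategies names below are illustrative, not our real class names.

// Illustrative stand-ins for our real State class.
class Position {
    final int x, y;
    Position(int x, int y) { this.x = x; this.y = y; }
}

class FlagState {
    Position[] bots;      // our bots' positions
    Position ourFlag;     // where our flag sits
    Position enemyFlag;   // where the enemy flag sits
    int flagCarrier = -1; // index of the bot holding the enemy flag, or -1

    FlagState copy() {
        FlagState s = new FlagState();
        s.bots = bots.clone();
        s.ourFlag = ourFlag;
        s.enemyFlag = enemyFlag;
        s.flagCarrier = flagCarrier;
        return s;
    }
}

class FlagStrategies {
    // BringFlagBack: if we hold the enemy flag, the goal state puts the
    // carrier on top of our own flag, and A* plans the path home.
    static FlagState bringFlagBackGoal(FlagState current) {
        if (current.flagCarrier < 0) return null; // we don't have the flag
        FlagState goal = current.copy();
        goal.bots[current.flagCarrier] = current.ourFlag;
        return goal;                              // handed to A* as the target
    }

    // GetFlag: same idea, except the goal puts a runner on the enemy flag.
    static FlagState getFlagGoal(FlagState current, int runner) {
        FlagState goal = current.copy();
        goal.bots[runner] = current.enemyFlag;
        return goal;
    }
}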
Due to time constraints, we were unable to reach a bug-free state with our camping algorithm. At run time, our program would process the given map and find camping positions based on terrain and distance from our flag. Mountains take precedence over forests, and forests take precedence over plains. We would then pick the best single camping position above and below the flag, creating a crossfire around our flag.
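A rough sketch of that selection might look like the following. The terrain codes and scoring weights are illustrative guesses, since the real algorithm never reached a bug-free state.

// Score tiles by terrain (mountain > forest > plains) and closeness to our
// flag, then keep the best spot above the flag and the best spot below it.
class Camping {
    static final int PLAINS = 0, FOREST = 1, MOUNTAIN = 2;

    static double score(int terrain, int x, int y, int flagX, int flagY) {
        double dist = Math.hypot(x - flagX, y - flagY);
        return terrain * 100 - dist; // terrain dominates, nearer spots break ties
    }

    // Returns {bestAbove, bestBelow}, each as {x, y}, to create a crossfire
    // around the flag. terrain[x][y] holds one of the codes above.
    static int[][] pickCampingSpots(int[][] terrain, int flagX, int flagY) {
        int[] above = null, below = null;
        double aScore = -Double.MAX_VALUE, bScore = -Double.MAX_VALUE;
        for (int x = 0; x < terrain.length; x++) {
            for (int y = 0; y < terrain[0].length; y++) {
                double s = score(terrain[x][y], x, y, flagX, flagY);
                if (y < flagY && s > aScore) { aScore = s; above = new int[]{x, y}; }
                if (y > flagY && s > bScore) { bScore = s; below = new int[]{x, y}; }
            }
        }
        return new int[][]{above, below};
    }
}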
While we are happy with our current AI, there are many features we had planned or wished to implement that proved infeasible, too time-consuming, or less effective than we had hoped. One of the most interesting and most important was setting up waypoints and pathways. Being able to tell a bot to go to the enemy base through the middle or the bottom of the map would have been very useful. We did implement this with limited success. Unfortunately we under-prioritized this feature, so we did not set aside enough time to devise a plan to implement it correctly.
Before we wrote any code we considered preprocessing the board to simplify it and allow A* to search more efficiently, similar to the Level of Detail maps that Kynapse uses. However, this depends on being able to set up waypoints, so with those unfinished we were unable to implement it. The last feature, which we have already touched on, deals with using different strategies to capture the flag, such as sending a single bot at a time or two working cooperatively. Our architecture easily allowed for more strategies, and these could even be swapped in the middle of a match if one was not working against a particular opponent. This is different from learning, as all of the strategies would be programmed in ahead of time.
As with any project, Beanzarth still has much room for improvement. Had we more time, we could have implemented several additional offensive and defensive strategies. Overall, we are satisfied with our current program and how well it works. Our hierarchy of strategies made it easy to write the different aspects of our AI, and A* made the pathfinding decisions straightforward.