Artificial Intelligence: Strategy and Tactical


The MARPO Methodology: Planning and Orders

Brett Laming (Rockstar Leeds)
AI Game Programming Wisdom 4, 2008.
Abstract: This paper elaborates on a previously alluded-to AI design paradigm, nicknamed MARPO, that continues to produce flexible and manageable AI from first principles. It applies the rationales behind these principles to create a goal-based, hierarchical state machine that embraces the beauty of rule-based reasoning systems. Grounded in industry experience, it avoids the common pitfalls of this approach and shows how MARPO discipline maximizes the efficiency, flexibility, manageability, and success of the end result.

Risk-Adverse Pathfinding Using Influence Maps

Ferns Paanakker (Wishbone Games B.V.)
AI Game Programming Wisdom 4, 2008.
Abstract: This article describes a pathfinding algorithm that allows the use of Influence Maps (IM) to mark hostile and friendly regions. The algorithm allows us to find the optimal path from point A to point B very quickly while taking into consideration the different threat and safety regions in the environment. This allows units to balance the risk while traversing their path, thus allowing for more depth of gameplay.
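
A minimal sketch of the general idea (an editorial illustration, not the article's code): an A* search whose step cost adds a penalty taken from an influence map, so a tunable risk_weight trades path length against safety. The grid representation, the influence dictionary, and the risk_weight value are all assumptions.

import heapq

def risk_aware_astar(grid_w, grid_h, blocked, influence, start, goal, risk_weight=5.0):
    """A* where edge cost = step length + risk_weight * threat at the entered cell."""
    def h(p):  # admissible heuristic: Manhattan distance (every step costs at least 1)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0.0, start)]
    best_g = {start: 0.0}
    came_from = {}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return list(reversed(path))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h) or nxt in blocked:
                continue
            new_g = g + 1.0 + risk_weight * influence.get(nxt, 0.0)
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                came_from[nxt] = cur
                heapq.heappush(open_set, (new_g + h(nxt), new_g, nxt))
    return None  # no path exists

With risk_weight set to 0 this degenerates to a plain shortest path; larger values make units detour around hostile influence, which is the gameplay trade-off the article describes.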

RTS Terrain Analysis: An Image-Processing Approach

Julio Obelleiro, Raúl Sampedro, and David Hernández Cerpa (Enigma Software Productions)
AI Game Programming Wisdom 4, 2008.
Abstract: In an RTS game, terrain data can be precomputed and used at runtime to help the AI in its decision making. This article introduces a terrain analysis technique based on simple image processing operations which, combined with pathfinding data, produces precise information about relevant areas of the map.

An Advanced Motivation-Driven Planning Architecture

David Hernández Cerpa and Julio Obelleiro (Enigma Software Productions)
AI Game Programming Wisdom 4, 2008.
Abstract: As game AI complexity increases, imperative techniques such as Finite State Machines become unmanageable, inflexible, and problematic to maintain. Planning architectures tackle this complexity by introducing a new decision-making paradigm. This article describes a new hierarchical planning technique based on STRIPS, GOAP, and HTN. It features a motivational approach together with the capability to handle parallel goal planning, which favors the appearance of emergent behaviors. Advanced characteristics include, among others, partial replanning and the mixing of planning and execution, with parameters used at planning time to represent the current world state. The architecture, used in the strategy game War Leaders: Clash of Nations, allows high levels of code reusability and modularity, and is easily adaptable to the game design changes that commonly arise during a complete game development.

Command Hierarchies Using Goal-Oriented Action Planning

David Pittman (Stormfront Studios)
AI Game Programming Wisdom 4, 2008.
Abstract: Goal-based AI agent architectures are a popular choice in character-driven games because of the apparent intelligence the agents display in deciding how to pursue their goals. These games often also demand coordinated behavior between the members of a group, which introduces some complexity in resolving the autonomous behavior of the individuals with the goal of the collective. This article introduces a technique for integrating military-style command hierarchies with the Goal-Oriented Action Planning (GOAP) architecture. An UnrealScript-based example of the framework is used to illustrate the concepts in practice for a squad-based first-person shooter (FPS), and practical optimizations are suggested to help the technique scale to the larger numbers of units required for real-time strategy (RTS) games.

Practical Logic-Based Planning

Daniel Wilhelm (California Institute of Technology)
AI Game Programming Wisdom 4, 2008.
Abstract: An efficient, easy-to-implement planner is presented based on the principles of logic programming. The planner relies on familiar IF/THEN structures and constructs plans efficiently, but it is not as expressive as other proposed planners. Many easy extensions to the planner are discussed such as inserting and removing rules dynamically, supporting continuous values, adding negations, and finding the shortest plan. Accompanying source code provides easy-to-follow implementations of the planner and the proposed extensions.
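
As a rough illustration of the kind of IF/THEN planner described (not the article's implementation), the sketch below searches breadth-first over sets of facts, so the first plan found is also the shortest; the rule names and facts are invented for the example.

from collections import deque

# Each rule: IF all preconditions hold THEN the action adds its effects.
RULES = [
    ("buy_axe",     {"has_gold"},             {"has_axe"}),
    ("chop_wood",   {"has_axe"},              {"has_wood"}),
    ("build_house", {"has_wood", "has_axe"},  {"has_house"}),
]

def plan(start_facts, goal_facts, rules=RULES):
    """Breadth-first search over fact sets -> shortest sequence of rule firings."""
    start, goal = frozenset(start_facts), frozenset(goal_facts)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        facts, steps = queue.popleft()
        if goal <= facts:
            return steps
        for name, pre, add in rules:
            if pre <= facts:
                nxt = frozenset(facts | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None  # no plan reaches the goal

print(plan({"has_gold"}, {"has_house"}))  # ['buy_axe', 'chop_wood', 'build_house']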

Simulation-Based Planning in RTS Games

Frantisek Sailer, Marc Lanctot, and Michael Buro (University of Alberta)
AI Game Programming Wisdom 4, 2008.
Abstract: Sophisticated cognitive processes such as planning, learning, and opponent modeling are still the exception in modern video game AI systems. However, with the advent of multi-core computer architectures and more available memory, using more computationally intensive techniques will become possible. In this paper we present the adversarial real-time planning algorithm RTSplan, which is based on rapid game simulations. Starting with a set of scripted strategies, RTSplan simulates the outcome of playing strategy pairs and uses the obtained result matrix to assign probabilities to the strategies to be followed next. RTSplan constantly replans and is therefore able to adjust to changes promptly. With an opponent-modeling extension, RTSplan is able to soundly defeat individual strategies in our army deployment application. In addition, RTSplan can make use of existing AI scripts to create more challenging AI systems. It is therefore well suited for video games.
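
A highly simplified sketch of the simulation-based selection loop (not RTSplan itself): every pair of scripted strategies is simulated, the outcomes are reduced to an average payoff per strategy, and the next strategy is sampled with probability proportional to that payoff. The strategy names, the payoff table, and the proportional-sampling rule are illustrative assumptions.

import random

STRATEGIES = ["rush", "turtle", "expand"]  # hypothetical scripted strategies

def simulate(own, opp):
    """Fast forward-simulation stub returning own payoff in [0, 1] for the matchup.
    Illustrative numbers; the article's system runs real abstracted game simulations."""
    table = {("rush", "turtle"): 0.3, ("turtle", "rush"): 0.7,
             ("rush", "expand"): 0.8, ("expand", "rush"): 0.2,
             ("turtle", "expand"): 0.4, ("expand", "turtle"): 0.6}
    return table.get((own, opp), 0.5)

def choose_strategy(strategies=STRATEGIES):
    # Build the result matrix by simulating every strategy pair, then average.
    payoff = {s: sum(simulate(s, o) for o in strategies) / len(strategies)
              for s in strategies}
    # Sample the strategy to follow next, proportionally to its simulated payoff.
    total = sum(payoff.values())
    r, acc = random.uniform(0.0, total), 0.0
    for s, p in payoff.items():
        acc += p
        if r <= acc:
            return s
    return strategies[-1]

Because the whole loop is re-run as the game state changes, the agent is effectively replanning continuously, which is the property the abstract emphasizes.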

Particle Filters and Simulacra for More Realistic Opponent Tracking

Christian J. Darken (The MOVES Institute), Bradley G. Anderegg (Alion Science and Technology Corporation)
AI Game Programming Wisdom 4, 2008.
Abstract: Tracking the possible location of an opponent is a potentially important game AI capability for enabling intelligent hiding from or searching for the opponent. This article provides an introduction to particle filters for this purpose. Particle filters postulate a set of specific coordinates where the opponent might be as opposed to estimating probabilities that the opponent is in particular regions of the level, as is done in the occupancy map technique. By their very nature, particle filters have a very different performance profile from occupancy maps, and thus represent an interesting alternative. We also show how adding a small amount of intelligence to the particles, transforming them to simulacra, can improve the quality of tracking.
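
A small sketch of one particle-filter step for a lost opponent (illustrative only, not the authors' code): each particle is a concrete (x, y) hypothesis, a crude motion model jitters it, particles contradicted by what the searcher can currently see are culled, and the survivors are resampled. The function names and the motion model are assumptions.

import random

def update_particles(particles, visible_cells, walkable, num_particles=200):
    """particles: list of (x, y) hypotheses of where the opponent might be.
    visible_cells: cells the searcher can see right now (opponent was not seen there).
    walkable(cell): predicate for legal positions."""
    moved = []
    for x, y in particles:
        # Motion model: the opponent may stay put or wander to an adjacent cell.
        dx, dy = random.choice([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)])
        cand = (x + dx, y + dy)
        moved.append(cand if walkable(cand) else (x, y))
    # Measurement: kill particles sitting in cells we can see but the opponent is not in.
    survivors = [p for p in moved if p not in visible_cells]
    if not survivors:            # every hypothesis contradicted: keep the prior set
        survivors = moved
    # Resample back up to the particle budget.
    return [random.choice(survivors) for _ in range(num_particles)]

The "simulacra" refinement in the article amounts to replacing the random-walk motion model above with particles that run a small amount of real agent logic.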

Using Bayesian Networks to Reason About Uncertainty

Devin Hyde
AI Game Programming Wisdom 4, 2008.
Abstract: This article provides the reader with an understanding of the fundamentals of Bayesian networks. The article will work through several examples, which show how a Bayesian network can be created to model a problem description that could be part of a video game. By the end of the article the reader will have the knowledge necessary to form and solve similar problems on their own. An implementation of our solution to the examples, which shows how beliefs are updated based on different observations, is provided on the accompanying CD-ROM.
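
The smallest possible worked example of the belief update involved (a two-node network, which reduces to Bayes' rule; the events and probabilities are invented, not taken from the article's examples):

# Tiny two-node network: Ambush -> GunfireHeard.
P_AMBUSH = 0.2
P_GUNFIRE_GIVEN_AMBUSH = 0.9
P_GUNFIRE_GIVEN_NO_AMBUSH = 0.1

def belief_in_ambush(gunfire_heard: bool) -> float:
    """Posterior P(Ambush | evidence) via Bayes' rule."""
    if gunfire_heard:
        num = P_GUNFIRE_GIVEN_AMBUSH * P_AMBUSH
        den = num + P_GUNFIRE_GIVEN_NO_AMBUSH * (1.0 - P_AMBUSH)
    else:
        num = (1.0 - P_GUNFIRE_GIVEN_AMBUSH) * P_AMBUSH
        den = num + (1.0 - P_GUNFIRE_GIVEN_NO_AMBUSH) * (1.0 - P_AMBUSH)
    return num / den

print(belief_in_ambush(True))   # ~0.69: hearing gunfire raises the belief
print(belief_in_ambush(False))  # ~0.03: silence lowers it

A full Bayesian network generalizes this to many variables with a conditional probability table per node, but the update direction (evidence revises belief) is the same.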

The Engagement Decision

Baylor Wetzel (Brown College)
AI Game Programming Wisdom 4, 2008.
Abstract: Before every battle comes the question: can I win this battle? Should I attack, or should I run? There are a variety of ways to answer this question. This article compares several, from simple power calculations through Monte Carlo simulations, discussing the pros and cons of each and the situations where each is appropriate.
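
A toy comparison of the two ends of that spectrum (illustrative only): a one-line power calculation versus a crude Monte Carlo battle simulation. The unit statistics and the damage model are assumptions.

import random

def power_estimate(army_a, army_b):
    """Cheapest check: compare summed attack * hp 'power' scores."""
    power = lambda army: sum(u["attack"] * u["hp"] for u in army)
    return power(army_a) > power(army_b)

def monte_carlo_wins(army_a, army_b, trials=1000):
    """Fraction of simulated battles A wins; the battle model is deliberately crude."""
    wins = 0
    for _ in range(trials):
        hp_a = sum(u["hp"] for u in army_a)
        hp_b = sum(u["hp"] for u in army_b)
        atk_a = sum(u["attack"] for u in army_a)
        atk_b = sum(u["attack"] for u in army_b)
        while hp_a > 0 and hp_b > 0:
            hp_b -= atk_a * random.uniform(0.5, 1.5)   # both sides strike each round
            hp_a -= atk_b * random.uniform(0.5, 1.5)
        if hp_b <= 0 and hp_a > 0:
            wins += 1
    return wins / trials

knights = [{"attack": 4, "hp": 20}] * 5
archers = [{"attack": 6, "hp": 8}] * 6
print(power_estimate(knights, archers), monte_carlo_wins(knights, archers))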

Automatically Generating Score Functions for Strategy Games

Sander Bakkes and Pieter Spronck (Maastricht University, The Netherlands)
AI Game Programming Wisdom 4, 2008.
Abstract: Modern video games present complex environments in which their AI is expected to behave realistically, or in a "human-like" manner. One feature of human behavior is the ability to assess the desirability of the current strategic situation. This type of assessment can be modeled in game AI using a "score function." Due to the complex nature of modern strategy games, the determination of a good score function can be difficult. This difficulty arises in particular from the fact that score functions usually operate in an imperfect information environment. In this article, we show that machine learning techniques can produce a score function that gives good results despite this lack of information.

Automatic Generation of Strategies

Pieter Spronck and Marc Ponsen (Maastricht University, The Netherlands)
AI Game Programming Wisdom 4, 2008.
Abstract: Machine learning techniques can support AI developers in designing, tuning, and debugging tactics and strategies. In this article, we discuss how a genetic algorithm can be used to automatically discover strong strategies. We concentrate on the representation of a strategy in the form of a chromosome, the design of genetic operators to manipulate such chromosomes, the design of a fitness function, and the evolutionary process itself. The techniques and their results are demonstrated in the game of Wargus.
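
A compact sketch of the evolutionary loop described (not the authors' Wargus setup): a chromosome is a fixed-length build-order list, with one-point crossover, per-gene mutation, and a placeholder fitness function standing in for actually playing the strategy. All gene names and parameters are invented.

import random

BUILD_ACTIONS = ["barracks", "farm", "soldier", "worker", "tower"]  # illustrative genes

def random_chromosome(length=8):
    return [random.choice(BUILD_ACTIONS) for _ in range(length)]

def crossover(a, b):
    cut = random.randrange(1, len(a))          # one-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    return [random.choice(BUILD_ACTIONS) if random.random() < rate else g for g in chrom]

def fitness(chrom):
    """Stand-in for playing the strategy against an opponent and scoring the result."""
    return chrom.count("soldier") + 0.5 * chrom.count("tower")  # placeholder metric

def evolve(pop_size=20, generations=30):
    population = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]               # elitist selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)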

SquadSmart - Hierarchical Planning and Coordinated Plan Execution for Squads of Characters

Peter Gorniak, Ian Davis (Mad Doc Software)
PDF link, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2007.
Abstract: This paper presents an application of Hierarchical Task Network (HTN) planning to a squad-based military simulation. The hierarchical planner produces collaborative plans for the whole squad in real time, generating the type of highly coordinated behaviours typical for armed combat situations involving trained professionals. Here, we detail the extensions to HTN planning necessary to provide real-time planning and subsequent collaborative plan execution. To make full hierarchical planning feasible in a game context we employ a planner compilation technique that saves memory allocations and speeds up symbol access. Additionally, our planner can be paused and resumed, making it possible to impose a hard limit on its computation time during any single frame. For collaborative plan execution we describe several synchronization extensions to the HTN framework, allowing agents to participate in several plans at once and to act in parallel or in sequence during single plans. Overall, we demonstrate that HTN planning can be used as an expressive and powerful real-time planning framework for tightly coupled groups of in-game characters.

Probabilistic Target Tracking and Search Using Occupancy Maps

Damián Isla (Bungie Studios)
AI Game Programming Wisdom 3, 2006.
Abstract: This article will introduce Occupancy Maps, a technique for probabilistically tracking object positions. Occupancy Maps, an application of a broader Expectation Theory, can result in more interesting and realistic searching behaviors, and can also be used to generate emotional reactions to search events, like surprise (at finding a target in an unexpected place) and confusion (at failing to find a target in an expected place). It is also argued that the use of more in-depth knowledge-modeling techniques such as Occupancy Maps can relieve some of the complexity of a traditional FSM or HFSM approach to search behavior.
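
A minimal sketch of an occupancy-map update in this spirit (not the article's implementation): probability mass is removed from cells the searcher can see, renormalized, and then diffused to neighboring cells to model target movement. The cell representation and the diffusion constant are assumptions.

def update_occupancy(prob, visible, walkable_neighbors, diffusion=0.2):
    """prob: dict cell -> probability that the target is there (sums to 1).
    visible: cells the searcher can currently see (target was not seen there).
    walkable_neighbors(cell): adjacent cells the target could move into."""
    # 1. Observation: the target is not in any cell we can see.
    prob = {c: (0.0 if c in visible else p) for c, p in prob.items()}
    total = sum(prob.values())
    if total > 0:
        prob = {c: p / total for c, p in prob.items()}
    # 2. Prediction: the target may have moved, so leak probability to neighbors.
    spread = dict(prob)
    for c, p in prob.items():
        neighbors = [n for n in walkable_neighbors(c) if n in prob]
        if not neighbors or p == 0.0:
            continue
        share = diffusion * p / len(neighbors)
        spread[c] -= diffusion * p
        for n in neighbors:
            spread[n] += share
    return spread

The searcher then heads for the highest-probability cell, and the emotional reactions the article mentions fall out naturally: "surprise" when the target appears in a near-zero cell, "confusion" when the best cell turns out to be empty.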

Dynamic Tactical Position Evaluation

Remco Straatman and Arjen Beij (Guerrilla Games), William van der Sterren (CGF-AI)
AI Game Programming Wisdom 3, 2006.
Abstract: Dynamic tactical position evaluation is essential in making tactical shooters less linear and more responsive to the player and to changes in the game world. Designer-placed hints for positioning and detailed scripting are impractical for games with unpredictable situations due to player freedom and dynamic environments. This article describes the techniques used to address these issues for Guerrilla's console titles Killzone and Shellshock: Nam '67. The basic position evaluation mechanism is explained, along with its application to selecting tactical positions and finding tactical paths. Some alternative uses of the technique are given, such as generating intelligent scanning positions and suppressive fire, and the practical issues of configuration and performance are discussed.

Finding Cover in Dynamic Environments

Christian J. Darken (The MOVES Institute), Gregory H. Paull (Secret Level Inc.)
AI Game Programming Wisdom 3, 2006.
Abstract: In this article, we describe our approach to improved cover finding with an emphasis on adaptability to dynamic environments. The technique described here combines level annotation with the sensor grid algorithm. The strength of level annotation is its modest computational requirements. The strength of the sensor grid algorithm is its ability to handle dynamic environments and to find smaller cover opportunities in static environments. Each approach is useful by itself, but combining the two can provide much of the benefit of both. In a nutshell, our approach relies on cover information stored in the candidate cover positions placed throughout the level whenever possible and performs a focused run-time search in the immediate vicinity of the agent if the level annotation information is insufficient. This allows it to be fast and yet able to react to changes in the environment that occur during play.

Coordinating Teams of Bots with Hierarchical Task Network Planning

Hector Munoz-Avila and Hai Hoang (Lehigh University)
AI Game Programming Wisdom 3, 2006.
Abstract: This article presents the use of Hierarchical-Task-Network (HTN) representations to model strategic game AI. We demonstrate the use of hierarchical planning techniques to coordinate a team of bots in an FPS game.

Prioritizing Actions in a Goal-Based RTS AI

Kevin Dill (Blue Fang Games)
AI Game Programming Wisdom 3, 2006.
Abstract: In this article we outline the architecture of our strategic AI and discuss a variety of techniques that we used to generate priorities for its goals. This engine provided the opposing player AI of our real-time strategy games Kohan 2: Kings of War and Axis & Allies. The architecture is easily extensible, flexible enough to be used in a variety of different types of games, and sufficiently powerful to provide a good challenge for an average player on a random, unexplored map without unfair advantages.

Simulating a Plan

Petar Kotevski (Genuine Games)
AI Game Programming Wisdom 3, 2006.
Abstract: The article describes a methodology of supplementing traditional FSMs with contextual information about the internal state of the agent and the environment that the agent is in, by defining game events and deriving rules for responses to a given game event. This creates a completely non-scripted experience that varies with every different player, because in essence the system responds to game events generated by the player himself. By defining simple rules for enemy behavior and environments in which those rules can be clearly seen, it is possible to simulate group behavior where no underlying code for it is present. The system described is completely deterministic, thus easy to maintain, QA, and debug. It is also not computationally expensive, so rather large populations of AI agents can be simulated using the proposed system.
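
A tiny sketch of the event/rule layer described, consulted on top of whatever FSM state the agent is in (event names, conditions, and responses are invented for illustration):

RESPONSE_RULES = [
    # (event,            condition on agent,                    response)
    ("ally_killed",      lambda a: a["morale"] < 30,            "flee_to_cover"),
    ("ally_killed",      lambda a: a["morale"] >= 30,           "suppressive_fire"),
    ("player_reloading", lambda a: a["distance_to_player"] < 8, "charge"),
    ("grenade_spotted",  lambda a: True,                        "dive_away"),
]

def respond(event, agent):
    """Return the first matching response for this event, else None (stay in FSM state)."""
    for ev, cond, response in RESPONSE_RULES:
        if ev == event and cond(agent):
            return response
    return None

print(respond("ally_killed", {"morale": 20, "distance_to_player": 15}))  # flee_to_cover

Because the table is fixed and the conditions are deterministic, the resulting behavior is reproducible and cheap to test, which is the maintainability point the abstract makes.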

Using the Quantified Judgment Model for Engagement Analysis

Michael Ramsey
Game Programming Gems 6, 2006.

AI Wall Building in Empire Earth II

Tara Teich, Ian Lane Davis (Mad Doc Software)
PDF link, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2006.
Abstract: Real-Time Strategy games are among the most popular genres of commercial PC games, and also have widely applicable analogs in the field of Serious Games such as military simulations, city planning, and other forms of simulation involving multi-agent coordination and an underlying economy. One of the core tasks in playing a traditional Real-Time Strategy game is building a base in an effective manner and defending it well. Creating an AI that can construct a successful wall was one of the more challenging areas of development on Empire Earth II, as building a wall requires analysis of the terrain and techniques from computational geometry. An effective wall can hold off enemy troops and keep battles away from the delicate economy inside the base.

Automatic Cover Finding with Navigation Meshes

Borut Pfeifer (Radical Entertainment)
Game Programming Gems 5, 2005.

Using Lanchester Attrition Models to Predict the Results of Combat

John Bolton (Page 44 Studios)
Game Programming Gems 5, 2005.

A Goal-Based Architecture for Opposing Player AI

Kevin Dill (Blue Fang Games), Denis Papp (TimeGate Studios)
PDF link, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2005.
Abstract: This paper describes a goal-based architecture which provides a single source for all high level decisions made by AI players in real-time strategy games. The architecture is easily extensible, flexible enough to be rapidly adapted to multiple different games, and powerful enough to provide a good challenge on a random, unexplored map without unfair advantages or visible cheating. This framework was applied successfully in the development of two games at TimeGate Studios: Kohan 2: Kings of War and Axis & Allies.

Agent Architecture Considerations for Real-Time Planning in Games

Jeff Orkin (Monolith)
PDF link, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2005.
Abstract: Planning in real-time offers several benefits over the more typical techniques of implementing Non-Player Character (NPC) behavior with scripts or finite state machines. NPCs that plan their actions dynamically are better equipped to handle unexpected situations. The modular nature of the goals and actions that make up the plan facilitates re-use, sharing, and maintenance of behavioral building blocks. These benefits, however, come at the cost of CPU cycles. In order to simultaneously plan for several NPCs in real-time, while continuing to share the processor with the physics, animation, and rendering systems, careful consideration must be given to the supporting architecture. The architecture must support distributed processing and caching of costly calculations. These considerations have impacts that stretch beyond the architecture of the planner, and affect the agent architecture as a whole. This paper describes lessons learned while implementing real-time planning for NPCs for F.E.A.R., an AAA first-person shooter shipping for PC in 2005.

Implementing Practical Planning for Game AI

Jamie Cheng (Relic Entertainment), Finnegan Southey (University of Alberta, Computer Science)
Game Programming Gems 5, 2005.

Ten Fingers of Death: Algorithms for Combat Killing

Roger Smith, Don Stoner (Titan Corporation)
Game Programming Gems 4, 2004.

Narrative Combat: Using AI to Enhance Tension in an Action Game

Borut Pfeifer (Radical Entertainment)
Game Programming Gems 4, 2004.

Advanced Wall Building for RTS Games

Mario Grimani (Sony Online Entertainment)
Game Programming Gems 4, 2004.

Jumping, Climbing, and Tactical Reasoning: How to Get More Out of a Navigation System

Christopher Reed, Benjamin Geisler (Raven Software / Activision)
AI Game Programming Wisdom 2, 2003.
Abstract: Few AI-related systems are more common and pervasive in games than character navigation. As 3D game engines become more and more complex, characters will look best if they too adapt with equally complex behavior. From opening a door, to hopping over an errant boulder and crouching behind it, keeping AI tied to the environment of your game is often one of the most difficult and important challenges.

Typically these complex behaviors are handled by scripts or a hand coded decision maker. However, we will show that the points and edges within a navigation system are a natural place to store environment specific information. It is possible to automatically detect many properties about the area around a point or edge. This approach allows an AI character to make use of embedded environment information for tactical reasoning as well as low level animation and steering.

Constraining Autonomous Character Behavior with Human Concepts

Jeff Orkin (Monolith Productions)
AI Game Programming Wisdom 2, 2003.
Abstract: A current trend in Game AI is the move from scripted to autonomous character behavior. Autonomous behavior offers several benefits. Autonomous characters can handle unexpected events that a script might not have anticipated, producing emergent gameplay. Level designers can focus on creating worlds packed with opportunities for characters to showcase their behaviors, rather than getting bogged down scripting the actions of individual characters. Various articles have described how to design goal-based autonomous behavior, where characters select the most relevant behavior based on their desires, sensory input, and proximity to objects of interest. In theory it sounds simple enough to drop a character with a palette of goals into a level filled with tagged objects, and let him take care of himself. In practice, there are many additional factors that need to be considered to get believable behavior from an autonomous character. This article presents a number of factors that should be considered as inputs into the relevancy calculation of a character's goals, in order to produce the most believable decisions. These factors are based on findings encountered during the development of Monolith Productions' No One Lives Forever 2: A Spy in H.A.R.M.'s Way.

Applying Goal-Oriented Action Planning to Games

Jeff Orkin (Monolith Productions)
AI Game Programming Wisdom 2, 2003.
Abstract: A number of games have implemented characters with goal directed decision-making capabilities. A goal-directed character displays some measure of intelligence by autonomously deciding to activate the behavior that will satisfy the most relevant goal at any instant. Goal-Oriented Action Planning (GOAP) is a decision-making architecture that takes the next step, and allows characters to decide not only what to do, but how to do it. A character that formulates his own plan to satisfy his goals exhibits less repetitive, predictable behavior, and can adapt his actions to custom fit his current situation. In addition, the structured nature of a GOAP architecture facilitates authoring, maintaining, and re-using behaviors. This article explores how games can benefit from the addition of a real-time planning system, using problems encountered during the development of Monolith Productions' No One Lives Forever 2: A Spy in H.A.R.M.'s Way to illustrate these points.

Hierarchical Planning in Dynamic Worlds

Neil Wallace (Black & White Studios / Lionhead Studios)
AI Game Programming Wisdom 2, 2003.

Using a Spatial Database for Runtime Spatial Analysis

Paul Tozour (Retro Studios)
AI Game Programming Wisdom 2, 2003.
Abstract: AI developers have employed a number of different techniques for performing spatial reasoning about a game world using precomputed "hints" placed by level designers or automated game-world analysis tools. However, as game worlds increasingly feature larger numbers of AI characters and moveable physically-modeled objects, it becomes increasingly important to model the ways that the dynamic aspects of the ever-changing game world influence an AI's spatial reasoning. We discuss a spatial database technique that allows you to perform spatial reasoning about any number of different factors that can potentially affect an AI agent's reasoning about the game environment and techniques for combining multiple factors together to construct desirability heuristics. A spatial database can also allow you to implicitly coordinate the activities of multiple AI agents simply by virtue of sharing the same data structure.

Performing Qualitative Terrain Analysis in Master of Orion 3

Kevin Dill, Alex Sramek (Quicksilver Software, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: One challenge for many strategy game AIs is the need to perform qualitative terrain analysis. By qualitative we mean that the analysis is based on fundamental differences between different types of locations - for instance, areas that are visible to our opponents, areas that are impassable, or areas vulnerable to enemy fire. In Master of Orion 3 we identify stars that are inside or outside of our empire's borders, those that are threatened by our opponents, and those that are contested (shared with an opponent). This information is used to identify locations where we need to concentrate our defenses and to help us expand into areas that minimize our defensive needs while maximizing the territory we control.

In this article we will present the algorithms used to make the qualitative distinctions given above and the ways in which the AI uses that information. The lessons we would most like the reader to take away from this article are not the specifics of the algorithms used but rather the thought processes involved in applying qualitative reasoning to terrain analysis. The important questions to address are: what are the qualitative distinctions we should look for, how can we recognize them, and what uses can the AI make of that information. Our algorithms are but a single example of how these questions can be answered.

The Unique Challenges of Turn-Based AI

Soren Johnson (Firaxis Games)
AI Game Programming Wisdom 2, 2003.
Abstract: Writing a turn-based AI presents a number of unique programming and game design challenges. The common thread uniting these challenges is the user's complete control over the game's speed. Players willing to invest extreme amounts of time into micro-management and players looking to streamline their gaming experience via automated decision-making present two very different problems for the AI to handle. Further, the ability to micro-analyze turn-based games makes predictability, cheating, and competitive balance extremely important issues. This article outlines how the Civilization III development team dealt with these challenges, using specific examples to illuminate some practical solutions useful to a programmer tasked with creating an AI for a turn-based game.

Wall Building for RTS Games

Mario Grimani (Sony Online Entertainment)
AI Game Programming Wisdom 2, 2003.
Abstract: Most real-time strategy games include walls or similar defensive structures that act as barriers for unit movement. Having a general-purpose wall-building algorithm increases the competitiveness of computer opponents and provides a new set of options for random mission generation. The article discusses a wall-building algorithm that uses a greedy methodology to build a wall that fits the definition, protects the desired location, and meets customizable acceptance criteria. The algorithm takes advantage of natural barriers and map edges to minimize the cost of building a wall. The algorithm discussion focuses on the importance of traversal and heuristic functions, details of implementation, and various real-world problems. Advanced topics such as minimum/maximum distance requirements, placement of gates, and unusual wall configurations are elaborated on. Full source code and a demo are supplied.

Strategic Decision-Making with Neural Networks and Influence Maps

Penny Sweetser (School of ITEE, University of Queensland)
AI Game Programming Wisdom 2, 2003.
Abstract: Influence maps provide a strategic perspective in games that allows strategic assessment and decisions to be made based on the current game state. Influence maps consist of several layers, each representing different variables in the game, layered over a geographical representation of the game map. When a decision needs to be made by the AI player, some or all of these layers are combined via a weighted sum to provide an overall idea of the suitability of each area on the map for the current decision. However, the use of a weighted sum has certain limitations.

This article explains how a neural network can be used in place of a weighted sum, to analyze the data from the influence map and make a strategic decision. First, this article will summarize influence maps, describe the current application of a weighted sum and outline the associated advantages and disadvantages. Following this, it will explain how a neural network can be used in place of a weighted sum and the benefits and drawbacks associated with this alternative. Additionally, it will go into detail about how a neural network can be implemented for this application, illustrated with diagrams.
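
For concreteness, a sketch of the weighted-sum baseline the article starts from (not its code; the layer names and weights are invented):

# Layers of an influence map at one cell, combined by a weighted sum to score
# that cell for a particular decision (here, "where to attack").
ATTACK_WEIGHTS = {"enemy_strength": -1.0, "own_strength": 0.6,
                  "resources": 0.8, "distance_to_base": -0.3}

def score_cell(layers, weights=ATTACK_WEIGHTS):
    """layers: dict layer_name -> value of that layer at this cell."""
    return sum(weights[name] * layers.get(name, 0.0) for name in weights)

def best_cell(influence_map, weights=ATTACK_WEIGHTS):
    """influence_map: dict cell -> {layer_name: value}."""
    return max(influence_map, key=lambda c: score_cell(influence_map[c], weights))

The article's proposal is essentially to replace score_cell with a trained neural network that takes the same layer values as inputs and learns a nonlinear combination, instead of relying on hand-tuned fixed weights.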

Multi-Tiered AI Layers and Terrain Analysis for RTS Games

Tom Kent (Freedom Games, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: RTS games tend to handle soldier AIs individually, giving each unit specific tasks from the computer player. Creating complicated, cooperative tactics is impossible for such systems without an immense effort in coding. To develop complex, large-scale plans, a mechanism is needed to reduce the planning devoted to individual units. Some games already collect individual soldiers into squads. This reduces the planning necessary by a factor of ten, as one hundred soldiers can be collected into ten squads. However, this concept can be taken further, with squads collected into platoons, platoons into companies, and so on. The versatility such groupings give an AI system is immense. This article will explore the implementation of a multi-tiered AI system in RTS-type games, including the various AI tiers, a set of related maps used by the AI tiers, and an example to illustrate the system.

Designing a Multi-Tiered AI Framework

Michael Ramsey (2015, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: The MTAIF allows an AI to be broken up into three concrete layers: strategic, operational, and tactical. This allows an AI programmer to have various AIs focus on specific tasks, while at the same time keeping a consistent overall focus. The MTAIF allows the strategic layer to be focused exclusively on matters that can affect an empire on a holistic scale, while at the operational level the AI is in tune with reports from the tactical level. A differing factor from many other architectures is that the MTAIF does not allow decisions to be made on a tactical scale that would violate the overall strategic policies. This in turn forces high-level strategic policies to be enforced in tactical situations, without the AI devolving into a purely reactionary AI.

Strategic and Tactical Reasoning with Waypoints

Lars Lidén (Valve Software)
AI Game Programming Wisdom, 2002.
Abstract: Non-player characters (NPCs) commonly use waypoints for navigation through their virtual world. This article will demonstrate how preprocessing the relationships between these waypoints can be used to dynamically generate combat tactics for NPCs in a first-person shooter or action adventure game. By precalculating and storing tactical information about the relationship between waypoints in a bit string class, NPCs can quickly find valuable tactical positions and exploit their environment. Issues discussed include fast map analysis, safe pathfinding, using visibility, intelligent attack positioning, flanking, static waypoint analysis, pinch points, squad tactics, limitations, and advanced issues.
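
A small sketch of the precomputation idea (not Valve's code): per-waypoint visibility stored as bit strings lets tactical queries such as "can see the target but is hidden from the threat" reduce to a few bitwise operations. Here Python integers stand in for the bit string class, and line_of_sight is an assumed collision query provided by the engine.

def build_visibility_bits(waypoints, line_of_sight):
    """Precompute, for each waypoint, a bitset of the waypoints it can see."""
    bits = [0] * len(waypoints)
    for i, a in enumerate(waypoints):
        for j, b in enumerate(waypoints):
            if i != j and line_of_sight(a, b):
                bits[i] |= 1 << j
    return bits

def flanking_candidates(bits, target_idx, danger_idx):
    """Waypoints that can see the target but are hidden from the danger waypoint."""
    visible_from_danger = bits[danger_idx]
    return [i for i in range(len(bits))
            if (bits[i] >> target_idx) & 1 and not ((visible_from_danger >> i) & 1)]

Because the bit strings are computed offline, run-time queries like the one above cost only a handful of shifts and masks per waypoint, which is what makes the technique viable for in-combat decisions.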

Recognizing Strategic Dispositions: Engaging the Enemy

Steven Woodcock (Wyrd Wyrks)
AI Game Programming Wisdom, 2002.

Tactical Team AI Using a Command Hierarchy

John Reynolds (Creative Asylum)
AI Game Programming Wisdom, 2002.
Abstract: Team-based AI is becoming an increasingly trendy selling point for first- and third-person action games. Often, this is limited to scripted sequences or simple "I need backup" requests. However, by using a hierarchy of decision-making, it is possible to create some very convincing teams that make decisions in real time.

Terrain Analysis in an RTS - The Hidden Giant

Daniel Higgins (Stainless Steel Software)
Game Programming Gems 3, 2002.

Tactical Path-Finding with A*

William van der Sterren (CGF-AI)
Game Programming Gems 3, 2002.

Influence Mapping

Paul Tozour (Ion Storm Austin)
Game Programming Gems 2, 2001.
Abstract: Influence mapping is a powerful and proven AI technique for reasoning about the world on a spatial level. Although influence maps are most often used in strategy games, they have many uses in other genres as well. Among other things, an influence map allows your AI to assess the major areas of control by different factions, precisely identify the boundary of control between opposing forces, identify "choke points" in the terrain, determine which areas require further exploration, and inform the base-construction AI systems to allow you to place buildings in the most appropriate locations.

Strategic Assessment Techniques

Paul Tozour (Ion Storm Austin)
Game Programming Gems 2, 2001.
Abstract: This article discusses two useful techniques for strategic decision-making. These are easiest to understand in the context of strategy game AI, but they have applications to other game genres as well. The resource allocation tree describes a data structure that allows an AI system to continuously compare its desired resource allocation to its actual current resources in order to determine what to build or purchase next. The dependency graph is a data structure that represents a game's "tech tree," and we discuss a number of ways that an AI can perform inference on the dependency graph in order to construct long-term strategic plans and perform human-like reasoning about what its opponents are attempting to accomplish.
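
A toy illustration of dependency-graph inference on a "tech tree" (the entries are invented, not taken from the article): walking the graph backwards from a goal yields the missing prerequisites in a buildable order, which is the kind of long-term plan the abstract describes.

# Each item lists what it requires.
TECH_TREE = {
    "town_hall":  [],
    "barracks":   ["town_hall"],
    "blacksmith": ["town_hall"],
    "soldier":    ["barracks"],
    "stables":    ["barracks"],
    "knight":     ["stables", "blacksmith"],
}

def build_order(goal, owned, tree=TECH_TREE):
    """Return the items still missing for 'goal', in an order that respects dependencies."""
    order, seen = [], set(owned)
    def visit(item):
        if item in seen:
            return
        for req in tree[item]:
            visit(req)
        seen.add(item)
        order.append(item)
    visit(goal)
    return order

print(build_order("knight", owned={"town_hall"}))
# ['barracks', 'stables', 'blacksmith', 'knight']

Run in the other direction (from an observed enemy unit back to the prerequisites it implies), the same graph supports the opponent-inference use the article mentions.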

Terrain Reasoning for 3D Action Games

William van der Sterren (CGF-AI)
Game Programming Gems 2, 2001.
