Artificial Intelligence: All Articles


Structure vs. Style (Photoshop of AI)

Chris Hecker (EA Maxis)
GDC 2008 (free PowerPoint & Audio recording).
Abstract: My 2008 Game Developers Conference lecture was titled Structure vs. Style, wherein I analyzed how we solve what I call "hard interactive problems". This lecture became mildly famous in game Artificial Intelligence circles because in it I posit we will eventually have a Photoshop of AI, whatever that means! The lecture talks about the characteristics I think a tool like this will need to possess to be worthy of the name, but it's very hard to know what it really means, or how to get there.

Situationist Game AI

Adam Russell
AI Game Programming Wisdom 4, 2008.
Abstract: This article examines the tension in game content production between the systematic reduction of specific cases to general rules on the one hand, and the deliberate construction of unique player experiences on the other. We shall see how market and design trends are pushing games towards hybrid styles that combine these two approaches, before accusing most work in game AI of remaining too closely tied to the reduction to general rules in its commitment to strongly autonomous game agents. A quick review of related themes in sociology and psychology sets us up for the last part of the article, exploring the notion of what we call a 'situationist' game AI, capable of meeting this hybrid challenge.

Artificial Personality: A Personal Approach to AI

Benjamin Ellinger (Microsoft)
AI Game Programming Wisdom 4, 2008.
Abstract: Artificial personality is a powerful conceptual framework for creating compelling artificial intelligence in most types of games. It gives direction and focus to the underlying algorithms that make up all AI, encouraging a style of play that revolves around understanding and exploiting personality archetypes such as the coward, the defender, the psycho, etc. This technique was used successfully in Bicycle® Texas Hold'em from Carbonated Games in 2006, published by MSN Games.

Creating Designer Tunable AI

Borut Pfeifer (Electronic Arts)
AI Game Programming Wisdom 4, 2008.
Abstract: This article describes tips and techniques for working with designers to create better AI systems and improve their utilization in-game. It covers the advantages and disadvantages of various methods for allowing designers control over AI systems, guidelines for how much to expose in scripting systems, how to organize tunable data for data driven systems, and pitfalls to avoid in implementing such systems. It also discusses how to consider designer workflow in system design and communication tips to make sure designers understand how to use these systems.

AI as a Gameplay Analysis Tool

Neil Kirby (Bell Laboratories)
AI Game Programming Wisdom 4, 2008.
Abstract: AI is an effective tool for analyzing gameplay. This article uses case studies of two popular casual games, Minesweeper and Sudoku, to show how small amounts of AI can illuminate what core gameplay actually is. This can be most easily applied to casual games. Writing such AI leads to new gameplay concepts. A potential two-player Minesweeper from that case study is shown. Demonstration software for both games is included.

Ecological Balance in AI Design

Adam Russell
AI Game Programming Wisdom 4, 2008.
Abstract: This article considers the ways in which entrenched methods of game design can lead to unproductive tensions with advances in game AI technology. This issue encompasses not only methods of thinking about game design, but also styles of design documentation, and working relationships between designers and AI coders when iterating on game features. The result is not only a failure to produce useful increases in gameplay complexity. In some cases the result is actually a reduction in complexity, due to the inability of outdated design approaches to effectively control a more complex AI architecture.

Company of Heroes Squad Formations Explained

Chris Jurney (Kaos Studios)
AI Game Programming Wisdom 4, 2008.
Abstract: This article describes all the techniques used to produce the squad formation movement in Company of Heroes. The squads controlled with this system have very tactical and visually interesting motion that handles obstacles and destructible environments with minimal impact on performance. A variety of techniques are described that, when used together, produce high quality squad motion.

Turning Spaces into Places

Adam Russell
AI Game Programming Wisdom 4, 2008.
Abstract: This article explores the complex relationship between the forms of spatial representation employed by game agents and the forms of behavior that are easily supported by them. We shall see how most game agents typically reduce space to little more than a list of individual entities with objective spatial features existing in a task-neutral navigational representation of the global environment, and we will see how this is likely to severely limit their behavioral sophistication. This observation leads us into an extended discussion of the much richer notions of place found in the philosophical literature, before returning to practical themes with a review of place-based models in game AI. We will discuss affordance theory, smart object models, terrain analysis, influence mapping and informed environments, relating these specific approaches back to the general philosophical notions of place identified in the middle section of the article.

Dynamically Updating a Navigation Mesh via Efficient Polygon Subdivision

Paul Marden (DigiPen Institute of Technology), Forrest Smith (Gas Powered Games)
AI Game Programming Wisdom 4, 2008.
Abstract: Many 3D games rely on some sort of navigation mesh for pathfinding. However, most methods of NavMesh generation are not suitable for run-time updates for reflecting a dynamic environment. This article proposes a novel use of line-clipping to create a system that can be dynamically updated in real time.

Intrinsic Detail in Navigation Mesh Generation

Colt "MainRoach" McAnlis (Ensemble Studios), James Stewart (Stormfront Studios)
AI Game Programming Wisdom 4, 2008.
Abstract: This article describes a method to generate high-fidelity paths over very large terrains. The key to this approach is the Restricted Quadtree Triangulation (RQT), which provides the minimal representation of a height map given a world-space error metric. RQT is well suited to navigation meshes because it represents low-frequency terrain (flat plains, for example) with the fewest needed vertices while preserving detail in high-frequency regions. At runtime, path generation determines a preliminary course over a high-tolerance representation of the entire terrain, then refines the initial path in subsequent frames by paging low-tolerance navigation meshes for terrain chunks in the order they occur along the path. Crucial details (a narrow valley through an otherwise impassable mountain range, for example) are omitted by naive simplifications but preserved by RQT. This approach is less complex than various schemes to stitch streaming data, avoids backtracking during path refinement, and works transparently alongside other navigation mesh simplifications described in previous volumes of this series.

Navigation Mesh Generation: An Empirical Approach

David Hamm (Red Storm Entertainment)
AI Game Programming Wisdom 4, 2008.
Abstract: Automatic generation of navigation meshes can increase the speed and quality of level content creation. This article presents a new empirical approach to mesh generation that relies on directly sampling the world geometry for navigability data. The algorithm is well suited to a wide range of detailed environments and results in a relatively uniform triangle mesh ideal for pathfinding use. Implementation approaches, optimizations, and extensions are also discussed.

Navigation Graph Generation in Highly Dynamic Worlds

Ramon Axelrod (AIseek)
AI Game Programming Wisdom 4, 2008.
Abstract: Game designers are introducing more and more dynamic changes into worlds that have previously been largely static. This makes for a compelling playing experience, but can be a major headache for AI programmers. If the physical world can change at any moment, how can the AI of the NPCs keep up? This paper addresses this challenge with an innovative technique for updating the key AI data structure (the navigation graph) in real time. The technique starts with the game's raw geometry ("polygon soup") and processes it on the CPU (or even the GPU!), generating or updating the navigation graph automatically.

Fast Pathfinding Based on Triangulation Abstractions

Doug Demyen (BioWare Corp.), Michael Buro (University of Alberta)
AI Game Programming Wisdom 4, 2008.
Abstract: Pathfinding for games is a multidimensional problem. The industry is making increasing demands for solutions that are fast, use minimal pre-computation and memory, work for large and complex environments and objects of multiple sizes, etc. This article presents two search algorithms - TA* (Triangulation A*) and TRA* (Triangulation Reduction A*) - which successfully address these requirements. TA* finds paths on a Constrained Delaunay Triangulation environment representation, while TRA* works on a reduced graph calculated from this triangulation.

Automatic Path Node Generation for Arbitrary 3D Environments

John W. Ratcliff (Simutronics Corporation)
AI Game Programming Wisdom 4, 2008.
Abstract: This article presents an automated way to create a compact and efficient navigable space mesh for an arbitrary static 3d game environment. It has been used in several commercial products and proven to provide an excellent knowledge base, allowing AI bots to navigate extremely complex 3d worlds even over vast distances.

Risk-Adverse Pathfinding Using Influence Maps

Ferns Paanakker (Wishbone Games B.V.)
AI Game Programming Wisdom 4, 2008.
Abstract: This article describes a pathfinding algorithm that uses influence maps (IMs) to mark hostile and friendly regions. The algorithm finds the optimal path from point A to point B very quickly while taking the different threat and safety regions in the environment into consideration, letting units balance risk while traversing their path and adding depth to gameplay.
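As a rough illustration of the idea (not code from the article), an influence map can be folded into A* simply by adding a risk term to each edge cost. The grid layout, field values, and risk weight below are illustrative assumptions in C++:

#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical grid-based influence map: positive values mark hostile
// influence, negative values mark friendly influence.
struct InfluenceMap {
    int width, height;
    std::vector<float> influence;             // one value per cell
    float at(int x, int y) const { return influence[y * width + x]; }
};

// Edge cost used by A*: base traversal distance plus a risk penalty.
// riskWeight tunes how strongly units avoid hostile regions; a value of 0
// reduces this to an ordinary shortest-path search.
float edgeCost(const InfluenceMap& map, int x0, int y0, int x1, int y1,
               float riskWeight)
{
    float distance = std::sqrt(float((x1 - x0) * (x1 - x0) +
                                     (y1 - y0) * (y1 - y0)));
    float hostility = std::max(0.0f, map.at(x1, y1));  // clamp away friendly bonus
    return distance + riskWeight * hostility;
}

Raising riskWeight makes units trade longer routes for safer ones, which is the balance the article is about.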

Practical Pathfinding in Dynamic Environments

Per-Magnus Olsson (Linköping University)
AI Game Programming Wisdom 4, 2008.
Abstract: The article discusses pathfinding in dynamic environments. It features tried and tested techniques for handling the addition, removal, and, most importantly, modification of objects during the game. It covers how both nodes and edges can be used to store valuable information that speeds up searches, as well as information that is used in the searches themselves. As pathfinding graphs become larger and more detailed, it is useful to catch unnecessary searches before the actual call to the pathfinder is made. The article describes how this can be done, as well as how to verify existing paths.

Postprocessing for High-Quality Turns

Chris Jurney (Kaos Studios)
AI Game Programming Wisdom 4, 2008.
Abstract: This article describes a system to achieve high quality vehicle motion for units that move primarily by sliding along a predefined path. The system refines the paths generated by a standard smoothed A* into routes that obey the limited turning capabilities of units. A palette of possible turns to use for each corner in the original path is defined and a search technique to quickly determine the optimal turn for each corner is described. A way to avoid speed discontinuities when changing paths is also specified.

Memory-Efficient Pathfinding Abstractions

Nathan Sturtevant (University of Alberta)
AI Game Programming Wisdom 4, 2008.
Abstract: Several different types of hierarchical pathfinding abstractions have been proposed in game-development literature, and many more are likely being used in published games. This article describes a pathfinding abstraction that is specifically designed to minimize the memory overhead of the abstraction. In addition to describing the abstraction itself, we also describe in detail how the abstraction can be used for pathfinding, including many small optimizations that are important for practical use. We measure the performance experimentally, showing a 100-fold improvement over the worst-case performance of A*.

A Flexible AI Architecture for Production and Prototyping of Games

Terry Wellmann (High Voltage Software)
AI Game Programming Wisdom 4, 2008.
Abstract: This article describes an architecture that is well suited for production or prototyping of a wide variety of games. The goal of the article is to give the reader a solid understanding of the factors to consider when designing a flexible AI architecture and focuses on the concepts and critical components necessary to successfully design an AI system that is easy to understand, build, maintain and extend. The article covers, in detail, the concepts of the architecture, decision making process, decision weighting, decision chaining, agent coordination and cooperation, as well as handling special cases.

Embracing Declarative AI with a Goal-Based Approach

Kevin Dill (Blue Fang Games)
AI Game Programming Wisdom 4, 2008.
Abstract: The vast majority of computer games developed today use either scripting or FSMs for their high-level AI architecture. While these are both powerful techniques, they are what one might think of as "procedural AI," which leaves the bulk of the decision-making in the hands of the developer. Goal-based AI is an alternative architecture that has been used in a number of successful games across multiple genres. In contrast to the techniques listed above, one might think of it as "declarative AI." Rather than telling the AI what to do, the role of the developer is to tell the AI what factors to consider when selecting an action and how to weigh them. Using this information, the AI will examine the current situation and select its actions accordingly. This paper briefly discusses the most common procedural AI architectures, followed by one popular declarative alternative, goal-based AI. Finally, we will discuss hybrid approaches where we can capture some of the best of both worlds.

The MARPO Methodology: Planning and Orders

Brett Laming (Rockstar Leeds)
AI Game Programming Wisdom 4, 2008.
Abstract: This paper elaborates on a previously alluded-to AI design paradigm, nicknamed MARPO, that continues to produce flexible and manageable AI from first principles. It applies the rationales behind these principles to create a goal-based, hierarchical state machine that embraces the beauty of rule-based reasoning systems. Grounded in industry experience, it avoids the common pitfalls of this approach, and shows how MARPO discipline maximizes the efficiency, flexibility, manageability, and success of the end result.

Getting Started with Decision Making and Control Systems

Alex J. Champandard (AiGameDev.com)
AI Game Programming Wisdom 4, 2008.
Abstract: A robust decision-making and control system is the best place to start with any AI engine. The behavior tree described in this article covers the major elements: implementing low-level tasks with latent execution, building a framework for managing them concurrently, assembling them in hierarchies using standard composites, and designing the system for depth-first search.
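A minimal behavior-tree sketch in C++, in the spirit of what the article describes (latent tasks ticked each frame and composed by a sequence node); the class names are illustrative, not the article's API:

#include <memory>
#include <vector>

enum class Status { Running, Success, Failure };

// A task with latent execution: tick() is called once per frame until it
// reports Success or Failure.
struct Task {
    virtual ~Task() = default;
    virtual Status tick() = 0;
};

// Sequence composite: runs its children in order, failing as soon as one fails.
struct Sequence : Task {
    std::vector<std::unique_ptr<Task>> children;
    std::size_t current = 0;

    Status tick() override {
        while (current < children.size()) {
            Status s = children[current]->tick();
            if (s == Status::Running) return Status::Running;  // resume here next frame
            if (s == Status::Failure) { current = 0; return Status::Failure; }
            ++current;                                          // child succeeded
        }
        current = 0;
        return Status::Success;
    }
};

Other standard composites (selectors, parallels, decorators) follow the same tick() contract, which is what makes the depth-first scheduling the article describes possible.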

Knowledge-Based Behavior System: A Decision Tree/Finite State Machine Hybrid

Nachi Lau (LucasArts)
AI Game Programming Wisdom 4, 2008.
Abstract: In the modern role-playing game (RPG) development environment, designing and implementing a behavior system that meets the diverse needs of designers and programmers can be a challenge. The article will first identify the requirements of a desirable behavior decision system from the points of view of designers and programmers. It will then introduce a knowledge-based approach stemming from the decision tree and finite-state machine methods, which meets the requirements of a desirable decision making system. Through an actual AI example where the three methods are applied, the article will illustrate their strengths and weaknesses, hence demonstrating the value of the knowledge-based approach.

The Emotion Component: Giving Characters Emotions

Ferns Paanakker (Wishbone Games B.V.), Erik van der Pluijm
AI Game Programming Wisdom 4, 2008.
Abstract: In this article we discuss an "Emotion Component" that can be used to model complex emotions, allowing you to implement more human-like behavior in game characters. The Emotion Component is set to function either as a separate unit or in conjunction with other AI processes. With this component the emotions in a game character influence its behavior and color its perception of the world. Game characters using the Emotion Component internally keep track of ("feel") a condition ("emotional state") that influences their behavior and reactions in such a way that the human player is persuaded that the character is experiencing emotions such as fear, anger, admiration, love, and greed.

Generic Perception System

Frank Puig Placeres
AI Game Programming Wisdom 4, 2008.
Abstract: The perception system described in this article presents a way to simplify AI logic and expand the capabilities of NPCs by providing prioritized information about the environment as well as tactical data. The system has been designed to enable time slicing, priority scanning, goal negotiation, short- and long-term memory, and simulation of reflex times, among other features. This system can also be scaled to reduce the performance impact of a high number of agents interacting in the world and to incorporate new complex objects with attached goals and scripts, which make it very suitable for implementing complex character behaviors in current and next-generation games.

Peer-To-Peer Distributed Agent Processing

Borut Pfeifer (Electronic Arts)
AI Game Programming Wisdom 4, 2008.
Abstract: This article covers techniques for distributing agent processing in peer-to-peer (P2P) games. It discusses mechanisms to distribute processing to address both functional concerns (such as for streaming games) and performance concerns (distributing processing load). It also considers efficient communication mechanisms for agents running on separate machines to coordinate behavior and serialization of AI state for transferring ownership between peers.

AI Architectures for Multiprocessor Machines

Jessica D. Bayliss, Ph.D. (Rochester Institute of Technology, Information Technology Department)
AI Game Programming Wisdom 4, 2008.
Abstract: The proliferation of consoles with multiple cores means that games must now be threaded to run on these architectures. This changes the overall architecture for a game, but how can AI best be threaded? Options range from splitting the AI into individuals that run on different processing units to the more traditional planning system that has been functionally decomposed. Hybrid approaches such as a blackboard system are also possible and must be considered within the framework of a whole game system.

Level Up for Finite State Machines: An Interpreter for Statecharts

Philipp Kolhoff (KING Art), Jörn Loviscach (Hochschule Bremen)
AI Game Programming Wisdom 4, 2008.
Abstract: Finite state machines have become the norm for game intelligence, be it to control the behavior of a non-player character or to formalize the game's rules. However, in practice they have a number of shortcomings that lead, for instance, to an explosion in the number of states or transitions. To solve many such issues, one can generalize finite state machines to statecharts, a notion introduced by David Harel in 1987. This chapter describes how statecharts overcome many of the limits of finite state machines, for instance by supporting nested states, parallel states, and continuous activities. The chapter focuses on the practical issues of building a statechart interpreter and integrating it with existing code. Two reference implementations are provided: first, a lean version in C++, ready to be added to your own game code; and second, a full-fledged demonstration in C# including a graphical statechart editor and debugger with automatic layout.

Building a Behavior Editor for Abstract State Machines

Igor Borovikov (FrameFree Technologies), Aleksey Kadukin (Electronic Arts)
AI Game Programming Wisdom 4, 2008.
Abstract: This chapter describes the workflow and data structures used for scripting behaviors in the Abstract State Machine (ASM) framework. ASMs are introduced in the context of a behavior system for game agents. The focus of the paper is on how object-oriented extensions to ASM, Command Port integration of the Behavior Editor with Autodesk Maya, and a dual XML file format contribute to the usability of the behavior editor. The chapter also describes how offline manipulation of ASM definitions enabled the addition of parameters and referencing for behaviors without modifying the run-time code of the AI system.

Multi-Axial Dynamic Threshold Fuzzy Decision Algorithm

Dave Mark (Intrinsic Algorithm LLC)
AI Game Programming Wisdom 4, 2008.
Abstract: The Multi-axial Dynamic Threshold Fuzzy Decision Algorithm (MADTFDA) allows the designer to combine two or more constantly changing values and then compare the result to a defined numerical threshold in order to make a decision. MADTFDA is designed as a more flexible replacement for the "weighted sum" approach to combining factors. The additional flexibility is a valuable tool, allowing the designer to easily visualize the interactions of the decision inputs and enabling the programmer to create quick, robust, parameterized decision calls that accurately reflect the needs of the designer. The article covers the concept behind MADTFDA, its various uses as an AI design tool, and the use of the code that is included on the CD-ROM.
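As a loose sketch only (the book's CD-ROM contains the actual implementation), the core idea of testing normalized axes against a movable threshold might look like the following C++; the axis names and the linear threshold form are chosen purely for illustration:

// Two-axis decision sketch: each input is normalized to [0, 1] and the pair
// is tested against a threshold that can itself be moved at runtime.
struct Axis {
    float value, min, max;
    float normalized() const { return (value - min) / (max - min); }
};

// Decide by comparing a combination of the two axes to a dynamic threshold,
// e.g. lowering the threshold as an agent becomes more desperate.
bool decide(const Axis& threat, const Axis& health, float threshold)
{
    float combined = threat.normalized() + (1.0f - health.normalized());
    return combined > threshold;   // e.g. "flee" once combined risk crosses the line
}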

RTS Terrain Analysis: An Image-Processing Approach

Julio Obelleiro, Raúl Sampedro, and David Hernández Cerpa (Enigma Software Productions)
AI Game Programming Wisdom 4, 2008.
Abstract: In an RTS game, terrain data can be precomputed and used at runtime to help the AI in its decision making. This article introduces a terrain analysis technique based on simple image processing operations which, combined with pathfinding data, produces precise information about relevant areas of the map.

An Advanced Motivation-Driven Planning Architecture

David Hernández Cerpa and Julio Obelleiro (Enigma Software Productions)
AI Game Programming Wisdom 4, 2008.
Abstract: As game AI complexity increases, imperative techniques such as finite state machines become unmanageable, inflexible, and problematic for code maintenance. Planning architectures tackle this complexity by introducing a new decision-making paradigm. This article describes a new hierarchical planning technique based on STRIPS, GOAP, and HTN. It features a motivational approach together with the capability to handle parallel goal planning, which favors the appearance of emergent behaviors. Advanced characteristics include, among others, partial replanning and the mixing of planning and execution, with parameters used at planning time to represent the current world state. The architecture, used in the strategy game War Leaders: Clash of Nations, allows high levels of code reusability and modularity, and is easily adaptable to the game design changes that commonly arise during a full game development cycle.

Command Hierarchies Using Goal-Oriented Action Planning

David Pittman (Stormfront Studios)
AI Game Programming Wisdom 4, 2008.
Abstract: Goal-based AI agent architectures are a popular choice in character-driven games because of the apparent intelligence the agents display in deciding how to pursue their goals. These games often also demand coordinated behavior between the members of a group, which introduces some complexity in resolving the autonomous behavior of the individuals with the goal of the collective. This article introduces a technique for integrating military-style command hierarchies with the Goal-Oriented Action Planning (GOAP) architecture. An UnrealScript-based example of the framework is used to illustrate the concepts in practice for a squad-based first-person shooter (FPS), and practical optimizations are suggested to help the technique scale to the larger numbers of units required for real-time strategy (RTS) games.

Practical Logic-Based Planning

Daniel Wilhelm (California Institute of Technology)
AI Game Programming Wisdom 4, 2008.
Abstract: An efficient, easy-to-implement planner is presented based on the principles of logic programming. The planner relies on familiar IF/THEN structures and constructs plans efficiently, but it is not as expressive as other proposed planners. Many easy extensions to the planner are discussed such as inserting and removing rules dynamically, supporting continuous values, adding negations, and finding the shortest plan. Accompanying source code provides easy-to-follow implementations of the planner and the proposed extensions.

Simulation-Based Planning in RTS Games

Frantisek Sailer, Marc Lanctot, and Michael Buro (University of Alberta)
AI Game Programming Wisdom 4, 2008.
Abstract: Sophisticated cognitive processes such as planning, learning, and opponent modeling are still the exception in modern video game AI systems. However, with the advent of multi-core computer architectures and more available memory, more compute-intensive techniques will become feasible. In this paper we present the adversarial real-time planning algorithm RTSplan, which is based on rapid game simulations. Starting with a set of scripted strategies, RTSplan simulates playing pairs of strategies against each other and uses the resulting outcome matrix to assign probabilities to the strategies to be followed next. RTSplan is constantly replanning and is therefore able to adjust to changes promptly. With an opponent-modeling extension, RTSplan is able to soundly defeat individual strategies in our army deployment application. In addition, RTSplan can make use of existing AI scripts to create more challenging AI systems. It is therefore well suited for video games.

Particle Filters and Simulacra for More Realistic Opponent Tracking

Christian J. Darken (The MOVES Institute), Bradley G. Anderegg (Alion Science and Technology Corporation)
AI Game Programming Wisdom 4, 2008.
Abstract: Tracking the possible location of an opponent is a potentially important game AI capability for enabling intelligent hiding from or searching for the opponent. This article provides an introduction to particle filters for this purpose. Particle filters postulate a set of specific coordinates where the opponent might be as opposed to estimating probabilities that the opponent is in particular regions of the level, as is done in the occupancy map technique. By their very nature, particle filters have a very different performance profile from occupancy maps, and thus represent an interesting alternative. We also show how adding a small amount of intelligence to the particles, transforming them to simulacra, can improve the quality of tracking.
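For readers unfamiliar with the technique, a bare-bones particle filter update for opponent tracking might look like the following C++ sketch; the random-walk motion model and the visibility-culling rule are simplifying assumptions, not the authors' exact scheme, and isVisible() stands in for the game's line-of-sight test:

#include <random>
#include <vector>

struct Particle { float x, y; };   // one hypothesis of the opponent's position

// One update step: jitter each particle to model unknown movement, discard
// particles standing where the AI can currently see (if the opponent were
// there, it would have been spotted), and resample back up to the original count.
void updateParticles(std::vector<Particle>& particles,
                     bool (*isVisible)(float x, float y),
                     float moveStdDev, std::mt19937& rng)
{
    std::normal_distribution<float> jitter(0.0f, moveStdDev);

    // Predict: random-walk motion model.
    for (Particle& p : particles) { p.x += jitter(rng); p.y += jitter(rng); }

    // Weight/cull: remove hypotheses the AI has just disproved by looking.
    std::vector<Particle> survivors;
    for (const Particle& p : particles)
        if (!isVisible(p.x, p.y)) survivors.push_back(p);

    // Resample: duplicate surviving hypotheses to restore the particle count.
    // If nothing survived, keep the old set rather than losing the belief entirely.
    if (!survivors.empty()) {
        std::uniform_int_distribution<std::size_t> pick(0, survivors.size() - 1);
        std::vector<Particle> next;
        for (std::size_t i = 0; i < particles.size(); ++i)
            next.push_back(survivors[pick(rng)]);
        particles = next;
    }
}

The article's simulacra go one step further by giving each particle simple movement intelligence instead of a pure random walk.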

Using Bayesian Networks to Reason About Uncertainty

Devin Hyde
AI Game Programming Wisdom 4, 2008.
Abstract: This article provides the reader with an understanding of the fundamentals of Bayesian networks. The article will work through several examples, which show how a Bayesian network can be created to model a problem description that could be part of a video game. By the end of the article the reader will have the knowledge necessary to form and solve similar problems on their own. An implementation of our solution to the examples, which shows how beliefs are updated based on different observations, is provided on the accompanying CD-ROM.
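As a reminder of the underlying arithmetic (the numbers here are invented for illustration, not taken from the article), a single Bayes-rule belief update looks like this in C++:

#include <cstdio>

// Question: given that the AI hears gunfire, how likely is an enemy nearby?
int main()
{
    const double pEnemy          = 0.20;  // prior P(enemy nearby)
    const double pGunfireIfEnemy = 0.90;  // P(gunfire | enemy)
    const double pGunfireIfNot   = 0.10;  // P(gunfire | no enemy)

    // Total probability of the evidence, then the posterior via Bayes' rule.
    const double pGunfire = pGunfireIfEnemy * pEnemy
                          + pGunfireIfNot * (1.0 - pEnemy);
    const double posterior = pGunfireIfEnemy * pEnemy / pGunfire;

    std::printf("P(enemy | gunfire) = %.3f\n", posterior);  // prints ~0.692
    return 0;
}

A full Bayesian network chains many such updates through a graph of conditional probability tables, which is what the article's examples build up.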

The Engagement Decision

Baylor Wetzel (Brown College)
AI Game Programming Wisdom 4, 2008.
Abstract: Before every battle comes the question - can I win this battle? Should I attack or should I run? There are a variety of ways to answer this question. This article compares several, from simple power calculations through Monte Carlo simulations, discussing the pros and cons of each and the situations where each is appropriate.
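A toy version of the Monte Carlo end of that spectrum, with a deliberately crude combat model assumed for illustration (one attack roll per side per round), might be sketched in C++ as:

#include <random>

// Estimate P(win) by simulating many quick battles and counting victories.
double estimateWinProbability(int myStrength, int enemyStrength,
                              double myHitChance, double enemyHitChance,
                              int trials, std::mt19937& rng)
{
    std::bernoulli_distribution myHit(myHitChance), enemyHit(enemyHitChance);
    int wins = 0;
    for (int t = 0; t < trials; ++t) {
        int mine = myStrength, theirs = enemyStrength;
        while (mine > 0 && theirs > 0) {
            if (myHit(rng)) --theirs;                       // my side lands a blow
            if (theirs > 0 && enemyHit(rng)) --mine;        // enemy strikes back
        }
        if (theirs <= 0) ++wins;
    }
    return double(wins) / trials;   // engage only if this exceeds the AI's risk tolerance
}

A simple power calculation replaces the inner loop with a ratio of strengths; the article weighs when each level of fidelity is worth its cost.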

A Goal Stack-Based Architecture for RTS AI

David Hernández Cerpa (Enigma Software Productions)
AI Game Programming Wisdom 4, 2008.
Abstract: An RTS game may have dozens or hundreds of individual units. This presents some interesting challenges for the AI system. One approach to managing this complexity is to make decisions on different abstraction levels. The AI for the RTS part of the game War Leaders: Clash of Nations is divided into three levels. This article focuses on the architecture developed for the lower two of these three levels, which correspond to the AI for units, groups, and formations. This architecture is based on the concept of a goal stack as a mechanism to drive the entire agent behavior, together with orders, events, and behaviors.

A Versatile Constraint-Based Camera System

Julien Hamaide (10Tacle Studios Belgium/Elsewhere Entertainment)
AI Game Programming Wisdom 4, 2008.
Abstract: This article extends "Autonomous Camera Control with Constraint Satisfaction Methods" written by Bourne and Sattar and published in AI Game Programming Wisdom 3. It proposes a solution to several problems we encountered during the development of a 3D platform game. One important problem we faced was the shaking of the camera position. We have studied the problem and extracted a set of mathematical conditions to ensure stability. A special set of mathematical conditions are applied to both visibility and collision constraints. The original system allows the camera to move in the entire world. Our project needed a scripted path, thus we proposed a solution to apply the constraint-based camera to limited space such as planes and splines. Speed limiting and using the solver for orientation is explored as future work.

Seeing in 1D: Projecting the World onto a Line

Andrew Slasinski
AI Game Programming Wisdom 4, 2008.
Abstract: With a little bit of cleverness, the GPU can be coerced into performing non-graphics related operations for you with great results. Visibility determination is one of the few components of AI that can take advantage of the massively parallel nature of new GPU hardware. Although rendering a 3D scene onto a 1D line loses a lot of information that rendering to a 2D plane may not miss, the complexity of searching a 2D texture for targets is many times larger. This technique is also simple enough such that smaller, 2D sprite based games might take advantage of it for some new gameplay ideas using searchlights, security guards, or dynamic levels used by the player to stay out of sight.

Reaction Time with Fitts' Law

Baylor Wetzel (Brown College)
AI Game Programming Wisdom 4, 2008.
Abstract: How quickly should an AI aim its weapon at the player? Not knowing the proper answer, developers often make up a number and hope for the best, which can often lead to complaints that the AI has unrealistic response times. Psychologists have studied human reaction time for decades. In this article, we discuss Fitts' Law, which uses the size of a target and its distance from the cursor to predict an agent's response time.
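For reference, the commonly used Shannon formulation of Fitts' law is time = a + b * log2(D/W + 1), where D is the distance to the target, W is its width, and a and b are empirically fitted constants. A hedged C++ sketch with placeholder constants (not values from the article) could drive an AI's aiming delay like this:

#include <cmath>

// Fitts' law, Shannon formulation. The constants a and b below are
// illustrative placeholders; in practice they are fitted to human data.
float aimTimeSeconds(float distanceToTarget, float targetWidth,
                     float a = 0.1f, float b = 0.15f)
{
    float indexOfDifficulty = std::log2(distanceToTarget / targetWidth + 1.0f);
    return a + b * indexOfDifficulty;   // delay the AI's shot by roughly this long
}

The appeal is that response time now scales believably: far or small targets take longer to acquire than near, large ones, instead of using one hand-tuned delay everywhere.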

Enabling Actions of Opportunity with a Light-Weight Subsumption Architecture

Habib Loew (ArenaNet), Chad Hinkle (Nintendo of America Inc.)
AI Game Programming Wisdom 4, 2008.
Abstract: With the ever increasing physical and graphical fidelity in games, players are beginning to demand similar increases in the performance of unit AI. Unfortunately, unit AI is still most often based on simple finite state machines (FSMs) or, occasionally, rule-based systems. While these methods allow for relatively easy development and behavioral tuning, their structure imposes inherent limitations on the versatility of the units they control. In this article we propose an alternate methodology which allows units to effectively pursue multiple simultaneous goals. While our method isn't a panacea by any means, it has the potential to lead to far more flexible, "realistic" unit AI.

Toward More Humanlike NPCs for First-/Third-Person Shooter Games

Darren Doherty and Colm O'Riordan (National University of Ireland Galway)
AI Game Programming Wisdom 4, 2008.
Abstract: This article presents ideas to provide NPCs with more humanlike qualities and a greater sense of individuality in order to create more immersive game-playing experiences that capture and hold the attention and interest of players. We discuss how providing NPCs with personality, emotions, human sensing and memory can enable them to behave in a more humanlike fashion and make NPCs more distinctive. In addition, we discuss the impact that physiological stressors might have on NPCs' behavior and the different weapon handling skills of NPCs, and how these factors can contribute to making the NPCs of FTPS games more individual and humanlike.

Stop Getting Side-Tracked by Side-Quests

Curtis Onuczko, Duane Szafron, and Jonathan Schaeffer (University of Alberta)
AI Game Programming Wisdom 4, 2008.
Abstract: Computer role-playing games often contain a complex main story-line and a series of smaller optional independent mini-stories called side-quests. Side-quests create an open world feeling, provide rewards and experience to the player for exploring optional game content, and build upon the background of the main story without affecting it. The more side-quests you add to your game, the richer the game experience will be for the player. However, more side-quests means more work generating content. This article discusses a Side-QUEst GENerator (SQUEGE) tool that will minimize the amount of time and effort needed to add side-quests to a game story. By using the patterns that exist in stories, SQUEGE intelligently provides an interesting and meaningful structure to the side-quests it produces. The result is a set of automatically generated side-quest outlines. The outlines can then be adapted, giving the game author authorial control over the side-quests generated. Finally, a programmer can use the adapted outlines to create the necessary game scripts in a straightforward manner. This process makes the creation of a large number of side-quests both easy and efficient, saving precious time and resources. The generator is easily extendable to allow for the addition of new patterns.

Spoken Dialogue Systems

Hugo Pinto and Roberta Catizone (University of Sheffield)
AI Game Programming Wisdom 4, 2008.
Abstract: This article provides an overview of modern dialog systems. We start by presenting the issues of voice recognition, language processing, dialog management, language generation and speech synthesis. Next, we analyze two robust speech-based interactive systems, NICE and TRIPS, examining how they solved each of the issues involved in spoken dialog processing. Finally, we examine the particulars of the game domain and provide suggestions on how to approach it, with illustrations from the case studies.

Implementing Story-Driven Games with the Aid of Dynamical Policy Models

Fabio Zambetta (School of Computer Science & IT, RMIT University)
AI Game Programming Wisdom 4, 2008.
Abstract: In this article we introduce a mathematical model of conflict that enhances Richardson's arms race model to account for interactive scenarios, such as the ones provided by computer role-playing games. Accordingly, an HCP (Hybrid Control Process) is devised that can be combined with fuzzy rules to help model non-linear interactive stories. The model presented here can be adopted by game AI programmers to better support the game designers' job, and to provide an interesting and unconventional type of gameplay to players. We also introduce the multi-disciplinary project Two Families: A Tale of New Florence, which illustrates the applications of our model.

Individualized NPC Attitudes with Social Networks

Christian J. Darken (The MOVES Institute), John D. Kelly (U.S. Navy)
AI Game Programming Wisdom 4, 2008.
Abstract: This article introduces a method for largely automating NPC changes in attitude due to a player action. The method resolves the conflicting loyalties of the NPCs to produce a single number per NPC that can be used to update that NPC's feelings toward the player and drive future player-NPC interactions. The mechanics of the method are based on a constrained linear system, so it is computationally efficient, requiring only a single matrix multiplication in many applications.
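The "single matrix multiplication" can be pictured as a loyalty-weighted sum. The sketch below is an assumption about the general shape of that computation, not the authors' system, and it leaves the construction of the weight matrix (the article's real subject) unspecified:

#include <vector>

// Each NPC's change in attitude toward the player is a weighted sum of how
// much that NPC cares about every party affected by the player's action.
// loyalty[i][j] encodes how strongly NPC i sympathizes with NPC j.
std::vector<float> attitudeDeltas(const std::vector<std::vector<float>>& loyalty,
                                  const std::vector<float>& impactOnEachNpc)
{
    std::vector<float> delta(loyalty.size(), 0.0f);
    for (std::size_t i = 0; i < loyalty.size(); ++i)
        for (std::size_t j = 0; j < impactOnEachNpc.size(); ++j)
            delta[i] += loyalty[i][j] * impactOnEachNpc[j];  // row i of the matrix times the impact vector
    return delta;
}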

Scripting Your Way to Advanced AI

Alistair Doulin (Auran Games)
AI Game Programming Wisdom 4, 2008.
Abstract: Using script to empower designers with the ability to create advanced AI allows for more natural and specialized AI behaviors. This article discusses the best practices for achieving this using GameMonkey Script and gives an example of its usage in Battlestar Galactica for Xbox Live Arcade.

Dialogue Managers

Hugo Pinto (University of Sheffield)
AI Game Programming Wisdom 4, 2008.
Abstract: This article presents the main techniques and paradigms of dialog management, with references to games, industrial applications, and academic research. We cover dialog managers based on stacks, finite-state machines, frames, inference engines, and planners. For each technique, we point out its strengths, its applicability, and the issues involved in integrating it into a broader dialog system in a game setting.

Learning Winning Policies in Team-Based First-Person Shooter Games

Stephen Lee-Urban, Megan Smith, and Héctor Muñoz-Avila (Lehigh University)
AI Game Programming Wisdom 4, 2008.
Abstract: This article presents the use of an online reinforcement learning algorithm, called RETALIATE, to automatically acquire team AI in FPS domination-style games. We present the learning problem and state model from which we draw some lessons for designing AI in these game genres.

Adaptive Computer Games: Easing the Authorial Burden

Manish Mehta, Santi Ontañón, Ashwin Ram (Georgia Institute of Technology)
AI Game Programming Wisdom 4, 2008.
Abstract: Artificial intelligence behaviors in games are typically implemented using static, hand-authored scripts. Hand-authoring results in two issues. First, it leads to excessive authorial burden where the author has to craft behaviors for all the possible circumstances that might occur in the game world. Second, it results in games that are brittle to changing world dynamics. In this paper, we present our work to address these two issues by presenting techniques that a) reduce the burden of writing behaviors, and b) increase the adaptivity of those behaviors. We describe a behavior learning system that can learn behavior from human demonstrations and also automatically adapt behaviors when they are not achieving their intended purpose.

Player Modeling for Interactive Storytelling: A Practical Approach

David Thue, Vadim Bulitko, and Marcia Spetch (University of Alberta)
AI Game Programming Wisdom 4, 2008.
Abstract: As computer graphics becomes less of a differentiator in the video game market, many developers are turning to AI and storytelling to ensure that their title stands out from the rest. To date, these have been approached as separate, incompatible tasks; AI engineers feel shackled by the constraints imposed by a story, and the story's authors fear the day that an AI character grabs their leading actor and throws him off a bridge. In this article, we attempt to set aside these differences, bringing AI engineers together with authors through a key intermediary: a player model. Following an overview of the present state of storytelling in commercial games, we present PaSSAGE (Player-Specific Stories via Automatically Generated Events), a storytelling AI that both learns and uses a player model to dynamically adapt a game's story. By combining the knowledge and expertise of authors with a learned player model, PaSSAGE automatically creates engaging and personalized stories that are adapted to appeal to each individual player.

Automatically Generating Score Functions for Strategy Games

Sander Bakkes and Pieter Spronck (Maastricht University, The Netherlands)
AI Game Programming Wisdom 4, 2008.
Abstract: Modern video games present complex environments in which their AI is expected to behave realistically, or in a "human-like" manner. One feature of human behavior is the ability to assess the desirability of the current strategic situation. This type of assessment can be modeled in game AI using a "score function." Due to the complex nature of modern strategy games, the determination of a good score function can be difficult. This difficulty arises in particular from the fact that score functions usually operate in an imperfect information environment. In this article, we show that machine learning techniques can produce a score function that gives good results despite this lack of information.

Automatic Generation of Strategies

Pieter Spronck and Marc Ponsen (Maastricht University, The Netherlands)
AI Game Programming Wisdom 4, 2008.
Abstract: Machine learning techniques can support AI developers in designing, tuning, and debugging tactics and strategies. In this article, we discuss how a genetic algorithm can be used to automatically discover strong strategies. We concentrate on the representation of a strategy in the form of a chromosome, the design of genetic operators to manipulate such chromosomes, the design of a fitness function, and discuss the evolutionary process itself. The techniques and their results are demonstrated in the game of Wargus.

A Practical Guide to Reinforcement Learning in First-Person Shooters

Michelle McPartland (University of Queensland)
AI Game Programming Wisdom 4, 2008.
Abstract: Reinforcement learning (RL) is well suited to FPS bots as it is able to learn short term reactivity as well as long term planning. This article briefly introduces the basics of RL and then describes a popular RL algorithm called Sarsa. It shows how RL can be used to allow FPS bots to learn some of the behaviors that are required to play deathmatch games and presents the results of several experiments.
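For orientation, the Sarsa update itself is Q(s,a) <- Q(s,a) + alpha * (r + gamma * Q(s',a') - Q(s,a)). A minimal C++ sketch with a placeholder state/action encoding follows; the article's contribution is how FPS bot states, actions, and rewards are defined around this core:

#include <unordered_map>

// Tabular Sarsa with a packed (state, action) key. The encoding assumes
// fewer than 1000 actions per state and is purely illustrative.
struct Sarsa {
    std::unordered_map<long long, float> q;   // Q-value table
    float alpha = 0.1f;    // learning rate
    float gamma = 0.9f;    // discount factor

    static long long key(int state, int action) {
        return static_cast<long long>(state) * 1000 + action;
    }

    // Apply one on-policy update after taking action a in state s, receiving
    // reward, landing in state s2, and choosing the next action a2.
    void update(int s, int a, float reward, int s2, int a2) {
        float qNext = q[key(s2, a2)];
        float& qsa  = q[key(s, a)];
        qsa += alpha * (reward + gamma * qNext - qsa);
    }
};

Because Sarsa is on-policy, the bot learns the value of the behavior it actually executes, exploration mistakes included, which tends to produce more cautious policies than Q-learning.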

Implementation Walkthrough of a Homegrown "Abstract State Machine" Style System in a Commercial Sports Game

Brian Schwab (Sony Computer Entertainment of America)
PDF link, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2008.
Abstract: When I began working at Sony Computer Entertainment of America in 2002, the AI system they were using was very dated. Over the next few years, I designed and developed an almost completely data driven system that has proven to be very powerful, extremely extensible, and designer friendly. This system uses a homegrown data structure, the use of which in many ways resembles the software method of using Abstract State Machines for decomposing complex logical constructs iteratively. This paper will provide an overview of the construction and usage of the system, as well as the pros and cons of this type of game AI engine.

Otello: A Next-Generation Reputation System for Humans and NPCs

Michael Sellers (Online Alchemy)
PDF link, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2008.
Abstract: This paper introduces Online Alchemy's Otello technology as a way to enable reputational capabilities beyond any found in games or other online social contexts today. This technology allows participants to quickly and easily assess another's reputation in ways meaningful to them, and enables individuals -- both players and non-player characters (NPCs) -- to contribute to an individual's reputation in unique and novel ways. Otello also enables new forms of 'relational gameplay' that feature social management, effectively an extension of resource management into the social realm. The player's actions and opinions affect others, including how they see the player, and how ideas and opinions propagate through a population.

Navigating detailed worlds with a complex, physically driven locomotion: NPC Skateboarder AI in EA's skate

Mark Wesley (Electronic Arts Black Box)
PDF link, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2008.
Abstract: This talk describes the motivation, design and implementation behind the AI for the NPC Skateboarders in skate. The complexity of the physically driven locomotion used in skate means that, at any given point, there is an extremely large number of degrees of freedom in potential motion. In addition to this, the rules governing whether it is possible to navigate from any given point A to a secondary point B are entirely dependent on the skateboarder's state at point A. The state required at point A involves a large number of variables, as well as a complex set of previously executed maneuvers to have reached it.

Automatic Generation of Game Level Solutions as Storyboards

David Pizzi, Marc Cavazza, Jean-Luc Lugrin (University of Teesside), Alex Whittaker (Eidos)
PDF link, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2008.
Abstract: Interactive Storytelling techniques are attracting much interest for their potential to develop new game genres, but also as another form of procedural content generation, specifically dedicated to game events rather than objects or characters. However, one issue constantly raised by game developers, when discussing gameplay implications of Interactive Storytelling techniques, is the possible loss of designer control over the dynamically generated storyline. Joint research with industry has suggested a new potential use for Interactive Storytelling technologies, whose role is precisely to assist game design. Its basic philosophy is to generate various (or all) possible solutions to a given game level using the player character as the main agent, and gameplay actions as the basic elements of solution generation. We present a fully implemented prototype which uses the blockbuster game Hitman™ as an application. This system uses Heuristic Search Planning to generate level solutions, each legal game action being described as a planning operator. The description of the initial state, the level's objective, and the level layout constitute the input data. Other parameters for the simulation include Hitman's style, which influences the choice of certain actions and privileges a certain style of solution (e.g. stealth versus violent). As a design tool, it seemed appropriate to generate visual output consistent with the current design process. In order to achieve this, we have adapted original Hitman™ storyboards for use with a generated solution: we attach elements of storyboards to the planning operators so that a complete solution generates a comic strip similar to an instantiated storyboard for that solution. We illustrate system behaviour with specific examples of solution generation.

SquadSmart - Hierarchical Planning and Coordinated Plan Execution for Squads of Characters

Peter Gorniak, Ian Davis (Mad Doc Software)
PDF link, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2007.
Abstract: This paper presents an application of Hierarchical Task Network (HTN) planning to a squad-based military simulation. The hierarchical planner produces collaborative plans for the whole squad in real time, generating the type of highly coordinated behaviours typical for armed combat situations involving trained professionals. Here, we detail the extensions to HTN planning necessary to provide real-time planning and subsequent collaborative plan execution. To make full hierarchical planning feasible in a game context we employ a planner compilation technique that saves memory allocations and speeds up symbol access. Additionally, our planner can be paused and resumed, making it possible to impose a hard limit on its computation time during any single frame. For collaborative plan execution we describe several synchronization extensions to the HTN framework, allowing agents to participate in several plans at once and to act in parallel or in sequence during single plans. Overall, we demonstrate that HTN planning can be used as an expressive and powerful real-time planning framework for tightly coupled groups of in-game characters.

Custom Tool Design for Game AI

P.J. Snavely (Sony Computer Entertainment America)
AI Game Programming Wisdom 3, 2006.
Abstract: Artificial intelligence systems in games have become so complex that often one engineer cannot write the entire structure alone. Using the Basketball Artificial Intelligence Tool (BAiT), we were able to integrate the artificial intelligence for NBA 2007 based entirely upon designer data entry and manipulation. While this approach has many positives, there are also some drawbacks to implementing a system like this, as well as some necessary precautions that one should take before even attempting this process.

Using STL and Patterns for Game AI

James Freeman-Hargis (Midway Games)
AI Game Programming Wisdom 3, 2006.
Abstract: Game AI programmers are notorious for reinventing the wheel. But many of the data structures, algorithms and architectures they need have already been done in flexible and reusable ways. This article is intended to serve as a reference for a variety of patterns. While entire volumes have been written to discuss the STL and design patterns in general, this article will provide an introductory overview of the STL and inspect those specific design patterns that have proven the most useful in game AI development. We need to talk about the STL because it provides a series of pre-defined data structures that will not only make life simpler, but which take much of the burden of nuts and bolts implementation away and allow the AI developer to focus on what's really interesting anyway—the AI.

Declarative AI Design for Games—Considerations for MMOGs

Nathan Combs
AI Game Programming Wisdom 3, 2006.
Abstract: The design of behaviors in games and massively multiplayer online games (MMOGs) is based on a style of scripting that is consistent with a cinematic perspective of game design. This style is paradigmatic of how AI is conceptualized in games. This article claims that this approach is not likely to scale in the future and calls for a more declarative style of developing and conceptualizing AI. The objective of this article is to acquaint games AI developers with thoughts and techniques that form a declarative AI design.

Designing for Emergence

Benjamin Wootton (University of Leeds)
AI Game Programming Wisdom 3, 2006.
Abstract: As gamers demand more realistic AI and more dynamic, non-linear, and interactive game worlds, traditional methods of developing AI are beginning to show their limitations in terms of scalability, robustness, and general fitness for purpose. Emergence and the broader "emergent approach" to game design hold great potential as an efficient tool for avoiding these limitations by allowing high-level behaviors and flexible game environments to emerge from low-level building blocks without the need for any hard-coded or scripted behaviors. Our goals in this article are both to demonstrate this case and to explain in practical terms how emergence can be captured by the game designer.

Fun Game AI Design for Beginners

Matt Gilgenbach (Heavy Iron Studios)
AI Game Programming Wisdom 3, 2006.
Abstract: This article is meant to provide food for thought on a number of issues involving AI design. Creating predictable, understandable and consistent AI that doesn't beat the player all the time is no easy task. The AI programmer must make sure that the AI gives the player time to react, doesn't have cheap shots against the player and isn't too simple or too complex. The AI is meant to enrich the player's enjoyment of the game, not to frustrate them, so these rules are important to consider in order to create an enjoyable experience for the player. If you are developing a game AI the best thing you can do (besides considering these rules) is to come up with your own rules from games that you enjoy playing.

Strategies for Multi-Processor AI

Sergio Garces (Pyro Studios)
AI Game Programming Wisdom 3, 2006.
Abstract: With multi-processor hardware becoming commonplace, it is necessary to develop a new architecture that allows the AI engine to execute in parallel in multiple threads. We describe several approaches that try to minimize dependencies and avoid locking, in order to build an efficient concurrent system, while keeping productivity high and preventing threading bugs.

Academic AI Research and Relations with the Game Industry

Christian Baekkelund (Massachusetts Institute of Technology (MIT))
AI Game Programming Wisdom 3, 2006.
Abstract: Historically, a substantial divide has existed between game AI developers and the general AI research community. Game AI developers have typically viewed academic research AI as too far removed from practical use, and academic AI researchers have remained largely uninterested in many of the common problems faced in game development. However, each group has much to gain from better communication and cooperation. While a great deal needs to be done from both sides of the divide, this article will focus on what game developers can do to better understand the academic AI research community and form better relations.

Writing AI as Sport

Peter Cowling (University of Bradford, UK)
AI Game Programming Wisdom 3, 2006.
Abstract: AI has been a sport for many decades. In this article we discuss some of the major competitions between AI game players and discuss the impact on the media and the public of success in these competitions. We discuss some of our own experiences in running AI competitions and provide pointers on running a successful competition. We consider non-programmatic ways that AI has been created, and how this might be used in a new genre of game where the player trains the AI for each player rather than controlling them directly.

Cooperative Pathfinding

David Silver (University of Alberta)
AI Game Programming Wisdom 3, 2006.
Abstract: Cooperative pathfinding is a general technique for coordinating the movements of multiple units. Units communicate their planned paths, enabling other units to avoid their intended routes. This article explains how to implement cooperative pathfinding using a space-time A* search. Moreover, it provides a number of improvements and optimizations, which allow cooperative pathfinding to be implemented both efficiently and robustly.
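The heart of the approach is a space-time reservation table that later units treat as an extra obstacle layer when they plan. The container below is a simplified assumption of how such a table might be kept, not the article's code; the windowing, wait moves, and back-off behavior the article covers are omitted:

#include <set>
#include <tuple>
#include <vector>

// A unit plans in (x, y, time) and reserves each cell it will occupy, so
// units that plan afterwards treat those reservations as blocked.
struct ReservationTable {
    std::set<std::tuple<int, int, int>> reserved;   // (x, y, timestep)

    bool isFree(int x, int y, int t) const {
        return reserved.count(std::make_tuple(x, y, t)) == 0;
    }
    void reservePath(const std::vector<std::tuple<int, int, int>>& path) {
        for (const auto& cell : path) reserved.insert(cell);
    }
};

Space-time A* then expands neighbors only where isFree() holds for the arrival timestep, which is what lets units thread past each other without collisions.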

Improving on Near-Optimality: More Techniques for Building Navigation Meshes

Fredrik Farnstrom (Rockstar San Diego)
AI Game Programming Wisdom 3, 2006.
Abstract: New techniques for automatically building navigation meshes for use in pathfinding are presented, building on Paul Tozour's article "Building a Near-Optimal Navigation Mesh." Polygons are subdivided around walls and other static obstacles with precise cuts that reduce the number of polygons. The same subdivision method can be used for merging overlapping polygons, and the height of the agent is taken into account by extruding polygons. An additional technique for merging the resulting polygons is presented. To improve performance, a simple spatial data structure based on a hash table is used.

Smoothing a Navigation Mesh Path

Geraint Johnson (Sony Computer Entertainment Europe)
AI Game Programming Wisdom 3, 2006.
Abstract: It is becoming increasingly common to use a navigation mesh as the search space representation for pathfinding in games. We present a path-smoothing algorithm for use with a navigation mesh. The algorithm converts a rough path of navigation mesh cells found by A* into a curved path that an agent can follow. We use Bézier splines to generate a rounded curve that is guaranteed to stay on the surface of the navigation mesh, keeping the agent safe. We explain a string-pulling technique used to make the smoothed path as direct as possible.
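
As a small illustration of the curve-generation step, a quadratic Bézier segment is the kind of building block such a smoother can use; the article's full algorithm additionally clamps control points to the mesh surface and applies string pulling, which is not shown in this sketch.

    struct Vec3 { float x, y, z; };

    // Evaluate B(t) = (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2 for t in [0, 1],
    // where p1 is the control point that rounds the corner between two path segments.
    Vec3 quadraticBezier(const Vec3& p0, const Vec3& p1, const Vec3& p2, float t)
    {
        float u = 1.0f - t;
        return { u*u*p0.x + 2*u*t*p1.x + t*t*p2.x,
                 u*u*p0.y + 2*u*t*p1.y + t*t*p2.y,
                 u*u*p0.z + 2*u*t*p1.z + t*t*p2.z };
    }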

Preprocessed Pathfinding Using the GPU

Renaldas Zioma (Digital Illusions Canada Inc.)
AI Game Programming Wisdom 3, 2006.
Abstract: This article proposes GPU-based implementations for two popular algorithms used to solve the all-pairs shortest paths problem: Dijkstra's algorithm, and the Floyd-Warshall algorithm. These algorithms are used to preprocess navigation mesh data for fast pathfinding. This approach can offload pathfinding-related CPU computations to the GPU at the expense of latency. However, once the solution table is generated, this approach minimizes the latency time for a specific path search, thus giving the game a better sense of interactivity. The biggest benefit of this approach is gained in systems with multiple agents simultaneously requesting paths in the same search space. Although the article describes a GPU-specific implementation for a navigation mesh, any other multi-processor environment or discrete search space representation can be used.
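
For reference, the Floyd-Warshall relaxation that the article maps onto the GPU looks like this in plain CPU form (a sketch only; dist holds edge costs, with a very large value where no direct edge exists):

    #include <cstddef>
    #include <vector>

    // All-pairs shortest paths: after the loops, dist[i][j] holds the cost of the
    // cheapest route from node i to node j.
    void floydWarshall(std::vector<std::vector<float>>& dist)
    {
        const std::size_t n = dist.size();
        for (std::size_t k = 0; k < n; ++k)            // allow k as an intermediate node
            for (std::size_t i = 0; i < n; ++i)
                for (std::size_t j = 0; j < n; ++j)
                    if (dist[i][k] + dist[k][j] < dist[i][j])
                        dist[i][j] = dist[i][k] + dist[k][j];
    }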

Flow Fields for Movement and Obstacle Avoidance

Bob Alexander (Zipper Interactive Inc.)
AI Game Programming Wisdom 3, 2006.
Abstract: There are many algorithms in AI that can produce conflicting results. For example, in collision avoidance, avoiding one object can result in hitting another. The AI must resolve these conflicts and find a solution that avoids all objects simultaneously. Resolution is often achieved using iterative processing or prioritization techniques. However, by using flow fields this problem can be solved for all objects simultaneously. In this article we will see how flow fields can be an elegant solution to many other problems as well, such as smoothing A* results and controlling movement during battle.
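
A minimal sketch of the summed-contribution idea, assuming point obstacles with a radial falloff (the types and falloff function here are illustrative, not the article's implementation):

    #include <cmath>
    #include <vector>

    struct Vec2 { float x = 0.f, y = 0.f; };

    struct Obstacle { Vec2 pos; float radius; };

    // Sample the combined field at the agent's position: every nearby obstacle
    // contributes a repulsive vector, and all contributions plus the desired
    // direction are summed in a single pass, so no per-obstacle conflict
    // resolution is needed.
    Vec2 sampleFlow(const Vec2& agentPos, const Vec2& desiredDir,
                    const std::vector<Obstacle>& obstacles)
    {
        Vec2 flow = desiredDir;
        for (const Obstacle& o : obstacles) {
            float dx = agentPos.x - o.pos.x;
            float dy = agentPos.y - o.pos.y;
            float dist = std::sqrt(dx * dx + dy * dy);
            float influence = o.radius * 2.0f;                    // range of the field
            if (dist > 1e-4f && dist < influence) {
                float strength = (influence - dist) / influence;  // 1 at center, 0 at edge
                flow.x += (dx / dist) * strength;
                flow.y += (dy / dist) * strength;
            }
        }
        float len = std::sqrt(flow.x * flow.x + flow.y * flow.y);
        if (len > 1e-4f) { flow.x /= len; flow.y /= len; }
        return flow;  // normalized movement direction for this frame
    }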

Autonomous Camera Control with Constraint Satisfaction Methods

Owen Bourne and Abdul Sattar (Institute for Integrated and Intelligent Systems)
AI Game Programming Wisdom 3, 2006.
Abstract: Producing a robust autonomous camera that can interact with the dense and dynamic environments of interactive games is a difficult and tricky process. Avoiding occlusions, displaying multiple targets, and coherent movements are all problems that are difficult to solve. The constraint satisfaction approach can be used to effectively solve these problems while providing a number of benefits, including extendibility, robustness, and intelligence. This article covers the theory and implementation details for a fully autonomous constraint-based camera system that can be used in arbitrary environments. The included source-code demonstrates the use of the camera system in an interactive environment.

Insect AI 2: Implementation Strategies

Nick Porcino (LucasArts, a Lucasfilm Company)
AI Game Programming Wisdom 3, 2006.
Abstract: The integration of AI into a game engine where the agent is simulated and run under physical control can be a challenge. The AI's internal model of the world is likely to be very simple relative to the complexity of the game world, yet the AI has to function in a reasonable and efficient manner. This article shows how to usefully integrate Insect AI into systems where the physics, collision, and animation systems are black boxes not directly under AI control, and are not even directly accessible by the AI. It also discusses practicalities of implementation including integration with pre-existing AI algorithms in a game engine.

Intelligent Steering Using Adaptive PID Controllers

Euan Forrester (Next Level Games Inc.)
AI Game Programming Wisdom 3, 2006.
Abstract: As physics systems become more complex and are embedded more deeply into our games, our jobs as AI programmers become more difficult. AI characters need to operate under the same physical restrictions as the player to maintain the visual continuity of the game, to reduce the player's sense of being cheated by the computer, and to reduce the development workload necessary to create multiple physics systems which must interact with one another. Although this problem can be solved by using standard PID (Proportional-Integral-Derivative) controllers, they are difficult to tune for physics systems whose characteristics vary over time. Fortunately, control engineering provides a solution to this problem: adaptive controllers. This article focuses on Model Reference Adaptive Controllers: controllers which attempt to make the AI character's behavior match a predefined model as closely as possible within the physical constraints imposed by the game. The article comes with full source code for a demo that lets you change the handling characteristics of a missile flying towards a moving target, and watch while the PID coefficients are updated in real-time.
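
For readers unfamiliar with the underlying controller, a bare-bones PID update is sketched below; the adaptive controllers discussed in the article additionally adjust kp, ki, and kd at runtime so the response tracks a reference model, which is not shown here. The names are illustrative, not the article's source.

    // Minimal PID controller sketch.
    struct PIDController {
        float kp, ki, kd;        // proportional, integral, derivative gains
        float integral  = 0.f;
        float prevError = 0.f;

        // error = desired - actual (e.g. desired heading minus current heading)
        float update(float error, float dt) {
            integral += error * dt;
            float derivative = (error - prevError) / dt;
            prevError = error;
            return kp * error + ki * integral + kd * derivative;
        }
    };

    // Usage: feed the output into the physics system as a steering torque/force.
    // PIDController steer{2.0f, 0.1f, 0.5f};
    // float torque = steer.update(targetHeading - currentHeading, dt);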

Fast, Neat, and Under Control: Arbitrating Between Steering Behaviors

Heni Ben Amor, Jan Murray, and Oliver Obst (University of Koblenz)
AI Game Programming Wisdom 3, 2006.
Abstract: Steering behaviors are a convenient way of creating complex and lifelike movements from simple reactive procedures. However, the process of merging those behaviors is not trivial and the resulting steering command can lead to suboptimal or even catastrophic results. This article presents a solution to these problems by introducing inverse steering behaviors (ISBs) for controlling physical agents. Based on the original concept of steering behaviors, ISBs facilitate improved arbitration between different behaviors by doing a cost-based analysis of several steering vectors instead of relying on one solution only.
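
A rough sketch of that cost-based arbitration idea, assuming each behavior can score an arbitrary candidate heading (the function names and candidate sampling are assumptions, not the authors' implementation):

    #include <cmath>
    #include <functional>
    #include <limits>
    #include <vector>

    struct Vec2 { float x, y; };

    // Each behavior returns a cost for a candidate direction
    // (e.g. collision risk, deviation from the goal direction).
    using CostFunction = std::function<float(const Vec2& candidateDir)>;

    // Instead of blending one output vector per behavior, score a ring of
    // candidate headings against every behavior and pick the cheapest one.
    Vec2 chooseHeading(const std::vector<CostFunction>& behaviors, int numCandidates = 16)
    {
        Vec2 best{1.f, 0.f};
        float bestCost = std::numeric_limits<float>::max();
        for (int i = 0; i < numCandidates; ++i) {
            float angle = 2.f * 3.14159265f * i / numCandidates;
            Vec2 dir{std::cos(angle), std::sin(angle)};
            float cost = 0.f;
            for (const auto& behavior : behaviors)
                cost += behavior(dir);
            if (cost < bestCost) { bestCost = cost; best = dir; }
        }
        return best;
    }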

Real-Time Crowd Simulation Using AI.implant

Paul Kruszewski (BGT BioGraphic Technologies, Inc.)
AI Game Programming Wisdom 3, 2006.
Abstract: Next-generation gaming hardware such as the Xbox 360 and PlayStation 3 will allow the creation and visualization of large visually rich but virtually uninhabited cities. It remains an open problem to efficiently create and control large numbers of vehicles and pedestrians within these environments. We present a system originating from the special effects industry, and expanded in the military simulation industry, that has been successfully evolved into a practical and scalable real-time urban crowd simulation game pipeline with a behavioral fidelity that previously has only been available for non-real-time applications such as films and cinematics.

Flexible Object-Composition Architecture

Sergio Garces (Pyro Studios)
AI Game Programming Wisdom 3, 2006.
Abstract: Object-composition architectures provide an easy way to assemble game objects as a collection of components, each of them with a specific and modular function. Archetypes are used to define what components an object consists of, and therefore what objects do. Archetype definition is data-driven, empowering designers to experiment with gameplay. The last ingredient in the mix is good tools, which might take advantage of data inheritance to increase productivity.

A Goal-Based, Multi-Tasking Agent Architecture

Elizabeth Gordon (Frontier Developments, Ltd.)
AI Game Programming Wisdom 3, 2006.
Abstract: This article describes a goal-based, multi-tasking agent architecture for computer game characters. It includes some mechanisms for representing and requesting information about the game world, as well as a method for selecting a set of compatible goals to execute based on the availability of necessary items. Finally, the article includes a brief discussion of techniques for designing and debugging goal-based systems.

Orwellian State Machines

Igor Borovikov (Sony Computer Entertainment America)
AI Game Programming Wisdom 3, 2006.
Abstract: The article explores a methodology for building game AI based on subsumption, command hierarchy, messaging and finite state machines. The approach is derived from a metaphor of bureaucratic dictatorship. This metaphor helps in the analysis and practical design of particular AI subsystems on both the individual and group layers. The resulting architecture is called an Orwellian State Machine (OSM).

A Flexible AI System through Behavior Compositing

Matt Gilgenbach (Heavy Iron Studios), Travis McIntosh (Naughty Dog)
AI Game Programming Wisdom 3, 2006.
Abstract: This article proposes a new way of defining AI states as modular behaviors, so code can be reused between NPCs with a minimal amount of effort. With this system, state transitions are not explicitly recorded in a table as in many finite state machine implementations. Every behavior has a "runnable" condition and a priority, so state transitions are determined by checking these conditions in sorted order. Common issues that arise with this implementation are addressed, including performance, ease of refactoring, and interdependencies.
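
A minimal sketch of that selection scheme, with illustrative class names rather than the authors' code:

    #include <algorithm>
    #include <memory>
    #include <vector>

    class Behavior {
    public:
        virtual ~Behavior() = default;
        virtual bool isRunnable() const = 0;   // can this behavior run right now?
        virtual int  priority()   const = 0;   // higher value wins
        virtual void update(float dt) = 0;
    };

    class BehaviorSelector {
        std::vector<std::unique_ptr<Behavior>> behaviors;  // kept sorted by priority
    public:
        void add(std::unique_ptr<Behavior> b) {
            behaviors.push_back(std::move(b));
            std::sort(behaviors.begin(), behaviors.end(),
                      [](const auto& a, const auto& b) { return a->priority() > b->priority(); });
        }
        // No explicit transition table: each frame, run the highest-priority
        // behavior whose runnable condition currently holds.
        void update(float dt) {
            for (auto& b : behaviors)
                if (b->isRunnable()) { b->update(dt); return; }
        }
    };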

Goal Trees

Geraint Johnson (Sony Computer Entertainment Europe)
AI Game Programming Wisdom 3, 2006.
Abstract: We present a generic AI architecture for implementing the behavior of game agents. All levels of behavior, from tactical maneuvers to path-following and steering, are implemented as goals. Each goal can set up one or more subgoals to achieve its aim, so that a tree structure is formed with the primary goal of the agent at its root. Potential primary goals are experts on when they should be selected, and scripts can also force behavior at any level by providing a sequence of primary goals. The architecture is more robust than a finite state machine (FSM) and more efficient than a full planning system.
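
A minimal sketch of a goal-tree node under these assumptions (the names and the depth-first update policy are illustrative, not the article's code):

    #include <memory>
    #include <vector>

    enum class GoalStatus { Active, Completed, Failed };

    class Goal {
    public:
        virtual ~Goal() = default;

        // Process subgoals front to back; a leaf goal with no subgoals overrides
        // update() to do its own work (steering, animation requests, etc.).
        virtual GoalStatus update(float dt) {
            while (!subgoals.empty()) {
                GoalStatus s = subgoals.front()->update(dt);
                if (s == GoalStatus::Active) return GoalStatus::Active;
                subgoals.erase(subgoals.begin());
                if (s == GoalStatus::Failed) return GoalStatus::Failed;
            }
            return GoalStatus::Completed;
        }

        void addSubgoal(std::unique_ptr<Goal> g) { subgoals.push_back(std::move(g)); }

    protected:
        std::vector<std::unique_ptr<Goal>> subgoals;
    };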

A Unified Architecture for Goal Planning and Navigation

Dominic Filion
AI Game Programming Wisdom 3, 2006.
Abstract: Graph networks, traversed by standard algorithms such as A*, are the staple of most pathfinding systems. The formalization of navigation algorithms into a search graph that represents spatial positioning is one of the most effective ideas in game AI. However ubiquitous graph networks may be in pathfinding, their use in more general problem domains in modern games seems to be less common. Couldn't we extend the standard pathfinding arsenal—graph networks and A*—to other problem sets? This is the idea that we will be exploring in this article.

Prioritizing Actions in a Goal-Based RTS AI

Kevin Dill (Blue Fang Games)
AI Game Programming Wisdom 3, 2006.
Abstract: In this article we outline the architecture of our strategic AI and discuss a variety of techniques that we used to generate priorities for its goals. This engine provided the opposing player AI of our real-time strategy games Kohan 2: Kings of War and Axis & Allies. The architecture is easily extensible, flexible enough to be used in a variety of different types of games, and sufficiently powerful to provide a good challenge for an average player on a random, unexplored map without unfair advantages.

Extending Simple Weighted-Sum Systems

Sergio Garces (Pyro Studios)
AI Game Programming Wisdom 3, 2006.
Abstract: Decision-making is an important function of every AI engine. One popular technique involves calculating a weighted sum, which combines a number of factors into a desirability value for each option, and then selecting the option with the highest score. Some extensions, such as the incorporation of behavioral inertia, the use of response curves, or the combination of the system with a rule-based engine, can turn the weighted sum into a very robust, flexible approach for controlling behavior.
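
A small sketch of a weighted-sum scorer with the two extensions mentioned (response curves and behavioral inertia); the structure and names are assumptions for illustration only.

    #include <cstddef>
    #include <functional>
    #include <vector>

    struct Factor {
        std::function<float()> measure;              // raw input, ideally in [0, 1]
        std::function<float(float)> responseCurve;   // reshapes the raw value
        float weight;
    };

    struct Option {
        std::vector<Factor> factors;
        float score() const {
            float s = 0.f;
            for (const Factor& f : factors)
                s += f.weight * f.responseCurve(f.measure());
            return s;
        }
    };

    // Behavioral inertia: the currently selected option gets a small bonus so the
    // agent does not oscillate between options with similar scores.
    std::size_t selectOption(const std::vector<Option>& options,
                             std::size_t current, float inertiaBonus = 0.1f)
    {
        std::size_t best = current;
        float bestScore = options[current].score() + inertiaBonus;
        for (std::size_t i = 0; i < options.size(); ++i) {
            if (i == current) continue;
            float s = options[i].score();
            if (s > bestScore) { bestScore = s; best = i; }
        }
        return best;
    }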

AI Waterfall: Populating Large Worlds Using Limited Resources

Sandeep V. Kharkar (Indie Built, Inc.)
AI Game Programming Wisdom 3, 2006.
Abstract: This article presents an architecture that simplifies the process of populating large worlds with interesting and varied actors using a relatively small number of AI agents. The architecture derives its concept from faux waterfalls that recycle the same water to create the illusion of continuous flow. The architecture can be broken down into two distinct parts: a director class that moves the actors around the stage and provides them with a script for the role they play, and a set of game-specific actors that play the part they are assigned until they are asked to go back in the wings for a costume change. One section of the article is dedicated to optimization techniques for the architecture. The code for the underlying architecture is included with the article.

An Introduction to Behavior-Based Systems for Games

Aaron Khoo (Microsoft)
AI Game Programming Wisdom 3, 2006.
Abstract: Behavior-based systems are an efficient way of controlling NPCs in video games. By taking advantage of simpler propositional logic, these systems are able to reason efficiently and react quickly to changes in the environment. The developer builds the AI system one behavior layer at a time, and then aggregates the results of all the behaviors into a final output value using a resolution system. The resulting systems are equivalent to finite state machines but are not constructed in the traditional state-transition manner, and they can often be mostly stateless, avoiding most of the messy transitions that must be built into FSMs to handle various contingencies.

Simulating a Plan

Petar Kotevski (Genuine Games)
AI Game Programming Wisdom 3, 2006.
Abstract: The article describes a methodology of supplementing traditional FSMs with contextual information about the internal state of the agent and the environment that the agent is in, by defining game events and deriving rules for responses to a given game event. This creates a completely non-scripted experience that varies with every different player, because in essence the system responds to game events generated by the player himself. By defining simple rules for enemy behavior and environments in which those rules can be clearly seen, it is possible to simulate group behavior where no underlying code for it is present. The system described is completely deterministic, thus easy to maintain, QA, and debug. It is also not computationally expensive, so rather large populations of AI agents can be simulated using the proposed system.

Probabilistic Target Tracking and Search Using Occupancy Maps

Damián Isla (Bungie Studios)
AI Game Programming Wisdom 3, 2006.
Abstract: This article will introduce Occupancy Maps, a technique for probabilistically tracking object positions. Occupancy Maps, an application of a broader Expectation Theory, can result in more interesting and realistic searching behaviors, and can also be used to generate emotional reactions to search events, like surprise (at finding a target in an unexpected place) and confusion (at failing to find a target in an expected place). It is also argued that the use of more in-depth knowledge-modeling techniques such as Occupancy Maps can relieve some of the complexity of a traditional FSM or HFSM approach to search behavior.
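
A simplified sketch of the kind of update an occupancy map performs each tick, assuming a 2D grid (the diffusion rule and data layout here are illustrative, not the article's code): probability spreads from unobserved cells to their neighbors, observed-empty cells are zeroed, and the map is renormalized so it remains a valid distribution.

    #include <vector>

    struct OccupancyMap {
        int width, height;
        std::vector<float> p;  // width * height probabilities, summing to 1

        OccupancyMap(int w, int h) : width(w), height(h), p(w * h, 1.f / (w * h)) {}
        float& at(int x, int y) { return p[y * width + x]; }

        // Spread a fraction of each interior cell's probability to its four neighbors.
        void diffuse(float rate) {
            std::vector<float> next = p;
            for (int y = 1; y < height - 1; ++y)
                for (int x = 1; x < width - 1; ++x) {
                    float spread = at(x, y) * rate;
                    next[y * width + x]       -= spread;
                    next[y * width + x - 1]   += spread * 0.25f;
                    next[y * width + x + 1]   += spread * 0.25f;
                    next[(y - 1) * width + x] += spread * 0.25f;
                    next[(y + 1) * width + x] += spread * 0.25f;
                }
            p.swap(next);
        }

        // Called for every cell the searcher can currently see that is empty.
        void observeEmpty(int x, int y) { at(x, y) = 0.f; }

        void normalize() {
            float sum = 0.f;
            for (float v : p) sum += v;
            if (sum > 0.f) for (float& v : p) v /= sum;
        }
    };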

Dynamic Tactical Position Evaluation

Remco Straatman and Arjen Beij (Guerrilla Games), William van der Sterren (CGF-AI)
AI Game Programming Wisdom 3, 2006.
Abstract: Dynamic tactical position evaluation is essential in making tactical shooters less linear and more responsive to the player and to changes in the game world. Designer-placed hints for positioning and detailed scripting are impractical for games with unpredictable situations arising from player freedom and dynamic environments. This article describes the techniques used to address these issues for Guerrilla's console titles Killzone and Shellshock Nam '67. The basic position evaluation mechanism is explained, along with its application to selecting tactical positions and finding tactical paths. Some alternative uses of the technique are given, such as generating intelligent scanning positions and suppressive fire, and the practical issues of configuration and performance are discussed.

Finding Cover in Dynamic Environments

Christian J. Darken (The MOVES Institute), Gregory H. Paull (Secret Level Inc.)
AI Game Programming Wisdom 3, 2006.
Abstract: In this article, we describe our approach to improved cover finding with an emphasis on adaptability to dynamic environments. The technique described here combines level annotation with the sensor grid algorithm. The strength of level annotation is its modest computational requirements. The strength of the sensor grid algorithm is its ability to handle dynamic environments and to find smaller cover opportunities in static environments. Each approach is useful by itself, but combining the two can provide much of the benefit of both. In a nutshell, our approach relies on cover information stored in the candidate cover positions placed throughout the level whenever possible and performs a focused run-time search in the immediate vicinity of the agent if the level annotation information is insufficient. This allows it to be fast and yet able to react to changes in the environment that occur during play.

Coordinating Teams of Bots with Hierarchical Task Network Planning

Hector Munoz-Avila and Hai Hoang (Lehigh University)
AI Game Programming Wisdom 3, 2006.
Abstract: This article presents the use of Hierarchical-Task-Network (HTN) representations to model strategic game AI. We demonstrate the use of hierarchical planning techniques to coordinate a team of bots in an FPS game.

Training Digital Monsters to Fight in the Real World

James Boer and John Corpening (ArenaNet)
AI Game Programming Wisdom 3, 2006.
Abstract: This article discusses how we approached and solved the problem of creating compelling AI agents for Digimon Rumble Arena 2, a one- to four-player brawler. This consisted of two major challenges: how to pathfind through and respond intelligently to highly dynamic and interactive environments, and how to program a wide variety of characters to play effectively in ten different game types without incurring a combinatorial explosion of code complexity.

The Suffering: Game AI Lessons Learned

Greg Alt (Surreal Software)
AI Game Programming Wisdom 3, 2006.
Abstract: This article presents a collection of lessons learned through building and evolving an AI architecture over the development of three games for PS2, Xbox, and PC: The Lord of The Rings: The Fellowship of the Ring; The Suffering; and The Suffering: Ties That Bind. The lessons cover alternate uses for A* and pathfinding, visualizations to aid AI development and debugging, benefits of a fine-grained hierarchical behavior system, and the combination of autonomy and scripted behavior for non-player characters (NPCs).

Environmental Awareness in Game Agents

Penny Sweetser (The University of Queensland)
AI Game Programming Wisdom 3, 2006.
Abstract: Agents make up an important part of game worlds, ranging from the characters and monsters that live in the world to the armies that the player controls. Despite their importance, agents in current games rarely display an awareness of their environment or react appropriately, which severely detracts from the believability of the game. Some games have included agents with a basic awareness of other agents, but they are still unaware of important game events or environmental conditions. This chapter describes an agent design that combines cellular automata for environmental modeling with influence maps for agent decision-making. The result is simple, flexible game agents that are able to respond to natural phenomena (e.g. rain or fire), while pursuing a goal.

Fast and Accurate Gesture Recognition for Character Control

Markus Wöß (Foolscap Vienna)
AI Game Programming Wisdom 3, 2006.
Abstract: This article describes a simple, yet fast and accurate, way of gesture recognition that we have used in Punch'n'Crunch, a gesture-based fun-boxing game. The presented system is a very interesting way to control characters, but can also be used to recognize letters, numbers, and other arbitrary symbols. Gestures allow a more natural way for triggering a multitude of different commands.

Being a Better Buddy: Interpreting the Player's Behavior

William van der Sterren (CGF-AI)
AI Game Programming Wisdom 3, 2006.
Abstract: In shooter games, the player's activity can be interpreted by the AI to recognize certain tactical behaviors. Based on this, the AI can direct the friendly NPCs to better assist the player. To interpret and classify the player's activity, a naïve Bayes classifier is used. With careful design of the inputs to this classifier, some post-processing of its output, and by gathering good training data, the player's activity can be interpreted in an efficient and robust way.

Ant Colony Organization for MMORPG and RTS Creature Resource Gathering

Jason Dunn (H2Code)
AI Game Programming Wisdom 3, 2006.
Abstract: This article provides details about the implementation of ant colonies for pathfinding in massively multiplayer and real-time strategy games. Details include the effects of pheromones and individual ant behavior, as well as what variables to focus on when adapting the provided source code. Readers are taught how to control the elasticity of path seeking and path reinforcement.

RTS Citizen Unit AI

Shawn Shoemaker (Stainless Steel Studios)
AI Game Programming Wisdom 3, 2006.
Abstract: Unit AI refers to the micro-level artificial intelligence that controls a specific unit in an RTS game and how that unit reacts to input from the player and the game world. Citizens present a particular challenge for unit AI because the citizen is a super unit, combining the unit AI of every other RTS unit. This article discusses some real-world problems and solutions for citizen unit AI, taken from the development of three RTS titles, including Empire Earth. In addition, this article discusses further features necessary for the citizen, such as build queuing and "smart" citizens.

A Combat Flight Simulation AI Framework

Phil Carlisle (University of Bolton and Ace Simulations Ltd.)
AI Game Programming Wisdom 3, 2006.
Abstract: This article covers the AI framework requirements specific to an air combat flight simulation. It explains the general AI framework that should already be in place before describing the data structures, algorithms, and requirements specific to air combat flight simulation that are needed to deliver a playable AI opponent for such simulations.

Opinion Systems

Adam Russell (Lionhead Studios)
AI Game Programming Wisdom 3, 2006.
Abstract: Modeling the formation and effect of opinions about the player character in a simulated social environment is a difficult problem for game AI, but one increasingly worth tackling. This article discusses some of the wisdom gained during the construction of perhaps the most complex opinion system ever seen in a commercial game, that of Lionhead Studios' Fable.

An Analysis of Far Cry: Instincts' Anchor System

Eric Martel (Ubisoft Montreal)
AI Game Programming Wisdom 3, 2006.
Abstract: An overview of the anchor system used in Far Cry Instincts to control dynamic cut-scenes and specific character reactions. This article explains a few choices made when the system was designed and then turns into a quick post-mortem style review of the system. Programmers working on a dynamic cut-scene system should find pertinent information that will allow them to build a better and more flexible system.

Creating a Visual Scripting System

Matthew McNaughton and Thomas Roy (University of Alberta)
AI Game Programming Wisdom 3, 2006.
Abstract: Scripting is frequently used to implement the story-related behavior of characters and other objects in a game. However, writing scripts can be time-consuming and difficult for the game designers responsible for conceiving of and implementing the story. We introduce a powerful, extensible, and expressive visual scripting framework to solve these problems, with a sample implementation. The framework works with any existing scripting language in any game. We used the ideas in this article to develop ScriptEase, a visual scripting tool, and have used it in several successful case studies where 10th grade students developed scripted game modules for Neverwinter Nights.

Intelligent Story Direction in the Interactive Drama Architecture

Brian Magerko (Michigan State University)
AI Game Programming Wisdom 3, 2006.
Abstract: Interactive drama is a new field of entertainment that attempts to give the player a dramatic situation that is tailored towards their interaction with the storyworld. A particular technique for creating interactive drama is the use of an automated story director, an intelligent agent that is responsible for managing the performance of synthetic characters in response to authored story content and player actions in the world. This article addresses some of the issues with using a story director, and introduces the Interactive Drama Architecture as a case study for story direction in a general architecture.

Practical Algorithms for In-Game Learning

John Manslow
AI Game Programming Wisdom 3, 2006.
Abstract: This article describes fast and efficient techniques that can be used for in-game learning. With a strong emphasis on the requirements that techniques must satisfy to be used in-game, it presents a range of practical techniques that can be used to produce learning and adaptation during gameplay, including moving average estimators, probability estimators, percentile estimators, single layer neural networks, nearest neighbor estimators, and decision trees. The article concludes by presenting an overview of two different types of stochastic optimization algorithms and describes a new way to produce adaptive difficulty levels.

A Brief Comparison of Machine Learning Methods

Christian Baekkelund (Massachusetts Institute of Technology (MIT))
AI Game Programming Wisdom 3, 2006.
Abstract: When considering whether or not to use machine learning methods in a game, it is important to be aware of their capabilities and limitations. As different methods have different strengths and weaknesses, it is paramount that the correct learning method be selected for a given task. This article will give a brief overview of the strengths, weaknesses, and general capabilities of common machine learning methods and the differences between them. Armed with this information, an AI programmer will better be able to go about the task of selecting "The right tool for the job."

Introduction to Hidden Markov Models

Robert Zubek (Electronic Arts / Maxis)
AI Game Programming Wisdom 3, 2006.
Abstract: Hidden Markov models are a probabilistic technique that provides an inexpensive and intuitive means of modeling stochastic processes. This article introduces the models, presents the computational details of tracking processes over time, and shows how they can be used to track a player's movement and behavior based on scattered and uncertain observations.
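
The core computation such tracking builds on is the standard HMM forward step; a sketch under the usual textbook formulation (not the article's code) is shown below.

    #include <cstddef>
    #include <vector>

    // transition[i][j] = P(next state j | current state i)
    // likelihood[j]    = P(latest observation | state j)
    // belief[i]        = current probability of hidden state i
    std::vector<double> forwardStep(const std::vector<double>& belief,
                                    const std::vector<std::vector<double>>& transition,
                                    const std::vector<double>& likelihood)
    {
        const std::size_t n = belief.size();
        std::vector<double> next(n, 0.0);
        for (std::size_t j = 0; j < n; ++j) {
            double predicted = 0.0;
            for (std::size_t i = 0; i < n; ++i)
                predicted += belief[i] * transition[i][j];  // prediction step
            next[j] = predicted * likelihood[j];            // correction step
        }
        double sum = 0.0;
        for (double v : next) sum += v;
        if (sum > 0.0) for (double& v : next) v /= sum;     // renormalize
        return next;
    }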

Preference-Based Player Modeling

Jeroen Donkers and Pieter Spronck (Universiteit Maastricht, The Netherlands)
AI Game Programming Wisdom 3, 2006.
Abstract: This article describes how to create models of players based on their preferences for certain game states and shows how these models can be used to predict a player's actions. We show how this enables the computer to reason more intelligently about its actions, to adapt to the player, and thereby to act as a more challenging and entertaining opponent. The article describes two ways to create models, player model search and probabilistic player model search, and illustrates their application with the help of pseudo-code. Finally, the article provides an example of how these techniques could be used to enhance a computer's diplomatic reasoning in a strategy game.

Dynamic Scripting

Pieter Spronck (Universiteit Maastricht, The Netherlands)
AI Game Programming Wisdom 3, 2006.
Abstract: Dynamic scripting is a technique that can be used to adapt the behavior of NPCs during gameplay. It creates scripts on the fly by extracting rules from a rulebase according to probabilities that are derived from weights that are associated with each rule. The weights adapt to reflect the performance of the scripts that are generated, so that rules that are consistently associated with the best scripts will quickly develop large weights and be selected more frequently. Dynamic scripting has been successfully applied to a wide range of genres and has been validated experimentally in RTS games and RPGs.
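
A simplified sketch of weight-proportional rule selection and weight adjustment follows; the full technique also redistributes weight so the rulebase total stays constant, which is omitted here, and all names are illustrative.

    #include <random>
    #include <string>
    #include <vector>

    struct Rule { std::string action; float weight = 100.f; };

    // Build a script by drawing rule indices with probability proportional to weight.
    std::vector<int> generateScript(const std::vector<Rule>& rulebase,
                                    int scriptSize, std::mt19937& rng)
    {
        std::vector<float> weights;
        for (const Rule& r : rulebase) weights.push_back(r.weight);
        std::discrete_distribution<int> pick(weights.begin(), weights.end());

        std::vector<int> script;
        for (int i = 0; i < scriptSize; ++i)
            script.push_back(pick(rng));   // indices into the rulebase
        return script;
    }

    // fitness in [-1, 1]: positive if the script performed well, negative otherwise.
    void updateWeights(std::vector<Rule>& rulebase, const std::vector<int>& script,
                       float fitness, float learningRate = 20.f,
                       float minW = 10.f, float maxW = 400.f)
    {
        for (int idx : script) {
            float& w = rulebase[idx].weight;
            w += learningRate * fitness;
            if (w < minW) w = minW;        // clamping keeps every rule selectable
            if (w > maxW) w = maxW;
        }
    }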

Encoding Schemes and Fitness Functions for Genetic Algorithms

Dale Thomas (Q Games)
AI Game Programming Wisdom 3, 2006.
Abstract: Genetic algorithms (GAs) have great potential in game AI and they have been widely discussed in the game development community. Many attempts to apply GAs in practice have only led to frustration and disappointment, however, because many introductory texts encourage naïve implementations of GAs that do not include the application-specific enhancements that are often required in practice. This article addresses this problem by describing the roles played by the encoding scheme, genetic operators, and fitness function in a GA and describes how each of them can be designed in an application-specific way to achieve maximum evolutionary performance.

A New Look at Learning and Games

Christian Baekkelund (Massachusetts Institute of Technology (MIT))
AI Game Programming Wisdom 3, 2006.
Abstract: Most discussions of the application of learning methods in games adhere to a fairly rigid view of when and where they should be applied. Typically, they advocate the use of such algorithms to facilitate non-player character (NPC) adaptation during gameplay and occasionally promote their use as part of the development process as a tool that can assist in the creation of NPC AI. This article attempts to broaden the discussion over the application of modeling and optimization algorithms that are typically used to produce learning by discussing alternative ways to use them in game AI, as well as more generally in the game development process.

Constructing Adaptive AI Using Knowledge-Based Neuroevolution

Ryan Cornelius, Kenneth O. Stanley, and Risto Miikkulainen (The University of Texas at Austin)
AI Game Programming Wisdom 3, 2006.
Abstract: Machine learning can increase the appeal of videogames by allowing non-player characters (NPCs) to adapt to the player in real-time. Although techniques such as real-time NeuroEvolution of Augmenting Topologies (rtNEAT) have achieved some success in this area by evolving artificial neural network (ANN) controllers for NPCs, rtNEAT NPCs are not smart out-of-the-box and significant evolution is often required before they develop even a basic level of competence. This article describes a technique that solves this problem by allowing developers to convert their existing finite state machines (FSMs) into functionally equivalent ANNs that can be used with rtNEAT. This means that rtNEAT NPCs will start out with all the abilities of standard NPCs and be able to evolve new behaviors of potentially unlimited complexity.

Applying Model-Based Decision-Making Methods to Games: Applying the Locust AI Engine to Quake III

Armand Prieditis
Game Programming Gems 6, 2006.

Achieving Coordination with Autonomous NPCs

Diego Garcés (FX Interactive)
Game Programming Gems 6, 2006.

Behavior-Based Robotic Architectures for Games

Hugo Pinto and Luis Otavio Alvares
Game Programming Gems 6, 2006.

Constructing a Goal-Oriented Robot for Unreal Tournament Using Fuzzy Sensors, Finite-State Machines, and Behavior Networks

Hugo Pinto and Luis Otavio Alvares
Game Programming Gems 6, 2006.

A Goal-Oriented Unreal Bot: Building a Game Agent with Goal-Oriented Behavior and Simple Personality Using Extended Behavior Networks

Hugo Pinto and Luis Otavio Alvares
Game Programming Gems 6, 2006.

Short-Term Memory Modeling Using a Support Vector Machine

Julien Hamaide
Game Programming Gems 6, 2006.

Using the Quantified Judgment Model for Engagement Analysis

Michael Ramsey
Game Programming Gems 6, 2006.

Designing a Multilayer, Pluggable AI Engine

Sébastien Schertenleib (Swiss Federal Institute of Technology)
Game Programming Gems 6, 2006.

A Fuzzy-Control Approach to Managing Scene Complexity

Gabriyel Wong
Game Programming Gems 6, 2006.

Scripting Language Survey

Diego Garcés (FX Interactive)
Game Programming Gems 6, 2006.

Binding C/C++ Objects to Lua

Waldemar Celes (PUC-Rio), Luiz Henrique de Figueiredo (Institute for Pure and Applied Mathematics), Roberto Ierusalimschy (PUC-Rio)
Game Programming Gems 6, 2006.

Programming Advanced Control Mechanisms with Lua Coroutines

Waldemar Celes (PUC-Rio), Luiz Henrique de Figueiredo (Institute for Pure and Applied Mathematics), Roberto Ierusalimschy (PUC-Rio)
Game Programming Gems 6, 2006.

Managing High-Level Script Execution Within Multithreaded Environments

Sébastien Schertenleib (Swiss Federal Institute of Technology)
Game Programming Gems 6, 2006.

AI Wall Building in Empire Earth II

Tara Teich, Ian Lane Davis (Mad Doc Software)
PDF link, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2006.
Abstract: Real-Time Strategy games are among the most popular genres of commercial PC games, and also have widely applicable analogs in the field of Serious Games, such as military simulations, city planning, and other forms of simulation involving multi-agent coordination and an underlying economy. One of the core tasks in playing a traditional Real-Time Strategy game is building a base in an effective manner and defending it well. Creating an AI that can construct a successful wall was one of the more challenging areas of development on Empire Earth II, as building a wall requires analysis of the terrain and techniques from computational geometry. An effective wall can hold off enemy troops and keep battles away from the delicate economy inside the base.

Large-Scale Stack-Based State Machines

James Boer
Game Programming Gems 5, 2005.

Building Lua into Games

Matthew Harmen (eV Interactive Corporation)
Game Programming Gems 5, 2005.

Visual Design of State Machines

Scott Jacobs
Game Programming Gems 5, 2005.

Automatic Cover Finding with Navigation Meshes

Borut Pfeifer (Radical Entertainment)
Game Programming Gems 5, 2005.

Fast Target Ranking Using an Artificial Potential Field

Markus Breyer (Factor 5)
Game Programming Gems 5, 2005.

Using Lanchester Attrition Models to Predict the Results of Combat

John Bolton (Page 44 Studios)
Game Programming Gems 5, 2005.

Implementing Practical Planning for Game AI

Jamie Cheng (Relic Entertainment), Finnegan Southey (University of Alberta, Computer Science)
Game Programming Gems 5, 2005.

Optimizing a Decision Tree Query Algorithm for Multithreaded Architectures

Chuck DeSylva (Intel Corporation)
Game Programming Gems 5, 2005.

Parallel AI Development with PVM

Michael Ramsey (2015 Inc)
Game Programming Gems 5, 2005.

Beyond A*

Mario Grimani (Xtreme Strategy Games), Matthew Titelbaum (Monolith Productions)
Game Programming Gems 5, 2005.

Introduction to Single-Speaker Speech Recognition

Julien Hamaide
Game Programming Gems 5, 2005.

Advanced Pathfinding with Minimal Replanning Cost: Dynamic A Star (D*)

Marco Tombesi
Game Programming Gems 5, 2005.

Massively Multiplayer Scripting Systems

Jon Parise (Electronic Arts)
Massively Multiplayer Game Development 2, 2005.

A Goal-Based Architecture for Opposing Player AI

Kevin Dill (Blue Fang Games), Denis Papp (TimeGate Studios)
PDF link, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2005.
Abstract: This paper describes a goal-based architecture that provides a single source for all high-level decisions made by AI players in real-time strategy games. The architecture is easily extensible, flexible enough to be rapidly adapted to multiple different games, and powerful enough to provide a good challenge on a random, unexplored map without unfair advantages or visible cheating. This framework was applied successfully in the development of two games at TimeGate Studios: Kohan 2: Kings of War and Axis & Allies.

Agent Architecture Considerations for Real-Time Planning in Games

Jeff Orkin (Monolith)
PDF link, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2005.
Abstract: Planning in real-time offers several benefits over the more typical techniques of implementing Non-Player Character (NPC) behavior with scripts or finite state machines. NPCs that plan their actions dynamically are better equipped to handle unexpected situations. The modular nature of the goals and actions that make up the plan facilitates re-use, sharing, and maintenance of behavioral building blocks. These benefits, however, come at the cost of CPU cycles. In order to simultaneously plan for several NPCs in real-time, while continuing to share the processor with the physics, animation, and rendering systems, careful consideration must be taken with the supporting architecture. The architecture must support distributed processing and caching of costly calculations. These considerations have impacts that stretch beyond the architecture of the planner, and affect the agent architecture as a whole. This paper describes lessons learned while implementing real-time planning for NPCs in F.E.A.R., an AAA first-person shooter shipping for PC in 2005.

Semi-Automated Gameplay Analysis by Machine Learning

Finnegan Southey, Gang Xiao, Robert C. Holte, Mark Trommelen (University of Alberta), John Buchanan (Electronic Arts)
PDF link, Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2005.
Abstract: While presentation aspects like graphics and sound are important to a successful commercial game, it is likewise important that the gameplay, the non-presentational behaviour of the game, is engaging to the player. Considerable effort is invested in testing and refining gameplay throughout the development process. We present an overall view of the gameplay management problem and, more concretely, our recent research on the gameplay analysis part of this task. This consists of an active learning methodology, implemented in software tools, for largely automating the analysis of game behaviour in order to augment the abilities of game designers. The SAGA-ML (semi-automated gameplay analysis by machine learning) system is demonstrated in a real commercial context, Electronic Arts' FIFA'99 Soccer title, where it has identified exploitable weaknesses in the game that allow easy scoring by players.

Ten Fingers of Death: Algorithms for Combat Killing

Roger Smith, Don Stoner (Titan Corporation)
Game Programming Gems 4, 2004.

Third-Person Camera Navigation

Jonathan Stone (Double Fine Productions)
Game Programming Gems 4, 2004.

Narrative Combat: Using AI to Enhance Tension in an Action Game

Borut Pfeifer (Radical Entertainment)
Game Programming Gems 4, 2004.

NPC Decision Making: Dealing with Randomness

Karen Pivazyan (Stanford University)
Game Programming Gems 4, 2004.

An Object-Oriented Utility-Based Decision Architecture

John Hancock (LucasArts)
Game Programming Gems 4, 2004.

A Distributed-Reasoning Voting Architecture

John Hancock (LucasArts)
Game Programming Gems 4, 2004.

Attractors and Repulsors

John M. Olsen (Microsoft)
Game Programming Gems 4, 2004.

Advanced Wall Building for RTS Games

Mario Grimani (Sony Online Entertainment)
Game Programming Gems 4, 2004.

Artificial Neural Networks on Programmable Hardware

Thomas Rolfes
Game Programming Gems 4, 2004.

Tactical Path-Finding Using Stochastic Maps on the GPU

Khanh Phong Ly
ShaderX3, 2004.

Common Game AI Techniques

Steve Rabin (Nintendo of America)
AI Game Programming Wisdom 2, 2003.
Abstract: This article provides a survey of common game AI techniques that are well known, understood, and widely used. Each technique is explained in simplest terms along with references for delving deeper. Techniques include A* Pathfinding, Command Hierarchies, Dead Reckoning, Emergent Behavior, Flocking, Formations, Influence Mapping, Level-of-Detail AI, Manager Task Assignment, Obstacle Avoidance, Scripting, State Machines, Stack-Based State Machines, Subsumption Architectures, Terrain Analysis, and Trigger Systems.

Promising Game AI Techniques

Steve Rabin (Nintendo of America)
AI Game Programming Wisdom 2, 2003.
Abstract: This article provides a survey of promising game AI techniques that are on the forefront of game AI. Each technique is explained in simplest terms along with references for delving deeper. Techniques include Bayesian Networks, Blackboard Architectures, Decision Tree Learning, Filtered Randomness, Fuzzy Logic, Genetic Algorithms, N-Gram Statistical Prediction, Neural Networks, Perceptrons, Planning, Player Modeling, Production Systems, Reinforcement Learning, Reputation Systems, Smart Terrain, Speech Recognition and Text-to-Speech, and Weakness Modification Learning.

New Paradigms in Artificial Intelligence

Dale Thomas (AILab, University of Zürich)
AI Game Programming Wisdom 2, 2003.
Abstract: This article introduces some new ideas in the field of Artificial Intelligence (AI). Many researchers are looking more toward nature for inspiration, finding many useful design solutions to the problem of behaving in a dirty, noisy world. While traditional AI techniques (OldAI) have had much success in formal domains, such as chess, they often do not scale well and are sometimes impossible to apply in less discrete domains.

A better understanding of the techniques inspired by natural intelligence (NewAI), in addition to OldAI techniques, will give the AI designer a much more complete toolbox. It will allow agents to be designed to behave more naturally, and give a better understanding of why they fail in particular situations, leading to more believable motions and behaviors in games.

Artificial Stupidity: The Art of Intentional Mistakes

Lars Lidén
AI Game Programming Wisdom 2, 2003.
Abstract: What makes a game entertaining and fun does not necessarily correspond to making its opponent characters smarter. The player is, after all, supposed to win. However, letting a player win because of badly programmed artificial intelligence is unacceptable. Fun can be maximized when the mistakes made by computer opponents are intentional. By finely tuning opponents' mistakes, one can prevent computer opponents from looking dumb while ensuring that the player is still capable of winning. Additionally, by catching, identifying, and appropriately handling genuine problems with an AI system, one can turn situations in which computer opponents would otherwise look dumb into entertainment assets. Surprisingly, many game developers pay scant attention to such ideas. Developers' efforts are often so concentrated on making their computer opponents smart that they neglect to adequately address how the AI makes the game fun.

Arcade AI Doesn't Have to Be Dumb

Steven Woodcock (GameAI.com)
AI Game Programming Wisdom 2, 2003.
Abstract: Good game AI is tricky to write no matter what your resources are. When you're faced with limited CPU and RAM, such as with an arcade game or on a handheld, it can be nearly impossible. Arcade AI Doesn't Have to Be Dumb covers various techniques used in the development of the Sega arcade game Behind Enemy Lines that helped give its AIs a bit more spontaneity and seeming intelligence than is found in most shooters, while not using up much memory or CPU in the process.

The Statistics of Random Numbers

James Freeman-Hargis
AI Game Programming Wisdom 2, 2003.
Abstract: Random numbers are used heavily in artificial intelligence and in games in general. To ignore their potential is to make the game predictable and boring, yet using them incorrectly can be just as bad as ignoring them outright. Understanding how random numbers are generated, along with their limitations and capabilities, can remove many of the difficulties of using them in your game. This article offers insight into random numbers, their generation, and methods to separate good ones from bad.

Filtered Randomness for AI Decisions and Game Logic

Steve Rabin (Nintendo of America)
AI Game Programming Wisdom 2, 2003.
Abstract: Conventional wisdom suggests that the better the random number generator, the more unpredictable your game will be. However, according to psychology studies, true randomness over the short term often looks decidedly unrandom to humans. This article shows how to make random AI decisions and game logic look more random to players, while still maintaining strong statistical randomness. Full source code, ready to drop into your game, is supplied on the book's CD-ROM.
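
As one illustrative filtering rule (the book's code applies several such rules and is supplied on its CD-ROM), the sketch below rerolls a coin flip that would extend a run past a chosen length; it is an assumption-laden sketch, not the book's source.

    #include <random>

    class FilteredCoin {
        std::mt19937 rng{std::random_device{}()};
        std::bernoulli_distribution flip{0.5};
        int lastValue = -1;
        int runLength = 0;
        int maxRun;
    public:
        explicit FilteredCoin(int maxRunLength = 3) : maxRun(maxRunLength) {}

        bool next() {
            bool v = flip(rng);
            if (int(v) == lastValue && runLength >= maxRun)
                v = !v;                    // break up runs players would notice
            if (int(v) == lastValue) ++runLength;
            else { lastValue = int(v); runLength = 1; }
            return v;
        }
    };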

Search Space Representations

Paul Tozour (Retro Studios)
AI Game Programming Wisdom 2, 2003.
Abstract: Navigation in games is about much more than the search algorithm used. An equally important (and often overlooked) consideration is the way the game represents the game world to allow agents to perform a search on it (the "search space representation"). Different games have used nearly every kind of search space representation imaginable. This article discusses the relative strengths and weaknesses of square and hexagonal grids, quadtrees, corner graphs, waypoint graphs, circle/cylinder-based waypoint graphs, space-filling volumes, and triangle-based and N-sided-convex-poly-based navigation meshes for navigating in different types of games. We also discuss additional issues that may change the relative merits of different representations, such as the different movement capabilities of different units and the need to interact with a local pathfinding / dynamic obstacle avoidance system.

Inexpensive Precomputed Pathfinding Using a Navigation Set Hierarchy

Mike Dickheiser (Red Storm Entertainment)
AI Game Programming Wisdom 2, 2003.
Abstract: The increasing use of precomputed navigation data in today's computer games has led developers to experience both the joys of lightning-fast best path determination and the agony of the memory cost associated with storing all that information. Often, the memory requirements are prohibitive - especially on console platforms - and much slower traditional solutions are required. In this article we present a hierarchical scheme that retains virtually all of the processing speed of the typical precomputed solutions, while dramatically reducing the memory requirements.

Path Look-up Tables - Small is Beautiful

William van der Sterren (CGF-AI)
AI Game Programming Wisdom 2, 2003.
Abstract: The fastest way to "find" a path from waypoint A to B is not to search. It is much faster to look up a path from a pre-computed table. Being able to find paths ten to two hundred times faster than with A* may make a big difference. This frees up CPU budget for other AI decisions and allows the use of paths and travel times in a much larger portion of the AI's reasoning. However, path lookup tables are not without disadvantages. The amount of memory required for the tables often prohibits their use for anything other than small levels.

This article discusses optimizations of path lookup tables and takes a look at two designs that offer the performance benefits at lower cost. First, a path lookup matrix using indices, which consumes only one fourth of the memory of a traditional path lookup table. Second, an area-based path lookup table, which consumes even less and scales much better, at the cost of a more complex lookup.
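
The basic lookup such tables provide can be sketched as follows, assuming a dense next-hop matrix where nextHop[a][b] stores the first waypoint on the path from a to b (a simplification of the index-based and area-based designs discussed above):

    #include <vector>

    // Recover a full path by repeated table lookups; no search is performed.
    std::vector<int> lookupPath(const std::vector<std::vector<int>>& nextHop,
                                int from, int to)
    {
        std::vector<int> path{from};
        while (from != to) {
            from = nextHop[from][to];
            if (from < 0) return {};      // no route between these waypoints
            path.push_back(from);
        }
        return path;
    }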

An Overview of Navigation Systems

Alex J. Champandard (AI Depot)
AI Game Programming Wisdom 2, 2003.

Jumping, Climbing, and Tactical Reasoning: How to Get More Out of a Navigation System

Christopher Reed, Benjamin Geisler (Raven Software / Activision)
AI Game Programming Wisdom 2, 2003.
Abstract: Few AI related systems are more common and pervasive in games than character navigation. As 3D game engines become more and more complex, characters will look best if they too adapt with equally complex behavior. From opening a door, to hopping over an errant boulder and crouching behind it, keeping AI tied to the environment of your game is often one of the most difficult and important challenges.

Typically these complex behaviors are handled by scripts or a hand coded decision maker. However, we will show that the points and edges within a navigation system are a natural place to store environment specific information. It is possible to automatically detect many properties about the area around a point or edge. This approach allows an AI character to make use of embedded environment information for tactical reasoning as well as low level animation and steering.

Hunting Down the Player in a Convincing Manner

Alex McLean (Pivotal Games Ltd.)
AI Game Programming Wisdom 2, 2003.
Abstract: This article is concerned with how to make a game character convincingly hunt or search towards a goal. Gamers expect intelligent behavior from opponents but sometimes it's all too easy to let the AI cheat a little too much. In order to bring about believable searching behavior it is often not sufficient to simply route a game character directly towards its goal; the path will be too direct, too contrived and generally afford little in the way of gameplay possibilities. We must ensure that the character explores and looks like it's trying to find its goal by a process of search rather than direct, shortest-path route following. This article shows how to do this effectively and with low processing cost. The end result is convincing searching and/or hunting behavior that gradually homes in on a goal.

Simple parameters are available to control how quickly goal discovery is likely to happen and also the directness of the resultant path. The method assumes the existence of a working pathfinding/routing system with the described technique being equally suited to 2D and 3D environments. The discussion will show the benefits and scope of indirect paths in terms of the opportunities offered for gameplay, perceived character intelligence and believability.

Avoiding Dynamic Obstacles and Hazards

Geraint Johnson (Computer Artworks Ltd.)
AI Game Programming Wisdom 2, 2003.
Abstract: Static obstacle avoidance is, barring efficiency considerations, a solved problem in games. The A* algorithm is generally used to search a graph data structure representing the navigable terrain in the level to find a route to a goal. However, many game agents still cope badly with dynamic obstacles encountered along the route, often relying entirely on collision code to get them out of trouble. Bumping into entities not only looks unintelligent, it can also have negative game-play implications, especially if the entity is a hazard.

This article outlines a pragmatic approach to solving this problem at the level of short-range movement. Inspired by flocking algorithms, the method involves taking an agent's desired velocity and adding "repulsion" vectors from nearby entities in the agent's memory. The resulting velocity will tend to send the agent around dynamic obstacles and hazards. A nice feature is that two agents on a collision course will intelligently sidestep in opposite directions in order to avoid each other. Moreover, the situation in which a short-range destination is completely blocked by an entity is detected early, so that a new long-range route can be found well before a collision has taken place. The approach is fast and produces very convincing avoidance behavior.

Intelligent Steering Using PID Controllers

Euan Forrester (Electronic Arts Black Box)
AI Game Programming Wisdom 2, 2003.
Abstract: In order to achieve the realism demanded by many of today's games, physics simulations have become more complex and accurate. Although realistic physics simulations are often rewarding for human players to control, they can be frustrating from an AI programmer's perspective. As these simulations become more complex, the effects of a given input to the system become less clear, and it becomes more difficult to write a simple if...then...else logic tree to cope with every possible circumstance. Thus, new methods of controlling objects operating under these rules must be developed.

In the context of game AI, the primary use for such control is in the steering of objects operating under the constraints of a physics system. The article will detail a solution to this problem by applying an engineering algorithm known as a Proportional-Integral-Derivative (PID) Controller that has been used for over 50 years. The article comes with full source code to a demo that lets you interactively play with the PID variables that control a rocket steering toward a moving target.

An AI Approach to Creating an Intelligent Camera System

Phil Carlisle (Team17 Software Ltd.)
AI Game Programming Wisdom 2, 2003.
Abstract: In this article, we will attempt to outline one method of implementing a camera system capable of handling a diverse and dynamic three-dimensional environment. We will detail the approaches taken during development of a to-be-released title, outlining the issues we encountered and how these were overcome.

Constraining Autonomous Character Behavior with Human Concepts

Jeff Orkin (Monolith Productions)
AI Game Programming Wisdom 2, 2003.
Abstract: A current trend in Game AI is the move from scripted to autonomous character behavior. Autonomous behavior offers several benefits. Autonomous characters can handle unexpected events that a script might not have anticipated, producing emergent gameplay. Level designers can focus on creating worlds packed with opportunities for characters to showcase their behaviors, rather than getting bogged down scripting the actions of individual characters. Various articles have described how to design goal-based autonomous behavior, where characters select the most relevant behavior based on their desires, sensory input, and proximity to objects of interest. In theory it sounds simple enough to drop a character with a palette of goals into a level filled with tagged objects, and let him take care of himself. In practice, there are many additional factors that need to be considered to get believable behavior from an autonomous character. This article presents a number of factors that should be considered as inputs into the relevancy calculation of a character's goals, in order to produce the most believable decisions. These factors are based on findings encountered during the development of Monolith Production's No One Lives Forever 2: A Spy in H.A.R.M.'s Way.

Simple Techniques for Coordinated Behavior

Jeff Orkin (Monolith Productions)
AI Game Programming Wisdom 2, 2003.
Abstract: There are a number of common problems that arise when developing AI systems for combat with multiple enemies. Agents block each other's line of fire. Agents follow the exact same path to a target, and often clump up at a destination. Some agents are oblivious to a threat while others nearby are getting shot or even killed. Multiple agents decide to do the exact same action or animation simultaneously. It would seem that a group behavior layer of complex higher-level reasoning would be needed to solve these problems. In fact, these problems can be solved with simple techniques that use existing systems and leverage information that individual agents already have. This article describes simple techniques that can be used to solve coordination problems, using examples from Monolith Productions' "No One Lives Forever 2: A Spy in H.A.R.M.'s Way."

Team Member AI in an FPS

John Reynolds (Creative Asylum Ltd.)
AI Game Programming Wisdom 2, 2003.
Abstract: The use of teammates has become very popular among the first and third person action genres in recent years, in both the simulation and arcade sub-genres. However, implementing convincing teammates who will not run in your path while you are shooting, nor disappear into a far corner of the map, is quite an involved process. By implementing some key rules it is possible to create teammates who can usefully back you up in the thick of the action, follow instructions reliably, and survive with you until the end of the game.

Applying Goal-Oriented Action Planning to Games

Jeff Orkin (Monolith Productions)
AI Game Programming Wisdom 2, 2003.
Abstract: A number of games have implemented characters with goal directed decision-making capabilities. A goal-directed character displays some measure of intelligence by autonomously deciding to activate the behavior that will satisfy the most relevant goal at any instance. Goal-Oriented Action Planning (GOAP) is a decision-making architecture that takes the next step, and allows characters to decide not only what to do, but how to do it. A character that formulates his own plan to satisfy his goals exhibits less repetitive, predictable behavior, and can adapt his actions to custom fit his current situation. In addition, the structured nature of a GOAP architecture facilitates authoring, maintaining, and re-using behaviors. This article explores how games can benefit from the addition of a real-time planning system, using problems encountered during the development of Monolith Production's No One Lives Forever 2: A Spy in H.A.R.M.'s Way to illustrate these points.
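
As a small illustration of the kind of representation a GOAP planner searches over, the sketch below encodes boolean world properties as bits; the bitmask layout and names are assumptions, not Monolith's implementation, and the planner's search itself (typically A* over world states) is not shown.

    #include <cstdint>
    #include <string>

    using WorldState = std::uint32_t;   // each bit is one boolean world property

    struct Action {
        std::string name;
        WorldState preconditions = 0;   // bits that must be set for the action to run
        WorldState effects = 0;         // bits that become set once the action completes
        float cost = 1.f;

        bool applicable(WorldState state) const {
            return (state & preconditions) == preconditions;
        }
        WorldState apply(WorldState state) const { return state | effects; }
    };

    // A goal is satisfied when every bit it requires is present in the state.
    inline bool satisfied(WorldState state, WorldState goal) {
        return (state & goal) == goal;
    }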

Hierarchical Planning in Dynamic Worlds

Neil Wallace (Black & White Studios / Lionhead Studios)
AI Game Programming Wisdom 2, 2003.

Goal Directed Behavior using Composite Tasks

Eric Dybsand (Glacier Edge Technology)
AI Game Programming Wisdom 2, 2003.
Abstract: This article introduces the reader to goal-directed behavior and offers several examples of games that have used it to increase the believability of their agents. The article then goes on to discuss the implementation of the Composite Task concept that was designed and developed to provide goal-directed behavior for the agents in a military tactical combat training simulator. Finally, the simulator itself is briefly discussed and references to additional information on goal-directed behavior are provided.

Simplified Animation Selection

Chris Hargrove (Gas Powered Games)
AI Game Programming Wisdom 2, 2003.
Abstract: This article describes an animation selection mechanism for determining the active animations of an arbitrary number of animation channels, based on a narrow set of discrete inputs and events, in a manner that's easy to manipulate for both artists and AI programmers. The system allows for just a few simple inputs (such as a character's cardinal movement direction, posture, weapon type, etc) and isolated triggered events (such as waves or taunts) to determine the entire animation state of a character at a given time, even in the presence of hundreds of animations.

The animation channels, input names and values, and control-flow "actions" are all configurable via a simple artist-friendly scripting language, allowing the artist to take nearly full control over the animation selection pipeline. In addition, the AI programmer's job is made easier due to the simplified conduit between a character's abstract behavior and its animation inputs. The result is an animation selection scheme that gives the artist a level of control usually only available to programmers, without losing the simplicity and flexibility of other data-driven approaches.

Pluggable Animations

Chris Hargrove (Gas Powered Games)
AI Game Programming Wisdom 2, 2003.
Abstract: This article discusses an extensible plug-in based animation pipeline that combines the handling of pre-built and dynamically-generated animation facilities into a single unified mechanism. This allows artists and AI programmers to take advantage of procedural animation effects in the same manner as regular animations, adding an additional level of flexibility and control to the look of your characters.

Animations are created based on a set of "abilities" that activate and deactivate at different points in time within the animation's length. These abilities can perform any number of effects on the character, from a simple application of pre-built animation frame data, to a complex on-the-fly Inverse Kinematics operation using "satellite" points in space provided by an external source, to esoteric visual effects like bone attachment manipulation and vertex deformation. The abilities themselves are provided as plug-ins, and new abilities can be added during the development process (or in some cases even afterward, by "mod" authors) without changing the core of the animation pipeline. The process of creating these kinds of animations can be made friendly to artists without much effort, via a simple GUI dialog box based primarily around a single list view control.

Intelligent Movement Animation for NPCs

Greg Alt (Surreal Software), Kristin King
AI Game Programming Wisdom 2, 2003.
Abstract: This article describes an intelligent movement animation system for non-player characters (NPCs). This system is used in the PC and PS2 versions of Fellowship of the Ring and two upcoming games from Surreal Software. First, the article briefly explains steering behaviors and animation systems. Next, it describes the middle layer between them. This layer includes a system for NPC movement, a movement animation behavior, and an animation controller. The movement animation behavior ensures that the animation being played and the way it is being played are appropriate, given the NPC's current movement. The animation controller provides a simple high-level interface to the underlying animation system. Finally, the article also gives some tips on gotchas that can come up during implementation of the middle layer and some ideas for further enhancements.

The Ultimate Guide to FSMs in Games

Ryan Houlette, Dan Fu (Stottler Henke Associates, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: The intention of this article is to give a comprehensive overview of FSMs in games. This article examines various FSM architectures and discusses the systems that surround them. The FSM is examined in terms of game integration, update schemes, and efficiency/optimization. Extensions are discussed for adding state functionality (OnEnter, OnExit), building hierarchical and fuzzy state machines, and coordinating multiple FSMs. Various FSM schemes are compared and contrasted.

Stack-Based Finite-State Machines

Paul Tozour (Retro Studios)
AI Game Programming Wisdom 2, 2003.
Abstract: Finite-state machines are a very popular technique for developing game AI, but they lack any intrinsic capability for remembering the way a client has traversed the state graph. We discuss a technique for extending the traditional finite-state machine with a state stack in order to allow it to remember previous states, thereby allowing AI agents to resume the execution of behaviors that were previously interrupted. When modeling behaviors with an FSM, this often allows us to create much simpler and more concise state machines than a standard FSM would permit.
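
A minimal C++ sketch of the idea (assumed class and state names, not the article's code): interrupting behaviors are pushed onto a stack of states and popped when they finish, so the agent resumes whatever it was doing before.

// Stack-based FSM sketch: the topmost state is the active one; popping it
// automatically resumes the state underneath.
#include <functional>
#include <iostream>
#include <stack>
#include <string>

struct State {
    std::string name;
    std::function<void()> update;   // called once per tick while on top
};

class StackFSM {
public:
    void Push(State s) { states_.push(std::move(s)); }
    void Pop()         { if (!states_.empty()) states_.pop(); }
    void Update()      { if (!states_.empty()) states_.top().update(); }
private:
    std::stack<State> states_;
};

int main() {
    StackFSM fsm;
    fsm.Push({"Patrol", [] { std::cout << "patrolling\n"; }});
    fsm.Update();                    // patrolling
    fsm.Push({"TakeCover", [] { std::cout << "taking cover\n"; }});
    fsm.Update();                    // interruption: taking cover
    fsm.Pop();                       // threat gone...
    fsm.Update();                    // ...resume patrolling
}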

Implementing a Data-Driven Finite-State Machine

Gilberto Rosado (DigiPen Institute of Technology)
AI Game Programming Wisdom 2, 2003.
Abstract: This article describes an implementation of a data-driven Finite State Machine (FSM) class. Using a data-driven design allows quick tweaking of state transition logic without having to recompile any source code, as well as the ability to associate different behaviors with different AI characters through external data files. The FSM class presented in this article instantiates FSMs as defined in external data files, automates the evaluation of state transition logic, and provides the functionality to define functions to be executed when entering, updating, and exiting states.
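
The following hypothetical C++ sketch shows the flavor of such a system, with the transition table held as plain data; in a real data-driven setup the rows would be parsed from an external file rather than added in code, and the class name DataFSM is invented for illustration.

// Data-driven FSM sketch: transitions are (from-state, event) -> to-state rows,
// so designers can retune behavior without recompiling.
#include <iostream>
#include <map>
#include <string>
#include <utility>

struct Transition { std::string from, event, to; };

class DataFSM {
public:
    explicit DataFSM(std::string initial) : current_(std::move(initial)) {}
    void AddTransition(const Transition& t) { table_[{t.from, t.event}] = t.to; }
    void HandleEvent(const std::string& event) {
        auto it = table_.find({current_, event});
        if (it != table_.end()) current_ = it->second;   // ignore unknown events
    }
    const std::string& Current() const { return current_; }
private:
    std::string current_;
    std::map<std::pair<std::string, std::string>, std::string> table_;
};

int main() {
    DataFSM fsm("Idle");
    fsm.AddTransition({"Idle",   "SeeEnemy",  "Attack"});
    fsm.AddTransition({"Attack", "EnemyDead", "Idle"});
    fsm.HandleEvent("SeeEnemy");
    std::cout << fsm.Current() << "\n";   // Attack
}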

Finite-State Machine Scripting Language for Designers

Eric Yiskis (Sammy Studios)
AI Game Programming Wisdom 2, 2003.
Abstract: AI is often implemented with finite state machines (FSMs) or layers of finite state machines, which are difficult for game designers to edit. Looking at typical AI FSMs, there are design patterns that occur repeatedly. We can use these patterns to make a custom scripting language that is both powerful and approachable. The technique can be further extended into a "stack machine" (pushdown automata) so that characters have better memory of previous behaviors.

A Subsumption Architecture For Character-Based Games

Eric Yiskis (Sammy Studios)
AI Game Programming Wisdom 2, 2003.
Abstract: The Subsumption Architecture was invented in 1986 by Rodney Brooks to give robust real-world behavior to robots. The technique works equally well for the "virtual robots" of the video game world. It cleanly decomposes the implementation of an AI driven character into concurrently executing layers of finite state machines (FSMs). Lower layers take care of immediate goals; upper layers take care of long-term goals. The architecture solves three major problems with character AI: minor setbacks causing a character to lose focus on a long term goal, characters getting stuck on a goal that is no longer relevant, and robust handling of animation and character physics.

An Architecture for A-Life

Nick Porcino (LucasArts)
AI Game Programming Wisdom 2, 2003.
Abstract: This chapter presents Insect AI, a straightforward architecture, notation, and design methodology for artificial life. The principles and techniques are derived from neuroethology, the study of neural control of behavior. Simple computational units are introduced and examined, and the creation of Insect AI agents is demonstrated. Insect AI agents exhibit a number of interesting properties which satisfy the characteristics of motivated behavior as defined in the ethological literature - behaviors can be grouped and sequenced, the agents are goal directed, behavior can change based on the internal state of the agent, and behaviors can persist even after stimuli are removed. A number of agents are created as examples, ranging from a simple light follower to an artificial insect that shows all the characteristics of motivated behavior.

A Flexible Tagging System for AI Resource Selection

Paul Tozour (Retro Studios)
AI Game Programming Wisdom 2, 2003.
Abstract: As game designs increasingly evolve away from linear, scripted gameplay experiences and toward open-ended worlds and gameplay based on emergent behaviors, gameplay has become much less predictable, and it has become increasingly difficult to create content that exactly matches the specific situation the user will experience at any given moment. Although in an ideal world, it would be possible to create content that responds to all of the different possible game states, open-ended game designs present far too many unpredictable situations, and one can never hope to create enough audio or animation content to handle all of them. However, it is possible to fit some of the specifics of the situation some of the time, and create content at varying levels of specificity. We present a flexible tagging system that allows you to create art and audio content across a wide spectrum from the most general to the most specific, along with a simple resource-selection algorithm that allows you to select the most situation-specific piece of content to use in any given situation. We also discuss potential applications of this system for audio and animation assets in detail.
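
As a rough sketch of the kind of resource-selection rule described (assumed structures, not the authors' code), the snippet below picks the candidate asset whose tags are all satisfied by the current situation and which matches the greatest number of tags, falling back to more generic content otherwise.

// Tag-based resource selection sketch: most specific fully-matching asset wins.
#include <iostream>
#include <set>
#include <string>
#include <vector>

struct Asset {
    std::string name;
    std::set<std::string> tags;   // e.g. {"pain", "on_fire"} -- invented tags
};

const Asset* SelectAsset(const std::vector<Asset>& assets,
                         const std::set<std::string>& situation) {
    const Asset* best = nullptr;
    std::size_t bestCount = 0;
    for (const Asset& a : assets) {
        bool allMatch = true;
        for (const std::string& t : a.tags)
            if (!situation.count(t)) { allMatch = false; break; }
        if (allMatch && (best == nullptr || a.tags.size() > bestCount)) {
            best = &a;
            bestCount = a.tags.size();
        }
    }
    return best;   // may be null if even the generic assets fail to match
}

int main() {
    std::vector<Asset> lines = {
        {"pain_generic.wav", {"pain"}},
        {"pain_fire.wav",    {"pain", "on_fire"}},
    };
    const Asset* pick = SelectAsset(lines, {"pain", "on_fire", "female"});
    std::cout << pick->name << "\n";   // pain_fire.wav (most specific match)
}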

Motivational Graphs: A New Architecture for Complex Behavior Simulation

Julien Devade, Dr. Jean-Yves Donnart, Dr. Emmanuel Chiva, Stéphane Maruéjouls (MASA Group)
AI Game Programming Wisdom 2, 2003.
Abstract: Recent research in cognitive science and ethology has led to the development of biologically-inspired autonomous behavior models. Such models differ from classical AI models since they account for both internal state and environmental constraints. They define a new generation of systems, closer to Artificial Life and situated cognition than to classical AI.

In the present article, we introduce a new architecture based on such models. Applied to game development, this architecture enables designers and developers to easily describe, model and implement realistic autonomous software agents. This architecture, called a motivational graph, is a hybrid between rule-based approaches and connectionist systems. In particular, it uses concepts such as activity propagation to trigger modules within a hyperconnected graph. In this article, we demonstrate the benefits of this approach: multitasking, opportunism, tradeoff and emergence.

Minimizing Agent Processing in Conflict Desert Storm

Sebastian Grinke (Pivotal Games)
AI Game Programming Wisdom 2, 2003.

Using a Spatial Database for Runtime Spatial Analysis

Paul Tozour (Retro Studios)
AI Game Programming Wisdom 2, 2003.
Abstract: AI developers have employed a number of different techniques for performing spatial reasoning about a game world using precomputed "hints" placed by level designers or automated game-world analysis tools. However, as game worlds increasingly feature larger numbers of AI characters and moveable physically-modeled objects, it becomes increasingly important to model the ways that the dynamic aspects of the ever-changing game world influence an AI's spatial reasoning. We discuss a spatial database technique that allows you to perform spatial reasoning about any number of different factors that can potentially affect an AI agent's reasoning about the game environment and techniques for combining multiple factors together to construct desirability heuristics. A spatial database can also allow you to implicitly coordinate the activities of multiple AI agents simply by virtue of sharing the same data structure.
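
A minimal sketch of the data structure (layer names and layout are invented, not taken from the article): a coarse grid with one float layer per influence type, plus a weighted-sum query that can serve as a desirability heuristic.

// Spatial database sketch: agents stamp influence values into layers and
// score candidate positions by combining layers with per-query weights.
#include <array>
#include <iostream>
#include <vector>

constexpr int kWidth = 32, kHeight = 32;
enum Layer { kOccupancy, kThreat, kCover, kLayerCount };

class SpatialDB {
public:
    SpatialDB() : cells_(kWidth * kHeight * kLayerCount, 0.0f) {}
    float& At(Layer l, int x, int y) {
        return cells_[(l * kHeight + y) * kWidth + x];
    }
    // Weighted sum of all layers at one cell, e.g. to score a cover position.
    float Desirability(int x, int y, const std::array<float, kLayerCount>& w) {
        float sum = 0.0f;
        for (int l = 0; l < kLayerCount; ++l)
            sum += w[l] * At(static_cast<Layer>(l), x, y);
        return sum;
    }
private:
    std::vector<float> cells_;
};

int main() {
    SpatialDB db;
    db.At(kThreat, 10, 12) = 1.0f;   // an enemy influences this cell
    db.At(kCover, 10, 12) = 0.5f;    // but there is some cover here too
    // Negative weight on threat, positive on cover: higher is better.
    std::cout << db.Desirability(10, 12, {0.0f, -1.0f, 2.0f}) << "\n";   // 0
}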

Performing Qualitative Terrain Analysis in Master of Orion 3

Kevin Dill, Alex Sramek (Quicksilver Software, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: One challenge for many strategy game AIs is the need to perform qualitative terrain analysis. By qualitative we mean that the analysis is based on fundamental differences between different types of locations - for instance areas that are visible to our opponents, areas that are impassible, or areas vulnerable to enemy fire. In Master of Orion 3 we identify stars that are inside or outside of our empire's borders, those that are threatened by our opponents, and those that are contested (shared with an opponent). This information is used to identify locations where we need to concentrate our defenses and to help us expand into areas that minimize our defensive needs while maximizing the territory we control.

In this article we will present the algorithms used to make the qualitative distinctions given above and the ways in which the AI uses that information. The lessons we would most like the reader to take away from this article are not the specifics of the algorithms used but rather the thought processes involved in applying qualitative reasoning to terrain analysis. The important questions to address are: what are the qualitative distinctions we should look for, how can we recognize them, and what uses can the AI make of that information. Our algorithms are but a single example of how these questions can be answered.

The Unique Challenges of Turn-Based AI

Soren Johnson (Firaxis Games)
AI Game Programming Wisdom 2, 2003.
Abstract: Writing a turn-based AI presents a number of unique programming and game design challenges. The common thread uniting these challenges is the user's complete control over the game's speed. Players willing to invest extreme amounts of time into micro-management and players looking to streamline their gaming experience via automated decision-making present two very different problems for the AI to handle. Further, the ability to micro-analyze turn-based games makes predictability, cheating, and competitive balance extremely important issues. This article outlines how the Civilization III development team dealt with these challenges, using specific examples to illuminate some practical solutions useful to a programmer tasked with creating an AI for a turn-based game.

Random Map Generation for Strategy Games

Shawn Shoemaker (Stainless Steel Studios)
AI Game Programming Wisdom 2, 2003.
Abstract: While there are numerous articles dedicated to the generation of random maps for games, there is little published information on random maps for strategy games in particular. This subset of map generation presents distinct challenges as evident by the relatively few games that implement them. While the techniques described here can be used to create maps suitable for any type of game, this system is specifically designed to create a variety of successful random maps for real-time strategy games. This article describes the random map generation implementation as found in the RTS game Empire Earth (EE) developed by Stainless Steel Studios.

Transport Unit AI for Strategy Games

Shawn Shoemaker (Stainless Steel Studios)
AI Game Programming Wisdom 2, 2003.
Abstract: Unit AI refers to the micro-level artificial intelligence that controls a specific unit in a game and how that unit reacts to input from the player and the game world. Transports present a particular challenge for unit AI as many units must work together to achieve their common goal, all the while attempting to minimize player frustration. This article discusses the general transport unit AI challenge and a successful solution. Land, air, naval, and building transports (such as fortresses and town centers) will be discussed and a class hierarchy implementation will be suggested. Algorithms for the loading (including the calculation for rendezvous points) and unloading of transports will be presented as well as warnings for particular pitfalls.

This article assumes some sort of finite-state-machine-based unit AI system and is applicable to any game in which there are multiple units in need of transporting. This article details the transport unit AI as found in the Real-Time Strategy (RTS) game Empire Earth (EE) developed by Stainless Steel Studios.

Wall Building for RTS Games

Mario Grimani (Sony Online Entertainment)
AI Game Programming Wisdom 2, 2003.
Abstract: Most real-time strategy games include walls or similar defensive structures that act as barriers for unit movement. Having a general-purpose wall-building algorithm increases the competitiveness of computer opponents and provides a new set of options for random mission generation. The article discusses a wall-building algorithm that uses a greedy methodology to build a wall that fits the definition, protects the desired location, and meets the customizable acceptance criteria. The algorithm takes advantage of the natural barriers and map edges to minimize the cost of building a wall. The algorithm discussion focuses on the importance of traversal and heuristic functions, implementation details, and various real-world problems. Advanced topics such as minimum/maximum distance requirements, placement of gates, and unusual wall configurations are elaborated on. Full source code and a demo are supplied.

Strategic Decision-Making with Neural Networks and Influence Maps

Penny Sweetser (School of ITEE, University of Queensland)
AI Game Programming Wisdom 2, 2003.
Abstract: Influence maps provide a strategic perspective in games that allows strategic assessment and decisions to be made based on the current game state. Influence maps consist of several layers, each representing different variables in the game, layered over a geographical representation of the game map. When a decision needs to be made by the AI player, some or all of these layers are combined via a weighted sum to provide an overall idea of the suitability of each area on the map for the current decision. However, the use of a weighted sum has certain limitations.

This article explains how a neural network can be used in place of a weighted sum, to analyze the data from the influence map and make a strategic decision. First, this article will summarize influence maps, describe the current application of a weighted sum and outline the associated advantages and disadvantages. Following this, it will explain how a neural network can be used in place of a weighted sum and the benefits and drawbacks associated with this alternative. Additionally, it will go into detail about how a neural network can be implemented for this application, illustrated with diagrams.

Multi-Tiered AI Layers and Terrain Analysis for RTS Games

Tom Kent (Freedom Games, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: RTS games tend to handle soldier AIs individually, giving each unit specific tasks from the computer player. Creating complicated, cooperative tactics is impossible for such systems without an immense coding effort. To develop complex, large-scale plans, a mechanism is needed to reduce the planning devoted to the individual units. Some games already collect individual soldiers into squads. This reduces the planning necessary by a factor of ten, as one hundred soldiers can be collected into ten squads. However, this concept can be taken further, with squads collected into platoons, platoons into companies, and so on. The versatility such groupings give an AI system is immense. This article will explore the implementation of a multi-tiered AI system in RTS-type games, including the various AI tiers, a set of related maps used by the AI tiers and an example to illustrate the system.

Designing a Multi-Tiered AI Framework

Michael Ramsey (2015, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: The MTAIF allows an AI to be broken up into three concrete layers: strategic, operational, and tactical. This allows the AI programmer to have various AIs focus on specific tasks while maintaining a consistent overall focus. The MTAIF allows for the strategic layer to be focused exclusively on matters that can affect an empire on a holistic scale, while at the operational level the AI is in tune with reports from the tactical level. A differing factor from many other architectures is that the MTAIF does not allow decisions to be made on a tactical scale that would violate the overall strategic policies. This in turn forces high-level strategic policies to be enforced in tactical situations, without the AI devolving into a purely reactionary AI.

Racing Vehicle Control using Insect Intelligence

Alex Darby (FreeStyleGames Ltd.)
AI Game Programming Wisdom 2, 2003.
Abstract: Despite their simplicity and inability to adapt by learning, insects and other simple animals manage to survive and navigate in the complex and unpredictable real world very well. Since evolution tends to find very efficient solutions to the problems faced by living creatures, many of the mechanisms used by simple animals are relatively efficient, and they often have a potential for behavioral richness far beyond the extra processing power it takes to model them.

This article presents a robust and extensible system architecture which is based around emergent behaviors, and several techniques which utilize principles derived from the results of academic AI research into modeling insect level intelligence - in particular vision based steering behavior utilizing simple compound eye-like sensors.

Fast and Efficient Approximation of Racing Lines

John Manslow
AI Game Programming Wisdom 2, 2003.
Abstract: Racing game AI has developed to the point where it is able to challenge even the best players. To do this, an AI usually relies heavily on information stored along the length of a track, which provides it with instructions on how it should approach upcoming sections. Critically, this information is derived during a game's development, almost always from the way in which human players drive each track, and will therefore not be available for random or player-created tracks. This prevents random track generators and track editors from being shipped with many racing games, because it would not be possible to provide a challenging AI that could compete against the player on all the resulting tracks. This article presents an algorithm that can be used to quickly and efficiently derive approximations to racing lines, thus providing information vital to an AI. A demonstration implementation of the algorithm in C++ is included with the article.

The Art of Surviving a Simulation Title

Dr. Brett Laming (Argonaut Sheffield)
AI Game Programming Wisdom 2, 2003.
Abstract: This article aims to simplify the task of writing simulation AI by providing a number of guidelines. These were adopted after working on the space simulation, "Independence War 2" (I-War 2) and continue to bring success to the futuristic racing game, "Powerdrome". The guidelines cover many aspects from higher-level design and keeping it simple to diagnostic support and impressing the user. While they far from guarantee a successful and stress free implementation, they at least put the developer on the right path.

Dead Reckoning in Sports and Strategy Games

François Dominic Laramée
AI Game Programming Wisdom 2, 2003.
Abstract: Dead reckoning is a set of techniques used to calculate the motion of objects not entirely within an agent's control. This article explores the equations required to implement dead reckoning, and shows how it can apply in a variety of game contexts, for example the calculation of the optimal trajectory for a pass or a shot in a sports simulation, as well as multiple wargame problems.
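
For the sports case, a typical dead-reckoning calculation is solving for the time at which a constant-speed ball can meet a receiver moving at constant velocity. The C++ sketch below (invented names, flat 2D field assumed, not taken from the article) solves the resulting quadratic.

// Interception sketch: solve |pos + vel*t - origin| = ballSpeed * t for t.
#include <algorithm>
#include <cmath>
#include <iostream>

struct Vec2 { float x, y; };

// Returns the earliest positive interception time, or a negative number if
// no interception is possible at this ball speed.
float InterceptTime(Vec2 origin, Vec2 pos, Vec2 vel, float ballSpeed) {
    float rx = pos.x - origin.x, ry = pos.y - origin.y;
    float a = vel.x * vel.x + vel.y * vel.y - ballSpeed * ballSpeed;
    float b = 2.0f * (rx * vel.x + ry * vel.y);
    float c = rx * rx + ry * ry;
    if (std::fabs(a) < 1e-6f)                 // degenerate: linear equation
        return (std::fabs(b) < 1e-6f) ? -1.0f : -c / b;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return -1.0f;            // ball too slow to catch up
    float t1 = (-b - std::sqrt(disc)) / (2.0f * a);
    float t2 = (-b + std::sqrt(disc)) / (2.0f * a);
    if (t1 > 0.0f && t2 > 0.0f) return std::min(t1, t2);
    return std::max(t1, t2);                  // may still be negative
}

int main() {
    // Receiver 10 m to the right, running "up" at 2 m/s; pass speed 5 m/s.
    float t = InterceptTime({0, 0}, {10, 0}, {0, 2}, 5.0f);
    std::cout << "intercept after " << t << " s\n";
    // Aim point: receiver position plus receiver velocity * t.
}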

Building a Sports AI Architecture

Terry Wellmann (High Voltage Software, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: This article focuses on the sport of basketball; however, the concepts presented in the article are applicable to a wide variety of games. The goal of the article is to give the reader a solid understanding about the things to consider when designing an architecture for a sports game. The article also describes the concepts and critical components necessary to successfully design an AI system that is easy to understand, build, maintain and extend.

The article covers, in detail, the concepts of agent plans, team management, and agent AI, and touches on the critical points of agent mechanics. The architecture presented in the article serves as the foundation for Microsoft's NBA Inside Drive franchise and has been used in three shipped versions of the game.

Optimized Script Execution

Alexander Herz (Lionhead Studios Ltd.)
AI Game Programming Wisdom 2, 2003.
Abstract: The slow speed with which script languages are executed (compared to native code) places many limitations on a script language's potential applications. Previously only infrequently executed code placed outside of the game's inner loops has been deemed suitable for scripting, and for this reason script languages have typically only been used for story telling or customizable event-handling purposes.

Using optimization techniques presented in this article, it is possible to increase script execution efficiency to near-native performance levels, enabling the use of scripting in the game's core logic, or even in a high performance 3D rendering system. The flexibility gained from switching to script-based logic means that even modifying a program's innermost mechanics is trivial, and does not come with the performance penalties one would expect.

Three stages of script execution are individually examined, and optimizations for each are presented. The techniques and principles presented can be easily applied to any existing scripting engine.

Advanced Script Debugging

Alexander Herz (Lionhead Studios Ltd.)
AI Game Programming Wisdom 2, 2003.
Abstract: Poor or missing script debugging facilities are among the principal reasons for scripts not being fully exploited in a development environment. Often, errors encountered during script execution result in crashes of the game or a "screen of death" style representation of the virtual machine's internal state just after the crash has occurred.

In-depth knowledge of the virtual machine is required to interpret such information, and often the virtual machine's source code needs to be examined in order to identify the problem, which may have been caused by a virtual machine command executed long before the crash. Because all external developers (including the mod community) and most in-house script programmers lack this in-depth information, they will restrict their programming style to simple constructs that can be fixed using a trial and error process in case of a problem. Therefore even the most powerful script languages are doomed to be misused if they do not support proper debugging mechanisms.

This article shows how to incorporate known debugging capabilities from common development environments into your script system, giving your script designers and the mod community the ability to create complex scripts and use the language to its full extent, while also shortening the development period because of improved debugging capabilities for scripts.

Adding Error Reporting to Scripting Languages

Jeff Orkin (Monolith Studios)
AI Game Programming Wisdom 2, 2003.
Abstract: Custom scripting languages are a controversial game development tool. Scripting languages empower non-programmers by moving game AI logic out of the C++ code. While this empowerment certainly comes with some risks, the benefits are that additional team members can create behaviors, designers can tweak AI more directly, and the AI logic is more accessible to the mod community. The most common complaint about scripting languages is that they are difficult to debug. This concern is exacerbated if non-programmers intend to write scripts. If the scripting language compiler or interpreter only gives feedback like "syntax error," non-programmers are not going to get very far. Fortunately, this problem is easily solved. The same techniques used to define the grammar of valid syntax can be used to identify and report scripting errors in plain English. This article describes how to harness the power of Lex and Yacc to generate meaningful errors when compiling scripts. The article includes C++, Lex, and Yacc code for a simplistic language called Simple.

Empowering Designers: Defining Fuzzy Logic Behavior through Excel-Based Spreadsheets

P.J. Snavely (Sony Computer Entertainment America)
AI Game Programming Wisdom 2, 2003.
Abstract: Putting game development back into the hands of the game's designers is critical to keeping a project on schedule. How does that happen? What is the easiest way to let a game designer work on their own with a minimum amount of interaction from a technical source? Using Visual Basic for Applications and some basic Excel spreadsheets, it is possible to design an interface that accomplishes both, with the added benefit of letting finished code stay finished.

A Modular Camera Architecture for Intelligent Control

Sandeep Kharkar (Microsoft Corporation)
AI Game Programming Wisdom 2, 2003.
Abstract: Cameras play a vital role in the user experience of any game. A robust camera solution can make the difference between a game that is awkward to play and a game that plays smoothly and feels great. Unfortunately, cameras tend to be a low priority item in many game development schedules and the effort is limited to the point where the cameras stop being a nuisance. One of the reasons that the efforts stop early is the lack of a solid architecture that allows rapid, data driven experimentation with camera behaviors.

This article presents a component based camera architecture that allows non-programmers to take over the development of cameras at the point where they make the transition between technical coding and creative effort. The architecture will demonstrate the use of common AI techniques to enhance the robustness and creativity of the camera solution for any game. The techniques presented in the article will primarily benefit games that have a third-person perspective, but will also provide useful tips for other types of games.

Player Modeling for Adaptive Games

Ryan Houlette (Stottler Henke Associates, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: This article describes a lightweight, flexible machine learning technique we call player modeling, designed to help add adaptivity to your game AI. The basic idea is simple: the game maintains a profile of each player that captures the skills, weaknesses, preferences, and other characteristics of that player. This model is updated by the game as it interacts with the player. In turn, the game AI can query the player model to determine how best to adapt its behavior to that particular player - for example, by asking which of several possible tactics will be most challenging to the player. Using player modeling, a game's AI can adapt both during the course of a single play as well as over multiple sessions, resulting in a computer opponent that changes and evolves with time to suit the player.

The article first defines the player model concept in more detail and then discusses strategies for designing a model to suit your game. It then presents a basic player model implementation. Subsequent sections describe how to actually integrate the modeling system with your game, including both how to update the model and how to make use of the information that it contains. The remainder of the article presents several advanced concepts, including a hierarchical player model, alternate model update methods, and other uses for the player model.
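
A minimal sketch of the basic idea (trait names and update rule are invented for illustration, not taken from the article): each observation nudges a per-trait score toward the observed evidence, and the AI later queries the profile to pick the tactic the player handles worst.

// Player-model sketch: per-trait scores in [0,1] updated by a small learning rate.
#include <iostream>
#include <map>
#include <string>

class PlayerModel {
public:
    // Move the trait toward `observation` (0..1) by the learning rate.
    void Observe(const std::string& trait, float observation, float rate = 0.1f) {
        auto it = traits_.find(trait);
        if (it == traits_.end())
            it = traits_.emplace(trait, 0.5f).first;   // neutral prior
        it->second += rate * (observation - it->second);
    }
    float Get(const std::string& trait) const {
        auto it = traits_.find(trait);
        return it == traits_.end() ? 0.5f : it->second;
    }
private:
    std::map<std::string, float> traits_;
};

int main() {
    PlayerModel model;
    model.Observe("handles_flanking", 0.0f);   // player got flanked successfully
    model.Observe("handles_flanking", 0.0f);
    model.Observe("handles_snipers", 1.0f);    // player countered the sniper
    // Prefer the tactic the player is weakest against.
    std::cout << (model.Get("handles_flanking") < model.Get("handles_snipers")
                      ? "flank" : "snipe") << "\n";   // flank
}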

Constructing a Decision Tree Based on Past Experience

Dan Fu, Ryan Houlette (Stottler Henke Associates, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: In recent years, decision trees have gained popularity within the game development community as a practical learning method that can help an AI adapt to a player. Instead of picking from a canned set of reactions to player action, the AI has the opportunity to do something much more powerful: anticipate the player's action before he acts. In this article, we discuss a decision tree learning algorithm called ID3, which constructs a decision tree that identifies the telltale features of an experience to predict its outcome. We then establish ID3's role in Black & White, building on an earlier article in the first edition of AI Game Programming Wisdom. Finally, we consider some important aspects and extensions to the approach, and provide sample code that implements a simple form of ID3.
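
At the heart of ID3 is choosing the feature with the highest information gain. The sketch below (not the article's sample code) computes entropy and gain for boolean features and boolean outcomes.

// ID3 building blocks: entropy of a set of examples and the information gain
// obtained by splitting on one boolean feature.
#include <cmath>
#include <iostream>
#include <vector>

struct Example { std::vector<bool> features; bool outcome; };

float Entropy(const std::vector<Example>& ex) {
    if (ex.empty()) return 0.0f;
    int pos = 0;
    for (const Example& e : ex) pos += e.outcome ? 1 : 0;
    float p = static_cast<float>(pos) / ex.size();
    if (p == 0.0f || p == 1.0f) return 0.0f;
    return -p * std::log2(p) - (1 - p) * std::log2(1 - p);
}

float InformationGain(const std::vector<Example>& ex, std::size_t feature) {
    std::vector<Example> yes, no;
    for (const Example& e : ex) (e.features[feature] ? yes : no).push_back(e);
    float weighted =
        (yes.size() * Entropy(yes) + no.size() * Entropy(no)) / ex.size();
    return Entropy(ex) - weighted;   // ID3 splits on the feature maximizing this
}

int main() {
    // Two features per example; the first one perfectly predicts the outcome.
    std::vector<Example> data = {
        {{true,  true},  true},  {{true,  false}, true},
        {{false, true},  false}, {{false, false}, false},
    };
    std::cout << "gain(f0)=" << InformationGain(data, 0)
              << " gain(f1)=" << InformationGain(data, 1) << "\n";  // 1 and 0
}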

Understanding Pattern Recognition Methods

Jouni Smed, Harri Hakonen, Timo Kaukoranta (Department of Information Technology, University of Turku, Finland)
AI Game Programming Wisdom 2, 2003.
Abstract: The task of pattern recognition is to abstract relevant information from the game world and, based on the retrieved information, construct concepts and deduce patterns for the use of higher level reasoning and decision-making systems. We view pattern recognition in computer games from two perspectives: functional and methodological. In the functional approach, we analyze what is required from pattern recognition. We conclude that it can act in different roles, which in turn affect the choice of a method and its implementation. These roles depend on the level of decision-making, the stance toward the player, and the use of the modeled knowledge. In the methodological approach, we review a branch of pattern recognition techniques arising from soft computing. We discuss methods related to optimization, adaptation, and uncertainty. Our intention is to clarify where these methods should be used.

Using Reinforcement Learning to Solve AI Control Problems

John Manslow
AI Game Programming Wisdom 2, 2003.
Abstract: During the development of a game's AI many difficult and complex control problems often have to be solved. How should the control surfaces of an aircraft be adjusted so that it follows a particular path? How should a car steer to follow a racing line? What sequences of actions should a real time strategy AI perform to maximize its chances of winning? Reinforcement learning (RL) is an extremely powerful machine learning technique that allows a computer to discover its own solutions to these types of problems by trial and error. This article assumes no prior knowledge of RL and introduces the fundamental principles of it by showing how it can be used to allow a computer to learn how to control a simulated racing car. C++ source code for RL and a skeleton implementation of racing game AI are included with the article.
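
As an illustration of the kind of update rule involved, here is a tabular Q-learning sketch; the article's racing controller is richer than this, and may not use this exact RL variant, so the state and action counts here are invented.

// Tabular Q-learning sketch: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
#include <algorithm>
#include <array>
#include <iostream>

constexpr int kStates = 16, kActions = 4;   // illustrative sizes only
using QTable = std::array<std::array<float, kActions>, kStates>;

int BestAction(const QTable& q, int s) {
    return static_cast<int>(
        std::max_element(q[s].begin(), q[s].end()) - q[s].begin());
}

void Update(QTable& q, int s, int a, float reward, int next,
            float alpha = 0.1f, float gamma = 0.9f) {
    float target = reward + gamma * q[next][BestAction(q, next)];
    q[s][a] += alpha * (target - q[s][a]);
}

int main() {
    QTable q{};                       // zero-initialized value estimates
    // One hypothetical experience tuple: in state 3, action 1 earned reward 1
    // and led to state 7.
    Update(q, 3, 1, 1.0f, 7);
    std::cout << q[3][1] << "\n";     // 0.1 after a single update
}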

Getting Around the Limits of Machine Learning

Neil Kirby (Lucent Technologies Bell Laboratories)
AI Game Programming Wisdom 2, 2003.
Abstract: To some AI programmers, the Holy Grail of AI would be a game that learns in the field and gets better the more it is played. Multiplayer network games especially would become challenging to the most skillful players as the AI learns and uses the best plays from the best players. This article examines some of the limitations of machine learning and some of the ways around them. It analyzes learning in three current games. It considers technical and gameplay issues with learning in games.

How to Build Neural Networks for Games

Penny Sweetser (School of ITEE, University of Queensland)
AI Game Programming Wisdom 2, 2003.
Abstract: Neural networks are a machine learning technique inspired by the human brain. They are a flexible technique that has a wide range of applications in a variety of industries. This article will first introduce neural networks, describing their biological inspiration. Then, it will describe the important components of neural networks and demonstrate how they can be implemented with example code. Next, it will explain how neural networks can be trained, both in-game and prior to shipping, and how a trained neural network can be used for decision-making, classification and prediction. Finally, it will discuss the various applications of neural networks in games, describing previous uses and giving ideas for future applications. Each of these sections will be illustrated with relevant game examples and sample code where appropriate.
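
A minimal feed-forward building block in C++ (not the article's sample code): one fully connected layer of sigmoid neurons, with illustrative weights chosen by hand rather than trained.

// Feed-forward layer sketch: out_i = sigmoid(bias_i + sum_j w_ij * in_j).
#include <cmath>
#include <iostream>
#include <vector>

float Sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

struct Layer {
    // weights[i] holds the input weights of neuron i; biases[i] is its bias.
    std::vector<std::vector<float>> weights;
    std::vector<float> biases;

    std::vector<float> Forward(const std::vector<float>& input) const {
        std::vector<float> out(weights.size());
        for (std::size_t i = 0; i < weights.size(); ++i) {
            float sum = biases[i];
            for (std::size_t j = 0; j < input.size(); ++j)
                sum += weights[i][j] * input[j];
            out[i] = Sigmoid(sum);
        }
        return out;
    }
};

int main() {
    // Two inputs (say, distance-to-enemy and health), one output neuron read
    // as "attack desirability". The weights here are illustrative only.
    Layer output{{{-1.5f, 2.0f}}, {0.0f}};
    std::cout << output.Forward({0.2f, 0.9f})[0] << "\n";   // ~0.82
}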

How to Build Evolutionary Algorithms for Games

Penny Sweetser (School of ITEE, University of Queensland)
AI Game Programming Wisdom 2, 2003.
Abstract: Evolutionary algorithm is the broad term given to the group of optimization and search algorithms that are based on evolution and natural selection, including genetic algorithms, evolutionary computation and evolutionary strategies. Evolutionary algorithms have many advantages, in that they are robust search methods for large, complex or poorly-understood search spaces and nonlinear problems. However, they also have many disadvantages, in that they are time-consuming to develop and resource intensive when in operation. This article will introduce evolutionary algorithms, describing what they are, how they work, and how they are developed and employed, illustrated with example code. Finally, the different applications of evolutionary algorithms in games will be discussed, including examples of possible applications in different types of games.
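
The overall loop structure of an evolutionary algorithm can be sketched in a few dozen lines. The example below (not the article's code) uses tournament selection, uniform crossover, and bit-flip mutation on a toy "count the ones" fitness function.

// Genetic-algorithm loop sketch: select, breed, mutate, repeat.
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

using Genome = std::vector<int>;
std::mt19937 rng(42);

int Fitness(const Genome& g) { return std::count(g.begin(), g.end(), 1); }

Genome Tournament(const std::vector<Genome>& pop) {
    std::uniform_int_distribution<std::size_t> pick(0, pop.size() - 1);
    const Genome& a = pop[pick(rng)];
    const Genome& b = pop[pick(rng)];
    return Fitness(a) > Fitness(b) ? a : b;   // better of two random parents
}

Genome Breed(const Genome& a, const Genome& b, float mutationRate = 0.01f) {
    std::uniform_real_distribution<float> coin(0.0f, 1.0f);
    Genome child(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        child[i] = coin(rng) < 0.5f ? a[i] : b[i];              // uniform crossover
        if (coin(rng) < mutationRate) child[i] = 1 - child[i];  // bit-flip mutation
    }
    return child;
}

int main() {
    std::vector<Genome> pop(30, Genome(20));
    std::uniform_int_distribution<int> bit(0, 1);
    for (Genome& g : pop) for (int& b : g) b = bit(rng);        // random start

    for (int gen = 0; gen < 50; ++gen) {
        std::vector<Genome> next;
        for (std::size_t i = 0; i < pop.size(); ++i)
            next.push_back(Breed(Tournament(pop), Tournament(pop)));
        pop = std::move(next);
    }
    int best = 0;
    for (const Genome& g : pop) best = std::max(best, Fitness(g));
    std::cout << "best fitness: " << best << " / 20\n";
}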

Adaptive AI: A Practical Example

Soren Johnson (Firaxis Games)
AI Game Programming Wisdom 2, 2003.
Abstract: Because most game AIs are either hard-coded or based on pre-defined scripts, players can quickly learn to anticipate how the AI will behave in certain situations. While the player will develop new strategies over time, the AI will always act as it did when the box was opened, suffering from strategic arrested development. This article describes the adaptive AI of a simple turn-based game called "Advanced Protection."

This practical example of an adaptive AI displays a number of advantages over a static AI. First, the system can dynamically switch between strategies depending on the actual performance of the player - experts will be treated like experts, and novices will be treated like novices. Next, the rules and parameters of the game will be exactly the same for all strategies, which means the AI will not need to "cheat" in order to challenge expert players. Finally, the system can ensure that the AI's "best" strategies truly are the best for each individual player.

Building Better Genetic Algorithms

Mat Buckland (www.ai-junkie.com)
AI Game Programming Wisdom 2, 2003.
Abstract: Genetic algorithms are slowly but surely gaining popularity with game developers. Mostly they are used as an in-house tool for tweaking NPC parameters, as id Software did in developing the bots for Quake 3, but we are also beginning to see genetic algorithms used in-game, either as an integral part of the gameplay or as an aid for the user.

Unfortunately, many of today's programmers only know the basics of genetic algorithms, not much beyond the original paradigm devised by John Holland back in the mid sixties. This article will bring them up to date with some of the tools available to give improved performance. Techniques discussed will include various scaling techniques, speciation, fitness sharing, and other tips designed to help speedy convergence whilst retaining population diversity. In short, showing you how to get the most from your genetic algorithms.

Advanced Genetic Programming: New Lessons From Biology

François Dominic Laramée
AI Game Programming Wisdom 2, 2003.
Abstract: Genetic programming is a powerful evolutionary mechanism used to create near-optimal solutions to difficult problems. One of the major issues with traditional GP paradigms has been the relative brittleness of the organisms generated by the process: many source code organisms do not compile at all, or produce other kinds of nonsensical results. Recent advances in genetic programming, namely the grammatical evolution scheme based on such biological concepts as degenerate and cyclical DNA and gene polymorphism, promise ways to eliminate this problem and create programs that converge on a solution faster. This article explains grammatical evolution, its biological underpinnings, and a handful of other ways to refine evolutionary computing schemes, such as co-evolution.

The Importance of Growth in Genetic Algorithms

Dale Thomas (AI Lab, University of Zürich)
AI Game Programming Wisdom 2, 2003.
Abstract: The purpose of this article is to introduce some newer concepts relating to the field of Genetic Algorithms (GA). GAs can introduce variability and adaptability into a game leading to non-linear gameplay and opponents who tailor their strategies to that of the player. Many limitations of mainstream GA implementations can be overcome with some simple additions. Using growth, co-evolution, speciation and other new techniques can alleviate limitations on complexity, designer bias, premature convergence and many more handicaps. These additions can reduce the disadvantages of current GAs and allow the advantages to make games much more unpredictable and challenging.

SAPI: An Introduction to Speech Recognition

James Matthews (Generation5)
AI Game Programming Wisdom 2, 2003.
Abstract: This article looks at providing newcomers to SAPI with an easy-to-follow breakdown of how to get a simple SAPI application working. It looks briefly at setting up SAPI, how to construct the XML grammar files, handling SAPI messages and using the SAPI text-to-speech functionality. All these concepts are tied together using a demonstration application designed to make learning SAPI simple yet entertaining.

SAPI: Extending the Basics

James Matthews (Generation5)
AI Game Programming Wisdom 2, 2003.
Abstract: This article builds on the previous one by discussing concepts such as dynamic grammars, additional XML grammar tags, altering voices, and more SAPI events. The chapter uses a simple implementation of Go Fish! to demonstrate the concepts presented.

Conversational Agents: Creating Natural Dialogue between Players and Non-Player Characters

Penny Drennan (School of ITEE, University of Queensland)
AI Game Programming Wisdom 2, 2003.
Abstract: The quality of interactions between non-player characters (NPCs) and the player is an important area of Artificial Intelligence in games that is still in need of improvement. Game players frequently express that they want to see opponents and NPCs that appear to possess intelligence in games. However, most dialogue between players and NPCs in computer games is currently scripted, which does not add to the appearance of intelligence in the NPC. This article addresses these problems by giving an overview of NPCs in current games and presents a method, called conversational agents, for improving dialogue between players and NPCs. Conversational agents are software agents that consist of models of personality and emotion, which allow them to demonstrate believable conversational behavior. The advantages of conversational agents include their ability to portray emotions and personality through dialogue. However, they also have disadvantages, in that they can be time-consuming to develop.

This article will begin by discussing the conversational behavior of NPCs in current games. We will not be looking at the artificial intelligence (AI) capabilities of NPCs, only their ability to interact with the player. We will then discuss the components of a conversational agent - how to give it the appearance of personality and emotion. We will also look at the input that the agent needs to get from the environment, and what we want the agent to say to the player. We will conclude with the advantages and disadvantages of using conversational agents in games.

Parallel-State Machines for Believable Characters

Thor Alexander (Hard Coded Games)
Massively Multiplayer Game Development, 2003.

Creating a "Safe Sandbox" for Game Scripting

Matthew Walker (NCsoft Corporation)
Massively Multiplayer Game Development, 2003.

Precise Game Event Broadcasting with Python

Matthew Walker (NCsoft Corporation)
Massively Multiplayer Game Development, 2003.

Relational Database Management Systems Primer

Jay Lee (NCsoft Corporation)
Massively Multiplayer Game Development, 2003.

Leveraging Relational Database Management Systems to Data-Drive MMP Gameplay

Jay Lee (NCsoft Corporation)
Massively Multiplayer Game Development, 2003.

Data-Driven Systems for MMP Games

Sean Riley (NCsoft Corporation)
Massively Multiplayer Game Development, 2003.

Managing Game State Data Using a Database

Christian Lange (Origin Systems, Inc)
Massively Multiplayer Game Development, 2003.

Considerations for Movement and Physics in MMP Games

Jay Lee (NCsoft Corporation)
Massively Multiplayer Game Development, 2003.

Client-Side Movement Prediction

Mark Brockington (BioWare Corp)
Massively Multiplayer Game Development, 2003.

Building a Massively Multiplayer Game Simulation Framework, Part 2: Behavioral Modeling

Thor Alexander (Hard Coded Games)
Massively Multiplayer Game Development, 2003.

The Evolution of Game AI

Paul Tozour (Ion Storm Austin)
AI Game Programming Wisdom, 2002.
Abstract: This article provides a big-picture summary of game AI: what it is, where it's going, where it's been, how it can grow, and what makes it so different from any other discipline. We give a broad overview of the evolution of AI in games since the birth of the videogame, as well as the evolution of a game AI within the scope of a game's development. We also describe many of the academic AI techniques that have been applied to games, explain the important distinctions between the needs and approaches of mainstream AI and of game AI across various genres, and discuss some of the ways that game AI technologies are likely to grow in the future.

The Illusion of Intelligence

Bob Scott (Stainless Steel Studios)
AI Game Programming Wisdom, 2002.

Solving the Right Problem

Neil Kirby (Bell Labs)
AI Game Programming Wisdom, 2002.
Abstract: This article will talk about going back to first principles: What is the problem we are trying to solve? Why is that important - what are we really trying to do? Too often, programmers settle on an answer to the first question before carefully examining the second. A clear example comes from the AI Roundtables at the GDC. One company had fixated on trying to do speech input. In a world where the NPCs talk, they should be able to listen as well. This left them with the huge mountain of work to do speech recognition, and even if they could climb that mountain, the bigger mountain of natural language processing was waiting hidden behind it. After all, even if Dragon Dictate parses everything you say as well as your officemate does, it surely does not have the ability to make sense of it and respond intelligently. Instead of trying to climb these two large mountains of work, the company stepped back to the question of why it is important. It was important to give a more immersive and natural experience. So instead of doing speech, they implemented gestures. Large motion gestures are universally understood. The shrug that means "I don't know" means "I don't know" to just about everybody. They got a much better result by solving a different problem! While this article won't give people an easier problem to solve, it should help get them thinking about the process. It also gives a number of things other designers do, to inspire readers to "think outside the box."

12 Tips from the Trenches

Jeff Orkin (Monolith Productions)
AI Game Programming Wisdom, 2002.
Abstract: This article is intended to give developers who are new to Game AI a head start. It gives an overview of many techniques that professional Game AI developers have found useful but that may not be immediately obvious to a novice. Topics covered include precomputing navigation, building a world with AI hints, providing fallbacks, finite-state machine organization, and data-driven approaches. Many tips reference other articles in AI Game Programming Wisdom for more detail.

Building an AI Diagnostic Toolset

Paul Tozour (Ion Storm Austin)
AI Game Programming Wisdom, 2002.
Abstract: This article describes invaluable techniques that real developers use to tweak, test, and diagnose their AI during the development cycle. We describe literally dozens of specific ways you can instrument your AI to help you tweak and test it more quickly and figure out what's wrong when your AI breaks.

A General Purpose Trigger System

Jeff Orkin (Monolith Productions)
AI Game Programming Wisdom, 2002.
Abstract: This article describes the implementation of a general-purpose centralized Trigger System. A Trigger System is used to keep track of events in the game world, and to optimize processing agents need to perform to recognize these events. Centralizing the Trigger System allows for culling by priority and proximity before delivering Trigger events to agents. The article and companion CD include working code for a stimulus-response Trigger System. Enhancements are discussed to extend the system to handle processing a grouping hierarchy of agents, in addition to individual agents.

A Data-Driven Architecture for Animation Selection

Jeff Orkin (Monolith Productions)
AI Game Programming Wisdom, 2002.
Abstract: Animation selection is a common task for AI systems. Due to advances in animation technology, it is now common to provide a much wider range of animations for characters, including specific animations for specific situations. Rather than simply playing a "Run" animation, characters may play a specific "RunWithSword", "AngryRun", or "InjuredRun" animation. The Action Table is a simple data-driven approach to animation selection. This article describes the implementation of the Action Table, and goes on to describe how this technique can be extended to handle randomization and dynamic animation lists.
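
A minimal sketch of the idea (assumed class and animation names, not the shipped system): AI code requests an abstract action and a per-character table, ideally loaded from data, resolves it to a concrete animation.

// Action-table-style lookup sketch: abstract action name -> concrete animation.
#include <iostream>
#include <map>
#include <string>

class ActionTable {
public:
    // In practice these rows would come from a data file per character type.
    void Add(const std::string& action, const std::string& animation) {
        table_[action] = animation;
    }
    std::string Lookup(const std::string& action) const {
        auto it = table_.find(action);
        return it != table_.end() ? it->second : "Idle";   // safe fallback
    }
private:
    std::map<std::string, std::string> table_;
};

int main() {
    ActionTable swordsman;
    swordsman.Add("Run",    "RunWithSword");
    swordsman.Add("Attack", "SwordSlash");
    // AI code only ever asks for the abstract action.
    std::cout << swordsman.Lookup("Run") << "\n";   // RunWithSword
}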

Realistic Character Behavior with Prioritized, Categorized Animation

Jeff Orkin (Monolith Productions)
AI Game Programming Wisdom, 2002.
Abstract: Skeletal animation systems allow AI programmers to create realistic behavior for characters by playing multiple, layered animations simultaneously. The challenge comes in trying to manage these independent layers of animation. This article describes the implementation of a system in which layers of animation are prioritized, and categorized by the region of the body they affect. This data-driven approach moves the management of the layers out of the code. The article and companion CD provide code for this layering system. Handling of blended transitions between animations is discussed using a bone caching technique.

Designing a GUI Tool to Aid in the Development of Finite State Machines

Phil Carlisle (Team17 Software)
AI Game Programming Wisdom, 2002.

The Beauty of Response Curves

Bob Alexander (Stormfront Studios)
AI Game Programming Wisdom, 2002.

Simple and Efficient Line-of-Sight for 3D Landscapes

Tom Vykruta (Surreal Software)
AI Game Programming Wisdom, 2002.

An Open Source Fuzzy Logic Library

Michael Zarozinski (Louder Than A Bomb! Software)
AI Game Programming Wisdom, 2002.
Abstract: This article introduces the Free Fuzzy Logic Library (FFLL), an open source library that can load files that adhere to the IEC 61131-7 Fuzzy Control Language (FCL) standard. FFLL provides a solid base of code that you are free to enhance, extend, and improve. Whether used for rapid prototyping or as a component in an AI engine, FFLL can save significant time and money. The entire library and a sample program are included on the book's CD.

Basic A* Pathfinding Made Simple

James Matthews (Generation5)
AI Game Programming Wisdom, 2002.

Generic A* Pathfinding

Daniel Higgins (Stainless Steel Software)
AI Game Programming Wisdom, 2002.

Pathfinding Design Architecture

Daniel Higgins (Stainless Steel Software)
AI Game Programming Wisdom, 2002.

How to Achieve Lightning Fast A*

Daniel Higgins (Stainless Steel Software)
AI Game Programming Wisdom, 2002.

Practical Optimizations for A* Path Generation

Timothy Cain (Troika Games)
AI Game Programming Wisdom, 2002.
Abstract: The A* algorithm is probably the most widely used path algorithm in games, but in its pure form, A* can use a great deal of memory and take a long time to execute. While most optimizations deal with improving the estimate heuristic or with storing and searching the open and closed lists more efficiently, this article examines methods of restricting A* to make it faster and more responsive to changing map conditions. Such A* restrictions take the form of artificially constricting the search space, using partial solutions, or short-circuiting the algorithm altogether. For each restriction, the situations in which these optimizations will prove most useful are discussed.

Simple, Cheap Pathfinding

Chris Charla (Digital Eclipse Software), Mike Mika (Digital Eclipse Software)
AI Game Programming Wisdom, 2002.
Abstract: There are several cases in which using a lightweight AI method for pathfinding is appropriate, especially on low-powered hand-held gaming systems like the Game Boy Advance or various cell phones. This article presents a simple scheme in which a four-sensored, or "whiskered," robot can move through an environment with surprisingly lifelike results. This scheme was successfully used in a number of published games for the Game Boy Color (including NFL Blitz, Disney's Tarzan, and Alice in Wonderland), as well as in several other games for mobile devices.
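
A rough sketch of the whisker idea in C++ (tile map, whisker length, and turn rate are all invented for illustration; the published scheme uses four sensors, while two forward-angled probes are enough to show the principle): probe a short distance ahead to either side of the heading and turn away from whichever probe touches a wall.

// Whisker-steering sketch on a tiny hypothetical tile map (1 = wall, 0 = open).
#include <cmath>
#include <iostream>

const int kMap[8][8] = {
    {1,1,1,1,1,1,1,1}, {1,0,0,0,0,0,0,1}, {1,0,0,0,0,0,0,1}, {1,0,0,0,1,0,0,1},
    {1,0,0,0,1,0,0,1}, {1,0,0,0,1,0,0,1}, {1,0,0,0,0,0,0,1}, {1,1,1,1,1,1,1,1},
};

bool Blocked(float x, float y) {
    return kMap[static_cast<int>(y)][static_cast<int>(x)] != 0;
}

struct Agent { float x, y, heading; };   // heading in radians

// Probe two short "whiskers" angled to either side of the heading and steer
// away from whichever one touches a wall, then step forward.
void Steer(Agent& a, float whiskerLen = 0.8f, float turn = 0.3f) {
    float left = a.heading - 0.5f, right = a.heading + 0.5f;
    bool hitL = Blocked(a.x + std::cos(left) * whiskerLen,
                        a.y + std::sin(left) * whiskerLen);
    bool hitR = Blocked(a.x + std::cos(right) * whiskerLen,
                        a.y + std::sin(right) * whiskerLen);
    if (hitL && !hitR) a.heading += turn;
    else if (hitR && !hitL) a.heading -= turn;
    else if (hitL && hitR) a.heading += 2 * turn;   // dead ahead blocked: turn hard
    a.x += std::cos(a.heading) * 0.1f;
    a.y += std::sin(a.heading) * 0.1f;
}

int main() {
    Agent bot{2.0f, 3.5f, 0.0f};
    for (int i = 0; i < 20; ++i) Steer(bot);        // bot veers around the wall
    std::cout << bot.x << ", " << bot.y << "\n";
}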

Preprocessed Solution for Open Terrain Environments

Smith Surasmith (Angel Studios)
AI Game Programming Wisdom, 2002.

Building a Near-Optimal Navigation Mesh

Paul Tozour (Ion Storm Austin)
AI Game Programming Wisdom, 2002.
Abstract: When you want your AI characters to perform pathfinding in a fully 3D environment, the kind of data structure you select to perform pathfinding will have an enormous impact on the performance of your pathfinding and the quality of the paths. A navigation mesh is one of the best ways to pathfind in these kinds of game worlds. It provides very fast pathfinding and allows you to find the optimal path from any arbitrary point in the game world to any other. This article describes in minute detail how to take arbitrary 3D world geometry ("polygon soup") and automatically construct and optimize a navigation mesh as a preprocessing step.

Realistic Turning between Waypoints

Marco Pinter (Badass Games)
AI Game Programming Wisdom, 2002.

Navigating Doors, Elevators, Ledges, and Other Obstacles

John Hancock (LucasArts Entertainment)
AI Game Programming Wisdom, 2002.

Simple Swarms as an Alternative to Flocking

Tom Scutt (Gatehouse Games)
AI Game Programming Wisdom, 2002.
Abstract: Craig Reynolds' flocking algorithms have been well documented and are highly successful at producing natural-looking movement in groups of agents. However, the algorithms can be computationally expensive, especially where there are a large number of agents or a complex environment to detect against. For this reason, they are not always suited to real-time applications such as video games. This article details a much simpler algorithm for producing natural-looking movement in large swarms of creatures involving tens or hundreds of agents. Although this algorithm cannot guarantee separation of creatures within the swarm, the overall impression of organic movement is very convincing.

Strategic and Tactical Reasoning with Waypoints

Lars Lidén (Valve Software)
AI Game Programming Wisdom, 2002.
Abstract: Non-player characters (NPCs) commonly use waypoints for navigation through their virtual world. This article will demonstrate how preprocessing the relationships between these waypoints can be used to dynamically generate combat tactics for NPCs in a first-person shooter or action adventure game. By precalculating and storing tactical information about the relationship between waypoints in a bit string class, NPCs can quickly find valuable tactical positions and exploit their environment. Issues discussed include fast map analysis, safe pathfinding, using visibility, intelligent attack positioning, flanking, static waypoint analysis, pinch points, squad tactics, limitations, and advanced issues.
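
A minimal sketch of the precomputed-visibility idea (assumed data layout, not Valve's code): each waypoint stores one bit per other waypoint, so a runtime tactical query such as "find a waypoint with line of sight to the enemy's waypoint" reduces to cheap bit tests.

// Waypoint visibility bit strings: row w has bit t set if waypoint w can see t.
#include <bitset>
#include <iostream>
#include <vector>

constexpr std::size_t kMaxWaypoints = 256;       // illustrative fixed budget
using VisibilityRow = std::bitset<kMaxWaypoints>;

// Return any waypoint with line of sight to the enemy's waypoint (a candidate
// attack position), or -1 if none exists.
int FindAttackWaypoint(const std::vector<VisibilityRow>& visibility,
                       int enemyWaypoint) {
    for (std::size_t w = 0; w < visibility.size(); ++w)
        if (visibility[w].test(enemyWaypoint)) return static_cast<int>(w);
    return -1;
}

int main() {
    std::vector<VisibilityRow> vis(4);
    vis[2].set(3);   // offline analysis found waypoint 2 has line of sight to 3
    std::cout << FindAttackWaypoint(vis, 3) << "\n";   // 2
}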

Recognizing Strategic Dispositions: Engaging the Enemy

Steven Woodcock (Wyrd Wyrks)
AI Game Programming Wisdom, 2002.

Squad Tactics: Team AI and Emergent Maneuvers

William van der Sterren (CGF-AI)
AI Game Programming Wisdom, 2002.
Abstract: AI squad behavior is made up of coordinated individual actions towards a joint goal. There are two basic coordination styles: centralized control by a leader, and decentralized cooperation between individuals. This chapter discusses the latter style in detail. Decentralized cooperation can already be realized with minor changes to "standard individual AI". This chapter illustrates how some tactical squad maneuvers can emerge from these coordinating individual AIs, using a squad assault as an example. The limitations of the approach are illustrated using a second example: a squad ambush. This chapter precedes and complements the chapter "Squad Tactics: Planned Maneuvers".

Squad Tactics: Planned Maneuvers

William van der Sterren (CGF-AI)
AI Game Programming Wisdom, 2002.
Abstract: AI squad behavior can also be realized by designing an explicit team leader, responsible for planning and managing the squad's maneuver. This AI team leader assesses the squad's state, picks and plans the most appropriate squad maneuver. He executes the squad maneuver by issuing orders, and by interpreting feedback and information from the squad members. This is illustrated using a bounding overwatch squad advance. This centralized style to squad AI is more complex than the emergent behavior in "Squad Tactics: Team AI and Emergent Maneuvers". However, it does provide largely autonomous operating squads, able to execute complex maneuvers, and often combines well with some decentralized cooperation among squad members.

Tactical Team AI Using a Command Hierarchy

John Reynolds (Creative Asylum)
AI Game Programming Wisdom, 2002.
Abstract: Team-based AI is becoming an increasingly trendy selling point for first- and third-person action games. Often, this is limited to scripted sequences or simple "I need backup" requests. However, by using a hierarchy of decision-making, it is possible to create some very convincing teams that make decisions in real time.

Formations

Chad Dawson (Stainless Steel Studios)
AI Game Programming Wisdom, 2002.
Abstract: In games today, formations are expected for any type of cohesive group movement. From squad-based first-person shooters to sports sims to real-time strategy games, anytime that a group is moving or working together it is expected to do so in an orderly, intelligent fashion. This article will cover standard military formations, facing issues, mixed formations, spacing distance, ranks, unit mobility, group pathfinding, and dealing with obstacles.

Architecting a Game AI

Bob Scott (Stainless Steel Studios)
AI Game Programming Wisdom, 2002.

An Efficient AI Architecture using Prioritized Task Categories

Alex McLean (Pivotal Games)
AI Game Programming Wisdom, 2002.
Abstract: Real-time games have many diverse subsections: rendering, AI, collision detection, player input, and audio are just a few. Each of these tasks has a finite amount of time in which to execute, each is trying to do so as quickly as possible, and all of them must work together to give a rich, detailed gaming world. This article concentrates on the AI component and, specifically, how to distribute it over time and make it fast enough for real-time games. It also details how to avoid processing until it's absolutely necessary. The goal is to structure the AI so that it can execute quickly and efficiently. Two benefits are realized by doing this: our games run more smoothly, and we free up the processing power needed to bring about even more advanced AI.
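
A minimal sketch of one way to distribute AI over time, assuming agents are bucketed into priority categories with different update periods and staggered by index; the categories, periods, and Agent type are illustrative assumptions.

#include <cstdio>
#include <vector>

// Minimal time-slicing sketch: agents are grouped into priority categories and
// each category runs on its own period (in frames). Within a category the
// agents are staggered so only a fraction of them think on any given frame.
enum Category { CAT_CRITICAL = 0, CAT_NORMAL = 1, CAT_BACKGROUND = 2 };
static const int PERIOD[3] = { 1, 4, 16 };     // run every frame / every 4 / every 16 frames

struct Agent {
    int id;
    Category cat;
    void think(int frame) { std::printf("frame %d: agent %d thinks\n", frame, id); }
};

void updateAI(std::vector<Agent>& agents, int frame) {
    for (size_t i = 0; i < agents.size(); ++i) {
        int period = PERIOD[agents[i].cat];
        if ((frame + (int)i) % period == 0)    // stagger by index so the load is spread out
            agents[i].think(frame);
    }
}

int main() {
    std::vector<Agent> agents = {
        {0, CAT_CRITICAL}, {1, CAT_NORMAL}, {2, CAT_NORMAL}, {3, CAT_BACKGROUND}
    };
    for (int frame = 0; frame < 8; ++frame) updateAI(agents, frame);
}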

An Architecture Based on Load Balancing

Bob Alexander (Stormfront Studios)
AI Game Programming Wisdom, 2002.

A Simple Inference Engine for a Rule-Based Architecture

Mike Christian (Paradigm Entertainment)
AI Game Programming Wisdom, 2002.

Implementing a State Machine Language

Steve Rabin (Nintendo of America)
AI Game Programming Wisdom, 2002.
Abstract: This article presents a robust way to structure your state machines with a simple language. This State Machine Language will not only provide structure, but will also unleash some powerful concepts that make programming games much easier. While the language itself is simple, it embodies some very important software engineering principles such as simplicity, maintainability, robustness, and ease of debugging. The following article, "Enhancing a State Machine Language through Messaging," expands on this language with a powerful communication technique using messages. Each article has full source code on the accompanying CD-ROM.
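
For flavour, here is a deliberately simplified approximation of a macro-based state machine "language", built on if/else chains; the book's actual language is considerably richer (with exit handlers, messaging, and a global state), so treat this only as an illustration of the general idea, with all names being assumptions.

#include <cstdio>

// The macros expand to nested if/else chains keyed on (state, event). This is a
// simplified stand-in for the technique, not the book's implementation.
enum Event { EVT_ENTER, EVT_UPDATE };

#define BeginStateMachine   if (false) { {
#define State(n)            } } else if (state_ == (n)) { if (false) {
#define OnEnter             } else if (event == EVT_ENTER) {
#define OnUpdate            } else if (event == EVT_UPDATE) {
#define EndStateMachine     } }

class Guard {
public:
    Guard() : state_(0) { process(EVT_ENTER); }
    void setState(int s) { state_ = s; process(EVT_ENTER); }
    void update() { process(EVT_UPDATE); }
private:
    int state_;
    void process(Event event) {
        BeginStateMachine
        State(0)                                  // patrolling
            OnEnter  std::printf("start patrol\n");
            OnUpdate std::printf("patrolling...\n"); setState(1);
        State(1)                                  // attacking
            OnEnter  std::printf("attack!\n");
            OnUpdate std::printf("fighting...\n");
        EndStateMachine
    }
};

int main() {
    Guard g;         // prints "start patrol"
    g.update();      // prints "patrolling..." then "attack!"
    g.update();      // prints "fighting..."
}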

Enhancing a State Machine Language through Messaging

Steve Rabin (Nintendo of America)
AI Game Programming Wisdom, 2002.
Abstract: The previous article, "Implementing a State Machine Language," set the groundwork for a powerful language that can structure state machines in a simple, readable, and very debuggable format. In this article, that language will be expanded to encompass the problem of communication between AI game objects. This communication technique will revolutionize the State Machine Language by allowing complicated control flow and timers. Full source code is included on the accompanying CD-ROM.

Blackboard Architectures

Damian Isla, Bruce Blumberg (M.I.T. Synthetic Characters Group)
AI Game Programming Wisdom, 2002.
Abstract: The blackboard architecture is a simple technique for handling coordination between agents. Although simple to implement, the architecture has proven elegant and powerful enough to be useful for problems ranging from synthetic character control to natural language understanding and other reasoning problems. This article explains the canonical blackboard architecture and shows many examples of how a game AI can benefit.
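
A minimal sketch of the pattern, assuming a shared key/value store that agents post to and read from rather than calling each other directly; the entry names, value type, and arbitration are illustrative assumptions.

#include <cstdio>
#include <map>
#include <string>

// Minimal blackboard sketch: agents communicate only by posting and reading
// shared entries, never by talking to each other directly.
class Blackboard {
public:
    void post(const std::string& key, int value) { entries_[key] = value; }
    bool read(const std::string& key, int& out) const {
        auto it = entries_.find(key);
        if (it == entries_.end()) return false;
        out = it->second;
        return true;
    }
    void erase(const std::string& key) { entries_.erase(key); }
private:
    std::map<std::string, int> entries_;
};

int main() {
    Blackboard bb;

    // A scout posts what it saw; it does not need to know who will react.
    bb.post("enemy_spotted_at_node", 17);

    // A soldier reads the blackboard and claims the attack so others don't duplicate it.
    int node;
    if (bb.read("enemy_spotted_at_node", node)) {
        std::printf("soldier moving to attack node %d\n", node);
        bb.post("attack_claimed_by", 2);        // agent id 2 takes the job
    }

    // A second soldier checks whether the job is already taken.
    int claimer;
    if (bb.read("attack_claimed_by", claimer))
        std::printf("agent %d already handling it, staying on patrol\n", claimer);
}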

Introduction to Bayesian Networks and Reasoning Under Uncertainty

Paul Tozour (Ion Storm Austin)
AI Game Programming Wisdom, 2002.
Abstract: Since the 1990s, probabilistic inference techniques, and the specific subfield of Bayesian networks, have become immensely popular in the academic AI community. The game AI field, however, seems to have missed the boat. This is unfortunate, because Bayesian reasoning techniques can be extraordinarily helpful in getting your AI to reason about situations in a human-like fashion. This article provides a thorough introduction to the underlying concepts of probabilistic reasoning techniques and Bayesian networks, and describes a number of specific examples of the ways you can use them in game AI systems to perform more human-like reasoning.

A Rule-Based Architecture using Dempster-Shafer Theory

François Dominic Laramée
AI Game Programming Wisdom, 2002.
Abstract: DST is a variant of probability theory that explicitly models ignorance and uncertainty. Instead of reasoning on discrete events, it manipulates sets of possible events when evidence is imprecise or partially contradictory. Since DST obeys axioms that are less restrictive than those of classic probability, it may apply in more circumstances.
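
As a worked example of the underlying math, the sketch below combines two mass functions with Dempster's rule of combination, representing hypothesis subsets as bitmasks; the sensors and numbers are illustrative assumptions, not the article's rule base.

#include <cstdio>
#include <map>

// Minimal Dempster-Shafer sketch: hypotheses are bits in a mask (ENEMY, FRIEND),
// a piece of evidence is a mass function over subsets of hypotheses, and
// Dempster's rule combines two mass functions.
typedef std::map<unsigned, double> Mass;       // subset bitmask -> belief mass

enum { ENEMY = 1, FRIEND = 2, EITHER = ENEMY | FRIEND };

Mass combine(const Mass& m1, const Mass& m2) {
    Mass out;
    double conflict = 0.0;
    for (const auto& a : m1)
        for (const auto& b : m2) {
            unsigned both = a.first & b.first;        // intersection of the two subsets
            double product = a.second * b.second;
            if (both == 0) conflict += product;       // contradictory evidence
            else out[both] += product;
        }
    for (auto& kv : out)
        kv.second /= (1.0 - conflict);                // renormalise by the non-conflicting mass
    return out;
}

int main() {
    // Sensor 1: probably an enemy, but 40% of the mass admits ignorance.
    Mass radar;  radar[ENEMY] = 0.6;  radar[EITHER] = 0.4;
    // Sensor 2: weak evidence it is a friend.
    Mass iff;    iff[FRIEND] = 0.3;   iff[EITHER] = 0.7;

    Mass fused = combine(radar, iff);
    std::printf("m(ENEMY)=%.3f  m(FRIEND)=%.3f  m(EITHER)=%.3f\n",
                fused[ENEMY], fused[FRIEND], fused[EITHER]);
}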

An Optimized Fuzzy Logic Architecture for Decision-Making

Thor Alexander (Hard Coded Games)
AI Game Programming Wisdom, 2002.

A Flexible Goal-Based Planning Architecture

John O'Brien (Red Storm Entertainment)
AI Game Programming Wisdom, 2002.

First-Person Shooter AI Architecture

Paul Tozour (Ion Storm Austin)
AI Game Programming Wisdom, 2002.
Abstract: This article provides a basic introduction to building an AI architecture for a first-person shooter game (such as Quake or Unreal) or a first-person sneaker (such as Thief: The Dark Project). We discuss the major components of an FPS AI (including specific subsystems for animation, movement and pathfinding, behavior, combat, sensory modelling, and scripting and trigger systems) and how those components should fit together.

Architecting an RTS AI

Bob Scott (Stainless Steel Studios)
AI Game Programming Wisdom, 2002.
Abstract: RTS games are one of the more thorny genres as far as AI is concerned, and a good architecture is necessary to ensure success. Most examples presented in this article are taken from the work done on Empire Earth. Issues include game components (civilization manager, build manager, unit manager, resource manager, research manager, and combat manager), difficulty levels, challenges (random maps, wall building, island hopping, resource management, stalling), and overall strategies.

An Economic Approach to Goal-Directed Reasoning in an RTS

Vernon Harmon (LucasArts Entertainment)
AI Game Programming Wisdom, 2002.
Abstract: In this article, we discuss one approach to creating an agent for a real-time strategy game, using the Utility Model. This approach takes Economic theories and concepts regarding consumer choice, and creates a mapping onto our game agent's decision space. We explain relevant AI terminology (goal-directed reasoning, reactive systems, planning, heuristic functions) and Economic terminology (utility, marginal utility, cost, production possibilities), and introduce a simplistic RTS example to provide a framework for the concepts.
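
A minimal sketch of the "pick the affordable option with the best utility per unit cost" idea; the options, numbers, and scoring rule are illustrative assumptions, not the article's model.

#include <cstdio>
#include <string>
#include <vector>

// Minimal utility-per-cost decision sketch for an RTS agent: each candidate
// purchase has a utility in the current situation and a resource cost, and the
// agent picks the affordable option with the best marginal utility per cost.
struct Option {
    std::string name;
    float utility;     // how much this helps the current goal
    float cost;        // resources it consumes
};

const Option* choose(const std::vector<Option>& options, float resources) {
    const Option* best = nullptr;
    float bestScore = 0.0f;
    for (size_t i = 0; i < options.size(); ++i) {
        if (options[i].cost > resources) continue;           // can't afford it
        float score = options[i].utility / options[i].cost;  // marginal utility per resource
        if (!best || score > bestScore) { best = &options[i]; bestScore = score; }
    }
    return best;
}

int main() {
    std::vector<Option> options = {
        { "train worker",  6.0f,  50.0f },
        { "train soldier", 9.0f, 100.0f },
        { "build tower",  14.0f, 200.0f },
    };
    const Option* pick = choose(options, 150.0f);
    if (pick) std::printf("agent chooses: %s\n", pick->name.c_str());
}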

The Basics of Ranged Weapon Combat

Paul Tozour (Ion Storm Austin)
AI Game Programming Wisdom, 2002.
Abstract: This article gives a brief introduction to the problems of firing ranged weapons. We discuss to-hit rolls, aim point selection, ray-testing, avoiding friendly fire incidents, dead reckoning, and calculating weapon trajectories for ballistic weapons.
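
As an example of the dead-reckoning piece, the sketch below leads a target moving at constant velocity by solving the intercept-time quadratic for a constant-speed projectile (2D, no gravity); the vector type and numbers are illustrative assumptions.

#include <cmath>
#include <cstdio>

// Minimal dead-reckoning sketch: aim at where the target will be when the
// projectile arrives, found by solving |target + vel*t - shooter| = speed*t.
struct Vec2 { float x, y; };

// Returns true and fills aimPoint if an intercept exists.
bool leadTarget(Vec2 shooter, Vec2 target, Vec2 targetVel, float projSpeed, Vec2& aimPoint) {
    float rx = target.x - shooter.x, ry = target.y - shooter.y;
    float a = targetVel.x * targetVel.x + targetVel.y * targetVel.y - projSpeed * projSpeed;
    float b = 2.0f * (rx * targetVel.x + ry * targetVel.y);
    float c = rx * rx + ry * ry;
    float t;
    if (std::fabs(a) < 1e-6f) {                      // target speed equals projectile speed
        if (std::fabs(b) < 1e-6f) return false;
        t = -c / b;
    } else {
        float disc = b * b - 4.0f * a * c;
        if (disc < 0.0f) return false;               // projectile can never catch the target
        float root = std::sqrt(disc);
        float t1 = (-b - root) / (2.0f * a), t2 = (-b + root) / (2.0f * a);
        t = (t1 > 0.0f && (t1 < t2 || t2 <= 0.0f)) ? t1 : t2;  // earliest positive hit time
    }
    if (t <= 0.0f) return false;
    aimPoint.x = target.x + targetVel.x * t;         // predicted target position at impact
    aimPoint.y = target.y + targetVel.y * t;
    return true;
}

int main() {
    Vec2 aim;
    Vec2 shooter = {0, 0}, target = {10, 0}, vel = {0, 2};
    if (leadTarget(shooter, target, vel, 5.0f, aim))
        std::printf("aim at (%.2f, %.2f)\n", aim.x, aim.y);
}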

Level-Of-Detail AI for a Large Role-Playing Game

Mark Brockington (BioWare)
AI Game Programming Wisdom, 2002.
Abstract: With thousands of objects demanding AI time slices in Neverwinter Nights, it would be difficult to satisfy all creatures and maintain a playable frame rate. The level-of-detail AI schemes used allowed the game to achieve the perception of thousands of actors thinking simultaneously. The article discusses how to subdivide your game objects into categories, and how certain time-intensive actions (such as pathfinding and combat) can be reduced to make more efficient use of the time available to AI.

A Dynamic Reputation System Based on Event Knowledge

Greg Alt (Surreal Software), Kristin King
AI Game Programming Wisdom, 2002.
Abstract: This article describes a non-player character (NPC) reputation system (a mechanism for dynamically managing NPCs' opinions of each other and of the player in order to influence the NPCs' actions). Most existing reputation systems manage NPCs' opinions globally. The reputation system this article describes instead changes a specific NPC's opinions only if the NPC has direct or indirect knowledge of events that trigger a change. The article describes the data structures required for the reputation system, the way they work together to make the complete system, and the way the system fits into the overall design of NPC behavior.

Representing a Race Track for the AI

Gari Biasillo (Electronic Arts Canada)
AI Game Programming Wisdom, 2002.
Abstract: This article is the first in a series of three racing AI articles and describes a practical representation of a racetrack for an AI system. The representation includes defining sectors, interfaces, the driving lines (racing line, overtaking line), path type, terrain type, walls, hairpin turns, and brake/throttle points. Methods for determining the current sector and the distance along a sector are also discussed.

Racing AI Logic

Gari Biasillo (Electronic Arts Canada)
AI Game Programming Wisdom, 2002.
Abstract: This is the second article in a series of three racing AI articles that describes how to implement an AI capable of racing a car around a track. Although the AI will follow predefined driving lines, it will not rigidly follow the track like a train on rails, but merely use these lines as a guide. The goal is to have the AI produce output that emulates human input, specifically joystick and/or key presses. Using this method, the game engine only needs to gather input from the AI controller instead of a human input device. The article will cover the basic AI framework (FSM, fixed-time step, controlling the car, simplifying with 2D), traversing sectors (anticipating the road ahead, hairpin turns), driving to a target, overtaking, handling under-steer and over-steer (detecting the car's stability, testing for stability, correcting the car), wall avoidance, other states (airborne, off the track), and catch-up logic.

Training an AI to Race

Gari Biasillo (Electronic Arts Canada)
AI Game Programming Wisdom, 2002.
Abstract: This is the final article of the three-article series, and shows ways to train the AI to race optimally around a racetrack. Issues covered include tuning the car handling (adjusting parameters, converging on optimum values, modifying parameter values, modifying the range, training at high simulation speeds) and real-time editing (real-time track modification, user control overriding the AI).

Competitive AI Racing under Open Street Conditions

Joseph C. Adzima (Motocentric)
AI Game Programming Wisdom, 2002.

Camera AI for Replays

Sandeep V. Kharkar (Microsoft)
AI Game Programming Wisdom, 2002.

Simulating Real Animal Behavior

Sandeep V. Kharkar (Microsoft)
AI Game Programming Wisdom, 2002.

Agent Cooperation in FSMs for Baseball

P.J. Snavely (Acclaim Entertainment)
AI Game Programming Wisdom, 2002.

Intercepting a Ball

Noah Stein (Vision Scape Interactive)
AI Game Programming Wisdom, 2002.

Scripting: Overview and Code Generation

Lee Berger (Turbine Entertainment Software)
AI Game Programming Wisdom, 2002.

Scripting: The Interpreter Engine

Lee Berger (Turbine Entertainment Software)
AI Game Programming Wisdom, 2002.

Scripting: System Integration

Lee Berger (Turbine Entertainment Software)
AI Game Programming Wisdom, 2002.

Creating Scripting Languages for Non-Programmers

Falko Poiker (Relic Entertainment)
AI Game Programming Wisdom, 2002.

Scripting for Undefined Circumstances

Jonty Barnes (Lionhead Studios), Jason Hutchens (Amristar)
AI Game Programming Wisdom, 2002.
Abstract: Games are increasingly allowing the player to set the agenda. Want to while away hours mucking around with the game physics by throwing rocks into crowds of villagers? No problem! On the other hand, a strong storyline helps to inform the player of their goals, and provides a context for their actions. Traditionally, storylines in games have been advanced via cinematic sequences, and it is common for these to be displayed using the game engine. Can we resolve the conflict that occurs when we simultaneously afford the player the freedom to set the agenda and the game designers the ability to impose a storyline? What if a crucial moment in the story depends on the presence of the harmless little villager that the player unthinkingly threw into the ocean at the beginning of the game? Even worse, what if a non-player character under AI control intrudes into a cinematic sequence and begins to wreak havoc? In this article we discuss the features that were implemented in the game "Black & White" to allow the game designers to create storyline-advancing "Challenges" without compromising the unpredictable nature of the game.

The Perils of AI Scripting

Paul Tozour (Ion Storm Austin)
AI Game Programming Wisdom, 2002.
Abstract: Scripting is an enormously popular technique for developing game AI systems, but it can also be enormously dangerous. This article describes some of the considerations that you should think about very carefully before you jump on the scripting bandwagon, many of which you might not otherwise discover until it's too late. We also describe a number of the things that can go wrong when your lofty scripting language ambitions collide with the realities of game development.

How Not To Implement a Basic Scripting Language

Mark Brockington (BioWare), Mark Darrah (BioWare)
AI Game Programming Wisdom, 2002.
Abstract: This paper goes into some of the mistakes that were made while writing the scripting languages for Baldur's Gate and Neverwinter Nights. The four major points, which are covered with anecdotes: the lack of up-front design, ignoring early-adopter feedback, believing the code will only be used for one project, and believing the language will be used for one specific task.

Learning and Adaptation in Games

John Manslow
AI Game Programming Wisdom, 2002.
Abstract: It is anticipated that the widespread adoption of learning in games will be one of the most important advances ever to be made in game AI. Genuinely adaptive AIs will change the way that games are played by forcing the player to continually search for new strategies to defeat the AI. This article presents a detailed examination of the different approaches available for adding learning and adaptation to games and draws on the author's experiences of AI development to provide numerous practical examples. The reader is guided through the decisions that must be made when designing an adaptive AI, and summaries of the problems that are most frequently encountered with practical implementations are provided along with descriptions of effective solutions. The CD that accompanies the book contains source code for a genetic programming class, which can be used to evolve rule-based AI, and genetic algorithm and population-based incremental learner classes, which can be used to evolve AI more generally. The practical application of all these classes is illustrated by evolving an AI that successfully navigates a simple environment.

Varieties of Learning

Richard Evans (Lionhead Studios)
AI Game Programming Wisdom, 2002.

GoCap: Game Observation Capture

Thor Alexander (Hard Coded Games)
AI Game Programming Wisdom, 2002.

Pattern Recognition with Sequential Prediction

Fri Mommersteeg (Eindhoven University of Technology, Netherlands)
AI Game Programming Wisdom, 2002.
Abstract: This article provides a simple but efficient algorithm for recognizing repetitive patterns in number sequences. Pattern recognition is something that humans are very good at, but for a computer this is not so easy. Too often a game AI can be beaten by repeatedly performing the same trick, just because it is unable to perceive the pattern. This article explains how to deal with this problem and shows you how to map game events onto useful number sequences. Furthermore, it describes a few possible applications of the algorithm in computer games.

Using N-Gram Statistical Models to Predict Player Behavior

François Dominic Laramée
AI Game Programming Wisdom, 2002.
Abstract: N-Grams are statistical constructs used to predict sequences of events in situations that exhibit the property of local structure. Language is one such context: the probability of hearing the word "fries" is higher if one has just heard the word "french" than if one has just heard the word "fruit". Some games, specifically fighting games in which players develop signature move combinations, also exhibit this property. The article describes how to train an AI to recognize patterns and predict the human player's next move using N-Gram models.
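
A minimal sketch of the technique for N = 3: count how often each move follows each pair of previous moves, then predict the most frequent continuation of the player's last two moves; the move encoding and sample data are illustrative assumptions.

#include <cstdio>
#include <map>
#include <string>

// Minimal trigram sketch: context of 2 previous moves -> counts of the next move.
char predictNext(const std::string& history) {
    if (history.size() < 3) return '?';
    std::map<std::string, std::map<char, int>> counts;
    for (size_t i = 0; i + 2 < history.size(); ++i)
        counts[history.substr(i, 2)][history[i + 2]]++;   // 2-move context -> next move

    std::string context = history.substr(history.size() - 2);
    const std::map<char, int>& options = counts[context];
    char best = '?';
    int bestCount = 0;
    for (const auto& kv : options)
        if (kv.second > bestCount) { best = kv.first; bestCount = kv.second; }
    return best;
}

int main() {
    // P = punch, K = kick, B = block; this player tends to follow two punches with a kick.
    std::string history = "PPKPPKPPBPP";
    std::printf("predicted next move: %c\n", predictNext(history));   // prints K
}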

Practical Natural Language Learning

Jonty Barnes (Lionhead Studios), Jason Hutchens (Amristar)
AI Game Programming Wisdom, 2002.
Abstract: The perception of intelligence seems to be directly related to the observation of behavior that is surprising yet sensible. Natural language interfaces were common features of computer entertainment software prior to the advent of sophisticated computer graphics, but these were often repetitive in nature: encountering the same scripted conversation over and over again quickly becomes boring. Stochastic language models have the ability to acquire various features of a language from observations they make, and these features can be used generatively to produce novel utterances that have the properties of being both surprising and sensible. In this article we show how such a system, when used to host in-game socially-oriented conversations, can greatly contribute towards the subjective impression of intelligence experienced by the player.

Testing Undefined Behavior as a Result of Learning

Jonty Barnes (Lionhead Studios), Jason Hutchens (Amristar)
AI Game Programming Wisdom, 2002.
Abstract: We consider learning to be the essence of Artificial Intelligence. Non-player characters, when granted the ability to learn, are given the potential to surprise and entertain the player in completely unexpected ways. This is very reinforcing from the player's point of view, but a nightmare for a testing department. How can they assure the quality of a game that may behave completely differently depending on who's playing it? In this article we show, via a case study of the computer game "Black & White", exactly how a testing department can achieve their goals when the product they're testing features unpredictable learning AI.

Imitating Random Variations in Behavior using a Neural Network

John Manslow
AI Game Programming Wisdom, 2002.
Abstract: As game AI has increased in sophistication, it has become possible to create computer controlled agents that display remarkably human-like behavior. One of the few indications that an agent is non-organic is the frequently clinical nature of their actions, an effect exacerbated by the often ad hoc mechanisms used to add random variations. This article shows how neural networks can be taught to imitate the actual random variations in behavior that are exhibited by real people. This makes it possible to simulate the playing styles of different sports personalities in unprecedented detail - even the extent to which, for example, the cueing direction and position of the cue ball relative to the cushion affect the accuracy of a pool player's shots. The article assumes minimal knowledge of neural networks and illustrates the techniques through their application to a real game. The CD that accompanies the book contains all the source code for the game, along with that for the neural network class, which is designed as a plug-in component that can easily be transferred to other applications.

Genetic Algorithms: Evolving the Perfect Troll

François Dominic Laramée
AI Game Programming Wisdom, 2002.
Abstract: Genetic Algorithms mimic the process of natural selection to evolve solutions to problems that cannot be solved analytically. Candidate solutions, generated at random, are tested and evaluated for their fitness; the best of them are then bred and the process repeated over many generations, until an individual of satisfactory performance is found. This article explains the biological foundations of genetic algorithms and illustrates their behavior with an example: evolving a troll for a fantasy game.
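
A minimal sketch of the loop (evaluate, select, crossover, mutate), assuming the troll's genome is four combat stats and fitness is simply closeness to a designer-chosen ideal; a real use would score trolls by simulating fights, so everything here is an illustrative assumption.

#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Minimal genetic-algorithm sketch: rank by fitness, keep the better half, and
// rebuild the weaker half from crossover plus occasional mutation.
struct Troll { float stats[4]; };              // strength, speed, armor, aggression

static const float IDEAL[4] = { 8.0f, 3.0f, 6.0f, 9.0f };   // hypothetical design target

float frand(float lo, float hi) { return lo + (hi - lo) * (std::rand() / (float)RAND_MAX); }

float fitness(const Troll& t) {
    float err = 0.0f;
    for (int i = 0; i < 4; ++i) err += (t.stats[i] - IDEAL[i]) * (t.stats[i] - IDEAL[i]);
    return -err;                               // higher is better (closer to the ideal)
}

bool fitter(const Troll& a, const Troll& b) { return fitness(a) > fitness(b); }

int main() {
    std::vector<Troll> pop(40);
    for (Troll& t : pop)
        for (int i = 0; i < 4; ++i) t.stats[i] = frand(0.0f, 10.0f);

    for (int gen = 0; gen < 100; ++gen) {
        std::sort(pop.begin(), pop.end(), fitter);          // rank by fitness
        for (size_t i = pop.size() / 2; i < pop.size(); ++i) {
            // replace the weaker half with offspring of two random survivors
            const Troll& mum = pop[std::rand() % (pop.size() / 2)];
            const Troll& dad = pop[std::rand() % (pop.size() / 2)];
            for (int g = 0; g < 4; ++g) {
                pop[i].stats[g] = (std::rand() % 2) ? mum.stats[g] : dad.stats[g];  // crossover
                if (std::rand() % 10 == 0) pop[i].stats[g] += frand(-1.0f, 1.0f);   // mutation
            }
        }
    }
    std::sort(pop.begin(), pop.end(), fitter);
    std::printf("best troll: str %.1f spd %.1f arm %.1f agg %.1f\n",
                pop[0].stats[0], pop[0].stats[1], pop[0].stats[2], pop[0].stats[3]);
}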

The Dark Art of Neural Networks

Alex J. Champandard (Artificial Intelligence Depot)
AI Game Programming Wisdom, 2002.

Using Lex and Yacc to Parse Custom Data Files

Paul Kelly
Game Programming Gems 3, 2002.

Optimized Machine Learning with GoCap

Thor Alexander (Hard Coded Games)
Game Programming Gems 3, 2002.

Area Navigation: Expanding the Path-Finding Paradigm

Ben Board (Big Blue Box) and Mike Ducker (Lionhead Studios)
Game Programming Gems 3, 2002.

Function Pointer-Based, Embedded Finite-State Machines

Charles Farris (VR1 Entertainment)
Game Programming Gems 3, 2002.

Terrain Analysis in an RTS-The Hidden Giant

Daniel Higgins (Stainless Steel Studios)
Game Programming Gems 3, 2002.

An Extensible Trigger System for AI Agents, Objects, and Quests

Steve Rabin (Nintendo of America)
Game Programming Gems 3, 2002.

Tactical Path-Finding with A*

William van der Sterren (CGF-AI)
Game Programming Gems 3, 2002.

A Fast Approach To Navigation Meshes

Stephen White (Naughty Dog) and Christopher Christensen (Naughty Dog)
Game Programming Gems 3, 2002.

Choosing a Relationship Between Path-Finding and Collision

Thomas Young
Game Programming Gems 3, 2002.

Managing AI with Micro-Threads

Simon Carter (Big Blue Box Studios)
Game Programming Gems 2, 2001.

Micro-Threads for Game Object AI

Bruce Dawson (Humongous Entertainment)
Game Programming Gems 2, 2001.
Abstract: Presents code and concepts to create hundreds of low-overhead threads by manipulating the stack. This technique has notable benefits in terms of AI load balancing and the author has implemented the architecture on systems ranging from the PC to the GameBoy.

A Generic Fuzzy State Machine in C++

Eric Dybsand (Glacier Edge Technology)
Game Programming Gems 2, 2001.
Abstract: Fuzzy Logic provides an attractive alternative to more crisp forms of finite state decision making. This article builds on the presentation of the Finite-State Machine class from the first Game Programming Gems book, by introducing a generic Fuzzy-State Machine class in C++. The concepts of fuzzy logic are presented and an example of applicability for computer game AI is offered. The FSMclass and FSMstate classes from the first GEMS book are converted into fuzzy logic versions, and source code is provided for review.

Using a Neural Network in a Game: A Concrete Example

John Manslow
Game Programming Gems 2, 2001.
Abstract: Neural networks are a powerful artificial intelligence technique that are based on an abstraction of the neurocomputational functions of the human brain. One of their most important characteristics is that they can learn by example, and do not need to be programmed in the conventional sense. For example, Codemasters (the developers of Colin McRae Rally 2.0) discovered that a neural network could learn how to drive a rally car by imitating the developers' play, thus avoiding the need to construct a complex set of rules. This article guides the reader through all the steps that are necessary to incorporate neural networks into their own game. Assuming no prior understanding, the article presents a case study of applying one of the most popular, easy to use, and effective neural networks, the multilayer perceptron, to a real game. All the steps required for successful neural network development are described, as are the most common problems, and their solutions. The CD that accompanies the book includes all the source code for the game, and the neural network class that lies at the heart of its AI. The class is designed to be used as a drop-in module in other games and hence contains no application specific code.

A High-Performance Tile-based Line-of-Sight and Search System

Matt Pritchard (Ensemble Studios)
Game Programming Gems 2, 2001.

An Architecture for RTS Command Queuing

Steve Rabin (Nintendo of America)
Game Programming Gems 2, 2001.
Abstract: Explains the concept of Command Queuing in an RTS along with several ways to implement it. Command Queuing is the idea that the player should be able to queue up any sequence of command orders (Move, Attack, Patrol, Repair, etc.) for a particular unit. Some commands that cycle, such as Patrol, present specific challenges in achieving the right behavior. Solutions to these difficulties are discussed along with detailed diagrams.
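
A minimal sketch of the queuing idea, assuming each order "completes" in a single update (real orders run over many frames) and that cycling orders such as Patrol re-enqueue themselves; the Command fields and completion rule are illustrative assumptions.

#include <cstdio>
#include <deque>
#include <string>

// Minimal command-queue sketch for an RTS unit: orders are queued rather than
// replacing each other, and a cycling order rejoins the back of the queue when
// it completes so the unit keeps looping.
struct Command {
    std::string type;      // "Move", "Attack", "Patrol", ...
    int targetX, targetY;
    bool cycles;           // Patrol-style orders go back on the end of the queue
};

class Unit {
public:
    void queueCommand(const Command& c) { queue_.push_back(c); }
    void update() {
        if (queue_.empty()) return;
        Command c = queue_.front();
        queue_.pop_front();
        std::printf("%s to (%d,%d)\n", c.type.c_str(), c.targetX, c.targetY);
        if (c.cycles) queue_.push_back(c);     // cycling orders re-enqueue themselves
    }
private:
    std::deque<Command> queue_;
};

int main() {
    Unit u;
    u.queueCommand({ "Move",   10, 10, false });
    u.queueCommand({ "Patrol", 20,  5, true  });
    u.queueCommand({ "Attack", 30,  8, false });
    for (int i = 0; i < 5; ++i) u.update();    // Move, Patrol, Attack, Patrol, Patrol
}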

Strategies for Optimizing AI

Steve Rabin (Nintendo of America)
Game Programming Gems 2, 2001.
Abstract: Presents 11 strategies for optimizing AI, along with tips and examples for each.
1. Use event-driven behavior rather than polling.
2. Reduce redundant calculations.
3. Centralize cooperation with managers.
4. Run the AI less often.
5. Distribute the processing over several frames.
6. Employ level-of-detail AI.
7. Solve only part of the problem.
8. Do the hard work offline.
9. Use emergent behavior to avoid scripting.
10. Amortize query costs with continuous bookkeeping.
11. Rethink the problem.

Influence Mapping

Paul Tozour (Ion Storm Austin)
Game Programming Gems 2, 2001.
Abstract: Influence mapping is a powerful and proven AI technique for reasoning about the world on a spatial level. Although influence maps are most often used in strategy games, they have many uses in other genres as well. Among other things, an influence map allows your AI to assess the major areas of control by different factions, precisely identify the boundary of control between opposing forces, identify "choke points" in the terrain, determine which areas require further exploration, and inform the base-construction AI systems to allow you to place buildings in the most appropriate locations.
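
A minimal sketch of the technique: seed a grid with positive influence for friendly units and negative influence for enemies, spread it with a few smoothing passes, and read off areas of control; grid size, falloff, and unit positions are illustrative assumptions.

#include <cstdio>

// Minimal influence-map sketch: friendly units add positive influence, enemy
// units add negative influence, and smoothing passes spread each value to its
// neighbours. Cells near zero between strong positive and negative regions
// approximate the boundary of control.
static const int W = 8, H = 8;

int main() {
    float inf[H][W] = {};

    // seed the map: +1 per friendly unit, -1 per enemy unit
    inf[1][1] += 1.0f; inf[2][1] += 1.0f;      // friendly units on the left
    inf[6][6] -= 1.0f; inf[5][6] -= 1.0f;      // enemy units on the right

    // spread influence with a few box-blur passes
    for (int pass = 0; pass < 4; ++pass) {
        float next[H][W] = {};
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                float sum = 0.0f; int n = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                        sum += inf[ny][nx]; ++n;
                    }
                next[y][x] = inf[y][x] * 0.5f + (sum / n) * 0.5f;   // keep some of the old value
            }
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) inf[y][x] = next[y][x];
    }

    // print a crude view: '+' our side, '-' their side, '.' contested/neutral
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x)
            std::putchar(inf[y][x] > 0.01f ? '+' : (inf[y][x] < -0.01f ? '-' : '.'));
        std::putchar('\n');
    }
}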

Strategic Assessment Techniques

Paul Tozour (Ion Storm Austin)
Game Programming Gems 2, 2001.
Abstract: This article discusses two useful techniques for strategic decision-making. These are easiest to understand in the context of strategy game AI, but they have applications to other game genres as well. The resource allocation tree describes a data structure that allows an AI system to continuously compare its desired resource allocation to its actual current resources in order to determine what to build or purchase next. The dependency graph is a data structure that represents a game's "tech tree," and we discuss a number of ways that an AI can perform inference on the dependency graph in order to construct long-term strategic plans and perform human-like reasoning about what its opponents are attempting to accomplish.

Terrain Reasoning for 3D Action Games

William van der Sterren (CGF-AI)
Game Programming Gems 2, 2001.

Flocking with Teeth: Predators and Prey

Steven Woodcock (Wyrd Wyrks)
Game Programming Gems 2, 2001.

Expanded Geometry for Points-of-Visibility Pathfinding

Thomas Young
Game Programming Gems 2, 2001.

Optimizing Points-of-Visibility Pathfinding

Thomas Young
Game Programming Gems 2, 2001.

Imploding Combinatorial Explosion in a Fuzzy System

Michael Zarozinski (Louder Than A Bomb! Software)
Game Programming Gems 2, 2001.

Designing A General Robust AI Engine

Steve Rabin (Nintendo of America Inc)
Game Programming Gems, 2000.

A Finite-State Machine Class

Eric Dybsand (Glacier Edge Technology)
Game Programming Gems, 2000.
Abstract: Simple Finite-State Machines are powerful tools used in many computer game AI implementations. This article introduces a generic C++ class that implements a Finite-State Machine, useful to the novice for learning about Finite-State Machines and as a building block for more complex AI implementations in development projects. The processes of a Finite-State Machine are presented, a sample game implementation is offered as an example of Finite-State Machine usage, and source code illustrates how finite-state functionality can be implemented in a generic manner.

Game Trees

Jan Svarovsky (Mucky Foot Productions)
Game Programming Gems, 2000.

The Basics of A* for Path Planning

Bryan Stout
Game Programming Gems, 2000.

A* Aesthetic Optimizations

Steve Rabin (Nintendo of America)
Game Programming Gems, 2000.

A* Speed Optimizations

Steve Rabin (Nintendo of America)
Game Programming Gems, 2000.

Simplified 3D Movement and Pathfinding Using Navigation Meshes

Greg Snook (Mighty Studios)
Game Programming Gems, 2000.

Flocking: A Simple Technique for Simulating Group Behavior

Steven Woodcock (Wyrd Wyrks)
Game Programming Gems, 2000.

Fuzzy Logic for Video Games

Mason McCuskey (Spin Studios)
Game Programming Gems, 2000.

A Neural-Net Primer

André LaMothe (Xtreme Games)
Game Programming Gems, 2000.
