AI Game Programming Wisdom 3
53 Articles, Edited by Steve Rabin, 2006.


    Section 1: General Wisdom

    Custom Tool Design for Game AI

    P.J. Snavely (Sony Computer Entertainment America)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Artificial intelligence systems in games have become so complex that often one engineer cannot write the entire structure alone. Using the Basketball Artificial Intelligence Tool (BAiT), we were able to integrate the artificial intelligence for NBA 2007 based entirely upon designer data entry and manipulation. While this approach has many positives, there are also some drawbacks to implementing a system like this, as well as some necessary precautions that one should take before even attempting this process.

    Using STL and Patterns for Game AI

    James Freeman-Hargis (Midway Games)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Game AI programmers are notorious for reinventing the wheel. But many of the data structures, algorithms and architectures they need have already been done in flexible and reusable ways. This article is intended to serve as a reference for a variety of patterns. While entire volumes have been written to discuss the STL and design patterns in general, this article will provide an introductory overview of the STL and inspect those specific design patterns that have proven the most useful in game AI development. We need to talk about the STL because it provides a series of pre-defined data structures that will not only make life simpler, but which take much of the burden of nuts and bolts implementation away and allow the AI developer to focus on what's really interesting anyway—the AI.

    Declarative AI Design for Games—Considerations for MMOGs

    Nathan Combs
    AI Game Programming Wisdom 3, 2006.
    Abstract: The design of behaviors in games and massively multiplayer online games (MMOGs) is based on a style of scripting that is consistent with a cinematic perspective of game design. This style is paradigmatic of how AI is conceptualized in games. This article claims that this approach is not likely to scale in the future and calls for a more declarative style of developing and conceptualizing AI. The objective of this article is to acquaint games AI developers with thoughts and techniques that form a declarative AI design.

    Designing for Emergence

    Benjamin Wootton (University of Leeds)
    AI Game Programming Wisdom 3, 2006.
    Abstract: As gamers demand more realistic AI and more dynamic, non-linear, and interactive game worlds, traditional methods of developing AI are beginning to show their limitations in terms of scalability, robustness, and general fitness for purpose. Emergence and the broader "emergent approach" to game design hold great potential as an efficient tool for avoiding these limitations by allowing high-level behaviors and flexible game environments to emerge from low-level building blocks without the need for any hard-coded or scripted behaviors. Our goals in this article are to both demonstrate this case and to explain in practical terms how emergence can be captured by the game designer.

    Fun Game AI Design for Beginners

    Matt Gilgenbach (Heavy Iron Studios)
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article is meant to provide food for thought on a number of issues involving AI design. Creating predictable, understandable and consistent AI that doesn't beat the player all the time is no easy task. The AI programmer must make sure that the AI gives the player time to react, doesn't have cheap shots against the player and isn't too simple or too complex. The AI is meant to enrich the player's enjoyment of the game, not to frustrate them, so these rules are important to consider in order to create an enjoyable experience for the player. If you are developing a game AI the best thing you can do (besides considering these rules) is to come up with your own rules from games that you enjoy playing.

    Strategies for Multi-Processor AI

    Sergio Garces (Pyro Studios)
    AI Game Programming Wisdom 3, 2006.
    Abstract: With multi-processor hardware becoming commonplace, it is necessary to develop a new architecture that allows the AI engine to execute in parallel in multiple threads. We describe several approaches that try to minimize dependencies and avoid locking, in order to build an efficient concurrent system, while keeping productivity high and preventing threading bugs.

    Academic AI Research and Relations with the Game Industry

    Christian Baekkelund (Massachusetts Institute of Technology (MIT))
    AI Game Programming Wisdom 3, 2006.
    Abstract: Historically, a substantial divide has existed between game AI developers and the general AI research community. Game AI developers have typically viewed academic research AI as too far removed from practical use, and academic AI researchers have remained largely uninterested in many of the common problems faced in game development. However, each group has much to gain from better communication and cooperation. While a great deal needs to be done from both sides of the divide, this article will focus on what game developers can do to better understand the academic AI research community and form better relations.

    Writing AI as Sport

    Peter Cowling (University of Bradford, UK)
    AI Game Programming Wisdom 3, 2006.
    Abstract: AI has been a sport for many decades. In this article we discuss some of the major competitions between AI game players and discuss the impact on the media and the public of success in these competitions. We discuss some of our own experiences in running AI competitions and provide pointers to running a successful competition. We consider non-programmatic ways that AI has been created, and how this might be used in a new genre of game where the player trains the AI for each player rather than controlling them directly.



    Section 2: Pathfinding

    Cooperative Pathfinding

    David Silver (University of Alberta)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Cooperative pathfinding is a general technique for coordinating the movements of multiple units. Units communicate their planned paths, enabling other units to avoid their intended routes. This article explains how to implement cooperative pathfinding using a space-time A* search. Moreover, it provides a number of improvements and optimizations, which allow cooperative pathfinding to be implemented both efficiently and robustly.
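    As a rough illustration of the idea (not code from the article), a space-time A* search can consult a shared reservation table so that a unit never expands a cell another unit has already claimed for that time step; the grid layout and names below are our own assumptions.

        // Minimal C++ sketch of a space-time reservation table.
        #include <map>
        #include <tuple>

        struct Reservations {
            // Maps (x, y, timestep) -> id of the unit that reserved the cell.
            std::map<std::tuple<int, int, int>, int> table;

            bool IsFree(int x, int y, int t) const {
                return table.find(std::make_tuple(x, y, t)) == table.end();
            }
            void Reserve(int x, int y, int t, int unitId) {
                table[std::make_tuple(x, y, t)] = unitId;
            }
        };

        // Inside one unit's A* expansion, a neighbor at time t + 1 is only
        // considered if no other unit has reserved it:
        //     if (reservations.IsFree(nx, ny, t + 1)) { /* push successor */ }
        // Once a path is found, the unit reserves every (cell, time) pair on
        // it so that later searches route around the planned route.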

    Improving on Near-Optimality: More Techniques for Building Navigation Meshes

    Fredrik Farnstrom (Rockstar San Diego)
    AI Game Programming Wisdom 3, 2006.
    Abstract: New techniques for automatically building navigation meshes for use in pathfinding are presented, building on Paul Tozour's article "Building a Near-Optimal Navigation Mesh." Polygons are subdivided around walls and other static obstacles with precise cuts that reduce the number of polygons. The same subdivision method can be used for merging overlapping polygons, and the height of the agent is taken into account by extruding polygons. An additional technique for merging the resulting polygons is presented. To improve performance, a simple spatial data structure based on a hash table is used.

    Smoothing a Navigation Mesh Path

    Geraint Johnson (Sony Computer Entertainment Europe)
    AI Game Programming Wisdom 3, 2006.
    Abstract: It is becoming increasingly common to use a navigation mesh as the search space representation for pathfinding in games. We present a path-smoothing algorithm for use with a navigation mesh. The algorithm converts a rough path of navigation mesh cells found by A* into a curved path which an agent can follow. We use Bézier splines to generate a rounded curve which is guaranteed to stay on the surface of the navigation mesh, keeping the agent safe. We explain a string-pulling technique used to make the smoothed path as direct as possible.

    Preprocessed Pathfinding Using the GPU

    Renaldas Zioma (Digital Illusions Canada Inc.)
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article proposes GPU-based implementations for two popular algorithms used to solve the all-pairs shortest paths problem: Dijkstra's algorithm, and the Floyd-Warshall algorithm. These algorithms are used to preprocess navigation mesh data for fast pathfinding. This approach can offload pathfinding-related CPU computations to the GPU at the expense of latency. However, once the solution table is generated, this approach minimizes the latency time for a specific path search, thus giving the game a better sense of interactivity. The biggest benefit of this approach is gained in systems with multiple agents simultaneously requesting paths in the same search space. Although the article describes a GPU-specific implementation for a navigation mesh, any other multi-processor environment or discrete search space representation can be used.
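    For reference, the CPU form of the Floyd-Warshall all-pairs computation that such a preprocessing step performs looks roughly like the sketch below (row-major distance matrix, "infinity" for unconnected pairs); this is only an assumed baseline, not the article's GPU implementation.

        // Floyd-Warshall all-pairs shortest paths on an n x n distance matrix.
        #include <vector>

        void FloydWarshall(std::vector<float>& dist, int n) {
            for (int k = 0; k < n; ++k)
                for (int i = 0; i < n; ++i)
                    for (int j = 0; j < n; ++j) {
                        float viaK = dist[i * n + k] + dist[k * n + j];
                        if (viaK < dist[i * n + j])
                            dist[i * n + j] = viaK;  // routing through k is shorter
                    }
        }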



    Section 3: Movement

    Flow Fields for Movement and Obstacle Avoidance

    Bob Alexander (Zipper Interactive Inc.)
    AI Game Programming Wisdom 3, 2006.
    Abstract: There are many algorithms in AI which can produce conflicting results. For example, in collision avoidance, avoiding one object can result in hitting another. The AI must resolve these conflicts and find a solution that avoids all objects simultaneously. Resolution is often achieved using iterative processing or prioritization techniques. However, by using flow fields this problem can be solved for all objects simultaneously. In this article we will see how flow fields can be an elegant solution to many other problems as well, such as smoothing A* results and controlling movement during battle.
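    As a minimal sketch of the flow-field idea (our own illustration, with an assumed distance falloff), each obstacle contributes a repulsive vector and all contributions are summed into a single steering direction, so every obstacle is accounted for at once:

        #include <cmath>
        #include <vector>

        struct Vec2 { float x, y; };

        Vec2 SampleFlowField(const Vec2& pos, const std::vector<Vec2>& obstacles,
                             float radius) {
            Vec2 flow = {0.0f, 0.0f};
            for (const Vec2& o : obstacles) {
                float dx = pos.x - o.x, dy = pos.y - o.y;
                float dist = std::sqrt(dx * dx + dy * dy);
                if (dist > 0.0f && dist < radius) {
                    // Push away from the obstacle, fading out with distance.
                    float strength = (radius - dist) / (radius * dist);
                    flow.x += dx * strength;
                    flow.y += dy * strength;
                }
            }
            // A goal direction (e.g. along a smoothed A* path) would be added
            // here, so avoidance and path following resolve in a single sum.
            return flow;
        }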

    Autonomous Camera Control with Constraint Satisfaction Methods

    Owen Bourne and Abdul Sattar (Institute for Integrated and Intelligent Systems)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Producing a robust autonomous camera that can interact with the dense and dynamic environments of interactive games is a difficult and tricky process. Avoiding occlusions, displaying multiple targets, and coherent movements are all problems that are difficult to solve. The constraint satisfaction approach can be used to effectively solve these problems while providing a number of benefits, including extensibility, robustness, and intelligence. This article covers the theory and implementation details for a fully autonomous constraint-based camera system that can be used in arbitrary environments. The included source code demonstrates the use of the camera system in an interactive environment.

    Insect AI 2: Implementation Strategies

    Nick Porcino (LucasArts, a Lucasfilm Company)
    AI Game Programming Wisdom 3, 2006.
    Abstract: The integration of AI into a game engine where the agent is simulated and run under physical control can be a challenge. The AI's internal model of the world is likely to be very simple relative to the complexity of the game world, yet the AI has to function in a reasonable and efficient manner. This article shows how to usefully integrate Insect AI into systems where the physics, collision, and animation systems are black boxes not directly under AI control, and are not even directly accessible by the AI. It also discusses practicalities of implementation including integration with pre-existing AI algorithms in a game engine.

    Intelligent Steering Using Adaptive PID Controllers

    Euan Forrester (Next Level Games Inc.)
    AI Game Programming Wisdom 3, 2006.
    Abstract: As physics systems become more complex and are embedded more deeply into our games, our jobs as AI programmers become more difficult. AI characters need to operate under the same physical restrictions as the player to maintain the visual continuity of the game, to reduce the player's sense of being cheated by the computer, and to reduce the development workload necessary to create multiple physics systems which must interact with one another. Although this problem can be solved by using standard PID (Proportional-Integral-Derivative) controllers, they are difficult to tune for physics systems whose characteristics vary over time. Fortunately, control engineering provides a solution to this problem: adaptive controllers. This article focuses on Model Reference Adaptive Controllers: controllers which attempt to make the AI character's behavior match a predefined model as closely as possible within the physical constraints imposed by the game. The article comes with full source code for a demo that lets you change the handling characteristics of a missile flying towards a moving target, and watch while the PID coefficients are updated in real-time.
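    For readers unfamiliar with PID control, a plain (non-adaptive) controller is only a few lines; the adaptive, model-reference machinery the article describes sits on top of something like this sketch (names and gains are illustrative):

        // Basic PID controller: output = kp*e + ki*integral(e) + kd*de/dt.
        struct PidController {
            float kp, ki, kd;        // proportional, integral, derivative gains
            float integral = 0.0f;
            float prevError = 0.0f;

            float Update(float error, float dt) {
                integral += error * dt;
                float derivative = (error - prevError) / dt;
                prevError = error;
                return kp * error + ki * integral + kd * derivative;
            }
        };
        // A model-reference adaptive controller would additionally adjust
        // kp/ki/kd at runtime so the steered object's response converges on a
        // predefined reference model.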

    Fast, Neat, and Under Control: Arbitrating Between Steering Behaviors

    Heni Ben Amor, Jan Murray, and Oliver Obst (University of Koblenz)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Steering behaviors are a convenient way of creating complex and lifelike movements from simple reactive procedures. However, the process of merging those behaviors is not trivial and the resulting steering command can lead to suboptimal or even catastrophic results. This article presents a solution to these problems by introducing inverse steering behaviors (ISBs) for controlling physical agents. Based on the original concept of steering behaviors, ISBs facilitate improved arbitration between different behaviors by doing a cost-based analysis of several steering vectors instead of relying on one solution only.

    Real-Time Crowd Simulation Using AI.implant

    Paul Kruszewski (BGT BioGraphic Technologies, Inc.)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Next-generation gaming hardware such as the Xbox 360 and PlayStation 3 will allow the creation and visualization of large visually rich but virtually uninhabited cities. It remains an open problem to efficiently create and control large numbers of vehicles and pedestrians within these environments. We present a system originating from the special effects industry, and expanded in the military simulation industry, that has been successfully evolved into a practical and scalable real-time urban crowd simulation game pipeline with a behavioral fidelity that previously has only been available for non-real-time applications such as films and cinematics.



    Section 4: Architecture

    Flexible Object-Composition Architecture

    Sergio Garces (Pyro Studios)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Object-composition architectures provide an easy way to assemble game objects as a collection of components, each of them with a specific and modular function. Archetypes are used to define what components an object consists of, and therefore what objects do. Archetype definition is data-driven, empowering designers to experiment with gameplay. The last ingredient in the mix is good tools, which might take advantage of data inheritance to increase productivity.

    A Goal-Based, Multi-Tasking Agent Architecture

    Elizabeth Gordon (Frontier Developments, Ltd.)
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article describes a goal-based, multi-tasking agent architecture for computer game characters. It includes some mechanisms for representing and requesting information about the game world, as well as a method for selecting a set of compatible goals to execute based on the availability of necessary items. Finally, the article includes a brief discussion of techniques for designing and debugging goal-based systems.

    Orwellian State Machines

    Igor Borovikov (Sony Computer Entertainment America)
    AI Game Programming Wisdom 3, 2006.
    Abstract: The article explores a methodology for building game AI based on subsumption, command hierarchy, messaging and finite state machines. The approach is derived from a metaphor of bureaucratic dictatorship. This metaphor helps in the analysis and practical design of particular AI subsystems on both the individual and group layers. The resulting architecture is called an Orwellian State Machine (OSM).

    A Flexible AI System through Behavior Compositing

    Matt Gilgenbach (Heavy Iron Studios), Travis McIntosh (Naughty Dog)
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article proposes a new way of defining AI states as modular behaviors, so code can be reused between NPCs with a minimal amount of effort. With this system, state transitions are not explicitly recorded in a table like many finite state machine implementations. Every behavior has a "runnable" condition and a priority, so the state transitions are determined by checking these conditions in sorted order. Common issues that arise with this implementation are addressed including performance, ease of refactoring, and interdependencies.
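    A minimal sketch of this style of selection (our own naming, not the article's code): each behavior reports whether it can run and how important it is, and the arbiter simply picks the highest-priority runnable behavior each frame instead of consulting a transition table.

        #include <algorithm>
        #include <vector>

        class Behavior {
        public:
            virtual ~Behavior() {}
            virtual bool IsRunnable() const = 0;  // may this behavior run now?
            virtual int  Priority()   const = 0;  // higher value wins
            virtual void Update(float dt) = 0;
        };

        Behavior* SelectBehavior(std::vector<Behavior*>& behaviors) {
            // Check behaviors in descending priority order; the first one whose
            // runnable condition holds becomes the active state.
            std::sort(behaviors.begin(), behaviors.end(),
                      [](const Behavior* a, const Behavior* b) {
                          return a->Priority() > b->Priority();
                      });
            for (Behavior* b : behaviors)
                if (b->IsRunnable())
                    return b;
            return nullptr;  // fall back to an idle behavior
        }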

    Goal Trees

    Geraint Johnson (Sony Computer Entertainment Europe)
    AI Game Programming Wisdom 3, 2006.
    Abstract: We present a generic AI architecture for implementing the behavior of game agents. All levels of behavior, from tactical maneuvers to path-following and steering, are implemented as goals. Each goal can set up one or more subgoals to achieve its aim, so that a tree structure is formed with the primary goal of the agent at its root. Potential primary goals are experts on when they should be selected, and scripts can also force behavior at any level by providing a sequence of primary goals. The architecture is more robust than a finite state machine (FSM) and more efficient than a full planning system.

    A Unified Architecture for Goal Planning and Navigation

    Dominic Filion
    AI Game Programming Wisdom 3, 2006.
    Abstract: Graph networks, traversed by standard algorithms such as A*, are the staple of most pathfinding systems. The formalization of navigation algorithms into a search graph that represents spatial positioning is one of the most effective ideas in game AI. However ubiquitous graph networks may be in pathfinding, their use in more general problem domains in modern games seems to be less common. Couldn't we extend the standard pathfinding arsenal—graph networks and A*—to other problem sets? This is the idea that we will be exploring in this article.

    Prioritizing Actions in a Goal-Based RTS AI

    Kevin Dill (Blue Fang Games)
    AI Game Programming Wisdom 3, 2006.
    Abstract: In this article we outline the architecture of our strategic AI and discuss a variety of techniques that we used to generate priorities for its goals. This engine provided the opposing player AI of our real-time strategy games Kohan 2: Kings of War and Axis & Allies. The architecture is easily extensible, flexible enough to be used in a variety of different types of games, and sufficiently powerful to provide a good challenge for an average player on a random, unexplored map without unfair advantages.

    Extending Simple Weighted-Sum Systems

    Sergio Garces (Pyro Studios)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Decision-making is an important function of every AI engine. One popular technique involves calculating a weighted sum, which combines a number of factors into a desirability value for each option, and then selecting the option with the highest score. Some extensions, such as the incorporation of behavioral inertia, the use of response curves, or the combination of the system with a rule-based engine, can turn the weighted sum into a very robust, flexible approach for controlling behavior.
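    A minimal sketch of such a scorer, with an assumed quadratic response curve and a simple inertia bonus (the factor names and constants are illustrative, not from the article):

        #include <vector>

        struct Factor {
            float value;   // normalized input, e.g. distance or health in [0, 1]
            float weight;  // designer-tuned importance
        };

        // Response curve: reshape a raw factor before it is weighted.
        inline float ResponseCurve(float x) { return x * x; }

        float Desirability(const std::vector<Factor>& factors,
                           bool isCurrentOption, float inertiaBonus) {
            float score = 0.0f;
            for (const Factor& f : factors)
                score += f.weight * ResponseCurve(f.value);
            // Behavioral inertia: slightly favor the currently running option
            // to avoid oscillating between near-equal choices.
            if (isCurrentOption)
                score += inertiaBonus;
            return score;  // the option with the highest score is selected
        }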

    AI Waterfall: Populating Large Worlds Using Limited Resources

    Sandeep V. Kharkar (Indie Built, Inc.)
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article presents an architecture that simplifies the process of populating large worlds with interesting and varied actors using a relatively small number of AI agents. The architecture derives its concept from faux waterfalls that recycle the same water to create the illusion of continuous flow. The architecture can be broken down into two distinct parts: a director class that moves the actors around the stage and provides them with a script for the role they play, and a set of game-specific actors that play the part they are assigned until they are asked to go back in the wings for a costume change. One section of the article is dedicated to optimization techniques for the architecture. The code for the underlying architecture is included with the article.

    An Introduction to Behavior-Based Systems for Games

    Aaron Khoo (Microsoft)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Behavior-based systems are an efficient way of controlling NPCs in video games. By taking advantage of simpler propositional logic, these systems are able to reason efficiently and react quickly to changes in the environment. The developer builds the AI system one behavior layer at a time, and then aggregates the results of all the behaviors into a final output value using a resolution system. The resulting systems are equivalent to finite state machines, but are not constructed in the traditional state-transition manner. Such a behavior-based system can often be mostly stateless, hence avoiding most of the messy state transitions that need to be built into FSMs to handle various contingencies.

    Simulating a Plan

    Petar Kotevski (Genuine Games)
    AI Game Programming Wisdom 3, 2006.
    Abstract: The article describes a methodology of supplementing traditional FSMs with contextual information about the internal state of the agent and the environment that the agent is in, by defining game events and deriving rules for responses to a given game event. This creates a completely non-scripted experience that varies with every different player, because in essence the system responds to game events generated by the player himself. By defining simple rules for enemy behavior and environments in which those rules can be clearly seen, it is possible to simulate group behavior where no underlying code for it is present. The system described is completely deterministic, thus easy to maintain, QA, and debug. It is also not computationally expensive, so rather large populations of AI agents can be simulated using the proposed system.



    Section 5: Tactics and Planning

    Probabilistic Target Tracking and Search Using Occupancy Maps

    Damián Isla (Bungie Studios)
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article will introduce Occupancy Maps, a technique for probabilistically tracking object positions. Occupancy Maps, an application of a broader Expectation Theory, can result in more interesting and realistic searching behaviors, and can also be used to generate emotional reactions to search events, like surprise (at finding a target in an unexpected place) and confusion (at failing to find a target in an expected place). It is also argued that the use of more in-depth knowledge-modeling techniques such as Occupancy Maps can relieve some of the complexity of a traditional FSM or HFSM approach to search behavior.
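    To convey the flavor of the technique (a simplified sketch with an assumed grid layout and diffusion rate, not the article's code): each tick the target's probability mass spreads into neighboring cells, cells the searcher can see are cleared, and the map is renormalized.

        #include <cstddef>
        #include <vector>

        void DiffuseAndObserve(std::vector<float>& p, int w, int h,
                               const std::vector<bool>& visible, float rate) {
            std::vector<float> next(p.size(), 0.0f);
            for (int y = 0; y < h; ++y)
                for (int x = 0; x < w; ++x) {
                    int i = y * w + x;
                    next[i] += p[i] * (1.0f - rate);
                    float spill = p[i] * rate * 0.25f;  // split over 4 neighbors
                    if (x > 0)     next[i - 1] += spill; else next[i] += spill;
                    if (x < w - 1) next[i + 1] += spill; else next[i] += spill;
                    if (y > 0)     next[i - w] += spill; else next[i] += spill;
                    if (y < h - 1) next[i + w] += spill; else next[i] += spill;
                }
            // Observation: visible cells found empty drop to zero probability.
            float total = 0.0f;
            for (std::size_t i = 0; i < next.size(); ++i) {
                if (visible[i]) next[i] = 0.0f;
                total += next[i];
            }
            if (total > 0.0f)  // renormalize so the map remains a distribution
                for (float& v : next) v /= total;
            p.swap(next);
        }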

    Dynamic Tactical Position Evaluation

    Remco Straatman and Arjen Beij (Guerrilla Games), William van der Sterren (CGF-AI)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Dynamic tactical position evaluation is essential in making tactical shooters less linear and more responsive to the player and to changes in the game world. Designer-placed hints for positioning and detailed scripting are impractical for games with unpredictable situations due to player freedom and dynamic environments. This article describes the techniques used to address these issues for Guerrilla's console titles Killzone and Shellshock Nam '67. The basic position evaluation mechanism is explained, along with its application to selecting tactical positions and finding tactical paths. Some alternative uses of the technique are given, such as generating intelligent scanning positions and suppressive fire, and the practical issues of configuration and performance are discussed.

    Finding Cover in Dynamic Environments

    Christian J. Darken (The MOVES Institute), Gregory H. Paull (Secret Level Inc.)
    AI Game Programming Wisdom 3, 2006.
    Abstract: In this article, we describe our approach to improved cover finding with an emphasis on adaptability to dynamic environments. The technique described here combines level annotation with the sensor grid algorithm. The strength of level annotation is its modest computational requirements. The strength of the sensor grid algorithm is its ability to handle dynamic environments and to find smaller cover opportunities in static environments. Each approach is useful by itself, but combining the two can provide much of the benefit of both. In a nutshell, our approach relies on cover information stored in the candidate cover positions placed throughout the level whenever possible and performs a focused run-time search in the immediate vicinity of the agent if the level annotation information is insufficient. This allows it to be fast and yet able to react to changes in the environment that occur during play.

    Coordinating Teams of Bots with Hierarchical Task Network Planning

    Hector Munoz-Avila and Hai Hoang (Lehigh University)
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article presents the use of Hierarchical-Task-Network (HTN) representations to model strategic game AI. We demonstrate the use of hierarchical planning techniques to coordinate a team of bots in an FPS game.



    Section 6: Genre Specific

    Training Digital Monsters to Fight in the Real World

    James Boer and John Corpening (ArenaNet)
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article discusses how we approached and solved the problem of creating compelling AI agents for Digimon Rumble Arena 2, a one to four-player brawler. This consisted of two major challenges: How to pathfind through and respond intelligently to highly dynamic and interactive environments, and how to program a wide variety of characters to play effectively in ten different game types without incurring a combinatorial explosion of code complexity.

    The Suffering: Game AI Lessons Learned

    Greg Alt (Surreal Software)
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article presents a collection of lessons learned through building and evolving an AI architecture through the development of three games for PS2, XBox, and PC: The Lord of The Rings: The Fellowship of the Ring; The Suffering; and The Suffering: Ties That Bind. The lessons cover alternate uses for A* and pathfinding, visualizations to aid AI development and debugging, benefits of a fine-grained hierarchical behavior system, and the combination of autonomy and scripted behavior for non-player characters (NPCs).

    Environmental Awareness in Game Agents

    Penny Sweetser (The University of Queensland)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Agents make up an important part of game worlds, ranging from the characters and monsters that live in the world to the armies that the player controls. Despite their importance, agents in current games rarely display an awareness of their environment or react appropriately, which severely detracts from the believability of the game. Some games have included agents with a basic awareness of other agents, but they are still unaware of important game events or environmental conditions. This chapter describes an agent design that combines cellular automata for environmental modeling with influence maps for agent decision-making. The result is simple, flexible game agents that are able to respond to natural phenomena (e.g. rain or fire), while pursuing a goal.

    Fast and Accurate Gesture Recognition for Character Control

    Markus Wöß (Foolscap Vienna)
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article describes a simple, yet fast and accurate, way of gesture recognition that we have used in Punch'n'Crunch, a gesture-based fun-boxing game. The presented system is a very interesting way to control characters, but can also be used to recognize letters, numbers, and other arbitrary symbols. Gestures allow a more natural way for triggering a multitude of different commands.

    Being a Better Buddy: Interpreting the Player's Behavior

    William van der Sterren (CGF-AI)
    AI Game Programming Wisdom 3, 2006.
    Abstract: In shooter games, the player's activity can be interpreted by the AI to recognize certain tactical behaviors. Based on this, the AI can direct the friendly NPCs to better assist the player. To interpret and classify the player's activity, a naïve Bayes classifier is used. With careful design of the inputs to this classifier, some post-processing of its output, and by gathering good training data, the player's activity can be interpreted in an efficient and robust way.
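    The classification step itself is small; as a rough sketch (feature and class layout assumed, not taken from the article), each candidate activity is scored by its prior times the likelihood of every observed input, assuming the inputs are independent:

        #include <cstddef>
        #include <vector>

        // prior[c]         : P(activity c)
        // likelihood[c][f] : P(feature f is observed | activity c)
        // observed         : indices of the features seen this update
        int Classify(const std::vector<float>& prior,
                     const std::vector<std::vector<float>>& likelihood,
                     const std::vector<int>& observed) {
            int best = -1;
            float bestScore = 0.0f;
            for (std::size_t c = 0; c < prior.size(); ++c) {
                float score = prior[c];
                for (int f : observed)
                    score *= likelihood[c][f];  // naive independence assumption
                if (score > bestScore) {
                    bestScore = score;
                    best = static_cast<int>(c);
                }
            }
            return best;  // most likely interpretation of the player's behavior
        }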

    Ant Colony Organization for MMORPG and RTS Creature Resource Gathering

    Jason Dunn (H2Code)
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article provides details about the implementation of ant colonies for pathfinding in massively multiplayer and real-time strategy games. Details include the effects of pheromones and individual ant behavior, as well as what variables to focus on when adapting the provided source code. Readers are taught how to control the elasticity of path seeking and path reinforcement.
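    The pheromone bookkeeping at the heart of such a system is compact; a minimal sketch (evaporation and deposit constants are assumptions, and the article's provided source code will differ):

        #include <vector>

        struct Edge { float pheromone; };

        // Evaporation keeps stale routes from dominating forever; the rate is
        // one of the knobs controlling how "elastic" path seeking is.
        void Evaporate(std::vector<Edge>& edges, float rate) {
            for (Edge& e : edges)
                e.pheromone *= (1.0f - rate);
        }

        // Ants that reach the goal reinforce their route, shorter routes
        // depositing more pheromone per edge.
        void Reinforce(std::vector<Edge>& edges, const std::vector<int>& path,
                       float deposit) {
            float perEdge = deposit / static_cast<float>(path.size());
            for (int edgeIndex : path)
                edges[edgeIndex].pheromone += perEdge;
        }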

    RTS Citizen Unit AI

    Shawn Shoemaker (Stainless Steel Studios)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Unit AI refers to the micro-level artificial intelligence that controls a specific unit in an RTS game, and how that unit reacts to input from the player and the game world. Citizens present a particular challenge for unit AI, as the citizen is a super unit, combining the unit AI of every other RTS unit. This article discusses some real-world problems and solutions for citizen unit AI, taken from the development of three RTS titles, including Empire Earth. In addition, this article discusses additional features necessary for the citizen, such as build queuing and "smart" citizens.

    A Combat Flight Simulation AI Framework

    Phil Carlisle (University of Bolton and Ace Simulations Ltd.)
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article covers the AI framework requirements specific to an air combat based flight simulation. It explains the general AI framework that should already be in place before continuing on to describe the air combat flight simulation specific data structures, algorithms and requirements that need to be in place to deliver a playable AI opponent for such simulations.



    Section 7: Scripting and Dialogue

    Opinion Systems

    Adam Russell (Lionhead Studios)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Modeling the formation and effect of opinions about the player character in a simulated social environment is a difficult problem for game AI, but one increasingly worth tackling. This article discusses some of the wisdom gained during the construction of perhaps the most complex opinion system ever seen in a commercial game, that of Lionhead Studios' Fable.

    An Analysis of Far Cry: Instincts' Anchor System

    Eric Martel (Ubisoft Montreal)
    AI Game Programming Wisdom 3, 2006.
    Abstract: An overview of the anchor system used in Far Cry Instincts to control dynamic cut-scenes and specific character reactions. This article explains a few choices made when the system was designed and then turns into a quick post-mortem style review of the system. Programmers working on a dynamic cut-scene system should find pertinent information that will allow them to build a better and more flexible system.

    Creating a Visual Scripting System

    Matthew McNaughton and Thomas Roy (University of Alberta)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Scripting is frequently used to implement the story-related behavior of characters and other objects in a game. However, writing scripts can be time-consuming and difficult for the game designers responsible for conceiving of and implementing the story. We introduce a powerful, extensible, and expressive visual scripting framework to solve these problems, with a sample implementation. The framework works with any existing scripting language in any game. We used the ideas in this article to develop ScriptEase, a visual scripting tool, and have used it in several successful case studies where 10th grade students developed scripted game modules for Neverwinter Nights.

    Intelligent Story Direction in the Interactive Drama Architecture

    Brian Magerko (Michigan State University)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Interactive drama is a new field of entertainment that attempts to give the player a dramatic situation that is tailored towards their interaction with the storyworld. A particular technique for creating interactive drama is the use of an automated story director, an intelligent agent that is responsible for managing the performance of synthetic characters in response to authored story content and player actions in the world. This article addresses some of the issues with using a story director, and introduces the Interactive Drama Architecture as a case study for story direction in a general architecture.



    Section 8: Learning and Adaptation

    Practical Algorithms for In-Game Learning

    John Manslow
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article describes fast and efficient techniques that can be used for in-game learning. With a strong emphasis on the requirements that techniques must satisfy to be used in-game, it presents a range of practical techniques that can be used to produce learning and adaptation during gameplay, including moving average estimators, probability estimators, percentile estimators, single layer neural networks, nearest neighbor estimators, and decision trees. The article concludes by presenting an overview of two different types of stochastic optimization algorithms and describes a new way to produce adaptive difficulty levels.
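    Two of the listed estimators are small enough to show inline; the sketch below (constants are illustrative) gives an exponential moving average and a smoothed probability estimator of the kind such in-game learning builds on:

        // Exponential moving average: new samples nudge the estimate.
        struct MovingAverage {
            float value = 0.0f;
            float alpha = 0.1f;  // smoothing factor in (0, 1]
            void Add(float sample) { value += alpha * (sample - value); }
        };

        // Probability estimator with Laplace smoothing, so a handful of
        // samples cannot force the estimate to 0 or 1.
        struct ProbabilityEstimator {
            int successes = 0;
            int trials = 0;
            void Add(bool success) { ++trials; if (success) ++successes; }
            float Probability() const {
                return (successes + 1.0f) / (trials + 2.0f);
            }
        };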

    A Brief Comparison of Machine Learning Methods

    Christian Baekkelund (Massachusetts Institute of Technology (MIT))
    AI Game Programming Wisdom 3, 2006.
    Abstract: When considering whether or not to use machine learning methods in a game, it is important to be aware of their capabilities and limitations. As different methods have different strengths and weaknesses, it is paramount that the correct learning method be selected for a given task. This article will give a brief overview of the strengths, weaknesses, and general capabilities of common machine learning methods and the differences between them. Armed with this information, an AI programmer will better be able to go about the task of selecting "The right tool for the job."

    Introduction to Hidden Markov Models

    Robert Zubek (Electronic Arts / Maxis)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Hidden Markov models are a probabilistic technique that provides an inexpensive and intuitive means of modeling stochastic processes. This article introduces the models, presents the computational details of tracking processes over time, and shows how they can be used to track a player's movement and behavior based on scattered and uncertain observations.
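    The tracking update is essentially the forward algorithm; a minimal sketch (matrix layout assumed) that folds the transition model and the likelihood of a new observation into the current belief over hidden states:

        #include <cstddef>
        #include <vector>

        // belief[i]        : P(state i | observations so far)
        // transition[i][j] : P(next state j | current state i)
        // likelihood[j]    : P(new observation | state j)
        std::vector<float> ForwardStep(
                const std::vector<float>& belief,
                const std::vector<std::vector<float>>& transition,
                const std::vector<float>& likelihood) {
            const std::size_t n = belief.size();
            std::vector<float> next(n, 0.0f);
            float total = 0.0f;
            for (std::size_t j = 0; j < n; ++j) {
                float predicted = 0.0f;
                for (std::size_t i = 0; i < n; ++i)
                    predicted += belief[i] * transition[i][j];
                next[j] = predicted * likelihood[j];
                total += next[j];
            }
            if (total > 0.0f)  // normalize to keep a valid distribution
                for (float& v : next) v /= total;
            return next;
        }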

    Preference-Based Player Modeling

    Jeroen Donkers and Pieter Spronck (Universiteit Maastricht, The Netherlands)
    AI Game Programming Wisdom 3, 2006.
    Abstract: This article describes how to create models of players based on their preferences for certain game states and shows how these models can be used to predict a player's actions. We show how this enables the computer to reason more intelligently about its actions, to adapt to the player, and thereby to act as a more challenging and entertaining opponent. The article describes two ways to create models, player model search and probabilistic player model search, and illustrates their application with the help of pseudo-code. Finally, the article provides an example of how these techniques could be used to enhance a computer's diplomatic reasoning in a strategy game.

    Dynamic Scripting

    Pieter Spronck (Universiteit Maastricht, The Netherlands)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Dynamic scripting is a technique that can be used to adapt the behavior of NPCs during gameplay. It creates scripts on the fly by extracting rules from a rulebase according to probabilities that are derived from weights that are associated with each rule. The weights adapt to reflect the performance of the scripts that are generated, so that rules that are consistently associated with the best scripts will quickly develop large weights and be selected more frequently. Dynamic scripting has been successfully applied to a wide range of genres and has been validated experimentally in RTS games and RPGs.
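    In outline (our own constants and names, not the article's released code), the rulebase machinery amounts to weight-proportional selection plus a post-encounter weight update:

        #include <algorithm>
        #include <cstddef>
        #include <cstdlib>
        #include <vector>

        struct Rule {
            float weight;  // adapted over time; condition/action omitted
        };

        // Roulette-wheel selection: a rule is drawn with probability
        // proportional to its weight.
        std::size_t SelectRule(const std::vector<Rule>& rules) {
            float total = 0.0f;
            for (const Rule& r : rules) total += r.weight;
            float pick = total * (std::rand() / static_cast<float>(RAND_MAX));
            for (std::size_t i = 0; i < rules.size(); ++i) {
                pick -= rules[i].weight;
                if (pick <= 0.0f) return i;
            }
            return rules.size() - 1;
        }

        // After an encounter, reward rules used in a successful script and
        // penalize the rest, clamping so no rule can ever disappear entirely.
        void UpdateWeights(std::vector<Rule>& rules,
                           const std::vector<bool>& usedInScript, float fitness) {
            for (std::size_t i = 0; i < rules.size(); ++i) {
                float delta = usedInScript[i] ? fitness : -0.5f * fitness;
                rules[i].weight = std::max(0.1f, rules[i].weight + delta);
            }
        }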

    Encoding Schemes and Fitness Functions for Genetic Algorithms

    Dale Thomas (Q Games)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Genetic algorithms (GAs) have great potential in game AI and they have been widely discussed in the game development community. Many attempts to apply GAs in practice have only led to frustration and disappointment, however, because many introductory texts encourage naïve implementations of GAs that do not include the application specific enhancements that are often required in practice. This article addresses this problem by describing the roles played by the encoding scheme, genetic operators, and fitness function in a GA and describes how each of them can be designed in an application specific way to achieve maximum evolutionary performance.

    A New Look at Learning and Games

    Christian Baekkelund (Massachusetts Institute of Technology (MIT))
    AI Game Programming Wisdom 3, 2006.
    Abstract: Most discussions of the application of learning methods in games adhere to a fairly rigid view of when and where they should be applied. Typically, they advocate the use of such algorithms to facilitate non-player character (NPC) adaptation during gameplay and occasionally promote their use as part of the development process as a tool that can assist in the creation of NPC AI. This article attempts to broaden the discussion over the application of modeling and optimization algorithms that are typically used to produce learning by discussing alternative ways to use them in game AI, as well as more generally in the game development process.

    Constructing Adaptive AI Using Knowledge-Based Neuroevolution

    Ryan Cornelius, Kenneth O. Stanley, and Risto Miikkulainen (The University of Texas at Austin)
    AI Game Programming Wisdom 3, 2006.
    Abstract: Machine learning can increase the appeal of videogames by allowing non-player characters (NPCs) to adapt to the player in real-time. Although techniques such as real-time NeuroEvolution of Augmenting Topologies (rtNEAT) have achieved some success in this area by evolving artificial neural network (ANN) controllers for NPCs, rtNEAT NPCs are not smart out-of-the-box and significant evolution is often required before they develop even a basic level of competence. This article describes a technique that solves this problem by allowing developers to convert their existing finite state machines (FSMs) into functionally equivalent ANNs that can be used with rtNEAT. This means that rtNEAT NPCs will start out with all the abilities of standard NPCs and be able to evolve new behaviors of potentially unlimited complexity.
