Artificial Intelligence: Learning


Learning Winning Policies in Team-Based First-Person Shooter Games

Stephen Lee-Urban, Megan Smith, and Héctor Muñoz-Avila (Lehigh University)
AI Game Programming Wisdom 4, 2008.
Abstract: This article presents the use of an online reinforcement learning algorithm, called RETALIATE, to automatically acquire team AI in FPS domination-style games. We present the learning problem and state model from which we draw some lessons for designing AI in these game genres.

Adaptive Computer Games: Easing the Authorial Burden

Manish Mehta, Santi Ontañón, Ashwin Ram (Georgia Institute of Technology)
AI Game Programming Wisdom 4, 2008.
Abstract: Artificial intelligence behaviors in games are typically implemented using static, hand-authored scripts. Hand-authoring results in two issues. First, it leads to excessive authorial burden where the author has to craft behaviors for all the possible circumstances that might occur in the game world. Second, it results in games that are brittle to changing world dynamics. In this paper, we present our work to address these two issues by presenting techniques that a) reduce the burden of writing behaviors, and b) increase the adaptivity of those behaviors. We describe a behavior learning system that can learn behavior from human demonstrations and also automatically adapt behaviors when they are not achieving their intended purpose.

Player Modeling for Interactive Storytelling: A Practical Approach

David Thue, Vadim Bulitko, and Marcia Spetch (University of Alberta)
AI Game Programming Wisdom 4, 2008.
Abstract: As computer graphics becomes less of a differentiator in the video game market, many developers are turning to AI and storytelling to ensure that their title stands out from the rest. To date, these have been approached as separate, incompatible tasks; AI engineers feel shackled by the constraints imposed by a story, and the story's authors fear the day that an AI character grabs their leading actor and throws him off a bridge. In this article, we attempt to set aside these differences, bringing AI engineers together with authors through a key intermediary: a player model. Following an overview of the present state of storytelling in commercial games, we present PaSSAGE (Player-Specific Stories via Automatically Generated Events), a storytelling AI that both learns and uses a player model to dynamically adapt a game's story. By combining the knowledge and expertise of authors with a learned player model, PaSSAGE automatically creates engaging and personalized stories that are adapted to appeal to each individual player.

Automatically Generating Score Functions for Strategy Games

Sander Bakkes and Pieter Spronck (Maastricht University, The Netherlands)
AI Game Programming Wisdom 4, 2008.
Abstract: Modern video games present complex environments in which their AI is expected to behave realistically, or in a "human-like" manner. One feature of human behavior is the ability to assess the desirability of the current strategic situation. This type of assessment can be modeled in game AI using a "score function." Due to the complex nature of modern strategy games, the determination of a good score function can be difficult. This difficulty arises in particular from the fact that score functions usually operate in an imperfect information environment. In this article, we show that machine learning techniques can produce a score function that gives good results despite this lack of information.
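
The listing does not include code; purely as an illustration of the idea (not taken from the article), the sketch below shows what a score function over strategic features might look like once weights have been learned. The feature names and weights are hypothetical placeholders for values a learning step would produce.

#include <array>

struct StrategicFeatures {
    float armyStrengthRatio;  // our army value relative to the enemy's (estimated)
    float economyRatio;       // our income relative to the enemy's (estimated)
    float mapControl;         // fraction of key regions we hold, 0..1
    float techLead;           // normalized tech advantage, -1..1
};

// Weights assumed to have been produced by a learning step, e.g. fitted
// against the eventual win/loss outcome of logged games.
constexpr std::array<float, 4> kWeights = { 0.45f, 0.25f, 0.20f, 0.10f };

// Higher score = more desirable strategic situation for our side.
float ScorePosition(const StrategicFeatures& f) {
    return kWeights[0] * f.armyStrengthRatio +
           kWeights[1] * f.economyRatio +
           kWeights[2] * f.mapControl +
           kWeights[3] * f.techLead;
}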

Automatic Generation of Strategies

Pieter Spronck and Marc Ponsen (Maastricht University, The Netherlands)
AI Game Programming Wisdom 4, 2008.
Abstract: Machine learning techniques can support AI developers in designing, tuning, and debugging tactics and strategies. In this article, we discuss how a genetic algorithm can be used to automatically discover strong strategies. We concentrate on the representation of a strategy in the form of a chromosome, the design of genetic operators to manipulate such chromosomes, the design of a fitness function, and discuss the evolutionary process itself. The techniques and their results are demonstrated in the game of Wargus.

A Practical Guide to Reinforcement Learning in First-Person Shooters

Michelle McPartland (University of Queensland)
AI Game Programming Wisdom 4, 2008.
Abstract: Reinforcement learning (RL) is well suited to FPS bots because it can learn short-term reactivity as well as long-term planning. This article briefly introduces the basics of RL and then describes a popular RL algorithm called Sarsa. It shows how RL can be used to allow FPS bots to learn some of the behaviors that are required to play deathmatch games and presents the results of several experiments.
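
As a rough orientation only (not the article's code), here is a minimal sketch of the tabular Sarsa update the abstract refers to, with a placeholder integer encoding of states and actions.

#include <vector>

class SarsaTable {
public:
    SarsaTable(int numStates, int numActions, float alpha, float gamma)
        : q_(numStates, std::vector<float>(numActions, 0.0f)),
          alpha_(alpha), gamma_(gamma) {}

    // Sarsa is on-policy: Q(s,a) moves toward r + gamma * Q(s',a'), where a'
    // is the action the bot actually chose in the next state s'.
    void Update(int s, int a, float reward, int sNext, int aNext) {
        const float target = reward + gamma_ * q_[sNext][aNext];
        q_[s][a] += alpha_ * (target - q_[s][a]);
    }

    float Value(int s, int a) const { return q_[s][a]; }

private:
    std::vector<std::vector<float>> q_;
    float alpha_;  // learning rate
    float gamma_;  // discount factor
};

An epsilon-greedy policy over Value() would typically supply the actions that drive this update.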

Practical Algorithms for In-Game Learning

John Manslow
AI Game Programming Wisdom 3, 2006.
Abstract: This article describes fast and efficient techniques that can be used for in-game learning. With a strong emphasis on the requirements that techniques must satisfy to be used in-game, it presents a range of practical techniques that can be used to produce learning and adaptation during gameplay, including moving average estimators, probability estimators, percentile estimators, single layer neural networks, nearest neighbor estimators, and decision trees. The article concludes by presenting an overview of two different types of stochastic optimization algorithms and describes a new way to produce adaptive difficulty levels.
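
Not taken from the article; a minimal sketch of two of the estimators listed above, an exponential moving average and a probability estimator built on top of it. The update rate is an arbitrary placeholder.

class MovingAverage {
public:
    explicit MovingAverage(float rate) : rate_(rate), value_(0.0f) {}
    // Each observation nudges the estimate toward the new sample;
    // smaller rates give smoother, slower-adapting estimates.
    void Observe(float sample) { value_ += rate_ * (sample - value_); }
    float Value() const { return value_; }
private:
    float rate_, value_;
};

class ProbabilityEstimator {
public:
    // Tracks the probability of an event, e.g. "the player attacks from
    // the left", as a moving average of 0/1 outcomes.
    explicit ProbabilityEstimator(float rate) : avg_(rate) {}
    void Observe(bool occurred) { avg_.Observe(occurred ? 1.0f : 0.0f); }
    float Probability() const { return avg_.Value(); }
private:
    MovingAverage avg_;
};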

A Brief Comparison of Machine Learning Methods

Christian Baekkelund (Massachusetts Institute of Technology (MIT))
AI Game Programming Wisdom 3, 2006.
Abstract: When considering whether or not to use machine learning methods in a game, it is important to be aware of their capabilities and limitations. As different methods have different strengths and weaknesses, it is paramount that the correct learning method be selected for a given task. This article will give a brief overview of the strengths, weaknesses, and general capabilities of common machine learning methods and the differences between them. Armed with this information, an AI programmer will be better able to go about the task of selecting "the right tool for the job."

Introduction to Hidden Markov Models

Robert Zubek (Electronic Arts / Maxis)
AI Game Programming Wisdom 3, 2006.
Abstract: Hidden Markov models are a probabilistic technique that provides an inexpensive and intuitive means of modeling stochastic processes. This article introduces the models, presents the computational details of tracking processes over time, and shows how they can be used to track a player's movement and behavior based on scattered and uncertain observations.
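
Not the article's code; a minimal sketch of the predict/correct belief update at the heart of hidden Markov model tracking, assuming discrete states and observations and model matrices supplied by the game.

#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<float>>;

// belief[i]         = current probability of hidden state i
// transition[i][j]  = P(next state j | current state i)
// observation[i][o] = P(observing symbol o | hidden state i)
std::vector<float> UpdateBelief(const std::vector<float>& belief,
                                const Matrix& transition,
                                const Matrix& observation,
                                int observedSymbol) {
    const std::size_t n = belief.size();
    std::vector<float> next(n, 0.0f);
    float total = 0.0f;
    for (std::size_t to = 0; to < n; ++to) {
        float predicted = 0.0f;                       // predict: propagate through transitions
        for (std::size_t from = 0; from < n; ++from)
            predicted += belief[from] * transition[from][to];
        next[to] = predicted * observation[to][observedSymbol];  // correct: weight by evidence
        total += next[to];
    }
    if (total > 0.0f)                                 // renormalize so probabilities sum to 1
        for (float& p : next) p /= total;
    return next;
}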

Preference-Based Player Modeling

Jeroen Donkers and Pieter Spronck (Universiteit Maastricht, The Netherlands)
AI Game Programming Wisdom 3, 2006.
Abstract: This article describes how to create models of players based on their preferences for certain game states and shows how these models can be used to predict a player's actions. We show how this enables the computer to reason more intelligently about its actions, to adapt to the player, and thereby to act as a more challenging and entertaining opponent. The article describes two ways to create models, player model search and probabilistic player model search, and illustrates their application with the help of pseudo-code. Finally, the article provides an example of how these techniques could be used to enhance a computer's diplomatic reasoning in a strategy game.

Dynamic Scripting

Pieter Spronck (Universiteit Maastricht, The Netherlands)
AI Game Programming Wisdom 3, 2006.
Abstract: Dynamic scripting is a technique that can be used to adapt the behavior of NPCs during gameplay. It creates scripts on the fly by extracting rules from a rulebase according to probabilities that are derived from weights that are associated with each rule. The weights adapt to reflect the performance of the scripts that are generated, so that rules that are consistently associated with the best scripts will quickly develop large weights and be selected more frequently. Dynamic scripting has been successfully applied to a wide range of genres and has been validated experimentally in RTS games and RPGs.
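
Not the article's implementation; a minimal sketch of the weighted rule-selection step, with a hypothetical Rule structure. The weight adjustment that follows each encounter is omitted.

#include <random>
#include <string>
#include <vector>

struct Rule {
    std::string action;  // a scripted behavior fragment, e.g. "cast fireball at nearest enemy"
    float weight;        // adapted between encounters based on script performance
};

// Roulette-wheel selection: the chance of picking a rule is proportional to
// its weight. Call repeatedly (excluding already-picked rules) to fill a script.
int SelectRule(const std::vector<Rule>& rulebase, std::mt19937& rng) {
    std::vector<float> weights;
    weights.reserve(rulebase.size());
    for (const Rule& r : rulebase) weights.push_back(r.weight);
    std::discrete_distribution<int> dist(weights.begin(), weights.end());
    return dist(rng);  // index into the rulebase
}

Because selection is probabilistic rather than greedy, low-weight rules still appear occasionally, which keeps the adaptation from converging prematurely.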

Encoding Schemes and Fitness Functions for Genetic Algorithms

Dale Thomas (Q Games)
AI Game Programming Wisdom 3, 2006.
Abstract: Genetic algorithms (GAs) have great potential in game AI and they have been widely discussed in the game development community. Many attempts to apply GAs in practice have only led to frustration and disappointment, however, because many introductory texts encourage naïve implementations of GAs that do not include the application specific enhancements that are often required in practice. This article addresses this problem by describing the roles played by the encoding scheme, genetic operators, and fitness function in a GA and describes how each of them can be designed in an application specific way to achieve maximum evolutionary performance.

A New Look at Learning and Games

Christian Baekkelund (Massachusetts Institute of Technology (MIT))
AI Game Programming Wisdom 3, 2006.
Abstract: Most discussions of the application of learning methods in games adhere to a fairly rigid view of when and where they should be applied. Typically, they advocate the use of such algorithms to facilitate non-player character (NPC) adaptation during gameplay and occasionally promote their use as part of the development process as a tool that can assist in the creation of NPC AI. This article attempts to broaden the discussion over the application of modeling and optimization algorithms that are typically used to produce learning by discussing alternative ways to use them in game AI, as well as more generally in the game development process.

Constructing Adaptive AI Using Knowledge-Based Neuroevolution

Ryan Cornelius, Kenneth O. Stanley, and Risto Miikkulainen (The University of Texas at Austin)
AI Game Programming Wisdom 3, 2006.
Abstract: Machine learning can increase the appeal of videogames by allowing non-player characters (NPCs) to adapt to the player in real-time. Although techniques such as real-time NeuroEvolution of Augmenting Topologies (rtNEAT) have achieved some success in this area by evolving artificial neural network (ANN) controllers for NPCs, rtNEAT NPCs are not smart out-of-the-box and significant evolution is often required before they develop even a basic level of competence. This article describes a technique that solves this problem by allowing developers to convert their existing finite state machines (FSMs) into functionally equivalent ANNs that can be used with rtNEAT. This means that rtNEAT NPCs will start out with all the abilities of standard NPCs and be able to evolve new behaviors of potentially unlimited complexity.

Short-Term Memory Modeling Using a Support Vector Machine

Julien Hamaide
Game Programming Gems 6, 2006.

Optimizing a Decision Tree Query Algorithm for Multithreaded Architectures

Chuck DeSylva (Intel Corporation)
Game Programming Gems 5, 2005.

Player Modeling for Adaptive Games

Ryan Houlette (Stottler Henke Associates, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: This article describes a lightweight, flexible machine learning technique we call player modeling, designed to help add adaptivity to your game AI. The basic idea is simple: the game maintains a profile of each player that captures the skills, weaknesses, preferences, and other characteristics of that player. This model is updated by the game as it interacts with the player. In turn, the game AI can query the player model to determine how best to adapt its behavior to that particular player - for example, by asking which of several possible tactics will be most challenging to the player. Using player modeling, a game's AI can adapt both during the course of a single play session and over multiple sessions, resulting in a computer opponent that changes and evolves with time to suit the player.

The article first defines the player model concept in more detail and then discusses strategies for designing a model to suit your game. It then presents a basic player model implementation. Subsequent sections describe how to actually integrate the modeling system with your game, including both how to update the model and how to make use of the information that it contains. The remainder of the article presents several advanced concepts, including a hierarchical player model, alternate model update methods, and other uses for the player model.
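
Not the book's code; a minimal sketch of the kind of trait-based player profile described above, with hypothetical trait names and update increments.

#include <algorithm>
#include <map>
#include <string>

class PlayerModel {
public:
    // Nudge a trait toward 1 or 0 based on what the player just did,
    // e.g. Observe("prefers_stealth", true) after a silent takedown.
    void Observe(const std::string& trait, bool evidence, float step = 0.05f) {
        float& value = traits_.emplace(trait, 0.5f).first->second;  // 0.5 = no information yet
        value = std::clamp(value + (evidence ? step : -step), 0.0f, 1.0f);
    }

    // The AI queries the model when choosing tactics or content.
    float Query(const std::string& trait) const {
        auto it = traits_.find(trait);
        return it != traits_.end() ? it->second : 0.5f;
    }

private:
    std::map<std::string, float> traits_;
};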

Constructing a Decision Tree Based on Past Experience

Dan Fu, Ryan Houlette (Stottler Henke Associates, Inc.)
AI Game Programming Wisdom 2, 2003.
Abstract: In recent years, decision trees have gained popularity within the game development community as a practical learning method that can help an AI adapt to a player. Instead of picking from a canned set of reactions to player action, the AI has the opportunity to do something much more powerful: anticipate the player's action before he acts. In this article, we discuss a decision tree learning algorithm called ID3, which constructs a decision tree that identifies the telltale features of an experience to predict its outcome. We then establish ID3's role in Black & White, building on an earlier article in the first edition of AI Game Programming Wisdom. Finally, we consider some important aspects and extensions to the approach, and provide sample code which implements a simple form of ID3.
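
Not the article's sample code; a minimal sketch of the entropy and information-gain calculations that ID3 uses to choose which feature to split on.

#include <cmath>
#include <vector>

// Entropy of a set of boolean outcomes (e.g. "did the creature get scolded?").
float Entropy(const std::vector<bool>& outcomes) {
    if (outcomes.empty()) return 0.0f;
    float positives = 0.0f;
    for (bool o : outcomes) positives += o ? 1.0f : 0.0f;
    const float p = positives / outcomes.size();
    if (p == 0.0f || p == 1.0f) return 0.0f;  // perfectly pure set
    return -p * std::log2(p) - (1.0f - p) * std::log2(1.0f - p);
}

// Information gain of splitting the examples on one boolean feature.
// ID3 greedily splits on the feature with the highest gain.
float InformationGain(const std::vector<bool>& all,
                      const std::vector<bool>& whenFeatureTrue,
                      const std::vector<bool>& whenFeatureFalse) {
    const float wTrue  = static_cast<float>(whenFeatureTrue.size())  / all.size();
    const float wFalse = static_cast<float>(whenFeatureFalse.size()) / all.size();
    return Entropy(all) - wTrue * Entropy(whenFeatureTrue)
                        - wFalse * Entropy(whenFeatureFalse);
}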

Understanding Pattern Recognition Methods

Jouni Smed, Harri Hakonen, Timo Kaukoranta (Department of Information Technology, University of Turku, Finland)
AI Game Programming Wisdom 2, 2003.
Abstract: The task of pattern recognition is to abstract relevant information from the game world and, based on the retrieved information, construct concepts and deduce patterns for the use of higher level reasoning and decision-making systems. We view pattern recognition in computer games from two perspectives: functional and methodological. In the functional approach, we analyze what is required from pattern recognition. We conclude that it can act in different roles, which in turn affect the choice of a method and its implementation. These roles depend on the level of decision-making, the stance toward the player, and the use of the modeled knowledge. In the methodological approach, we review a branch of pattern recognition techniques arising from soft computing. We discuss methods related to optimization, adaptation, and uncertainty. Our intention is to clarify where these methods should be used.

Using Reinforcement Learning to Solve AI Control Problems

John Manslow
AI Game Programming Wisdom 2, 2003.
Abstract: During the development of a game's AI many difficult and complex control problems often have to be solved. How should the control surfaces of an aircraft be adjusted so that it follows a particular path? How should a car steer to follow a racing line? What sequences of actions should a real time strategy AI perform to maximize its chances of winning? Reinforcement learning (RL) is an extremely powerful machine learning technique that allows a computer to discover its own solutions to these types of problems by trial and error. This article assumes no prior knowledge of RL and introduces its fundamental principles by showing how it can be used to allow a computer to learn how to control a simulated racing car. C++ source code for RL and a skeleton implementation of racing game AI are included with the article.

Getting Around the Limits of Machine Learning

Neil Kirby (Lucent Technologies Bell Laboratories)
AI Game Programming Wisdom 2, 2003.
Abstract: To some AI programmers, the Holy Grail of AI would be a game that learns in the field and gets better the more it is played. Multiplayer network games especially would become challenging to the most skillful players as the AI learns and uses the best plays from the best players. This article examines some of the limitations of machine learning and some of the ways around them. It analyzes learning in three current games. It considers technical and gameplay issues with learning in games.

How to Build Neural Networks for Games

Penny Sweetser (School of ITEE, University of Queensland)
AI Game Programming Wisdom 2, 2003.
Abstract: Neural networks are a machine learning technique inspired by the human brain. They are a flexible technique that has a wide range of applications in a variety of industries. This article will first introduce neural networks, describing their biological inspiration. Then, it will describe the important components of neural networks and demonstrate how they can be implemented with example code. Next, it will explain how neural networks can be trained, both in-game and prior to shipping, and how a trained neural network can be used for decision-making, classification and prediction. Finally, it will discuss the various applications of neural networks in games, describing previous uses and giving ideas for future applications. Each of these sections will be illustrated with relevant game examples and sample code where appropriate.
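
Not the article's sample code; a minimal sketch of the forward pass through one fully connected layer with a sigmoid activation, the basic building block the abstract describes. Weights and biases would come from training.

#include <cmath>
#include <cstddef>
#include <vector>

struct Layer {
    std::vector<std::vector<float>> weights;  // weights[output][input]
    std::vector<float> biases;                // one per output neuron
};

static float Sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// Forward pass: weighted sum of the inputs plus bias, squashed to (0, 1).
std::vector<float> Forward(const Layer& layer, const std::vector<float>& input) {
    std::vector<float> output(layer.biases.size());
    for (std::size_t o = 0; o < output.size(); ++o) {
        float sum = layer.biases[o];
        for (std::size_t i = 0; i < input.size(); ++i)
            sum += layer.weights[o][i] * input[i];
        output[o] = Sigmoid(sum);
    }
    return output;
}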

How to Build Evolutionary Algorithms for Games

Penny Sweetser (School of ITEE, University of Queensland)
AI Game Programming Wisdom 2, 2003.
Abstract: Evolutionary algorithm is the broad term given to the group of optimization and search algorithms that are based on evolution and natural selection, including genetic algorithms, evolutionary computation and evolutionary strategies. Evolutionary algorithms have many advantages, in that they are robust search methods for large, complex or poorly-understood search spaces and nonlinear problems. However, they also have many disadvantages, in that they are time-consuming to develop and resource intensive when in operation. This article will introduce evolutionary algorithms, describing what they are, how they work, and how they are developed and employed, illustrated with example code. Finally, the different applications of evolutionary algorithms in games will be discussed, including examples of possible applications in different types of games.

Adaptive AI: A Practical Example

Soren Johnson (Firaxis Games)
AI Game Programming Wisdom 2, 2003.
Abstract: Because most game AIs are either hard-coded or based on pre-defined scripts, players can quickly learn to anticipate how the AI will behave in certain situations. While the player will develop new strategies over time, the AI will always act as it did when the box was opened, suffering from strategic arrested development. This article describes the adaptive AI of a simple turn-based game called "Advanced Protection."

This practical example of an adaptive AI displays a number of advantages over a static AI. First, the system can dynamically switch between strategies depending on the actual performance of the player - experts will be treated like experts, and novices will be treated like novices. Next, the rules and parameters of the game will be exactly the same for all strategies, which means the AI will not need to "cheat" in order to challenge expert players. Finally, the system can ensure that the AI's "best" strategies truly are the best for each individual player.

Building Better Genetic Algorithms

Mat Buckland (www.ai-junkie.com)
AI Game Programming Wisdom 2, 2003.
Abstract: Genetic algorithms are slowly but surely gaining popularity with game developers, mostly as an in-house tool for tweaking NPC parameters, such as id Software used in the development of the bots for Quake 3, but we are also beginning to see genetic algorithms used in-game, either as an integral part of the gameplay or as an aid for the user.

Unfortunately, many of today's programmers only know the basics of genetic algorithms, not much beyond the original paradigm devised by John Holland back in the mid-sixties. This article will bring them up to date with some of the tools available to give improved performance. Techniques discussed will include various scaling techniques, speciation, fitness sharing, and other tips designed to help speedy convergence whilst retaining population diversity. In short, it shows you how to get the most from your genetic algorithms.

Advanced Genetic Programming: New Lessons From Biology

François Dominic Laramée
AI Game Programming Wisdom 2, 2003.
Abstract: Genetic programming is a powerful evolutionary mechanism used to create near-optimal solutions to difficult problems. One of the major issues with traditional GP paradigms has been the relative brittleness of the organisms generated by the process: many source code organisms do not compile at all, or produce other kinds of nonsensical results. Recent advances in genetic programming, namely the grammatical evolution scheme based on such biological concepts as degenerate and cyclical DNA and gene polymorphism, promise ways to eliminate this problem and create programs that converge on a solution faster. This article explains grammatical evolution, its biological underpinnings, and a handful of other ways to refine evolutionary computing schemes, like co-evolution.

The Importance of Growth in Genetic Algorithms

Dale Thomas (AI Lab, University of Zürich)
AI Game Programming Wisdom 2, 2003.
Abstract: The purpose of this article is to introduce some newer concepts relating to the field of Genetic Algorithms (GA). GAs can introduce variability and adaptability into a game leading to non-linear gameplay and opponents who tailor their strategies to that of the player. Many limitations of mainstream GA implementations can be overcome with some simple additions. Using growth, co-evolution, speciation and other new techniques can alleviate limitations on complexity, designer bias, premature convergence and many more handicaps. These additions can reduce the disadvantages of current GAs and allow the advantages to make games much more unpredictable and challenging.

Training an AI to Race

Gari Biasillo (Electronic Arts Canada)
AI Game Programming Wisdom, 2002.
Abstract: This is the final article of the three article series, and shows ways to train the AI to race optimally around a racetrack. Issues covered include tuning the car handling (adjusting parameters, converging on optimum values, modifying parameter values, modifying the range, training at high simulation speeds) and real-time editing (real-time track modification, user control overriding AI).

Learning and Adaptation in Games

John Manslow
AI Game Programming Wisdom, 2002.
Abstract: It is anticipated that the widespread adoption of learning in games will be one of the most important advances ever to be made in game AI. Genuinely adaptive AIs will change the way that games are played by forcing the player to continually search for new strategies to defeat the AI. This article presents a detailed examination of the different approaches available for adding learning and adaptation to games and draws on the author's experiences of AI development to provide numerous practical examples. The reader is guided through the decisions that must be made when designing an adaptive AI, and summaries of the problems that are most frequently encountered with practical implementations are provided along with descriptions of effective solutions. The CD that accompanies the book contains source code for a genetic programming class, which can be used to evolve rule-based AI, and genetic algorithm and population-based incremental learner classes, which can be used to evolve AI more generally. The practical application of all these classes is illustrated by evolving an AI that successfully navigates a simple environment.

Varieties of Learning

Richard Evans (Lionhead Studios)
AI Game Programming Wisdom, 2002.

GoCap: Game Observation Capture

Thor Alexander (Hard Coded Games)
AI Game Programming Wisdom, 2002.

Pattern Recognition with Sequential Prediction

Fri Mommersteeg (Eindhoven University of Technology, Netherlands)
AI Game Programming Wisdom, 2002.
Abstract: This article provides a simple but efficient algorithm for recognizing repetitive patterns in number sequences. Pattern recognition is something that humans are very good at, but for a computer this is not so easy. Too often a game AI can be beaten by repeatedly performing the same trick, just because it is unable to perceive the pattern. This article explains how to deal with this problem and shows you how to map game events onto useful number sequences. Furthermore, it describes a few possible applications of the algorithm in computer games.

Using N-Gram Statistical Models to Predict Player Behavior

François Dominic Laramée
AI Game Programming Wisdom, 2002.
Abstract: N-Grams are statistical constructs used to predict sequences of events in situations that exhibit the property of local structure. Language is one such context: the probability of hearing the word "fries" is higher if one has just heard the word "french" than if one has just heard the word "fruit". Some games, specifically fighting games in which players develop signature move combinations, also exhibit this property. The article describes how to train an AI to recognize patterns and predict the human player's next move using N-Gram models.
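
Not taken from the article; a minimal sketch of a trigram predictor of the kind described, keyed on the player's two previous moves (moves are assumed to be encoded as small integers).

#include <map>
#include <utility>

class TrigramPredictor {
public:
    // Record that move `next` followed the two previous moves (a, b).
    void Observe(int a, int b, int next) { counts_[{a, b}][next]++; }

    // Most frequent continuation seen after (a, b), or -1 if this context
    // has never been observed.
    int Predict(int a, int b) const {
        const auto ctx = counts_.find({a, b});
        if (ctx == counts_.end()) return -1;
        int best = -1, bestCount = 0;
        for (const auto& [move, count] : ctx->second)
            if (count > bestCount) { best = move; bestCount = count; }
        return best;
    }

private:
    std::map<std::pair<int, int>, std::map<int, int>> counts_;
};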

Practical Natural Language Learning

Jonty Barnes (Lionhead Studios), Jason Hutchens (Amristar)
AI Game Programming Wisdom, 2002.
Abstract: The perception of intelligence seems to be directly related to the observation of behavior that is surprising yet sensible. Natural language interfaces were common features of computer entertainment software prior to the advent of sophisticated computer graphics, but these were often repetitive in nature: encountering the same scripted conversation over and over again quickly becomes boring. Stochastic language models have the ability to acquire various features of a language from observations they make, and these features can be used generatively to produce novel utterances that have the properties of being both surprising and sensible. In this article we show how such a system, when used to host in-game socially-oriented conversations, can greatly contribute towards the subjective impression of intelligence experienced by the player.

Testing Undefined Behavior as a Result of Learning

Jonty Barnes (Lionhead Studios), Jason Hutchens (Amristar)
AI Game Programming Wisdom, 2002.
Abstract: We consider learning to be the essence of Artificial Intelligence. Non-player characters, when granted the ability to learn, are given the potential to surprise and entertain the player in completely unexpected ways. This is very reinforcing from the player's point of view, but a nightmare for a testing department. How can they assure the quality of a game that may behave completely differently depending on who's playing it? In this article we show, via a case study of the computer game "Black & White", exactly how a testing department can achieve their goals when the product they're testing features unpredictable learning AI.

Imitating Random Variations in Behavior using a Neural Network

John Manslow
AI Game Programming Wisdom, 2002.
Abstract: As game AI has increased in sophistication, it has become possible to create computer controlled agents that display remarkably human-like behavior. One of the few indications that an agent is non-organic is the frequently clinical nature of their actions, an effect exacerbated by the often ad hoc mechanisms used to add random variations. This article shows how neural networks can be taught to imitate the actual random variations in behavior that are exhibited by real people. This makes it possible to simulate the playing styles of different sports personalities in unprecedented detail - even the extent to which, for example, the cueing direction and position of the cue ball relative to the cushion affect the accuracy of a pool player's shots. The article assumes minimal knowledge of neural networks and illustrates the techniques through their application to a real game. The CD that accompanies the book contains all the source code for the game, along with that for the neural network class, which is designed as a plug-in component that can easily be transferred to other applications.

Genetic Algorithms: Evolving the Perfect Troll

François Dominic Laramée
AI Game Programming Wisdom, 2002.
Abstract: Genetic Algorithms mimic the process of natural selection to evolve solutions to problems that cannot be solved analytically. Candidate solutions, generated at random, are tested and evaluated for their fitness; the best of them are then bred and the process repeated over many generations, until an individual of satisfactory performance is found. This article explains the biological foundations of genetic algorithms and illustrates their behavior with an example: evolving a troll for a fantasy game.
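
Not the article's implementation; a minimal sketch of the generate/evaluate/breed loop described above, operating on fixed-length float genomes (for example, troll attributes) with a caller-supplied fitness function. The selection, crossover, and mutation choices are arbitrary placeholders.

#include <algorithm>
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

using Genome = std::vector<float>;

// Single-point crossover of two equal-length parents.
Genome Crossover(const Genome& a, const Genome& b, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> cut(0, a.size());
    const std::size_t point = cut(rng);
    Genome child(a.begin(), a.begin() + point);
    child.insert(child.end(), b.begin() + point, b.end());
    return child;
}

// Perturb a few genes with small Gaussian noise.
void Mutate(Genome& g, float rate, std::mt19937& rng) {
    std::uniform_real_distribution<float> coin(0.0f, 1.0f);
    std::normal_distribution<float> noise(0.0f, 0.1f);
    for (float& gene : g)
        if (coin(rng) < rate) gene += noise(rng);
}

// Truncation selection: keep the better half, breed it to refill the rest.
// Assumes a population of at least two genomes. Fitness is re-evaluated
// inside the comparator for brevity; cache it in real code.
Genome Evolve(std::vector<Genome> population,
              const std::function<float(const Genome&)>& fitness,
              int generations, std::mt19937& rng) {
    auto byFitness = [&](const Genome& a, const Genome& b) {
        return fitness(a) > fitness(b);
    };
    for (int gen = 0; gen < generations; ++gen) {
        std::sort(population.begin(), population.end(), byFitness);
        std::uniform_int_distribution<std::size_t> parent(0, population.size() / 2 - 1);
        for (std::size_t i = population.size() / 2; i < population.size(); ++i) {
            population[i] = Crossover(population[parent(rng)],
                                      population[parent(rng)], rng);
            Mutate(population[i], 0.05f, rng);
        }
    }
    std::sort(population.begin(), population.end(), byFitness);
    return population.front();  // best individual found
}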

The Dark Art of Neural Networks

Alex J. Champandard (Artificial Intelligence Depot)
AI Game Programming Wisdom, 2002.

Optimized Machine Learning with GoCap

Thor Alexander (Hard Coded Games)
Game Programming Gems 3, 2002.
