The PacMan capture-the-flag contest

You are going to compete in a variant of the classic PacMan game. On the last day, we will have a PacMan Tournament where your agents can prove their ability.

Here is an example maze:

There are two teams: “Blue” and “Red”, each consisting of two agents on screen. The maze is split into two halves. When agents are located in their own half of the maze, they are ghosts and can “eat” opponent PacMans. When they move to the opponent's half, they become PacMans and can eat food. When an agent gets eaten, it is “reborn” as a ghost at the farthest corner of its own half.

Score and Game Over

At the beginning the score is 0. Each food item eaten by the “Red” team adds +1 to the overall score, and each food item eaten by the “Blue” team subtracts 1. If an agent takes too long to make a move (default 0.5 seconds, configurable with the --maxMoveTime command-line switch), its team gets a penalty (default 2.0 points, configurable with the --timePenalty command-line switch): +timePenalty is added to the overall score if a “Blue” agent is too slow, and -timePenalty is subtracted if a “Red” agent is too slow. When an agent gets eaten, the eating team receives a killing bonus of +2 (configurable with the --killPoints command-line switch). The game is over either when one team eats all available food, or when a predefined timeout has elapsed (default 3000 moves, configurable with the --time command-line switch). A positive final score means that “Red” wins; a negative one means that “Blue” wins.
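The bookkeeping above can be summarized in a few lines (an illustrative sketch with the default values hard-coded, not the game's actual implementation; positive deltas favor “Red”, negative ones favor “Blue”):

```python
KILL_POINTS = 2       # default for --killPoints
TIME_PENALTY = 2.0    # default for --timePenalty

def score_delta(team, event):
    """Contribution of one event to the overall score.
    team is 'red' or 'blue'; event is 'food', 'kill', or 'timeout'."""
    sign = 1 if team == 'red' else -1
    if event == 'food':       # this team ate one food item
        return sign
    if event == 'kill':       # this team ate an opponent agent
        return sign * KILL_POINTS
    if event == 'timeout':    # this team's agent was too slow
        return -sign * TIME_PENALTY
    raise ValueError(event)
```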

Your Task

You will need to write an algorithm that controls the agents. You will have to design and implement defense and attack strategies, and write planning algorithms to navigate through the maze and dodge/tackle the opponent agents. To test your agents you will be able to let them play against the default ones, or even our own ones 8-)! At the end of Day 4 we plan to have a tournament: all agents will play against each other and the winning team is going to get a prize (in addition to glory and fame)! Be aware that even more important than fancy strategies is the quality of your code: is it well tested? Does it conform to standards? The team with the best pylint score is going to get a prize as well!

Set up

  1. First, create the directory where all the fun shall be happening:
    mkdir /home/student/project
    cd /home/student/project
  2. Then, clone the central repository with the game files:
    git clone <name>@escher.fuw.edu.pl:/git/autumnschool/pacman game

    Remember to pull every once in a while to get the last bug-fixes! ;)

  3. Your pacman agents are going to live in your very own group repository. Clone it with:
    git clone <name>@escher.fuw.edu.pl:/git/autumnschool/groupX
  4. Set up the PYTHONPATH with:
    export PYTHONPATH=$PYTHONPATH:/home/student/project/game
  5. You can make the PYTHONPATH setting permanent by adding the previous line to the /home/student/.bashrc file and restarting the terminal.

Remember to check in your revisions early and discuss the changes with your group in order to reduce the number of conflicts. Also, remember that everyone likes useful commit messages.

Start the Game

You can start a demo game with:

game/start_game.py

Try

game/start_game.py --help

for a list of command-line options.

Acknowledgments

The game is courtesy of John DeNero at the University of California Berkeley, where he is using it for the Artificial Intelligence course. The original page contains a general description of how to run matches and write your own agents.

Example Agents

All example agents can be found in:

project/game/agents

To run a game using the two example agent factories, *DrunkAgentsFactory* and *MouseAgentsFactory*, try:

python start_game.py agents/drunk_agents.py agents/mouse_agents.py

The agents demonstrate some basic techniques, such as storing game state, positions, and moves, and navigating without knowing anything about the maze beyond what can be seen. In PacMan, however, you know what the maze looks like, and you should probably use a shortest-path algorithm to navigate from A to B. This will almost certainly be more effective than these poor mice ;)
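As a sketch of the shortest-path idea, here is a plain breadth-first search over the boolean walls grid that getWalls returns (the function and variable names are our own, not part of the game API):

```python
from collections import deque

def shortest_path(walls, start, goal):
    """Breadth-first search over the walls grid (walls[x][y] == True
    means a wall at (x, y)). Returns the list of squares from start
    to goal inclusive, or None if the goal is unreachable."""
    width, height = len(walls), len(walls[0])
    came_from = {start: None}
    frontier = deque([start])
    while frontier:
        pos = frontier.popleft()
        if pos == goal:
            # Walk the predecessor chain back to the start.
            path = []
            while pos is not None:
                path.append(pos)
                pos = came_from[pos]
            return path[::-1]
        x, y = pos
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < width and 0 <= ny < height
                    and not walls[nx][ny] and nxt not in came_from):
                came_from[nxt] = pos
                frontier.append(nxt)
    return None
```

Running this once per move is cheap on mazes of this size; if you need many repeated queries, you could precompute distances in register_initial_state instead.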

Writing agents 101

How to try your own agents

Put your agents in a file with the coolest name you can think of (it will be shown when your team is winning…), located in your group directory /home/student/project/groupX. First of all, test your agents using the agents' testing framework. Once you are confident that your agents are working properly, you can let them take part in the game by setting your <agent_factory_file>.py as the factory for the “Red” team. Always choose your opponent explicitly; don't rely on the defaults.
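For example (the factory file name here is a placeholder; substitute your own), a match of your agents as “Red” against the example mice as “Blue” could be started from the project directory with:

```shell
python game/start_game.py groupX/my_cool_agents.py game/agents/mouse_agents.py
```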

You can slow down the game by setting the --fps T option, where T is the number of moves per second (the default is about 25 moves/second).

The agent factory

In order to tell the game which agents to use, you need to create a file which contains your agent factory. The factory is a simple class named Factory with a method get_agents_list, which simply returns a list of the agents you want to use. When you run a game with start_game.py our_agents.py, it will search our_agents.py for a class named Factory and use it.

As an example, have a look at game/agents/offense_defense_agents.py

from agents.capture_agents_lib import OffensiveReflexAgent, DefensiveReflexAgent
from pacman.basic_agents import BasicAgentFactory
 
class Factory(BasicAgentFactory):
    "Returns one offensive and one defensive reflex agent"
    def get_agents_list(self):
        return [OffensiveReflexAgent, DefensiveReflexAgent]

As the definitions for the agents themselves live in agents.capture_agents_lib, we don’t need to add anything else to this file and can just use it.

The agent

Writing a (simple) agent is not difficult: you need to write a subclass of game.BasicAgent. For a Brownian agent, there is only one method you need to override: agent.choose_action(game_state), which is called at every round and has to return one of game.Directions.{NORTH, SOUTH, EAST, WEST, STOP}.

In the choose_action method, the agent typically examines the game state (an instance of capture.GameState) and makes a decision about what to do next. (If no decision is made in time, you’ll get a penalty *and* a random direction is chosen, so it is even worse than a Brownian agent…) See below for a description of the methods in GameState.

The full definition of the base class is shown in the BasicAgent section below.

This is an example agent that makes moves at random (with hopefully no time penalty):

import random

from pacman.basic_agents import BasicAgent

class BrownianAgent(BasicAgent):
    def choose_action(self, game_state):
        # Pick one of the currently legal directions at random.
        actions = game_state.getLegalActions(self.index)
        return random.choice(actions)

BasicAgent

We provide a base class for all agents you are going to use: basic_agents.BasicAgent. You can find it in game/pacman/basic_agents.py. To implement your own agents, there are two main methods that you might want to override: register_initial_state, which allows you to initialize your agent before the game begins (e.g., if you want to analyze the maze), and choose_action, which returns the action your agent should perform at each turn.

This is a list of the methods defined in BasicAgent:

class BasicAgent(Agent, object):
    """A new base class for agents.
    It defines some useful helper methods to interact with the game.
    This class is based on captureAgents.CaptureAgent; I removed stuff
    that I don't think is of general use, and used a more pythonic style.
 
    Interesting internal variables:
        index -- index for this agent
        is_red -- True if you're on the red team, False if you're blue
    """
 
    def registerInitialState(self, game_state):
        # internal use only
 
    def register_initial_state(self, game_state):
        """This method handles the initial setup of the agent.
        You should override it if you have any business to do
        before the game begins (e.g., analyzing the maze).
        """
 
    # Handle agent's actions
    # ######################
 
    def getAction(self, game_state):
        """Calls choose_action on a grid position, but continues on
        half positions.
 
        If you subclass BasicAgent, you shouldn't need to override this method.
        """
 
    def choose_action(self, game_state):
        """Return the next action the agent needs to perform.
        This is the method you need to override if you want your
        agent to do something sensible.
 
        Input arguments:
        game_state -- a capture.GameState instance that contains the current
                      state of the game
        """
 
    # Text messages on screen
    # #######################
 
    def _update_txt_message(self, game_state):
        """Moves the message on screen to the current position
        of the agent."""
 
    def say(self, txt):
        """Display a message on screen."""
 
    # Helper functions
    # ################
 
    def get_food(self, game_state, enemy_food=True):
        """Return the food in the maze belonging to one of the teams.

        Return a game.Grid instance 'food', where food[x][y] == True if
        there is food you should eat or protect (based on the value of
        enemy_food) in that square.
 
        Keyword arguments:
        enemy_food -- If True, returns the food you're supposed to eat,
                    otherwise return the food you should protect.
                    (default: True)
        """
 
    def get_team_indices(self, game_state, enemy_indices=True):
        """Return a list of the indices of the agents of one of the teams.
 
        Keyword arguments:
        enemy_indices -- If True, returns the indices of the opposing team,
                    otherwise return the indices of your own team.
                    (default: True)
        """
 
    def get_team_positions(self, game_state, enemy_pos=True):
        """Return a list with the position of the members of a team.
 
        If an opponent is too far (> 5 square Manhattan distance),
        the corresponding entry will be None, as its position is unavailable.
 
        Keyword arguments:
        enemy_pos -- If True, returns the positions of the opposing team,
                    otherwise return the position of your own team.
                    (default: True)
        """
 
    def get_team_distances(self, game_state, enemy_dist=True):
        """Return a list with the noisy distances from the members of a team.
        Distances will be returned as the real distance, +/- 6.
 
        Keyword arguments:
        enemy_dist -- If True, returns the distance of the opposing team,
                    otherwise return the distance of your own team.
                    (default: True)
        """
 
    def get_score(self, game_state):
        """Return your score.
 
        Your score is the difference between your team's points and the
        opponent's points. This number is negative if you're losing.
        """
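As a small example of putting these helpers to work inside choose_action, the function below (our own sketch, not part of the game API) takes the agent's (x, y) position and a food grid like the one get_food returns, and picks a target square:

```python
def nearest_food(position, food):
    """Return the food square closest to position by Manhattan
    distance (food[x][y] == True where food lies), or None if
    the grid holds no food."""
    targets = [(x, y)
               for x in range(len(food))
               for y in range(len(food[0]))
               if food[x][y]]
    if not targets:
        return None
    px, py = position
    return min(targets, key=lambda t: abs(t[0] - px) + abs(t[1] - py))
```

Manhattan distance ignores walls, so in practice you would combine a target chosen this way with a real path search through the maze.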

GameState

This object represents the state of the game at some point. You can access game-related data through the following methods (I removed methods that are not useful, and expanded the docstrings if necessary):

def getLegalActions(self, agentIndex=0):
    """
    Returns the legal actions for the agent specified.
    The function returns a list of strings, as defined in game.Directions
    """
 
def generateSuccessor(self, agentIndex, action):
    """
    Returns the successor state (a GameState object) after the specified agent takes the action.
    With this function you can try out different actions and see in which state you are going to land,
    useful for implementing Reinforcement Learning agents.
    """
 
def getAgentPosition(self, index):
    """
    Returns a location tuple if the agent with the given index is observable;
    if the agent is unobservable, returns None.
    """
 
def getRedFood(self):
    """
    Returns a matrix of food that corresponds to the food on the red team's side.
    For the matrix m, m[x][y]=true if there is food in (x,y) that belongs to
    red (meaning red is protecting it, blue is trying to eat it).
    """
 
def getBlueFood(self):
    """
    Returns a matrix of food that corresponds to the food on the blue team's side.
    For the matrix m, m[x][y]=true if there is food in (x,y) that belongs to
    blue (meaning blue is protecting it, red is trying to eat it).
    """
 
def getWalls(self):
    """
    Returns a boolean 2D matrix the size of the game board.
    getWalls()[x][y] == True if there is a wall at (x,y), False otherwise.
    You might want to call this function in the registerInitialState phase
    of your agent if you want to analyze the maze.
    """
 
def hasFood(self, x, y):
    """
    Returns true if the location (x,y) has food, regardless of 
    whether it's blue team food or red team food.
    """
 
def hasWall(self, x, y):
    """Returns true if (x,y) has a wall, false otherwise."""
 
def getRedTeamIndices(self):
    """
    Returns a list of agent index numbers for the agents on the red team.
    """
 
def getBlueTeamIndices(self):
    """
    Returns a list of the agent index numbers for the agents on the blue team.
    """
 
def isOnRedTeam(self, agentIndex):
    """
    Returns true if the agent with the given agentIndex is on the red team.
    """
 
def getAgentDistances(self):
    """
    Returns a noisy distance to each agent.
    """
 
def getInitialAgentPosition(self, agentIndex):
    """Returns the initial position of an agent."""

getWalls, getRedFood, and getBlueFood return a game.Grid object. It works more or less like a 2D list, but if you want the coordinates as a flat list you can call Grid.asList().
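A minimal stand-in for that behaviour (our own helper, handy if you work with plain nested lists in tests):

```python
def grid_as_list(grid):
    # All (x, y) coordinates where grid[x][y] is True,
    # mimicking what Grid.asList() returns.
    return [(x, y)
            for x in range(len(grid))
            for y in range(len(grid[0]))
            if grid[x][y]]
```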

Keeping track of your opponent

More useful stuff