Modding:Tutorial/Designing AI



Before the more recent versions of DoomRL, all beings other than the player used a single AI (artificial intelligence) algorithm that directed how a being should act under various circumstances. Although this default AI can be modified to some extent through a few being flags, the way beings behaved remained roughly the same. When it became clear that the default AI could be outwitted with clever tactics, the AI object (sometimes known as "lua AI") was added to the game, allowing complete control over what a being does in any circumstance. As of v0.9.9.4, most beings call a particular AI object that governs their actions, and are thus more difficult to fight (which is a good thing).

This tutorial goes through the basics of creating AI objects and what you can do with them. The AI object class is very flexible: its basic structure is extremely simple, while allowing you to add almost anything that the game's API permits. Unsurprisingly, designing your own artificial intelligence can be an extremely challenging task, depending on what exactly you want the AI to be capable of. Basic structuring techniques will be explained, as well as general guidelines for writing the code that your AI will consist of.


Base Prototype

Since the AI object requires only a few fields, all of them can be contained in the following example:

AI{
    name = "simple_ai",
 
    OnCreate = function(self)
        self:add_property("ai_state", "first_state")
        ....
    end,
 
    OnAttacked = function(self)
        ....
    end,
 
    states = {
        first_state = function(self)
            ....
        end,
 
        ....
    }
}
  • name is the identifier of the AI object, to be used as the field value for the being key ai_type.
  • OnCreate(self) triggers whenever self is created. For any being using this AI object, it is functionally identical to that being's OnCreate() hook in its own prototype.
    • The "required" property here, ai_state, is the key factor when determining what subfunction the being should run in the AI object.
  • OnAttacked(self) triggers whenever self is hit. The hit does not have to cause damage. (Such a requirement can be a condition within the hook by comparing HP across multiple actions.)
  • states is an array of functions that are called based on the value of ai_state. These functions will be run immediately after the being's own OnAction() hook in its own prototype. In the above example, since ai_state is set to "first_state" during OnCreate(), the being will run the first_state() function.

Technically speaking, the "state" system of the AI object need not be used: one can instead define OnAction(self) directly. However, states are a very practical way to organize the various actions or modes that the AI transitions through, and the state system will be used throughout this tutorial. Even including the optional state system, what is shown above is all that is necessary for the AI object to initialize.
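As an illustration, here is a minimal sketch of an AI object that skips the state table entirely and defines an OnAction hook instead, as described above (the hook name mirrors the being prototype's OnAction hook; the name "stateless_ai" and the behavior shown are purely illustrative, and the AI simply waits every turn):

AI{
    name = "stateless_ai",

    -- No states table: this hook runs on every action instead.
    OnAction = function(self)
        self.scount = self.scount - 1000  -- just wait (see Rule 1 below)
    end,
}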

AI Rules

While you are free to develop your artificial intelligence however you want, there are still some fundamental laws that govern its structure. (They are not so much mandated out of standardization as they are out of necessity: AI objects that do not follow these rules will not run correctly.)

Rule 1

The AI must always lower the being's energy.

Energy, or scount, determines when the being can take an action, and so long as its energy remains the same, it can continuously take more and more actions until the game refuses to allow a continuation (i.e., an infinite loop error). There are a number of API functions that will automatically cause the being's energy to decrease, but when none of these functions are called, you must directly lower the being's scount in one of the following ways:

self.scount = self.scount - 1000
self.scount = self.scount - 10*self.speed
self.scount = 4000

While these three examples may appear similar, they are quite different. Recall that scount increases by speed each game turn, and decreases by 100 multiplied by the number of game turns for default actions (e.g., moving or attacking). Knowing this, we can see three separate possibilities:

  • The first, reducing scount by 1000, will require a being with speed set to 100 to wait exactly ten game turns. For beings with speed greater than 100, it will occur sooner, and for beings with speed less than 100, it will occur later. Thus, a being with an action modified by this method will act "constant" with respect to its default action speed.
  • The second, reducing scount by 10 multiplied by the being's speed, will require any being, regardless of speed, to wait exactly ten game turns. Thus, a being with an action modified by this method will act "global" with respect to its default action speed (that is, all beings across the global scale take the same amount of time).
  • The third, setting scount to 4000, is functionally identical to the first, except that it can be used immediately after default actions in order to make scount decrease by a very specific value. Suppose you wanted a being to be able to equip things instantly: using this method you can reset the scount value (to 5001) so that the process takes virtually no time at all. (Technically you could compare scount before and after the action, but this is a much simpler method.) Thus, a being with an action modified by this method will act "independent" with respect to its default action speed.

You will almost always use the "constant" modifier, but don't forget about the other two. (In particular, the "global" and "independent" modifiers become important when you have several beings using the same AI and want to coordinate their actions very precisely.)
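To make the arithmetic concrete, here is a small sketch of the three modifiers wrapped as helper functions (the helper names are purely illustrative and not part of the DoomRL API):

-- "constant": the wait scales with the being's own action speed
local function wait_constant(being, turns)
    being.scount = being.scount - turns * 100
end

-- "global": every being waits the same number of game turns, regardless of speed
local function wait_global(being, turns)
    being.scount = being.scount - turns * being.speed
end

-- "independent": overwrite scount outright, e.g. right after a default
-- action, to make that action take (almost) no time at all
local function reset_energy(being, value)
    being.scount = value
end

Under these definitions, wait_constant(self, 10) reproduces the first line above, wait_global(self, 10) the second, and reset_energy(self, 4000) the third.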

Rule 2

Changing states in an AI during a single action always requires a way to exit the current state.

There are two ways to exit a function: break and return. Consequently, there are two ways to switch states:

AI{
    name = "two_state",
 
    OnCreate = function(self)
        self:add_property("ai_state","state1")
    end,
 
    states = {
        state1 = function(self)
            return("state2")
        end,
 
        state2 = function(self)
            self.scount = self.scount - 1000
        end,
    }
}

The return method is a single line that both ends the function prematurely and sets the new ai_state. In the above example, the being begins in state1(), immediately changes to state2(), and ends its turn in state2() (since scount is decreased there).

AI{
    name = "two_state",
    ....
    states = {
        state1 = function(self)
            self.ai_state = "state2"
            break
        end,
 
        state2 = function(self)
            self.scount = self.scount - 1000
        end,
    }
}

If changing states with the break method, you first have to establish what state you are changing to by modifying the "ai_state" property, then follow it with a break. On the surface, this is just the return method with an extra line. However, separating the point where you assign ai_state from the point where you break is useful if you purposefully want the state to repeat itself (and are reasonably certain that it will not tend toward an infinite loop error). In most cases, however, you should use the return method to switch immediately to another state.

Note that, were there no break or return in state1(), the state would continuously repeat until an infinite loop error occurred, even if ai_state was modified. This is unique to the AI object, due to the way that ai_state is handled.

Rule 3

Always make sure that the AI will end a being's turn.

This is a more subtle rule, as it presumes that the first two rules have already been addressed. In short, you must make sure your conditions are comprehensive: every branch must either change state or lower the being's energy.

This tutorial adopts some terminology regarding the completeness of states:

  • Infinite states have obvious flaws that will cause a being to return an infinite loop error. AI objects with infinite states should always be fixed. However, infinite states can be used as a debugging tool: since infinite loops can easily be traced to a particular condition, they are a useful way to find a problem with the AI.
  • Finite states will always either change to another state or cause the being's turn to end, regardless of the being's circumstances. Ideally, all states should be finite.
  • Indefinite states, while not explicitly finite, will only cause infinite loop errors in unlikely circumstances. Sometimes it is feasible to fix these "indefinite conditions", sometimes not: they can be used in modules, but care should be taken to avoid the circumstances in which the error may manifest. (The simplest way to prevent an infinite loop is to set a limit on how many times a loop can repeat, either with a loop counter or a finite iterator; a sketch of this technique follows below.) The most common indefinite states involve properties that are not explicitly defined in the AI, but belong to beings that use the AI.

States that satisfy rule 3 are considered finite, and when every state of an AI object is finite, the object itself is considered finite as well.
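As a concrete illustration of the loop-counter technique mentioned in the list above, here is a hedged sketch of a state function (meant to sit in a states table) that tries random orthogonal steps, using being methods covered later in this tutorial. Without the cap on attempts, a being boxed in on all sides could loop forever; the finite iterator plus the final wait keeps the state finite. math.random is assumed to be available in the scripting environment, and the state name is illustrative:

wander = function(self)
    local dirs = {coord.new(0,-1), coord.new(1,0),
                  coord.new(0,1),  coord.new(-1,0)}
    for attempt = 1, 4 do
        local target = self:get_position() + dirs[math.random(4)]
        if self:direct_seek(target) == 0 then
            return  -- moved successfully; energy has already been lowered
        end
    end
    self.scount = self.scount - 1000  -- every attempt failed: just wait
end,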

Let us suppose we want to construct an AI that only travels up. Here is a simple example:

AI{
    name = "up_ai",
 
    OnCreate = function(self)
        self:add_property("ai_state", "walk_up")
    end,
 
    states = {
        walk_up = function(self)
            local up_coord = self:get_position() + coord.new(0,-1)
            self:direct_seek(up_coord)
        end,
    }
}

Any being with this AI will enter the "walk_up" state, get its position, add an upward-facing unit coord to it, and use being:direct_seek() to move to the new position. This works reliably because the new position is only one tile away, so being:direct_seek() will never run into obstacles between the being's position and the target position.

However, what happens when the tile above the AI is, itself, an obstacle? In such a case, being:direct_seek() does not perform a move, which means that the being's energy does not lower, and so the being is stuck in an infinite loop! We have created an infinite state and want to immediately resolve this problem. Fortunately, there is an easy way to deal with the loop, thanks to the way that movement methods work:

AI{
    name = "up_ai",
    ....
    states = {
        walk_up = function(self)
            local up_coord = self:get_position() + coord.new(0,-1)
            if self:direct_seek(up_coord) ~= 0 then
                self.scount = self.scount - 1000
            end
        end,
    }
}

being:direct_seek() returns an integer depending on what happens inside the method. For our purposes, it returns 0 whenever the being moves and a different number whenever it doesn't. Knowing this, we can set up a condition checking whether the return value is zero. If it is not, we make the being wait by lowering its energy. With our new condition, the being will move up until it can't, at which point it will do nothing; if the obstacle in the being's way is cleared, it will resume moving up. We have now formed a finite state, and since the AI consists of only a single state, we can consider the AI finite as a whole.

If you keep these rules in mind, you will have fewer problems in running the various algorithms within your AI.

Default Actions

There are a number of being methods that will automatically cause the being to lower its energy. Many of them are critical in developing an AI for DoomRL, and you would do well to memorize them:

being:direct_seek()

being:direct_seek(coord) commands being to take a step toward coord following the most direct path possible. The direct pathing used here is the same as the game's default AI, and its algorithm can be roughly described as follows:

calculate the x-distance "dx" and y-distance "dy" from the target coordinate
 
if dx != 0 and dy != 0
  attempt to move {sign(dx),sign(dy)}*
  if move failed
    if dy >= dx
      attempt to move {0,sign(dy)}
      break on failure
    else
      attempt to move {sign(dx),0}
      break on failure
    end
  end
elseif dx == 0
  attempt to move {0,sign(dy)}
  break on failure
elseif dy == 0
  attempt to move {sign(dx),0}
  break on failure
end
 
if break occurred, end turn
 
*sign(x) is +1 for positive x and -1 for negative x

being:direct_seek() returns an integer, which indicates what occurred when the being attempted to move:

  • 0 indicates that the being moved and that its energy was lowered
  • All other returns indicate that the being did not move and that its energy did not decrease:
    • 1 indicates that the being was stopped by a wall
    • 2 indicates that the being was stopped by a door
    • 3 indicates that the being was stopped by another being

Specifically, when the movement process is interrupted, being:direct_seek() searches for CF_BLOCKMOVE. If it is not found on the blocking coordinate, 3 is returned; if CF_BLOCKMOVE is found but CF_OPENABLE is also found (which is used for doors), 2 is returned; in other cases when CF_BLOCKMOVE is found, 1 is returned.
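For instance, a state could branch on these return codes, attacking when another being is in the way. The following is a hedged sketch only: target is assumed to be an adjacent coordinate (as in the walking examples later in this tutorial), and being:attack() is described further below.

local result = self:direct_seek(target)
if result == 3 then
    self:attack(target)              -- a being blocks the tile: hit it instead
elseif result ~= 0 then
    self.scount = self.scount - 1000 -- a wall or door blocks it: just wait
end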

being:path_find()

being:path_find(coord,scan_cutoff,scan_max) searches for a series of coordinates for being to travel between its current position and coord, using the parameters scan_cutoff and scan_max to choose the best path. scan_cutoff and scan_max are part of the pathing algorithm, and higher numbers tend to be computationally intensive. As a standard, scan_cutoff is set to 10 for normal enemies and 40 for bosses, while scan_max is set to 40 for normal enemies and 200 for bosses.

The return value of this method indicates whether or not a complete path was found. Even if a complete path was not created, being:path_find() still establishes a partial path for being, which can be used to begin its approach to coord.

being:path_find() does not actually move the being, nor does it decrease energy: use being:path_next() in order to initiate the movement attempt.

Generally speaking, it is not good practice to recalculate the path on every action, tempting as that may be. Naturally, the shuffling of enemies and the opening and closing of doors can invalidate a stored path, but it is better to treat these exceptions as the occasions to call being:path_find() again, rather than doing so all the time.

being:path_next()

being:path_next() commands being to move to the next coordinate as specified by being:path_find(). To think of this process abstractly, consider that whenever being:path_next() is executed, an array of steps is consulted: the first step is used as the coordinate into which the being moves, and is then removed from the array. Thus, calling the method repeatedly moves the being along each coordinate, until the array is exhausted and the being has reached the target position.

being:path_next() carries the same return values as being:direct_seek(), and exceptions can be handled in the same manner.
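Putting the two methods together, here is a hedged sketch of a "chase" state that follows the advice above: the path is recalculated only when stepping along the stored path fails. The has_path property is illustrative and is assumed to have been registered with add_property() in OnCreate(); the cutoff parameters follow the "normal enemy" standards quoted earlier.

chase = function(self)
    if not self.has_path then
        self:path_find(player:get_position(), 10, 40)
        self.has_path = true
    end
    if self:path_next() ~= 0 then
        -- the stored path is blocked or exhausted: rebuild it next action
        self.has_path = false
        self.scount = self.scount - 1000
    end
end,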

being:attack()

being:attack(coord) commands being to use its melee attack on coord. If coord cannot be attacked within melee distance, or being cannot use its melee attack, the being's energy does not decrease.

being:attack(enemy) commands being to use its melee attack on enemy, another being. The same conditions for energy modification are used here.

The output returns true in the event that the attack was successful (whether or not it dealt damage is a separate matter) and false otherwise.

being:fire()

being:fire(coord,weapon) commands being to fire weapon, aiming at coord. If weapon cannot be fired (e.g., it is out of ammo), then the being's energy does not decrease.

As there is no return value to determine whether or not the being's attack was successful, you may want to set up exceptions in the event that weapon cannot fire.

being:reload()

being:reload() commands being to attempt to reload its currently-equipped weapon. If the weapon cannot be reloaded, then the being's energy does not decrease.

The output returns true if the being reloaded the weapon and false otherwise. Depending on the being's ammo reserves, the reload may only partially fill the weapon's ammo capacity; being:reload() will still return true in that case.

being:wear()

being:wear(slot) commands being to equip an item that exists in position slot of its inventory. (For instance, to equip the first item in the inventory, slot should be set to 1.) If the item in position slot cannot be equipped, or if the being cannot equip items, the being's energy does not decrease.

being:wear(item) commands being to equip item from its inventory. being will always equip item from the lowest slot in its inventory. If item does not exist in the being's inventory, or if being cannot equip items, the being's energy does not decrease.

The output returns true if the being equipped an item and false otherwise.
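As a small usage sketch, the boolean return lets a state fall back to waiting when equipping fails. This assumes (as described above) that a slot holding nothing equippable simply yields false without lowering energy; slot 1 is chosen purely for illustration.

if not self:wear(1) then
    self.scount = self.scount - 1000  -- nothing equippable in slot 1: wait instead
end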

being:use()

being:use(slot) commands being to use the item that exists in position slot of its inventory. If the item in position slot cannot be used in any way, the being's energy does not decrease.

being:use(item) commands being to use item from its inventory. being will always use item from the lowest slot in its inventory. If item does not exist in the being's inventory, or if it cannot be used in any way, the being's energy does not decrease.

Currently there is a bug that causes this method to always return false, so its return value is of no use.

being:pickup()

being:pickup() commands being to pick up an item at the same position as being. If there is no item to pick up, or if being cannot pick up items, the being's energy does not decrease.

being:pickup(coord) commands being to pick up an item at coord rather than at its own position. The same conditions for energy modification apply here.

The output returns true if the being picked up an item and false otherwise.
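As with the other boolean-returning methods, a scavenging state can simply fall back to waiting when the pickup fails. A minimal sketch (the state name is illustrative):

grab = function(self)
    if not self:pickup() then
        self.scount = self.scount - 1000  -- nothing here (or cannot pick up): wait
    end
end,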

Single-minded AI

A single-minded AI never needs to make an intelligent decision between two choices. Simply put, it follows only action-based algorithms: if there are changes that do not stem from actions, they are automatic or constant. Such AI are the easiest to create and implement, as their behavior is always obvious when observed. There are two ways to implement a single-minded AI: create a single state that performs all the actions, or divide each type of action into its own state and cycle through them. Most of the time a single state is all that is necessary, though there are cases in which having separate states is the more logical approach.

We will now return to our example AI from before, which either walks up or stops whenever walking would cause it to hit a solid object. It is a sufficient AI but certainly lacking in mobility: after all, the map will eventually run out of coordinates if the being can only move in a single direction. Let us expand on the design by allowing it to move in any orthogonal direction (up, down, left, or right), depending on what direction it is currently walking and what is in the way.

We should begin by setting up the AI such that it can walk in all of the directions specified. Since each direction can be considered a separate action, you may be tempted to create a state for each one, like so:

AI{
    name = "walk_ai",
 
    OnCreate = function(self)
        self:add_property("ai_state","walk_up")
    end,
 
    states = {
        walk_up = function(self)
            local up_coord = self:get_position() + coord.new(0,-1)
            if self:direct_seek(up_coord) ~= 0 then
                self.scount = self.scount - 1000
                return("walk_right")
            end
        end,
 
        walk_right = function(self)
            local right_coord = self:get_position() + coord.new(1,0)
            if self:direct_seek(right_coord) ~= 0 then
                self.scount = self.scount - 1000
                return("walk_down")
            end
        end,
 
        walk_down = function(self)
            local down_coord = self:get_position() + coord.new(0,1)
            if self:direct_seek(down_coord) ~= 0 then
                self.scount = self.scount - 1000
                return("walk_left")
            end
        end,
 
        walk_left = function(self)
            local left_coord = self:get_position() + coord.new(-1,0)
            if self:direct_seek(left_coord) ~= 0 then
                self.scount = self.scount - 1000
                return("walk_up")
            end
        end,
    }
}

This acts much like the original example, except that we have four states that the being cycles through: each time the being runs into a solid object, in addition to waiting, it will move to another state, which will be accessed on the next action. For most programmers this setup looks horribly inefficient, since it is a lot of copy-pasting with only a couple of changes per copy. There is a far more elegant way to set up the four directions using what is called a lookup table:

AI{
    name = "walk_ai",
 
    OnCreate = function(self)
        self:add_property("ai_state","walk")
        self:add_property("walk_num",1)
    end,
 
    states = {
        walk = function(self)
            local c = self:get_position()
            local coord_list = {c+coord.new(0,-1), --up    (1)
                                c+coord.new(1, 0), --right (2)
                                c+coord.new(0, 1), --down  (3)
                                c+coord.new(-1,0)} --left  (4)
            --attempt to move
            if self:direct_seek(coord_list[self.walk_num]) ~= 0 then
                --on failure, wait and turn clockwise
                self.scount = self.scount - 1000
                self.walk_num = math.fmod(self.walk_num,4)+1
            end
        end,
    }
}

A lookup table is an array containing a number of elements that are hard-coded and conveniently organized for reference. The code doesn't know what a direction is until we explicitly express it somewhere: a convenient shortcut is to use a lookup table and simply reference an element whose meaning we have already defined. In some cases lookup tables are placed independently in separate functions for conversion purposes. Here we create a dynamic lookup table (since it changes every time the being's position does) that holds the movement coordinate for each orthogonal direction.

In order to make use of the table, we need a variable that keeps track of the direction the being currently wants to move: this is done by adding a property called walk_num, an integer that indexes a coordinate in the lookup table. self:direct_seek(coord_list[self.walk_num]) will now move the being based on the value of walk_num (comments in the example show the number-direction correlation).

When the being runs into a solid object, in addition to waiting, walk_num is incremented by one on a modulo-four cycle using the math.fmod() function. math.fmod(dividend,divisor) divides dividend by divisor and returns the remainder (e.g., 7 / 4 = 1 remainder 3; math.fmod(7,4) returns 3). With each use of the cycling line of code, walk_num increases by one, and whenever it reaches four it is set back to one again (as math.fmod(4,4)+1 == 0+1 == 1).

This new state is functionally identical to the four-state case but is easily half its size in terms of lines of code. Finding ways to cut down on the magnitude of a script can greatly improve the ability to debug it, as well as allowing others to read and understand it. Since states can become very complicated, it is important to organize them as cleanly as possible.

Suppose we want this being to be able to attack the player whenever they get in its path. It is easy to include this in the AI we already have:

AI{
    name = "bulldozer_ai",
 
    OnCreate = function(self)
        self:add_property("ai_state","walk")
        self:add_property("walk_num",1)
    end,
 
    states = {
        walk = function(self)
            local c = self:get_position()
            local coord_list = {c+coord.new(0,-1), --up    (1)
                                c+coord.new(1, 0), --right (2)
                                c+coord.new(0, 1), --down  (3)
                                c+coord.new(-1,0)} --left  (4)
            --attempt to move
            local move_to = coord_list[self.walk_num]
            if self:direct_seek(move_to) ~= 0 then
                --if player is in the way, attack them
                if move_to == player:get_position() then
                    self:attack(move_to)
                --otherwise, wait and turn clockwise
                else
                    self.scount = self.scount - 1000
                    self.walk_num = math.fmod(self.walk_num,4)+1
                end
            end
        end,
    }
}

First, we define move_to as the coordinate where the being attempts to move: this is done to make the code easier to read, since the coordinate is now referenced multiple times. Next, once we know that self:direct_seek() failed, we split into two cases:

  • If move_to is equal to the player's current position, use self:attack() to attack the player.
  • Otherwise, we do what was done before (wait a second and continue the directional cycling)

One could say that this borders on an intelligent decision made by the AI, but it is a very basic one and doesn't do anything other than include a special case in which the being is blocked by the player. Compared to what AI can eventually develop into, this is still an extremely simple example.

A truly single-minded AI that attacks would look something like this:

AI{
    name = "attack_ai",
 
    OnCreate = function(self)
        self:add_property("ai_state", "attack")
    end,
 
    states = {
        attack = function(self)
            local p = player:get_position()
            if not self:fire(p, self.eq.weapon) then
                if not self:attack(p) then
                    self.scount = self.scount - 1000
                end
            end
        end,
    }
}

First, we set p as the player's coordinate. The being then tries to fire a ranged attack at p. If it cannot (either because the being has no ranged weapon or that weapon is out of ammo) then it will attempt a melee attack on p. If it cannot execute the melee attack, it will simply wait. Such a being, with a ranged attack, will attempt to kill the player from anywhere on the map: after all, it lacks the intelligence to know whether or not it can see the player.

Single-minded AI are simple, but they lack the ability to give the player an intelligent challenge. Most modders will want to create AI that make decisions based on the whereabouts of various objects on the map, or the being's own state, or even global timers. These require a more rigorous approach to ensure that they are produced without flaw.
