
The anatomy of the agent
As we saw in the previous chapter, there are several entities in RL's view of the world:
- Agent: A person or a thing that takes an active role. In practice, the agent is some piece of code that implements some policy. Basically, this policy must decide what action is needed at every time step, given our observations.
- Environment: Some model of the world, which is external to the agent and has the responsibility of providing us with observations and giving us rewards. It changes its state based on our actions.
Let's show how both of them can be implemented in Python for a simplistic situation. We will define an environment that gives the agent random rewards for a limited number of steps, regardless of the agent's actions. This scenario is not very useful, but will allow us to focus on specific methods in both the environment and the agent classes. Let's start with the environment:
import random

class Environment:
    def __init__(self):
        self.steps_left = 10
In the preceding code, we allow the environment to initialize its internal state. In our case, the state is just a counter that limits the number of time steps the agent is allowed to take to interact with the environment:
    def get_observation(self):
        return [0.0, 0.0, 0.0]
The get_observation() method is supposed to return the environment's current observation to the agent. It is usually implemented as some function of the internal state of the environment. In our example, the observation vector is always zero, as the environment basically has no internal state:
    def get_actions(self):
        return [0, 1]
The get_actions() method allows the agent to query the set of actions it can execute. Normally, the set of actions that the agent can execute does not change over time, but some actions can become impossible in different states (for example, not every move is possible in every position of the TicTacToe game). In our simplistic example, there are only two actions that the agent can carry out, encoded with the integers 0 and 1:
    def is_done(self):
        return self.steps_left == 0
The preceding method signals the end of the episode to the agent. As we saw in Chapter 1, What is Reinforcement Learning?, the series of environment-agent interactions is divided into a sequence of steps called episodes. Episodes can be finite, like a game of chess, or infinite, like the Voyager 2 mission (a famous space probe launched over 40 years ago that has travelled beyond our Solar System). To cover both scenarios, the environment provides us with a way to detect when an episode is over and there is no way to communicate with it anymore:
    def action(self, action):
        if self.is_done():
            raise Exception("Game is over")
        self.steps_left -= 1
        return random.random()
The action() method is the central piece of the environment's functionality. It does two things: it handles the agent's action and returns the reward for that action. In our example, the reward is random and the agent's action is discarded. Additionally, we update the count of steps and refuse to continue episodes that are over.
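Before moving on to the agent, it can help to exercise the environment on its own. The sketch below repeats the Environment class exactly as defined above and drives it with a fixed action; the small driver loop around it is our own illustration, not part of the book's script:

```python
import random

class Environment:
    def __init__(self):
        self.steps_left = 10

    def get_observation(self):
        return [0.0, 0.0, 0.0]

    def get_actions(self):
        return [0, 1]

    def is_done(self):
        return self.steps_left == 0

    def action(self, action):
        if self.is_done():
            raise Exception("Game is over")
        self.steps_left -= 1
        return random.random()

env = Environment()
total = 0.0
while not env.is_done():
    # the reward ignores the action, so a constant action works here
    total += env.action(0)
print("Episode finished after 10 steps, reward: %.4f" % total)
```

Each of the ten rewards lies in [0, 1), so the total always falls between 0 and 10, and a further call to action() after the loop would raise the "Game is over" exception.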
Now let's look at the agent's part. It is much simpler and includes only two methods: the constructor and the method that performs one step in the environment:
class Agent:
    def __init__(self):
        self.total_reward = 0.0
In the constructor, we initialize the counter that will keep the total reward accumulated by the agent during the episode:
    def step(self, env):
        current_obs = env.get_observation()
        actions = env.get_actions()
        reward = env.action(random.choice(actions))
        self.total_reward += reward
The step function accepts the environment instance as an argument and allows the agent to perform the following actions:
- Observe the environment
- Make a decision about the action to take based on the observations
- Submit the action to the environment
- Get the reward for the current step
For our example, the agent is dull and ignores the observations obtained while deciding which action to take. Instead, every action is selected randomly. The final piece is the glue code, which creates both classes and runs one episode:
if __name__ == "__main__":
    env = Environment()
    agent = Agent()

    while not env.is_done():
        agent.step(env)

    print("Total reward got: %.4f" % agent.total_reward)
You can find the preceding code in this book's Git repository at https://github.com/PacktPublishing/Deep-Reinforcement-Learning-Hands-On in the Chapter02/01_agent_anatomy.py file. It has no external dependencies and should work with any more-or-less modern Python version. By running it several times, you'll get different amounts of reward gathered by the agent.
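Because the rewards come from Python's random module, you can make runs repeatable by seeding the generator. The run_episode helper below is a hypothetical sketch of ours, not part of the book's script; it only reproduces the ten random rewards of our toy episode to show that a fixed seed gives a fixed total:

```python
import random

def run_episode(seed):
    # Hypothetical helper: replay the ten random rewards of the
    # toy episode with a fixed seed, so the total is repeatable.
    random.seed(seed)
    return sum(random.random() for _ in range(10))

first = run_episode(42)
second = run_episode(42)
assert first == second  # same seed, same sequence of rewards
```

Seeding like this is handy for debugging; without it, every run draws a fresh sequence and the printed total differs each time.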
The simplicity of the preceding code allows us to illustrate important basic concepts of the RL model. The environment could be an extremely complicated physics model, and the agent could easily be a large neural network implementing the latest RL algorithm, but the basic pattern stays the same: at every step, the agent takes some observations from the environment, does its calculations, and selects an action to issue. The result of this action is a reward and a new observation.
You may wonder, if the pattern is the same, why do we need to write it from scratch? Perhaps it is already implemented by somebody and could be used as a library? Of course, such frameworks exist, but before we spend some time discussing them, let's prepare your development environment.