In the case of on-policy learning, the behavior policy and the target policy of the agent are one and the same. Naturally, the current policy (before the update) that the agent uses to collect samples is $\pi_{\theta_\text{old}}$, which is the behavior policy, and the objective function therefore becomes:

$$L(\theta) = \hat{\mathbb{E}}_t\!\left[\frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_\text{old}}(a_t \mid s_t)}\,\hat{A}_t\right]$$
TRPO optimizes the preceding objective function subject to a trust-region constraint, expressed using the KL-divergence metric as follows:

$$\hat{\mathbb{E}}_t\!\left[\mathrm{KL}\!\left(\pi_{\theta_\text{old}}(\cdot \mid s_t)\,\|\,\pi_\theta(\cdot \mid s_t)\right)\right] \le \delta$$
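As a minimal numerical sketch of these two quantities (the probabilities, advantages, and the trust-region size $\delta$ below are made-up toy values for illustration, not part of TRPO itself), the surrogate objective and the mean KL constraint can be computed like this:

```python
import numpy as np

# Toy data: probability of the taken action a_t under the old (behavior)
# policy and the new (target) policy, plus an advantage estimate per step.
pi_old = np.array([0.20, 0.50, 0.30])   # pi_theta_old(a_t | s_t)
pi_new = np.array([0.25, 0.45, 0.30])   # pi_theta(a_t | s_t)
advantages = np.array([1.0, -0.5, 2.0])

# Surrogate objective: empirical mean of the probability ratio
# weighted by the advantage estimate.
ratio = pi_new / pi_old
surrogate = np.mean(ratio * advantages)

# KL constraint: mean KL divergence between the full action
# distributions of the old and new policies at sampled states
# (here, two toy states with three actions each).
dist_old = np.array([[0.20, 0.50, 0.30],
                     [0.60, 0.30, 0.10]])
dist_new = np.array([[0.25, 0.45, 0.30],
                     [0.55, 0.35, 0.10]])
kl = np.mean(np.sum(dist_old * np.log(dist_old / dist_new), axis=1))

delta = 0.01  # toy trust-region size
print(f"surrogate objective: {surrogate:.4f}")
print(f"mean KL: {kl:.5f}, within trust region: {kl <= delta}")
```

TRPO then maximizes the surrogate objective only over policy updates whose mean KL stays below $\delta$, which is what keeps each update inside the trust region.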