Proximal policy optimization

Proximal Policy Optimization (PPO) is a family of model-free reinforcement learning algorithms developed at OpenAI in 2017. PPO algorithms are policy gradient methods: they optimize a parameterized policy directly, rather than learning a value for each state–action pair and deriving a policy from those values.
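
For context, a plain policy gradient method updates the policy parameters θ by ascending an estimate of the gradient of the expected return. In the notation of the cited paper[1], with π_θ the policy and Â_t an estimator of the advantage function at timestep t, the gradient estimator is

    \hat{g} = \hat{\mathbb{E}}_t\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\,\hat{A}_t\right]

PPO keeps this general scheme but changes the objective whose gradient is estimated, as described below.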

PPO algorithms have some of the benefits of trust region policy optimization (TRPO) algorithms, but they are simpler to implement, more general, and have better sample complexity.[1] This is achieved by replacing TRPO's constrained update with a different, clipped surrogate objective function.[2]
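
Concretely, the clipped surrogate objective proposed in the cited paper[1] is

    L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}\left(r_t(\theta),\, 1-\varepsilon,\, 1+\varepsilon\right)\hat{A}_t\right)\right]

where r_t(θ) = π_θ(a_t | s_t) / π_θ_old(a_t | s_t) is the probability ratio between the new and old policies, Â_t is an advantage estimate, and ε is a small hyperparameter (0.2 in the paper's experiments). A minimal sketch of this computation in Python follows; the function name and arguments are illustrative, not taken from any particular library:

    import numpy as np

    def ppo_clip_objective(log_probs_new, log_probs_old, advantages, epsilon=0.2):
        """Clipped surrogate objective L^CLIP averaged over a batch of timesteps."""
        # Probability ratio r_t(theta), computed from log-probabilities for
        # numerical stability.
        ratio = np.exp(log_probs_new - log_probs_old)
        unclipped = ratio * advantages
        clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
        # Taking the elementwise minimum gives a pessimistic lower bound on the
        # unclipped objective, removing any incentive to move the ratio far
        # outside [1 - epsilon, 1 + epsilon].
        return np.mean(np.minimum(unclipped, clipped))

Because the minimum is taken, moving the ratio outside the clipping interval can only lower the objective, never raise it; this is what keeps each update close to the old policy without TRPO's second-order constrained optimization.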

References

  1. Schulman, John; Wolski, Filip; Dhariwal, Prafulla; Radford, Alec; Klimov, Oleg (2017). "Proximal Policy Optimization Algorithms". arXiv:1707.06347.
  2. "Proximal Policy Optimization". OpenAI. 2017.
