
Tuesday, July 3, 2018

Reinforcement learning (RL) is an area of machine learning inspired by behaviourist psychology, concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. The problem, due to its generality, is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment. In economics and game theory, reinforcement learning may be used to explain how equilibrium may arise under bounded rationality.

In machine learning, the environment is typically formulated as a Markov decision process (MDP), as many reinforcement learning algorithms for this context utilize dynamic programming techniques. The main difference between the classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the MDP and they target large MDPs where exact methods become infeasible.

Reinforcement learning differs from standard supervised learning in that correct input/output pairs need not be presented, and sub-optimal actions need not be explicitly corrected. Instead the focus is on performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge). The exploration vs. exploitation trade-off has been most thoroughly studied through the multi-armed bandit problem and in finite MDPs.

Introduction





Basic reinforcement is modeled as a Markov decision process:

  • a set of environment and agent states, $S$;
  • a set of actions of the agent, $A$;
  • $P_a(s,s') = \Pr(s_{t+1}=s' \mid s_t=s,\, a_t=a)$, the probability of transition from state $s$ to state $s'$ under action $a$;
  • $R_a(s,s')$, the immediate reward received after the transition from $s$ to $s'$ with action $a$;
  • rules that describe what the agent observes.

Rules are often stochastic. The observation typically includes the scalar, immediate reward associated with the last transition. In many works, the agent is assumed to observe the current environmental state (full observability); otherwise, the agent has only partial observability. Sometimes the set of actions available to the agent is restricted (for example, a zero account balance cannot be reduced further).

A reinforcement learning agent interacts with its environment in discrete time steps. At each time $t$, the agent receives an observation $o_t$, which typically includes the reward $r_t$. It then chooses an action $a_t$ from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state $s_{t+1}$ and the reward $r_{t+1}$ associated with the transition $(s_t, a_t, s_{t+1})$ is determined. The goal of a reinforcement learning agent is to collect as much reward as possible. The agent can (possibly randomly) choose any action as a function of the history.
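To make the loop concrete, here is a minimal sketch in Python. The `env` object with `reset()`/`step()` methods and the `choose_action` function are hypothetical placeholders standing in for a real environment and decision rule, not part of any specific library.

```python
# Minimal sketch of the agent-environment loop described above.
# `env` is a hypothetical environment exposing reset() and step(action),
# and `choose_action` stands in for whatever rule the agent uses.

def run_episode(env, choose_action, max_steps=1000):
    total_reward = 0.0
    observation = env.reset()                          # initial observation o_0
    for t in range(max_steps):
        action = choose_action(observation)            # choose a_t
        observation, reward, done = env.step(action)   # receive o_{t+1}, r_{t+1}
        total_reward += reward                         # accumulate collected reward
        if done:                                       # episode has terminated
            break
    return total_reward
```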

When the agent's performance is compared to that of an agent that acts optimally, the difference in performance gives rise to the notion of regret. In order to act near optimally, the agent must reason about the long term consequences of its actions (i.e., maximize future income), although the immediate reward associated with this might be negative.

Thus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off. It has been applied successfully to various problems, including robot control, elevator scheduling, telecommunications, backgammon, checkers and Go (AlphaGo).

Two elements make reinforcement learning powerful: the use of samples to optimize performance and the use of function approximation to deal with large environments. Thanks to these two key components, reinforcement learning can be used in large environments in the following situations:

  • A model of the environment is known, but an analytic solution is not available;
  • Only a simulation model of the environment is given (the subject of simulation-based optimization);
  • The only way to collect information about the environment is to interact with it.

The first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems.

Exploration



Reinforcement learning requires clever exploration mechanisms. Randomly selecting actions, without reference to an estimated probability distribution, shows poor performance. The case of (small) finite Markov decision processes is relatively well understood. However, due to the lack of algorithms that provably scale well with the number of states (or scale to problems with infinite state spaces), simple exploration methods are the most practical.

One such method is $\epsilon$-greedy, where the agent chooses the action that it believes has the best long-term effect with probability $1-\epsilon$, and chooses an action uniformly at random otherwise. Here, $0 < \epsilon < 1$ is a tuning parameter, which is sometimes changed, either according to a fixed schedule (making the agent explore progressively less) or adaptively based on heuristics.
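A minimal sketch of $\epsilon$-greedy action selection over tabular action-value estimates; the table `Q` (indexed as `Q[s][a]`) and the state index `s` are assumed to be supplied by the surrounding learning algorithm.

```python
import random

def epsilon_greedy(Q, s, n_actions, epsilon=0.1):
    """With probability epsilon pick a uniformly random action (explore);
    otherwise pick the action with the highest estimated value (exploit).
    Q is assumed to be a table of action-value estimates, Q[s][a]."""
    if random.random() < epsilon:
        return random.randrange(n_actions)                   # explore
    return max(range(n_actions), key=lambda a: Q[s][a])      # exploit
```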

Algorithms for control learning



Even if the issue of exploration is disregarded and even if the state is observable (assumed hereafter), the problem remains of using past experience to find out which actions are good.

Criterion of optimality

Policy

The agent's action selection is modeled as a map called policy:

$\pi : S \times A \rightarrow [0,1]$
$\pi(a \mid s) = \Pr(a_t = a \mid s_t = s)$

The policy map gives the probability of taking action $a$ when in state $s$. There are also non-probabilistic policies.
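For a finite MDP such a stochastic policy can be stored as a table of probabilities. The sketch below uses a made-up two-state, two-action table and samples an action according to $\pi(\cdot \mid s)$:

```python
import random

# Hypothetical tabular policy: policy[s][a] = pi(a|s); each row sums to 1.
policy = [
    [0.8, 0.2],   # state 0
    [0.3, 0.7],   # state 1
]

def sample_action(policy, s):
    """Draw an action a with probability pi(a|s)."""
    r, cumulative = random.random(), 0.0
    for a, p in enumerate(policy[s]):
        cumulative += p
        if r < cumulative:
            return a
    return len(policy[s]) - 1   # guard against floating-point rounding

print(sample_action(policy, 0))   # prints 0 about 80% of the time
```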

State-value function

The value function $V_\pi(s)$ is defined as the expected return starting from state $s$, i.e. $s_0 = s$, and successively following policy $\pi$. Hence, roughly speaking, the value function estimates "how good" it is to be in a given state.

$V_\pi(s) = E[R] = E\!\left[\sum_{t=0}^{\infty} \gamma^t r_t \,\middle|\, s_0 = s\right],$

where the random variable $R$ denotes the return, defined as the sum of future discounted rewards

$R = \sum_{t=0}^{\infty} \gamma^t r_t,$

where $r_t$ is the reward at step $t$ and $\gamma \in [0,1]$ is the discount rate (typically $\gamma < 1$, so that the infinite sum converges).
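For a finite (truncated) trajectory of rewards, this return can be computed directly; a small sketch:

```python
def discounted_return(rewards, gamma=0.9):
    """Compute R = sum_t gamma^t * r_t for a finite list of rewards."""
    R = 0.0
    for t, r in enumerate(rewards):
        R += (gamma ** t) * r
    return R

print(discounted_return([1.0, 0.0, 2.0], gamma=0.9))   # 1 + 0 + 0.81*2 = 2.62
```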

The algorithm must find a policy with maximum expected return. From the theory of MDPs it is known that, without loss of generality, the search can be restricted to the set of so-called stationary policies. A policy is stationary if the action-distribution it returns depends only on the last state visited (from the agent's observation history). The search can be further restricted to deterministic stationary policies. A deterministic stationary policy deterministically selects actions based on the current state; since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality.

Brute force

The brute force approach entails two steps:

  • For each possible policy, sample returns while following it
  • Choose the policy with the largest expected return
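Concretely, a minimal sketch of these two steps, assuming a small finite collection of candidate policies and a hypothetical `run_episode(env, policy)` helper that returns one sampled return:

```python
def brute_force_search(env, policies, run_episode, n_samples=100):
    """Estimate each candidate policy's expected return by sampling,
    then keep the policy with the largest estimate."""
    best_policy, best_estimate = None, float("-inf")
    for policy in policies:
        returns = [run_episode(env, policy) for _ in range(n_samples)]
        estimate = sum(returns) / len(returns)   # Monte Carlo estimate
        if estimate > best_estimate:
            best_policy, best_estimate = policy, estimate
    return best_policy, best_estimate
```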

One problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the return of each policy.

These problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are value function estimation and direct policy search.

Value function

Value function approaches attempt to find a policy that maximizes the return by maintaining a set of estimates of expected returns for some policy (usually either the "current" [on-policy] or the optimal [off-policy] one).

These methods rely on the theory of MDPs, where optimality is defined in a sense that is stronger than the above one: A policy is called optimal if it achieves the best expected return from any initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found amongst stationary policies.

To define optimality in a formal manner, define the value of a policy $\pi$ by

$V^\pi(s) = E[R \mid s, \pi],$

where $R$ stands for the random return associated with following $\pi$ from the initial state $s$. Define $V^*(s)$ as the maximum possible value of $V^\pi(s)$, where $\pi$ is allowed to change:

$V^*(s) = \max_\pi V^\pi(s).$

A policy that achieves these optimal values in each state is called optimal. Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return $\rho^\pi$, since $\rho^\pi = E[V^\pi(S)]$, where $S$ is a state randomly sampled from the initial-state distribution $\mu$.

Although state-values suffice to define optimality, it is useful to define action-values. Given a state $s$, an action $a$ and a policy $\pi$, the action-value of the pair $(s,a)$ under $\pi$ is defined by

$Q^\pi(s,a) = E[R \mid s, a, \pi],$

where $R$ now stands for the random return associated with first taking action $a$ in state $s$ and following $\pi$ thereafter.

The theory of MDPs states that if $\pi^*$ is an optimal policy, we act optimally (take the optimal action) by choosing the action from $Q^{\pi^*}(s,\cdot)$ with the highest value at each state $s$. The action-value function of such an optimal policy ($Q^{\pi^*}$) is called the optimal action-value function and is commonly denoted by $Q^*$. In summary, knowledge of the optimal action-value function alone suffices to know how to act optimally.

Assuming full knowledge of the MDP, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration. Both algorithms compute a sequence of functions $Q_k$ ($k = 0, 1, 2, \ldots$) that converge to $Q^*$. Computing these functions involves computing expectations over the whole state space, which is impractical for all but the smallest (finite) MDPs. In reinforcement learning methods, expectations are approximated by averaging over samples, and function approximation techniques are used to cope with the need to represent value functions over large state-action spaces.
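As an illustration, here is a minimal sketch of value iteration on the action-value function for a small, fully known MDP. The representation of the dynamics as `P[s][a]`, a list of `(prob, next_state, reward)` triples, is an assumption made for this sketch.

```python
def q_value_iteration(P, n_states, n_actions, gamma=0.9, n_iters=1000, tol=1e-8):
    """Repeatedly apply the Bellman optimality update to Q until it stops changing.

    P[s][a] is assumed to be a list of (prob, next_state, reward) triples
    describing the known transition dynamics of the MDP."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(n_iters):
        delta = 0.0
        for s in range(n_states):
            for a in range(n_actions):
                # Expected value of r + gamma * max_a' Q(s', a') over next states.
                new_q = sum(p * (r + gamma * max(Q[s2])) for p, s2, r in P[s][a])
                delta = max(delta, abs(new_q - Q[s][a]))
                Q[s][a] = new_q
        if delta < tol:
            break
    return Q
```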

Monte Carlo methods

Monte Carlo methods can be used in an algorithm that mimics policy iteration. Policy iteration consists of two steps: policy evaluation and policy improvement.

Monte Carlo is used in the policy evaluation step. In this step, given a stationary, deterministic policy $\pi$, the goal is to compute the function values $Q^\pi(s,a)$ (or a good approximation to them) for all state-action pairs $(s,a)$. Assume (for simplicity) that the MDP is finite, that sufficient memory is available to accommodate the action-values, and that the problem is episodic, with a new episode starting from some random initial state after each episode ends. Then the estimate of the value of a given state-action pair $(s,a)$ can be computed by averaging the sampled returns that originated from $(s,a)$ over time. Given sufficient time, this procedure can thus construct a precise estimate $Q$ of the action-value function $Q^\pi$. This finishes the description of the policy evaluation step.

In the policy improvement step, the next policy is obtained by computing a greedy policy with respect to $Q$: given a state $s$, this new policy returns an action that maximizes $Q(s,\cdot)$. In practice, lazy evaluation can defer the computation of the maximizing actions until they are needed.
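Putting the two steps together, here is a minimal sketch of Monte Carlo policy evaluation followed by greedy improvement; it assumes the episodic environment interface (`env.reset()`, `env.step(a)`) used in the earlier sketches, tabular storage, and every-visit averaging.

```python
from collections import defaultdict

def mc_policy_evaluation(env, policy, n_states, n_actions,
                         n_episodes=1000, gamma=0.9):
    """Estimate Q^pi(s, a) by averaging sampled returns (every-visit MC)."""
    returns_sum = defaultdict(float)
    returns_count = defaultdict(int)
    for _ in range(n_episodes):
        # Generate one episode following the deterministic tabular policy.
        episode, s, done = [], env.reset(), False
        while not done:
            a = policy[s]
            s_next, r, done = env.step(a)
            episode.append((s, a, r))
            s = s_next
        # Walk backwards, accumulating the discounted return from each pair.
        G = 0.0
        for s, a, r in reversed(episode):
            G = r + gamma * G
            returns_sum[(s, a)] += G
            returns_count[(s, a)] += 1
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for (s, a), total in returns_sum.items():
        Q[s][a] = total / returns_count[(s, a)]
    return Q

def greedy_improvement(Q):
    """Return the greedy (deterministic) policy with respect to Q."""
    return [max(range(len(row)), key=lambda a: row[a]) for row in Q]
```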

Problems with this procedure include:

  • The procedure may spend too much time evaluating a suboptimal policy.
  • It uses samples inefficiently in that a long trajectory improves the estimate only of the single state-action pair that started the trajectory.
  • When the returns along the trajectories have high variance, convergence is slow.
  • It works in episodic problems only.
  • It works in small, finite MDPs only.

Temporal difference methods

The first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle. This too may be problematic as it might prevent convergence. Most current algorithms do this, giving rise to the class of generalized policy iteration algorithms. Many actor critic methods belong to this category.

The second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's temporal difference (TD) methods that are based on the recursive Bellman equation. Note that the computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch). Batch methods, such as the least-squares temporal difference method, may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue.
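As an example of the incremental flavour, here is a minimal sketch of tabular TD(0) for estimating the state-value function of a fixed policy; the environment interface and tabular policy are the same assumptions as in the earlier sketches.

```python
def td0_prediction(env, policy, n_states, n_episodes=1000, alpha=0.1, gamma=0.9):
    """Incremental TD(0): after each transition, move V(s) toward the
    one-step target r + gamma * V(s'), then discard the transition."""
    V = [0.0] * n_states
    for _ in range(n_episodes):
        s, done = env.reset(), False
        while not done:
            a = policy[s]                         # follow the fixed policy
            s_next, r, done = env.step(a)
            target = r + (0.0 if done else gamma * V[s_next])
            V[s] += alpha * (target - V[s])       # temporal-difference update
            s = s_next
    return V
```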

In order to address the fifth issue, function approximation methods are used. Linear function approximation starts with a mapping $\phi$ that assigns a finite-dimensional feature vector to each state-action pair. Then, the action value of a state-action pair $(s,a)$ is obtained by linearly combining the components of $\phi(s,a)$ with some weights $\theta$:

$Q(s,a) = \sum_{i=1}^{d} \theta_i \phi_i(s,a).$

The algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs. Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored.
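A minimal sketch of this linear parameterization, with a hypothetical feature map `phi(s, a)` returning a fixed-length list of features, and a semi-gradient step that adjusts the weights toward a given target value (the target itself depends on which learning algorithm is used):

```python
def q_linear(theta, phi, s, a):
    """Q(s, a) = sum_i theta_i * phi_i(s, a)."""
    return sum(t * f for t, f in zip(theta, phi(s, a)))

def semi_gradient_step(theta, phi, s, a, target, alpha=0.01):
    """Nudge the weights so that Q(s, a) moves toward `target`.

    For a linear approximator the gradient of Q(s, a) with respect to theta
    is just phi(s, a), so the weights, not individual table entries, change."""
    error = target - q_linear(theta, phi, s, a)
    return [t + alpha * error * f for t, f in zip(theta, phi(s, a))]
```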

Value iteration can also be used as a starting point, giving rise to the Q-Learning algorithm and its many variants.
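A minimal sketch of tabular Q-learning with $\epsilon$-greedy exploration, reusing the hypothetical environment interface and the `epsilon_greedy` helper from the earlier sketches:

```python
def q_learning(env, n_states, n_actions, n_episodes=1000,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    """Off-policy TD control: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(n_episodes):
        s, done = env.reset(), False
        while not done:
            a = epsilon_greedy(Q, s, n_actions, epsilon)   # behaviour policy
            s_next, r, done = env.step(a)
            best_next = 0.0 if done else max(Q[s_next])    # greedy target value
            Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
            s = s_next
    return Q
```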

The problem with using action-values is that they may need highly precise estimates of the competing action values, which can be hard to obtain when the returns are noisy, though this problem is mitigated to some extent by temporal difference methods. Using the so-called compatible function approximation method compromises generality and efficiency. Another problem specific to TD methods comes from their reliance on the recursive Bellman equation. Most TD methods have a so-called $\lambda$ parameter ($0 \leq \lambda \leq 1$) that can continuously interpolate between Monte Carlo methods, which do not rely on the Bellman equations, and the basic TD methods, which rely entirely on the Bellman equations; this can be effective in mitigating the issue.

Direct policy search

An alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of stochastic optimization. The two approaches available are gradient-based and gradient-free methods.

Gradient-based methods (policy gradient methods) start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector $\theta$, let $\pi_\theta$ denote the policy associated with $\theta$. Define the performance function by

$\rho(\theta) = \rho^{\pi_\theta}.$

Under mild conditions this function is differentiable as a function of the parameter vector $\theta$. If the gradient of $\rho$ were known, one could use gradient ascent. Since an analytic expression for the gradient is typically not available, only a noisy estimate can be used. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams' REINFORCE method (known as the likelihood ratio method in the simulation-based optimization literature). Policy search methods have been used in the robotics context. Many policy search methods may get stuck in local optima (as they are based on local search).
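A minimal sketch of a REINFORCE-style update for a tabular softmax policy; the parameterization `theta[s][a]` (action preferences per state) and the format of `episode` as a list of `(s, a, r)` triples are assumptions of this sketch.

```python
import math

def softmax_probs(theta, s):
    """pi_theta(.|s) as a softmax over the action preferences theta[s]."""
    m = max(theta[s])
    exps = [math.exp(t - m) for t in theta[s]]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_update(theta, episode, alpha=0.01, gamma=0.99):
    """One REINFORCE update from a single episode of (s, a, r) triples.

    For this softmax parameterization, the gradient of log pi_theta(a|s)
    with respect to theta[s][a'] is 1{a'=a} - pi_theta(a'|s)."""
    # Compute the return G_t following each step, working backwards.
    G, returns = 0.0, []
    for _, _, r in reversed(episode):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()
    # Noisy gradient ascent on the performance, one step per visited state.
    for (s, a, _), G_t in zip(episode, returns):
        probs = softmax_probs(theta, s)
        for a2 in range(len(theta[s])):
            grad_log = (1.0 if a2 == a else 0.0) - probs[a2]
            theta[s][a2] += alpha * G_t * grad_log
    return theta
```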

A large class of methods avoids relying on gradient information. These include simulated annealing, cross-entropy search and methods of evolutionary computation. Many gradient-free methods can achieve (in theory and in the limit) a global optimum, and they have demonstrated good performance in multiple domains.

Policy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function based methods that rely on temporal differences might help in this case. In recent years, actor-critic methods have been proposed and have performed well on various problems.

Theory



Both the asymptotic and finite-sample behavior of most algorithms is well understood. Algorithms with provably good online performance (addressing the exploration issue) are known.

Efficient exploration of large MDPs is largely unexplored (except for the case of bandit problems). Although finite-time performance bounds have appeared for many algorithms, these bounds are expected to be rather loose, so more work is needed to better understand the relative advantages and limitations.

For incremental algorithms, asymptotic convergence issues have been settled. Temporal-difference-based algorithms converge under a wider set of conditions than was previously possible (for example, when used with arbitrary, smooth function approximation).

Research



Research topics include

  • adaptive methods that work with fewer (or no) parameters under a large number of conditions
  • addressing the exploration problem in large MDPs
  • large-scale empirical evaluations
  • learning and acting under partial information (e.g., using Predictive State Representation)
  • modular and hierarchical reinforcement learning
  • improving existing value-function and policy search methods
  • algorithms that work well with large (or continuous) action spaces
  • transfer learning
  • lifelong learning
  • efficient sample-based planning (e.g., based on Monte Carlo tree search).

Multiagent or distributed reinforcement learning is a topic of interest. Applications are expanding.

Reinforcement learning algorithms such as TD learning are under investigation as a model for dopamine-based learning in the brain. In this model, the dopaminergic projections from the substantia nigra to the basal ganglia function as the prediction error. Reinforcement learning has been used as a part of the model for human skill learning, especially in relation to the interaction between implicit and explicit learning in skill acquisition (the first publication on this application was in 1995-1996).

End-to-end (Deep) reinforcement learning

The work on learning ATARI games by Google DeepMind increased attention to end-to-end reinforcement learning, or deep reinforcement learning. This approach extends reinforcement learning to the entire process from observation to action (sensors to motors, or end to end) by representing it with a deep network, without explicitly designing a state space or an action space. This includes deep reinforcement learning agents that can sense the environment and learn with limited supervision. It can reduce interference (bias) from human design. Flexible and purposeful learning with greater degrees of freedom enables game strategies and other necessary functions to be learned.

Inverse reinforcement learning

In inverse reinforcement learning (IRL), no reward function is given. Instead, the reward function is inferred given an observed behavior from an expert. The idea is to mimic observed behavior, which is often optimal or close to optimal.

Apprenticeship learning

In apprenticeship learning, an expert demonstrates the target behavior. The system tries to recover the policy via observation.

See also



  • Temporal difference learning
  • Q-learning
  • State–action–reward–state–action (SARSA)
  • Fictitious play
  • Learning classifier system
  • Optimal control
  • Dynamic treatment regimes
  • Error-driven learning
  • Multi-agent system
  • Distributed artificial intelligence


References



  • Auer, Peter; Jaksch, Thomas; Ortner, Ronald (2010). "Near-optimal regret bounds for reinforcement learning". Journal of Machine Learning Research. 11: 1563–1600. 
  • Bertsekas, Dimitri P.; Tsitsiklis, John (1996). Neuro-Dynamic Programming. Nashua, NH: Athena Scientific. ISBN 1-886529-10-8. 
  • Bertsekas, Dimitri P. (2012). Dynamic Programming and Optimal Control: Approximate Dynamic Programming, Vol.II. Nashua, NH: Athena Scientific. ISBN 978-1-886529-44-1. 
  • Busoniu, Lucian; Babuska, Robert; De Schutter, Bart; Ernst, Damien (2010). Reinforcement Learning and Dynamic Programming using Function Approximators. Taylor & Francis CRC Press. ISBN 978-1-4398-2108-4. 
  • Deisenroth, Marc Peter; Neumann, Gerhard; Peters, Jan (2013). A Survey on Policy Search for Robotics. Foundations and Trends in Robotics. 2. NOW Publishers. pp. 1–142. 
  • Bradtke, Steven J.; Barto, Andrew G. (1996). "Linear Least-Squares Algorithms for Temporal Difference Learning". Machine Learning. Springer. 22: 33–57. doi:10.1023/A:1018056104778. 
  • Gosavi, Abhijit (2003). Simulation-based Optimization: Parametric Optimization Techniques and Reinforcement. Springer. ISBN 1-4020-7454-9. 
  • Peters, Jan; Vijayakumar, Sethu; Schaal, Stefan (2003). "Reinforcement Learning for Humanoid Robotics" (PDF). IEEE-RAS International Conference on Humanoid Robots. 
  • Powell, Warren (2007). Approximate dynamic programming: solving the curses of dimensionality. Wiley-Interscience. ISBN 0-470-17155-3. 
  • Sutton, Richard S.; Barto, Andrew G. (1998). Reinforcement Learning: An Introduction. MIT Press. ISBN 0-262-19398-1. 
  • Sutton, Richard S. (1988). "Learning to predict by the method of temporal differences". Machine Learning. Springer. 3: 9–44. doi:10.1007/BF00115009. 
  • Sutton, Richard S. (1984). Temporal Credit Assignment in Reinforcement Learning (PhD thesis). University of Massachusetts, Amherst, MA. 
  • Szita, Istvan; Szepesvari, Csaba (2010). "Model-based Reinforcement Learning with Nearly Tight Exploration Complexity Bounds" (PDF). ICML 2010. Omnipress. pp. 1031–1038. Archived from the original (PDF) on 2010-07-14. 
  • Williams, Ronald J. (1987). "A class of gradient-estimating algorithms for reinforcement learning in neural networks". Proceedings of the IEEE First International Conference on Neural Networks. 
  • Watkins, Christopher J.C.H. (1989). Learning from Delayed Rewards (PDF) (PhD thesis). King’s College, Cambridge, UK. 

Literature



Conferences, journals

Most reinforcement learning papers are published at the major machine learning and AI conferences (ICML, NIPS, AAAI, IJCAI, UAI, AI and Statistics) and in journals (JAIR, JMLR, Machine Learning journal, IEEE T-CIAIG). Some theory papers are published at COLT and ALT. However, many papers appear in robotics conferences (IROS, ICRA) and the "agent" conference AAMAS. Operations researchers publish their papers at the INFORMS conference and, for example, in the Operations Research and the Mathematics of Operations Research journals. Control researchers publish their papers at the CDC and ACC conferences, or, e.g., in the journals IEEE Transactions on Automatic Control and Automatica, although applied work tends to be published in more specialized journals. The Winter Simulation Conference also publishes many relevant papers. Other than this, papers are also published in the major conferences of the neural networks, fuzzy, and evolutionary computation communities. The annual IEEE symposium titled Approximate Dynamic Programming and Reinforcement Learning (ADPRL) and the biannual European Workshop on Reinforcement Learning (EWRL) are two regularly held meetings where RL researchers meet.

External links



  • Website for Reinforcement Learning: An Introduction (1998), by Rich Sutton and Andrew Barto, MIT Press, including a link to an html version of the book.
  • Reinforcement Learning Repository
  • Reinforcement Learning and Artificial Intelligence (RLAI, Rich Sutton's lab at the University of Alberta)
  • A Beginner's Guide to Deep Reinforcement Learning
  • Autonomous Learning Laboratory (ALL, Andrew Barto's lab at the University of Massachusetts Amherst)
  • "The Reinforcement Learning Toolbox". Archived from the original on 22 July 2012.  From the Graz University of Technology.
  • Hybrid reinforcement learning
  • Piqle: a Generic Java Platform for Reinforcement Learning
  • "A Short Introduction To Some Reinforcement Learning Algorithms". Archived from the original on 8 November 2015. 
  • Scholarpedia Reinforcement Learning
  • Scholarpedia Temporal Difference Learning
  • "Stanford Reinforcement Learning Course". Archived from the original on 21 March 2012. 
  • Real-world reinforcement learning experiments at Delft University of Technology
  • Stanford University Andrew Ng Lecture on Reinforcement Learning
  • Dissecting Reinforcement Learning Series of blog post on RL with Python code
  • Gershgorn, Dave. "When AI learns to sumo wrestle, it starts to act like a human". Quartz. Retrieved 2017-10-12. 


 