

    Reinforcement learning

    It is a type of machine learning in which an agent learns by trial and error in an interactive environment, using feedback from its own actions and experiences. Both supervised learning and reinforcement learning map inputs to outputs, but they differ in the kind of feedback they use: supervised learning gives the agent the correct answer for each task, whereas reinforcement learning signals desirable and undesirable behaviour through rewards and punishments.

    Reinforcement learning also differs from unsupervised learning in its aim. Unsupervised learning looks for similarities and differences between data points, while reinforcement learning searches for an action model that maximizes the agent's total cumulative reward. The figure shows the working of the reinforcement learning model.

    [Figure: working of the reinforcement learning model]

    Some of the important terms used in reinforcement learning are listed here:

    • Agent:

      The entity that performs actions in an environment in order to earn rewards.

    • Environment (e):

      The world in which the agent has to operate.

    • Reward (R):

      Feedback from the environment that evaluates the agent's action.

    • State (s):

      Current situation of an agent.

    • Policy (π):

      The strategy by which the agent decides its next action based on the current state.

    • Value (V):

      The expected long-term return with discounting, as opposed to the short-term reward R.

    • Value Function:

      It defines the value of a state: the total amount of reward an agent can expect to accumulate starting from that state.

    • Model of the environment:

      It mimics the behaviour of the environment, determining how the environment responds to the agent's actions.

    • Model-based methods:

      Approaches that solve reinforcement learning problems by using a model of the environment.

    • Q value (Q):

      It is very similar to the value V; the main difference is that it takes an extra parameter, the current action.
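    The interaction between these terms can be made concrete with a short sketch. The toy grid environment, the random policy, and the reward scheme below are illustrative assumptions, not part of any real library:

    ```python
    import random

    random.seed(0)  # make the sketch deterministic

    class GridEnv:
        """Toy environment (e): a 1-D grid of states 0..4, with the goal at state 4."""
        def __init__(self):
            self.state = 0  # state (s): the agent's current situation

        def step(self, action):
            # The environment reacts to the agent's action and emits a reward (R)
            self.state = max(0, min(4, self.state + action))
            reward = 1.0 if self.state == 4 else 0.0
            done = self.state == 4
            return self.state, reward, done

    def policy(state):
        # A trivial random policy (pi): maps the current state to an action
        return random.choice([-1, +1])

    env = GridEnv()             # the agent operates inside this environment
    state, total_reward, done = env.state, 0.0, False
    while not done:             # one episode of trial and error
        action = policy(state)
        state, reward, done = env.step(action)
        total_reward += reward  # cumulative reward collected by the agent
    ```

    The episode ends when the agent stumbles into the goal state; a learning agent would use the collected rewards to improve on this random policy.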

    Types of reinforcement learning

    There are two main types of reinforcement:

    • Positive:

      It occurs when an event, caused by a particular behaviour, increases the strength and frequency of that behaviour; in other words, it has a positive effect on the action taken by the agent. Over a long period, this type of reinforcement helps improve performance and sustain change. Too much reinforcement, however, can lead to an over-optimization of states, which can diminish the results.

    • Negative:

      It is defined as strengthening a behaviour because a negative condition is stopped or avoided. It helps establish a minimum level of performance. The downside of this strategy, however, is that it encourages only enough behaviour to meet that minimum.

    Reinforcement learning algorithms

    There are three main approaches to implementing a reinforcement learning algorithm:

    • Value-based:

      This approach attempts to maximize the value function V(s), the return that the agent expects from the current state in the long term under policy π.

    • Policy-based:

      This approach searches directly for a policy such that the action performed in each state yields the maximum reward in the future. Policies come in two forms:

      • Deterministic: For every state, the policy π produces the same action.
      • Stochastic: Every action is taken with a certain probability, determined by the policy: π(a|s) = P[A_t = a | S_t = s].

    • Model-Based:

      In this approach, a virtual model is created for each environment, and the agent learns to perform in that particular environment.
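    The deterministic/stochastic distinction can be illustrated with a minimal sketch; the two-action setting and the 70/30 probability split are made-up values:

    ```python
    import random

    random.seed(0)
    actions = ["left", "right"]

    def deterministic_policy(state):
        # pi(s): always returns the same action for a given state
        return actions[state % 2]

    def stochastic_policy(state):
        # pi(a|s): each action has a probability of being chosen in state s,
        # here a fixed 70/30 split regardless of the state (illustrative)
        probs = [0.7, 0.3]
        return random.choices(actions, weights=probs)[0]
    ```

    Calling `deterministic_policy` twice with the same state always yields the same action, while `stochastic_policy` samples from the distribution π(a|s) on every call.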

    Models of reinforcement learning algorithms

    There are two important learning models in reinforcement learning:

    • Markov Decision Process (MDP):

      It is a mathematical framework used to describe the behaviour of an environment in reinforcement learning. It consists of a set of finite environment states S, a set of possible actions A(s) in each state, a real-valued reward function R(s), and a transition model P(s′, s | a). Real-world environments, however, often lack any prior knowledge of the environment dynamics; in such cases, model-free RL methods come in handy.
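      A small MDP can be written out explicitly as plain data. The three-state chain, the rewards, and the transition probabilities below are invented for illustration:

      ```python
      # States S, actions A(s), rewards R(s), and transition model P(s', s | a)
      S = ["s0", "s1", "s2"]
      A = {"s0": ["go"], "s1": ["go"], "s2": []}   # s2 is a terminal state
      R = {"s0": 0.0, "s1": 0.0, "s2": 1.0}

      # P[(s, a)] maps each possible next state s' to its probability
      P = {
          ("s0", "go"): {"s1": 0.9, "s0": 0.1},
          ("s1", "go"): {"s2": 0.8, "s0": 0.2},
      }

      # Markov property: the distribution over s' depends only on (s, a),
      # so each entry must be a valid probability distribution
      for dist in P.values():
          assert abs(sum(dist.values()) - 1.0) < 1e-9
      ```

      A model-based agent can plan directly against P and R; a model-free agent never sees these tables and must estimate values from sampled transitions instead.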

    • Q-learning:

      It is a widely used model-free approach that can, for example, be used to build a self-playing PacMan agent. It revolves around updating Q values, which denote the value of performing action a in state s. The value update rule is the centre of the Q-learning algorithm.
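      In its standard form, that update rule can be written as follows, where α is the learning rate and γ the discount factor:

      ```
      Q(s, a) ← Q(s, a) + α · [ r + γ · max_{a'} Q(s', a') − Q(s, a) ]
      ```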


      Q-learning proceeds through the following steps:

    • Initialise the Q-table with all zeros.
    • From the current state (S), select a possible action (a), exploring the actions available in that state.
    • As a result of the action (a), move to the next state (S′).
    • For the next state (S′), find the highest Q-value over all of its possible actions.
    • Update the Q-table values using the value update rule.
    • Set the next state as the current state.
    • If the goal state has been reached, the episode ends; otherwise, repeat the process.
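    The steps above can be assembled into a complete sketch. The 5-state chain environment and the hyperparameter values (α, γ, ε) are illustrative choices, not prescribed by the algorithm:

    ```python
    import random

    random.seed(0)
    N_STATES, GOAL = 5, 4
    ACTIONS = [-1, +1]                     # move left / move right
    alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

    # Step 1: initialise the Q-table with all zeros
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(s, a):
        """Environment: the next state (S') results from the action (a)."""
        s_next = max(0, min(N_STATES - 1, s + a))
        return s_next, (1.0 if s_next == GOAL else 0.0)

    for episode in range(200):
        s = 0
        while s != GOAL:                   # Step 7: repeat until the goal state
            # Step 2: select a possible action from the current state (S),
            # occasionally exploring at random (epsilon-greedy)
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            # Step 3: observe the next state (S') and the reward
            s_next, r = step(s, a)
            # Step 4: highest Q-value over all actions of the next state (S')
            best_next = max(Q[(s_next, x)] for x in ACTIONS)
            # Step 5: apply the value update rule
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            # Step 6: the next state becomes the current state
            s = s_next
    ```

    After enough episodes, the greedy action in every non-goal state points toward the goal, even though the agent was never told where the goal is; the information reaches it only through the reward signal.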
    Applications of Reinforcement Learning

    The algorithm allows software agents and machines to determine the ideal behaviour within a particular context and to maximize their performance. Some of the applications are as follows:

    • Manufacturing:

      The manufacturer Fanuc uses reinforcement learning to train its robots. A robot is trained so that it can pick an object from one box and put it into another; whether it succeeds or fails, it memorizes the object and uses the experience to train itself. Many warehousing facilities used by eCommerce sites and supermarkets deploy such intelligent robots to sort millions of items regularly and help deliver the right products to the right people. At the Tesla plant, more than 160 robots do much of the work on the vehicles to reduce the chance of defects.

    • Management of Inventory:

      To maximise space usage and streamline warehouse operations, reinforcement learning algorithms can be constructed to minimise the transit time for stocking and retrieving goods in the warehouse. A major problem in supply chain inventory management is aligning the inventory policies of the various supply chain players (suppliers, producers, and distributors) so that material flows smoothly and costs are reduced while consumer demand is met responsively.

    • Power systems:

      Reinforcement learning and optimization methods are used to assess the security of electrical power systems and to increase the performance of microgrids. Adaptive learning approaches are used to build control and protection systems. Transmission technologies such as High-Voltage Direct Current (HVDC) and Flexible Alternating Current Transmission System (FACTS), built on adaptive learning approaches, can help reduce transmission losses and CO2 emissions.

    • Delivery management:

      Reinforcement learning is used to address the split delivery vehicle routing problem, where Q-learning is applied to serve the appropriate customers with a single vehicle.

    • Finance:

      Pit.AI is at the forefront of applying reinforcement learning to the evaluation of trading strategies. It has turned out to be a versatile tool for training programs to optimise financial objectives. It has enormous applications in stock market trading, where a Q-learning agent can learn an optimal trading strategy from one simple instruction: maximise the value of the portfolio.

      In this way, someone with access to a Q-learning algorithm can theoretically generate revenue without having to reason explicitly about market prices or the risks involved, as the algorithm is intelligent enough to take all of these into account when making a trade.

    Advantages and Disadvantages

    Some advantages and disadvantages of reinforcement learning are as follows:

    Advantages:
    • The algorithm can be used to solve complex problems that cannot be solved by conventional techniques.
    • The model can correct errors that arise during the training process.
    • Once the model corrects an error, the odds of the same error recurring are much smaller.
    • It can create an ideal model to solve a specific problem.
    • This approach is preferred for achieving long-term results, which are otherwise very hard to obtain.
    • This model of learning is very similar to how human beings learn; hence, it comes close to achieving perfection.
    • It can learn from its own experience even in the absence of a training dataset.
    • Reinforcement learning is designed to achieve the ideal behaviour of a model within a particular context and to maximise its performance.
    • It is useful when the only way to gather information about the environment is to interact with it.
    Disadvantages:
    • In several ways, reinforcement learning as a system is imprecise, but it is precisely this quality that makes it useful.
    • Too much reinforcement learning can produce an overload of states, which can diminish the results.
    • Reinforcement learning is not preferable for solving simple problems.
    • Reinforcement learning requires a lot of data and a lot of computation; it is data-hungry. That is why it works so well in video games: the game can be played over and over again, so gathering a lot of data is feasible.
    • Reinforcement learning assumes that the world is Markovian, which it is not. A Markovian model describes a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.
    • In the case of robotic learning, the robot hardware is usually very costly, suffers from wear and tear, and needs careful maintenance; repairing a robotic system is expensive.
    • To address some of these concerns, reinforcement learning should be combined with other approaches rather than used on its own. One common combination is reinforcement learning with deep learning.
