Learning rule
A learning rule or learning process is a method or mathematical logic that improves an artificial neural network's performance; the rule is usually applied repeatedly over the network. It works by updating the weights and bias levels of the network as the network is simulated in a specific data environment.[1] A learning rule may take the existing condition (weights and biases) of the network, compare the network's expected result with its actual result, and produce new, improved values for the weights and biases.[2] Depending on the complexity of the model being simulated, the learning rule can be as simple as an XOR gate or a mean squared error criterion, or it can involve multiple differential equations. The learning rule is one of the factors that determines how quickly and how accurately the network can be trained. Depending on the process used to develop the network, there are three main models of machine learning: supervised learning, unsupervised learning, and reinforcement learning.
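To make the idea concrete, the following is a minimal sketch (not taken from the cited sources) of one well-known learning rule, the delta (Widrow–Hoff) rule, applied to a single linear neuron. The function name, learning rate, and example data are illustrative assumptions, not part of the article's references.

```python
import numpy as np

# Minimal sketch of a simple learning rule (the delta / Widrow-Hoff rule)
# for a single linear neuron: the weights and bias are nudged toward values
# that reduce the squared error between the expected and actual output.

def train(inputs, targets, learning_rate=0.1, epochs=50):
    """Repeatedly apply the update rule over the data set."""
    n_features = inputs.shape[1]
    weights = np.zeros(n_features)   # existing condition of the network
    bias = 0.0

    for _ in range(epochs):          # the rule is applied repeatedly
        for x, expected in zip(inputs, targets):
            actual = np.dot(weights, x) + bias    # network's actual result
            error = expected - actual             # compare expected and actual
            weights += learning_rate * error * x  # new, improved weights
            bias += learning_rate * error         # new, improved bias
    return weights, bias

# Illustrative usage: learn the linear mapping y = 2*x1 - x2 + 1
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = 2 * X[:, 0] - X[:, 1] + 1
w, b = train(X, y)
print(w, b)   # should approach [2, -1] and 1
```

In this sketch the update is driven directly by the difference between the expected and actual output, which is the pattern most simple learning rules follow; more elaborate rules differ mainly in how that error signal is computed and propagated.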
See also
- Machine learning
- Decision tree learning
- Pattern recognition
- Bias-variance dilemma
- Bias of an estimator
References
- ^ Simon Haykin (16 July 1998). "Chapter 2: Learning Processes". Neural Networks: A Comprehensive Foundation (2nd ed.). Prentice Hall. pp. 50–104. ISBN 978-8178083001. Retrieved 2 May 2012.
- ^ Stuart Russell; Peter Norvig. "Chapter 18: Learning from Examples". Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall. pp. 693–859. ISBN 0-13-103805-2. Retrieved 20 Nov 2013.