Feedforward neural network: Revision history


For any version listed below, click on its date to view it. For more help, see Help:Page history and Help:Edit summary. (cur) = difference from current version, (prev) = difference from preceding version, m = minor edit, → = section edit, ← = automatic edit summary


7 August 2024

13 July 2024

1 July 2024

24 June 2024

  • (cur | prev) 00:33, 24 June 2024 2.121.108.30 (talk) 21,623 bytes +4 Non-Linear Activation Functions: To approximate any continuous function, we need non-linear activation functions. The Universal Approximation Theorem states that a feedforward neural network with at least one hidden layer and non-linear activation functions (such as sigmoid, tanh, or ReLU) can approximate any continuous function to any desired degree of accuracy, given a sufficient number of neurons. Multiple Parallel Linear Units: The statement about multiple parallel linear units ap
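
The edit summary above restates the Universal Approximation Theorem. A minimal NumPy sketch (illustrative only, not part of the revision) of the underlying point: composing linear layers without a non-linearity collapses to a single linear map, whereas inserting a non-linear activation such as tanh does not. The layer sizes and weights below are arbitrary assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)

# Two stacked "linear units" with no activation in between.
W1, b1 = rng.normal(size=(1, 16)), rng.normal(size=16)
W2, b2 = rng.normal(size=(16, 1)), rng.normal(size=1)
linear_only = (x @ W1 + b1) @ W2 + b2

# They are equivalent to one linear layer, so no amount of such
# stacking can approximate a non-linear continuous function.
W_eq, b_eq = W1 @ W2, b1 @ W2 + b2
assert np.allclose(linear_only, x @ W_eq + b_eq)

# With a non-linear activation between the layers, the output is no
# longer an affine function of x; this is the form of one-hidden-layer
# network the Universal Approximation Theorem speaks about.
nonlinear = np.tanh(x @ W1 + b1) @ W2 + b2
```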

28 April 2024

24 April 2024

7 March 2024

20 February 2024

11 February 2024

4 February 2024

3 February 2024

13 January 2024

3 October 2023

2 October 2023

21 September 2023

12 September 2023

12 August 2023

11 August 2023

9 August 2023

8 August 2023

28 July 2023

25 July 2023
