AlexNet

AlexNet is a convolutional neural network, originally written in CUDA to run with GPU support, which competed in the ImageNet Large Scale Visual Recognition Challenge[1] in 2012. The network achieved a top-5 error of 15.3%, more than 10.8 percentage points lower than that of the runner-up. AlexNet was designed by the SuperVision group, consisting of Alex Krizhevsky, Geoffrey Hinton, and Ilya Sutskever.[2][3]

AlexNet has had a large impact on the field of machine learning, specifically in the application of deep learning to machine vision. As of 2018, the paper had been cited over 25,000 times. The original paper's primary result was that the depth of the model was essential for its high performance; this depth was computationally expensive, but was made feasible by the use of GPUs during training.[1]

Network design

AlexNet contained eight layers; the first five were convolutional layers, and the last three were fully connected layers.[4] It used the non-saturating ReLU activation function, which showed improved training performance over tanh and sigmoid.[1]
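The layer structure described above can be illustrated with a minimal PyTorch sketch. The five-convolutional-plus-three-fully-connected layout and the ReLU activations follow the description in this section; the specific kernel sizes, strides, channel counts, and the pooling and dropout placement are assumptions modeled on common reimplementations, not details taken from this article.

```python
import torch
import torch.nn as nn


class AlexNetSketch(nn.Module):
    """Sketch of an AlexNet-style network: 5 conv layers + 3 fully connected layers."""

    def __init__(self, num_classes: int = 1000):
        super().__init__()
        # Five convolutional layers, each followed by a ReLU non-linearity.
        # Pooling positions and filter sizes here are assumed, not sourced.
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        # Three fully connected layers; the last one produces the class scores.
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)


# Example forward pass on a batch of one 224x224 RGB image.
model = AlexNetSketch()
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```

The non-saturating ReLU (max(0, x)) keeps gradients from vanishing for large positive inputs, which is why it trained faster than tanh or sigmoid in the original experiments.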

References