Talk:Decision boundary

WikiProject Robotics (Rated Stub-class, Mid-importance)
Decision boundary is within the scope of WikiProject Robotics, which aims to build a comprehensive and detailed guide to Robotics on Wikipedia. If you would like to participate, you can choose to edit this article, or visit the project page (Talk), where you can join the project and see a list of open tasks.
This article has been rated as Stub-Class on the project's quality scale.
This article has been rated as Mid-importance on the project's importance scale.
 

Untitled

I wrote this article to replace what was basically a substub, but I don't know much about this topic, so I've tagged it for expert review. (I'd never even heard of support vector machines until now; I just mentioned them because they were mentioned in the original version of the article.) I also don't know what a decision space is, or how it relates to the concept of a decision boundary, but perhaps it ought to be merged somewhere (here, for example)? —User:Caesura(t) 19:57, 28 November 2005 (UTC)

As far as I can tell, a decision space is just a 3-dimensional hyperplane belonging to a 4-dimensional hyperspace, which then separates 4-dimensional objects into two groups. I am only just starting to study this field, so this may be completely off.
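For whatever it is worth, here is a rough numerical sketch of that separating-hyperplane idea (the weight vector, offset, and points below are made up purely for illustration): a hyperplane w·x + b = 0 in 4-dimensional space splits points into two groups according to the sign of w·x + b.

 import numpy as np
 
 # Made-up example: a hyperplane w.x + b = 0 in 4-dimensional space acting as a
 # decision boundary that separates points into two groups.
 rng = np.random.default_rng(0)
 
 w = np.array([1.0, -2.0, 0.5, 3.0])  # normal vector of the hyperplane (arbitrary)
 b = -1.0                             # offset (arbitrary)
 
 points = rng.normal(size=(10, 4))    # ten random 4-dimensional points
 
 # Which side of the boundary each point falls on:
 sides = np.sign(points @ w + b)
 print(sides)                         # entries of +1 or -1, one group per sign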

This article is wrong

I cite the universal approximation theorem: http://en.wikipedia.org/wiki/Universal_approximation_theorem

The article reads "If it has one hidden layer, then it can learn problems with convex decision boundaries (and some concave decision boundaries). The network can learn more complex problems if it has two or more hidden layers." This is not true. A feedforward neural network with one hidden layer and one output layer can approximate any continuous function to arbitrary accuracy (on a compact domain, given enough hidden units), so it is not limited to convex decision boundaries. Additional layers may be more efficient, more elegant, or otherwise desirable, but they are not strictly needed. — Preceding unsigned comment added by 70.162.89.24 (talk) 23:02, 26 February 2014 (UTC)
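As a quick empirical illustration (a minimal sketch using scikit-learn; the dataset, hidden-layer width, and iteration count are arbitrary choices, not anything taken from the article), a network with a single hidden layer readily fits the non-convex decision boundary of the two-moons problem:

 # One hidden layer, non-convex decision boundary.
 from sklearn.datasets import make_moons
 from sklearn.neural_network import MLPClassifier
 
 X, y = make_moons(n_samples=500, noise=0.1, random_state=0)
 
 # A single hidden layer with 50 units -- no second hidden layer is used.
 clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=5000, random_state=0)
 clf.fit(X, y)
 
 print("training accuracy:", clf.score(X, y))  # typically close to 1.0

With enough hidden units the boundary can be shaped as finely as needed here, which is consistent with the universal approximation theorem cited above.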