Talk:Finite-state machine


Language Links[edit]

hey there! Why does the language link for Turkish end with weird characters? thanks. —Preceding unsigned comment added by (talk) 16:16, 1 March 2011 (UTC)


Was it Alan Turing who invented the concept of a finite state machine by creating the example we now call Turing machines in his famous paper about the decision problem? Jfgrcar (talk) 04:02, 18 February 2010 (UTC)

Interesting question! In all my "collections" of books pulled off the internet (via Google Books) or studied in library hunts, or purchased -- (and some of these go back to the early 1800s) -- I have not seen any references whatever earlier than Turing that use the words and definitions he uses in his 1936-7 paper. It's hard to believe that he could, by himself, hatch as many words/definitions/ideas as he expressed in his paper. But I've discovered no evidence to the contrary. It's as if the words/ideas were all there but he was the first to put names to them. Turing's "famous paper" never ceases to amaze me. Bill Wvbailey (talk) 02:39, 23 April 2010 (UTC)

State machine is a model? WRONG![edit]

A state machine is a device readily implemented using a vast number of techniques. THIS IS NOT A MODEL! This is a "tangible" device. This is an implementation of a model, not the model itself. A model is always missing some details, while a state machine is absolutely concrete, no matter in what way you implement it, in C++ or in VHDL (I did both plenty of times). A state machine exists as a digital electronics device, the same way as a trigger or register; I hope nobody here will say a register or trigger is A MODEL!!! State machines exist in the form of software components. Who will say the working software is a model??? Oh my... Neonil (talk) 14:14, 8 July 2009 (UTC)

I am sorry, but an FSM is a model. A register is not a model, obviously, and it is not an FSM either. A register can be modeled in the form of an FSM. An FSM (that models a register) does not deal with the delays, voltage, mass, material, size, or color of the register. Moreover, an FSM does not describe the full behavior of the device it models (e.g. an FSM cannot describe a multivibrator circuit built on triggers). So, I am sorry I must disappoint you, the FSM is a model. (Vladimir E. Zyubin, PhD in CS) Zyubin (talk) 13:56, 20 November 2011 (UTC)

I do not think that the definition of Mealy and Moore is correct[edit]

I do agree w/ the wiki pages discussing "Moore machine" & "Mealy machine".
A Mealy machine is a finite-state transducer that generates an output based on its current state and input.
A Moore machine is a finite-state transducer whose outputs are determined by the current state alone (and do not depend directly on the input).
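A hedged sketch of this distinction, not taken from the article: an edge detector over a binary stream written both ways. All names and the encoding of state are illustrative.

```python
# Edge detector: emit 1 whenever the input bit differs from the previous one.

def mealy_step(state, x):
    """Mealy: the output is a function of (current state, input)."""
    out = 1 if x != state else 0  # reacts to the input on the same tick
    return x, out                 # next state just remembers the last input

def moore_step(state, x):
    """Moore: the output must be readable from the state alone, so the
    'changed' flag is folded into the state itself."""
    last, _ = state
    return (x, 1 if x != last else 0)

def moore_output(state):
    return state[1]               # depends only on the current state

stream = [0, 0, 1, 1, 0]
s, mealy_outs = 0, []
for x in stream:
    s, o = mealy_step(s, x)
    mealy_outs.append(o)
# mealy_outs == [0, 0, 1, 0, 1]
```

Note how the Moore version pays for its simpler output function by carrying more information in the state, which mirrors the usual trade-off between the two models.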

Is a computer a FSM?[edit]

OK. Consider a "real" computer. Let M, N be fixed positive integers. At any given time the computer is in a state s(i) such that 0 <= s(i) < N. Let the input at time i be denoted x(i), where 0 <= x(i) < M. Each "next" state is a mathematical function of the current state and the input: s(i+1) = f(s(i), x(i)). But this is just an FSM. Thus the FSM is a more realistic model for a real computer than the Turing machine. Yet the Turing machine is preferred in CS theory as the model for the computer. This makes sense, since FSMs in practice must have an astronomically large number of states to "run" many algorithms. So, is there some sort of general result that shows why the TM is better than the FSM as a computing model? Or am I completely wrong in my analysis? - Connelly 08:28, 9 October 2005 (UTC)
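The construction in the comment above can be sketched concretely. The sizes and the particular transition function are made up for illustration; a real machine's N would be astronomically large (2 to the number of memory bits).

```python
# A "computer" with N states and M input symbols, stepped by a pure
# next-state function s(i+1) = f(s(i), x(i)), exactly as described above.
N, M = 8, 2   # toy sizes, chosen only for illustration

def f(s, x):
    # arbitrary concrete transition: shift in the input bit, modulo N
    return (2 * s + x) % N

s = 0                  # initial state s(0)
for x in [1, 0, 1]:    # input sequence x(0), x(1), x(2)
    s = f(s, x)
# s == 5
```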

I would disagree with the statement that the TM is better than the FSM at modelling a computer. Using your terms, modelling a computer as a TM would have "astronomical" complexity times ten (at least). Your approach of designing one FSM to describe the behaviour of a computer is the problem. In practice you would design a system of multiple FSMs, each of them having a small, easy-to-control number of states. For instance, having 10 FSMs each with 10 states makes 10^10 (!) situations your model would cover (theoretically). Going this way you can imagine that you would be able to cover all situations your computer can be in within a more or less coherent model. I cannot imagine achieving the same using a TM. So where did you get the information? Thowa 17:32, 9 October 2005 (UTC)
The purpose of a model of a real computer is to derive guidelines for programming real computers. The infinite tape in TMs allows us to talk about asymptotic rates of growth. (That is, big-O notation.) Though there are exceptions (Levin search, for example), the rule of thumb "prefer algorithms with better big-O" leads to good results in practice. Though modeling a real computer as a finite automaton is more "accurate", the purpose of a model is to be helpful, not to be unhelpfully accurate. JohnicholasHines 14:43, 13 October 2006 (UTC)
Hmm, yeah, thanks for the reply. It seems that there is a common-sense argument that a TM is easier to program than an FSM, without a bunch of functions which "explode" a TM program into an FSM, and no real formal argument that I can find. Maybe the formal argument is back in the annals of history somewhere, or it's an article of faith to date. - Connelly 08:28, 19 October 2006 (UTC) (Not logged in)

FOLDOC definition[edit]

Here is the FOLDOC article on this... if there is any information from here that can be merged into the main article, do so:

Finite State Machine <mathematics, algorithm, theory> (FSM or "Finite State Automaton", "transducer") An abstract machine consisting of a set of states (including the initial state), a set of input events, a set of output events, and a state transition function. The function takes the current state and an input event and returns the new set of output events and the next state. Some states may be designated as "terminal states". The state machine can also be viewed as a function which maps an ordered sequence of input events into a corresponding sequence of (sets of) output events.

A deterministic FSM (DFA) is one where the next state is uniquely determined by a single input event. The next state of a nondeterministic FSM (NFA) depends not only on the current input event, but also on an arbitrary number of subsequent input events. Until these subsequent events occur it is not possible to determine which state the machine is in.

It is possible to automatically translate any nondeterministic FSM into a deterministic one which will produce the same output given the same input. Each state in the DFA represents the set of states the NFA might be in at a given time.
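The translation described above is the subset construction; a minimal sketch follows. The NFA encoding and the example machine are mine, not FOLDOC's, and epsilon transitions are assumed absent for brevity.

```python
def nfa_to_dfa(nfa, start, alphabet):
    """Subset construction: each DFA state is the frozenset of NFA states
    the NFA might be in. nfa maps (state, symbol) -> set of next states."""
    start_set = frozenset([start])
    trans, seen, todo = {}, {start_set}, [start_set]
    while todo:
        S = todo.pop()
        for a in alphabet:
            # union of all NFA moves from any state in S on symbol a
            T = frozenset(q for s in S for q in nfa.get((s, a), ()))
            trans[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return trans, seen

# Example NFA over {a, b} that "guesses" when a final "ab" begins:
nfa = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
trans, dfa_states = nfa_to_dfa(nfa, 0, 'ab')
# three reachable DFA states: {0}, {0,1}, {0,2}
```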

In a probabilistic FSM [or stochastic FSM], there is a predetermined probability of each next state given the current state and input (compare Markov chain).
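A minimal sketch of such a machine, with the distributions invented for illustration: for each (state, input) pair there is a fixed distribution over next states, and ignoring the input recovers a plain Markov chain.

```python
import random

# Hypothetical transition distributions: (state, input) -> [(next, prob), ...]
P = {("s0", "go"): [("s0", 0.3), ("s1", 0.7)],
     ("s1", "go"): [("s1", 1.0)]}

def step(state, inp, rng=random.random):
    """Draw the next state from the predetermined distribution."""
    r, acc = rng(), 0.0
    for nxt, p in P[(state, inp)]:
        acc += p
        if r <= acc:
            return nxt
    return nxt  # guard against floating-point round-off
```

Passing a deterministic `rng` (as in the tests) makes the draw reproducible, which is handy when checking such a machine.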

The terms "acceptor" and "transducer" are used particularly in language theory where automata are often considered as abstract machines capable of recognising a language (certain sequences of input events). An acceptor has a single Boolean output and accepts or rejects the input sequence by outputting true or false respectively, whereas a transducer translates the input into a sequence of output events.

FSMs are used in computability theory and in some practical applications such as regular expressions and digital logic design.

See also state transition diagram, Turing Machine.

[J.H. Conway, "Regular Algebra and Finite Machines", 1971, Chapman & Hall].

[S.C. Kleene, "Representation of Events in Nerve Nets and Finite Automata", 1956, Automata Studies, Princeton].

[Hopcroft & Ullman, "Introduction to Automata Theory, Languages and Computation", 1979, Addison-Wesley].

[M. Crochemore, "Transducers and repetitions", Theoretical Computer Science 46, 1986].

Separate page for tools?[edit]

Ideally I think it would be best to have a separate page for tools so that we could add some more (there are tons of them out there), sort them into categories and give a short description (IMHO having just the names is almost totally useless; the names and a link are already better but still not good enough).

I see at least three distinct categories (some programs would fall into more than one category but that's ok, I guess):

  • FSM graphical editors
  • automatic FSM generator from "java/C/..." code
  • automatic code generator from FSM

--[[User:Gedeon|Ged (talk)]] 13:36, 11 Aug 2004 (UTC)

Removed tool[edit]

Couldn't find a webpage for "fa (TCL)", so I removed it. Feel free to add it back IF you know of a link for it. --[[User:Gedeon|Ged (talk)]] 14:52, 11 Aug 2004 (UTC)

Hardware applications[edit]

I don't feel strong enough to write a good article about the hardware applications now. Therefore I just moved the old content from the Implementation section to the Hardware application section making just small corrections. My changes in the Implementation section are focused on the definition and software applications, so let me know if you have any comments...

--Thowa 20:52, 18 Dec 2004 (UTC)

Restored FSM logic diagram[edit]

I replaced the new FSM logic diagram with the old one, as it uses terms which are not mentioned/explained on the page: state generator, output generator etc. I wonder if these terms are correct in this context. Maybe some specific FSM modification was meant, but then I would write a new article, as this is not a classic FSM any more. For instance, an FSM does not generate any states; it is in a certain state and can change state if certain (input) conditions are fulfilled.

--Thowa 19:49, 15 Feb 2005 (UTC)

The problem with Image:Finite state machine definition.gif is that it's about hardware applications. This image is in no way a general definition of an FSM, nor is the section called "Definition". Outside of hardware applications, for example in formal language theory, FSMs are usually conceptualized abstractly, without paying attention to such things as "entry actions". IMO it would be best to define FSMs informally at first, then give a formal definition of a state machine, and then state under what conditions (described by the Myhill-Nerode theorem) a state machine is (equivalent to) a finite state machine. --MarkSweep 20:54, 15 Feb 2005 (UTC)
I think this diagram is fine where it is, de-emphasized, but the lead diagram is much too abstract (it could describe any state machine) while the others are too specific. Can't we just have an FSM that matches a simple regexp or something, like the kind you see in theory textbooks? Deco 05:03, 16 Feb 2005 (UTC)

Thowa 20:44, 16 Feb 2005 (UTC): I wonder a little that you are so sceptical about the FSM logic figure. It shows exactly how an FSM behaves, so that everybody can understand it. But paste an alternative diagram into the discussion - as usual, everything can be improved - so we can discuss it. I would like to make a general statement about pages like this: I don't think the goal is to explain the entire FSM theory within one Wiki article; nobody can seriously try it. For this there are many books, which we shall point to in the References section. The purpose of the article shall be to explain briefly to an interested reader what an FSM is and how it can be used in practice. This means of course that the current FSM article needs some "rework". Besides an explanation of how it works (which shall be possible to understand without spending a few years studying maths), it shall of course mention that it can be used to build any kind of parser application, but it shall also clearly state that the major applications are control systems in hardware and in software. This information might for instance be very useful for students, many of whom leave university believing that an FSM is for building parsers, just because that was used as a nice FSM example in some courses. The true potential of the FSM is enormous, and this is something this article shall communicate. To provide the "grey" theory is useless, as nobody is interested in reading it here; there are probably many thousand books which are much better if somebody really wants to learn it. The article can be just a starting point.

If I understand Deco's comment correctly, you would like to have a parser example instead of the provided door open/close example. I had to think about my salesman friend: I'm sure he will get an idea of how an FSM works if I tell him that a door can be closed or open and that these are the states of the door FSM. Then I mention the two obvious possible transitions: open to closed, closed to open, and my friend has learned something within a few minutes. And if he is more interested I add an engine to control the door, which can be controlled by the commands "close" and "open", and the actions problem is also clear. A parser is an academic example which would cover only a few aspects of FSMs and is probably too abstract for my friend. However, I agree that it might be useful to put in both examples; as we know, everybody has his own individual preferred way of learning, and this shall probably be taken into account by articles in Wikipedia.
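The door example above can be sketched in a few lines. The structure (a transition table keyed on state and command, with the motor command as a transition action) is illustrative, not the article's own formulation.

```python
# states "Open"/"Closed"; commands "open"/"close"; motor actions on transitions
transitions = {
    ("Open",   "close"): ("Closed", "start motor closing"),
    ("Closed", "open"):  ("Open",   "start motor opening"),
}

def step(state, command):
    # unknown (state, command) pairs leave the door alone, with no action
    return transitions.get((state, command), (state, None))

state = "Open"
state, action = step(state, "close")
# state == "Closed", action == "start motor closing"
```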

Actually, I just couldn't see the image very well and had no idea what it was about. Now that I look closer it's not a bad diagram, just hard to read. If it were larger and/or had better contrast, and were the lead diagram at the top of the article, I would consider this a great overall improvement. Sorry for the ignorance. Deco 07:26, 17 Feb 2005 (UTC)
Oh, and I realise the diagram serves to explain the Definition section — but I think someone unfamiliar with state machines is likely to find the Overview section quite confusing in any case. I think starting with a simple example and some terminology in the context of that example would be a lot better (such as the example this diagram describes). Deco 07:30, 17 Feb 2005 (UTC)

Article revision[edit]

I sorted the available content a little bit. The focus was on a general explanation for everybody rather than explaining details of the automata theory within a short article. For more information I extended the books list a little bit too. Any comments welcome. Thowa 11:05, 26 Feb 2005 (UTC)

I'm very happy with the changes. Much gentler introduction. Deco 08:21, 2 Mar 2005 (UTC)

Removed clarification on Moore/Mealy usage[edit]

I removed the changes done by because the information was misleading. He wrote:

Moore automata are a subset of Mealy automata, so unless the Moore restriction is useful (e.g. to simplify implementation or analysis or to control latency) mixed models are typically used, with outputs depending on inputs for a subset of the states, as appropriate for the modeling task.

The two most relevant things are incorrect:

1. Moore is not a subset of Mealy. Although it looks like this (Moore = output depends on state, Mealy = output depends on state and input), both models are independent. Moore means that an action is executed only when changing state (i.e. we use only entry actions). Mealy is independent of state changes, because the actions are executed if a certain input is given and the FSM is in a certain state (i.e. we use only input actions). Have a look at the examples provided in the article: in figure 4 the FSM receives the input "close the door", so it starts the motor (input action), but it does not perform a transition to "Closed".

2. Mixed models are most probably typically used in software, but in hardware mainly the Moore model is used. So to keep this article generic, we had better state "mixed models are often used", nothing more.

Thowa 20:52, 15 Mar 2005 (UTC)

It presently says, for Mealy: "Output depends on input and state, i.e. the FSM uses only input actions."


Thowa: You are right, I did not notice this. It is indeed confusing. Actually I shall flip the arguments: "The FSM uses only input actions, i.e. the output depends on input and state".

It wasn't until I read this talk page that I understood how Mealy worked. That's because you wrote "Mealy is independent of state changes, because the actions are executed if a certain input is given and the FSM is in a certain state."

That made things much clearer.

The reason that what the article says is unclear, is because: Everything that happens in any FSM "depends on input and state."

Thowa: Actions in a Moore FSM depend only on the state, because we don't care what the previous state was, i.e. it could actually have been any input which triggered the FSM to perform one of the several possible transitions to the current state. A good exercise is to design a Moore FSM (i.e. an FSM which uses only entry actions). You will see that you will think purely "situation oriented" (= state oriented) without the need to analyse the required input: "if (state A) then action X". Try the same with a Mealy design and you will see that your thinking is much more complex: "if (state A && input M) then action X".
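The two design rules quoted above can be written as lookup tables; the state, input, and action names here are placeholders, chosen to match the quoted rules.

```python
# Moore: entry actions keyed on the state alone.
ENTRY_ACTIONS = {"A": "X", "B": "Y"}

# Mealy: input actions keyed on the (state, input) pair.
INPUT_ACTIONS = {("A", "M"): "X", ("B", "N"): "Y"}

def moore_action(state):
    # "if (state A) then action X"
    return ENTRY_ACTIONS.get(state)

def mealy_action(state, inp):
    # "if (state A && input M) then action X"
    return INPUT_ACTIONS.get((state, inp))
```

The extra key in the Mealy table is exactly the extra complexity the comment describes: the designer must enumerate input conditions per state, not just states.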

Perhaps what is needed is a clarification of "Input Action"?

Thowa: FSM is a mathematical model. Several aspects might need some time and exercise to be understood, independent of what we write to explain it. But if you have a good proposal, let's discuss it here.

To be honest, I'm not sure what "input action" is either; "Execute the action dependent on input conditions." That can't mean that the current state is ignored, can it?

Thowa: This sentence was meant "in the current state". Somebody added this already to the definition to make it clear. However, there are two cases where the current state can be ignored:
1. in case the input action is defined for each state of the FSM
2. in case of a combinatorial FSM, as this FSM has only one state.

Perhaps it should say: "execute the action dependent on input conditions and immediate state, independent of transitions." It's not as short, but it's a lot clearer.

Thowa: "independent of transitions" might be confusing for other reasons, but "immediate/present state" is helpful indeed.

I feel that there should be a separate article on Moore and Mealy, to explain in more detail about how they work, and how they differ from the others.

LionKimbro 20:22, 14 May 2005 (UTC)

Thowa: Click on the chapter title to go to the separate pages. But don't overestimate these two models. It is rather a purely academic problem. In practice you will use mixed models, based on entry actions. Input actions make the design much more complex and are useful only in certain obvious situations like, for instance, log/alarm generation or in a combinatorial FSM (on top of a hierarchical system of FSMs).

Thanks for the responses!

I already made my change though: I made it say: "execute the action dependent on present state and input conditions"

LionKimbro 20:01, 18 May 2005 (UTC)

Removed Moore/Mealy statement[edit]

I removed the following contribution from GoodStuff:

"Simply stated, the difference is that a Moore machine generates an output for each state, where a Mealy machine produces an output for each transition."

There are two problems with it:

1. Mealy machine: the conditions which cause the input actions don't necessarily cause a transition (it is possible but not obligatory). So a Mealy FSM can perform actions without changing the state. Besides this, we can also use transition actions, which define actions performed during a state transition, but those are not known in a Mealy machine. Those actions can be replaced by input actions whose conditions would also cause a transition, but this "workaround" does not work in all cases.

2. A Moore machine uses only entry actions, but this does not mean that an entry action must be present in each state.

Based on several comments and questions in the past, I found that the difference between Moore and Mealy as well as the sense and advantages of using a certain model are pretty confusing for many readers of the FSM article. Therefore I added a link to an external page, where a technical note about this topic accompanied by an executable example is presented. Maybe this is more helpful?

Thowa 19:20, 25 Apr 2005 (UTC)

Narrow focus?[edit]

This article seems to have a quite narrow focus in viewing finite state automata as machines with finite control state that react to events. Certainly for some applications that view is sufficient, but finite automata are one of the most fundamental structures in computer science, and have a huge array of applications (usually with some generalizations: e.g. Markov chains can be viewed as stochastic finite automata, different logics can be interpreted using finite state automata over finite and/or infinite strings, properties in abstract algebra and term rewriting systems can be solved using finite tree automata, etc.). At the same time, it is clear that a number of people have spent quite a bit of time on explaining finite automata in terms of these reactive state machines, and the explanation in those terms is quite good.

Perhaps there should be two articles, one on finite automata, the other on finite state machines. The finite automata article presents a high-level overview of finite automata, how they are used, and why they are important. The finite state machine article gives a concrete model of a finite automaton, specifically the one presented in this article. Before attempting to make such a large change, I thought I'd solicit feedback.

To be honest it is not clear to me what you plan to change, so it's hard to discuss. But I disagree with a few statements you made. The current article gives a very generic description of what an FSM is. It is not focused on any specific type of FSM. Of course your personal view depends on the domain you are coming from. For instance, typically in academic domains the FSM is known only (or mostly) as a model for designing compilers, parsing strings, speech recognition etc. However, the idea of the FSM was not born in this domain but much earlier, and it is mostly used in industry for application modelling (h/w and s/w). Besides this, an article in Wikipedia shall be written in a way that people can understand. It's not written for students who shall learn all the mathematical details but for "neutral" people who shall get the idea of how powerful and easy this great concept is. Last but not least, finite state machine and finite automaton are synonyms, so we cannot create two different pages for them. What I can suggest for you is to create an additional page called "the role of FSM in computer science" (maybe you find a shorter title) and link from the FSM article to the new one. It is important that people learn the generic model of the FSM: based on my experience, most students, after leaving university, don't know what to use an FSM for (besides of course parsing strings etc. - but how many engineers in the world are needed for this kind of job?). And this is a huge loss, as this technology can be applied in so many domains. Thowa 02:15, 2 October 2005 (UTC)
I think probably the thing for me to do would be to find some time to update the Automata Theory entry. It isn't nearly as well written, and I think overall you've done an excellent job at showing a key use of finite automata. Incidentally, in my experience academics are much more focused these days on using finite automata for modeling systems and logic for verification purposes. This itself isn't really new, though, as specific uses date back to Turing and von Neumann, and the general ideas come from Leibniz. You can probably date some of the notions of automata back to Aristotle. 04:30, 7 October 2005

Flow charts?[edit]

Are automata diagrams an evolution of flow charts? How are the two related? Should we mention this? I've never heard them talked in relation to each other, so I wonder how many people have even taken that view. Is it a bad one to take? --Rocifier 10:11, 14 November 2005 (UTC)

There are several methods to design control systems (FSM, flowcharts, statecharts etc.). So somehow all of them are related. For this the section "See also" can be used. It is hard to say which model was the first one and whether there was an evolution. Anyway, the FSM is already very old and most probably all other models came later. If somebody really knows (source?) then it would be worth mentioning in the article. Just make sure it is true and not speculation.
The FSM and the flowchart are very different, although when learning them people often don't see the differences so clearly. The article "A flowchart is not a state machine" explains the differences very nicely. Thowa 13:04, 20 November 2005 (UTC)


I've heard them called finite state transition networks, is this correct?

I would like to modify Fig. 2[edit]

I would like to modify Fig. 2 (Image:Fsm parsing word nice.jpg) in the article to include a transition from state 7 to state 6 and one from state 6 to itself. Any objections? Bergsten 14:00, 18 November 2005 (UTC)

Some objections: The additional transitions complicate the example, but they are not necessary in many of the use cases. In practice such an FSM never works alone. It will signal the situation to an upper application (i.e. another, master FSM) through its states. So if the master FSM sees the parser in state "success" or "error", it might for instance stop delivering input. It can "reset" the parser FSM or redirect the stream to another FSM. On the other side, the parser FSM can just ignore the input, as no transitions are possible any more. Several solutions are possible, which makes this example correct. Any additional transitions require additional explanations, but a deep analysis of a parser FSM was not the purpose of the article. So your idea is not incorrect, it is just not helpful in this context. Thowa 12:44, 20 November 2005 (UTC)
I disagree, I don't think the additional transitions would require any additional explanations but rather save the explanation you give here. I also think this would simplify the example and not make it more complicated in that the outside interface of the FSM becomes as simple as possible. However if I'm the only one supporting this idea I'll leave it alone. Bergsten 16:16, 20 November 2005 (UTC)
Seeing your comments I assume you are pretty familiar with state machines. You would like to build a perfect system, not just the "sunny day" scenario. If you look into the other FSMs in this article, you will have to agree that all of them are simplified and represent only the basic functionality. Let's take the door (Moore): there is no starting state like "Init". There should (probably) be a transition from "Init" to any other state, dependent on the sensor input. Besides this, an error state is probably required in case the sensor is defective and we don't know what to do. I assume a perfect door FSM would have up to 10 states in order to cover all possible real situations. Such an example would be useful for a book where you show the evolution of such a system. But a short article to explain what an FSM is shall better focus only on the basics, just the principles, the "sunny day" scenario. This makes people think. At least this is the experience I have had. Thowa 21:22, 22 November 2005 (UTC)
My intention was not to optimize for a real-life situation, but rather to see to it that the FSM described by the diagram was well defined in the sense that every element in the state transition table had a value. However, as you point out, to have the door example both well defined and still meaningful would require new states and a whole bunch of new transitions. This would clutter up the example and be somewhat cumbersome to grasp at first sight. So in this case I agree with you that it's best to keep it as it is. But I still think that in understanding the "nice" example, the win of having it well defined is greater than the loss of adding the two extra transitions. As I said, I realize that this is my point of view and won't try to push it through on my own. Also I think it's a good example the way it is now, but feel that it would benefit slightly from being well defined. Bergsten 17:45, 23 November 2005 (UTC)
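The proposal under discussion (success routes to error on any further input, and error loops on itself, so every table cell has a value) can be sketched as follows. The state numbering is illustrative, not necessarily that of Fig. 2.

```python
# Acceptor for exactly the word "nice", made total: every unlisted
# (state, symbol) pair falls through to the error sink.
ACCEPT, ERROR = "success", "error"
delta = {(0, 'n'): 1, (1, 'i'): 2, (2, 'c'): 3, (3, 'e'): ACCEPT}

def run(word):
    state = 0
    for ch in word:
        # ACCEPT has no listed outgoing transitions, so extra input sends
        # it to ERROR; ERROR likewise loops on itself.
        state = delta.get((state, ch), ERROR)
    return state == ACCEPT
```

With this scheme the machine is well defined without the caller needing to know when to stop feeding input, which is the simplification of the outside interface argued for above.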

Merge proposal[edit]

(the anonymous editor User: added a proposal to merge with deterministic finite state machine to the article)

  • Oppose. This article is used in other pages and no presumption is made that a deterministic FSM is used on those other pages. --Ancheta Wis 02:11, 3 December 2005 (UTC)
  • Oppose, but we should be more careful in this article not to assume we're dealing with DFAs. In particular, the formal definition of the transition function assumes determinism, all the examples are deterministic, and we neglect the possibility of epsilon (or lambda) transitions. Deterministic machines are easiest to understand, which is a legitimate reason for using them in introductory examples, but we also don't want to create misimpressions.
On an unrelated note, would the original author of the pictures please give me access to the originals so I can make them .png's? JPEGs are entirely inappropriate for diagrams - there are clearly visible artifacts. Deco 02:28, 3 December 2005 (UTC)
To generate the pictures I unfortunately used not a nice graphical tool but an FSM design tool. The advantage was that I was able to run all the FSMs in order to check them for correctness. The disadvantage is that the only two export formats are jpg and wmf. Only the introduction picture was extended using Photoshop. So if you want to make the pictures look nicer, you will have to work with the jpg sources as stored in Wikipedia. I can also deliver wmf if that is better. Thowa 09:46, 18 December 2005 (UTC)
  • Oppose. Since the DFA is a special case of the FSM, it is strongly requested to keep them separate in terms of explanation. However, I see the definition of the DFA is already included in this article. --Muzammil 12:24, 14 December 2005 (UTC)
  • I agree with all the opposes. The DFA/NDFA discussion shall have its own article, and it shall be decoupled from the common FSM definition. The reason why the article assumes we are dealing with DFAs is that this represents probably 99% of use cases in practice. It does not make sense to complicate things by introducing every possible exception. Anyway, both types (DFA and NDFA) are mentioned and are linked to the proper articles. I would rather suggest changing the DFA/NDFA articles to point out the meaning of this distinction instead of explaining the FSM again. This being said, I'm going to remove the merge proposal, as for four weeks nobody could find any pro argument. Thowa 09:46, 18 December 2005 (UTC)
Thowa - most regular expressions are implemented in a near-equivalent of NFAs, up to and including the possibility of taking a "wrong path" on a given evaluation. Michael Ralston 03:17, 28 February 2006 (UTC)

Another merge proposal[edit]

I suggest merging accept state into this article. Accepting states are a property of state machines, and I don't think that the accept state page will be able to be much more that a definition. MagiMaster 09:19, 10 July 2006 (UTC)

I don't like this merge. The article slowly becomes more and more complex. Automata theory has a lot of such details. Do you want to put all of them here? It is better to have articles people can understand even if they did not study computer science. If somebody is interested in a detail like "what is an accept state", he can follow the links. Please undo. Thowa 20:06, 17 July 2006 (UTC)

Optimization section[edit]

The section on optimization could really use some further explanation! It is practically useless as it stands now. Meekohi 01:52, 30 August 2006 (UTC)

Optimization algorithms for finite state machines are complex and interesting. They should probably have their own pages. Markminn07 (talk) 04:46, 29 August 2008 (UTC)

There's a separate article (albeit a small one) for optimization. However, afaict neither article says anything about optimization of FSTs, only about FSAs. FSTs are of course much more difficult to optimize, and there is no unique optimization. It would be nice to say so (i.e. it would be nice for someone who knows what they're talking about--not me--to say something :-)). Mcswell (talk) 20:42, 4 April 2011 (UTC)

finite state machines, what are they?[edit]

The following originally appeared on the talk page of Turing machine, but I've copied it here, closer to where it belongs. The notions of "discrete", "mappable to integers", and "finite" are important to computational models such as Turing machines and somehow need to be emphasized more in this FSM article. wvbaileyWvbailey 14:18, 6 September 2006 (UTC)

Minsky (1967) spends about 20 pages giving us a definition of a "finite-state machine". It includes the following (italics in original):

"...a class of machines known as finite-state machines or finite automata. These are the machines which proceed in clearly separate 'discrete' steps from one to another of a finite number of configurations or states." (p. 11)
"2.0.1 Moments of Time. The theory of finite-state machines deals with time in a rather peculiar way. The behavior of a machine is described as a simple, linear sequence of events in time. These events occur only at discrete "moments" -- between which nothing happens. One may imagine these moments as occurring regularly like the ticking of a clock, and we identify the moments with integers -- the variable for time, t, takes on only the values 0, 1, 2, ...
"the system [must meet] the conditions we will impose -- that by the end of each interval the machine does, in effect, "settle down" into one of a finite number of describable states" (p. 12)"
"One restriction we will introduce here is this -- the output state at the moment does not depend on the input state at that same instant. This is to say that we won't admit the possibility of signals traversing a unit instantly" (Minsky (1967) p. 14).

In this context the word "state" is derived as follows:

state n. from Latin status, pp. of stare to stand. more at STAND. a condition or stage in the physical being of something.
stand from Latin stare from Greek histanai to cause to stand. usage #1: to maintain one's position (~ firm). usage #2: (1592) an act of stopping or staying in one place.

There is another usage of "state" -- in mechanical dynamics theory. "State" here is the instantaneous snap-shot of an object relative to a frame of reference, but giving both its position (x, y, z) and velocity (dx/dt, dy/dt, dz/dt). The following is from pages 39-40 of Robert H. Cannon Jr, Dynamics of Physical Systems, McGraw-Hill Book Company, 1967.

'The state of a system. What we mean by the state of a mechanical system may be introduced by analogy with configuration: The state of a system is often defined by an independent set of position coordinates, plus their derivatives. Thus, state implies configuration plus velocity. Configuration tells only where the system is, but state tells both where it is and how fast (and in what direction) it is going; and as expressed by professor L. Zadeh, 'the state of a system at a given time, plus its differential equations of motion and inputs, will determine its configuration for all future time....
Although state comes from the Latin status, a standing or position, a more helpful image for our purposes is that of a stopping or freezing of action, as when a moving-picture film is stopped on one frame, but the instantaneous speed and direction of motion are revealed by blurring of the picture."

Thus conventional, canonical state-machine analysis doesn't allow for "incoming" actions produced by a state. Input causes a change to a new state. "Outgoing" action, maybe -- at least in terms of "direction of change" shown by the arrow. Mealy machines represent a problem because [as noted above, repeated here]:

"One restriction we will introduce here is this -- the output state at the moment does not depend on the input state at that same instant. This is to say that we won't admit the possibility of signals traversing a unit instantly" (Minsky (1967) p. 14).

This notion of "finite time to traverse" comes up in the definition of algorithm -- in particular the work of Gurevich quoting Gandy.

And there really is no need for 'Mealy machines' anyway, because any 'Mealy' can be converted into a 'Moore' (cf. Booth (1967), p. 960). There he discusses this problem, that a 'Mealy' produces no output unless there's an input. He also notes that "a short time is allowed after the input symbol is applied before the output is observed", etc.

This is bizarre. No input, e.g. (0,0,0,0), produces no output (0,0,0,0), but the machine is in a particular state, call it "B". As soon as an input is applied, i.e. (0,1,0,0), the machine goes to a new state, call it "D". During its brief time going from B to D it produces an output (0,0,1,1). This appears as a pulse during the transition. If D is to sustain this pulse it must have a recirculating path back to itself, i.e. (0,1,0,0) produces a recirculation back to D and maintains (0,0,1,1). Clearly, this is not trivial stuff.
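The Mealy-to-Moore conversion discussed above can be sketched mechanically: the output that appears as a pulse on a transition becomes a stable property of the target state. Below is a hedged illustration in Python, using a made-up two-state B/D machine in the spirit of the example above (not taken from Booth); it assumes every state is entered with a single consistent output, so no state splitting is needed.

```python
# Mealy machine: (state, input) -> (next state, output-on-transition).
# This tiny machine is invented for illustration.
mealy = {
    ("B", "0"): ("B", "0"),
    ("B", "1"): ("D", "1"),   # the output appears only on the B -> D edge
    ("D", "0"): ("B", "0"),
    ("D", "1"): ("D", "1"),   # recirculating path that sustains the output
}

def mealy_to_moore(mealy):
    # Moore states are (original state, output) pairs: the pulse on the
    # edge becomes a stable output attached to the target state.
    # In general a state entered with different outputs must be split into
    # one copy per output; this machine happens to need no splitting. We
    # also assume every state appears as a target at least once.
    entry_output = {dst: out for (_, _), (dst, out) in mealy.items()}
    moore = {}
    for (src, sym), (dst, out) in mealy.items():
        # transition to the next Moore state; its output is part of the state
        moore[((src, entry_output[src]), sym)] = (dst, out)
    return moore

moore = mealy_to_moore(mealy)
```

Reading the result: the Moore state ("D", "1") holds the output "1" steadily for as long as the recirculating input keeps the machine there, which is exactly the "atomize the pulse into a state" refinement described above.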

There appears to be a kind of pseudo-state used by software folks -- but you can always atomize this so-called "state of many colors" into true states each of one color. I've questioned this on a talk page. Over the years I've both written and debugged thousands of lines of pseudo-state-machine software -- thousands of pieces of equipment use the code-- and I have no idea what the fuck the article is talking about. wvbaileyWvbailey 14:29, 17 August 2006 (UTC)

Dubious assertion of 4 different types of action[edit]

There is a problem with the opening line of the article. Basically it states that actions are a fundamental part of automata, and then it proceeds with entry/exit/input/transition actions. This isn't right. Actually it is completely wrong. Automata with actions, like Moore or Mealy machines or workflows in general, are higher-level abstractions of state automata. Actions are a cool concept, but they are a convenience within automata; they are not a fundamental property of state machines. The fundamental property is the transition table. Stating that automata have actions is plain wrong. This article also fails to mention that the fundamental automaton is the DFA. It doesn't state that the fundamental property of an automaton is its state transition table (that is, the states and the graph they make). All automata can be presented as DFAs, and if there is some *automaton* which can't be transformed to a DFA, then it isn't an automaton. That's why they are called state machines in the first place. This article doesn't state that, and it doesn't make a proper distinction between the fundamental parts of automata and the conveniences we use to make them easier to handle.

A finite state machine is an atomized model of a type of "computation": with discrete, finite and "static-between-moves" behaviors (see the section above for definitions of "state"). The canonical FSM model is nothing more than a single lookup table with a memory feeding back to the lookup table, inputs into the lookup table, and outputs from the lookup table. Whether the memory is clocked or not depends on what the designer is up to. Clocking helps eliminate evil behaviors such as "races" etc. Every single type of real and model FSM (e.g. decimal counters) can be redrawn to "look like" the canonical model.
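The "canonical model" described above -- a single lookup table, with the memory (state register) fed back into it -- can be hedged into a few lines of Python. The turnstile-style states, inputs and outputs here are invented for illustration; they come from none of the cited texts.

```python
# Transition table: (current state, input) -> (next state, output).
# One lookup table; nothing else.
TABLE = {
    ("Locked",   "coin"): ("Unlocked", "unlock"),
    ("Locked",   "push"): ("Locked",   "none"),
    ("Unlocked", "push"): ("Locked",   "lock"),
    ("Unlocked", "coin"): ("Unlocked", "none"),
}

def run(inputs, state="Locked"):
    """At each discrete 'moment', feed the stored state back into the
    lookup table together with the input, per the canonical model."""
    outputs = []
    for symbol in inputs:
        state, out = TABLE[(state, symbol)]
        outputs.append(out)
    return state, outputs

final, outs = run(["coin", "push", "push"])
# final is "Locked"; outs is ["unlock", "lock", "none"]
```

Whether the "clock" here is the loop iteration or real hardware clocking is exactly the designer's choice mentioned above; the table itself is the whole machine.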

Such an atomized model will not have all these different kinds of input and output behaviors.

A particular state can have exactly one "behavior" (output), and one behavior only. This is virtually the definition of "the state". Otherwise it can be further atomized into more states (QED). Even a "behavior on exit/transition" (e.g. a pulse) can be further atomized into two states and the model refined. A transition occurs 'very fast' and if it doesn't and a glitch occurs (due to a delay) then the delay must become part of the FSM model and reflect an added state (or states).

Someone needs to reference the source(s) that claim all these types of input and output actions. Otherwise the intro will have to be reworked. I have not seen all these actions asserted in any classical engineering or theoretical computer-science texts. wvbaileyWvbailey 14:48, 6 September 2006 (UTC)

I have to give a short answer to all your remarks and changes to the article:
1. Actions: if you understand what FSM Output is, did you ever think what does it practically mean? Actions is probably the most important and powerful concept in the FSM practise. You asked for sources: take i.e. the books Carroll and/or Wagner (reference 5 and 1). Please note that the FSM topic is extremely wide. Within a book, usually only some aspects of the topic can be discussed and it might be more or less close to a common definition/understanding. The book from Minski you are obviously familiar with, is focused on a very specific topic and can hardly be taken to explain FSM within an encyclopaedia.
You are quite wrong. As noted below, the same definition appears in every single book starting from Mcclusky 1965 up to Sipser 2006.
Not true: the definition is not the same in every single book. Take e.g. the references I provided. Or compare Carroll (1989) and Huffman (1954). Thowa 20:22, 13 September 2006 (UTC)
Take a look at the references that I provided. Or read the ones above that I so laboriously typed into this dialog (hint, hint). See another quotation below. See -- the point is, not all the references agree. So you have to deal with the disagreement in the article. Or we can defer this to the article state with a comment that "there is no precise definition of 'state' in the context of 'state machine'; see State". That is the nature of the beast -- there is no accepted agreement. I know of at least 5 definitions. Every time I look I find another one. wvbaileyWvbailey 02:05, 14 September 2006 (UTC)

To write an article within an encyclopaedia you should compare multiple books and have experience in several different use cases, not only hardware design (based on your remarks, I assume you are a "hardware man").

I am not a hardware man. I have programmed stuff starting with the 8085 in the mid-1970's. Actually, all of the more recent work I've done is in the context of theoretical computer science. The latest and greatest text I have is Sipser (2006), and nowhere does he describe anything like what is being described here. I have about 10 texts, engineering-based (McCluskey 1965, Hill and Peterson 1974, Booth 1967) and theoretical computer science (Minsky 1967, Hopcroft and Ullman 1979 & 1994 various editions, Sipser 2006), and none of them describe anything like this. There is something very wrong with this article and I have yet to determine what is going on, except that these texts do not reflect any of it. wvbaileyWvbailey 17:46, 7 September 2006 (UTC)
Probably you would like to have the FSM theory explained here including all possible aspects. Not very useful within an encyclopedia, in my opinion. The article describes the essence. It is clear and pretty easy to understand. One can probably imagine the practical meaning of FSM, which is not only theoretical computer science. Not so easy to pick out the essential points and then fight against some experts who would like to see the "ultimate truth". It is like a fight against an ideology. Thowa 20:22, 13 September 2006 (UTC)
Don't totally disagree -- a summary article with branches to the good stuff is fine. But you cannot mislead. Something as old as "state machines" you have to write in a quasi-historical perspective. You write the article as follows: "As described in the literature, there are at least x types of state machine [I've identified 5, hence the references]. Historically, the first descriptions were of the behavior of physical machines (xyz -- examples, such as machines built with relays -- relay logic (where early books start) and Turing machines, including Turing's own definition of "state") and then continued in the 1940's and 1950's due to the work of Turing, Shannon [or whoever]. By the early 1950's research bifurcated into two general directions (I) Hardware design, (II) Theoretical computer science.
(I) By the mid-1950's the research of Shannon, Huffman, Moore and Mealy (or whoever) resulted in the text-book-ready methods of logic minimization and synthesis techniques of Quine (19xx) and his student McCluskey (19xx), cf McCluskey (19xx), Booth (19xx), Peterson (19xx) et al. These were oriented toward the design of true hardware, as in gut-level discrete and then integrated-circuit designs -- there were no on-chip computers in the 1950's-1960's; not until the 1970's were they available. [I got my start building flip-flops and frequency dividers from discrete PNP transistors (not NPN); there weren't any NPN then -- PNP were built from germanium, not silicon] -- counters, flip-flops, shift-registers, etc. (II) In the 1950's and 1960's theoretical computer science saw [at least] three uses and explorations of FSM's in: (IIa) Automata theory, (IIb) "Hard" problems identified by researchers (examples: Post's problem of "tag", Hilbert's 10th problem re diophantine equations), (IIc) Definition of the notion of "algorithm".
The abstract computer science interest in "finite state machines" derived from (IIa) the design of compilers -- methods for processing "languages" as strings of symbols. This also dovetailed into the recognition of the processing of language in proof and algorithm. The second (IIb) was driven by the need to precisely define the notion of "computable", of "effective procedure", of "undecidable", of "intractable". The third (IIc) was driven by questions about what an "algorithm" really is, per the Church-Turing thesis. And (IIb) and (IIc) interacted in considerations of "intractability" -- of machines unable to compute easy-to-describe problems in a "reasonable" time and/or reasonable space (NP-complete, etc).
In the mid-1980's ... and here I trail off, because my hardware researches end in the mid-1980's. As I said, I used state-machine ideas in software from day one (starting with the 8085, 1978 or so: design of a parser using a state table run on an 8085). But as to the theory of "abstract state machines", I have no expertise. So this needs to be written by someone with that expertise.
That's my view of finite state machines, that ... and the "canonical model", which incidentally I ran into in Gurevich. And Markov chains which I fiddled with once out of curiosity. wvbaileyWvbailey 02:05, 14 September 2006 (UTC)
2. FSM logic: I reverted your changes. Figure 5 is a common definition of FSM. The box “output conditions” is not an AND gate or so. It is a generic box which can implement a function that ignores the input and watches only the state. It is just a generic diagram. By the way, there is no need in this section to explain what a state is. It is explained at the beginning of the article: “A state stores information about the past, i.e. it reflects the input changes from the system start to the present moment”.
"A state stores information" is not a good definition, as I have noted above, in gory detail. In fact it is a very poor definition. One might say: "A state stores energy". But "information" is in the eye of the beholder. You will need to find a source for your definition -- quote it in the text as in (John Doe (1983) p. 42), or I will remove it and substitute Minsky's definition. What I think is a better approach will be to move all of this to state and there define it accurately and deeply. wvbaileyWvbailey 17:46, 7 September 2006 (UTC)
As a state does exactly this, i.e. it is a kind of compressed information about the past, this definition is the best one you can give here. You can of course write an article about it explaining all aspects, but as said: an extra article. Within an FSM article it is perfect. References? OK, what about e.g. ref 1, chapter 4. Thowa 20:22, 13 September 2006 (UTC)
When you introduce terms like “short term memory” or “delay”, you should explain them.
Strange that you say this, when above you define "A state stores information about the past". Isn't this exactly what a memory is/does? And a delay is/does? wvbaileyWvbailey 17:46, 7 September 2006 (UTC)
"A state stores information about the past" is hopefully clear to an uneducated reader. A university professor in computer science will not try to read this article to learn something. “Short term memory” or “delay” can mean a lot in this context. Thowa 20:22, 13 September 2006 (UTC)
Actually a better example of the use of the word "state", at least in the English language, comes from the Latin status, i.e. stationary, motionless. If a person records a high-speed "cinematographic" image of the operation of a "machine" and observes that only a few configurations of the machine occur, over and over, then these are the "finite states". "Memory" is an explanation of why a finite number of "states" might occur. Or not: an oscillating pendulum's sine wave has an infinite number of "states"; it becomes motionless at the tip of its swing -- its velocity is zero. But it is an analog, not a digital, entity. Configuration is "the thing itself". One of the sources, Kozen (1997), says:
"Intuitively, a state of a system is an instantaneous description of that system, a snapshot of reality frozen in time. A state gives all relevant information necessary to determine how the system can evolve from that point on. Transitions are changes of state; they can happen spontaneously or in response to external inputs.
"We assume that state transitions are instantaneous. This is a mathematical abstraction. In reality, transitions usually take time. Clock cycles in digital computers enforce this abstraction and allow us to treat computers as digital instead of analog devices" (p. 14: Kozen (1997), Automata and Computability, Springer-Verlag).
Above is the second source I have quoted with regards to state. In fairness: I have seen your definition of "state" in the literature, plus at least two more besides which are unlike either of these (!). But I can match your "memory" definitions one-on-one with other diverse quotations. The point is: the earlier (especially the engineering texts) seem to use the "memory" notion but the later texts seem to be moving more toward the "snapshot" definition. Even the Kozen quote has it (slightly) wrong: in the physical world, "state" is represented by both position and velocity (cf the quote from Cannon in the Minsky/Cannon quotes above). Both position and velocity are needed at an instant to compute "the next place" the configuration moves to. "State" implies "state-tionary" -- static, motionless... ferchrisake this is the definition in the English language! (I can't stand this... this drives me nuts. Look up the word in an American/English dictionary).

It is an encyclopaedia article and you cannot expect that the reader is familiar with all this stuff. People, who know FSM read books to learn more, not short explanation within an encyclopaedia. As mentioned above, those terms don’t fit here anyway. Thowa 07:45, 7 September 2006 (UTC)

As I said, this article is very flawed. It is not accurate. It is not correct. It is not cited well, so we (experienced editors like myself and casual readers) can't see where the ideas are coming from. I believe what is going on is that there is the classical version of "finite state machine", which is the theoretical computer science and engineering version -- it evolved from Turing's definition and the work of Shannon at Bell Laboratories (I am in the process of trying to confirm this last point) -- and another version having to do with a programming methodology, which is entirely different. By the way: I've used a "state diagram" methodology for years in specifying the programs for microcontrollers used in plasma-arc cutting machinery, and a doll for which I have a patent (if you read the patent, it's full of "synthetic state diagrams") -- what I would call "synthetic" state diagrams do in fact have all sorts of actions associated with them. But these need a different name because they are not the same as the canonical, classical finite state machine. What we need to do is figure out a way to split these out so this is clear to the reader. wvbaileyWvbailey 17:46, 7 September 2006 (UTC)
The definition in the article is simplified but 100% correct.
WP:Horselegs. Study the history of "finite state machines" and you will discover that they derive from true-to-life steel and copper-wire machines. Not software. In those days there was no "software". The notions developed out of Shannon's work with switching systems in telephony and Turing's work on a machine to do computations. A correct presentation would show this. This article starts (I guess; it makes no sense) at the totally-"abstract state machine" level and sort of disintegrates as it wanders into the place where it all started -- hardware. wvbaileyWvbailey 02:05, 14 September 2006 (UTC)

I have also been using FSM technology for years, mainly for SW design, but within my team there is also a lot of HW design where this technology is used every day. So this is a bad argument, as is citing one's own published achievements - I could also provide a long list (publications, books, patents). As said above, the intention of this article is different from what you expect. Thowa 20:22, 13 September 2006 (UTC)

Here again is another example of wicked wikipedia: Then maybe you should write a verifiable biography instead of providing the rest of us with dead-ended nothingness. Write a verifiable bio and prove your credentials. wvbaileyWvbailey 02:05, 14 September 2006 (UTC)
I will look at the references you cite and see if I can figure out where the problem is. I'm pretty sure it's where I think it is. wvbaileyWvbailey 17:46, 7 September 2006 (UTC)
At the end of the article there are a few books; you should be able to find enough information. I cannot imagine you would like to add a reference to each sentence within the article. If so, this is the wrong place for you to be trying to spread your knowledge.
Currently you are adding a lot of new references. There are some classics missing:
  • Huffman, D. A. (1954) "The synthesis of sequential switching circuits"
  • Mealy, G. H. (1955) "A method of synthesizing sequential circuits"
  • Moore, E. F. (1956) "Gedanken-experiments on sequential circuits"
  • Moore, E. F. (1964) "Sequential Machines: Selected Papers"
and some newer publications:
  • Comer, D. J. (1995) "Digital logic and state machine design"
  • Wagner F. (2004) "Misunderstandings about state machines"
  • Lee, S. (2005) "Advanced digital logic design using VHDL, state machines and synthesis for FPGA"
It will be a nice link collection in the article.
I agree. Cite them. These are excellent background/historical references. They are hard to find. I have not seen them; over the years I've seen the references to Moore, Mealy and Huffman but have not passed my eyes over the articles themselves. I consider this an example of a constructive effort to make the article better. "Finite state machine" is an important article, referenced by a lot of other articles. It should be as good as it can be. wvbaileyWvbailey 02:05, 14 September 2006 (UTC)
Assuming you read all these books, it is hard to understand that you do not understand the output/actions concept, or why the FSM logic was not clear to you. Therefore I still believe you are working in a special domain and you don't have a common overview of the entire FSM idea. And therefore, in my opinion, you had better not perform too many changes in the article. Thowa 20:22, 13 September 2006 (UTC)
The only domain I do not understand is the "abstract state machine" domain, and I've included a source (Gurevich). I've read two or three of his papers, I've corresponded with him, but I don't understand what he's up to. I cannot comment further on the "abstract state machine" topic. All the rest I've been reading since the 1960's. I would guess that I am older than you, have been there longer than you, and have done more state machines than you. I've done them in every construction imaginable -- from RTL logic (bet you've never heard of it) to C++. I'm wasting my breath here... wvbaileyWvbailey 02:05, 14 September 2006 (UTC)
Maybe you could extend the hardware section instead? Thowa 07:51, 7 September 2006 (UTC)

What I suggest is that we work together in a constructive way, rather than against each other. Because I am just as capable of reverting as you are. wvbaileyWvbailey 17:46, 7 September 2006 (UTC)

Discussions like the last one induce me to make the following comment:

Wikipedia should explain some basic concepts, and direct by links to special cases or exotic derivations.

I don't disagree. What we should NOT do is mislead the reader.wvbaileyWvbailey 19:32, 7 September 2006 (UTC)

The origin of the finite state machine comes from automata theory, well known in some circles for over half a century, and the concept is presented in that way in the Wikipedia article.

Wrong. It derives from consideration of telephone switching circuits. And the earliest use of it in any form I've seen is in Turing's paper (1936), referencing "states of mind" and "states of a computation". In 1936 Shannon was also busy with Boolean expressions and (telephone) switching circuits. But I haven't seen his paper. Turing apparently built a binary multiplier from scratch after (somehow) encountering some of his work. wvbaileyWvbailey 02:05, 14 September 2006 (UTC)
Wrong. It is not. FSM states don't have "entry outputs". Mealy machines have "exit outputs." Moore machines have outputs associated with "the present state". So is there another kind of "finite state machine?" Where is this entry-output crap coming from? Give me a source -- page number, book, quotation from book, please! wvbaileyWvbailey 19:32, 7 September 2006 (UTC)
I am typing this from the Dartmouth library and I cannot find either of the two references quoted by the correspondent above. Will someone please give me the authors' first names, references 1 and 5: "Wagner F." and "Carroll, J. and Long, D". This is the sort of frustration that plagues wikipedia -- why no one inside academia will use them. I use my own kids as examples -- a post-doc and senior in college. Both think wikipedia sucks and refuse to use it; they think I'm nuts spending time working on it. People aren't even able to write a decent citation nor give a decent reference. Please consult the wiki guidelines on footnoting, references, etc.wvbaileyWvbailey
I'm too busy at the moment to answer, but at least this shall help for now: search for the title or ISBN, or just go to the proposed examples directly 1, 2. Thowa 19:56, 7 September 2006 (UTC)

The basic definitions should not be expanded or completed by special definitions found in some books which deal with a specific variety or application of the f.s.m. A good example is the concept of a state. The strict automata-theory definition is: "A state stores information about the past, i.e. it reflects the input changes from the system start to the present moment".

Who says? Quote me a source I can find. Quote it chapter and verse, page and author, as I said above. As I noted, Dartmouth College has neither of the books the correspondent cited. This is not so good. wvbaileyWvbailey 19:32, 7 September 2006 (UTC)

Of course the word state has several other meanings in other contexts, but these are irrelevant to the presentation of the finite state machine definition.

WP:HORSELEGS wvbaileyWvbailey 02:05, 14 September 2006 (UTC)
Wrong: the state machine's parts come to a dead rest between moves. That is (one of) the definitions of "state" as used in the automata-theory and engineering worlds.wvbaileyWvbailey 19:32, 7 September 2006 (UTC)

Another example is the concept of actions or outputs. In general finite state machines are used for control purposes, i.e. they define some actions (a term used in software) or outputs (a term used in hardware) to be done at certain moments, in certain states. Among others, finite state machines are used to detect specific sequences of inputs: they are then called parsers, and they do not employ actions or outputs. A parser is then a specific variant of a finite state machine, and there is a link in the Wikipedia article for interested persons. It seems reasonable to discuss a general model of a finite state machine and then to indicate variants that do not use all possible features and are known under names that have evolved in the past, like parser, Mealy, Moore, Turing, push-down automaton, etc. The variants of the finite state machine sometimes use their own terminology; adding such specific and non-standard terminology while discussing basic definitions would not simplify the article but produce chaos that will not help people who are searching for basic definitions in Wikipedia. 18:55, 7 September 2006 (UTC)

But again we cannot mislead. We have to make this clear. This last stuff you wrote is pretty good, and is pointing me toward what needs to be said in the article: there are two definitions, one the classical and the other the "software version". The classical does not acknowledge 4 types of action. But the article doesn't paint that picture. What we have to do is present a number of opinions, and such opinions do appear. If you read Minsky carefully you will see that "state" has to do with something coming to rest between moves. Thus there is an alternate definition from the world of theoretical computer science. If there are other alternate definitions we will need to expand on these. As I said, the state definition should go into the state article. Neither definition is entirely correct nor entirely wrong; both are incomplete. wvbaileyWvbailey 19:32, 7 September 2006 (UTC)
This is my last entry on this topic. I've had my say. Perhaps the dialog has been useful to someone. I suggest that you add your references -- I think the older ones are wonderful i.e. Huffman, Mealy, Moore. I looked for Shannon's stuff but have been unsuccessful so far in finding anything except references -- back to the 1930's. I am still curious. I suggest you consider the quotations (Minsky, Kozen, Cannon) that I presented above. I may work on the state article to bring some of the various definitions out into the open. The next time I get to the library I will explore your references. You may see a bit of movement of bibliographic/references stuff around as I explore more. wvbaileyWvbailey 02:05, 14 September 2006 (UTC)

Acceptors and recognizers are a type of Transducer[edit]

It seems to me that the definition of an "acceptor" or "recognizer" falls under the definition of a transducer. Why are these written as two different types? I thought Mealy and Moore machines were the two different types of state machines, as all state machines fall under one of those categories. Fresheneesz 04:23, 13 December 2006 (UTC)

Yes, acceptors/ recognizers are a type of transducer, but not all transducers are acceptors/ recognizers. In particular, if the output can be different from the input, it is not an acceptor or recognizer. Mcswell (talk) 20:39, 4 April 2011 (UTC)

Figure 1 problem[edit]

Shouldn't figure 1 have a closed state? Both states are opened. Lacouf 21:21, 29 October 2007 (UTC)

I think so. That confused me too. I'm not knowledgeable enough about FSA to be confident that I'm right though. —Preceding unsigned comment added by (talk) 10:25, 2 November 2007 (UTC)

Of course it is wrong, it should be replaced with the gif again ( 12:57, 2 November 2007 (UTC))

Article for deletion - List of state machine CAD tools[edit]

List of state machine CAD tools is up for deletion. Since the list started on this page, I thought some people might care if it's gone. Here is where the deletion will be decided: Wikipedia:Articles for deletion/List of state machine CAD tools. Fresheneesz (talk) 06:17, 7 December 2007 (UTC)

Use Message Terminology[edit]

I find the references from this section which use message terminology much clearer and more consistent than this article.

I'm suggesting use of 'message', 'input message' or 'output message' in place of 'input', 'output', 'condition', 'action' etc. This emphasizes the importance of time and sequence in the theory. For example, the distinction between Deterministic and Non-deterministic FSMs is quite incomprehensible in the article as it stands but can be clearly explained in message terms. A non-deterministic FSM is one in which the state may depend on more than one input message, so the state is indeterminate from the arrival of the first message to the arrival of the last. This also makes it easy to explain why a non-deterministic FSM can be converted to a deterministic one - intermediate states are added for all possible message sequences which provides the machine with a message memory and representations for the previously indeterminate states.
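A hedged sketch of the deterministic conversion described above is the textbook subset construction: each state of the deterministic machine is a *set* of states of the non-deterministic one, which is one way of reading "intermediate states are added ... which provides the machine with a message memory". The small NFA below is invented for illustration; epsilon-moves are omitted to keep the sketch minimal.

```python
# NFA transition relation: (state, symbol) -> set of possible next states.
# This example machine is made up for the sketch.
nfa = {
    ("q0", "a"): {"q0", "q1"},
    ("q0", "b"): {"q0"},
    ("q1", "b"): {"q2"},
}

def determinize(nfa, start, alphabet):
    """Subset construction: each DFA state is the set of NFA states the
    machine could be in after the messages seen so far."""
    start_set = frozenset([start])
    dfa, seen, todo = {}, {start_set}, [start_set]
    while todo:
        current = todo.pop()
        for sym in alphabet:
            # union of all NFA moves available from any state in the set
            nxt = frozenset().union(
                *(nfa.get((s, sym), set()) for s in current))
            dfa[(current, sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return dfa

dfa = determinize(nfa, "q0", "ab")
```

On this example the construction produces three reachable set-states ({q0}, {q0,q1}, {q0,q2}), each with a deterministic move for every symbol, so the previously "indeterminate" situations now have concrete states.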

In these terms, a recognizer receives a sequence of input messages, each representing a single character. When the required sub-sequence has been recognized, it dispatches an output message signaling this fact. As far as I can see, message terminology is generic and covers all FSM variants and usages, avoiding the confusion of multiple domain-specific terms like 'string', 'input condition' and 'accepting state'.

JohnAHind (talk) 17:40, 6 February 2008 (UTC)
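The conversion JohnAHind describes, adding intermediate states that give the machine a "message memory" for all possible message sequences, is the standard subset construction. Here is a minimal sketch in Python; the particular NFA (a hypothetical three-state machine accepting strings over {0,1} that end in "01") and all names are illustrative choices, not anything from this discussion.

```python
# Subset construction: convert a small NFA to an equivalent DFA.
# The NFA is non-deterministic because state q0 has two possible moves on "0".
from itertools import chain

nfa = {                      # state -> symbol -> set of possible next states
    "q0": {"0": {"q0", "q1"}, "1": {"q0"}},
    "q1": {"1": {"q2"}},
    "q2": {},
}
nfa_start, nfa_accept = "q0", {"q2"}

def subset_construction(nfa, start, accepting, alphabet=("0", "1")):
    """Each DFA state is a frozenset of NFA states: the 'message memory'."""
    dfa_start = frozenset({start})
    dfa_trans, todo, seen = {}, [dfa_start], {dfa_start}
    while todo:
        s = todo.pop()
        for a in alphabet:
            # union of all NFA moves from any state in the subset
            nxt = frozenset(chain.from_iterable(nfa[q].get(a, ()) for q in s))
            dfa_trans[(s, a)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    dfa_accept = {s for s in seen if s & accepting}
    return dfa_start, dfa_trans, dfa_accept

def dfa_accepts(word, start, trans, accept):
    s = start
    for ch in word:
        s = trans[(s, ch)]   # deterministic: exactly one next state
    return s in accept

start, trans, accept = subset_construction(nfa, nfa_start, nfa_accept)
print(dfa_accepts("1101", start, trans, accept))  # True: ends in "01"
print(dfa_accepts("0110", start, trans, accept))  # False
```

The resulting DFA's state after any prefix is exactly the set of NFA states the machine "might" be in, which is why the previously indeterminate states become determinate.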

I do not share this view. To what extent does it render things clearer if we use the term "message" rather than, say, "symbol" or "action"? IMHO, time and sequence only come in as we talk about 'sequences' (or: words/strings) of messages (or: of symbols or actions). Hermel (talk) 09:54, 7 January 2009 (UTC)
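The recognizer described above in message terms can be sketched as a small stateful function that consumes one input message (a character) at a time and dispatches an output message when the required sub-sequence has been seen. The target "ab", the message format, and the reset-on-mismatch logic (which is correct for "ab" but simplified compared with a full failure-function construction) are illustrative assumptions.

```python
# A message-terminology recognizer: feed() takes one input message and
# returns an output message (or None) - state is how much of target matched.
def make_recognizer(target):
    matched = 0                                # the FSM's current state
    def feed(ch):
        nonlocal matched
        if ch == target[matched]:
            matched += 1
        else:
            # simplified restart; sufficient for targets like "ab"
            matched = 1 if ch == target[0] else 0
        if matched == len(target):
            matched = 0
            return f"recognized {target!r}"    # the output message
        return None
    return feed

feed = make_recognizer("ab")
outputs = [feed(c) for c in "aabcab"]
print(outputs)  # an output message fires after each completed "ab"
```

Run on the input message stream "aabcab", the recognizer dispatches an output message at positions 3 and 6, staying silent otherwise.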

Reference section[edit]

Why is the reference section abnormally large? --AllyUnion (talk) 23:14, 21 February 2008 (UTC)

Figure 1 is in a poor location[edit]

Figure 1 is discussed in the section comparing Moore machines and Mealy machines, but since it's at the top of the page, it's hard to follow the explanation. (talk) 02:15, 31 July 2008 (UTC)


The problem is much easier to understand for those who understand the aesthetics of the primary cycle of techno with knowledge of classical music, whatever is considered (and re-celebrated) as classical. :) —Preceding unsigned comment added by (talk) 19:28, 2 August 2008 (UTC)


Can anybody tell me why the title is "Finite-state machine" while the text says "finite state machine"? —Preceding unsigned comment added by (talk) 22:43, 8 November 2009 (UTC)

  1. finite state machine: finite modifies state and state modifies machine.  Done
  2. finite-state machine: finite-state modifies machine  Done
  3. finite state-machine diagram: finite modifies state machine and state machine modifies diagram.  Done

They are all correct. It is an interesting question, and I hope it will be an interesting answer.

NIST says a finite state machine is a kind of state machine or state automaton. Notice how the word state remains steady in the middle of a three-word phrase despite the loss of its first word finite (which can just disappear) or the morphing of its last word machine into automaton.

There are three meanings for "state":

  • the state of the machine, as described by a set of internal measurements
  • the state of measurement point, as described by a range of values, theoretically infinite
  • the state node on a "state" diagram that consists of many "states"

The third meaning of state relates to the diagram; the first two to the machine. The confusion arises because the three quantities of states stand in a fixed relation, so finite may refer to any one of the three and imply the others. A person may have any one of these concepts in mind when reading or writing such a hyphenatable phrase, but all three are needed to understand the answer.

Each of the quantities of states is in fixed relation to the others because the quantity of state-positions in the diagram is related to the quantity of state-points measured internal to the machine, which in turn is related to the quantity of values in the measurement range of each point.

Therefore, finite can refer to any one of those quantities. And those quantities may be lumped together in any hyphenated way, because finite applies to state in general. Finite refers to

  • the number of temporal states in the state diagram
  • the number of spatial substates internal to the machine ("machine-internal substates" or "machine internal-substates" or "internal machine substates")
  • the number of possibly-measured values (two for binary, etc) at any particular substate (a "machine-configurable measurement point", or "machine configurable-measurement point" or "measurement of a machine's configurable point")

Subset here implies a state is a set, and a subset is a member of that set. Finite also means closed as in set theory, where every operation on a member results in another member. The funny thing is, member can refer to either a diagram or a measurement point, and it is still true to say "every operation on a member results in another member". But wait! There is a fourth set. Each path through the state diagram is a meta-temporal temporal set!

The world is plumb full of it. Just stick up your thumb. Thank you for asking, for now I know. What remains is the edited phase-of-confusion I waded through.

A Google search shows that the following universities say "finite-state machine":


Quoting Freedman and Ince's "Active Java", p77

An object is something which has a state... the object paradigm is universal.

David J Comer in his book "Digital Logic and State Machine Design" says

...the practical meaning of the existence of states [is a] set of values measured at various points in a circuit. p.240

(Italics and bolding are mine.) That's one state containing several measurements. And he says: ...may run through a different sequence of states before returning to the starting state, the system has a finite number of such paths and repetition is inevitable. (p.242)

Obviously finite refers to paths and the three states.

For now, guess what? I'll make a redirect page to this one and keep the title. CpiralCpiral 05:31, 30 December 2009 (UTC)

Intro rewrite[edit]

Rewrote the intro. Need 2nd opinion for removal of the following tag { { tooshort } } . M!uqomzXb (talk) 12:03, 4 June 2010 (UTC)

The intro seems to give too much detail; in my opinion, it's too long.
I think the first paragraph, the beginning of the second up to "the model is implemented", and the fourth one are good enough. There is just too much detail for someone who does not yet know what the machine is. Anyway, all of it should be explained in the main article. Arthur MILCHIOR (talk) 07:00, 6 June 2010 (UTC)
Yes, I scaled it down to 2 paragraphs to comply with Wikipedia's guidelines. I was hesitant at first to do so, because the tag said that the lead was actually not long enough. But I think the intro sums things up quite nicely now.
In any case, the removed content is still in the article (obviously), namely under Concepts and Vocabulary and Classification. M!uqomzXb (talk) 11:21, 6 June 2010 (UTC)

References anyone?[edit]

I understand this is pretty much the real deal, but the entire first half of this page has no references. --Sgtlion (talk) 21:49, 13 January 2012 (UTC)

Finite as opposed to infinite?[edit]

I don't think there is a wiki page for an Infinite State Machine, so why does this page seem to be distinguishing itself from that page by insisting it is about a FINITE state machine? (In other words, the concept is STATE MACHINE, and the page should be as well.)

I see, upon further reading, that "Turing Machine" is offered as an example of an "Infinite State Machine." I think my criticism is still well founded although no longer well expressed. For all practical applications, "infinite" machines are irrelevant. And even when discussing what a Turing Machine might be able to do, to make the argument that it can solve some problem with an infinite amount of memory is to move the discussion ENTIRELY to the realms of the theoretical--which, of course, is just showing by example that the problem has no actual solution. Engineers learning about state machines don't need to spend any time trying to decide if their problems could or could not be solved if only they had an infinite amount of memory. In practice, when someone shows that a Turing Machine can solve something, an obvious next step is to decide how much memory it would actually use on a given problem, AKA "stack space." Mdlayt (talk) 18:32, 4 September 2012 (UTC)

"For all practical applications, "infinite" machines are irrelevant." - not really, but it doesn't matter. The article does not have an obligation to be useful and "practical".
Anyway, there is at least one "practical" use for "infinite memory": the Halting problem. It is a practical question, isn't it? And we prove that the question cannot be answered even using an infinite amount of memory. Thus we know that it cannot be solved using finite memory. And "finite memory" would be a "red herring" while trying to prove that. --Martynas Patasius (talk) 21:44, 6 September 2012 (UTC)
That is the reason Turing machines are a stronger computational model. All the things Turing machines can do that state machines can't, like adding or multiplying arbitrarily large numbers, are due to their unlimited memory.--ChetvornoTALK 22:31, 6 September 2012 (UTC)
Also, there are a lot of properties of finite state machines that depend on the number of states, which really should be mentioned in this article. For example, a FSM with N states can distinguish between (recognise) no more than N different input sequences, and the output of a FSM with constant inputs will repeat with a period of N or less --ChetvornoTALK 22:43, 6 September 2012 (UTC).
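Chetvorno's periodicity claim is easy to demonstrate: with a constant input, a machine with N states must revisit some state within N steps (pigeonhole), after which the sequence cycles with period at most N. A minimal sketch, using a hypothetical 4-state machine (the transition table is invented for illustration):

```python
# With a constant input, an FSM's state sequence is eventually periodic
# with period <= N, since only N distinct states exist.
N = 4
step = {0: 1, 1: 2, 2: 3, 3: 1}   # next-state function under the constant input

state, trace = 0, []
for _ in range(12):
    trace.append(state)
    state = step[state]

print(trace)  # [0, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2]

# Find the period: the first repeated state marks the start of the cycle.
seen = {}
for i, s in enumerate(trace):
    if s in seen:
        period = i - seen[s]
        break
    seen[s] = i
print(period)  # 3, which is indeed <= N
```

Here state 0 is transient and the machine settles into the cycle 1 → 2 → 3, giving a period of 3 ≤ N.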

mathematically FSM[edit]

I can't get through the mathematical FSM; it's just so confusing. Can anybody explain it in an easy way? — Preceding unsigned comment added by Syed Faizan jaffri (talkcontribs) 14:58, 1 September 2013 (UTC)

"Turing Machines are not Finite State Machines."[edit]

Twice now, my edits have been reverted by those who honestly seem to believe this. They seem to have reverted this based on the assumption that the other models of computation, Turing machines especially, are a lot more powerful than finite state machines and therefore cannot be considered finite state machines in their own right.

This is in spite of the fact that both pushdown automata and Turing machines use state machines as the core of their logic. Pushdown automata and Turing machines are, in full effect, both finite state machines, ones designed to manipulate a stack-based system and tape-based memory, respectively.

(I must admit that I may have worded my edits incorrectly, but if others realized what I was trying to say, they should have at least put in the effort of actually rewording my post instead of reverting it.) Karjam, AKA KarjamP (talk) 07:55, 8 January 2017 (UTC)

Hi Karjam, my reasons for reverting your edits were the following. There is a hierarchy of automaton classes (cf. diagram in the lead) with each class being able to parse more sophisticated formal languages than those below it; see Chomsky hierarchy for technical details. I think I understand the point you are trying to make, but I disagree - as an analogy, a steering wheel is part of a motor car, yet nobody would argue that it has the same 'automotive capabilities' as a motor car; similarly, although an FSM is part of a TM, the former has less computational power than the latter. - Jochen Burghardt (talk) 19:18, 8 January 2017 (UTC)
In the usual terminology, a Turing machine is not a finite state machine. It is true that a Turing machine is a machine with a finite number of states, but being a "finite state machine" means there is no other form of memory apart from knowing the current state. Turing machines have additional memory in the form of the tape, so they are not finite state machines. — Carl (CBM · talk) 23:04, 8 January 2017 (UTC)
First off, my gripe is not about the attempt at mentioning hierarchical state machines. It's about the reversion of the attempt to refer to the other automata as types of state machines.
Second, where's your source that claims "usual terminology"? Just because Turing machines use tape memory doesn't make them less of state machines, the same with pushdown automata and their stacks. Just because something has memory does not make it any less of a state machine. If it uses a finite number of states that can be transitioned between, it's a finite state machine. Simple as that.
(Besides, I have at least one application of state machines in mind which effectively requires the usage of memory in order to keep track of things, and that would be video games.)
Karjam, AKA KarjamP (talk) 07:04, 9 January 2017 (UTC)
For a reference, any textbook on Formal Language Theory will do. - Jochen Burghardt (talk) 19:41, 9 January 2017 (UTC)
Agree with Jochen Burghardt and CBM. A Turing machine has an infinite amount of memory, so it is not a Finite State Machine. --ChetvornoTALK 19:53, 9 January 2017 (UTC)
"Any textbook on Formal Language Theory" won't fly here on Wikipedia. I meant a specific reference, one which specifically states they're not Finite State Machines. You know, something that conforms to WP:Reliable. That policy's here for a reason, for if it weren't for policies such as that, I could say anything and you can't prove it. Wikipedia's reputation as an unreliable source among universities and the like would be completely unfounded.
I don't want to repeat myself on what I said about Turing machines being state machines. Let's just say that just because it manipulates memory doesn't make it any less of a state machine. Heck, you can even plan Turing machines using diagrams precisely because their logic happens to be run by a state machine. What you guys are not understanding is that I'm not saying a pure finite state machine, that is, one not designed to manipulate a tape or stack, is better than those that are designed to do so, in other words a Turing machine and a pushdown automaton, respectively.
What makes a Turing machine would be a state machine and a piece of tape. What makes a pushdown automaton would be a state machine and at least one stack. What makes a pure state machine happens to be just the state machine itself. It has limited memory not because it's unable to have more, but because it's not designed to handle such a thing. Under your logic, a finite state machine designed to manipulate an infinite tape would be a Turing machine, not a state machine.
Karjam, AKA KarjamP (talk) 05:01, 10 January 2017 (UTC)
Karjam, you are seriously confused about this topic; your use of "pure" should be a hint that something is amiss. It's not up to us to find reliable sources to refute you, but it is up to you to find reliable sources that support your edits. You are not going to find sources that say a FSM is as powerful as a PDA or TM. Nobody disputes that a Turing machine is a finite state machine with an infinite tape. That finite state machine can be configured as an interpreter to make a universal Turing machine: something that can simulate any finite state machine and any Turing machine. You cannot make a finite state machine that does that; it cannot simulate a machine that has more states. Nobody is going to dispute that a Turing machine is a "state machine"; it has states and transitions from one state to the next. There is a huge distinction in that TMs and PDAs are infinite state machines rather than finite state machines. (CBM is wrong here; what is written on the tape is state information.) I also won't dispute that my crufty laptop with 8 GB of RAM and 1 TB of disk is a finite machine; it does not have infinite memory, so it must be a FSM even though most consider it more of a Turing machine than a vending machine. The computational models are abstractions. For most problems, the infinite tape only needs to be big enough; one can never use all of an infinite tape anyway. My FSM laptop has most of the benefits of a TM (it can simulate a lot of useful TMs) as well as its ills (such as a computationally infeasible halting problem). Glrx (talk) 05:59, 10 January 2017 (UTC)
Although the information on the tape does affect the overall state of the computation in the general sense of "state", it is common in the literature to use the term "state" of a Turing machine to refer to current location in the table of states only, and use some other term such as "configuration" or "instantaneous description" instead of "state" to refer to the combination of that state, the tape contents, and the location of the read/write head. As in [1] or Sipser's text. — Carl (CBM · talk) 12:34, 10 January 2017 (UTC)
Agreed, but your, Glrx's and Burghardt's point remains. A Turing machine contains a FSM, and the capabilities of a Turing machine include those of a FSM, but a Turing machine also includes other parts and has additional capabilities, so a Turing machine is not a FSM. --ChetvornoTALK 21:10, 10 January 2017 (UTC)
Alright, guys. I have a hunch that we may be misunderstanding each other. What Chetvorno says is a clue: "A Turing machine contains a FSM, and the capabilities of a Turing machine include those of a FSM, but a Turing machine also includes other parts and has additional capabilities, so a Turing machine is not a FSM."
It's a state machine that powers a Turing machine, like Chetvorno says. When I said a "pure Finite State Machine", I meant state machines on their own, not state machines that are used in conjunction with other features to become some other automaton (i.e., combined with a tape to become a Turing machine, or combined with a stack to become a pushdown automaton).
Of course, they're not Finite State Machines, for if they are, they wouldn't have those extra capabilities that make them a Turing machine or a Pushdown Automation. That's what you guys are saying, right? Correct me if my line of reasoning seems to be wrong.
Karjam, AKA KarjamP (talk) 12:17, 11 January 2017 (UTC)
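The distinction this thread converges on can be made concrete: a Turing machine is a finite control table (itself a finite-state machine) attached to an unbounded tape. A minimal sketch, using a standard textbook-style binary-increment machine; the state names, symbols, and transition table are illustrative choices, not from the article:

```python
# A Turing machine = finite control (an FSM) + unbounded tape.
# This machine increments the binary number written on the tape.
# control: (state, read symbol) -> (write symbol, head move, next state)
control = {
    ("right", "0"): ("0", +1, "right"),   # scan to the right end of the number
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),   # hit blank: step back, start adding 1
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry propagates left
    ("carry", "0"): ("1", -1, "done"),    # 0 + carry = 1, finished
    ("carry", "_"): ("1", -1, "done"),    # ran off the left end: new digit
}

def run(bits):
    tape = dict(enumerate(bits))          # a dict models the unbounded tape
    state, head = "right", 0
    while state != "done":                # the finite control drives everything
        sym = tape.get(head, "_")         # unwritten cells read as blank
        write, move, state = control[(state, sym)]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")

print(run("1011"))  # "1100"  (11 + 1 = 12)
print(run("111"))   # "1000"  (7 + 1 = 8)
```

The control table has only two states, yet because the tape is unbounded the machine handles numbers of any length, which is exactly the capability a tape-less FSM lacks.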