Talk:Pirahã language


Language lacks recursion

If the language lacks recursion, how do speakers say "Mary thought that I said to Carla that she wasn't home"? I know Everett says that the clauses at the end of the sentence are separate sentences, but there's no way that works here. Do they say: "Mary thought a thought. The thought was this. I said to Carla an utterance. The utterance was this. Mary wasn't home." Which is completely ridiculous and inhuman.Likebox (talk) 13:53, 1 January 2010 (UTC)[reply]

So I read that Everett reports that they say something like this: "Mary thought", "I said to Carla thought-she. Mary wasn't home I-said." Well, that's it for UG.Likebox (talk) 16:30, 1 January 2010 (UTC)[reply]
"I thought Carla wat home" isn't recursive even in English: only the UG modeling of it is recursive. In reality, it's more like "Carla wasn't home" + evidential, and we don't call evidentials recursion. And there are lots of languages which simply don't say things like "I thought that X said that Y wanted Z to ...", not just Piraha, so it's entirely human (even if Everett believes Piraha to be more extreme, which is considered dubious by at least some linguists who think UG is laughable). Also, English-speaking children don't pick up true subordinate-clause-type recursion until about the age of eight: eight years of daily input to learn s.t. that's supposedly innate? That doesn't look too good for UG even without the claims for Piraha.
I do think the counterclaim that Everett himself recorded recursion, as if he didn't recognize that when he made his claim, is silly, so thanks for putting that in context. kwami (talk) 21:43, 1 January 2010 (UTC)[reply]
But children very early learn that it's ok to say: "I thought that John thought that Mary thought that Carla thought that I wasn't home". Anything that can go on forever is recursion by definition, including "John and Mary and Jacob and Susan and Martha and Sally went to the market", which is disallowed in Piraha, as well as "the book is big, blue, heavy, long, and incomprehensible", which is also disallowed! It's completely unbelievable. If I hadn't read Everett's paper and looked at his examples of the convoluted ways in which they say things in order to avoid recursion, I would have thought that he had gone crazy.
I agree with Chomsky that the human type of stack-recursive, context-free grammar is somehow innate to the human mind, just because you can see examples of the human love for context-free grammars everywhere in artificial languages. All computer languages and most mathematical notations are at their core based on context-free grammars, and if you try to introduce a language which has constructions more complex than context free (for example, tensor index notation), the human mind rebels against it. Mathematicians have spent years calling such things "inelegant" or "obfuscatory" when what they mean is "recursive but not context free".
So hearing about Piraha for me was like learning that there is life on Mars. But Everett gives evidence that is fully persuasive.Likebox (talk) 14:42, 2 January 2010 (UTC)[reply]
Just to expand a little: Everett admits that the Piraha can tell stories which are fully complex, involving many layers of recursion. So they can say "John told Martha that Sally said that Joe was fishing, but he was out of his mind!" But they have to say it in ridiculously verbose ways, something like: "John told Martha a story. Sally said something John-said. Joe was fishing Sally-said. That was John's story. His mind is weak."
The recursive thought is still there, but the expression for this thought does not come in context-free form. Instead, it comes in the form of bits which can be patched together to make the complex thought with no embedding in a single easy to digest recursive form. This type of thing introduces some new ambiguities that need to be sorted out using context. Did Sally really say something, or was that only John's story to Martha?
One can still salvage a weak form of UG by saying that the human mind has a predisposition to learn context free grammars over other types of grammars. That's gotta be true, because consider the construction:
John went to the store is green.
Meaning: John went to the store and that store is green.
That would never, ever, be allowed in any language, period. Even though "John went to the store" and "the store is green" are both perfectly OK. You are just never, ever, allowed to make overlapping constructions like this. Before I heard about Piraha, I would have said that the reason is obviously because this type of construction is incompatible with nested recursion. But now, I am curious. Can you make ridiculous sentences like this in Piraha?
The bias for context-free grammars is obvious in some semantic theories. For example, in Fregean semantics, you identify specific nouns like "my dog Spot" as primitive semantic ideas and predicates as functions from nouns to meanings. Then generic nouns are functions from predicates to meanings. Then you can identify parts of speech as certain types of functions. But there is endless debate over whether adverbs should be classified as one type of function or another. This type of debate is caused by the fact that the nesting of functions creates context-free forms for meanings which are really best described by a graph of relations which is not a tree. In order to get a tree out of a graph, choices have to be made, so there are ambiguities. The only reason I can see that Frege and his followers did it this way is because they were slaves to the innate bias for context-free constructions.
More evidence for context-free human bias: here's a theorem in elementary calculus which nobody would ever think of:
There is a product rule: d(AB) = A dB + dA B. Consider this as a circle drawn around AB. Then it says (AB) = A(B) + (A)B, meaning you can break up a circle around two objects into the sum of circles around one object.
Now consider three objects A, B, and C, and draw circles around A and B, around B and C, and around C and A. Does this make sense? In other words, if you use the product rule to break it into terms, do you get the same answer independent of which circle you break up first? The answer is yes.
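To spell that out for the simplest overlapping case, take just the circles around A,B and around B,C acting on the product ABC, and read each circle as differentiating exactly one of the factors it encloses, summed over the choices. Breaking up the A,B circle first gives
    dA (dB C + B dC) + A (d²B C + dB dC) = dA dB C + dA B dC + A d²B C + A dB dC,
while breaking up the B,C circle first gives
    (dA dB + A d²B) C + (dA B + A dB) dC = dA dB C + A d²B C + dA B dC + A dB dC.
The two expansions contain exactly the same four terms, so the order in which the circles are broken up does not matter.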
So it is possible to define the "overlapping second derivative" of a product: ABCDE with circles drawn around any two overlapping subsets of the five factors defines a unique second derivative. It's impossible to even say this in normal calculus language, without a diagram. The only reason I can say it at all is because I was using Penrose notation for derivatives (derived from tensor index notation, which allows for non-tree constructions which cannot be summarized well by context-free forms).
The evidence for innate human bias towards context free grammars is overwhelming, to my mind. I assumed that it was because of language. But now I don't know where this bias comes from. I want to point out that most people are not aware of this bias because they have it so strongly that when you say "grammar", they can't imagine anything other than a context free grammar.Likebox (talk) 15:54, 2 January 2010 (UTC)[reply]
Perhaps I just don't get what you are saying but I doubt that a sentence construction like "A and B and C" or the arithmetic equivalent "A + B + C" is inherently recursive. Yes, of course current formal definitions of what these things mean resort to recursion because that is convenient and concise, but such "definitions" have come millennia after the constructs themselves were invented: they are an afterthought. In other words I doubt that any early "mathematician" would have regarded "A+B+C" (when getting apples from A and B and C) as either "(A+B)+C" or "A+(B+C)" although a formal, recursive definition would require one of these two interpretations to be made explicitly. I think that the early mathematician simply would have said: "I want all those apples, that's what I mean". And to me that is not recursion.
In other words, extending your example: drawing a single circle around A, B and C makes perfect sense to the early mathematician because it means the same as drawing a circle around each letter individually. But using a recursive definition we need to make a choice first and that the end result will be independent of that choice is not obvious a priori (as extending the example to a non-associative operation shows). So introducing the recursion changes the semantics. AlexFekken (talk) 09:57, 24 January 2010 (UTC)[reply]

(deindent) I agree with you that you can consider A+B+C better as a flat list than an embedded (A+B)+C, but both are technically "recursive" because they both go on forever. But I am not interested in these overly simple examples.

That means you are totally missing my point. It seems highly unlikely to me that somebody woke up one day and decided to invent language and at the same time decided (consciously or unconsciously) to make it a fully recursive context-free grammar. And then also contemplated the difference between a list and a parse tree when collecting apples from three people. A+B+C as used in my example is neither, because my primitive mathematician did not need a definition. And simple examples were all he/she had. Your arguments are like saying that Newton and Leibniz defined calculus in terms of Dedekind cuts or equivalence classes of Cauchy sequences of rational numbers (at least somewhere deep inside their minds). But they didn't: they just had a gut feeling for what a real number is and that was enough to mess about and get results. AlexFekken (talk) 11:03, 25 January 2010 (UTC)[reply]

Consider the language of + , * , ( , ) and consider the expressions of the form

A + B*C + (A+B)*(A+(B+C)*D)

These expressions are all context-free expressions in a context-free language. Why is that? Is there anything inherently "context free" about the integers? No, but there is something context free about the language we invented for manipulating them. Why did we do that?
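To make the point concrete, the grammar behind such expressions, and a recognizer whose functions mirror its rules one for one, can be sketched as follows (a rough sketch in C; whitespace handling and error reporting are left out):

    /* The expression language of + * ( ) and single-letter names is generated
       by the context-free grammar
           Expr   -> Term   { '+' Term }
           Term   -> Factor { '*' Factor }
           Factor -> '(' Expr ')' | letter
       and each grammar rule becomes one function below. */
    #include <ctype.h>

    static const char *p;              /* cursor into the expression being read */
    static int expr(void);

    static int factor(void) {          /* Factor -> '(' Expr ')' | letter */
        if (*p == '(') {
            p++;
            if (!expr() || *p != ')') return 0;
            p++;
            return 1;
        }
        if (isalpha((unsigned char)*p)) { p++; return 1; }
        return 0;
    }

    static int term(void) {            /* Term -> Factor { '*' Factor } */
        if (!factor()) return 0;
        while (*p == '*') { p++; if (!factor()) return 0; }
        return 1;
    }

    static int expr(void) {            /* Expr -> Term { '+' Term } */
        if (!term()) return 0;
        while (*p == '+') { p++; if (!term()) return 0; }
        return 1;
    }

    /* e.g. p = "A+B*C+(A+B)*(A+(B+C)*D)"; expr() returns 1 and leaves *p == '\0' */

Every call site combines the results of its sub-calls into a single answer, which is exactly the tree-shaped, one-output-per-node structure being discussed.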

In the same way as natural language, it took a very long time for current mathematical notation to evolve. If you tried to explain to an ancient Greek mathematician (or a medieval one) what the above expression means, he/she would probably tell you it is complete nonsense. Basically the argument would be that the dimensions don't match. And good luck explaining the meaning of dimensionless quantities. So again what I am trying to say is that your reasoning is based on hindsight: yes, it accurately describes an end product, but it does not seem to describe how these things evolve, and in my opinion that makes it unlikely that it reflects something innate. AlexFekken (talk) 11:03, 25 January 2010 (UTC)[reply]
(You should not intersperse comments, because it makes the discussion difficult to follow for others who try to read it. I'll respond in the same way, but this is considered bad form) You are wrong about this. To make it OK for a medieval mathematician, you just have to homogenize the polynomial. So you say
A*U*U + B*C*U + (A+B)*(A*U + (B+C)*D)
Where "U" is a fixed unit of measurement. The dimension of everything now matches.
While you are right that this evolved historically, I give you examples below where the evolution went back and forth. In these cases, the bias for context-free grammars is evident in the treatment meted out to non-context-free formalisms. They were always called "incomprehensible", "obfuscatory", "inelegant", "ugly" etc. The vastly inferior context free versions were called superior, even when they obviously weren't, and even when they lost out in history.Likebox (talk) 13:53, 25 January 2010 (UTC)[reply]

Consider the expressions in calculus for "derivative of a function", D.

D(a D(bc) d)

which means the derivative of ( a times the derivative-of-(b times c) times d)

These are standard notations for mathematical expressions, and they are all generated by a context-free grammar. But is this because there is something inherently context-free about the notion of derivative?

For example, let me use D(ab) to mean the derivative of the product a times b, and D[ab] to mean exactly the same thing, so that I have two different types of parentheses that mean the same thing.

Using two different parentheses, I can consider the expression:

D(a D[bc] d)

which means the same as the previous expression: the derivative of ( a times the derivative-of-b-times-c times d). But now you can make a non-context-free expression:

D(a D[bc ) d]

Which is going to be hard to read--- it means the derivative of the product abcd where one derivative is of only the "abc" part, while the other derivative is only of the "bcd" part. This expression also makes sense, meaning that it expands to a unique sum of derivative terms under the product rule. So why does the mathematical language not even allow us to say an expression like that?
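Spelled out, writing x' for the derivative of a factor x and reading each derivative bracket as differentiating exactly one of the factors it encloses, summed over the choices, the overlapping expression expands to
    a'b'cd + a'bc'd + a'bcd' + ab''cd + 2ab'c'd + ab'cd' + abc''d + abc'd',
and the same sum comes out no matter which bracket is broken up first, so the expression is perfectly well-defined even though the usual notation cannot express it.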

I used to think it was because our natural language was context-free. So this infected our thought process, and then this infected our mathematical language. But now it seems it started out in the thought process.Likebox (talk) 19:39, 24 January 2010 (UTC)[reply]

And so I still disagree: I think it did not start in our thought processes, but our thought processes moulded these things into shape. Eventually. But perhaps our argument is really about whether (or to what extent) they could have moulded it into a different shape? AlexFekken (talk) 11:03, 25 January 2010 (UTC)[reply]
I know very well the history of the thing. The question I am asking is why do we always end up making up a context-free grammar, even when it is suboptimal or not particularly appropriate for the task? What is the central property of our thinking that leads to this? I'll give the answer I think is true. But the question is still there, even if the answer is wrong. I know this bias is there, because I have seen people design artificial formalisms, and they get stuck in this context-free trap all the time.
The property is the "one-valence" property--- namely that we tend to see two things come together to produce one answer not two. This is why "plus" and "times" are easier to understand than "divide/remainder". This is exactly analogous to Chomsky's "merge" operation, the operation that takes two parts of a sentence and gloms them together to produce a single unit. It never "splits" objects in two, nor does it ever "collide" objects to take two things and make two different things. So A+B+C is glommed together in order, either from the left or from the right to make one integer. The more sophisticated expression A+B*(C+D) is still glommed together by using "merge" operation at every stage. Merge produces context free grammars.
But there are cases where merge is not the right way to look at it. If you are doing division, then there is the quotient/remainder pair. We don't have a good formalism for this. How would you say (A+B)/(A-C*D)*(Div-rem), which means multiply the quotient by the remainder? We have to choose one or the other endpoint, so that either you say A+B mod (A-C*D) to describe one of the outputs, or you say (A+B)/(A-C*D) to describe the other.
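For what it's worth, C's standard library does expose a genuine two-output division: div() in <stdlib.h> returns the quotient and remainder together in one struct, but they still have to be unpacked by name rather than composed inside an ordinary nested expression the way + and * can be. A rough sketch (the function name and variables are just placeholders for the A, B, C, D above):

    #include <stdlib.h>

    /* Assumes the divisor a - c*d is nonzero. */
    int quotient_times_remainder(int a, int b, int c, int d) {
        div_t qr = div(a + b, a - c * d);   /* one call, two results */
        return qr.quot * qr.rem;            /* "multiply the quotient by the remainder" */
    }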
To give the motivating example, where language went the other way (away from context free), in the 1930s, when computation was being defined, it started out called "recursive function theory". A recursive function is a function, applied to arguments, whose definition is allowed to invoke the function itself. The function expressions were all context-free expressions--- functions inside functions.
But when people actually built computers, and even within logic, the primitive operations of the computer were never defined in terms of context free recursive function expressions. Starting with Turing, the computer itself has instructions which are less sophisticated than recursive functions. They only do primitive things on memory.
But these primitive things have side-effects. They hardly ever obey the rule "two becomes one". For example, "multiply" in a modern instruction set gives two answers, the low 32 bits and the high 32 bits. So given two 32 bit numbers, you get 64 bits of answer, and the computer returns both answers in two registers. Likewise, when you do "add", you also get a "carry" bit, a "zero" bit, and a "sign" bit, of which only the carry tells you non-redundant extra information. So add is a "two becomes two" operation--- two numbers form a sum and a carry. Likewise multiplication is two to two: two numbers become the low bits and high bits of the product.
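Sketched in C, using a wider integer type to recover what the hardware hands back directly, the two-output shape looks like this (note that even here the second output has to come back through a pointer, not as part of an expression):

    #include <stdint.h>

    /* 32-bit add: the hardware produces a sum and a carry bit. */
    void add_with_carry(uint32_t a, uint32_t b, uint32_t *sum, uint32_t *carry) {
        uint64_t wide = (uint64_t)a + b;
        *sum   = (uint32_t)wide;          /* low 32 bits */
        *carry = (uint32_t)(wide >> 32);  /* 0 or 1 */
    }

    /* 32-bit multiply: low and high halves of the 64-bit product. */
    void multiply_full(uint32_t a, uint32_t b, uint32_t *lo, uint32_t *hi) {
        uint64_t wide = (uint64_t)a * b;
        *lo = (uint32_t)wide;
        *hi = (uint32_t)(wide >> 32);     /* the "other product" */
    }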
The assembly language allows a programmer to make a program, keeping in mind all the side-effects of each operation. The expressions which correspond to these assembly language instructions are general graphs, not trees. But assembly languages were turned into higher level languages in the 1960s and 1970s. Lo and behold, these high level languages are nearly all context free!
This pisses off people who know how to program assembly. When you add two integers, you want to talk about the carry. But you can't! It's not available in the high level language. Similarly, when you multiply integers, you want the high-order bits quite often (if you always just keep the high-order bits instead of the low-order bits when you multiply, you implement real numbers between 0 and 1 in a natural way). But you can't even talk about these bits! There is no expression you can form which talks about the "other product" of two numbers, because we are slaves to the rule "two becomes one". This is Chomsky's merge, and it is a biasing property of human thinking.
So many assembly programmers hated all high level languages. The one exception is C. The reason C is special is because a very, very few of the most commonly used assembly side-effects have expressions. So C is tolerable for an assembly programmer, but still less than optimal.
But C, because of its side effects, is considered "inelegant", "obfuscatory", "not clean". The clean language is LISP, which is completely context free. In fact, in the 1970s, when C was replacing LISP, programmers considered it a paradox! They coined a phrase "Worse is better" to explain why UNIX/C was displacing LISP machines. The answer is because CONTEXT FREE SUX. Merge is STOOPID. You need a language with two-to-two operations, like add/carry, and as few restrictions as possible. But humans are biased toward merge.Likebox (talk) 13:30, 25 January 2010 (UTC)[reply]
The other example where people moved away from context free grammar is in quantum field theory. In the 1930s and 1940s, the expressions in quantum field theory were context free algebraic expressions of quantum fields. But people got bogged down in calculations. So in the 1950s, Feynman made up a diagram calculus which fit the structure of the processes correctly. These expressions were not algebraically sophisticated, but they at least weren't context free. Expressions formed graphs, not trees. These diagrams took over physics.
But there are still things you can't say in diagram language very well. These things are currently said in terms of operator product expansion, which uses a context free multiplication/addition grammar. This is a bad fit, and the tension between the language and the thing that the language is describing leads to headaches.Likebox (talk) 13:44, 25 January 2010 (UTC)[reply]
I remembered another historical example of merge-thinking. In ancient times, when Aristotle was writing about botany, he would notice that species form a tree. So if plant A and plant B are closely related, and A is distantly related to X and Y, then B is similarly distantly related to X and Y. You can write a taxonomy tree for the species like this ((AB)(XY)) which gives a context-free expression which describes how the plants are related.
But Aristotle never asked why the plants are related this way. In fact, he proposed it as the only possible way that things can be related! The idea is that species A and B form a genus, and X and Y form a genus, and the two genera have a relationship to one another. This idea of merging objects into a category is an example of "merge", and umpteen moronic philosophers (starting with Aristotle) have considered this the basic law of thought.
But this type of thing doesn't work to describe relations in a graph, only in a tree. So it is an important question to ask why species are arranged in a tree. It was Darwin who asked this question (and provided an explanation). So again, here is a historical example where humans have biased their thinking by mistaking a merge operation (hierarchical non-overlapping categories) for a law of thought instead of an interesting law of nature which requires explanation.Likebox (talk) 14:09, 25 January 2010 (UTC)[reply]
And for a computer science example on a higher level, there is structured programming from the 1970s. It told people "write context free high-level structure", and the reason is so that other people can understand what you are doing. This is a bunch of baloney, but it sounds good to non-programmers, who are biased by merge-thinking, Aristotle style.Likebox (talk) 14:19, 25 January 2010 (UTC)[reply]
I forgot to mention that PASCAL, which was super-structured, was considered the greatest programming language ever by everyone except those who programmed in it. This is the bias toward hierarchical structure, toward merge. I have seen too many examples of this bias to believe that it isn't real. I thought (like Chomsky) that this bias comes from the hierarchical merge structure of language, but it seems to be built into the mind for other reasons. It's definitely not necessary. Six months of assembly programming will train you to get rid of it.Likebox (talk) 14:24, 25 January 2010 (UTC)[reply]

Misrepresentation of Everett

Everett's work has been misrepresented here--- he doesn't make the jumbled-up claims that have been presented in the article. He says that sentences cannot embed clauses with subclauses, and even the single-embedding level is very limited. He also says that the same thoughts are expressed by splitting up the information into separate sentences and placing them close together. These are not incoherent claims, and they have been made to look that way on this page.Likebox (talk) 20:27, 17 January 2010 (UTC)[reply]

The criticism of Everett's interpretation of what is called "embedding" here is ludicrous. He gave a detailed response, with dozens of examples, and he is pretty much the only credible bilingual speaker of the language. The criticism that his own arguments are undermined by his own examples is nonsense. I read the relevant papers, and absent a second expert going down there and learning the language, the claim that the "sai" construction is not recursive is not undermined by anything that he wrote in 1983 or 1986.
As far as sources go, the source is Everett's recent response to the criticism of his 2005 paper, and this response is available on his website. While I do have the POV that Everett is correct, it is also important to realize that a neutral POV does not mean giving equal weight to non-speakers and speakers in resolving what constitutes a grammatical construction in a language.Likebox (talk) 06:21, 18 January 2010 (UTC)[reply]
If you wish to read the paper, it's this link. Pay attention to the series of examples involving hammocks. On page 413: (sorry original text omitted--- I don't know how to write Piraha)
Chico sold the hammock. I want the hammock.
Everett points out that if this is interpreted as an embedding, then Chomsky's binding conditions for embedded clauses do not allow for the token "hammock" to refer to the same object. In English, "Chico sold the hammock that I want" does not repeat "hammock", and becomes ungrammatical when it does repeat. The clincher comes on the previous page with:
I want the hammock. I am like a Brazilian. Chico sold a/the hammock. It is the same one.
Here, the claimed embedded clause occurs a sentence later, and is obviously non-embedded. Many of the examples in the response paper are new, and are obviously generated from long experience speaking the language. Barring any sign that the critics can speak the language, it is hard to see how they can criticize the claim that this is how you construct sentences which convey the idea "I want the hammock that Chico sold."Likebox (talk) 06:45, 18 January 2010 (UTC)[reply]

There's been a back-and-forth in Language, the journal of the Linguistic Society of America. So whether you think the criticism of Everett is "ludicrous" or not, it is clearly criticism taken seriously in the field of linguistics and therefore needs to be represented in the article, alongside Everett's statements, with NPOV.

But anyway, the issue isn't "speaker vs. non-speaker", but accuracy and coherence of claims. On the issues you cite, Everett is just wrong.

For example, if the structure of your first example were a correlative as Everett first claimed, it would be rendered as "Which hammock Chico sold, I want that hammock." That most emphatically does not violate principle C of Chomsky's binding conditions, since there is no c-command between the two occurrences of "hammock".

As for your second example, of course it is possible to paraphrase a relative clause using a separate sentence. It's possible in English too, though. Does that mean English lacks embedding? So this test does not tell us anything about the presence or absence of embedded relative clauses in a language. Clangiphor3 (talk) 13:40, 18 January 2010 (UTC)[reply]

Unfortunately, the current text does not represent the back and forth correctly. First, the claim that the sai construction is embedding is due to Everett himself, and in that paper, he believed that the embeddings are extremely limited, more so than in any heavily studied modern language. So if you are going to say "sai is embedding", you need to say "according to Everett 83/86, sai is embedding of an extremely limited type."
Over time, Everett reinterpreted the very limited forms of embedding he found originally as parataxis. The fact that you can say nonrecursive things in English is not the point--- the point is that in Piraha you can only say them in a non-recursive manner. If you wish to challenge this point, you need to go and listen to Piraha and find a corpus of sentences, and point out a recursive one. The examples that Everett gives should not be interpreted as recursive, since they do not allow you to say the most elementary recursive constructions, like "which hammock the man with the eyepatch sold I want that hammock".
Your claim that there is recursion in the language needs to be supported by real data. The interpretation you give to Everett's example is much less natural than Everett's, and you don't speak the language. If you are claiming that the hammock examples involve recursive embedding, the easiest test is to try to embed multiple times, to settle the point once and for all. Everett claims that there are no examples of multiple embeddings (and that the interpretation of single embedding is itself a stretch). Barring an independently acquired corpus, this claim is difficult to challenge.
Still, there is a back and forth, and this needs to be represented. But you cannot use the back and forth arguments to make it seem like there is embedding in Piraha: you can just say that the only credible linguist that speaks the language claims that there is no embedding, while others who don't know the language have argued on theoretical grounds that this is not possible.
This article should not say that "Everett's analysis is undermined by the sai examples" because these are his examples. He explains them in great depth, and he shows that they are not recursive embedding in any ordinary sense of the word. The fact that there are morons arguing with him is neither here nor there. He's the only credible witness at the moment.Likebox (talk) 19:29, 18 January 2010 (UTC)[reply]

There's no excuse for using words like "ludicrous" and "morons" to describe people and views with which you happen disagree. Clangiphor3 (talk) 20:01, 18 January 2010 (UTC)[reply]

The excuse is that the views are ludicrous, and the people pushing them are morons. However, they should be represented here on Wikipedia, because they are notable and published.Likebox (talk) 20:15, 18 January 2010 (UTC)[reply]
This link is a later response to the NP&R nonsense. As he says, even if their analysis is dead-on right (which it obviously isn't), no Piraha sentences would have more than one level of embedding, and there would still be a maximum sentence length. Sorry dudes, that's not recursion.
Just so you know--- I am not a linguist, so I have no bone to pick in this fight. I thought Chomsky was 100% right until a few days ago when I stumbled across Everett's stuff here. I deleted references to his work because I thought he was a crackpot at first. But then I double checked, just to be sure, and whaddayaknow.Likebox (talk) 00:48, 19 January 2010 (UTC)[reply]

Proposed "embedding" section[edit]

Embedding (old version)

In order to embed one clause within another, the embedded clause is turned into a noun with the -sai suffix seen above:

    hi ob-áaxái kahaí kai-sai
(s)he knows-really arrow make-ing
"He really knows how to make arrows" (literally, 'he really knows arrow-making')
    ti xog-i-baí gíxai kahaí kai-sai
I want-this-very.much you arrow make-ing

Everett claims that this structure does not really constitute embedding, but is an instance of parataxis, but this has been disputed by other linguists.[1] Everett responds to these criticisms with the claim that -sai marks 'old information' and does not nominalize.[2] His critics have replied with the observation that if "-sai" actually marks 'old information', Everett's arguments against embedding are actually undermined, so that 'almost none of [Everett's 2005] original arguments for the lack of embedding remain'.[3]

What's wrong with this?
  1. It claims that "sai" is used to embed one clause within another. This is not supported by any data. The original claim was that "sai" is used to embed a clause one level deep and that's it.
  2. Everett does claim that this structure is parataxis, but more importantly, he claims that you can't use this structure recursively, so that you can't say "I really like your saying that you really like my arrow-making". This is the central claim, no recursion, not the analysis of this one sentence as parataxis or embedding.
  3. The criticism that "if sai marks old information, Everett's arguments against embedding are actually undermined" is both foolish and out of context. How are they undermined? This is a stupid sentence, and should be removed. If you wish to put in the criticism, say that his critics continue to interpret sai as a clause embedder.
Embedding (proposed rewrite)

Everett used to claim that in order to embed one clause within another, the embedded clause is turned into a noun with the -sai suffix seen above:

    hi ob-áaxái kahaí kai-sai
(s)he knows-really arrow make-ing
"He really knows how to make arrows" (literally, 'he really knows arrow-making')
    ti xog-i-baí gíxai kahaí kai-sai
I want-this-very.much you arrow make-ing

Everett later retracted the claim that this structure is recursive embedding, since the constructions allowed are limited. In doing this, he further claimed that Piraha doesn't have any recursive embedding at all. He reclassified many of the instances which he had earlier thought were recursive as instances of parataxis, sentences on related subjects which follow one another closely, and which include the content which would be embedded in clauses in a recursive language such as English.

He argues that the single sentence construction, such as the one above, is not a true embedding since it cannot embed clauses with subclauses, like "He really knows how to talk about building houses". To express these thoughts, you must split them into two sentences. Close juxtapositions of sentences can be mistaken for embeddings by a linguist who only has a slight familiarity with the language.

This claim conflicts with the most fundamental principles of Chomskian linguistics, and has been vigorously disputed by other linguists.[1] They believe, based on established theoretical principles, that Everett's new analysis must be incorrect. Everett responds by saying that his earlier understanding of the language was incomplete, and slanted by the same theoretical biases, which demand that recursive embedded constructions should be possible in all languages. He now classifies the stuff before the -sai as 'old information' which does not nominalize.[2]

What's wrong with this version?

I think this is accurate.Likebox (talk) 01:28, 19 January 2010 (UTC)[reply]

It is not accurate. For one thing, the claim does not "conflict with the most fundamental principles of Chomskian linguistics", no matter what Everett says. (If you disagree, produce the relevant argument from Chomsky or others whose work can be called "Chomskian".) In fact it doesn't conflict with any principles of anybody's linguistics I know of. Every language has lots of restrictions on embedding, that's one of the ways languages differ from each other. Also the "vigorous dispute" from other linguists is not on the grounds of "established theoretical principles" but on the grounds of what can or cannot be claimed on the basis of the published facts.

I agree the current article can be improved, but probably you should leave that job to editors with more expertise and background in the field, and with less tendency to dismiss opposing views as "ludicrous" (and those who hold them as "morons"). Clangiphor3 (talk) 19:58, 19 January 2010 (UTC)[reply]

Sorry to burst your bubble, but there's no such thing as an "expert", or at least, I am as much of one as anyone else, except Everett. I learned the stupid jargon, I read the relevant literature, and I am familiar with the relevant mathematics.
I am glad that you agree that the current wording is no good. However, you are absolutely wrong about Chomsky. The lack of embedding and recursion in general conflicts with Chomsky's core claims about language:
Chomsky has, from the beginning, emphasized that humans can produce infinitely many sentences in a language from a grammar once the basic structure has been absorbed. He calls this the "creative aspect" of language.
Without recursion, or embedding within embedding, languages cannot be infinite. In particular, Piraha is finite. There is a maximum number of words in a sentence, so that there are only finitely many different sentences. This contradicts the core claim that the syntax of any human language allows for infinitely many sentences. In Piraha, given the limited size of the vocabulary and the limited number of proper names in use, it might be possible to actually list all the sentences.
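To make the counting explicit: with a working vocabulary of V words and a maximum sentence length of n words, there are at most V + V^2 + ... + V^n distinct word sequences, and hence at most that many sentences, a finite number however large V and n are.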
Recently Hauser, Chomsky and Fitch published the claim that the only thing that distinguishes language from animal grunts is recursion. The claim is demolished by Piraha, since the finite number of sentences is clearly enough (in context) to produce any utterance that is worth discussing. The sentences only become recursive on the story level, not at the level of syntax.
That's a falsification of Chomsky, pure and simple. Barring a finding which shows that Piraha has somehow undergone a recent language destruction event, the idea that generative grammars with recursive structures are fundamental to human grammar is demolished. It's over.
Not only that, but it seems that Piraha is not really that exceptional, but that nobody before Everett had the balls to say straight up that a language lacks recursion, for fear of being labelled a racist or worse, or being attacked by Chomskyites.
I have nothing against Chomsky--- it was a great idea. It just isn't true.Likebox (talk) 20:51, 19 January 2010 (UTC)[reply]

No. You attribute to Chomsky "the core claim that the syntax of any human language allows for infinitely many sentences", but that's not his claim. The actual claim (and the whole point of the article with Hauser and Fitch) is that the human faculty of language allows human languages to involve an infinite number of sentences. A very different claim, since we know that every language is restricted in the instances of recursion that it accepts. If Pirahã is finite, as Everett claims, it would simply be the limiting case of restrictions on recursion -- so many restrictions that it fails to reveal this property of the human faculty of language. It's another question whether Everett is right about Pirahã, of course, but even if he is, no core claims are affected. Clangiphor3 (talk) 21:09, 19 January 2010 (UTC)[reply]

I'm sorry-- but this is goalpost shifting. Chomsky also responded in a similar manner, saying that the basic faculty for language includes recursion, but the restrictions in Piraha just make it so that this faculty ends up unused. This is perhaps a possible point of view, but as Everett has said, such a point of view makes no predictions about universal language structures, even of the most primitive sort. So if it makes no predictions about language, how do you test the idea? Do you do psychology experiments?
So you have this hypothesis which is very strong and very predictive, and in its predictive form it says "Human languages have recursion, and this is a fundamental feature of all of them". This hypothesis is grandly satisfying, and I believe it is the intended interpretation of Hauser, Chomsky and Fitch.
quoting: "... and there is no non-arbitrary upper bound to sentence length. In this respect, language is directly analogous to the natural numbers." and later "At a minimum, then, FLN includes the capacity for recursion."
Now if this claim is about cognition, then it is a radically different claim than if it is about language. Everett has said that he takes the recursion claim to be about language, and then it is falsified by Piraha. If it is about cognition, then it is not predictive at all, and should not be considered part of science (or perhaps not yet).
I think that for this article, it is possible to say only that Piraha does not allow unlimited embedding, and that there is a maximal sentence length. It is also possible to say that Everett does not believe that it has any embedding at all, even in the simple sai example, but this is not universally accepted. But it does not affect his conclusion about the maximum sentence size in the language.
I should add that, given this data, my personal expectation is that full-blown grammatical recursion is a recent development, perhaps even postdating writing. If so, it might be possible to trace the development of recursion in ancient texts. This would be more predictive and exciting than any FLN nonpredictive stuff.Likebox (talk) 01:43, 20 January 2010 (UTC)[reply]

1. No goalposts have been shifted by anyone. You offer two quotes from the Hauser, Chomsky & Fitch paper in support of your view of what they claimed:

a. "At a minimum, then, FLN includes the capacity for recursion." But the "F" in "FLN" stands for "faculty", as in "faculty of language". The claim thus concerns a human faculty -- a capacity, with no claim whatsoever that all humans necessarily exercise this capacity or exercise it in the same way.

b. "... and there is no non-arbitrary upper bound to sentence length. In this respect, language is directly analogous to the natural numbers." This passage also comes from a discussion of "FLN" (look at its context), not from a discussion of any particular language.

So they are saying exactly what I characterized them as saying, and what Chomsky himself (as you note) has repeatedly said was their intent.

2. So what predictions do these particular claims make? Nothing very novel, in fact. Simply that languages exist whose grammars involve recursive rules, and that every child has the capacity to acquire such languages. This is a prediction confirmed more than a half-century ago by...well by arguing that there are such languages (lots and lots) -- and that children of whatever genetic background appear to acquire them!

Now maybe you think therefore that this is a pretty weak claim to attract all the attention that it has, and that surely Hauser, Chomsky & Fitch must have had something stronger in mind. Well take that up with the editors of Science who published the paper. Because that's the only claim they did make on this topic -- accompanied by the speculation (the main point of their paper) that given the right theory of grammar, the capacity for recursion is *all* that needed to be added to pre-existing primate cognitive systems to yield a faculty of language. And yes, that speculation is weakly supported, and you're free to join the chorus of people who are skeptical -- but that's another topic entirely.

The topic of relevance here is only whether Pirahã -- even under Everett's description -- is a "a falsification of Chomsky" or "conflicts with the most fundamental principles of Chomskian linguistics", as you suggest -- even if it fails to instantiate the human capacity for recursion in its grammar. The answer is no. So though it would be right to report that Everett claims otherwise (because indeed he does), it would be incorrect to report this as if it were just true, because it just isn't.

3. So let's turn to Pirahã. You suggest "that for this article, it is possible to say only that Piraha does not allow unlimited embedding, and that there is a maximal sentence length." Frankly, I don't think this has been shown anywhere, except as an unsupported assertion on Everett's part. But maybe it's true, and in fact his 1986 analysis also predicts this for clauses -- since they are noun phrases and there's definitely only one level of noun-phrase embedding elsewhere. So, yes, it can be accurately reported that Everett makes a claim to this effect, whose significance to other issues is disputed. So go ahead, if you want. But I recommend leaving it at that.

I'm going to stop here. Thank you for the conversation. Clangiphor3 (talk) 02:47, 20 January 2010 (UTC)[reply]

Although these sorts of things are sometimes annoying and time consuming, please have a little patience, at least until we agree on the text. I will try not to waste your time.
I essentially agree with all your comments. Moreover, I think Everett agrees with all of them too. Everett does make the distinction, which I think is useful, between "strong FLN" (I don't remember his term for it), which is a predictive statement about language structures, and "weak FLN" which predicts that humans have certain cognitive faculties which allow them to easily internalize certain recursive grammars. The weak FLN is hard to falsify, while the strong FLN is falsified by Piraha.
The "strong FLN" might not be directly attributable to Chomsky, considering how careful he is to avoid making specific predictions about language structure as opposed to cognitive capacity, but it might have other sources besides Chomsky. Perhaps you know who to cite. There is definitely a widespread point of view that all languages embed and recurse in essentially the same way.
For what it's worth, I actually agree that the weak FLN is there, because it's just true that context-free grammars are a thousand times easier to internalize than arbitrary grammars. But the weak FLN links this cognitive property directly to language; by calling it the faculty of language, it does tend to imply that understanding language is the essential reason why humans developed this capacity. So it would be strange to see that recursion in language postdates the evolution of this feature. It definitely calls into question the name. What if it were called the cognitive faculty for recursion? Would that be equivalent to FLN?
One thing that bothers me about this is that FLN does not seem to distinguish stack-recursion from recursion in general. Chomsky in the past has made this distinction (he essentially invented it), and I think that this is a strong and predictive statement: humans will pick up structured "C" code much more easily than if-goto code, and phrase-structured grammars for crazy artificial languages much more easily than crazy grammars. I don't know why CHF weren't more predictive than they were--- they could have said "recursion with a preference for context-free grammars" instead of just "recursion".
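To illustrate what the two styles mean, here is the same trivial computation, the sum of the integers below n, written both ways (a sketch in C):

    /* Structured version: the loop is a single nested construct. */
    int sum_structured(int n) {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += i;
        return total;
    }

    /* If-goto version: the same computation as a flat list of labelled steps. */
    int sum_goto(int n) {
        int total = 0, i = 0;
    top:
        if (i >= n) goto done;
        total += i;
        i++;
        goto top;
    done:
        return total;
    }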
If the text is toned down to saying "This feature contradicts a strong interpretation of Chomskyian linguistics, the interpretation that predicts that all languages should share the same recursive grammatical structure. Chomsky himself has recently made clear that he promotes a more nuanced view, considering the faculty for recursive grammar to be an innate cognitive feature of human beings, but one which does not necessarily manifest itself in the language structure."
Is this satisfactory? If there is no response, I will take that as a yes.Likebox (talk) 05:59, 20 January 2010 (UTC)[reply]

References

  1. ^ a b Cite error: The named reference nevins was invoked but never defined (see the help page).
  2. ^ a b Cite error: The named reference dev07a was invoked but never defined (see the help page).
  3. ^ Cite error: The named reference nevins2 was invoked but never defined (see the help page).

Recent Changes

What gives? I thought we agreed on embedding. The statement that "Languages all share the same recursive grammar structure" is no longer attributed to Chomsky, but it is attributed to a nebulous "widespread interpretation". Where's the beef?Likebox (talk) 15:38, 7 February 2010 (UTC)[reply]


It is not a "widespread interpretation", just Everett's. And it's not what Chomsky said. Current text makes it clear what Everett says Chomsky says, and what Chomsky says he says.

Also, the text you added adopts Everett's stance that his critics offer only ideology as the basis for their objections, but in fact their objection is empirical -- using published data to counter Everett's claims. Current revision makes that clear, while taking no stand on who is right in the end. That too was discussed.

I am not, however, going to engage in lengthy continued discussion of this.Clangiphor3 (talk) 15:50, 7 February 2010 (UTC)[reply]

This is Wikipedia, no talky, no write-y.
The interpretation is widespread, and to attribute it to Everett is incorrect. Hauser, Chomsky and Fitch nowhere make it clear that FLN is only a cognitive capability, and nowhere prior to Everett does Chomsky make it clear that language does not have to have recursion. The "Languages recurse" interpretation is mentioned in the New Yorker article, is discussed by both Chomsky and Everett, and appears in the literature in relation to native Australian languages which were also claimed to lack recursion in the 1970s.
What Chomsky said about this is a bunch of baloney. He made an extrapolation from the isomorphic recursive syntax of all modern old-world languages to the idea that all languages have had this feature for a long time. Piraha shows this is false, and in fact, strongly suggests that recursion has gradually evolved sometime in the last 3,000-40,000 years.Likebox (talk) 01:14, 8 February 2010 (UTC)[reply]

If you believe the interpretation is widespread, that needs to be documented. Find sources that do not trace back to Everett (as the New Yorker does) and cite them.

Your reference to "literature in relation to native Australian languages which were also claimed to lack recursion in the 1970s" is probably a reference to two articles by Ken Hale, which claim only that Warlpiri lacked embedded relative clauses. If you think there is other literature, cite it.

If you think that Chomsky "made an extrapolation from the isomorphic recursive syntax of all modern old-world languages to the idea that all languages have had this feature for a long time." -- cite your source.

Until then, please leave these views out of the Wikipedia article, since they are not sourced and not verifiable. Clangiphor3 (talk) 02:56, 8 February 2010 (UTC)[reply]

Well, that's a claim. I disagree that it is not verifiable--- the articles on Warlpiri will be a good place to start. Please do not presume that the literature is on your side on this--- find some sources and I will do the same.Likebox (talk) 04:20, 8 February 2010 (UTC)[reply]
To start: Universal grammar gives the "widespread interpretation"--- universal grammar is a property of all languages.Likebox (talk) 09:12, 8 February 2010 (UTC)[reply]
About your Third Opinion request:
Disclaimers: Although I am a Third Opinion Wikipedian, this is not a Third Opinion in response to the request made at WP:3O, but is merely some personal observations and/or information about your request and/or your dispute. I have made no previous edits on Pirahã language and have no known association with the editors involved in this discussion. My personal ethical standards for issuing third opinions can be viewed here.

Comments/Information: I'm afraid that your request has languished without anyone taking it at WP:3O because none of us generalists there are specialized enough to figure it out. If you really need help with it, can I suggest that you withdraw your Third Opinion request (just go back to the 3O page and delete it from the list of active disputes with a note in the edit summary and remove the {{3O}} template, above, too) and ask for help via a WP:RFC or a posting at Wikipedia_talk:WikiProject_Languages. If you would rather have help from 3O, nonetheless, please feel free to leave it listed (but don't ask for help at one of those other places, too, as that can be seen as forum–shopping), but I'm afraid that you may not get it or it may be a long time coming.

You would require a 3O wikipedian that has extensive knowledge in this field. I am simply clueless as to the current discussion, and although I have tried to figure it out, I believe that it is fairly futile to learn what I would need to in a not ridiculously long period of time. NativeForeigner Talk/Contribs 01:45, 11 February 2010 (UTC)[reply]
To help the 3O people out--- there are somewhat technical comments above of a mathematical nature about what "recursion" means, and how significant it is for language. But you can ignore all those comments.
The actual dispute is a very minor one, and might be resolvable without a 3O. It was a question of whether Noam Chomsky advocated a form of universal grammar that claimed that all languages have a similar grammar, or whether he only made the weaker claim that human beings are capable of learning the most sophisticated grammar.
Chomsky made it clear recently that he believes the latter. But this clarification came after the example of Piraha was brought to everyone's attention. The question is whether it is backpedaling or not.
One way to avoid the dispute is to stop focusing on Chomsky, and just link to Universal grammar directly. Chomsky has long been associated with universal grammar, and so it is not necessary to pinpoint his views. I tried this out, and so far, the other editor does not seem to object. Perhaps this dispute is resolved.Likebox (talk) 23:09, 11 February 2010 (UTC)[reply]

Note to other 3O Wikipedians: I have not yet "taken" this request, removed it from the active request list at the WP:3O page, or otherwise "reserved" it, so please go ahead and opine on it if you care to do so.TRANSPORTERMAN (TALK) 20:50, 10 February 2010 (UTC)[reply]

LikeBox please stop reverting all edits that attribute the alleged "counterexample to universal grammar" to Everett rather than stating it as fact, just because you personally agree with Everett. 217.41.229.219 (talk) 09:31, 14 February 2010 (UTC)[reply]

LikeBox, It appears you are repeating a pattern of argumentative reversions and reinsertions that you have practised on other pages: http://en.wikipedia.org/w/index.php?title=Wikipedia:Administrators'_noticeboard/Incidents&oldid=322151825#User:Likebox_and_tendentious_re-insertion_of_original_research. I see from the history of your talk page that you have also been suspended for this behaviour in the past. Please take that behaviour elsewhere. 09:48, 14 February 2010 (UTC) —Preceding unsigned comment added by 217.41.229.219 (talk)

It would appear that an anon has decided to comment on this issue.
  1. It is best that you log in. Make an account, and use it. This way, we can use your opinion to form a consensus on what the page should look like, and we can better know what your opinion is based on. Until you do this, it is difficult to know whether your edits reflect an understanding of the issues.
  2. I have never been suspended for "argumentative behavior": argumentative behavior is not a rules violation. I was blocked for 3RR violations. Learn the difference.
  3. Being blocked for 3RR is something that happens to anyone who argues here, and I wear it as a badge of honor.Likebox (talk) 10:45, 14 February 2010 (UTC)[reply]
I am going to remove this dispute from 3O because more than two editors are involved. I'll put the page on watch. I note that, like many disputed pages, there is much poor construction and little spot citation, and the dispute seems to follow from this. I'd advise Likebox that arguing does not necessarily lead to suspension - however, disruption, refusing to seek consensus, calling people idiots and demanding they do not edit without a user-name ARE violations that can lead to suspension, and have nothing at all to do with arguing the subject. Please resolve this by using precise citations, avoiding OR and reporting incivility. Redheylin (talk) 00:36, 15 February 2010 (UTC)[reply]
The issue here is not spot citations--- the citations to Everett, and to this encyclopedia's page on universal grammar are sufficient. The issue is a reasonable difference of opinion. The reason editors should sign in is so that there will be a clear record of editors/positions, so that a real consensus can develop. I am not against the current wording--- it's not terrible.Likebox (talk) 04:06, 15 February 2010 (UTC)[reply]
Citations of other wikipedia pages are not sufficient; it should not be done. If there's a reference on the other page, you quote it, otherwise it's not RS and should not be included. Likewise, editors' opinions, however "reasonable", are always OR. You have to support every contentious point with a reliable source, hence my saying that spot citations will fix the problem. IF Everett says a certain thing, it's fine to say "according to Everett" (avoiding possible pejoratives such as "Everett claims") If there's another notable view, it can also be cited. These steps, which are wiki policy, are designed to prevent edit wars. Please follow them. You wrote to me that you were not calling an editor, but the editor's source "idiotic": it does not matter, it reflects upon the editor, particularly when accompanying an uncivil reversion. Again, your personal opinions are not valid reason for removal. If you have a source that declares the given source "idiotic", or an RS enquiry upholds this view, that's fine. Redheylin (talk) 16:22, 15 February 2010 (UTC)[reply]
PS. If you believe an editor is also editing as IP, then report it. Otherwise nothing prevents such a contributor from using the talk page and s/he should be invited to do so: there is no reason to object to such contributions and they should not be removed, neither should the editor be told what to do and not to do. Redheylin (talk) 16:35, 15 February 2010 (UTC)[reply]

(deindent) It is not that I think this editor is editing as IP (and I wouldn't report it anyway--- it would be silly), rather, there have been anon comments on pages I have visited by someone who doesn't seem to care about content, but who is hung up on certain past edit wars.

This discussion is off topic--- you are new here and you should know some things: Sources don't resolve disputes, compromise resolves disputes. Compromise requires talking until you find language that is acceptable to all positions. In the course of this discussion, you can say whatever you want to about sources: "This was written on toilet paper in a mental asylum!" "This source is deranged", "This writer is totally out there!" If the author (who is the only one who should be offended) is here, that's not incivility on your part, that's COI on her part.Likebox (talk) 16:50, 15 February 2010 (UTC)[reply]

You did not write it on the talk page but in the page history, as your reason for reversion without consensus. This is what makes the action disruptive, and it is certainly not the "compromise that resolves disputes". The present dispute is the present topic, and that is why I am commenting. If you wish to introduce any matter of substance that is fine, but so far the question has been one of sources and editing policy. Would someone like to bring forward a statement that shows Chomsky relied on the universality of recursive structures? Redheylin (talk) 16:20, 16 February 2010 (UTC)[reply]
You are wrong about how consensus has always been achieved here. It is achieved by editors changing stuff back and forth iteratively, until they converge on wording they all find acceptable. Consensus is when people stop arguing. Sources for different positions can always be found, and sources are useless for resolving disputes. They usually just make things worse.
Chomsky has said the following things, which everyone should be familiar with: "languages have an infinite number of sentences", "speakers of any language can produce infinitely many utterances from a finite number of examples", etc. Whether he meant that this is true of all languages depends on exactly how you understand his words.
Chomsky's major work in mathematical linguistics involves finding structures called "context free grammars" that model some languages, with additional transformations, and later ideas about how sentences are built up step by step. Chomsky makes the additional statement that these mathematical structures involving language are hard wired in the human brain, and therefore (some would conclude) universal to all languages. This is the default standard interpretation of Chomsky. However, Piraha throws a wrench in these interpretations, so Chomsky is saying "this is a cognitive capacity, not a property of language".
For Chomsky, that's not a problem. Other linguists, however, want to understand languages, not cognition, so for them it is a problem. The solution to this debate is not to talk about Chomsky at all, but to say "universal grammar" instead. This solution was found very quickly by myself and the other editor, after about two or three iterations back and forth.
None of the back and forth on this page was disruptive--- this was convergent editing of the usual kind. Your accusations of "disruption" over a dispute as minor as this one would have been out of place on the encyclopedia a year ago, or even six months ago. Wikipedia was built and improved under near-complete anarchy. Now that people are imposing real rules and regulations, perhaps the growth of the encyclopedia will stop.Likebox (talk) 00:10, 17 February 2010 (UTC)[reply]
Comment: as I've touched the article, I no longer qualify as a virgin Third Opinion Wikipedian. However, I believe I'm close, as I've only "done it" a couple of times. :-) Everett covered some of the recursion debate in his Don't Sleep, there are Snakes (2008), pp. 225-243. I have to say I was uncomfortable. He was clearly presenting *his* side of what apparently is one or more points of contention. Thus it was not something I would want to use as source material for WP without a great deal of thought.
I believe we can keep this article NPOV by quoting Everett and others when they write about empirical data. This is an article about the Pirahã language and not about Everett, Chomsky, or any debates. Thus we do not need to include in the article what one party thinks of the other's thinking, and we should also avoid those instances where one party quotes the other in an attempt to frame their arguments. For example, Everett says that "Chomsky claimed that the fundamental tool that underlies all of this creativity of human language is recursion." ("Snakes" p. 228). There's too much potential for POV or bias if we include that in the article. If Chomsky writes about Pirahã then we will quote and cite that for this article, but let's not include his comments about Everett's argument in this article either.
I believe that'll allow us to focus this article on the Pirahã language without POV pushing or accidentally slipping into synthesis that takes a POV. --Marc Kupper|talk 08:08, 17 February 2010 (UTC)[reply]
Obviously I agree about the cites, and also that, if only Everett can be sourced claiming a violation of Chomsky's theorem, then this should be attributed to him inline. However, it seems to me that this claim would be a notable fact about the Piraha language that people might look up here, though I agree that generalities about linguistics belong elsewhere. If the theorem is truly notable it will resolve in the course of time. Perhaps the matter can be explored on another page and briefly mentioned and linked here.
Likebox, you appear to be laying down a set of rules governing the ethos and editing of this particular page. Your rules have produced a dispute. Policy rules govern disputes: they are the same on every page, and everyone is free to edit this article in line with those policies and entitled to expect the same in return. Redheylin (talk) 05:28, 18 February 2010 (UTC)[reply]

Phonemes[edit]

While it's known that Pirahã has very few phonemes, there is some inconsistency in how many there are relative to other languages. This article said:

  • "One of the smallest phoneme inventories of any known language (perhaps surpassed only by Rotokas)"
  • "claimed to have as few as ten phonemes (one fewer than Rotokas)"

I've cleaned that up to remove the absolute comparisons and inconsistency. I'm currently reading Everett's Don't Sleep, there are Snakes and adding citations to the Pirahã article to better source that one. In "Snakes" p. 179, Everett states that Pirahã, Rotokas, and Hawaiian each have eleven phonemes. Unfortunately, the Hawaiian phonology article documents 13 or 33 phonemes, meaning that how you count, and perhaps who does the counting, is an issue.

Everett helps to clear up some of the contention, as "Snakes" p. 178 has "Pirahã has one of the smallest sets of speech sounds or phonemes in the world...", meaning we have a reliable source that puts Pirahã among the smallest rather than making absolute larger-or-smaller claims relative to Rotokas, Hawaiian, or other languages.

I am concerned that 20% of this article is an unsourced dissection of various ways of measuring the phoneme inventory. While it seems valid, the dissection itself feels like original research and, more specifically, a synthesis. --Marc Kupper|talk 06:24, 17 February 2010 (UTC)[reply]

If it seems valid, don't challenge it. Labels like "Original Research", etc., should be applied only to honestly disputed material, not to stuff that everyone knows is ok.Likebox (talk) 17:46, 17 February 2010 (UTC)[reply]

Plural[edit]

When there is no singular/plural distinction (even in pronouns), why does the language have different pronouns for it? gi¹xai³ "you" (sing.), gi¹xa³i¹ti³so³ "you" (pl.) —Preceding unsigned comment added by 134.96.2.50 (talk) 14:46, 18 March 2010 (UTC)[reply]

Maybe because the language does have a singular/plural distinction?Linguïston (talk) 03:57, 9 September 2010 (UTC)[reply]

AFAIK this is not a grammatical requirement, and in any case has been analyzed as more like et al. than like -s. kwami (talk) 08:18, 9 September 2010 (UTC)[reply]

A Possible Experiment[edit]

Here's an interesting experiment we can try out: someone who knows both a recursive language and Pirahã should try raising a child bilingually, with both languages spoken to the child as he/she develops; it may happen that this child will work out how to bridge the gap of understanding between Pirahã speakers and recursive-language speakers. This may reveal something about humans that has yet to be seen. It may require individuals who know a recursive language taking up residence in a place where Pirahã is spoken, so the child can get interaction from both Pirahã speakers and recursive-language speakers as he/she develops. —Preceding unsigned comment added by 67.170.139.11 (talk) 08:20, 16 November 2010 (UTC)[reply]

Everett "claims"... is not NPoV[edit]

I removed several instances of "Everett claims ___" and replaced "claims" with "states" or "says" or simply re-cast the sentence. As John Wyndham once wrote, "Do you claim to have had breakfast this morning or did you have breakfast?" I felt that his side of the controversy was minimized and not adequately explained. As someone who has no entrenched point of view in this discussion, I hope that what I have done makes the article both clearer and more fair. —Monado (talk) 03:06, 6 January 2011 (UTC)[reply]

According to WALS, Pirahã is Zero-Marking[edit]

At the beginning of the "Verbs" section, it is stated that Pirahã "is agglutinative, using a large number of grammatical affixes to indicate meaning." Not only is there no source for this statement, but it directly contradicts WALS, which classifies Pirahã as zero-marking. I am going to change said statement and add the source I linked to.

自教育 (talk) 18:11, 8 February 2012 (UTC)[reply]

Reverted. WALS indicates a high degree of synthesis, and they certainly don't say that it's the only zero-marked language in the Americas. In a more complete description, we would include that info as well, but the description needs to be factual. — kwami (talk) 02:29, 9 February 2012 (UTC)[reply]
WALS also says it is exclusively suffixing, which would indicate agglutination.·ʍaunus·snunɐw· 03:04, 9 February 2012 (UTC)[reply]
All WALS means by "zero marking" is that the direct object NP is not case-marked (which would be "dependent marking") nor is the verb marked to agree with it (which would be "head marking"). In other words, it doesn't mark the direct object at all. This is true of Piraha, since none of its affixes are agreement markers. Also, be very wary of WALS as a source, as it is replete with errors and contradictions.Cromulant (talk) 22:54, 19 September 2012 (UTC)[reply]

Tones[edit]

Why does the article use three tones, as proposed by Sheldon, instead of the two proposed by Everett? On the surface, I'd guess that the person who spent decades with these people would know more about their tone system than some Sheldon, last name unknown, who made that uncited claim in 1988, and that's the extent of the article's coverage of him. Spacenut42 (talk) 22:35, 11 March 2014 (UTC)[reply]

Sheldon's paper is in the bibliography section of the article, and it's available online if you can read Portuguese (which I can't, so I have no comment on the number of tones). —Mr. Granger (talk · contribs) 23:25, 11 March 2014 (UTC)[reply]

External links modified[edit]

Hello fellow Wikipedians,

I have just added archive links to one external link on Pirahã language. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—cyberbot II (Talk to my owner: Online) 03:03, 28 February 2016 (UTC)[reply]

The "R" Sound[edit]

I came here to find out how the R is pronounced in "Pirahã," but did not find anything about that letter although it is contained in the tribe's very name. M^A^L (talk) 18:25, 26 March 2016 (UTC)[reply]

"Pirahã" is the Portuguese name of both the tribe and language, with the local names being "híaitíihí" (for the tribe) and "xapaitíiso" (for the language). For the pronunciation of the word "Pirahã" itself, it is written in Pirahã. OosakaNoOusama (talk) 01:13, 25 September 2017 (UTC)[reply]

Letter for [t͡ʙ̥][edit]

What is the letter that represents the sound [t͡ʙ̥]? And what are some words that use this sound? OosakaNoOusama (talk) 01:09, 25 September 2017 (UTC)[reply]

The "notion of nothing/nothingness" isn't the same as "nothing/nothingness", neither does constitute "nothing/nothingness" actual. The "notion of nothing/nothingness" can be imprinted physically on paper or circuitry though.[edit]

The "idea of god" is a matterial circuitry inside the brains of some people, but the "notion of god" is not god.

The antitheist claims that we shouldn't make such erroneous connections in our brains, and the atheist simply doesn't make these erroneous axonal connections.

Being an atheist isn't a neutral state.

The atheist KNOWS that such erroneous axonal connections exist inside some people's brains.

That's why babies are religiously indifferent but not atheists.

The "notion of god" comes before atheism.

Read about the Amazonian tribe Pirahã and Daniel Everett.

The Pirahã were irreligious and religiously indifferent.

They didn't have a notion of god.

They didn't deny something they didn't know; originally they weren't atheists.

If one debated them for hours about god, he might have made them atheists, but some remained religiously indifferent even after that. They simply didn't care about the idea of an omnipotent, invisible, personal demiurge, and didn't waste brain circuitry denying something they didn't care about. — Preceding unsigned comment added by 2A02:587:4116:6200:562:5FE5:E593:337F (talk) 03:23, 26 September 2018 (UTC)[reply]

Bilabially trilled affricate[edit]

While this article in the University Times at the University of Pittsburgh—which is dated 1994, not 2004, by the way—does assert that [t͡ʙ̥] is found in Pirahã, not only has Everett disputed it in the Reddit AMA, but if this were the case, the lack of mention of Pirahã in Ladefoged & Everett (1996) would seem too glaring an omission. I find it likely that Everett was only talking about the simple bilabial trill—which is still an uncommon sound overall—in Pirahã, and that the writer of the article mistook it for the affricate. Nardog (talk) 09:32, 26 October 2021 (UTC)[reply]

Ambiguous phrasing confuses reader[edit]

The current article text contains:

"Pronouns are prefixed to the verb, in the order SUBJECT-INDOBJECT-OBJECT where INDOBJECT includes a preposition "to", "for", etc. They may all be omitted, e.g., hi³-ti³-gi¹xai³-bi²i³b-i³ha³i¹ "he will send you to me"."

Reading the second sentence, I expected "e.g." to be followed by an example where all three pronoun prefix positions were empty. But unless I am very mistaken in comparing the segments of the Pirahã expression with the expressions in the table, it is actually an example of an expression where all three prefix positions SUBJECT-INDOBJECT-OBJECT are occupied. If the latter holds true, I propose the text be edited so that it is no longer confusing. Redav (talk) 03:17, 18 December 2021 (UTC)[reply]

Parentheses around [k] and [s][edit]

Both of the inventories at https://phoible.org/languages/pira1253 have k and s. And the stated reasons, "Everett posits that [k] is an allophone of the sequence /hi/" and "Women sometimes substitute /h/ for /s/", do not seem like reasons for the article to treat them as nothing but allophones. So I will remove the parentheses ONCE; I will not edit war. 97.113.186.235 (talk) 16:40, 24 March 2023 (UTC)[reply]

Missing translation[edit]

What is the translation for the following sentence?

ti xog-i-baí gíxai kahaí kai-sai
I want-this-very.much you arrow make-ing

Does it mean "I very much want you to make arrows/an arrow"? 120.22.146.71 (talk) 01:16, 21 October 2023 (UTC)[reply]