# User talk:Palaeovia

Welcome!

Hello, Palaeovia, and welcome to Wikipedia! Thank you for your contributions. I hope you like the place and decide to stay. Here are some pages that you might find helpful:

I hope you enjoy editing here and being a Wikipedian! Please sign your name on talk pages using four tildes (~~~~); this will automatically produce your name and the date. If you need help, check out Wikipedia:Questions, ask me on my talk page, or ask your question and then place {{helpme}} after the question on your talk page. Again, welcome!  Baristarim 19:33, 6 April 2007 (UTC)

## Wikipedia:Contents

The Encyclopedia Britannica is not one of "Wikipedia's other broad categorical indices", so I have removed your edit to Wikipedia:Contents. Thanks, Gwernol 15:09, 10 April 2007 (UTC)

## Proof sketch of Gödel's theorem

I noticed you have been cleaning up that article, and I appreciate the help. If you haven't seen it, you may be interested in the editor resources page at the math wikiproject. The math project talk page is watched by several active, knowledgeable editors, and it can be a useful resource. If you have any questions about WP or run into problems, feel free to contact me. CMummert · talk 12:07, 26 April 2007 (UTC)

## shamanism

what was the point of adding question marks? Charred Feathers 05:48, 24 May 2007 (UTC)

## Nonfree images

I noticed that you have a nonfree image of calligraphy on User:Palaeovia/Academia and User:Palaeovia/Memorandum. The nonfree content policy disallows nonfree images except in the main namespace (WP:NFCC#9). Would you mind commenting out the images, or using a link instead of displaying the image? — Carl (CBM · talk) 19:14, 17 September 2007 (UTC)

The lists in Development studies have been expanding. I don't know enough about this discipline to judge whether the content is reasonable, but the article seems to be becoming a directory of schools that offer this program. Are there some guidelines that Wiki editors can use for articles about academic disciplines? --Busy Stubber 17:09, 7 November 2007 (UTC)

### Mathematical Subdisciplines

You reverted my unexplained changes in the mathematical sciences (which I understand from a Wikipedia procedural standpoint).

However, if you examine most of the changes (e.g. using the links to Wikipedia or the MSC2000 index, except for scientific computing), you will see that I was right.

1. Game theory is MSC2000 91, like mathematical economics, and graph theory is a subcategory of combinatorics (MSC2000 08, I think). UPDATE: Maybe I erroneously failed to move game theory and mathematical economics next to optimization theory? Kiefer.Wolfowitz (talk) 22:54, 27 May 2009 (UTC) My thanks for your correcting my unintended deletion! Kiefer.Wolfowitz (talk) 22:54, 27 May 2009 (UTC)
2. "Quality control" sounds like Shewhart (or my grandmother in the 1940s at Republic Steel), while "Quality" is emphasized by Deming and Taguchi.
3. Scientific computing is a substantive field encompassing (and also a better marketing strategy for) numerical analysis.

However, these problems are not of great personal concern to me.

Listing "statistics" as a subdiscipline of "applied mathematics" was the real problem, and I am glad that there has not been a Dred-Scott (!) decision to refetter statistics!

Again, I thank you for your moderation and leadership. Kiefer.Wolfowitz (talk) 22:39, 27 May 2009 (UTC)

Thank you for the thoughtful contribution to the issue of whether Statistics is independent of Mathematics. I will wait to hear from others for a few days before airing my views.
Mathematics is a "big tent"; it is inclusive. As is clear from MSC2000, it accommodates Applied Mathematics of many types, from Chaos Theory to Game Theory, including Statistics. The argument for the distinct ontological status of Logic from Mathematics (Logic is Metamathematics, or the foundation of Mathematics) is far stronger than that for the status of Statistics. Statistics appears to me obviously "Applied Mathematics". I am not convinced of the ontological uniqueness of Statistics. I will elaborate further in the article's Talk page.
Quality does not link to any Wikipedia article on a statistical topic. Is it even a subfield of Statistics? Should Quality or Quality control be listed? We are talking about Academic Disciplines.
There is no single correct way to classify academic disciplines, or Mathematical sub-disciplines. Listing Graph Theory as a subfield of Combinatorics, and Numerical Analysis as a subfield of Scientific Computing, is agreeable to me. Moving Game Theory and adding Mathematical Economics are certainly uncontroversial.
List of academic disciplines cannot serve as a comprehensive listing of all subfields of Mathematics, or of Statistics. (Lists of mathematics topics, Lists of statistics topics are the lists for that purpose.) It will have served its purpose well to have shown the broad structure, features and range of human knowledge. --Palaeoviatalk 01:10, 28 May 2009 (UTC)

## Recast of Eugen Rosenstock-Huessy biography

I'm looking for advice on how to proceed on the Eugen Rosenstock-Huessy article. You've shown some interest in the past. The article has become less of an encyclopedia-style biography and more of a series of book reports. I encourage you to review my proposal on how to proceed and leave your suggestions. HopsonRoad 12:07, 11 November 2007 (UTC)

## Isaac Newton OS death date

It is not vandalism. Please see Talk:Isaac_Newton#Old_Style_date_of_death. Since in Old Style a year number increased on March 25, March 20 was still 1726. --Mgar (talk) 20:01, 14 May 2008 (UTC)
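The Old Style convention mentioned above (the year number increments on 25 March, not 1 January) can be sketched as a tiny helper. This is my own illustration of the convention, and `old_style_year` is a hypothetical name, not an existing library function:

```python
def old_style_year(historical_year: int, month: int, day: int) -> int:
    """Old Style (Annunciation-style) year label for a date.

    `historical_year` is the modern (January-1-based) year number.
    Under the Old Style convention, the year number increments on
    25 March, so dates before that carry the previous year's label.
    """
    if (month, day) < (3, 25):
        return historical_year - 1
    return historical_year

# Newton's Old Style death date: 20 March of historical year 1727
# is labelled 1726, as the comment above notes.
print(old_style_year(1727, 3, 20))
```

This only models the year label; it does not convert between the Julian and Gregorian calendars, which is a separate 10-to-11-day shift.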

## AfD nomination of List of sciences ending in -logy

List of sciences ending in -logy was nominated for deletion by Pharmboy. I just wanted to tell you since you did some work on this article. See Wikipedia:Articles for deletion/List of sciences ending in -logy. Cheers --ἀνυπόδητος (talk) 19:07, 30 October 2008 (UTC)

Edmond H. Fischer was born and grew up (until age 7) in Shanghai, so it is understandable that the editor categorized him among the Shanghai Nobel laureates. (5467buddy (talk) 04:17, 14 May 2010 (UTC))

• Consider the following example. I would expect a "French Nobel Laureate" to be of French nationality or descent, or permanently resident in France. It seems to me unjustified for France to claim as her own a Russian Nobel laureate who happened to have been born in France and spent some early years there.--Palaeoviatalk 04:40, 14 May 2010 (UTC)
• Personally I think in that case he or she can be counted twice - both French and Russian; that's the complexity of human society, or of human beings, let's say. By the way, since Shanghai is only a city instead of a nation, and if Mr. Fischer was indeed born there and was a legal citizen of the city during those years, I guess it would be OK. Thanks a lot for your excellent example anyway! (5467buddy (talk) 04:51, 14 May 2010 (UTC))
WikihHolic, who I presume (without evidence) is Chinese, claimed Pearl S. Buck (besides Edmond H. Fischer) as a Shanghai Nobel laureate. I am sorry to have been a little annoyed by this claim. Is this an act of desperate boosterism? Shanghai should be confident enough of its past glory (and shame) and future promise without such absurd promotion. :)--Palaeoviatalk 08:07, 14 May 2010 (UTC)
There are many mainland Chinese Wikipedians (presumably living outside China) engaging in edit wars with Taiwanese Wikipedians. Voices from mainland China can be heard loud and clear here.
Pearl Buck had never lived in Shanghai, to the best of my knowledge. ;)--Palaeoviatalk 22:36, 14 May 2010 (UTC)

According to this detailed chronology, Pearl Buck did not attend any school in Shanghai as a girl. She attended Chongshi Girls' School (崇实女子中学) in Zhenjiang.--Palaeoviatalk 22:27, 15 May 2010 (UTC)

This is Buck's childhood association with Shanghai: "During the Boxer Uprising, Caroline (the mother) and the children evacuated to Shanghai, where they spent several anxious months waiting for word of Absalom's (the father's) fate. Later that year, the family returned to the US for another home leave." ([1])--Palaeoviatalk 22:47, 15 May 2010 (UTC)
"In 1909, Buck enrolled in Miss Jewell's School in Shanghai. ... In 1910 Buck returned to America and enrolled in Randolph Macon Woman's College in Lynchburg, Virginia." [2] I was ignorant of her spell of finishing school in Shanghai.--Palaeoviatalk 23:47, 15 May 2010 (UTC)

## Concert Party (entertainment)

I'm flattered that you should insert two sentences from the "Pierrot" page into this one, but, technically, this is plagiarism, an error that I wouldn't want either of us to seem guilty of. Would you please revise the sentences so that they are not carbon-copies of my own? I'll restore the link after you've done so. Many thanks. Beebuk 00:51, 30 May 2010 (UTC) —Preceding unsigned comment added by Beebuk (talkcontribs)

I apologize if copying text from one Wikipedia article to another constitutes plagiarism. Who holds the copyright to any Wikipedia text? I assumed that it is Wikipedia. In my copying the text, who violates Wikipedia's copyright? Wikipedia? Me? Do you have the appropriate copyright guideline? (I looked at Wikipedia:Basic copyright issues, but did not find anything relevant.)--Palaeoviatalk 01:46, 30 May 2010 (UTC)
In Wikipedia:Text of the GNU Free Documentation License (which applies to Wikipedia articles), I find:
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.--Palaeoviatalk 02:01, 30 May 2010 (UTC)

You've convinced me: I'm perfectly happy to restore the link. Thanks. Beebuk 06:20, 30 May 2010 (UTC) —Preceding unsigned comment added by Beebuk (talkcontribs)

Thanks. As further clarification, I note the following statements from the "Important note" (pink box) in Wikipedia:Copyrights:
The Wikimedia Foundation does not own copyright on Wikipedia article texts and illustrations.
Permission to reproduce and modify text on Wikipedia has already been granted to anyone anywhere by the authors of individual articles as long as such reproduction and modification complies with licensing terms.--Palaeoviatalk 07:37, 30 May 2010 (UTC)

One more note. (Please don't think I'm obsessed with this issue; I happened to stumble across the following material while researching how to start a new page.) There's a useful note (i.e., the last paragraph) in the section "Copying within Wikipedia" of the Wikipedia:Plagiarism page that I think you should look at. Beebuk 09:29, 5 June 2010 (UTC) —Preceding unsigned comment added by Beebuk (talkcontribs)

The copying is now duly noted in the Edit Summary. Thanks,--Palaeoviatalk 10:14, 5 June 2010 (UTC)

## For my own reference (from Talk:Regression toward the mean)

### Plain, unambiguous text is not so plain, after all (Fallacy for gifted eighth graders: Part two)

Debunking the fallacy of "regression towards everything" (written by AaCBrown) is now spread over several sections and several discussion threads. This tactic creates the false impression that I am unable to respond to AaCBrown's postings. To counter that problem, I will address the issue centrally in this section.

**The fallacy**

The idea is simple. Take any set of data, even one that includes completely different measurements like the height of Mount Rainier in feet, the time it takes you to run a mile and the weight in kilograms of the average car. Pick any point within the range of the data. The observations greater than that point are more likely to have been measured with positive error than negative error; the observations below that point are more likely to have been measured with negative error. That requires no assumptions about the distribution of true values or errors, just that there are errors. Therefore, if you take a new set of measurements of the same quantities, the observations that were above the point on the first set are more likely to be smaller on the second set than the first set; and the observations that were below the point on the first set are more likely to be larger on the second set than the first set.

**Debunking the fallacy**

Consider data-point 15 in the data range [2, 30]. 15 is above 10, so positive error is expected of the data-point 15. 15 is below 20, so negative error is expected of the data-point 15. A contradiction.
This is an issue that many people have when first encountering this subject. It is addressed rigorously in the citations. The verbal (non-rigorous) explanation is that there are factors arguing for the error to be in the direction of different points. Which factors dominate depends on the specifics. Another issue is that you interpret everything as a complete-information problem. Many applications of regression refer to incomplete information. If all you know is that a point is above 10, that supports some inference. If you know the point is 15, those inferences need not be valid. The point is the same, but your information about it is different. Precisely how you formalize that depends on whether you take a Bayesian or frequentist approach. AaCBrown (talk) 14:54, 30 July 2010 (UTC)

AaCBrown tried to finesse his argument by stating that the assertion is true at the point before (but not after) any measurement is made. However, his text started with "Take any set of data". This means that the data are given and known. We don't start with no data. Therefore his finessing move is not allowed by his text. The assertion is a fallacy, plain and simple.

I think your confusion here is between statements about samples and statements about distributions. Perhaps my writing is not clear. However, whenever I offer a verbal explanation intended to aid understanding, you seem to take it as an attempt at rigorous proof. And when I offer citations to rigorous proofs, you object that they don't help understanding.AaCBrown (talk) 14:54, 30 July 2010 (UTC)
This is what the text says: "Take any set of data, even one that includes completely different measurements like the height of Mount Rainier in feet, the time it takes you to run a mile and the weight in kilograms of the average car. Pick any point within the range of the data (meaning that the sample range is known). The observations greater than that point are more likely to have been measured with positive error than negative error; the observations below that point are more likely to have been measured with negative error...." Therefore either the text is so poorly phrased and so misleading that it is useless, or you are being evasive. --Palaeoviatalk 19:57, 30 July 2010 (UTC)
I sympathize with your confusion here; this is a tricky point that baffles many people. For practical work, we have data and try to draw conclusions. But probability statements have no meaning once the data are known (to a Bayesian) or determined (to a frequentist). So we have to state rigorous statements in terms of either prior and posterior belief (Bayesian) or hypothetical infinite future repetitions (frequentist). Once the observations are known/determined, the errors are what they are; no probability statement about them is meaningful. What we can make probability statements about is what might have happened.
Specifying that a point is within the sample range does not imply the range is known. You know the median is in the sample range before the sample is drawn. Again, I understand this stuff can be confusing.AaCBrown (talk) 23:03, 30 July 2010 (UTC)

This is AaCBrown's unambiguous definition of "Regression toward everything":

The idea is simple. Take any set of data, even one that includes completely different measurements like the height of Mount Rainier in feet, the time it takes you to run a mile and the weight in kilograms of the average car. Pick any point within the range of the data. The observations greater than that point are more likely to have been measured with positive error than negative error; the observations below that point are more likely to have been measured with negative error. That requires no assumptions about the distribution of true values or errors, just that there are errors. Therefore, if you take a new set of measurements of the same quantities, the observations that were above the point on the first set are more likely to be smaller on the second set than the first set; and the observations that were below the point on the first set are more likely to be larger on the second set than the first set.

Phrases such as "Take any set of data" and "Pick any point within the range of the data" show that the text plainly specifies that the context is that "the first set of observations is given and known". AaCBrown now requires that the context be changed to "before any observation is made". This completely changes the assertion. (Any well-trained mathematical statistician or mathematician would know how vastly different the revised assertion would look. It would be completely different from the quoted text.) So the unambiguous assertion quoted above, as written by AaCBrown, is a fallacy.

Could Charles Stein, an eminent statistician, have been the author of such a patent fallacy (as asserted by AaCBrown)? Who is the author of this fallacy?--Palaeoviatalk 11:28, 31 July 2010 (UTC)

I need a proof of:

For any c in [a,b]:
• Pr( [Sum (over all i such that y_i > c) e_i] > 0) > 0.5
• Pr( [Sum (over all i such that y_i < c) e_i] < 0) > 0.5

This is the obvious generalization of his formulation. I assumed wrongly that he knows how to generalize his formulation. What AaCBrown provided is rubbish, purporting to prove that "for y_j > c, Pr(e_j > 0) > 0.5", something entirely different. His "proof" is:

Consider any point c in the range [a,b]. Consider any j. Unconditionally, e_j is equally likely to be positive or negative. If x_j > c, if e_j > 0 then Pr(y_j > c) = 1. If e_j < 0 then Pr(y_j > c) < 0.5. If x_j < c, if e_j > 0 then Pr(y_j > c) > 0 and if e_j < 0 then Pr(y_j > c) = 0. So either way, for y_j > c, Pr(e_j > 0) < Pr(e_j < 0), so Pr(e_j > 0) > 0.5

This "proof" does not even prove what it claims to prove. I have gone through this trash, and it is totally worthless. (It is an excellent exercise for gifted ninth graders to debunk.) The most egregious error is this "so", equating Pr(e_j > 0) to Pr(y_j > c) [this gross error can be spotted by gifted ninth graders easily]:

So either way, for y_j > c, Pr(e_j > 0) < Pr(e_j < 0), so Pr(e_j > 0) > 0.5.--Palaeoviatalk 01:43, 2 August 2010 (UTC)

There are several other elementary mistakes. I recommend this, seriously, as an exercise for gifted eleventh graders to debunk point by point. (See my "Guide to debunkers" below.)--Palaeoviatalk 02:16, 2 August 2010 (UTC)

I am convinced that I have been absolutely right not to trust the arguments on RTM of someone whose mathematical sophistication, mathematical maturity, and muddleheadedness are reflected in such a proof as this.--Palaeoviatalk 02:23, 2 August 2010 (UTC)

The exercise is to debunk, in detail, the following "proof".

Claim Let X = {x_1, x_2, . . ., x_n} be any set of unknown points. Let E = {e_1, e_2, . . .,e_n} be unknown i.i.d draws from a distribution with median zero and support over the entire real line. We observe only Y = {x_1 + e_1, x_2 + e_2, . . ., x_n + e_n}. The minimum value of y is a, the maximum value is b. Let c be any value in the range [a,b].

For all j such that y_j > c, Pr(e_j > 0) > 0.5.

(Though greater clarity is possible, I have preserved the original phrasing of the Claim.)

Invalid Proof (for debunking): It is in the citations, and it is as trivially obvious as the first proof. In fact, it's the same argument and I can do better, I can prove it for every point, not just the sum. However, again, I'm just reporting from the sources, none of this is my original work.

Consider any point c in the range [a,b]. Consider any j. Unconditionally, e_j is equally likely to be positive or negative. If x_j > c, if e_j > 0 then Pr(y_j > c) = 1. If e_j < 0 then Pr(y_j > c) < 0.5. If x_j < c, if e_j > 0 then Pr(y_j > c) > 0 and if e_j < 0 then Pr(y_j > c) = 0. So either way, for y_j > c, Pr(e_j > 0) < Pr(e_j < 0), so Pr(e_j > 0) > 0.5

To start off gifted eleventh graders taking up this challenge, let me analyze the beginning of the "proof":

First, note the bold phrases ("trivially obvious", "I can do better"). They illustrate proof by intimidation. Always refuse to submit to such a proof tactic. Now we examine:

If x_j > c, if e_j > 0 then Pr(y_j > c) = 1.

Remember that the only probability space is that of error E. So "If x_j > c, if e_j > 0" means that x_j, error e_j, y_j are all known, and no more uncertainty remains. "y_j > c" is true. You should say "y_j > c" (a certain fact). It is wrong to say "Pr(y_j > c) = 1". Pr should always refer to the probability space in question. Now the next sentence,

If e_j < 0 then Pr(y_j > c) < 0.5.

Now this is confusing. If e_j is known, then y_j is also known, and (y_j > c) should be either true or false. "Pr(y_j > c) < 0.5" makes no sense. Is this Pr(y_j > c) the probability before e_j is known? If so, then "Pr(y_j > c) = 1" in the earlier sentence must also refer to the probability before e_j is known. But how can the earlier sentence say "Pr(y_j > c) = 1" (i.e. before e_j is known, it is certain, with probability 1, that y_j > c)? It is plainly false. So we are in a major notational and conceptual muddle here. Very sloppy thinking is exhibited here. Try to avoid such laziness and sloppiness of thought. Such sloppy thinking can lead to "discoveries" such as "0 = 1".

I'll leave the rest to you. You can have great fun debunking this "proof".

It is a good exercise in clear and rigorous mathematical reasoning.--Palaeoviatalk 05:08, 2 August 2010 (UTC)

### Correct Claim

We have above this Claim:

Claim Let X = {x_1, x_2, . . ., x_n} be any set of unknown points. Let E = {e_1, e_2, . . .,e_n} be unknown i.i.d draws from a distribution with median zero and support over the entire real line. We observe only Y = {x_1 + e_1, x_2 + e_2, . . ., x_n + e_n}. The minimum value of y is a, the maximum value is b. Let c be any value in the range [a,b].

For all j such that y_j > c, Pr(e_j > 0) > 0.5.

The "proof" was a mess. The question remains: Is the claim true? Is there a valid proof? The answer is "No". The Claim is false. No valid proof exists for a false claim.

It is straightforward. The intention is to assert something about Pr(e_j > 0) for all j such that y_j > c.

Values in E are unknown. Consider any j (a particular j) such that y_j > c (we don't know which numbers qualify as such j yet). What is Pr(e_j > 0)?

Simple. Because the median of the error (E) distribution is 0, Pr(e_j > 0) = 0.5. (This is in fact true of any j in [1, n].)

The correct (trivial) claim is therefore:

For all j such that y_j > c, Pr(e_j > 0) = 0.5. --Palaeoviatalk 16:00, 2 August 2010 (UTC)
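A small Monte Carlo sketch (my own illustration, not drawn from either editor's postings) separates the two readings that the dispute turns on: unconditionally, Pr(e_j > 0) = 0.5, exactly as the trivial claim above states; conditionally on the realized observation satisfying y_j > c, the fraction of positive errors can exceed 0.5. The fixed true value x = 1.0, the cut point c = 0.5, and the normal error distribution are arbitrary choices for the illustration:

```python
import random

random.seed(0)
x, c, n = 1.0, 0.5, 200_000  # fixed true value, cut point, trials

pos = 0            # draws with e > 0 (unconditional count)
sel = sel_pos = 0  # draws with y > c, and those among them with e > 0
for _ in range(n):
    e = random.gauss(0.0, 1.0)  # median-zero measurement error
    y = x + e                   # the observed value
    if e > 0:
        pos += 1
    if y > c:
        sel += 1
        if e > 0:
            sel_pos += 1

p_uncond = pos / n      # estimates Pr(e > 0); close to 0.5
p_cond = sel_pos / sel  # estimates Pr(e > 0 | y > c); well above 0.5
print(round(p_uncond, 2), round(p_cond, 2))
```

Which of the two readings the disputed text actually licenses is precisely the point argued above; the simulation only shows that the two readings yield different numbers, so the distinction is not pedantic.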

### Palaeovia's concluding remarks (from Talk:Regression toward the mean)

Anyone is welcome to propose improvements to the article. However, when someone with a history of writing utter nonsense proposes to re-introduce into this article his pet theory, for which, after lengthy discussions, I was shown neither a proof nor a credible source, I am apt to be fervent in my pursuit of truth (mathematical and statistical, theoretical and empirical). My impression has been strengthened that his understanding of the issue is superficial, his mathematical training is inadequate, and his interpretation is either "original research" or a gross distortion of more carefully phrased, qualified statements from scholars.
Mathematicians generally promptly admit their errors, when pointed out, and proceed to seek the truth. Crackpots never admit their patent errors (usually they cannot understand mathematical and logical reasoning), and proceed to defend their pet theories to their last breath. I respect the former, and expose the latter.
In mathematics, truth is remarkably uncontroversial. It is not a matter of compromise. Of course, what truth belongs in this article is a matter of debate and compromise. Excluding error and fallacy is my sole objective. I am open to being proved an idiot. I will learn from my errors, and improve. --Palaeoviatalk 23:02, 1 August 2010 (UTC)

On the matter of mathematicians' honesty in facing up to their errors, the following account of Andrew Wiles (from the article Andrew Wiles) is exemplary:

The proof of Fermat's Last Theorem
Starting in the summer of 1986, based on successive progress of the previous few years of Gerhard Frey, Jean-Pierre Serre and Ken Ribet, Wiles realised that a proof of a limited form of the modularity theorem might then be in reach. He dedicated all of his research time to this problem in relative secrecy. In 1993, he presented his proof to the public for the first time at a conference in Cambridge. In August 1993, however, it turned out that the proof contained a gap. In desperation, Andrew Wiles tried to fill in this gap, but found out that the error he had made was a very fundamental one. According to Wiles, the crucial idea for circumventing, rather than closing this gap, came to him on 19 September 1994. Together with his former student Richard Taylor, he published a second paper which circumvented the gap and thus completed the proof. Both papers were published in 1995 in a special volume of the Annals of Mathematics.--Palaeoviatalk 00:40, 2 August 2010 (UTC)

How do you tell a mathematician from a mathematical crackpot?

• A mathematician occasionally makes subtle mistakes, understands his mistakes, and readily admits to them.
• A crackpot frequently makes obvious, elementary mistakes, cannot understand that they are mistakes, and never admits to any mistake.
• A mathematician confesses to ignorance in fields beyond his expertise.
• A crackpot propounds authoritatively on subjects of which he has but superficial and faulty knowledge.--Palaeoviatalk 04:27, 2 August 2010 (UTC)

"Against sciolism:

• A little learning is a dangerous thing; drink deep, or taste not the Pierian spring: there shallow draughts intoxicate the brain, and drinking largely sobers us again."--Palaeoviatalk 11:06, 2 August 2010 (UTC)

Note: reposted for own reference.--Palaeoviatalk 05:41, 2 August 2010 (UTC)

## geometry topology

I just added categories and made *some* ordering: Cartan -> Riemann -> Euclidean, or Poisson -> symplectic, ... It wasn't perfect, I guess, but I just wanted to see some missing topics and a structure. 212.186.99.222 (talk) 12:29, 29 November 2010 (UTC)

I am puzzled by the following points:

1. Why does general (point-set) topology subsume the five fields that you grouped under it?

2. Why does differential geometry subsume non-Euclidean, projective, affine, convex, and integral geometry?

3. Why does Riemannian geometry subsume non-Euclidean geometry?

4. Why does non-Euclidean geometry subsume Euclidean geometry?

5. Please explain the hierarchical relation among the fields that you introduced (such as Cartan geometry, Klein geometry), as I did not find any clear explanation in the relevant articles.--Palaeoviatalk 14:08, 29 November 2010 (UTC)

You're right, the ordering is not that good.

My main wish was to point out which fields can be regarded as special cases of another, hence the Riemannian geometry -> Euclidean geometry or Poisson geometry -> symplectic geometry example. In the first case you restrict the metric and don't consider immersions, and in the second you demand the bivector to be invertible. The problem with the list, then, is that some of the topics don't use the same structures as the topics below, but only make it possible to define them in suitable cases - the differentiable manifold structure makes it possible to define a metric or a bivector field in this case, i.e. specific sections over the tangent bundle and its powers. And if you do so, you don't leave differential geometry. (Yeah, the integral geometry classification doesn't work.)

I don't like my use of that many sub-blocks. One can drop it. Anyway, independent of the ordering etc., the topics Riemannian geometry, symplectic geometry and complex geometry are definitely missing in the list as of now. I mean, if even non-commutative geometry made the list... its Poisson geometry limit is probably the only reason why it's even called "geometry". 91.137.20.132 (talk) 19:27, 29 November 2010 (UTC)

Thanks for the explanation.
My view is this: Suppose that field A studies mathematical object A' and field B studies mathematical object B', and B' is a special case of A'. This does not justify the hierarchy A->B in this list.
The guiding principle should be what researchers generally understand the relations among the fields to be.
A->B should mean that students of A would logically advance to the study of B, building upon the techniques, theorems or theories of A. Scholars should generally view B as a subfield of A, in library classifications, journals, and conferences.
Please proceed to add fields, and group them. If the grouping conforms to the above rule, I would have no objection.
With regards, --Palaeoviatalk 23:35, 29 November 2010 (UTC)

Palaeovia,

I've read all of Wade Davis's work, both scientific and popular. The section on the scientific work and relevant criticisms of Wade Davis begins with two blatant falsehoods. First, he never suggests in writing that Haitian witchdoctors can keep zombies in a state of pharmacologically induced trance for many years. Quite the opposite: TTX acts within 6 hours and, if not fatal, it has no lingering effects. Second, it is simply not true that he commissioned a grave robbery. Dr. Davis commissioned a preparation from a bokor. The bokor did what he did, which included this act. It may well have been an ethical lapse to accompany him as Wade did, but he most assuredly did not commission the deed. I am in the process of re-writing this (to include proper criticisms) and would appreciate it if you would consider leaving it as I had it. I would be happy to collaborate with you in this effort to make it a fairer outline of his work.

Tbfrost (talk) 00:39, 27 December 2010 (UTC)

Did the Wikipedia text fairly represent the cited sources' views? Were the sources noteworthy or credible?
If the answers are "yes" to both these questions, then we need further sourced refutation of the cited criticism. Let both sides of the argument be represented in the article.
Since earlier editors have documented the sources for criticisms, any further editing requires careful reasoning and documentation. Otherwise it would appear as a partisan effort to obscure facts.--Palaeoviatalk 01:00, 27 December 2010 (UTC)

## Talk:Sim Lim Square

Hello Palaeovia: next time you see edits like these (and in the main article), look at the edit summaries also and report the user for making a legal threat. They would have been blocked a week ago. Thank you, Drmies (talk) 04:17, 18 September 2012 (UTC)

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Sim Lim Square, you added a link pointing to the disambiguation page GST (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ • Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 11:03, 5 November 2012 (UTC)

## Re: List of academic disciplines

Just a friendly note to say "thank you" for fixing my error on the List of academic disciplines page. I'm much obliged. — Stephanie Lahey (talk) 13:24, 7 May 2013 (UTC)

## Wanted to get in touch with you for the Sim Lim Square Page Edit

I had been trying to edit the page to add useful information for unsuspecting tourists. I cited a few references from http://www.case.org.sg/consumer_alerts.html and my personal experience (which might not be accepted without verification), but CASE is certainly reliable.

And this user sni05**** kept troubling me by undoing what I had been trying to put. — Preceding unsigned comment added by Silverline on darkcloud (talkcontribs) 10:09, 10 September 2013 (UTC)