Talk:Global Consciousness Project/Archive 1

From Wikipedia, the free encyclopedia


The criticism section on GCP methodology is apparently well-intended, but there are mistakes that should be corrected.

1) The formal analysis is canonical, and is not subject to selection bias. Before the data are examined, a hypothesis test is fully defined: The beginning and end of the data segment to be analysed, and the statistical test to be used are specified in the GCP Hypothesis Registry. All the pre-specified analyses are reported and all are included in composite statistics.

2) An assumption that there should be a correlation of effect size with the number of people engaged is presented as a criticism. Nobody knows without more research whether this is a sound assumption, and this is one of our current research questions. Preliminary results suggest there may be a small correlation, but that effect size depends on multiple factors. As of early 2007, these analyses have shown that events with more people engaged do have a larger effect size; the difference between large-N and small-N events is statistically significant.

3) The last criticism is that we have no satisfactory explanation or mechanism for random devices responding to states of consciousness. The absence of a theoretical explanation for an empirical effect is not a valid criticism of the experiment, only of the brilliance and acumen of the theoreticians.
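The pre-registration logic in point 1 can be sketched as a toy example. This is not the GCP's actual statistic or pipeline; the window bounds, the mean z-score test, and the simulated "egg" stream here are all invented for illustration:

```python
import random
import statistics

def registered_window_z(data, start, end, mean0=100.0, var0=50.0):
    """z-score of a window's mean, where [start, end) was fixed in advance."""
    window = data[start:end]
    n = len(window)
    return (statistics.mean(window) - mean0) / ((var0 / n) ** 0.5)

# Toy "egg" stream: one trial per second, each the sum of 200 fair bits,
# so a null source has mean 100 and variance 50.
random.seed(1)
data = [sum(random.getrandbits(1) for _ in range(200)) for _ in range(10000)]

# Registered BEFORE the data are examined: window bounds and the statistic.
z = registered_window_z(data, start=3600, end=7200)
print(f"pre-registered window: z = {z:.2f}")
# Under the null hypothesis, |z| > 1.96 about 5% of the time; because the
# window was fixed in advance, no favorable segment can be cherry-picked.
```

The point of the sketch is only that the segment boundaries and the test are frozen before anyone looks at the numbers, which is what rules out selection bias in the formal analysis.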

A general comment: The GCP presents exploratory and contextual analyses to supplement the formal analysis, but makes its evidentiary claims only on the latter. We clearly label the explorations as such. We offer some attempts at interpretation, but these are labeled as speculative and tentative. For example, we offer three possibilities, including chance variation, to account for the "spike" beginning four hours before the 9/11 terror attack. We do not assert "backwards causality or subconscious mass precognition". The change in the device variance is unique in the database up to that time, but we report this as a correlation, not a causal link.

I tried to add the responses to the respective points and NPOVed both a bit. Since the section is about criticism and not about responses, I had to cut drastically. Feel free to improve on this. But no "we" and no POV please. --Hob Gadling 18:06, 7 December 2005 (UTC)

I feel the section as it stood clearly violated WP:AWW. The criticism section is for criticism, but I also deleted some of the more POV critical statements (I will defend the inclusion of "bizarrely," however ^_^). If there is any false statement in that section, please delete it and post here why you have done so. I will review. Thanks. Argyrios 01:22, 23 May 2006 (UTC)

I work with the GCP project as a skeptical analyst. The criticism section is a good idea, but it contains errors and misunderstandings of the project that would be good to clear up. Also, it might be stylistically clearer for the reader to put the criticisms in a numbered list rather than string them along with "also"'s. I find the greatest misunderstanding is that the project somehow selects data. The GCP avoids data selection of any kind. If this is clear in one's mind, a large class of objections are seen to be invalid. It is very simple: the data are not examined before an event is identified. You cannot select something if you don't examine it.

Here are some comments on the text: 1. "Another criticism is that there is no objective criterion for determining whether an event is significant." --True but misleading. Not a tenable criticism. The GCP demonstrates that statistical tests on events yield small effect sizes. This is why many events need to be tested to achieve significance. The criticism is untenable because it would argue that *all* research of a statistical nature is invalid. A *valid* criticism is that the project does not present a closed experiment with a simple hypothesis predicting a significance level to be obtained in order to reject the null hypothesis. In the jargon, the GCP is not performing a hypothesis test. The GCP has chosen not to do this. The project judges that it is not yet possible to make an adequate simple test hypothesis. But one could criticize this choice.

2."Events are seemingly arbitrarily selected post-hoc, and only the data from that time period are observed. " --POV in the use of "seemingly". Either events are selected arbitrarily or they are not. Also, as Roger says (quite clearly) in his point #1 above, there is no arbitrary selection possible because test data for an event is selected before examination of the data.

3."Data from other time periods are ignored, whether or not they may display similar fluctuations. This allows opportunity for selection bias." --True and false and misleading. Not a tenable criticism. The GCP tests whether *stronger than average* fluctuations correlate with identified events. Other periods *must* be ignored. Otherwise you *do* have data selection. A *valid* criticism might be that the GCP doesn't identify fluctuations to be tested against a database of events. This would be a different experiment. It would be the inverse experiment of the GCP.

4."Also, there is no correlation between degree of significance and type or magnitude of fluctuations observed. Since the GCP has posited that individual..." --False. This has not been tested yet. A *valid* criticism is that the GCP hasn't looked at an obvious question. [fyi, there is a reason for this. The small effect size requires many events to see an effect. Even more are required to see a modulation of the effect. There are not enough events yet.] A *valid* criticism is that the GCP has not figured out how to achieve enough statistical power to address many basic assumptions associated with its hypothesis of global consciousness.

Peter Bancel June 13, 2006 pabancel

As is usual on Wikipedia, you may change whatever you see fit. If others accept your changes, such as editing the strings into points, you will not be re-edited. Please be bold. :) Procrastinating@talk2me 13:32, 14 June 2006 (UTC)

I am currently compiling sources for a complete rewrite of the section, one based completely on published articles. If anyone has any sources that may be useful, please list them below. So far, the best source I've been able to find is this paper, which reports a failure to verify the GCP's analysis of their results.

If you read Spottiswoode and May carefully, you will discover that their criticism is of post facto exploratory analyses, not of the formal analyses for 9/11 except to opine that the results are not as strong as they "should be" for such a momentous event. Roger Nelson 02:19, 3 October 2006 (UTC)

Other sources are less formal, e.g. Claus Larsen's interview with Dean Radin and the Skeptic's Dictionary essay. I would like to find more peer-reviewed journal articles. I have some questions for the "skeptic" above who "works with" the GCP: Have the results ever been replicated by anyone unaffiliated with the GCP? If so, where? If not, have there been failed attempts? If so, where are they? If not, why hasn't the scientific community taken this project seriously? Thanks for any help and sources you can provide. Argyrios 15:04, 14 June 2006 (UTC)

Answering Argyrios: Another critical paper (your reference is May & Spottiswoode) is by Scargle. See the GCP site under Independent Analyses for a link. These papers may not be peer-reviewed; check with the authors to be sure.
I don't know of any determined efforts to reproduce the GCP results. It would be a big job to set up an independent network. The project encourages researchers to freely use the GCP data for independent analyses. May & Spottiswoode, Scargle, Radin and a few others have done so for the 9/11 data. But there has not been much beyond that one event. I've done extensive analysis over the last 4 years and a paper will be out in 2007 (I hope!). Currently there is little on which you can base a wiki article, but that's how it is.
The scientific community has not taken this seriously because it's too early to do so. The idea is loopy, to say the least, and the general result, although highly significant, is too vague for researchers to consider spending precious time on this. However, there is a wait-and-see interest that is percolating in some quarters. The AAAS (American Association for the Advancement of Science) regional meeting in San Diego, June 20-23, 2006, has an extended session on retro-causation, and the GCP presented an invited contribution. Can't get more mainstream than that. Proceedings will be out later this year.
A comment on May & Spottiswoode and Scargle. These critiques deal with 9/11. The GCP has published a paper (Foundations of Physics Letters, 2002; see ref on GCP site) on the 9/11 data because of its huge historical importance. The critiques preceded that article and are thus not so germane. More importantly, focusing on 9/11 is somewhat of a red herring. The most important result of the project so far, aside from its cumulative significance of > 4 sigma (standard deviations), is that the mean effect size is about 0.3 sigma. This means that *NO* individual event is expected to show significance. This is a key point that is always ignored and always leads to misunderstandings. A wiki article which fails to point this out does not accurately portray the project. This is mentioned in the FoPL paper and is implicit on the site results page, where you find result = 4.5 sigma and #events = 204 => effect size = 4.5/Sqrt[204] = 0.32. Therefore, it is a stretch to expect significance even for the 9/11 event. If for argument's sake you assume the 9/11 effect size is a huge 10x greater than average, then the formal result of 1.9 sigma is within a 95% confidence interval of this. That is, you can say the smallish 9/11 result is consistent with a huge effect for 9/11. It is also nearly consistent with the null hypothesis, as May & Spottiswoode point out. Conclusion: single events are just too noisy to provide definitive tests.
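The arithmetic quoted above can be checked directly. This sketch uses only the figures given in the comment (a 4.5 sigma composite over 204 events, and a 1.9 sigma formal result for 9/11):

```python
import math

composite_z = 4.5   # cumulative result quoted above, in standard deviations
n_events = 204

# Mean per-event effect size: composite z divided by sqrt(N)
effect_size = composite_z / math.sqrt(n_events)
print(f"mean effect size = {effect_size:.2f} sigma")  # ≈ 0.32

# Even granting 9/11 a hypothetical effect 10x the mean (≈ 3.2 sigma),
# a single event's score has unit standard deviation, so the observed
# 1.9 sigma falls inside that hypothesis's 95% confidence interval...
print(abs(1.9 - 10 * effect_size) < 1.96)  # True
# ...while also lying below the conventional 1.96 threshold for
# rejecting the null: a single event cannot decide between the two.
```

This is the sense in which the smallish 9/11 result is simultaneously consistent with a huge effect and nearly consistent with pure chance.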
The project's main task is to refine its hypotheses in order to allow better testing. This is a valid critique that May & Spottiswoode make. Our experience is that it has been a long haul to do this. Why? Again, because of small effect sizes.
I hope this is useful. -Peter

Peter Bancel June 22, 2006 pabancel

An evening with Dean Radin: Testimony

The link listed in the External links section is testimony.

Almost the entire article is filled with 'he said' and 'I said'.

The reader is not presented with verification of what Radin said (or showed) and what was said to him. The article refers to many graphs the reader doesn't even see in the article.

For a critique to be worth something, we must be able to verify what occurred. That way we know the critique has merit.

For example, is there a transcript or a video of the event? 04:08, 11 January 2006 (UTC)

The link clearly has merit and I confess I have difficulty understanding the thrust of your argument. Yes, it is testimony. Yes, it refers to what people said. Yes, there was reference to graphs. So?
The article is a published column taking a skeptical perspective based on the writer's personal experience and judgment, and it doesn't pretend to be anything else. To demand a transcript or video is a bit ridiculous. Argyrios 09:32, 10 January 2006 (UTC)

How can we judge the critique as accurate or not if we are not presented with a verifiable record of what is being critiqued? It would be as if, for example, I wrote a book review but you had no way of seeing the book, or a movie critique but no way of watching the movie. To ask (not "demand", as you say) for a transcript or video seems a reasonable way to provide verification.

The article is "a published column", true, but on the author's own website, or on websites of organizations that the author belongs to.

I agree that the page shows a "skeptical perspective", but of something that cannot be verified. So again, I ask, how is that useful? If the author chooses to not provide it, or cannot, it is something to think about. That doesn't mean I'm for removing the link, however, just questioning how useful it is. 04:08, 11 January 2006 (UTC)

You seem to believe that eyewitness testimony is inherently worthless. It simply isn't. I don't know what else to say.
Also, you can sign your comments by typing four tildes (~) in a row. Argyrios 03:50, 11 January 2006 (UTC)

Thanks for the tip re: signing the comments. Thanks for the discussion. 04:08, 11 January 2006 (UTC)

I notice someone changed

I notice someone changed "data are" to "data is". Of course the word is the plural of datum, and hence should take "are". But I know that English is, like all languages, a living thing, and that change is inevitable. So I won't change it back, but merely point to the Wiki entry on the word -- where you will find examples like, "The experimental data are recorded in files." The following comment refers to the next (and unsigned) entry below. Wikipedia is designed for you to do something about it if you don't like what you see. Go ahead and fix the one-sidedness. And perhaps you can document your statement that the "whole enterprise is worthless" while you are at it. Roger Nelson (talk) 02:28, 3 January 2008 (UTC)

The page as it stands is one-sided beyond tolerance and not a balanced discussion. If the whole enterprise is worthless, then why is it staffed and budgeted at such a level by Princeton? 5 Nov 2007

An external reference that appears to be an advertisement for Dr Robert Aziz was added on 4 April 2007 by Barry Wells: (Talk | contribs) m (→External links - added another relevant external link). It isn't relevant, but since I am not a neutral editor, it would be better for someone who is to undo that addition. Roger Nelson 16:40, 22 August 2007 (UTC)

July 12 2007. I undid a misleading change by an anonymous user because he or she confused the website's description of the statistical frequency of a one-second score with the statistics for the full day analyses. My comments in the Edit summary were not registered, probably because I used "undo", so I am adding this explanation here. Roger Nelson 03:38, 13 July 2007 (UTC)

There are now two "label" boxes on the main page which should be removed by someone who has responsibility and is disinterested (not a protagonist). One says the article is disputed, and that may no longer be the case since a good deal of editing has been done to fix problems either in the main article or by adding comments in the discussion. There remain some inaccuracies in the "criticism" section, as explained in the discussion, but some neutral editor needs to address those. The second box says the article needs cleanup, but I could not find out what that refers to. Finally, the article is rated as "start class" but that is probably no longer appropriate, or if it is, the rater should make it clear what is missing. Roger Nelson 06:40, 21 March 2007 (UTC)

Why the "of course" in the statement on white noise -- seems a bit editorializing to me.--Wesley Biggs 01:56, 23 Apr 2005 (UTC)

--Agreed; took it out. The idea was that if there were a correlation between white noise and global events, then there would be no need to muck about with random number generators, so "of course" there is not. But I can see how it could seem like editorializing.

Is it just me, or do the latest edits to the "criticism" section seem to be non-critical? I would like to start a discussion about how to make it fair. If the person who edited it feels that the criticism is unjustified, then reword or delete it, but don't leave it and then rebut it within the section. If I don't get a response within a week, I will do the edits myself. Otherwise let's talk about how to make it fair. Argyrios Saccopoulos 02:19, 7 May 2005 (UTC)

I have made the changes as promised, hopefully addressing the objectionable content changed by the previous editor. If any of the criticism is unfair, please discuss why HERE so that we may arrive at a consensus. Argyrios Saccopoulos 21:16, 18 May 2005 (UTC)

What bothers me is that the criticism is more than half the article. This is clearly not a neutral view.

If anyone wants to expand the other part of the article, there is nothing stopping them. Argyrios 17:50, 24 August 2005 (UTC)
The description of the 9/11 spike as the "most striking evidence" is incorrect, technically, since it does not refer to a formal hypothesis test but a post hoc observation. The best evidence is arguably the concatenated result of over 200 pre-specified hypothesis tests, which show a composite deviation of about 4.5 sigma, equivalent to odds against chance of roughly 500,000 to 1. Roger Nelson 01:31, 3 October 2006 (UTC)

The response to the criticism seems like it was written by someone from the GCP defending themselves. Should it not be rewritten to be more third person?

I moved it here, as a first step. --Hob Gadling 17:55, 7 December 2005 (UTC)

Glad to know the material I contributed was not simply deleted after all. I have added a note at the end of the criticism section to tell readers they can find a response in the discussion section. In the process, it occurred to me to wonder why criticism is not also "discussion". Why, in other words, does criticism have priority of place in the main article, but response thereto only a secondary place, invisible to non-expert wikipedia users? -- Roger Nelson 15:32, 28 May 2006 (UTC)

A criticism section is appropriate to the article, as your project is controversial. It is appropriate to report the controversy. A criticism-and-response type of format reads too much like a series of forum postings rather than like an encyclopedia, and the way that Hob handled it ("Critics say... Supporters say") clearly violates WP:AWW. However, there should be a way to fairly criticize the project. I agree that the section as it stands has its problems; for example, it is badly in need of citations to avoid violating WP:NOR. Wikipedia:Criticism may also be of help in formatting the section. By the way, welcome to Wikipedia. I appreciate your restraint and patience in this matter. Unsourced criticism of the project will eventually be deleted, and I will work on finding sources in published work. Argyrios 20:45, 28 May 2006 (UTC)
Criticism is a necessary part of scientific work, but false criticism is not useful and it is not appropriate to an encyclopedia. The section contains misleading statements, such as that events are "arbitrarily selected post-hoc" allowing "selection bias". While responses and corrections in the discussion section are a step in the right direction, the presence of incorrect information in the criticism section is a serious problem if the Wikipedia entry is to be authoritative. Roger Nelson 02:06, 3 October 2006 (UTC)
Indeed, it seems that most of the criticisms are based on a complete lack of knowledge of experimental design (moreover, if anyone had deigned to read the project's site, the data are claimed to be blindly picked, which eliminates any basis for claiming that they have 'cherry-picked' the data). As well, who the hell cares if there are random spikes for which no apparent 'world event' can be correlated. If I design an experiment wherein I flip a coin 100 times and predict before each flip what the outcome will be, it doesn't matter if I failed to predict where the 'biggest spike' would lie (e.g., where there were, say, five heads in a row). What matters is that I was able to predict the coins before seeing the resultant data with such startling accuracy as to generate a probability of my getting so many right by chance of 0.000001112 (as the results now stand). -- 02:56, 17 April 2007 (UTC)
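The coin-flip analogy above can be made concrete with a small sketch (illustrative only; the 0.000001112 figure quoted for the GCP is the commenter's, and is not reproduced here):

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): at least k correct calls by luck."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# The chance of calling at least 60 of 100 fair flips correctly is under 3%,
# no matter where the longest run of heads happens to land in the sequence.
print(binom_tail(100, 60))
```

What drives such a probability is the number of pre-specified predictions that succeed, not the location of the biggest streak in the data, which is the commenter's point about the irrelevance of unpredicted spikes.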
Shouldn't there be a "response to criticism" section? That has worked well on many other Wikipedia pages I've seen, and certainly is better than the completely non-standard "see the discussion page" for more information.Pro crast in a tor 21:19, 20 September 2006 (UTC)

Merge request

  • Why should it be merged?


Could someone explain this: "is a mix of art installation and science experiment" ? If there is anything connected with art, this should be explained. Martinphi (Talk Ψ Contribs) 23:08, 1 January 2007 (UTC)


For example, on September 11, 2001, it was alleged that in addition to the spikes occurring at the times of the plane impacts and the building collapses, changes in the level of randomness seen in the EGG data that began hours and even days before the attacks were themselves caused by the attacks, implying subconscious mass precognition or backwards causality (which is, in fact, not an absurd hypothesis if the theory of retrocausality is accepted).

I cannot see how a response to the September 11, 2001 event can imply precognition or backwards causality any more than someone holding their ears when they see a firecracker being lit. This was not a random event, but one known in advance by a fairly large number of people, likely in a number of different organizations. It took no more precognition to see that a significant consequence would follow their actions than it takes to see that a window is about to break when a stray baseball is headed toward it. Knowledge by a universal subconsciousness, perhaps, but precognition? I just don't see any reason to theorize that with this evidence. —The preceding unsigned comment was added by MarshallDudley (talkcontribs) 20:06, 30 April 2007 (UTC).

It could also be registering other events, such as God turning his gaze towards Earth (haw haw, slaps thigh, crowd murmurs "nice one"), or the effect of premonition on the perceived event, or the effects of future viewers "tuning in" on an historical event. Or indeed local weather fluctuations that coincide with an "event". There are more than enough rainclouds in Edinburgh to occlude radiation, even leftover radiation from Chernobyl. 01:24, 11 May 2007 (UTC)

PS Isn't it way off NPOV if one of the researchers is actively contributing to the article.

Interception of transmissions?

A skeptic might suspect another explanation: what if international transmissions of random data were being intercepted and altered? After all, "random" data could conceal a hidden message, so a cautious spy might hit on the notion of replacing it with a pseudo-random data stream of his own just in case. What is intriguing about this idea is that the deviations began at 5:30 a.m. on September 11! So IF you've kept the original data streams it would be very interesting to see if there are discrepancies in what was sent vs. what was received, and WHEN they occurred. If not, if you could find some mathematical hallmark of pseudo-random number generation in a single archived stream at Princeton that would also be something. 14:49, 26 August 2007 (UTC)

The GCP does keep all the data, in the raw form received at the archiving server. Moreover, they are all publicly available for download by anyone with the desire to analyse them. Roger Nelson (talk) 02:28, 3 January 2008 (UTC)

Cue Scary Music

If there is a spike in their data, they can then search the world and see if anything has happened to explain it. If nothing happens of any significance, they can wait as long as they want, and the further away the spike is from the event in time, the more powerful a predictor it is. It doesn’t yet “predict” what will happen or when it will happen or where it will happen, but it is spookily accurate. Hold me, I’m scared! —Preceding unsigned comment added by 1138.182.41.143 (talk) 21:15, 20 February 2008 (UTC)

And yet they claim to (a) pick data blindly after the fact, and (b) retain and record that data in their results, regardless of whether (c) it deviates from the null hypothesis or not. But, now, you go ahead and call their bluff, there never having been a more obviously contrived experiment. Kudos, I say! (And, what's more, they include a bunch of results that in no way contribute to the supposed non-randomness of their results, just to throw you off! What fiends!) [end sarcasm]. -- (talk) 04:30, 25 April 2008 (UTC)

Hello? Confirmation bias ring a bell?

Did any of these researchers bother coming up with strict standards for what defines a significant event? No, they didn't. Did they define how long readings should be considered related to those events? No, they didn't. Did the researchers account for and weigh misses the same as they did for hits? No, they didn't. Did the researchers compare those supposedly statistical readings to a random sample of readings, in order to account for random pools of apparent statistical significance? No, they didn't.

Their methodology appears strict only on the surface, but they fail to account for bias or follow all the rigors of the scientific method. So I ask you, why is the subject article not considered out-and-out pseudoscience? How is this listed alongside such things as consciousness studies and futurology?

I'm further adding a neutrality dispute flag, because criticisms and skeptical views have not been given nearly enough space in the article. (talk) 13:58, 14 March 2009 (UTC)Anon

The project actually gives thought to all of these subjects, but not necessarily very strictly. The significant-event definition seems most dubious, and there seem to be no real standards for it, only an attempt at being objective. Thought is given to how long the readings should be related to the event, and the issue is addressed on the project page. Misses and weights are also accounted for. As for the statistical analysis itself (excluding the subjective definition of a significant event), it seems to be very rigorously controlled and mathematically sound. This of course is my impression, but the project does describe these things, so they aren't ignored.
[1] lists it as getting support from 75 respected scientists from 41 different nations. Of course I have no clue what the definition of a respected scientist is, but in my mind, with possible support from the scientific community on at least part of the methodology and the fact that there is documentation on these issues, it's not valid to state that the above subjects weren't addressed.
I agree that there is not a lot of criticism of the project, especially since this kind of project will likely attract a lot of it. —Preceding unsigned comment added by (talk) 00:52, 3 August 2009 (UTC)
No indication on the reference page or on the GCP homepage who these 75 "respected" scientists are. We should be careful about that source; let's not forget that huge list of human-caused climate change deniers who were presented as "respected scientists", when it turned out over half the list were TV weathermen. Simonm223 (talk) 13:44, 20 September 2009 (UTC)

Link between RNGs and 'global consciousness'

I have not been able to find an answer for this so far, but what does a random number generator have to do with human consciousness or traumatic global events? Why would the RNGs generate more 0s/1s in case of a global event? The GCP page talks about operators trying to influence the outcome of the RNGs, but what is the basis for that? —Preceding unsigned comment added by (talk) 01:54, 15 May 2009 (UTC)

They suggest that if they think really hard at the numbers, the numbers will become infinitesimally less random. Simonm223 (talk) 19:40, 18 September 2009 (UTC)
There's no logical connection. There's no real sense in the project, it's cargo cult science. But it's a notably crack-pot project at a major university... Fences&Windows 20:17, 18 September 2009 (UTC)
Concur entirely. If I thought otherwise there would be a PROD up by now. As has been noted previously, I tend to be a bit of a deletionist and tend to propose the deletion of borderline articles, based on the postulate that the ones worth keeping will get fixed faster if there is a deadline.  ;) Simonm223 (talk) 20:22, 18 September 2009 (UTC)

More into the hardware setup

It seems they're summing up the results of all 65 RNGs. Or do they compare the performance of individual or closely located RNGs? Is there any method to "shield" these from the outside world? Are the ones protected by tinfoil hats revealing similar peaks? How about seasonal effects? Though a bit like the hundredth-monkey effect, stunning I must say: "a few minutes around midnight on any New Years Eve". Santa Claus? Logos5557 (talk) 03:42, 19 September 2009 (UTC)

Gathering of Global Mind by Roger Nelson, early 2002 gives some answers to the above. It seems they're able to observe locality during some major events; "Although some other cases suggest otherwise, these eclipse results indicate that the REGs are most sensitive to relatively local influences, in apparent contradiction of one of our in-going assumptions, which says that the location of events relative to the eggs should be unimportant. If this indication is confirmed in other assessments, it means that although the anomalous interaction of minds and machines that we use for our measure is nonlocal, it isn't unboundedly so. The intensity of regard, or the concentration of attention may have an effect that is stronger on machines least distant from the people who generate the group consciousness. At the same time, we must emphasize that other evidence suggests a different relationship. We have to learn much more before drawing conclusions in this deeply complex area.". They were also able to observe "less random" performances from RNGs during new year transition in a "zone by zone" fashion; "New Years, 1998, presented an excellent opportunity to test the essential notion that large numbers of us joining in a mutually engaging event may generate a global consciousness capable of affecting the EGG detectors. Of course New Years doesn't happen all at once, but again and again as the earth turns and brings the end of the old and the beginning of the new to each time zone. Our plan was to gather the data surrounding each of the midnights, and to compound all of the time zones into a single dataset that would represent a brief period marking the height of celebration -- everywhere. When this was done, the result was a spectacular confirmation of the prediction: data from the ten-minute period around midnight differed from what theory and calibrations predict, with a probability of three parts in 1000 that the deviation was just chance fluctuation. 
The scores were slightly, but consistently less random than at other times; they were more structured than they were supposed to be. Figure 4 shows the composite trend, which steadily departs from the expectation for a typical random walk such as that shown in the previous figure for calibration data". Contrary to the critic here, it seems that the earthquake in Turkey, 1999 resulted in deviation from randomness in the data given by RNGs; "It is worth noting that the composite of US eggs shows stronger deviation than those in Europe in Dick's detailed analysis of the Turkey quake. Interestingly, the individual egg showing the largest effect was one of the most distant, in Fiji. Interpreted literally, this suggests the opposite conclusion to that of the last example with regard to nonlocality. Again, we have much to learn before reaching strong conclusions." Logos5557 (talk) 07:49, 20 September 2009 (UTC)
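The "compound all of the time zones into a single dataset" step described in the quote above is, in spirit, a composite of many small same-direction deviations. A standard way to combine independent z-scores is Stouffer's method; this is only a sketch of that idea, and the GCP's actual statistic may differ:

```python
import math

def stouffer_z(zs):
    """Combine independent z-scores: the signal sums linearly in N while
    the noise grows only as sqrt(N), so consistent small deviations add up."""
    return sum(zs) / math.sqrt(len(zs))

# Illustrative numbers only: 24 time-zone midnights, each with a modest
# 0.6-sigma deviation in the same direction, compound to near 3 sigma.
print(round(stouffer_z([0.6] * 24), 2))  # 0.6 * sqrt(24) ≈ 2.94
```

This is why a composite over many events or time zones can be highly significant even though no single contribution is, which matches the small-effect-size point made earlier on this page.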

Who is responsible for the Project?

The following (copied from this) are the primary contributors of the project:

Roger Nelson: GCP Director 1997 to present; Princeton University, PEAR, 1980 to 2002
Peter Bancel: Experimental physicist, Paris, France; formerly at U. Penn and IBM Research
Dean Radin: Laboratory director, Institute of Noetic Sciences Research (IONS)
Dick Bierman: Professor, University of Amsterdam and University of Utrecht
John Walker: Founder and retired CEO, Autodesk, Inc.
Greg Nelson: Senior scientist, Princeton Gamma Tech Instruments
Paul Bethke: Windows developer, Network management
Rick Berger: Founder, Innovative Software Design, Website development
Marilyn Schlitz: Director of Research at IONS; Senior scientist, California Pacific Medical Center
York Dobyns: Physicist, Princeton Engineering Anomalies Research, Princeton University
Mahadeva Srinivasan: Senior scientist, Indian Nuclear Research, retired
Dick Shoup: Director, Boundary Institute, Saratoga, CA
Jessica Utts: Professor of Statistics, University of California at Davis
Tom Sawyer: Founder, Santa Rosa Internet Group, E-commerce systems
Jan Peterson: Executive director for public relations, APA, retired
Ed May: Director, Laboratories of Fundamental Research, Palo Alto, CA
Ralph Abraham: Professor, University of California, Santa Cruz, retired; Founder, Visual Math Institute
Adrian Patrut: Professor, Babes-Bolyai University, Romania
Johannes Hagel: Professor and Founder, Institut fuer Psycho-Physik, Koeln, Germany

Logos5557 (talk) 22:08, 23 September 2009 (UTC)

We're not going to list them all, are we? Fences&Windows 22:59, 23 September 2009 (UTC)
No, I just wanted to point out the quality of the team, because this list is not in a visible, easily accessible place on the project website. There was another page listing these people with their more detailed backgrounds (education and other things), but I am not able to find it again. Logos5557 (talk) 12:36, 24 September 2009 (UTC)


I would caution other editors about edit warring. Certain editors have gone over 3 reverts on this page on the same day. I understand that there are some strong opinions about the project but reverting reliably sourced material repeatedly is counter to Wikipedia policy. Please stop. Simonm223 (talk) 19:15, 27 September 2009 (UTC)

It is very clear that there is either a problem of understanding or simple win-lose editing over this article. Although you (and two other editors) think that the material you are trying to insert, in spite of my valid warnings, belongs in this article (because you believe it is reliably sourced), it does not, and never will. You and two other editors can't make it viable just because there is more than one editor defending your position and only one editor (myself) defending the fact. Can you change a concrete fact? No. I'm afraid you will have to take some more bike rides and blow off some more steam before we can resolve this dispute, unless you refrain from emotional, baseless editing. Good luck. Logos5557 (talk) 20:21, 27 September 2009 (UTC)
Logos5557, please respect Wikipedia guidelines such as wp:civil and wp:3rr. No respectable scientific journal is going to call something like GCP pseudoscience, because they do not see it as worthy of their time to do so, and I agree. I can find plenty of primary sources from debunkers and their sites that do, in fact, call GCP pseudoscience, notably skepticnews and skeptoid, and this skeptico interview with noted debunker and scientist Phil Plait. GCP is pseudo-scientific, and while my personal opinion is that it is not elegant writing to actually say so in the article, there is no rule or convention on Wikipedia that forbids saying that this subject is pseudoscience, and I think there is sufficient evidence to do so. Voiceofreason01 (talk) 20:46, 27 September 2009 (UTC)
Also, please note that WP:3RR indicates that three reverts of any (non-vandalism non-BLP) material in the same article is merely prima facie evidence of edit warring. Clearly note that it is not necessary to revert the same material to be in violation of 3RR, and edit warring is easily possible without venturing near 3RR. The article has been calm for a few hours now, but based on the history these seem to be points of confusion. Please talk it out here, with recourse to page protection if necessary. - 2/0 (cont.) 21:42, 27 September 2009 (UTC)
I suggest that those who think skepticnews, skeptoid and skeptico are sufficient to justify the pseudoscience categorisation read the reliable source policy once again. These websites/blogs/interviews cannot be reliable sources justifying a pseudoscience categorisation. Some other users could come up with other websites/blogs/interviews rejecting such a labelling and accusation; then what will we do? Will we count the number of websites/blogs/interviews from each side? As Alex Tsakiris states here, there is no consensus on labelling this project as pseudoscience: "had an interesting discussion just a few days ago with a guy, very popular skeptical Web site and blog, and he brought up the topic of the Global Consciousness Project so we’re talking about it and then half-way in, he’s like, “Hey, you know what? That’s pseudo-science. It doesn’t deserve discussion. I’m out of here. I’m not talking about it anymore.” And you know, then you look at the guys behind Global Consciousness. No matter what you think about that, you know, but you’ve got a Princeton guy, you got a guy who’s Bell Labs and SRI, all the right credentials. You might not like what they say but do you really want to kind of just kind of go out and label these guys as pseudo-science? And that’s what I think – that’s the baggage I think that JREF has and has maybe fed into a little bit in terms of labeling people you don’t like or who don’t follow scientific orthodoxy as practicing pseudo-science."
Pseudoscience categorization can only be justified in pretty obvious, no-discussion cases, and in those cases nobody needs to present any reliable source either. But this is not such a case. There are credible scientists and individuals involved; therefore you need to present very strong reliable sources discussing every aspect of this project from the perspective of pseudoscience, proving the claim with solid evidence. It is not our problem whether any respectable scientific journal sees GCP as worthy of their time to call it pseudoscience. No source, no pseudoscience categorization holds in this case. Users can mention some lame skeptics (if they are notable enough) who see GCP as pseudoscience in the article, but that's all. We are not allowed to do OR and synthesis here. I don't see a reason to discuss this "pseudoscience categorisation" any more. I am about to look for a definite solution to this dispute. Once I'm finished, you can contribute in the proper venue. Logos5557 (talk) 22:17, 27 September 2009 (UTC)

{outdent} Here is the avenue for further discussion and dispute resolution: [2]. Any user who wants to enroll and contribute may do so. Logos5557 (talk) 00:37, 28 September 2009 (UTC)

We don't need edit warring over a single word. If pseudoscience can't be sourced well enough then it doesn't go in. It's just a single word. If we write the article adequately based on sources - see those pasted above! - then it should be obvious to readers that this isn't mainstream science. We're not going to persuade the faithful that this is pseudoscience by including this "dog whistle" word in the article. Fences&Windows 18:06, 28 September 2009 (UTC)
AN/I is not an appropriate venue for this; there are other dispute resolution mechanisms, including talking on this page. Fences&Windows 18:07, 28 September 2009 (UTC)


I'd like to know whether the fact that this article is in the 'pseudoscience' category has any grounds. As far as I know, the project does research related to statistics (which is a science). It uses abstract vocabulary at many points, but when it explains the experimental processes, it doesn't seem inaccurate or irrational to me. --TEO64X 09:32, 6 October 2008 (UTC)

It is pseudoscience because they measure an effect but are not looking for an explanation; they are prepossessed with the idea that it is caused by collective consciousness in the world. Science would look for explanations; the EGG Project only looks at its own explanation. Jan Arkesteijn (talk) 10:03, 12 November 2008 (UTC)

Jan Arkesteijn's definition of pseudoscience is different from most people's. In any case, much of the GCP's analytical work is focused on understanding the structure of the data in service of explanations, or at least good models to guide further research. We think of "collective consciousness" in terms of operational definitions. Analysis-based modeling and unambiguous definitions are classic hallmarks of science. Roger Nelson (talk) 15:37, 19 December 2008 (UTC)

I did not find any information on your site that considers or investigates other explanations for this phenomenon, nor any information that investigates the anomalies that are not somewhere near a major event, nor any information that would make it plausible that a random generator is in fact an antenna for consciousness, nor any information about the verification and stamping of your EGGs, nor any information about the trustworthiness of the partners where you place your EGGs, nor any information about your efforts to see if you might be wrong. I could go on, but pseudoscience is written all over your organisation. Kind regards, Jan Arkesteijn (talk) 18:44, 20 December 2008 (UTC)
Poor experiment design, unclear objectives for confirming the existence of the phenomenon, lack of a mechanism for the proposed phenomenon, and misinterpretation of statistical data all put this into the category of pseudoscience. Simonm223 (talk) 16:13, 18 September 2009 (UTC)
To the best of my knowledge, the effect they are hypothesising is impossible under the currently accepted laws of physics. Adding to that the poor application of the scientific process that Simonm223 noted above firmly puts this subject in the realm of pseudoscience. Voiceofreason01 (talk) 16:36, 18 September 2009 (UTC)
Considering that precognition requires the cause to come after the effect, yes, that is problematic. Unless, of course, they postulate that the psychic waves are the cause of the event, and that's completely lacking in parsimony. Simonm223 (talk) 16:50, 18 September 2009 (UTC)
Sometimes the mechanism needs to come later; there is no scientist on earth who knows the exact mechanism of gravity, but humankind accepts the Law_of_gravitation as a physical law and continues to use the gravitational constant and gravitational acceleration in engineering calculations. Given that we don't know the mechanism yet, should we stop using them? Would you care to state specifically which part of the design of the experiment is poor, and to give an example of the misinterpretation of statistical data? Logos5557 (talk) 08:52, 21 September 2009 (UTC)
Well, to start with, the experiment is too vague. They look for anything they believe to be anomalous in a set of random numbers and then look around to see if it vaguely correlates with something important happening. It reeks of confirmation bias from stem to stern. Simonm223 (talk) 14:38, 21 September 2009 (UTC)
It seems vague at first look, but after seeing the backgrounds of the people involved and the methodology used, it is not so easy to stick to that prejudice. They hypothesize beforehand that less randomness will be observed during events which are known for certain to happen, like new year's eves, eclipses, some gatherings, celebrations etc. They then analyze the data and see whether there were statistically significant departures from randomness during those hypothesized events. They say that those departures may be correlated with global consciousness. To be able to reach definite conclusions, they need to continue to record and analyse. They also look afterwards at the data recorded during catastrophic events (the ones which nobody knows will happen) and check whether there is less randomness or not. Logos5557 (talk) 15:39, 23 September 2009 (UTC)
Do you sincerely not see the confirmation bias in that experiment structure? Simonm223 (talk) 17:30, 23 September 2009 (UTC)
Due to the nature of the phenomenon, it looks as if there is confirmation bias in this experiment, but actually there is not.
Nelson explains here; "Dr. Roger Nelson: Well, you’re really quite right. In my experience, they’re rare. Yes, I have encountered a few, mostly years ago, but rather recently in a guy named Jeff Scargle who I think may be blocked from ever believing this is real. But at least what he does is look at the situation and say, “You’re using an exclusive XOR to remove bias from your data,” and he says, “That means you are removing any possibility of affecting your data.” He says, “You’re throwing out the baby with the bath water, ” and so forth in that vein. The problem is that the data that we collect actually showed changes in spite of throwing out the baby with the bath water. So that means that his background of assumptions is wrong. But still, he’s asking a serious question which results in us stirring the pot a little deeper, trying to find out well, if his idea has merit at all, if he’s even half right, then maybe we are throwing half the baby out with the bath water. Let’s see what would happen if we try to understand how something could possibly get through the XOR.
Alex Tsakiris: Let’s take that example, because I read Scargle’s paper and one thing that I pointed out to Brian Dunning, who I had this conversation with, who is a skeptic and publishes a very popular skeptical blog and podcast called Skeptoid. In part of the dialogue he had referenced Scargle’s paper so I went and looked at Scargle’s paper. First thing, I need to kind of re-emphasize is that Scargle’s paper is published right alongside your paper and Radin’s paper in The Journal of Scientific Exploration. So sometimes when skeptics call out this paper as some kind of criticism, it seems to me like a pretty fair scientific debate. Everyone’s talking to each other, everyone’s publishing in the same journal, and everyone’s trying to figure out the truth.
Dr. Roger Nelson: Yes.
Alex Tsakiris: So that’s point Number One. Point Number Two, I read Scargle’s paper and I have to say personally, I think you’re being a little bit too generous.
Dr. Roger Nelson: [Laughs]
Alex Tsakiris: He seems to be nipping at your heels on some minor, minor stuff. The XOR stuff is really interesting, but the logic that he applies to why this is such a huge oversight is kind of strange, too. It’s this, oh, you guys claim that there’s this consciousness effect. Aren’t you eliminating it by XORing it out? The obvious argument to that is, well, what you just said. Perhaps some is being eliminated but still there is a lot there to look at. I don’t see that as a real substantive claim. Taking that all aside, here’s what I’d like to ask. What’s been the follow-up to that? Has Scargle come back? Has he collaborated? Has he retracted any bit on his stance or his position, or has he just gone away?
Dr. Roger Nelson: He has not retracted anything. In fact, because I kind of present him as a reasonable skeptic, he gets interviewed because I tell people who are interviewing me from NBC or whatever, they will want to know who can they talk to who’s a reasonable skeptic. So I give them Scargle’s name. He appears in one interview saying after things have been explained, how it all works and so forth, he says, “Well, unusual things will happen in random data. And so if you look long enough, you’re going to find something unusual.” So I get to respond to that in the interview saying, “All right, that’s true, but what we show in the data and the whole experiment is that the likelihood of this particular unusual thing happening is on the order of one in a million or some other large number.” So it becomes, I guess, your point is manifest in what he does in that. Now he’s not talking about XOR, he’s saying or implying at least, that we’re picking out the unusual bits and saying, “See? Here’s something unusual.” Which is, of course, not what we do.".
Radin addresses the confirmation bias issue in his blog here. Logos5557 (talk) 19:07, 23 September 2009 (UTC)
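For context on the XOR step being debated above: the following is a minimal sketch (my own illustration, not the actual GCP firmware) of why XORing raw bits against a balanced deterministic mask removes any first-order bias in the bit rate, which is the heart of Scargle's objection.

```python
# Sketch of XOR "whitening" (an illustration, not the actual GCP
# firmware): XORing raw bits against a balanced deterministic mask
# removes first-order bias in the bit rate.
import random

random.seed(0)

def biased_bits(n, p=0.55):
    """Simulated raw hardware bits with a first-order bias toward 1."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def xor_whiten(bits):
    """XOR each bit with an alternating 0,1,0,1,... (balanced) mask."""
    return [b ^ (i % 2) for i, b in enumerate(bits)]

raw = biased_bits(100_000)
whitened = xor_whiten(raw)

print(f"raw mean      = {sum(raw) / len(raw):.3f}")            # near 0.55
print(f"whitened mean = {sum(whitened) / len(whitened):.3f}")  # near 0.50
```

The argument in the quoted interview is over whether any real effect could survive this same cancellation, since a mean shift from any source, hardware or otherwise, is removed by the mask.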
Radin's interpretation is flawed. The use of an exclusive or does not eliminate the risk of bias. In fact it remains as biased as this interview snippet you posted. Simonm223 (talk) 19:31, 23 September 2009 (UTC)
Well, the rest of the dialogue is in the link given. Radin's interpretation is quite neutral. They present their arguments against the confirmation bias claims in neutral ways. Note that discarding the statistically less random data that coincide with global events, in the belief that there can be no correlation, is the real confirmation bias. Logos5557 (talk) 20:22, 23 September 2009 (UTC)
I have not seen any evidence that this so-called study is even blinded. I have seen nothing to suggest that their "anomalies" are at all statistically significant. What I see is people looking for "anomalies" and, surprise! finding them. The dialogues have a heavy POV and are proof of nothing. Simonm223 (talk) 20:27, 23 September 2009 (UTC)
To some extent the study is blinded: the people in the world who affect the random number generators are blinded to the study, to the experiment, to the existence of the RNGs, etc. This paper talks about experiments carried out with portable RNGs. These are the results of the study. As far as I know, they do not claim that their "anomalies" are all statistically significant, either. Logos5557 (talk) 21:42, 23 September 2009 (UTC)
That isn't blinding at all. And that assumes that random people in the world have an effect; again with the confirmation bias. The more you tell me about this "project", the stronger the case for it being pseudoscience becomes. Simonm223 (talk) 14:49, 24 September 2009 (UTC)
You're just throwing around concepts (confirmation bias, poor experiment design, blinding etc.) without arguing in detail how this project fails to conform to them, and when you're not satisfied with the answer, you shortcut to the judgement (which is confirmation bias, by the way) and claim that it is pseudoscience. Would you mind stating how blinding would apply in this case? How could this study be blinded? This is not a drug trial. What would double-blinding this study change or bring? Logos5557 (talk) 16:30, 24 September 2009 (UTC)
Specifically, in order to be valid, an experiment such as this would have to be at the very least properly double-blinded. In other words, the researchers should not be able to know ahead of the experiment which "spikes" correlate with "important events" and which do not. This is not at all the case. Even this would provide only minimal protection against confirmation bias; as demonstrated through criticism of the MBTI, it would not be entirely sufficient to ensure a valid design. The experiment also suffers from serious design issues over the nature of an important event. Events important to whom? This is another place where serious bias creeps into the work, as the nature of an important event is decided by the researchers and is not dependent on specific criteria. They will even shift the time frame in order to correlate "anomalies" with "important events". It's just a mess, and it is clearly pseudoscience. Simonm223 (talk) 16:44, 24 September 2009 (UTC)
Disagree completely. Double-blinding is not always required for validity; it cannot be applied in all situations. You can talk about blinding and double-blinding necessities for psi experiments, where individuals are tested for their psi abilities to affect randomness (in the case of random number generation) or to predict things in advance. When there can be no interaction between the RNGs and the "researcher/experimenter", or between the RNGs and the "subjects" (unlike in psi experiments), blinding should not be too much of a concern. There should be no argument over the importance of some events, such as the 9/11 terrorist attacks, new year's eves, and the ones which attract extensive attention from all over the world. I see the rest as just "experiences" for the project; when you have the data available, why not look at other events for which the initial guess would be that they would get little attention. I don't think that the researchers' not being able to know ahead of the experiment which "spikes" correlate with "important events" would amount to double-blinding. Nevertheless, they have a dot, as you may know, which provides the result of real-time data analysis, and which independent "experimenters" can check during "important" events. Logos5557 (talk) 18:36, 24 September 2009 (UTC)
Which new year's eve? What makes Sept. 11, 2001 more important than Sept. 13, 2004? And "no it's not" is not a valid reason for not blinding. Simonm223 (talk) 18:43, 24 September 2009 (UTC)
All new year's eves, when people stop doing what they routinely do and celebrate the coming year. Nothing seems to have happened on September_2004#September_13.2C_2004 as important as the September 11 attacks. Logos5557 (talk) 20:28, 24 September 2009 (UTC)
So... what date is that for New Years? Simonm223 (talk) 20:37, 24 September 2009 (UTC)
Furthermore, I can't think of anything important that happened on September 13, 2004; it's not like 1.5 million people had to be relocated because of a category 5 hurricane or something. Simonm223 (talk) 20:55, 24 September 2009 (UTC)
I couldn't get your point regarding new years. Do you mean that new years are known beforehand and therefore should not be analysed? Do you really think that's detrimental? Actually, eclipses and new year's eves are excellent chances to test global consciousness and locality. When a statistically meaningful deviation from randomness travels across time zones along with the new year transition, that's something. Apparently, Sept. 13, 2004 was not as important as Sept. 11, 2001. It seems 1.5 million relocated people were not as impressive, to the billions of people on earth, as the deaths of some thousands of people in the 9/11 attacks (at least 200 by jumping to their deaths from the towers) and the collapse of the World Trade Center (which was unique in human history). Logos5557 (talk) 21:47, 24 September 2009 (UTC)
No, I meant literally which new years celebration. In the last year there have been various new years celebrations in various places on:
  • Oct 31, 2008
  • Dec 31, 2008
  • Jan 1, 2009
  • Jan 7, 2009
  • Jan 26, 2009
  • March 27, 2009
  • April 13-14, 2009
  • Sep. 18-20, 2009
So which one are you referring to? And the cultural centrism of assuming 1,000 deaths are more important than millions of displaced people and billions of dollars of damage from one of the 10 worst recorded hurricanes ever is, again, bias. Simonm223 (talk) 21:53, 24 September 2009 (UTC)
Had you stated the numbers of people celebrating each of those different new years, it would be much easier for me to choose the one I am referring to. It was a typo; more than a thousand people died in the 9/11 attacks. Well, it works almost as you mentioned; people of the world do not care much for mass suffering (especially if it is a recurring event) but instead focus on the events in which casualties are high. There were lots of hurricanes but only one 9/11 attack. Have you seen any Palestinian dancing and celebrating a hurricane? Logos5557 (talk) 22:14, 24 September 2009 (UTC)

{undent} There is no need for further debate. The information you have provided supports the argument that this is pseudoscience by any reasonable definition. I am sorry if you fail to see that, but we are just arguing in circles. Your bias about a single attack being more important than other major disasters because it happened to Americans doesn't change anything. Furthermore, you have demonstrated the vagueness of claiming something like "new years" as an "important event", since you don't even know which new years was being discussed. I'm done with this. Simonm223 (talk) 14:30, 25 September 2009 (UTC)

Is there any other new year in the world which is celebrated by millions? No; the study checks the data for December 31 of each year. We are arguing in circles because you will not accept that your pseudoscience claim is not (and cannot be) justified. You're making up strange connections between being unscientific and blinding, lack of a mechanism (I see that you no longer defend your position on this, by the way; why?), etc. You also make vague, weaselly assertions like "misinterpretation of statistical data" (have you analysed the statistical data?) without even bothering to prove your claims. This is plain pseudoskepticism. Either prove your claims or do not make any baseless points. Logos5557 (talk) 16:00, 25 September 2009 (UTC)

Is there any other new year in the world which is celebrated by millions?

After the above comment I need to step back because if I don't my WP:SPADE will beat my WP:CIVIL to death and that would be bad. When I think of a way to formulate a response without breaking WP:CIVIL I will do so. Simonm223 (talk) 16:16, 25 September 2009 (UTC)
"Is there any other new year in the world which is celebrated by hundreds of millions?" or "Is there any other new year in the world which is celebrated all over the world?" or "Is there any other new year in the world which is celebrated by billions?". Happy now? The study checks December 31 of each year. So what's your point, if there is any? Logos5557 (talk) 16:28, 25 September 2009 (UTC)
Ok, here is all I'm going to say on your new year comments. Approximately 2.05 billion people celebrated the new year on Jan. 26, 2009 this year. That is just under 1/3 of the world population. The cultural centrism in your recent arguments has reached the point where I am not comfortable continuing this debate. Simonm223 (talk) 16:32, 25 September 2009 (UTC)

The GCP is clearly very bad science and seems to be just general nuttery, but despite the obviously flawed design of their experiments, they seem to be acting in good faith to perform a scientifically viable experiment, and relatively small changes to their experimental procedure would result in a viable experiment. Actually calling it a "pseudoscientific experiment" in the article seems presumptuous and may violate wp:undue, since why GCP may be considered pseudoscience isn't really clearly explained in the article. I suggest fleshing out the article to show the flaws in GCP's methods rather than simply pronouncing editorial judgment on the subject. I suggest you both review wp:civil before continuing to pursue this argument. Voiceofreason01 (talk) 16:35, 25 September 2009 (UTC)

I am beginning to argue emotionally and concur I'm treading a thin line with WP:CIVIL here. I'm going to bow out for a while until I cool my jets and may come back when I can participate on this topic more constructively. Sorry. Simonm223 (talk) 16:40, 25 September 2009 (UTC)
I guess I couldn't make myself clear with the Palestinian remark; as most will remember, some Palestinians were dancing and celebrating the 9/11 attacks, since it was an attack by some Arabs on Americans. Hurricanes, on the other hand, occur frequently, and generally other people in the world tend to halt their hatred and anger and feel compassion. That's why I think the 9/11 attacks were unique in human history, considering the type of attack, the casualties, the shock created etc., when compared to natural disasters. Chinese people use several calendars (see Chinese calendar). The Gregorian calendar was officially adopted, effective 1 January 1929. As far as I know they celebrate the Gregorian new year, too (Chinese New Year). Another point: there are not many EGGs in China, and considering the fact that locality was heavily observed during new year transitions, eclipses, etc., it can be said that the "consciousness" of Chinese people is not reflected much in this project. Logos5557 (talk) 17:13, 25 September 2009 (UTC)

Could we please stop this discussion. The measurements simply prove that the RNGs used are not really RNGs. My guess is that they just measure the correlation in the electromagnetic noise generated by the abundantly available TV and radio sets. They all transmit a tiny electromagnetic signal, which during global events is correlated, only separated by a time delay. This is it. Nothing more, nothing less. Jan Arkesteijn (talk) 01:11, 26 September 2009 (UTC)

Which measurements prove that? If there is a possibility of an RNG being affected by "consciousness", which is what this project is researching, one would have to test whether RNGs are truly random on Mars or the Moon. If you carry out the study in a highly populated place, how can you be sure that the RNGs you're testing were not "affected" by "consciousness"? Regarding your guess (correlation in the electromagnetic noise generated by the abundantly available TV and radio sets): remember that the RNGs used in this study are shielded to protect them against such external effects. The last thing to be mentioned is this: unless somebody comes up with a source (not a lame skeptic blog or something; it should be either a published paper or a reliable book) stating that this project is bad science or pseudoscience with some valid arguments (not sweeping, emotional personal opinions), I'm going to remove the pseudoscience categorisation from the article. Logos5557 (talk) 17:46, 26 September 2009 (UTC)
The measurements that have been conducted worldwide for some years now prove that. The people behind this experiment are biased in saying this must be consciousness. Instead they would have to say: my RNGs are not random enough. Look, if this were science, a proper approach would be to go into a lab and design a sensor that could detect consciousness, proven. Only then would you go out to conduct an experiment on global consciousness. Not the other way around, as is done here. "Oh look, we have a device, it seems to be doing something, we don't know how it works, but what comes out of it sure must be consciousness!" The RNGs are shielded, you say. I say the RNGs are not shielded enough. Anyway, the researchers did not make any attempt to determine the influence on their results of the electromagnetic noise caused by daily life. If they did, they would probably find it is of much greater influence than consciousness, especially the noise caused by millions of TV sets displaying the same image at more or less the same time. Jan Arkesteijn (talk) 18:58, 26 September 2009 (UTC)
Sorry, but your ideas are stranger than the butterfly effect. "The measurements that are conducted world wide for some years now prove that"; it is still not clear which measurements you are talking about. The people behind this experiment say that this may be correlated with global consciousness; there is no "must" in the equation. This is science because they first went into the lab and carried out experiments on individuals. The results of those experiments inspired them to turn that into a global experiment. You're excessively (and unsurprisingly) simplifying what they have done. How can someone summarize this project with a sentence like "Oh look, we have a device, it seems to be doing something, we don't know how it works, but what comes out of it sure must be consciousness!", after looking at the list of the GCP team? How do you know that the researchers did not make any attempt to determine the influence of electromagnetic noise on the RNGs? Why did they shield those RNGs? Do you think that a scientist who has titles and degrees would risk them by popping up on the science arena without even bothering to make preliminary checks on the equipment they would use? If you say the RNGs are not shielded enough, where is your proof, or is there any proof published by somebody? You're just making up a story here: "If they would, they would probably find it is of much greater influence than consciousness". You should know that Wikipedia is not a proper arena for OR and synthesis. That's why users can't base the edits they make in articles on debates held on talk pages. Users should bring reliable secondary sources. Sorry, but statements full of "guess"es, "would"s and "could"s do not qualify as sources on Wikipedia.
I'm repeating once again: unless somebody comes up with a source (not a lame skeptic blog or something; it should be either a published paper or a reliable book) stating that this project is pseudoscience with some valid arguments (not sweeping, emotional personal opinions), I'm going to remove the pseudoscience categorisation from the article. Logos5557 (talk) 21:32, 26 September 2009 (UTC)
The literature shows clear evidence of significant (by the experiment's own standards) non-randomness in the baseline tests, at times when the RNGs should not have shown significant non-randomness. Rather than questioning whether the RNGs were properly randomizing, the researchers just said "well, we must have psychically influenced the RNGs during calibration", as if that somehow circumvented the glaring flaw in the experimental equipment. Simonm223 (talk) 01:12, 27 September 2009 (UTC)
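(As a technical aside on what a baseline randomness check involves: the comments above and below describe trials of 200 sampled bits, with an expected trial mean of 100 under chance. A minimal sketch of such a check, using simulated rather than real REG data, might look like the following. The function names and trial counts are illustrative assumptions, not the project's actual code.)

```python
# Hypothetical sketch of a baseline randomness check for trial-based
# binary data (200 bits per trial, as in the PEAR/GCP descriptions).
# The data here are simulated with Python's PRNG, not real REG output.
import math
import random

random.seed(42)

BITS_PER_TRIAL = 200
P = 0.5
MU = BITS_PER_TRIAL * P                          # expected trial mean: 100
SIGMA = math.sqrt(BITS_PER_TRIAL * P * (1 - P))  # per-trial sd: ~7.07

def trial_sum(rng=random):
    """One trial: the sum of 200 simulated fair bits."""
    return sum(rng.getrandbits(1) for _ in range(BITS_PER_TRIAL))

def baseline_z(trials):
    """Z-score of the composite deviation of n trial means from chance."""
    n = len(trials)
    total_dev = sum(t - MU for t in trials)
    return total_dev / (SIGMA * math.sqrt(n))

trials = [trial_sum() for _ in range(1000)]
z = baseline_z(trials)
# Two-tailed p-value via the complementary error function:
p_two_tailed = math.erfc(abs(z) / math.sqrt(2))
print(f"z = {z:.3f}, two-tailed p = {p_two_tailed:.3f}")
```

A genuinely random source should produce a small |z| and an unremarkable p-value on such a check most of the time; a persistent baseline drift is exactly what a check like this would flag.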
I started out asking to stop this pointless discussion, and now I am discussing. So this is my last response on the matter. You know what measurements I mean; they have been running these measurements for years now, sometimes picking out a sequence and connecting it to a global event when it seemed appropriate. But their flagship, the WTC attack, crumbled in the hands of May and Spottiswoode.
I know they went into the lab, but what they designed is not a consciousness sensor that answers to any scientific standard of quality. You are asking me a lot of questions: how do you know this, how do you know that. I can only respond by asking, why don't they show this, why don't they prove that? It is not up to me; it is up to them. I am not credulous; they should be thorough. Any good scientist publishes his findings only after solid research; it is a sign of pseudoscience to publish seemingly spectacular results up front, outside the scientific domain, on a popular-science-style website. The truth is, real science doesn't want to touch projects like these with a ten-foot pole, because they don't want their good name dirtied. But the lack of criticism leaves the general impression that this is real scientific research. Unless you find a scientific publication that discredits (and I mean discredits, not criticizes) the results of May and Spottiswoode, I am not responding any more to this pointless argument. Jan Arkesteijn (talk) 09:43, 27 September 2009 (UTC)
Ok, since there seems to be no reputable source on the horizon stating that this study is pseudoscience, I'm removing the pseudoscience category from the article. Once somebody comes up and justifies it with evidence instead of personal opinions on the subject matter, he/she can add it again. Where is the evidence/literature about the non-randomness in the baseline tests? Don't forget: if there is any, the tests should be on the equipment used in the GCP, not on some other equipment used in other experiments. Logos5557 (talk) 07:32, 27 September 2009 (UTC)
I was here to remove the pointless/baseless categorisation of pseudoscience, which is not backed by any source (not even by May and Spottiswoode). Users cannot synthesize "things" out of publications, or from their ingenious "would"s, "could"s and "probably"s. I don't think that the users claiming pseudoscience here spent enough time checking the findings before coming up with such judgements; that's what should be opposed. The latest edits you made have nothing to do with the project, as you are very well aware; there is no mention of the project in Stanley's article, and in fact that article is about another study. Therefore, please do not edit war any more and accept this plain fact. Logos5557 (talk) 14:03, 27 September 2009 (UTC)

This article's inclusion in Category:Parapsychology doesn't seem to be in dispute. Because Category:Parapsychology is a subcategory of Category:Pseudoscience, I think some of the heat about whether to directly cat this in the latter can simmer down just a little bit. What would be more worthwhile is finding a source to cite in the text that identifies this topic as pseudoscientific -- which I'm sure is the case. Once that's settled, whatever (un)official MOS exists for Category:Pseudoscience can dictate whether it's also listed in that over-category. --EEMIV (talk) 02:26, 28 September 2009 (UTC)

I hadn't noticed that Category:Parapsychology is a subcategory of Category:Pseudoscience. That is also questionable, because there is no consensus among scientists on classifying Parapsychology as Pseudoscience. Logos5557 (talk) 08:21, 28 September 2009 (UTC)
Including both categories is covered by Categories and subcategories. Please start a discussion at Category talk:Parapsychology rather than here if you think it is miscategorized. - 2/0 (cont.) 16:23, 28 September 2009 (UTC)
Good point; that creates new alternatives. Since there have already been disputes over subcategorizing parapsychology under pseudoscience, I don't see any value in taking that road. Instead, the parapsychology categorization may be questioned as well. Logos5557 (talk) 16:56, 28 September 2009 (UTC)
I found sources that call the GCP pseudoscience; the problem is that they are all primary sources, i.e. debunkers: skepticnews and skeptoid, and this skeptico interview with noted debunker and scientist Phil Plait. It may be very difficult to find good sources to label the GCP as pseudoscience, because respected journals don't waste their time trying to debunk these kinds of things, and disproving fringe science usually isn't particularly newsworthy, so news sources don't normally run these types of stories either. Voiceofreason01 (talk) 18:06, 28 September 2009 (UTC)
Plait would be a suitable source considering his expert status and the fringe nature of this article. Verbal chat 19:18, 28 September 2009 (UTC)
Considering the controversial position of JREF (of which he is the president) against these kinds of topics, Plait is not a suitable neutral source for justifying the pseudoscience categorization. As I mentioned in the 3RR section below, instead of categorising this article based on the controversial ideas/opinions of some individuals, users can insert a sentence into the article stating that Plait labels the GCP as pseudoscience. That's the only neutral solution. Logos5557 (talk) 19:39, 28 September 2009 (UTC)
That would seem like a suitable compromise to me. Voiceofreason01 (talk) 16:28, 29 September 2009 (UTC)

Stanley Jeffers

Is an associate professor in physics at York University, very valid source. Simonm223 (talk) 18:55, 25 September 2009 (UTC)

No doubt about that, but we have to remove the section (and the reference) you added to the article, because that article is entirely about the first PEAR study, which was carried out with individuals to test their "mind powers". It is not even about the latest findings but about ones dating from 1982 or so, as stated by our respected scientist Stanley Jeffers: "However, in this article I will take a critical look only at the first group of experiments." In any case, Jeffers' article is not about the GCP, its hardware, the methodology used, or the results. Therefore we should remove it. Would you like to do that yourself, or would you like someone else to get involved? You can relocate that part to a relevant article such as Princeton Engineering Anomalies Research Lab. Logos5557 (talk) 18:25, 26 September 2009 (UTC)
GCP is a continuation of PEAR. Please cease POV edits. Simonm223 (talk) 14:26, 27 September 2009 (UTC)
I'm sorry, but you and some others are the ones who should cease POV edits. The material you're trying to insert is entirely synthesis (WP:Synthesis). Logos5557 (talk) 14:30, 27 September 2009 (UTC)

Although Jeffers' article belongs in the PEAR article, I guess the problems with the criticism in Jeffers' article should also be mentioned here, because some users think that Jeffers' criticism applies to the Global Consciousness Project as well and even falsifies its results:

1- We understand from the CSI article that Jeffers himself was involved, with some other scientists, in separate experiments to test the "mind powers" of some individuals. The methodology he used was different from the one Jahn and Dunne used: he compared two different sets of data taken from RNGs, one set with "mind powers on" and the other with "mind powers off". He seems, in a sense, to treat "mind powers off" as a kind of calibration for evaluating the randomness of the RNG. He criticizes Jahn and Dunne for running calibration tests only occasionally (though we don't know the details of the calibration done by Jahn and Dunne) rather than after each "mind powers on" run. He contends that his methodology is scientifically more sound. He states that Dobyns disputed his claim in the paper here.

2- Jeffers' second criticism concerns the cumulative results. He states that while the baseline in fig. 3 of Jahn and Dunne's paper here seems to support their claims, fig. 4 shows the baseline deviating considerably from the statistically expected deviation (theoretical chance expectation; the p=.05 envelope/curve), which, he believes, is a sign of non-randomness in the RNG used in those experiments. To make clear what HI, LO and baseline mean, from the paper of Jahn and Dunne: "The operators attempted, following pre-recorded intentions, to induce the device to yield higher, lower, or undeviated (baseline) mean values of its output distributions" "As displayed in Figures 1a through c and Refs. 1–3, it was indisputably evident that this operator had succeeded in shifting the mean of the high-intention (HI) and low-intention (LO) outputs in the intended directions, while the null-intention or baseline (BL) data were indistinguishable from calibration or theoretical chance expectation" "These initial results immediately raised a ladder of derivative questions: 1. Could this same operator continue to produce anomalous correlations with a high degree of replicability? 2. Could other operators produce similar effects? 3. If so, how did their individual results differ? 4. Could structural features of their output distributions other than the means be affected? 5. What personal characteristics of the operators were relevant? 6. What operator strategies or protocol variants were most effective? 7. How important was the mode of feedback provided to the operators? 8. Were the details of the random source important to the occurrence or scale of the effect? 9. What were the spatial and temporal dependencies? 10. Could pseudorandom or other deterministic sources be similarly affected? 11. What forms of theoretical model could be posed to accommodate such effects?" 
"Question #1 has been answered affirmatively over many subsequent years of continuing participation of the original operator in this and other closely related REG experiments. For example, Figure 3 shows the cumulation of results of some 125,000 trials, each comprising 200 sampled bits in each of the three intentions, acquired by this same person over the first decade of the PEAR program.". And the most important part is about fig. 4: "The second question has been addressed over the same period by the deployment on the same experiment of 90 other volunteer operators, all anonymous and claiming no special talents, with the results displayed in Figure 4. From their composite database, three features have emerged: a) statistically significant deviations of the HI (p ≈ 0.0004), LO (p ≈ 0.02), and HI – LO (p ≈ 0.0001) data from chance expectation have been maintained; b) the average effect sizes in this database are slightly smaller than those of the original operator; and c) the baseline data also display a positive secular drift which, while not statistically significant (two-tailed statistics required in absence of an intended direction), nonetheless hints at more subtle operator influences. Throughout this extended period of experimentation, the unattended calibration data continued to fall well within chance behavior." So, as is clear from these quotes, fig. 4 shows the cumulative results of the 90 other operators claiming no special talents (I guess there is a typo in fig. 4; it states 91 operators instead of 90). Jeffers focuses on the very small deviation of the baseline in fig. 4 from theoretical chance expectation (the p=.05 envelope/curve), instead of on the large deviations of HI and LO from theoretical chance expectation. 
He claims that the very small deviation of the baseline in fig. 4 from theoretical chance expectation (the p=.05 envelope/curve) is proof of the non-randomness of the device (RNG) used in the experiment, which was "a first-generation random event generator (REG) based on a commercial noise diode". However, contrary to Jeffers' claim/belief, "baseline" or "null-intention" data are not some sort of "unattended calibration data". Jahn and Dunne think that the baseline deviation in fig. 4 hints at more subtle operator influences: "c) the baseline data also display a positive secular drift which, while not statistically significant (two-tailed statistics required in absence of an intended direction), nonetheless hints at more subtle operator influences". Instead of focusing on the results of the 90 other operators claiming no special talents, we should focus on the results of the 1st operator (whose results suggest he/she might have some talents), as stated here: "Note that although the initial rates of anomalous correlation with the directions of intention have not been fully sustained, the overall secular progress of the deviations from theoretical mean expectation, calibration, or baseline results over this huge composite database have continued to carry the HI, LO, and HI – LO terminal probabilities well beyond any reasonable chance interpretation (pHI = 2 x 10E-6; pLO = 5 x 10E-4; pHI-LO = 10E-8)." Logos5557 (talk) 18:50, 29 September 2009 (UTC)
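(A technical note on the one- vs two-tailed distinction quoted above: HI and LO runs have a predicted direction, so a one-tailed test applies, whereas the baseline has no intended direction, so the same z-score must be evaluated two-tailed, yielding a p-value twice as large. A small illustrative sketch, with an example z-score that is an assumption, not a figure from the paper:)

```python
# Illustrates why an undirected baseline drift is held to a stricter
# (two-tailed) standard than a directed HI/LO deviation (one-tailed).
# The z-score below is an arbitrary example, not data from the experiment.
import math

def p_one_tailed(z):
    """P(Z >= z) for a standard normal: used when a direction is predicted."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def p_two_tailed(z):
    """P(|Z| >= z): used for undirected (e.g. baseline) deviations."""
    return math.erfc(abs(z) / math.sqrt(2))

z = 1.8  # example z-score for a modest cumulative drift
print(f"one-tailed p = {p_one_tailed(z):.4f}")  # below .05: nominally significant
print(f"two-tailed p = {p_two_tailed(z):.4f}")  # above .05: not significant
```

So a drift that would look "significant" if a direction had been predicted can fall short of significance when, as with the baseline, no direction was specified in advance.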