Jon Krosnick

From Wikipedia, the free encyclopedia

Jon Alexander Krosnick is a professor of Political Science, Communication, and (by courtesy) Psychology, and director of the Political Psychology Research Group (PPRG) at Stanford University. Additionally, he is the Frederic O. Glover Professor in Humanities and Social Sciences and an affiliate of the Woods Institute for the Environment. One major focus of his research has been questionnaire design and survey research methods. Krosnick has studied the psychology of attitudes, voter choice behavior, and public opinion on global warming. He has been a co-principal investigator of the American National Election Study, the nation's longest-running and most comprehensive academic research project exploring voter decision-making.[1] Krosnick has served as a consultant for government agencies, universities, and businesses and has testified as an expert in court proceedings. He has also been an on-air television commentator on election night.[2]

Krosnick's work focuses on the design and methodology of questionnaires and surveys. He was a principal investigator leading the American National Election Studies from 2005 to 2009, along with Arthur Lupia of the University of Michigan.[3] He was a member of the National Election Study Ad Hoc Committee on Survey Mode, which compiled a report for the National Election Study Board of Overseers on the pros and cons of moving from face-to-face to telephone interviews.[4] He has also studied the psychology of attitudes and researched how voters make up their minds and how campaigns influence them.[1] He has conducted research on American attitudes toward global warming, the effect of campaign negativity on turnout, and ballot order effects.

Personal life

Krosnick's mother was an educator and opera singer, and his father was a physician[5][6][7] who was a diabetes specialist, professor, expert witness, and researcher.[7][8][9] Jon Krosnick has a sister, Jody Arlyn,[10] who is a surgeon.[9] He became interested in music at an early age, starting piano lessons at age 6 and attending the National Music Camp at Interlochen at age 9, where he first encountered jazz drummer Peter Erskine, who would later become a major musical influence and a personal friend. Krosnick continued playing percussion from elementary school on, performing as a soloist with various orchestras, including the Philadelphia Orchestra, and playing jazz with many ensembles, including the electric jazz band Charged Particles.[6]

Krosnick attended the Lawrenceville School in Lawrenceville, New Jersey, graduating in 1976.[11] He graduated magna cum laude from Harvard University in 1980 with a B.A. in Psychology, then received an M.A. in 1983 and a PhD in Social Psychology in 1986 from the University of Michigan, Ann Arbor.[5][2] On June 1, 1986, Krosnick married Catherine Ann Heaney.[5] He joined the departments of psychology and political science at Ohio State University, Columbus, as a lecturer in 1985, became an assistant professor in 1986, and was promoted to associate professor in 1991.[5][6] He later became a full professor, was a member of the Ohio State University (OSU) political psychology program, and co-directed the OSU Summer Institute in Political Psychology. In 2004, Krosnick became a professor at Stanford, where his wife also accepted a faculty position. The couple have a daughter who earned her undergraduate degree at Stanford and is now a PhD student at Columbia's Mailman School of Public Health. Jon and Catherine live in Portola Valley, next to Stanford.[6] Like his parents' home,[12] Krosnick's under-construction home was seriously damaged by a fire in 2016.[13]

Work in survey methodology

Questionnaire design

The largest area of Krosnick's work in survey methodology is questionnaire design. His article "Optimizing Survey Questionnaire Design in Political Science: Insights from Psychology," co-authored with Josh Pasek, "shows the general principles of good questionnaire design, desirable choices to make when designing new questions, biases in some question formats and ways to avoid them, and strategies for reporting survey results."[1]

When deciding whether to use ratings or rankings as response options for a question, Krosnick and Alwin found, in a study with thirteen rating scales, that 42 percent of individuals evaluated nine or more of the objects identically.[2] Krosnick argues that such non-differentiation is most likely to occur under conditions that foster satisficing.[3] He has also found that although ranking questions take more time, rankings yield responses that are less distorted by satisficing and are more reliable and valid than ratings.[4]
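The non-differentiation screen described above lends itself to a short sketch. The function below flags respondents who rate at least nine of thirteen objects identically, mirroring the Krosnick–Alwin criterion; the two-respondent dataset is invented purely for illustration.

```python
from collections import Counter

def nondifferentiation_rate(ratings, threshold=9):
    """Fraction of respondents who gave the same rating to at least
    `threshold` items -- a common screen for satisficing."""
    flagged = [max(Counter(row).values()) >= threshold for row in ratings]
    return sum(flagged) / len(flagged)

# Invented data: one differentiating respondent, one near-straightliner.
sample = [
    [1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3],   # varied answers
    [4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 4, 4, 4],   # 12 identical answers
]
rate = nondifferentiation_rate(sample)  # 0.5 -- one of two respondents flagged
```

In the actual study the corresponding figure was 42 percent of respondents; the threshold and item count here simply follow the numbers quoted above.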

In a series of studies designed to test the effects of offering a "don't know" response option, Krosnick and his colleagues found that candidate preferences predict actual votes better when researchers discourage "don't know" responses.[5] This is one reason Krosnick argues that discouraging "don't know" responses yields more valid data than encouraging them. In the same article, Krosnick notes that respondents who are truly unfamiliar with the topic of a question will say so when probed, and that answer can be accepted at that point, avoiding the measurement of non-existent "opinions." Because many people who initially say "don't know" do in fact hold a substantive opinion, researchers are best served by discouraging these responses in surveys.[6]

Krosnick has also documented response order effects, another form of satisficing in which a respondent chooses the first plausible response option he or she considers.[7][8] Two types of effects occur: a primacy effect, the tendency to choose items at the beginning of a list of options, and a recency effect, the tendency to choose items at the end. To reduce response order effects, Krosnick suggests that researchers use what are called seemingly open-ended questions.[9]

Survey response rates

Krosnick and colleagues compared Internet, telephone, and face-to-face (FTF) surveys of probability samples and found that people give socially desirable answers more often in telephone surveys than in the other two modes, and that face-to-face respondents answer more accurately than telephone respondents. Since face-to-face interviews are costly, Krosnick conducted a study providing computers and an Internet connection to a set of randomly sampled people and inviting them to answer survey questions online over a year. After subtracting those who refused to participate, this method produces samples that proportionately reflect the population counts of various groups.[14][15]

Krosnick's research has also examined the effects of response rates on the accuracy of survey results; his work indicates that women were overrepresented in RDD (random digit dialing) surveys relative to the population.[10] In the same analysis, Krosnick found that an RDD survey sample included more high-income respondents and fewer low-income respondents than the population.[11]

Opt-in surveys

Krosnick has published studies questioning the use of Internet opt-in surveys, which do not yield random samples because participants are self-selected. Along with David Yeager, Krosnick concluded that such surveys produced results differing from traditional surveys even after they were statistically adjusted to offset their non-random nature.[16] Another study found such surveys could not be used to compare how a group's behavior or attitudes changed over time, or how responses to different issues related to one another.[17] Krosnick and Yeager used the same procedure to weight the raw data demographically so that their surveys were equally representative in terms of gender, age, race, and so on. They then calculated the average error for the surveys on 13 additional measures of "secondary demographics" and other non-demographic factors.[18] The responses of opt-in Internet surveys differed from those of traditional surveys. Krosnick reached similar conclusions using two surveys collected for the U.S. Census Bureau, one a traditional poll and the other an Internet opt-in survey.[16]
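The weighting-and-comparison procedure can be sketched schematically. This is a minimal illustration, not Yeager and Krosnick's actual code: the demographic cells, population shares, and benchmark figures below are all hypothetical, and real adjustments use many more cells (often via raking).

```python
def poststratify_weights(sample_props, pop_props):
    """Cell weight = population share / sample share, so that weighted
    sample proportions match the population on the chosen cells."""
    return {cell: pop_props[cell] / sample_props[cell] for cell in sample_props}

def average_absolute_error(estimates, benchmarks):
    """Mean absolute difference (in percentage points) between survey
    estimates and benchmark values -- the summary used to compare
    surveys on secondary measures."""
    diffs = [abs(estimates[k] - benchmarks[k]) for k in benchmarks]
    return sum(diffs) / len(diffs)

# Hypothetical figures for illustration only.
weights = poststratify_weights({"men": 0.40, "women": 0.60},
                               {"men": 0.49, "women": 0.51})
err = average_absolute_error({"smokers": 24.0, "veterans": 9.0},
                             {"smokers": 20.0, "veterans": 10.0})
# weights["men"] is 1.225; err is 2.5 percentage points
```

With 13 secondary measures rather than the two shown here, the same `average_absolute_error` summary yields a single accuracy score per survey, which is how the opt-in and probability samples were compared.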

In another study, Krosnick and a collaborator, LinChiat Chang, compared probability samples interviewed by telephone and via the Internet with an opt-in Internet sample. The latter proved less representative of the population demographically and over-represented people with high interest in the survey's topic.[12] To explore the generalizability of these findings, Krosnick, along with David Yeager and other colleagues, collected data on a variety of topics via an RDD telephone survey, an Internet survey of a probability sample, and Internet surveys of seven non-probability opt-in samples of American adults. The estimates from each survey were then compared to benchmarks from official government records or high-quality federal surveys with very high response rates.[13] Using a sample of 1,000 participants, the results showed that all of the non-probability sample Internet surveys were significantly less accurate than the probability sample Internet survey on primary demographics, and all but one were significantly less accurate than the telephone survey.[14]

Work in political psychology

Issue publics studies

Ballot order studies

Krosnick and a colleague, analyzing data from an Ohio election, concluded that the candidate listed first on the ballot received roughly 2 percentage points more of the vote in about half of the races they studied.[19][20] The effect was stronger in races where voters had no clear a priori choice.[20] While the effect had been suspected for more than a century, the study produced systematic evidence for it.[21] Krosnick's testimony to this effect led a court to invalidate an election in Compton, California, and the effect has been observed in other jurisdictions as well.[19]

Krosnick and others studied the 2000 U.S. presidential election in Ohio, California, and North Dakota and found that candidates gained votes when listed first on the ballot rather than later.[22] Krosnick hypothesized that the effect may arise from voters who feel compelled to cast a vote choosing the first option on the list.[21] He believes Bush benefited from this effect in the 2000 presidential election in Florida,[23] and that the exit poll of the 2004 U.S. presidential election was skewed toward the Democratic candidate, John Kerry, because he was listed first on the questionnaire.[24]

News media priming studies

Studies of racism

Between 2008 and 2012, Krosnick helped develop surveys with the Associated Press to measure racial attitudes in the U.S. The surveys revealed that both implicit and explicit racism had increased in America since Obama's election in 2008. When tested for explicit anti-black attitudes, 51% of Americans expressed them, compared to 48% in 2008; the share of Americans with implicit anti-black attitudes rose to 56% from 49%.[25] Many black Americans also reported perceiving antagonism since Obama took office.[26] The percentage of non-Hispanic whites who expressed anti-Hispanic attitudes rose from 52% to 57% between 2011 and 2012.[27] These attitudes were estimated to cost Obama 2% of the popular vote in the 2012 election.[25]

In the survey, conducted online, respondents were shown a picture of a black, Hispanic, or white male before a neutral image and were then asked to rate their feelings toward the neutral image. These feelings were taken as a measure of implicit racism toward the prior image. Responses were correlated with age, partisan identification, and views on Obama.[25]

Studies in voter turnout

Among his work in political psychology, Krosnick has studied the psychology behind voter turnout. In 2008, he published "Why do people vote? A psychological analysis of the causes of voter turnout," which identified several factors that increase or depress turnout during elections, among them age, race, residential mobility, and marital status.[28] It also showed that, contrary to popular belief, an increased sense of diversity within communities actually discouraged people from voting.[29] The report also identified the methods candidates could use most effectively to increase turnout. Of common campaign practices, Krosnick's study found canvassing to be the most effective way to increase voter turnout, whereas practices such as phone calls to people's homes seemed to have no effect at all. The study also found that involving people in civic service made them more likely to vote in coming elections.[30]

Krosnick later traveled to Washington to present studies on voting psychology at the annual meeting of the American Political Science Association.[31] This study drew on data from the National Election Study (NES), which has been funded by the National Science Foundation for the past 30 years and involved researchers from Princeton, Northwestern, and the University of Chicago.[31][32] It spans a 16-year period and involved more than 5,000 Americans in face-to-face interviews over the course of four elections. The resulting analysis of voter turnout was part of a larger study that involved NES data from seven presidential elections and more than 25,000 respondents. These studies suggested a new way of thinking about voter decision-making that, according to Krosnick, was more consistent with psychological theory than the reigning theories in political science at the time.[31]

One result of the study indicated that higher voter turnout occurred when one candidate was disliked to the point of being seen as a threat while the other was perceived as a hero. Subjects who liked both candidates were less likely to vote, even if they liked one significantly more than the other; the same held for subjects who disliked both candidates, because in these cases voters would be equally happy or unhappy with either outcome.[31] The studies also indicated that mudslinging in political campaigns effectively increased voter turnout, provided that candidates vilified their opponents tastefully without tarnishing their own image. The study also revealed that once people liked or disliked a candidate at first encounter, their opinion was difficult to change later.[32] In fact, Krosnick's studies show that people become more resistant to changing their views as they learn more about a candidate. At the start of a campaign, most candidates are viewed in a mildly positive light; as they present their positions, impressions solidify, so information gained earlier in the campaign tends to have a greater impact. Krosnick calls this the "asymmetrical" model of voting behavior.[33] It suggests that the common campaign strategy of saving money to advertise more heavily at the end of a campaign is wrong.[32]

Studies on the Patient Protection and Affordable Care Act

In 2010 and 2012, Krosnick conducted national surveys to explore Americans' understanding of the Patient Protection and Affordable Care Act, better known as Obamacare.[34] Over 2,600 survey participants answered 18 questions about whether a given provision was in the bill and rated how certain they were of each answer. None of the participants answered every question correctly,[35] and only 14% answered a majority of questions correctly with high certainty.[36] Besides not knowing about provisions that were in the bill, participants also had trouble identifying provisions that were not in it. For instance, only 17% of survey-takers were confident that the bill did not contain death panels, only 11% recognized that there was no provision giving illegal immigrants free healthcare, and all but 14% thought the bill required smokers to pay $1,000 per year.[35]

Krosnick's study further found that the more accurately people understood the bill, the more likely they were to support it. The majority of respondents favored nine of 12 provisions in the legislation.[36] The only three components not supported by an American majority were: "U.S. citizens without health insurance have to pay fines if they don't have specific reasons," "New fees for companies that make drugs," and "New fees for health insurance companies."[36] The research team concluded that if everyone in America knew enough to answer all the questions correctly, the approval rating of Obamacare would rise from 32% to 70%.[35]

Work in climate change

Krosnick has both conducted surveys on global warming and analyzed previous ones, some as part of his work at Stanford's Woods Institute for the Environment. A 2007 survey found that most Americans accepted global warming, but a two-thirds majority were not convinced that significant efforts were needed to stop it. In Krosnick's view, scientists found this lack of public concern a problem. He considered media coverage that gave equal weight to both sides of the debate, rather than reflecting how views were distributed among experts, a prime reason the public did not believe scientists were united on the issue. He also analyzed a 2006 poll by ABC News, TIME, and Stanford, which showed that the public had grown more concerned about global warming over the previous decade, with more than two-thirds believing that human activity was causing unsettled weather patterns. Krosnick believes that not acting now will cost the world more in the future.[37]

Studies in public belief and trust

Starting in 2008, polls began to show a decline in the percentage of Americans who believed there was solid evidence for global warming and considered it a serious problem,[38] from 80% in 2008 to 75% in late 2009.[39] In response, Krosnick conducted surveys and drew his own conclusions about this supposed dip in public belief.

Krosnick, who has run polls on public attitudes toward global warming since 2006, conducted a 2010 survey of 1,000 Americans using the same questions as in previous years, plus new questions about recent controversies.[40] One concerned the 2009 hacking of the email archive of the Climate Research Unit of the University of East Anglia; the retrieved emails supposedly revealed extensive data manipulation in climate research.[41] Krosnick's surveys revealed that of the 32% of subjects aware of this controversy, 9% believed it indicated that climate scientists should not be trusted. A subsequent controversy involved the fourth IPCC report on climate change: of the 13% of subjects who knew about it, 54% believed it indicated that climate scientists were untrustworthy.[40]

As for the apparent skepticism among Americans toward global warming, Krosnick believed the dip resulted not from an actual decline in public belief in global warming but from the survey questions themselves. For instance, one of the central questions in the Pew Research survey was, "From what you've read and heard, is there solid evidence that the average temperature on earth has been getting warmer over the past few decades, or not?" Krosnick argued that this wording taints the question's intent and results:[42] it asks not about the respondent's own beliefs concerning global warming but about what he or she has heard or read. Krosnick also critiqued a question used in repeated Gallup surveys: "Thinking about what is said in the news, in your view, is the seriousness of global warming generally exaggerated, generally correct, or generally underestimated?" The share of respondents answering "generally exaggerated" rose from 30% to 48% between 2006 and 2010, but Krosnick noted that, given the question's wording, this increase could reflect either a change in views on global warming or a change in the media.[40]

Since 2009, Krosnick's findings have diverged from those of other organizations. As of 2012, Gallup and Pew polls reported that the share of Americans who believe in global warming hovers around 50%, whereas Krosnick's latest poll put it at 83%. His poll also indicated that a majority of believers think fossil fuels and human activities contribute to the phenomenon. Krosnick also asked two questions: "What is the most important problem facing this country today?" and "What will be the most important problem facing the world in the future if nothing is done to stop it?" In response to the first, respondents ranked the economy first and global warming last; for the second, the results were reversed.[citation needed] His surveys have also indicated that 85% of Americans accept the idea of global warming and endorse steps to address it, even if higher costs are necessary. Krosnick has acknowledged that such high levels of agreement are rare on major questions of foreign policy, and that the key division among the public lies in trust of the scientists who study climate change.[43]

Advocacy and climate change

In 2012, Krosnick conducted another study prompted by a recent small dip in public belief in climate change. A national survey revealed that low-income and less educated respondents were willing to trust a scientist who presented evidence for global warming, until that same scientist began urging listeners to pressure their government toward greener policies. At that point, viewers became suspicious of the scientist's motives and, by extension, of the science he had presented.[44]

To come to this conclusion, Krosnick recruited a national sample of 793 Americans and split them into three groups to view three videos: a video of a scientist talking about the science of climate change, that same video with an added appeal to demand action from political representatives, and a video about making meatloaf as a control.[44][45] After each group viewed their respective video, they filled out a survey on their attitudes toward global warming.[45]

Krosnick found that subjects who watched the scientist discuss climate change gave responses similar to those of the group that watched the meatloaf video.[45] But in the group that saw the scientist make a political appeal after his discussion, trust in the scientist fell 16 percentage points (from 48% to 32%), belief in the scientist's accuracy fell from 47 to 36 percent, overall trust in all scientists fell from 60 to 52 percent, belief that government should "do a lot" to stop climate change fell from 62 to 49 percent, and belief that humans caused climate change fell from 81 to 67 percent.[44]

These changes occurred only in a cohort of 548 respondents who either had an income below $50,000 or no more than a high school diploma; more educated or wealthier respondents showed no significant reaction.[44]

Climate change and voting

Krosnick has also combined his work on global warming and voter choice in two studies. The first was based on data collected from randomly selected households before and after the 2008 election; the surveys asked voters' opinions of McCain's and Obama's climate change policies before the election, then asked after the election whom they had voted for. He then conducted a similar study on climate change and the 2010 congressional election. Both studies implied that Democrats who vigorously pursued green goals garnered more votes than Democrats who remained silent, while Republicans who took "not-green" positions won fewer votes than Republicans who stayed silent.[46] The studies reflect growing concern over climate change in America and the ways those concerns affect elections.

Krosnick also authored a study identifying a subset of single-issue voters who could be compelled to turn out if candidates appealed to them on climate change. Krosnick argued that, by speaking on climate change, candidates could enhance turnout and attract voters, especially in a political climate where neither candidate is a clear winner on the most significant issues.[47]

Work in attitude research

Krosnick has investigated in detail how attitudes are formed and how they relate to survey responses. He has modeled affect, the emotional component that influences attitudes, within a framework for long-term memory analogous to a computer's short-term random-access memory and longer-term disk storage.[48] Long-term memory is posited to consist of interconnected nodes, and Krosnick models affect as tags attached to the node for, say, a political candidate, weighting it and influencing other nodes through its connections.[49] The well-informed and politically savvy are expected to have more developed network structures of such nodes.[50]

Krosnick has also researched attitude strength, which he treats as a subjective element,[51] with one possible measure being the attachment to a topic that a respondent expresses in a self-report survey. Using the statistical technique of factor analysis, he showed that this form of attitude strength has four distinct dimensions: the polarized, positive or negative intensity (valence) of attitudes; the ease of retrieval of associated memories (accessibility); the personal beliefs driving the attitudes; and the degree of thinking done on the subject.[52]
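A toy version of such a dimensional analysis can be sketched with an eigenanalysis of the item correlation matrix, using the Kaiser eigenvalue-greater-than-one rule as a simple stand-in for the full factor-analytic machinery used in the actual research. The two latent dimensions and six survey items below are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Two hypothetical latent attitude-strength dimensions (invented data).
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
# Six survey items: three load on each latent factor, plus measurement noise.
items = np.column_stack([f1 + 0.3 * rng.normal(size=n) for _ in range(3)] +
                        [f2 + 0.3 * rng.normal(size=n) for _ in range(3)])
corr = np.corrcoef(items, rowvar=False)            # 6 x 6 item correlations
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # eigenvalues, descending
n_factors = int(np.sum(eigvals > 1.0))             # Kaiser retention criterion
# n_factors recovers the 2 dimensions built into the synthetic data
```

Applied to real self-report items on attachment to a topic, the same logic (with proper factor extraction and rotation) is what separates dimensions such as valence and accessibility.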

On the practical question of how attitudes affect survey results, Krosnick, in line with other studies, has looked separately at well-informed subjects aware of political issues and at ill-informed or unmotivated respondents. In joint research with colleagues, he found that knowledgeable subjects used different organized cognitive patterns of thought (schemas) and information-processing strategies than naïve or undermotivated subjects.[53] Counterintuitively, in certain circumstances the experts were easier to prime with specific appeals or political advertisements.[54] The other group tended to generate more evasive answers avoiding the question,[55] especially when the issue was not considered relevant.[56] Resulting biases included a tendency to settle on the midpoint of a scale with an odd number of divisions, greater susceptibility to leading questions, and answering most questions with the same number on a scale, especially toward the end of the survey, a form of satisficing.[57] Together these increased the likelihood and magnitude of measurement error for such respondents.[58]

Congressional testimony

National Aviation Operations Monitoring Service

The National Aviation Operations Monitoring Service (NAOMS) was an $11.5 million research and development project by NASA using survey methods to measure aviation safety.[59] The program was created in response to the goal set by the White House Commission on Aviation Safety and Security in 1996 to reduce the risk of air travel accidents by 80 percent over the next 10 years.[60] Krosnick was the lead consultant in developing and implementing the NAOMS survey methodology.[61][62]

While plane crashes remain rare, NAOMS sought to identify and reduce accident precursors and potential safety issues by regularly surveying commercial pilots, general aviation pilots, ground and flight crew members, and air traffic controllers.[63] The project was designed to provide broad, long-term measures of trends and to gauge the effects of new technologies and aviation safety policies.[64] The survey achieved an 80 percent response rate, interviewing a random sample of pilots about safety incidents.[65][66]

In 2004, NAOMS researchers finished collecting data on the first cohort of pilots, having conducted about 24,000 interviews. To some observers, preliminary findings suggested that some safety-related problems were occurring at astonishingly high rates, in some cases as much as four times the number previously reported by the FAA. The FAA was "extremely unhappy" with the results and called for the program to be shut down, and NASA soon cancelled it.[67] The House Committee on Science and Technology Subcommittee on Investigations and Oversight later investigated the FAA's role in the ending of NAOMS; Chairman Brad Miller (D-NC) stated that the subcommittee found the FAA did not support NAOMS.[68] In 2006, Associated Press reporter Rita Beamish filed a Freedom of Information Act request for the NAOMS data, which NASA rejected for 14 months.[67][69][70]

In a final denial letter to the AP, Thomas Luedtke, a senior NASA official, indicated that the data would not be released because the findings could damage public confidence in airlines and affect airline profits, while acknowledging that the NAOMS results "present a comprehensive picture of certain aspects of the U.S. commercial aviation industry."[70][71] Significant public criticism of NASA's refusal to release the data and its handling of NAOMS prompted Congress to launch an investigation into the matter.[72] Members of Congress from both parties were highly critical of NASA's handling of the matter and demanded that NASA release the NAOMS results. During an oversight hearing, NASA administrator Michael D. Griffin testified that Luedtke's reasoning was a mistake and that NASA would release the data. However, Griffin cast doubt on the reliability of the NAOMS data, cautioning that it had never been validated and warning that "there may be reason to question the validity of the methodology."[73] On January 1, 2007, Griffin released some of the NAOMS data.

Many disputed Griffin's criticism and defended NAOMS. The NAOMS survey methods were adapted from proven techniques that Krosnick and others had used in similar contexts in extensively peer-reviewed scientific studies,[74] and NAOMS itself had been thoroughly reviewed by internal and external experts.[74] The International Federation of Professional and Technical Engineers (IFPTE) sent a letter to Congressman Bart Gordon, then chairman of the House Committee on Science, Space and Technology, stating that "there was no valid scientific basis for the Administrator's technical criticism of the NAOMS project."[74] In a 2004 report, the National Academy of Sciences officially recommended that "NASA should combine NAOMS methodology and resources with the ASRS program data to identify aviation safety trends."[74] After thorough review, the Office of Management and Budget, which reviews all federal survey projects to ensure they are optimally designed, approved NAOMS.[62][74] The union representing the majority of commercial pilots in the United States deemed NAOMS "tremendously valuable."[75] In 2009, the Government Accountability Office investigated the NAOMS survey methodology and found that "the project was planned and developed in accordance with generally accepted principles of survey planning and design...[and] as a research and development project, NAOMS was a successful proof of concept."[76]

Work as an expert witness[edit]

Krosnick frequently works as an expert witness. For instance, he was retained by the attorney general of Oklahoma in a case against Tyson Foods, which was accused of polluting the Illinois River watershed.[77] In another case, Krosnick worked on behalf of Empire Blue Cross Blue Shield in a suit in which several tobacco companies were accused of engaging in deceptive practices designed to mislead the public about the harmful and addictive properties of cigarette smoking.[78]

Academic programs[edit]

Among the academic programs Krosnick directs at Stanford are the Political Psychology Research Group, which studies public and political issues[79] such as global warming,[80] and the Summer Institute in Political Psychology.[81] The Summer Institute in Political Psychology began in 1991 as an annual program at Ohio State University under the direction of Margaret Hermann and moved to Stanford's campus in 2005. Today the program offers a three-week training experience in political psychology for up to 60 participants.[82]


Books[edit]


  • Weisberg, H.; J. A. Krosnick; B. Bowen (1989). Introduction to survey research and data analysis. Chicago: Scott, Foresman.
  • Krosnick, Jon A., ed. (1990). Social Cognition. 8 (1). New York, NY: Guilford Press. doi:10.1521/soco.1990.8.1.1.
  • Petty, R. E.; J. A. Krosnick (1995). Attitude strength: Antecedents and consequences. Hillsdale, NJ: Erlbaum.
  • Weisberg, H.; Krosnick, J. A.; Bowen, B. D. (1996). An Introduction to Survey Research, Polling, and Data Analysis (3 ed.). Thousand Oaks, CA: Sage. ISBN 0-8039-7401-9.
  • Carson, R. T.; M. B. Conaway; W. Hanemann; J. A. Krosnick; R. C. Mitchell; S. Presser (2004). Valuing oil spill prevention: A case study of California's central coast. Dordrecht, The Netherlands: Kluwer Academic Publishers.
  • Krosnick, J. A.; L. R. Fabrigar (2006). The handbook of questionnaire design. New York: Oxford University Press.
  • Krosnick, Jon; Pasek, Josh (2010). "Optimizing survey questionnaire design in political science: Insights from psychology". In Leighley, Jan E. (ed.). The Oxford Handbook of American Elections and Political Behavior. Oxford University Press. doi:10.1093/oxfordhb/9780199235476.003.0003. ISBN 978-0-19-923547-6.
  • Callegaro, M.; R. Baker; J. Bethlehem; A. Göritz; J. A. Krosnick; P. J. Lavrakas (2013). Online panel research: A data quality perspective. New York: John Wiley and Sons.


  1. ^ a b c d e "Public Perceptions of Climate Change". American Meteorological Society. Retrieved October 12, 2012.
  2. ^ a b c Laura L. Carstensen and Christine R. Hartel, eds. (2006). "Appendix: Biographical sketches of committee members and contributors". When I'm 64: National Research Council (US) Committee on Aging Frontiers in Social Psychology, Personality, and Adult Developmental Psychology. National Academies Press. ISBN 0-309-65508-0. PMID 22379650.
  3. ^ "Previous principal investigators". American National Election Studies. Retrieved October 18, 2012.
  4. ^ Bradburn et al. 1999.
  5. ^ a b c d "Profile detail: Jon Alexander Krosnick". Marquis Who's Who. 2012. Retrieved October 1, 2012.
  6. ^ a b c d Elena Kadvany (July 24, 2012). "Feature story: Beating the drums for jazz". The Almanac: Menlo Park, Atherton, Portola Valley, Woodside. Retrieved October 12, 2012.
  7. ^ a b "Something of a druid George Nakashima gives old trees masterly new life". TIME. 133 (26). June 26, 1989. p. 75.
  8. ^ "News about diabetes". Post Herald, West Virginia. January 10, 1969. p. 10.
  9. ^ a b Jamie Saxon (November 5, 2003). "The Krosnicks' third act". U.S.1, Princeton, New Jersey.
  10. ^ "Weddings: Rodgers–Krosnick". Aiken Standard, South Carolina. September 23, 1990. p. 2D.
  11. ^ "Jon Krosnick '76 and Election 2008". Lawrenceville School. October 17, 2008. Archived from the original on December 28, 2012. Retrieved October 12, 2012.
  12. ^
  13. ^
  14. ^ Lisa Trei (September 27, 2006). "Social science researcher to overhaul survey methodology with $2 million grant". Stanford Report. Retrieved October 18, 2012.
  15. ^ Atkeson 2010, pp. 15–16.
  16. ^ a b Gary Langer (September 1, 2009). "Study Finds Trouble for Opt-in Internet Surveys". ABC News.
  17. ^ Gary Langer (March 21, 2011). "Study Raises New Questions for Opt-in Online Data". ABC News. Retrieved November 2, 2012.
  18. ^ Mark Blumenthal (January 10, 2011). "No Such Thing As A Perfect Sample". National Journal. Retrieved November 2, 2012.
  19. ^ a b Grabmeier, J. "Ohio State research may help change outcome of California election". Ohio State Research.
  20. ^ a b Stewart et al. 2008.
  21. ^ a b Price 2008.
  22. ^ Jeff Grabmeier (August 18, 2003). "New Research Shows Candidate Name Order Will Matter in California Recall Election". Ohio State Research News. Retrieved October 18, 2012.
  23. ^ Grabmeier, J. "Study: Bush's placement on top of Florida ballots gave him edge". Ohio State Research.
  24. ^ Sproul 2007, pp. 18–19.
  25. ^ a b c Dennis Junius (October 27, 2012). "AP poll: U.S. majority have prejudice against blacks". USA Today. Retrieved November 2, 2012.
  26. ^ Jennifer Agiesta and Sonya Ross (October 28, 2012). "Poll finds majority in US hold racist views". Boston Globe. Retrieved November 2, 2012.
  27. ^ Paul Harris (October 27, 2012). "Racial prejudice in US worsened during Obama's first term, study shows". The Guardian. Retrieved November 2, 2012.
  28. ^ Harder 2008, pp. 531–535.
  29. ^ Harder 2008, pp. 534–535.
  30. ^ Harder 2008, p. 540.
  31. ^ a b c d Jeff Grabmeier (August 25, 2000). "Want to Increase Voter Turnout? Give Them a Candidate to Hate". Ohio State Research News. Retrieved November 12, 2012.
  32. ^ a b c Lee Dye (September 6, 2000). "The Psychology of Voting". ABC News. Retrieved November 12, 2012.
  33. ^ Wallace Ravven (Spring 2011). "Casting about for Your Vote". California Magazine. Retrieved November 12, 2012.
  34. ^ "Study: If Americans better understood the Affordable Care Act, they would like it more". 2 Janvier. October 24, 2012. Archived from the original on January 16, 2013. Retrieved October 30, 2012.
  35. ^ a b c Karen Kaplan (October 26, 2012). "Americans too confused by healthcare act to like it, survey finds". Los Angeles Times. Retrieved October 30, 2012.
  36. ^ a b c Max McClure (October 24, 2012). "What does Obamacare actually do, you ask? You're not alone, says Stanford pollster". Stanford News. Retrieved October 30, 2012.
  37. ^ Lisa Trei (February 21, 2007). "Public agrees global warming exists but also divided over severity of problem". Stanford Report. Stanford University. Retrieved October 16, 2012.
  38. ^ "Fewer Americans See Solid Evidence of Global Warming: Modest Support for "Cap and Trade" Policy". Pew Research Center. October 22, 2009. Retrieved October 30, 2012.
  39. ^ "Polls and Media Reports Exaggerate Climate Change Backlash, Says Stanford Polling Expert". Advancing Science. Serving Society. March 22, 2010. Retrieved October 30, 2012.
  40. ^ a b c "US pollsters argue over public view on climate change". New Scientist. June 10, 2010. Retrieved October 30, 2012.
  41. ^ "Hacked archive provides fodder for climate sceptics". New Scientist. November 24, 2009. Retrieved October 30, 2012.
  42. ^ "Exclusive Bombshell: Experts Debunk Polls that Claim Sharp Drop in Number of Americans Who Believe in Global Warming". Think Progress. November 15, 2011. Retrieved October 30, 2012.
  43. ^ "Record Summer Heat Shifts Public Opinion on Global Warming". The Chronicle. September 3, 2012. Retrieved October 30, 2012.
  44. ^ a b c d Paul Voosen (July 9, 2012). "Scientists struggle with limits -- and risks -- of advocacy". E&E Publishing. Retrieved October 30, 2012.
  45. ^ a b c Greg Breining (September 9, 2012). "Too much advocacy? Scientists and public policy". Star Tribune. Retrieved October 30, 2012.
  46. ^ "Dr. Jon Krosnick: Public opinion on climate change and its impact on voting". Climate Science Watch. October 18, 2011. Retrieved October 30, 2012.
  47. ^ Joe Romm (October 9, 2012). "Krosnick: Candidates 'May Actually Enhance Turnout As Well As Attract Voters Over To Their Side By Discussing Climate Change'". Think Progress. Retrieved October 30, 2012.
  48. ^ Steenbergen & Lodge 2006, p. 128.
  49. ^ Steenbergen & Lodge 2006, p. 160.
  50. ^ Steenbergen & Lodge 2006, p. 143.
  51. ^ Weisberg & Greene 2006, p. 99.
  52. ^ Weisberg & Greene 2006, p. 100.
  53. ^ Althaus 2010, p. 100.
  54. ^ Althaus 2010, p. 21.
  55. ^ Althaus 2010, p. 66.
  56. ^ Althaus 2010, p. 153.
  57. ^ Althaus 2010, pp. 161–164.
  58. ^ Althaus 2010, p. 37.
  59. ^ "NASA Offers Airline Safety Data". The New York Times. January 1, 2008. Retrieved October 24, 2012.
  60. ^ "NASA Mum, But Airline Mishaps More Common". CBS. February 11, 2009. Retrieved October 24, 2012.
  61. ^ "Statement on the National Aviation Operations Monitoring Service" (PDF). Committee on Science, Space, and Technology. October 30, 2007. Retrieved October 24, 2012.
  62. ^ a b "No Space for Aviation Safety at NASA". Union for Concerned Scientists. February 11, 2008. Archived from the original on October 31, 2012. Retrieved October 24, 2012.
  63. ^ "What Pilots Could Tell Us". New York Times. August 30, 2006. Retrieved October 24, 2012.
  64. ^ "Congress Releases GAO Report on NAOMS". Aviation Today. April 13, 2009. Retrieved October 24, 2012.
  65. ^ "NASA releases data on close calls in air". Herald Tribune. April 13, 2009. Retrieved January 1, 2008.
  66. ^ "Sleeping Pilots Underscore Flight-Risks Described in Suppressed NASA Report". Security Management. November 2, 2007. Archived from the original on January 9, 2008. Retrieved January 1, 2008.
  67. ^ a b "Survey pits NASA administrators against researchers". American Psychological Association. March 2008. Retrieved October 24, 2012.
  68. ^ "House Committee on Science and Technology Subcommittee on Investigations and Oversight Seeks Information From FAA on Ending of Aviation Safety Survey". Space Ref. July 25, 2008. Retrieved October 24, 2012.
  69. ^ "NASA releases air safety study". USA Today/AP. December 31, 2007. Retrieved October 24, 2012.
  70. ^ a b "NASA Sits on Air Safety Survey". Washington Post/AP. October 22, 2007. Retrieved October 24, 2012.
  71. ^ "Gordon, Miller, Udall Direct NASA to Halt any Destruction of Records Relating to the NAOMS Project". U.S. House of Representatives. October 22, 2007. Retrieved October 24, 2012.
  72. ^ "NASA Releases Results of $11.3 Million Air Safety Study After Congressional Criticism". Fox News/AP. December 31, 2007. Retrieved October 24, 2012.
  73. ^ "NASA promises Congress it will reveal survey that shows safety problems worse than thought". South Coast Today/AP. November 1, 2007. Retrieved October 24, 2012.
  74. ^ a b c d e "Letter from IFPTE To Rep. Gordon Regarding NASA Aviation Safety Research Projects". Space Ref. November 29, 2007. Retrieved October 24, 2012.
  75. ^ "NASA Airplane Safety Report Stalled". ABC. October 23, 2007. Retrieved October 24, 2012.
  76. ^ "NASA's National Aviation Operations Monitoring Service Project Was Designed Appropriately, but Sampling and Other Issues Complicate Data Analysis". GAO. March 13, 2009. Retrieved November 11, 2012.
  77. ^$FILE/Complaint.pdf
  78. ^
  79. ^ a b "PPRG Website". Stanford University. Retrieved October 16, 2012.
  80. ^ "Reflections on Jon Krosnick's Global Warming Op-Ed". Gallup. June 10, 2010. Retrieved October 30, 2012.
  81. ^ a b "Summer Institute in Political Psychology". Stanford University. Retrieved October 16, 2012.
  82. ^ Dan Goldstein (December 10, 2008). "Stand for something political at Stanford". Decision Science News. Retrieved October 25, 2012.
  83. ^ "Distinguished Speaker Series: Fathauer Lecture in Political Economy". Arizona State University: Eller College of Management. October 16, 2008. Archived from the original on June 29, 2010. Retrieved October 15, 2012.
  84. ^ a b c d e f "Jon Krosnick". Social Psychology Network. April 23, 2010. Retrieved October 16, 2012.
  85. ^ "Krosnick receives national award for research accomplishments". Ohio State University. September 14, 1995. Retrieved October 16, 2012.
  86. ^ "AAAS members elected as fellows". American Academy of Arts and Sciences. January 11, 2011. Retrieved October 12, 2012.


External links[edit]