
Talk:2020 in science



Inclusion criteria and routine addition of entries (e.g. of weekly science reviews)


I think it would be useful to gradually develop some formal inclusion criteria for which items can be included in the list.

This list of proposed criteria can also be found on a dedicated separate page: editor/Year in science/Inclusion criteria.

  • There are too many studies with news reports to add all of them here.
  • And too many entries make it hard to discern what has been a significant development in terms of:
    • progress within a field or research question of varying usefulness/... or
    • practicable usefulness or
    • importance or
    • degree of news coverage / popularity
    • the "novelty" (compared to previous studies/knowledge) and "unexpectedness" of the conclusions/findings (to the public and the scientific field)
    • this is mostly related to the criteria of importance and practical usefulness:
      • the permanence of the subject being studied and/or the study/its conclusions
      • the number of people affected by the subject studied and/or the study/its conclusions
      • the degree to which those affected are impacted by the subject studied and/or the study/its conclusions
It also makes the list take longer to read, and many readers are only interested in the most significant findings and/or specific topics or fields.
This problem could be resolved by converting the list to a table (see section above) and adding topics and categories, or by subdividing sections into three or more subsections as in the former German articles.
The formal inclusion criteria could be criteria for the different categories. Some items would be excluded from the list entirely, and some would be excluded e.g. from Categories 1 and 2.
  • Maybe people would like to add an entry but aren't sure if it's appropriate to add to the list: clear criteria to check against would make it easier for them.

I would suggest the following criteria for a start:

  • The entry has to include at least one wikilink to an existing or possible article that contains (or could contain) the information of the entry. If the list is converted to a table, there could be a column that only contains the wikilinks to the articles that include the information once they do. (Moreover, it would be good to link, in the entry, all the articles the new study/finding is relevant to.)
  • The finding has to have at least one news report by a reputable/reliable source.
  • If the finding or its conclusions are controversial, the criticism should be explained in the entry.
  • Category-1 items are major in at least 2 of the 4 roughly defined criteria above. ("Importance" and "usefulness" are roughly evaluated in terms of the degree of known and potential impact on the number of people (including in the future) or the environment or alike, as well as the degree of completion of a usable product or data/knowledge.)
  • Category-1 items exclude new measurements/data that don't bring any new conclusions, such as significantly supporting or weakening a major theory or showing that something previously unknown exists.
  • Category-1 items exclude findings that only support a theory that's already very established, and vice versa.
  • Category-1 items (usually) exclude items which can be adequately written in the short text available for Category-2 items.
  • Category-1 items can exclude items if there recently was a very similar Category-1 item, and should exclude the item if it's only an update to an earlier Category-1 item.
  • Category-2 items exclude findings that only hypothesize (without much significant evidence), very fragile research questions, and studies that aim to show that something might be worth further investigation. In general, items should stay in Category-3 until they're relatively established knowledge without lots of "could"s.
  • Category-2 items exclude items that have very low degrees of the 4 criteria roughly defined above.
  • Category-2 items exclude items whose relevance / meaning isn't sufficiently clear.
  • Category-2 items exclude items which are, in most cases, not very useful, understandable or meaningful for people who are neither experts in the specific field nor enthusiasts (at least in terms of understanding the item itself, not necessarily its implications).
  • Category-2 items should generally be kept a bit shorter than the average Category-1 item.
  • Category-3 items exclude extinctions and discoveries of animals, except if there is good reason to include them here – they should be featured in separate dedicated articles, which may not exist yet.
  • Category-3 items should be kept very short if possible.
  • For example, a significant breakthrough within a field or an improved/new technology for a potential device could be added as a Category-3 or Category-2 entry. Later, once further R&D has been carried out or an actual device using the technology has been built and found to be practically useful, or at least to already work in principle (or the uses/implications have become clearer), a Category-1 entry could be created for another, more conclusive or practically useful, finding or implementation within the same field of R&D. The same goes for observational science and data interpretation once more data/evidence has been established for the respective finding/conclusion. Often it's better to wait for more evidence before an item that would generally be Category-1 is added as such.
  • Other useful criteria would be excluding or, rather, appropriately categorizing items about newly developed tools or new findings (e.g. properties of materials) that researchers think will be, or are likely to be, very useful. For that, it should also be relevant whether or not researchers have built a prototype that shows their tools could be useful to some (varying) degree. It's also important to differentiate between resolving a known or new problem within a field and having developed a technology whose use isn't entirely clear. Generally, it would be good to try to avoid findings with no clear implications or uses (which are usually explained in the paper and/or the reports on it). There are many differentiations and exceptions that should be made in this area, and they could get formalized better.
  • It would be useful to make it possible for people with expertise in the respective field of study to get notified of additions of items within their field, so they can review the item and adjust/correct its text where adequate. I think it would be inappropriate to create a talk page entry on every item's main article, but currently the creator of a Wikipedia article gets notified once a page links to it – maybe the MediaWiki software could be changed so that more people can get these notifications. For that to be useful, one would need to specify which of the item's wikilinks should trigger these notifications (sometimes that's more than the "main wikilink" of the entry).
  • The text lengths of the entries should be kept as short as possible but adequately long. The three categories could have different recommended lengths as an orientation. Category-3 items could be hidden by default, in which case their lengths wouldn't be as important and could also be longer than those of C1 or C2 items. One orientation would be trying to fit the text length to the 5 main criteria and only adding more text if that's needed to summarize (the main conclusions of) the study.
  • The categories (as already used by the Science Summary images, alongside a few extra criteria like not featuring all C2 items due to lack of space and sometimes combining items) are not to be seen as some kind of award-like assessment of the study's merit for society – they are assigned by a multitude of criteria, of which importance and practical usefulness are only one each. It is mostly impossible to assign the merit of individual studies anyway, as the work is built on a multitude of other tools and research. Instead, the categories are better seen as marking a "main" happening within science, where popularity of the finding is a factor.
  • Images for Category-2 items are chosen by how useful the image is for comprehension and/or visualization of the item (this also depends on which images are available), how important the image is for the item, and/or how significant the item is judging by the four main criteria.

But I guess right now too many entries probably isn't a problem: instead, we should probably find criteria for items to always include and try to implement them. So first and foremost this list probably needs more editors, which is when such inclusion criteria would be truly useful. I'll probably make a section on that later.

Items to always include could come from external weekly science reviews, like:

  • Edit: we could also try to include items that were popular on science-related subreddits: https://www.reddit.com/r/EverythingScience+science+sciences/top/?t=month
    Posts that gained a lot of upvotes there would already meet (a high level of) the "degree of popularity" criterion. This doesn't mean that they should be included (at least in the current way) – for example because they could still be very weak on the other criteria, because the results are very preliminary, because they are very niche, or because it's not a new and scientific finding/conclusion or alike. It would be hard to include all the items featured there, except if redditors themselves came up with an entry text to add to the list within the respective comment section. Hence it would be good to find lots of formal in/exclusion criteria, due to the large number of entries that could be taken from there. The posts there often have comments that might be relevant to research for the entry text, e.g. by explaining a catch, putting the finding into perspective, elaborating on the background, or explaining problems with the news report. (A rough sketch of how such posts could be pulled for review is shown below this list.)
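
A minimal sketch of how such a check could look, purely as an illustration: it assumes Python with the requests library and Reddit's public JSON listing endpoint (the URL above with a ".json" suffix); the upvote threshold is an arbitrary example value, not a proposed criterion.

  # Illustrative sketch: pull the month's top posts from the combined science
  # subreddits for manual review against the inclusion criteria.
  # Assumes the third-party "requests" library; the 5000-upvote threshold is
  # only an example value.
  import requests

  URL = "https://www.reddit.com/r/EverythingScience+science+sciences/top/.json"

  def top_science_posts(min_score=5000, limit=50):
      # Reddit expects a descriptive User-Agent from API clients.
      resp = requests.get(
          URL,
          params={"t": "month", "limit": limit},
          headers={"User-Agent": "year-in-science-review-sketch/0.1"},
          timeout=30,
      )
      resp.raise_for_status()
      posts = [child["data"] for child in resp.json()["data"]["children"]]
      # Keep only highly upvoted posts; these roughly meet the
      # "degree of popularity" criterion and still need to be checked
      # manually against all the other criteria.
      return [
          {"title": p["title"], "score": p["score"],
           "comments": p["num_comments"],
           "link": "https://www.reddit.com" + p["permalink"]}
          for p in posts if p["score"] >= min_score
      ]

  if __name__ == "__main__":
      for post in top_science_posts():
          print(f'{post["score"]:>6}  {post["title"]}')
          print(f'        {post["link"]}')

Such a list would only be a starting point: every candidate would still have to be checked against the other criteria above.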

So, for example, as a baseline we could try to include all items of the "This Week In Science" images by ScienceAlert, as well as the 2 most popular phys.org articles for each day and the top story of the week for as many major journals as possible.

It would be useful to organise this here (centrally/in one place, independently and openly) so that third-party websites and readers can then use this information/data to get a broader, more complete, more contextual/understandable, shorter (...) overview of the developments in science.

I think it would make sense to develop such inclusion criteria and routine-addition selections even before they can become useful, and to implement them once sufficient capacity has been reached (mainly the number of editors, but also editing time and editing time-efficiency).

Please comment on what you think about this and whether you would suggest any refinements to the criteria, even if this section is old.

--Prototyperspective (talk) 22:55, 10 March 2020 (UTC)[reply]
--Prototyperspective (talk) 15:50, 24 March 2020 (UTC)[reply]
--Prototyperspective (talk) 15:46, 11 May 2020 (UTC)[reply]

ScienceAlert is of questionable quality. If they include something then it usually got wider media attention, but it doesn't have to be relevant in its field (not even discussing the relevance outside it). If we only want to go by media attention, we can check that in other ways. --mfb (talk) 07:10, 11 March 2020 (UTC)[reply]
Yes; however, I thought it might be a good thing to include all the items they feature, because those usually got a lot of media attention, and being included in their weekly review makes them even more popular. This is mainly just for covering the "degree of news coverage / popularity" criterion, which is why it's not the only source of items I'd suggest to routinely include. (And of course it's also about the three other criteria, as public interest and media coverage often increase with those.) Furthermore, it may allow summaries based on this Wikipedia list / the list itself to be considered at least as good as their weekly reviews in some way, due to having all of their items included.
I'm not proposing that editors use ScienceAlert news articles as the sources: there are news articles from a variety of sources for, afaik, all of their items. We could check for media attention in other ways in addition to that. For example, the phys.org news articles are sorted by popularity. It might not be that easy to evaluate popularity better than via such reviews and the phys.org website (and the study's number of views / metrics), because you'd need to accumulate all of a study's social media and news articles' likes, views etc. on your own to make them comparable, and I don't think there's any organization/website that is doing that.
But maybe we should only try to routinely add their items once we're implementing some other criteria of what to routinely include (e.g. via phys.org).
--Prototyperspective (talk) 16:22, 11 March 2020 (UTC)[reply]
Note: currently I'm checking the most popular items on phys.org along with any other news I happen to come across, while making a lot of use of the papers' altmetrics tools. I think that's a baseline that's not too bad, somewhat efficient, and should catch many of the month's findings most relevant for the list. --Prototyperspective (talk) 23:36, 5 July 2020 (UTC)[reply]
Note: currently I'm adding items based on a selection from studies published during the month, sorted by Altmetrics score, as well as via some sites like phys.org (that one doesn't play much of a role anymore).--Prototyperspective (talk) 13:12, 10 February 2023 (UTC)[reply]
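
A minimal sketch of what such an Altmetric-based pre-selection could look like, purely as an illustration: it assumes Python with the requests library and the public per-DOI endpoint of the Altmetric API (free but rate-limited); the DOIs in the example are placeholders.

  # Illustrative sketch: rank a hand-collected list of DOIs by Altmetric
  # attention score so the most-discussed papers of a month can be reviewed
  # first against the inclusion criteria. The DOIs below are placeholders.
  import requests

  API = "https://api.altmetric.com/v1/doi/"

  def altmetric_score(doi):
      resp = requests.get(API + doi, timeout=30)
      if resp.status_code == 404:
          return None  # no Altmetric record for this DOI
      resp.raise_for_status()
      return resp.json().get("score")

  def rank_by_attention(dois):
      scored = [(altmetric_score(doi), doi) for doi in dois]
      # Drop papers without a record and sort the rest, highest score first.
      return sorted(((s, d) for s, d in scored if s is not None), reverse=True)

  if __name__ == "__main__":
      candidates = ["10.1000/placeholder-doi-1", "10.1000/placeholder-doi-2"]
      for score, doi in rank_by_attention(candidates):
          print(f"{score:8.1f}  https://doi.org/{doi}")

As above, this would only order candidates for manual review; it does not replace checking each study against the criteria.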

Examples of items not included


More items which I decided not to add/include, but which mostly weren't easy to exclude, are listed here, many with rationales which could make understanding the proposed criteria easier (and some of these may be relevant to other articles).

The below is not a list of items that should not be included. It's a list of items I did not include, and together with the reasons this may help with building inclusion criteria or with better understanding and extending the criteria proposed above. I think all of them should still be addable as (very short) Category-3 items as per the above.

For almost all of the items it wasn't easy to decide whether or not to include them, because they're e.g. very useful or important (often only potentially so). Many of them are listed here precisely because it was tricky to decide whether or not to include them at some point.


Other content I try not to include, or would suggest be tagged/categorized appropriately to enable filtering, better clarity and shorter length:

  • Improvements to instruments such as improvements to the sensitivity of instruments (not just the development of new instruments as already written above)
  • Newly measured values/measurements (only "non-value"-conclusions from these etc) (example)
  • Unexpected findings (only conclusions where they have made sense of unexpected findings) (example 1 example 2)
  • Verifications of standard / established theories

When adding items which are also notable due to such things, I try not to include such aspects in the summary and only describe their other major notability explicitly (it may be included implicitly anyway).

--Prototyperspective (talk) 16:29, 15 August 2020 (UTC)[reply]

Further criteria


To complete the proposed criteria, here are some more clarifications developed over time that could make it easier for others to understand the above as well as some additional criteria:

  • how many people are affected by the study is a major but simple way to select
  • it should be globally relevant, e.g. about a global issue, and included policy studies should not be about the U.S. only
  • when considering inclusion, consider what the study means in terms of e.g. usefulness, such as demonstrated applications
    • not developments of robots but finished prototypes – useful, viable *applications of developed robots* (not general development or undeveloped/proposed robots)
    • being economically viable (at present and in the future) and otherwise viable is typically an important criterion for technical systems
      • if that is unclear, it's typically better to wait for a better demonstration after further development
    • a thing to look for are phrases (red flags) like "could allow" xyz rather than specific demonstrations (possible example)
  • included items should e.g. be contrary to widespread belief, not unsurprisingly confirming already robust knowledge – basically "the new significant change of scientific knowledge" (if the scientific knowledge is not changed but made more robust, that may be great but is unlikely to be notable here)
    • consider whether the study substantially challenges a prevailing paradigm
    • a thing to look for are phrases (red flags) like "Our results are consistent with leading theoretical models" (possible example) – often e.g. first observation of what was expected theoretically with good confidence
  • consider whether key problems of the field/topic are addressed by the study (or development); for example, for perovskite solar cells, commercialization is not a major problem at this point but rather their stability & durability
  • see Wikipedia:WikiProject Weather/Weather by year criteria for another timeline's criteria, albeit these two can barely be compared and are very different
  • development of tools/instruments (mainly R&D instruments such as better imaging systems) and iterative steps typically shouldn't be included; rather, include prototypes, application demonstrations, etc.
  • consider whether it's empirical data or some vague soft projections or calculations based on human-made metrics with limited relevance as descriptors of reality (often relevant to "economics" studies); how hard the science is could be a factor
  • consider whether it's early-stage research or a milestone-*achievement*
  • in many cases it would be better not to include individual applications but only reviews/lists of applications and/or entire (new or reviewed or demonstrated) application domains etc
  • studies shouldn't be included if they're only about a new quantification but not the first quantification of broadly that sort (especially at broadly the same scale & quality) and/or don't provide new insights
  • battery-related studies are difficult to in/exclude; personally I include almost none of these and would recommend only including good-quality reviews from/about that field (or reliable-authority statements/certifications)
  • preprints could often be included when they meet the criteria and there already are lots of major RS reports, but even in these cases it's sometimes better to wait until the study has been published in a journal
  • consider whether the R&D is near practical usefulness or whether it's (new) substantive new knowledge (e.g. confirmed well enough and understood in terms of what it means)
  • simulations (not full proofs) of something within a field that isn't very relevant outside the field should typically be excluded (possible example)
  • 'announcements' are (typically) never featured as Category-1 items and it's similar for plans and proposals even if they are described in detail in a peer-reviewed study
  • big claims are not particularly worthy of inclusion if they don't also provide good evidence/data (possible example)
  • the inclusion bar should be set higher for items for which there already was an item about the same topic (maybe alternatively the former item could be moved/edited)
  • the item should typically be relevant for addition to at least one (other) WP article and it would be good if the editor adding it here also integrated it there

--Prototyperspective (talk) 13:12, 10 February 2023 (UTC)[reply]

Suggestion for a WikiProject / some kind of task-force


Maybe at some point there could be a WikiProject or another way to organize work on these things.

One thing I'd like to propose is that some people check whether entries of the article are properly worked into their linked Wikipedia articles, and add the information of the item to them if it hasn't yet been added. All items in the events list of the article should have (and, as far as I have checked, do have) their "main" WP article (where one would expect this info to be found / that is the most relevant WP article) linked (a few of them are redlinks). Maybe there could be a way to "mark" which items' relevant articles already contain the info, so that one can easily check which ones still need to be updated. I'd suggest 5 statuses for the items (a rough sketch of how such statuses could be tracked follows the list):

  • ☒N Unchecked
  • checkY Suggestion for update / info-addition has been added to the article's talk page
  • checkY An article was updated but not yet all articles which should be updated
  • checkY Checked and the most relevant article should not include info of the item
  • checkY (All) most relevant article(s) have been updated to contain the item's info
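
Purely as an illustration (not an existing tool or template), a minimal sketch, assuming Python, of how such per-item statuses could be tracked in a helper script; the status names mirror the five statuses above and the entry names are placeholders.

  # Illustrative sketch: track, per list entry, whether its information has
  # been worked into the most relevant Wikipedia article(s).
  # The entry names below are placeholders, not actual list items.
  from enum import Enum

  class UpdateStatus(Enum):
      UNCHECKED = 1            # not yet checked
      SUGGESTED_ON_TALK = 2    # suggestion added to the article's talk page
      PARTIALLY_UPDATED = 3    # an article was updated, but not yet all of them
      NOT_APPLICABLE = 4       # checked; the most relevant article should not include it
      FULLY_UPDATED = 5        # (all) most relevant article(s) contain the item's info

  # Map list entries to their current status.
  entries = {
      "Placeholder entry A": UpdateStatus.UNCHECKED,
      "Placeholder entry B": UpdateStatus.PARTIALLY_UPDATED,
      "Placeholder entry C": UpdateStatus.FULLY_UPDATED,
  }

  # Everything not fully updated and not marked "should not include" still needs work.
  todo = [name for name, status in entries.items()
          if status not in (UpdateStatus.FULLY_UPDATED, UpdateStatus.NOT_APPLICABLE)]
  print(todo)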

(And another thing would be the creation of articles for the redlinks.)

(I already suggested a WikiProject for these articles in specific and some of the tasks at the German-language Wikipedia.)

--Prototyperspective (talk) 23:36, 5 July 2020 (UTC)[reply]

Another task would be finding or taking relevant images, uploading them and adding them to the article. Most entries have a study which has an appropriate image but with an unfit copyright license. However, sometimes the image could be released separately under another copyright, and some studies are published under a Creative Commons license (the open access ones could be routinely checked for relevant images, and the authors of the other ones could be contacted/emailed about the image copyright). I added the {{Image requested}} template to this page. --Prototyperspective (talk) 16:50, 19 July 2020 (UTC)[reply]

For now, the thing that comes closest to it is WP:WikiProject:Science. I may ask there. --Prototyperspective (talk) 13:12, 10 February 2023 (UTC)[reply]