Wikipedia talk:Notability (academic journals)/Archive 6

Some clarifications

  • I have noticed several misunderstandings in the above discussion and will give some clarifications here. As the discussion is getting unwieldy, I am putting this in a separate section. I hope that this will make the discussion better informed. --Randykitty (talk) 09:06, 7 July 2023 (UTC)
    Your implication that people who disagree with you are uninformed is simply insulting. I am a professional physicist with plenty of experience in refereeing and editing, and much more knowledge about these bibliographic databases than I ever wanted to have.
    To briefly correct two falsehoods you've written below: being included in SCIE or Scopus does not make a journal influential. It is the bare minimum for a journal to be taken seriously. There are plenty of indexed journals with laughable impact factors. They are not influential.
    The other falsehood is about peer review. The process at Physics Essays is not normal, contrary to your claims. For the editor to declare in advance that the author is free to disregard the referee's comments is completely unheard of. There's also no hint that a paper can ever be rejected from the journal.
    In a normal peer-review process, the editor chooses referees they can trust and does listen to their advice. Only on rare occasions, when the referees have clearly made a mistake or are plainly corrupt, does the editor allow their comments to be disregarded. Tercer (talk) 09:41, 7 July 2023 (UTC)
  • No insult was intended. If you look at the discussion above, comparing inclusion in SCIE with lists of recent books, for example, then it is clear that some editors are misinformed. There are a gazillion subjects where I am misinformed; if somebody points that out in a calm and reasonable manner, I'm not insulted at all. Being included in SCIE does indeed not make a journal influential. It's the other way around: a journal has to be shown to be influential (among many other things) to be included in SCIE. And I did not say that an editor routinely ignores reviewers (otherwise very soon nobody would be willing to review for your journal any more), I just wanted to point out that it is the responsibility of the editor to make a final decision, not the reviewers (who often enough disagree among themselves); they only provide advice. --Randykitty (talk) 10:25, 7 July 2023 (UTC)
    A calm and polite statement of a falsehood is still a falsehood. A journal most emphatically does not need to be influential to be indexed in SCIE. It's an index with literally thousands of journals! Do I need to give you a list of journals that are in SCIE but are nevertheless irrelevant? Tercer (talk) 12:06, 7 July 2023 (UTC)
  • It's absolutely not a falsehood. There exist well over 100,000 academic journals, and only a fraction of those are in SCIE. And read the inclusion criteria that I have linked directly below: if a journal is not influential, it doesn't get into SCIE. --Randykitty (talk) 12:26, 7 July 2023 (UTC)
    Sigh. I do have to give you a list then. Of course I don't take Clarivate's word for it. Let's start with Physica Scripta and Entropy, crackpot journals well-known for having no standards. Both in SCIE. Also in SCIE is Scientific Reports, a borderline scam journal that managed to make a lot of money from the "Nature" name before the community realized that it had terrible quality. And these are just notoriously bad journals. The vast majority are serious but low-impact and little-known journals like Laser Physics, Physical Review Accelerators and Beams, Chinese Physics B, or Acta Physica Polonica.
    Do you seriously maintain that these are influential journals? Tercer (talk) 13:56, 7 July 2023 (UTC)
    I do, yes. They're not necessarily at the very top, but they are all impactful journals. That some are shit does not make them unimpactful. Entropy is a terrible journal, but it nonetheless has an h-index of ~91, for instance (Google lists an h-index of around 60, ranking it 15th in General Physics). Headbomb {t · c · p · b} 14:06, 7 July 2023 (UTC)
    An impactful journal? With an impact factor below 1? I'm sorry, I can't believe you are arguing in good faith anymore. Tercer (talk) 14:12, 7 July 2023 (UTC)
Entropy has an IF of around 3, not below 1. Headbomb {t · c · p · b} 14:14, 7 July 2023 (UTC)
I'm talking about Acta Physica Polonica. Tercer (talk) 14:19, 7 July 2023 (UTC)
APP goes back to 1920 and has a rich history of fruitful publications. That its current standing is not what it once was is irrelevant. Headbomb {t · c · p · b} 14:22, 7 July 2023 (UTC)

If it is a "rich history", then where are the sources that indicate that? The article right now is a stub that contains essentially no independent sources written about the journal. How does the standalone article as it is written help the reader? And is there any chance that we will find sources that can improve it to the point where a reader may be able to actually learn that it has a rich history?

Look, I'm all in favor of stub-culture when it looks like there are ways to expand articles. But in example after example I see articles about obscure journals that look like they have no chance to go anywhere. What is the point of an inclusion criterion that essentially creates standalone articles functioning exactly as a WP:DIRECTORY?

jps (talk) 13:27, 11 July 2023 (UTC)

Bibliometric databases

In the discussion above, inclusion in databases like the Science Citation Index Expanded (SCIE) and Scopus has been compared to sports databases and even indiscriminate lists of recently-published books. These comparisons are based on an apparent misunderstanding of these bibliometric databases. Inclusion into these databases is not automatic, as is the case with sports databases or lists of recently-published books. The procedure to get included in SCIE or Scopus is not easy, and publishers and editors have to jump through several hoops before they get accepted. What they do not have to do (as suggested above or on one of the other pages where this discussion is raging) is pay a fee. Inclusion in these databases is absolutely free for a journal, so getting money from publishers is not a motivation for the database providers to include a journal.

To get included, a journal must have published a certain minimum number of issues, usually one or two years' worth. When a journal has shown in this way that it has "staying power", its application goes to a committee of specialists, who evaluate the journal on its content and on the geographical spread of its editors, editorial board, and authors, among other criteria. This evaluation is very detailed and very stringent: many journals fail to get included the first time they apply, or even ever. And if they get rejected, they have to wait several years before they can apply again.

There are more than a hundred thousand journals in existence; only a small proportion of them are included in SCIE/Scopus/etc. It is databases like Google Scholar that more closely resemble the "recently-published books list", as GScholar strives to include everything (even predatory journals). DOAJ is not a selective database in this sense either. It is selective only in the sense that it tries to keep out overtly predatory journals, but apart from that it aims to include every open-access journal around.

As the late, lamented DGG argued: do we WP editors know better than a committee of specialists? Only the best journals get included in these databases. And staying included is not automatic: if a journal turns bad, it will be dropped from coverage (as, indeed, happened to Physics Essays).

Once a journal is included in the SCIE (or similar databases, like the Social Sciences Citation Index), it gets evaluated in depth in the Journal Citation Reports (JCR). While part of that evaluation is automatic, it is important to note that the final results are hand-curated. The result is a lot of interesting data, ranging from the impact factor to which journals a given journal cites most often, and vice versa.

The roles of editors and referees

The description of the editorial procedures used by Physics Essays, specifically that the editor may deviate from what is suggested by referees, has been interpreted as meaning that the journal is not peer-reviewed. This is incorrect. An editor has the final responsibility for what does or does not get published in a journal. Editors who simply count referees' "votes" are lazy bums who don't do the job they're supposed to do. And referees do not accept or reject a manuscript: they give advice to the editor, nothing more and nothing less. It is up to the editor to interpret their comments. Most of the time, an editor will follow the suggestions of the reviewers, but for all kinds of reasons they may deviate from that. This can go both ways: reviewers may recommend "major revisions", but an editor may find the issues raised too serious and reject the manuscript for publication. Or a reviewer may recommend rejection, but the editor judges the objections raised as more minor issues and asks for a revision before accepting the manuscript for publication. In all cases, this is normal peer review.

What is peer review

Peer review is a way of handling submissions to a journal. It's a procedure, nothing more, nothing less. It's not a badge of honor, nor is it a guarantee that a journal practicing peer review will be high quality. Bad peer review is peer review all the same. So just as we accept it if a journal says that John Doe is their editor-in-chief (unless we have evidence to the contrary, as happens occasionally with the more blatant predatory journals), we should accept a journal's self-description of whether it is peer reviewed or not.

  • One can say "Flat Earth is a scientific theory supported by astronomers and experts." Technically, Flat Earth is science, and "astronomers" support this claim. However, this is a violation of Wikipedia:FRINGE. To parrot your logic: the label 'science' is not a guarantee that the science is valid. The label 'astronomer' is not a guarantee that they are qualified. Therefore, this lead sentence must be perfectly fine. Ca talk to me! 14:07, 7 July 2023 (UTC)
Luckily, we have sources describing Flat Earth shit as nonsense, so you are well justified in adding the label "fringe" on there. Headbomb {t · c · p · b} 14:20, 7 July 2023 (UTC)


Misconceptions in Randykitty's essay

"Inclusion into these databases is not automatic, as is the case with sports databases or lists of recently-published books." Inclusion in sports or recently-published-books databases isn't "automatic" either. There are strict criteria for inclusion, but they still allow for a gigantic list of players and books. This is absolutely the same as with bibliometric indices. Academia is not somehow special by comparison; it is the same gatekeeping that goes on anywhere. If you think it is easy to get included in a sports statistics database, I encourage you to try to get yourself into one!

"Inclusion in these databases is absolutely free for a journal, so getting money from publishers is not a motivation for the database providers to include a journal." While this is true for some indexes, it is not true for a few that were being uncritically cited at Physics Essays. It would make sense to disparage those indexes that charge for inclusion; this guideline is silent on that fact.

"While part of that evaluation is automatic, it is important to note that the final results are hand-curated. The result is a lot of interesting data, ranging from the impact factor to which journals a given journal cites most often, and vice versa." The reports, however, mention essentially nothing about, for example, the subject matter of the journal. It's instead a lot of curated data without any analysis. Not something upon which we would be able to write meaningful prose, as far as I can see.

"an editor may find the issues raised too serious and reject the manuscript for publication. Or a reviewer may recommend rejection, but the editor judges the objections raised as more minor issues and asks for a revision before accepting the manuscript for publication. In all cases, this is normal peer review." Corrupt pocket journals are a big problem in academia that we are not going to solve here. In the case where an EiC starts publishing garbage, the community responds by ignoring the journal. Wikipedia is ill-equipped to notice this. The correct approach is to be really stringent about sourcing. If there are sources which indicate something about the quality of the journal, then it is safe to have an article. If not (as in the case of many journals included here), I question the wisdom of an inclusive philosophy.

"Bad peer review is peer review all the same. So just as we accept it if a journal says that John Doe is their editor-in-chief (unless we have evidence to the contrary, as happens occasionally with the more blatant predatory journals), we should accept a journal's self-description of whether it is peer reviewed or not." In no other part of Wikipedia do we take people at their word when there is controversy. This idealization of peer review is your own invention. It is not reflected in the literature on the subject. It is not the public understanding. It is the kind of approach that flourishes in libraries, where there is a kind of enforced agnosticism when it comes to publication: they are not in the business of enforcing epistemic closure, etc. However, Wikipedia is not a library. We are tasked with looking at how experts evaluate various claims. If no expert has evaluated that peer review has taken place (for some value of peer review meaning "responsible peer review"), then it is the height of arrogance for us to parrot a journal's self-proclamation. It would also be the height of arrogance to say that the journal is not peer reviewed. The responsible thing for Wikipedia to do is to get an independent, reliable source to verify. Not an index that copies what the journal itself says. Not the journal itself. A proper third party. To do otherwise is to treat academic journals as the special flowers in the Wikipedia universe that they simply are not.

jps (talk) 22:49, 9 July 2023 (UTC)

I wonder if you two have different ideas of what constitutes "automatic" inclusion in a database.
On the one hand, someone might say "Inclusion in the famous footy database 'Bundesliga Players' is not automatic. You have to go through years of training, work your way up through the various lower-level teams, and get hired by a professional team in the correct tier of the right association, and only then will you be added to the database." Another person might look at this and say, "Yeah: As soon as you join any of the teams they cover, you're automatically added to the database."
It does not appear that any selective index system is "automatic" in this way. These databases don't have single-criterion automatic rules like "anything published by Elsevier". WhatamIdoing (talk) 11:07, 12 July 2023 (UTC)
That's a fair contrast, but I'm not sure there is a functional difference between the two situations in terms of article writing. jps (talk) 14:15, 13 July 2023 (UTC)
"While part of that evaluation is automatic, it is important to note that the final results are hand-curated. The result is a lot of interesting data, ranging from the impact factor to which journals a given journal cites most often, and vice versa."
The final approval for inclusion/review in the index is subject to human judgment, but the actual "coverage" of the journal is all autogenerated.
Citation indices also provide a lot of curated, highly detailed metrics on authors (like h-index, graphs of their publication history, citation calculations, sometimes even basic network analysis) and papers (field impact score, other rankings). Those metrics are never considered secondary SIGCOV of people or papers, so why should they be considered so important as to be notability-conferring for journals? Simply being published in the highest-ranked journal in the world isn't a notability criterion for first/senior authors; surely that's a higher bar to clear than the Scopus journal inclusion criteria: having articles cited in Scopus-indexed journals, having editors with some professional standing (e.g. professors), having a publishing history of 2+ years, and publishing articles that contribute to their academic field and are within the journal's stated scope. Indexing services just want journals that aren't obvious trash or so minor as to be read only by undergrads at one college. A journal of relevant breadth not being indexed in Scopus is a red flag, whereas not being a member of the National Academy of Sciences doesn't mean anything at all for a researcher. JoelleJay (talk) 19:52, 13 July 2023 (UTC)
I have to concur with jps and JoelleJay, at least in this section. This is especially on-point: "If there are sources which indicate something about the quality of the journal, then it is safe to have an article. If not (as in the case of many journals included here), I question the wisdom of an inclusive philosophy."  — SMcCandlish ¢ 😼  00:42, 26 July 2023 (UTC)
You appear to be the first in this entire debate to base your argument on "an inclusive philosophy". Nobody else here is doing that; we are instead debating different criteria for how to judge the relative noteworthiness of journals, but have not addressed whether some of those criteria would allow more journals to be included, or fewer, or even whether that would be a good thing or a bad thing. So, pray tell: how many articles on journals is the right number? Are we above that, or below that, currently? Which of the two positions in this debate (that we should base notability on inclusion in selective indices, or on whether the journal can get some publicity for itself in independent publications) would support "an inclusive philosophy", and what data do you have to support the inclusiveness of that position and the exclusiveness of the opposite position? —David Eppstein (talk) 00:55, 26 July 2023 (UTC)
On your first point, I don't get what you mean, since I'm directly quoting someone else's prior commentary. Anyway, I think (even judging from your own bullet-point summary in the thread at or near the bottom, where you warn of the possibility of being less inclusive of high-quality journals and more inclusive of fringe/controversial ones, depending on how things go in this broader debate) that inclusiveness shifting will necessarily be a consequence, even if people don't want to focus on that as the prime mover of the discussion. On your seemingly semi-facetious question: there is no specific "right number", of course, but it's clear that some editors think we have too many articles (not just on this topic; AfD is a very, very busy place). There is a general community feeling against "perma-stubs". On your latter question, it's the former criterion ("base notability on inclusion in selective indices") that would lead to an inclusive (i.e. m:Inclusionist) result. I don't have any particular data, and this to me is not about a "right number" or any other tabular measure, but about a) maintainability at the project level (are we wasting our time trying to create and maintain zillions of journal articles, when there are over 100,000 academic journals in the world?), and b) clarity at the editor level (am I wasting my time writing an article on a journal, only to have it deleted on the basis of pseudo-rules from an essay there is no actual clear consensus about?).  — SMcCandlish ¢ 😼  01:06, 26 July 2023 (UTC)