Reliability or liability in the contemporary mathematics publishing process? An ethical and technological case study

Abstract In this article, we examine issues of correctness and ethics in mathematics publishing through recent developments in the publishing landscape. Errors are especially critical in mathematics research, but questions of how proper journal management should maintain correctness have received comparatively little attention. As the basis of our analysis, we use experiences of the author and colleagues, as well as published accounts of various incidents. Specifically, we build on the monograph case of the preceding article, and discuss actions by the involved parties through a list of anonymous aliases. The study focuses, qualitatively, on conformance to standards of scientific conduct. Exposing dubious practices of editors, authors, and reviewers, we provide evidence that consistency efforts at various venues for scientific communication are insufficient. The role of the internet and digital archiving is discussed, as well as modern trends like automated journal infrastructure, machine verification, and citation statistics. To address the resulting problems, we suggest that paying reviewers is the only viable neutral (bias-unsupportive) reward scheme, but that it must be coupled with stricter journal-internal inspection/grading. We also propose, and describe in detail, a facility that allows authors to (discreetly) evaluate editors, and a setup to address correctness disputes.


Background
Issues of academic integrity, more specifically in relation to research publishing, are widely discussed in the literature (see for instance Bedeian, 2003, or Dickey, 2019). However, mathematics has received comparatively little attention among the sciences. (Geist et al., 2010 is one of the few accounts.) Nevertheless, deceptive attempts have been reported there as well (Hill, 2009, 2010). Fraudulent practices are particularly critical in mathematics, since its research is very sensitive to technical details, and small (hidden) errors can have strong implications. At the same time, the scrutiny of mathematical proofs is a daunting process that often only few experts can perform. This brings up the fundamental question of how to deal with published errors and, more generally, how a journal should be managed for its consistency and reliability, while simultaneously maintaining fairness in its review process (cf., e.g., Shephard et al., 2021; Trebino, 2009). Additional factors play a role, for instance backlog and funding (in multiple directions: what are proper charges for publication and to whom, should reviewers be paid, competition for research grants), etc. These problems are further influenced by more recent developments in journal workflow, mainly through the rise of the internet and electronic communication, to which we dedicate most of our treatise here.

Outline of the paper
Involvement in science journal publishing is generally judged by a number of expectations: correctness, intellectual non-misappropriation (authors), transparency (editors), impartiality (reviewers), etc. The Committee on Publication Ethics (COPE) provides in its guidelines suggestions regarding the discharge of such duties. An author's actions can, at least to an extent, be controlled by journals, in their capacity as publishing regulators. However, when journals transgress their own obligations, responsibility for the resulting behavioral and scientific concerns is less clearly assignable. Then, "[w]ho polices the police?" (Dickey, 2019, p. 8).
The previous article (Stoimenov, 2022) discussed events related to these questions in the realm of mathematics, which are recapitulated in Section 1.4 about "The monograph case." Fundamentally, the standard manuscript review policy of mathematics journals is single-blinded (with the author known to the reviewer). Although based, to some extent, on area-specific constraints (see Section 2 "General practices"), this setup also brings distinct disadvantages to authors.
The status quo has given rise to a culture of peer review in mathematics where often brief and vague referee or editor comments are deemed sufficient to discredit submissions. We have also seen (Stoimenov, 2022) manifestations of attack and retaliation through the review system, by some individuals who were openly criticized, as well as others' reactions to such vocal criticism. Section 2 on "General practices" summarizes some instances that are largely separable from the internet-related discussion.
The next Section 3 on "Automation" contains the main new analysis. It is divided into several parts, each of which elaborates on how such defective attitudes are reinforced by various forms of digitization and related trends. Section 3.1 outlines some issues with online resources (Pagliaro, 2021; Shephard et al., 2021), which can be easy to modify and difficult to identify. Next, in Section 3.2 we treat tools that are designed to facilitate the work of journals (Horbach & Halffman, 2018). In Section 3.3, we discuss impact factors (International Mathematical Union, 2008) and related bibliometrical statistics. (However, we limit ourselves to the effect of such information, and do not delve into the data itself.) Although it is difficult to draw a clear line between the roles of humans and machines in this development, their coexistence brings up a number of peculiarities. Our aim here is not a dreary litany, but to identify a variety of management issues that require amendment. While the documentary basis deals predominantly (but not exclusively) with mathematics, many arguments and suggestions laid out apply to other sciences as well. Consistency will be the unifying aspect of the treatise; plagiarism, the other major issue, will be occasionally, but recurrently, touched upon. Section 3.4 on "The journal landscape" examines the extent to which it globally offers opportunity to maintain its own reliability. Less official preprint servers (Jackson, 2002; Pagliaro, 2021) and similar venues will be discussed in more detail in Section 3.5 "Internet and electronic platforms." Section 4 "Individualizing mathematical correctness" summarizes the evidence provided, with focus on the main topic of current problems of consistency in mathematical publishing. This is then integrated into the global theme of the debasement of standards of scientific conduct, including its ethical implications. While elsewhere we mainly hold to the factual and bibliographical footing, here also some personal opinions are expressed.
The paper finishes with a "Conclusion" Section 5. Addressing many of the detailed points raised earlier, we discuss possible solutions. Specifically, we elaborate on an editor/reviewer evaluation system. We explain (Section 5.1) what can be done by journals (or publishers) themselves, but more importantly that, with reasonable certainty, a component must be operated on a global level (Section 5.2), to overcome stagnation at (or convergence to) the status quo. We conclude with some analogous outlines to settle correctness disputes in journals (Section 5.3). In parallel, we indicate potentially more constructive uses of digital media, text comparison tools, etc. These provisions are meant to serve the goal of implementing proper policies for accountability and consistency, and maintaining science standards.

Materials, Methods, and Limitations
Materials used for evaluation include personal records of the author (an academic mathematician, not an editor, but occasionally a reviewer), and information communicated by colleagues (mostly mathematicians). They concern discussions about and at (mostly, but not exclusively, mathematics) journals, and research management facilities, such as databases, preprint servers, etc. We also refer to a number of published accounts, ranging from blog posts (Hill, 2009; Trebino, 2009), through media journalism (K. Lee, 2023) and science news columns (Mushtaq, 2007), to scholarly studies (Watve, 2023).
The approach follows the previous article (Stoimenov, 2022), and focuses on qualitative assessment supported by the available evidence, mostly about questions of conduct. It is again necessary to refer to various individuals and entities related to the monograph case treated before, but under anonymity. For that purpose, we have introduced a list of one-letter aliases. It has been adapted here, with minor changes, and is provided after the main text.
Several additional events will be discussed to illustrate our arguments. They are largely mutually independent, and we will use them more succinctly than the monograph case (and its parties), without separate introductory explanations. One of these concerns a journal retroactively deleting a publication (Retraction Watch, 2018). This incident also entails the failure of an administrative investigation, which adds to the motivation for the discussion.
With reference to these occurrences, we will highlight various problems arising between modern editorial procedures and traditional scientific responsibility. As for the latter, we will continue drawing upon Lang's account (Lang, 1993). It conveys crucial issues that later developments have only confirmed, if not reinforced. Lang's article is slightly updated in his book (1998), and there are several related resources; Doty's commentary (Doty, 1991) is especially important.
We emphasize that we do not use our case evidence to imply any objective overall generalization, even one in a particular domain like mathematics. However, our chosen methodological preference completely aside, it is insidiously hard to retrieve problem data (to serve as a basis for broader claims) which is both extensive and reliable. Such information often faces various well-known obstacles to exposure, such as fear of retaliation or closing ranks (see for instance Lang, 1993, §IV.2 and §V.5, especially the quote of Gilbert). Thus, many cases go unrevealed, and can be underrepresented in relevant statistics (like Geist et al., 2010).
Moreover, official channels often dislike dwelling upon sensitive details, but then rebut accounts for a deficient scholarly basis, for which they are partly responsible. To exit this defective cycle, as in the earlier study, personal experiences and private communication are a valuable component of the analyzed material. Most are documented in written form (and available further on request), and are often literally, but anonymously, quoted, or concisely recapitulated. The cited articles' discussions of relevant incidents and the presented inside information correlate with and corroborate each other in many ways. They interlink the problems discussed, which need not await numerical validation to warrant treatment.
In that spirit, we have provided an extended final section, where we propose oversight and accountability procedures, while also minimizing the unnecessary exposure of conflict. Building on previous remarks (like Trebino's "Addendum to the Addendum," 2009), we detail what would be rights and obligations for the various parties involved (authors, editors, reviewers, publishers, and readers). As will transpire, the organization will not substantially depend on the specifics of mathematics. And certainly for the submission case database (Section 5.2), the academic and technical requirements are very limited, both in terms of staff manpower and expertise (say, not more than the equivalent of a preprint server). But of course, the resistance of the establishment remains the universal hurdle to overcome. Such a suggestion thus carries the hope that, in some form, it can find wide community support.

A summary of the monograph case
Here, we summarize the preceding account (Stoimenov, 2022), occasionally drawn upon below.
Events centrally concerned a monograph series M. We documented the reactions of its editor G and the authors A and B of a monograph, when an error was pointed out by a junior scientist S. S had worked for some years under A's supervision, and had a prior dispute with B over a separate paper. M rejected first a long manuscript and then a short note from S, containing an explanation and (partial) correction of the error. Organization E, managing M, also offers an online review database X of published mathematics articles, to provide orientation about their content to researchers.
First, some relevant events were explained, which preceded the case of M. We featured remarks of A regarding correctness issues, including his advice to S that before using a paper, "you better check," and his disposition to write on X, without pertinent explanation, that some paper "looks wrong." Regarding his conflict with B, A told S that B "is now your enemy" and "you cannot afford more enemies." S had had a string of experiences with (what he deemed) editorial or reviewer malfeasance, including with a journal C. His attempts to expose various such incidents, via replies to editors and his website, drew reactions that varied from disapproval, as in the confidential letter sent to H, to offense, as in Editor D's emphatic advice. He told S he is "satisfied that the referees consulted were honest and did professional jobs. If your comments are criticisms of the refereeing process in general, then you should find some other forum in which to debate them. If they are specific criticisms of [D's institution], and you genuinely believe them, then you may do better to find some other journal to submit your articles to." D then "consider[ed] this correspondence terminated." After failing to receive a proper response from Authors A and B regarding their error at M, S submitted a long manuscript, including the correction. Seven months later, M stated that it could not consider the manuscript. Regarding the short note, submitted later (and treating the error exclusively), Editor G insisted that "corrections are by the same authors. So [S' note] must be considered a new article. It is certainly not appropriate for [M] by length considerations. It certainly seems that you and the authors of the original [monograph] do not agree on many things. I think that if you want to publish this, you should submit to some independent journal and have it refereed. So we cannot consider this for [M]."
Reactions from other journals were quoted, where S subsequently tried to submit his correction note. One referee acknowledged that "[t]his paper describes an error in a [monograph] that appeared in [year]. Given that [monograph] has been cited [number] times (according to the [X database] count), it is not a major publication. Nonetheless, it is of interest to the [specialized] community, and has generated further research. Thus, a paper correcting an error in it should be published. But does [this journal] publish papers of that sort, or should there be an audience outside of [this particular] community?"

General practices
Of course, many editors are of respectable mathematical caliber, and their efforts to make a journal as good as they can for their readers are credible. It is plausible that the many submissions, diverse referees' opinions (Starbuck, 2003), a lack of definite evaluation criteria, etc., pose difficulties to the publishing selection process of a journal (Lawrence, 2003). At the opposite end, the author's life is hardly made easier.
Officially, journals seldom restrict potential authorship, and sometimes openly pledge to a fair, unbiased, etc., consideration process. In reality, though, the public has no access to editors' discussions, and even less so to their minds. Regardless of their reasons, editors can take considerable freedom for severe action. Especially when "it is hard to prove it," A's argument (regarding S' experience with Journal C) that "if his papers are good, some journal would publish his papers," for instance, suffices to virtually ban every undesired author from a given journal. Obviously, our current system profoundly hinges on the assumption that, at least in general, an editor will treat the author's paper according to its real worth. But handing one's contributory duty over to an imagined scenario is outright evasion. When informed that, and where, S succeeded in publishing some papers they had previously rejected, Journal C did not comment, while Editor D largely reiterated his posture. Another related excuse is backlog, which is very difficult to track externally, but can be emphasized differently to different authors.
We add that S was not alone in his attempt with his website; at least one other famous mathematician is known for posting reports he received. Also, a journal like Rejecta Mathematica (2009) has tried to integrate this theme into its philosophy. But D's reaction illustrates why the idea falls short. An editorial process can be notoriously complicated for an author. (An illustrative selection of evidence appears below.) Thus openly exposing the criticism his work received does little to help his efforts at publishing. Even after he eventually accomplishes it, this disclosure, as adopted by the Rejecta, or also supported by Watve (2023), inevitably fuels controversy and deepens a rift between authors and editors/reviewers. All of B, C and D demonstrate that prejudice sticks, past criticism is hard to get past, and publishing success seldom proves unequivocal. It is thus important, when introducing accountability, to avoid making the discord (unnecessarily) explicit.
In relation, we clarify here that almost all mathematics journals opt out of double-blind peer review. Authors' self-anonymization can seriously impede presentation, since building on one's own previous work is very common and often substantial, leading to indispensable self-citations. These can face (formally) blinded authors with sometimes awkward requests, such as to reinstate unidentifiably (in)cited resources specifically for the sake of peer review, or to prove what peer review they themselves underwent. Furthermore, the distribution of preprints and internet search engines have facilitated uncovering identities, complementing the predating factor of close-circuit familiarity (e.g., Dickey, 2019, p. 8; Smith, 2006, p. 181). In fact, intent to guess anonymized authors is documented (Kuehn, 2017). We will also add below in Section 3.1 issues with plagiarism 'scans.' Single-blinding renders these irrational, even ludicrous, formalities obsolete. But tracking down whose paper one received, no matter how easy, could take time and leave uncertainty. Having the information readily and officially available may thus still serve some reviewers' potential bias.
When agreeing to handle a manuscript, a journal demands the exclusive right to do so, and the author should not submit the manuscript simultaneously anywhere else. However, a time frame for this process is not guaranteed, even if journals provide some general guidelines for, and data on, turnover speed. Not uncommon are delays to consider a submission (Tosi, 2009), in mathematics sometimes over years (see Notices, 2010, p. 640). But a delay not to consider it, as with S' long manuscript, can hardly be explained by careful scrutinizing. Neither uncommon are reactions of a referee (or editor), which can vary from a lapidary phrase "to wash his hands" (in a reputed colleague's language), over lacking any degree of politeness, to much worse (see Nature, 2001, or Smith, 2006, p. 180).
Among the methods adopted by certain referees to "shoot fast enough" (Lang, 1998, p. 797) is copy-pasting derogatory comments about a paper to multiple editorial offices (Stoimenov, 2022). Duplicate requests are more commonly possible in mathematics, where the circuit of reviewers can be narrow. While many editors may welcome such inference as a reason to discard a submission, those concerned about the integrity of review have limited opportunity to spot being sold a tour-de-force. (Iteratively supporting the author, against various editors' limitations, should contrarily not be regarded as a major reviewer offense.) A cliché like 'too specialized' has long been assumed tacitly acceptable, but once a journal claimed that a paper by S is "extremely specialized" in a rather unrelated field of mathematics. Only after S' collaborator wrote to the office multiple times did they admit that they had not identified the subject area properly. This is among the instances displaying that, when the work fails to please an examiner's taste, it does not really matter much how the author learns about it. And this was a comparatively respectable journal. Timing issues play a role as well: "This paper first appeared in [year] on [preprint server V]. I am not sure why it has not already been published." According to the author, the report quoted from was itself forwarded by the editors with some delay. Portraying him as the ultimate culprit, such a blurry implication adjudicates nothing in his article's (delay) history, but further colors the intrigue repertoire.
A heavily unfavorable referee report can also affect the board's general perception of an author, and thus its reaction to his subsequent submissions. With records easily maintainable, this can become a long-term handicap for an author. And as mentioned (Stoimenov, 2022) about B (and his advisor), journal-internal exchanges are not the sole channel to taint an author's image. Even with a positive report, or after a revision is requested, the final decision could well be negative (cf. Benos et al., 2007, p. 146), and editors usually decline any debate over it. If a revision is admitted, an author can also, unfairly, get blamed for not addressing comments the editor forgot to forward to him, or conflicting advice from a different reviewer.
It is difficult to estimate how generic such situations are, but at least in mathematics, some of them are very common. H's intervention in one famous journal under his editorship helped S out (only) with his eighth submission; at least four of the previous seven had been reviewed supportively. This recourse is rather exceptional, though; usually, the author's sole option is to try different journals. (We will return to this point.) While facing potentially recurring and/or lingering inequity, he may need to seek employment, or to compete with several other researchers working on the same subject.

Online publication and online search
An unprecedented candidate for front-runner among the abnormalities of online publishing in mathematics is the recent journal N, which retroactively, and against the author's intent, deleted one of its publications (Retraction Watch, 2018). It is through its online-only platform that N could permanently erase a piece of its content (of which it now officially denies record). As usual in such situations, a debate heated up mostly over the merits (or demerits) of the article, and the internal modalities behind the editorial mishap. However, the controversy should not distract from seeing the incident, at minimum, as a stark departure from habitual publishing conduct. And it does little to address the key questions that arise, such as about standards of scientific internet management. (This will be further discussed in Section 3.5.) Official investigative bodies had little to say either, as usual declining support to the author.
Partly due to the special (technical and closed-society) nature of review, mathematics is far more resistant to recent innovations in the publishing process (Horbach & Halffman, 2018). Certainly, some bring a genuine advantage. I personally find no hassle in communicating with the editors via email (rather than postal mail or phone calls). And facilities such as submission managers or transfer desks properly serve their purpose. However, opinions on modern electronic infrastructure have apparently grown as far as supporting "a fully automated publishing process-including the decision to publish" (Horbach & Halffman, 2018, p. 7).
One such notable asset, whose results in mathematics remain to be observed, is "a tool that checks whether there are sections in the paper that have been published before," or, more bluntly, one that outputs "[number]% similarity to internet text." How this "similarity" is calculated is not transparent. Legitimate coincidences include attributed quotations, or some possibly outdated or fragmented versions of the material on a preprint server or a harvest downloader's website (see Section 3.5 "Internet and electronic platforms"). A machine may struggle to determine the objective behind such matches, especially if authors cannot be compared due to self-anonymization for review. And sure enough, when only the third editor asked responded about which resources the tool deemed similar, these indeed fell into the above categories.
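How such a percentage can arise from entirely innocuous matches is easy to illustrate. The following is a minimal sketch, not the workings of any actual tool: the texts are hypothetical, and word-trigram overlap is merely one plausible basis assumed here for such a score.

```python
def ngrams(text, n=3):
    """Split text into overlapping word n-grams, a typical unit for similarity scans."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_percent(submission, indexed_source, n=3):
    """Percentage of the submission's n-grams also found in an indexed source."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return 100.0 * len(sub & ngrams(indexed_source, n)) / len(sub)

# A (hypothetical) earlier preprint version of the author's own paper:
preprint = "we prove that every reduced diagram of the knot admits a minimal genus surface"
paper = "In this note we prove that every reduced diagram of the knot admits a minimal genus surface."

print(round(similarity_percent(paper, preprint)))  # prints 73
```

A score above 70% looks alarming, yet here it reflects nothing but the author's own preprint. Whether a given tool distinguishes such cases is precisely what its opaque output does not reveal.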
Apart from facilitating the invention of unfounded charges, obfuscation of similarity data can, for instance, cover up the author himself being plagiarized. Suggestively, web searches can help reveal such infringements also. But the editorial facility is supposed to bring some added quality of service (thoroughness, accuracy, etc.). Especially if it comes with a fee, its (mal)functioning may merit some further concern, definitely before it is accredited to "shoot fast enough" (Lang, op. cit.) on the author. Utilized in deviation from its intended purpose, explicably "the software [...] breeds mistrust" (Rivard, 2013).
A final thought here on pilfering in mathematics is that it would more likely target original ideas and technical performance, which may require more advanced recognition, also due to the large number of formulas.Nonetheless, it has occurred in mathematics as well that page-long literally copied content made it into print.For another adaptation of text-based comparison, see Section 5.2.

Communication issues
A further problem is the spread of templated prose, which turns meaningful communication increasingly disregarded, if not distasteful. We pointed out that assembling a non-trivial yet correct mathematical paper, and even more so its intelligible presentation, is often a painstaking task. This can make artifacts of streamlining 'optimization' a particular annoyance to authors.
"If there are referee reports attached to this letter," then an author may at least believe in some carefulness met with; but otherwise? If a report the author had already revised after was attached, then it was "because of routine and [the editor] did not check carefully." Another notification reminded of a review deadline, including the time stamp "11:59:59:997PM." At inquiry, the managing editor explained that such system messages elude his moderatorial control. (Instances discussed below in Section 3.5 will show electronic management supporting precision far less where it is needed much more critically.) Providing for submissions not reaching a reviewer, one of Organization E's journals conveyed to S "the judgment of the editor(s) involved" in sequential emails almost identical up to the respective titles. The latter, too, no longer earn thoughtful handling. S tested this repeatedly, by changing on a series of submission forms a (Latin) title letter to another (non-Latin) one of the same appearance. The special character, or the words it garbled, threaded through most journal feedback.
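The check that would have caught S's swapped character is elementary; a minimal sketch follows (the title and function name are, of course, illustrative, not any journal system's actual code):

```python
import unicodedata

def flag_non_latin(title):
    """List characters outside printable ASCII, with their Unicode names."""
    return [(ch, unicodedata.name(ch, "UNKNOWN")) for ch in title if ord(ch) > 127]

plain = "On a conjecture"
# Same appearance, but the second word's 'a' is Cyrillic (U+0430):
seeded = "On \u0430 conjecture"

print(plain == seeded)         # prints False: the strings differ despite looking alike
print(flag_non_latin(seeded))  # prints [('а', 'CYRILLIC SMALL LETTER A')]
print(flag_non_latin(plain))   # prints []
```

That the garbled titles threaded through most journal feedback suggests not even such a trivial filter is run on them, let alone a human eye.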
Authors' (or job applicants') names, if written at all, hardly fare any better. One "Dear Author" email of a very established publisher asked about my "satisfaction" with a journal in physics, decisively outside of my research spectrum. Despite a (likewise impersonally formulated) remedying follow-up, the survey clearly contributes to further growing the 'electronic jungle,' where directions from a machine can be irritating and sometimes actually harmful, unless the addressee guesses all possible lapses of its programmer. And when the greeting reads "Dear [email address]," one may infer how "carefully considered" the manuscript was. Series M's delayed non-consideration message to S was also transparently pre-formatted. Of course, it is impossible to ascertain in this kind of correspondence what a human actually typed (or at least held eyes over before sending it off). However, a rule of thumb is how much relates to the submission in a specific way and cannot apply in exactly the same wording to hundreds of others. If meant to be believed, M's email effectively admits that S' manuscript was left to gather dust on someone's desk over the seven months. At least, it is another testimony of an inconsiderate verbal standard.
Thus, going beyond the established practice of sending the author off without telling him anything insightful (and inciteful) about his work, decision letters are nowadays effectively distributable at a key-press. What they declare commonly appeals, implicitly or explicitly, to two types of argument: the limit of printing space, and the journal's scope and quality.
In principle, keenness for efficiency can be understood, which may justify a common response to multiple journal submissions sharing common problems. However, deciding what is a problem is not necessarily improved through computer-aided hard-boiling of editorial demeanor. It is not a priori evident that a 'quality' problem lies with a correction, and not with what it corrects. Lang (1993, §II) argued about the value of neutral review, whose stifling can obstruct meaningful scientific debate. Moreover, it is generally noticeable that errata usually occupy a minor part of the printing space; thus, they add to backlog pressures at most fractionally compared to full-length articles. Editor G's "length considerations" reckoned S' correction inappropriately short, but he did not clarify which "some independent journal" would be more suitably profiled for its treatment.
Journals declare upon rejecting papers that they cannot publish everything worthwhile.This is agreeable, but does not explain why they still prioritize new studies over corrections (and, in social sciences, replication studies).One possible reason for this can easily be named.

Reference counts
The referee report reproduced in Section 1.4 "The monograph case" judges the merits of the A-B monograph, and along with it of S' correction, by the number of citations recorded in Database X. Thus, it testifies to the role of reference counts in disposing over correctness matters. In recent years, citation statistics (Cole, 1989; MacRoberts & MacRoberts, 1989) have, indeed, gained increasing popularity. Initially, in general academia, opinions about their pros and cons were largely balanced (Bornmann & Daniel, 2008; Editorial, 2005; Geisler, 2001).
One of the characteristics of mathematics relevant in this context is, though, that citations vary a lot in importance and role: some are included purely informatively, while others are a necessary tool in a proof, and few relate to reasons such as correctness or correction. Moreover, the citation rate is lower than in other disciplines. This was noticed explicitly in Patience et al. (2017, §4), but has also transpired elsewhere (e.g., Mushtaq, 2007, or Adler et al., 2009, Fig. 3). A possible, but certainly incomplete, explanation is that, in mathematics, extensively compiling related work is not deemed a substantial merit of one's own. Enumerative techniques poorly capture the purpose of a citation, and fewer references further destabilize their indications. For such reasons, serious bibliometry concerns surfaced relatively soon in mathematics (Ewing, 2006; Mushtaq, 2007; International Mathematical Union, 2008; Arnold, 2009; compare, for instance, to Brembs, 2018). However, they could hardly hold off the global trend of making such evaluation a competition criterion between journals. Numerous institutions have now followed Thomson ISI/Clarivate with many more statistical indices of their own creation, which have become a favored feature in the (overflowing) stream of journal self-advertisement. A prominent representative in mathematics is the organization E, which releases a 'citation quotient' based on its database X (quoted in the aforementioned report). One could try to partly justify this development with attempts to address various issues that arose with the procedures calculating the ISI impact factor. But apparently, whatever was a cure, it meanwhile only feeds the epidemic.
For example, Database X generally tries to account for references to a not-yet-published paper by later linking them to the entry of, and thereby crediting, the journal where the paper appears. However, invoking a rule of exact match, it skips this adjustment, and at request denied it to S, if a word is replaced or added in the title upon publication. S followed reviewer feedback in making this title change, which ended up obscuring his paper on X. This raises the question of what X's citation counter really measures, and whether it helpfully reflects what is (or is not) "a major publication," as used in the above-mentioned report. And while Journal N's no-longer-published paper is probably easy to discount, concerns will go much beyond citation data if this becomes an example to follow... A new instance was an anonymous letter circulated by the Korean Mathematical Society (KMS, 2021). The email complained that, based on impact-factor publicity at predatory publishers, certain academicians' schools in Korea can inherently secure positions and government research funds. In turn, through APC paid from these funds, the publishers acquire a flow of public (tax-payer) money to finance themselves. The dismay allegedly led to the expulsion of the journals in question from some abstracting services, despite their high-measured 'impact.' Once his paper is published, an author obviously cannot control the scanning methods applied to its bibliographic content. But their side effects, however distortive to the intrinsic purpose of citations, can become his rightful concern. Among the possible (and plausible) conclusions, supported by the KMS posting, is that citing (or not) certain journals can, even if indirectly, affect one's competitiveness in job or grant applications (cf. Casadevall & Fang, 2014, p. 2; Dickey, 2019, p. 10).
This extends the serial evidence of how counting formalisms add bias to essential scientific management, and thus the risk of a manipulatory quagmire. It is then conceivable that some of the 'impact' the 'factors' exert also contributes to turning the reliable literature increasingly liable.

The journal landscape-an alternative?
When any scientific reasoning fails, the ultimate argument to silence dispute over rejections is, as explained, that the editor's (or referee's) opinion decides what material is suitable for publication. Conveniently, the author is then commonly pointed to other, allegedly more appropriate journals. But S' conflict-triggering paper with B (see Stoimenov, 2022) is only one instance showing that insistently disregarding such advice has often, even if not easily, led authors to find outlets for their work more worthwhile than the place directed to, sometimes even more than the place directed from. This next-in-line peculiarity at least suggests that some of those responsible for publishing are inclined to whims about an author's choice of where to submit, the sole essential freedom they have left him. And it is not generally certain that the same editor or referee will be involved at a proposed alternative journal, and genuinely interested, to help organize publication.
Whether depreciatively minded, mentally testing the author, padding rhetoric, or not, such comments may carry some reason insofar as they concern new research. However, when a journal's opinion is valued enough to justify it turning away from its own errors, to be addressed somewhere else, there seems to be nothing at all that could not be justified. Indeed, A's advice to S that "you better check" clearly reveals his belief that ascertaining the truthfulness of published research can be charged to the reader who plans to work with it. But such a "critical departure from common standards" (Doty, 1991, quoted further in Section 4) may scale up to occupy all of science with verifying its writings, and neither the publishing landscape as a whole, nor the internet, and even less someone's private conversations provide an efficient solution.
As is transparent from A's quotes in Section 2 "General practices" (among many others), the variety of publications is increasingly advocating the thesis that securing attention to any piece of work is solely the author's duty. And sending him to ramble through discoordinated journal expectations extends to corrections. Editor G brusquely points S to whether "you want to publish this" [emphasis added], and away from whether, or what, he together with A and B should do. They did not need much to properly handle their problem, and "some independent journal" would rarely include remedying another's into its own priorities.
One such journal quoted a referee calling S' correction note directly a "strange paper." The concern about what appeals to the journal's readership is natural, but deeming corrective efforts "strange" suggests that one may consider a 'normal' way of judging scientific material to be that the statement matters, not the proof. Possibly "repairing that mistake is not such a big issue," as "a couple of experts" persuaded one other editor, but after S' note eventually proved publishable, what these "experts" thought can definitely also be questioned. An author overcoming dismissive judgment is not always constructive, as the beginning of Hill's account (Hill, 2009) shows. However, S was informing of an error, not informed of one. Its originator could try to seek vindication in various excuses put up by places (not) "to publish this" (cf. Lang, 1993, §I.3).
It can be debated whether journals are the proper concept to (access and) assess such kinds of expositions (Lang ibid., §V.1(f)). But the usual, and not very friendly, discourse about who and where (else) is to read "this" at least reveals a widespread negligence towards "the fundamental problem of scientists not answering scientific criticisms of their work, not allowing publication of criticisms, or requiring other scientists to submit to various authorities" (Lang ibid., §V.3). In net insight, however plausibly journals insist on specific criteria more than supporting any general righteous cause, this also evidently turns them effectively "clogged" as "ordinary scientific channels [...] for the presentation of scientific challenges" (Lang ibid.).

Internet and electronic platforms
As indicated, the internet at large, too, lacks sensitive management for such a resource. Some journals have adopted open access via their websites for larger visibility and citations (Pagliaro, 2021). The previously discussed KMS memo clearly suggests, though, that authors' own article financing, even if not from their own pocket, can spur quality-unrelated considerations. Thus, OA does not intrinsically enhance the consistency of the literature and, if anything, should make journals particularly vigilant about what they approve.
The variety of journal-external assets includes tools such as Google Scholar. It can often quickly find records of concerns expressed regarding the truthfulness of a given result (say, a theorem, or counterexample), through publications, preprints, or social media. However, this advantage also bears out the internet's fundamental flaw: in a disordered fashion, it promotes an incoherence of not particularly science-oriented ideologies and practices. For example, a business-style model that neglects (or redirects) small-scale issues to focus on big ones can be extremely counterproductive, especially in mathematics. This, in turn, can play into the hands of loose editors and authors to mislabel the internet as a public forum for verifying (and rectifying) results for which they can nevertheless claim credit.
Preprint servers (Jackson, 2002) are supposed to be set up slightly more systematically. However, they still often support increased access (Pagliaro, 2021), and volume, more than having (or getting) their records straight. Server V undergoes no clear academic maintenance procedures, except for the official published content of a few journals now using it as an 'overlay.' As an example, its administrators refused S' request to take down an uncorrected and unedited early version of his long manuscript and to reference instead the book's official homesite. They reminded S that he could update his entry, but with a finalized (corrected) version already in commercial distribution, he found this option improper. In effect, V requires authors to ascertain that their publisher agrees to the server's open hosting policies, prospectively stalemating collaboration with whichever publisher disagrees. These policies also mandate the editable source of the papers, with the response to replication concerns that its absence would not stymie careful retypers. In contrast, for storage constraints, figures were not admitted for some uploads (referring readers to the author to provide them), and so on.
Scientific precision seems to be less of a concern not only there. All of S' contributions to V were gathered by a certain digital library, run at a reputed US university and declaring elite-profile sponsorship. Further offered there are several dozens of his drafts, with a dysfunctional link pointing to an old workplace location of his website (from which they were downloaded). While collecting large amounts of scientific documents may be a worthwhile practice in other areas, this raw material was begging for recomposition and riddled with errors. In one paper, a technical issue that is very hard to notice, but would otherwise make the entire paper false, was remedied only at the last stage of editing. S managed his website also under the premise of updating and correcting it at his discretion. This freedom is evidently subverted when his files are reposited for unrestricted access, bypassing both his consent and jurisdiction.
We saw above that a human (reviewer) can quote a V preprint, and a ('plagiarism-checking') machine more broadly "internet text," to mount obscure claims about a journal submission (Sections 2, 3.1). Such allegations, regardless of their validity, obviously extend the sequence of (unnatural) caveats to an author regarding pre-publication records of his work. (We discussed the protrusion of his identity in a double-blinded review system, or his open give-and-take on reports.) This sort of chicanery at least limits its scope to an internal evaluative procedure, but Journal N demonstrates a gateway for public-scale misorder as well. And while A's tempted (but thankfully not attempted) hoax would have only tested Database X's panic-making potential, fabrication 'capacity' (Maher, 2023) is now in nearly everybody's hands.
Whether by consensual guidelines, personal intentions, or mechanical algorithms, issues arise with consistency, scientific evaluation, copyright... Ultimately, the internet may actually aggravate research communication more than facilitate it. There is no a priori clarity about how a correction there can be readily located, whether it will be permanently maintained, or who stands for its accuracy. And most importantly, again: when those who published the mistake are not monitored to actively engage in settling these issues, they are only encouraged to believe it, and leave it, to be someone else's job.
Charging the computer with verification was successful for certain famous individual theorems (Flyspeck, 2014; Gonthier, 2005). Such assistance has thus been, and will be, very valuable. However, with incidents (Maher op. cit.) and accidents (K. Lee, 2023), AI has unfolded a debate extending to our global functioning as a society (see, e.g., Kotecki, 2018; Newman, 2015). While the above-envisioned "fully automated publishing process" (Horbach & Halffman, 2018, p. 7; see Section 3.1) could cut delays from months to minutes, fairness and consistency are less straightforward. The extreme complexity of mathematics often confines the examination of the validity of an argument to only a handful of very familiar (human) specialists. Inevitably, a "fully automated" system would have to integrate a substantial portion of the professional skills of all living mathematicians. Should it also evaluate "automated" manuscripts? At the very least, a safe and beneficial operation can only be built upon continuing and dedicated contribution from academia, not the delusions of the software industry.
And with so many cases revealing competence and integrity as totally disconnected virtues, neither is it likely that "a couple of experts" will honestly and consistently spread information on errors, and reach everyone who potentially needs it. Diffuse channels notoriously tend to offer a certain 'instructiveness' regarding pitfalls, intrigues, etc. However, they are as honorable as the idea that nothing better should keep mathematics intact. And they are not what university libraries pay for.

Individualizing mathematical correctness, or: who is responsible?
Above all else, the crucial question stands out: Should one tolerate that authors and journals can evade responsibility for correctness issues of their own publications? We analyzed a number of events that pertain to a variety of problems: unfairness in peer review, editorial ruthlessness, funding competition, etc. However, their common denominator is their contribution (even if not as their main effect) towards undermining the consistency of published mathematics, thus gradually diminishing opportunities for reliable research exchange.
The M monograph case has shown authors negligent about their flawed mathematics when they deem that the one who discovered it "cannot afford more enemies." Thus, correctness issues need open evaluation and jurisdiction, beyond the to-be-corrected authors'. To "shoot fast enough" at (job-seeking) "enemies" is not the proper way to answer criticism (mathematical and ethical). And editorial appetite soured by error reports further drives cover-ups to 'polish' journals and research resumes.
Many of these problems occurred long before the digital age. However, it has diversified and often deepened them. We have observed the advantages of electronic capacity (convenience, speedup, and accessibility). However, it can also be, and is, regularly used to unscientific effect, be it directly (disposal of corrections) or indirectly (deficient reference counting, biased citation), intentionally (spreading misinformation) or not (applying untrustworthy software). The bottom line is that little speaks for automatizing, economizing, or bibliometrizing helping to make the literature safer, whatever their merits in other respects. Like the long-overused appeals to the author's scientific credentials, or expert policing (see Lang, 1993, §V.4(b)), the internet has not provided a satisfactory exemption from publishing responsibility.
In conversations with S, A repeatedly denounced challenges as disturbing the common academic climate. This complacent attitude (see, e.g., Editor D's words in Section 1.4 "The Monograph Case," or Lang's discourse on Edsall in 1993, §V.5) may have been workable 50 years ago, when the needed sense of responsibility was still around. However, such responsibility is evidently neglected now. The three years S waited before objecting publicly have only shown inaction, scientific misrepresentation, and arbitrary power.
Under the emerging postulate of publishing that, for whichever error is found in print, its correction may also be sought in the gossip, journals (gaining revenue) could increasingly serve certain perpetrators' ego (and purse). In relation to the Baltimore case, Doty (1991) wrote that such an "attitude towards the responsibility of authors [...] is a critical departure from common standards... [T]o leave to others the responsibility of establishing the validity of what you have published is not only a fundamental retreat from responsibility but, if it became accepted practice, would erode the way science works. For [...] science moves forward by building rapidly on what is published on the tentative assumption that it is correct, not by waiting for others to test each paper's validity." The detrimental turnout of politicising academic values has alienated genuinely minded scientists. G. Perelman, who, despite being widely respected for his work, retired from mathematical research, is a high-profile example. He was quoted regarding his decision (Nasar & Gruber, 2006): "It is not people who break ethical standards who are regarded as aliens. It is people like me who are isolated. [...] Of course, there are many mathematicians who are more or less honest. But almost all of them are conformists. They are more or less honest, but they tolerate those who are not honest." I consider his words an excellent summary of the state that non-scientific policies in mathematics have led to.
Mathematics, as an endeavor of human beings, may carry their imperfection. But it is disturbing to see it gradually felt less as a common duty, and more as a private property. "In this way, we risk sliding down toward the standards of some other professions, where the validity of action is decided by whether one can get away with it [emphasis added]. For science to drift toward such a course would be fatal - not only to itself and the inspiration which carries it forward, but to the public trust which is its provider" (Doty ibid.). In Lang's verdict (Lang, 1998, p. 339), "we have reached" "such standards." (See also Franzen et al., 2007; Kreutzberg, 2004.)

Review and Meta-review
To improve the level of, and accountability for, review, Fried proposes in his article (2007) a pool of special referees for high-quality mathematics journals, who should be public and receive a small payment. There is some debate on transparent review elsewhere in academia, including disclosing (Godlee, 2002; Ross-Hellauer et al., 2017), educating (Miner, 2003, p. 342; Smith, 2006, p. 181), and crediting (Bergman, 2004, p. 107) open reviewers. Such alternative models are pursued in a few places, like the non-blind ("dialogic") review journal "English Scholarship Beyond Borders" (ESBB), and the open publishing/review platforms Qeios and F1000Research.
A blind-refereeing approach, where the author is unknown to the referee, meets the difficulties already discussed. Another idea could be to allow the author to inquire about the referee's identity after a certain period of time. The conclusion of Trebino's article (Trebino, 2009) contains further constructive suggestions, such as evaluation of reviewers and sharing of reviewer records among journals. Watve (2023) proposed that journals publish articles with grading, relaxing the accept/reject dichotomy. Although benignly intended, this seems unrealistic, since especially prestigious journals would hardly stock up on questionable content, regardless of how low the assigned grade is.
One basic hurdle facing suggestions to rearrange journal procedures is neutrality. While a reviewer's opinion should certainly influence the editors' final decision, it should not, by speculation, go the other way around. Neither should the submission's fate affect how the reviewer is credited.
We made clear why many authors and reviewers prefer exchanged criticisms kept between themselves. Some journals still publish reports for accepted papers (Watve ibid., "A behaviour-based alternative system"), but the reviewer's extra optional/mandatory credit in case of acceptance could lead to bias. Moreover, his report's reward value, say as compared to a regular journal article, will be hard to agree upon. Disclosing his name will similarly interfere with discretion or neutrality, beyond the article-unassigned acknowledgment lists published periodically by certain (also mathematics) journals.
Thus, to raise a reviewer's commitment, there seems to be hardly any viable alternative to paying him (Trebino, 2009, and Watve, 2023, also support this policy). However, subsidies cannot compensate for editors' repulsiveness to reviewers, who will remain arduous to engage and retain. And a negative submission result practically seals a reviewer's identity and comments from public opinion, including his possible undervaluative sloppiness or fakery.
The way out of this dilemma underscores the importance of the journal's self-inspection to limit its risk of low-quality examination. As opposed to the scoring of published articles, the scoring of reviews and reviewers, be it by authors, editors, or co-reviewers, makes much more sense: it will not compromise the outward appearance of the journal but only stabilize its internal functioning. Moreover, this setup does not depend on system-wide coordination, which makes it more rapid and effective to implement. To promote equity and enduring trust, a progressive incentive scheme can depend on the length of the article, the number of previous reviews, and the evaluation grades obtained by the reviewer. In addition, in the spirit of social justice, reviewers without a tenured or tenurable position, or with a lower income, should be paid more.
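To make the shape of such a progressive scheme concrete, one possible sketch follows. The base amount, multipliers, and caps are hypothetical placeholders for illustration only, not a recommendation:

```python
def reviewer_fee(pages, prior_reviews, avg_grade, tenured, base=50.0):
    """Hypothetical progressive review fee.
    pages         -- length of the reviewed article
    prior_reviews -- number of reviews previously completed
    avg_grade     -- mean evaluation grade (0..1) the reviewer received
    tenured       -- True if the reviewer holds a tenured/tenurable post
    """
    fee = base + 2.0 * pages                     # longer papers cost more effort
    fee *= 1.0 + min(prior_reviews, 20) * 0.01   # experience bonus, capped
    fee *= 0.5 + avg_grade                       # quality multiplier in [0.5, 1.5]
    if not tenured:                              # the social-justice uplift
        fee *= 1.25
    return round(fee, 2)

print(reviewer_fee(pages=20, prior_reviews=10, avg_grade=0.8, tenured=False))
```

The point of the sketch is the structure, not the numbers: every factor is neutral with respect to the submission's outcome, as the neutrality principle above requires.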
There seems to be little wrong for an editor, besides documenting his own impressions, in soliciting the authors' (or alternative reviewers') voices regarding comments on a submission. This should be done between the editor and author directly, with the same level of discretion granted to the reviewer (while maintaining his anonymity to the author). (See, e.g., also C. J. Lee & Moher, 2017.) Publishers' surveys are often vague and automated, and many are restricted only to authors whose submissions have been published. Likewise, an author should have the right to know the name(s) of "the editor(s) involved," whose actions, on an attributable basis, should be evaluable to the publisher. And of course, the editor(s) should not be told which author gives his opinion.
Interactive exchanges with the author during review are rather rare, leaving him shielded from the process, while nevertheless very much exposed to its wear-and-tear. Endowed with such extensive authority, and especially if paid, any referee (or editor) acting according to honesty and justice can find little unnatural in having his own words more closely weighed. The common, and flimsy, grievance is reiterated that such a hassle would demotivate everyone from working for journals. However, this would imply that the global goal is bad discipline and the preservation of foul-play opportunities. Furthermore, to reduce undue delays, or at least pinpoint them, the handling timeline should be recorded and simultaneously accessible to the publisher. After the process is completed, an author should be allowed to request a reviewer-name (but not editor-name) anonymized version, and be able to get the publisher to confirm it.
Such auditing provisions will enable meaningful feedback on questions like: 'Do you consider the referee's effort to understand the mathematics in the paper serious, his evaluation arguments sound, his comments helpful, his tone polite, his response timely, his journal recommendations (if applicable) appropriate?' Some authors' (un)success bias will persist, but a co-reviewer has less at stake. His perspective on his fellow's report is at least presumably competent and, certainly on a no-name and post-decision basis, carries little secretive agenda. Similar questions apply to an editor, with the addition of whether his choice of reviewers was suitable and his letters eloquently composed, or whether he navigated a revision well through incoherent suggestions.
Reviewers' evaluation of editors also addresses an important facet of editorial integrity relatively elusive to the author: whether the actions taken accord well with the information internally provided. After sending a report, I am rarely updated on what happens with the submission, even though I sometimes explicitly ask for it. Certain places did not give even an automated acknowledgment of my comments. The editor is, in principle, bound to the advice he receives, once he decides whom to consult. This again emphasizes that his choice matters, and consequently so do provisions to help him along in that regard.
The other chief lament voiced about this type of introspection is the alleged management overhead. However, if editors can handle the dust bin of their continual draw for highlights, a minor extra fairness effort towards their authorship is hardly an unaffordable burden either. With strain (and sometimes careers) attached to it, a journal's manuscript inflow is not a resource for enjoyable self-service. Note also that, while paid review is still the exception, editors are often compensated (however modestly). If, despite conveniences like electronic assistance, their workload indeed grows out of proportion, they can always, as an ultimate measure, temporarily close the journal to submissions. However, this should be done officially, through the publisher, and - except for errata - completely. In this distinctive situation, appeasing a select few (cf. "Editors" in Lawrence, 2003) to hoist the content lineup (or impact factor) appears to be a particularly shady policy.
There is also the reasoning against such questionnaires that (agreeably) asserts why, for instance, flight passengers should not evaluate their pilots. But the qualification gap between authors and editors is arguably still smaller than the one between students and professors, and the class evaluation system was introduced for a purpose, and does not appear to be going anywhere any time soon. The internet is also full of ratings for all sorts of services.
In parallel, greater awareness seems warranted that machines should help with, rather than take over, editorial (let alone review) work. Whenever a machine is deployed, more transparency into its workings is needed, and a human being should be associated with the process and accountable for it. Unsurprisingly, some governments have initiated legislative efforts (Liao, 2022).

Submission case database
Of course, journals could forcefully defy all arguments for change. To overcome this, it is necessary to create a means of accountability outside their jurisdiction. We explain here how an author can express (dis)satisfaction journal-independently (and discreetly): the editor evaluation proposed above can be globalized beyond the publisher level.
A central database can be set up to receive cases authors file on a submission (with evidence), and their grading of "the editor(s) involved." If the communication leaves the editor(s) unknown to the author, his evaluation should apply to each member of the entire board simultaneously. Editorial transfers are not the author's investigative jurisdiction, and if members are allowed, especially namelessly, to send out information on behalf of everybody, then all must stand collectively for the resulting actions.
The database can then display a rating for an editor once a threshold of records about him is reached. This will avoid publicity of individual (heated) exchanges, but reflect a consensus (or compromise) over a large number of independent opinions. That plurality minimizes authors' personal bias reciprocates the principle of editors' overall procedural soundness (as expressed at the beginning of Section 2 "General practices"). Thus, submitted opinions should generally be left to speak for themselves. And the authors should opt for what of them can be disclosed.
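The threshold rule itself is simple; a minimal sketch, with a hypothetical cutoff of ten filed cases, could look like this:

```python
from statistics import mean

THRESHOLD = 10  # hypothetical: minimum filed cases before a rating is shown

def editor_rating(grades):
    """Return the displayed rating, or None while below the threshold,
    so that no individual (heated) exchange becomes visible on its own."""
    if len(grades) < THRESHOLD:
        return None
    return round(mean(grades), 2)

assert editor_rating([4, 5, 3]) is None                  # too few cases: hidden
print(editor_rating([4, 5, 3, 4, 2, 5, 4, 3, 4, 4]))     # mean of ten grades: 3.8
```

The choice of cutoff and of the aggregation (here a plain mean) would of course be for the database's advisory committee to calibrate.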
Still, some potential conflicts and abuses must be considered. To avoid excessive reiterations, one may need to limit the number of cases an author can file for a given journal over a given period. Coarse language (if text input is allowed) may require moderation. But reports of editors rescinding acceptance, or of cover-ups, perhaps deserve even more attention. (Even so, it appears better, on principle, not to interfere in journal affairs.) Thus an advisory committee may be useful. Whoever joins it, data about journals under (and during the time of) his editorship should be inaccessible to him. And with advisers openly announced, they can naturally be avoided for review requests.
Displaying an editor's ratings live, and also maintaining an available summary of his past journal work, will provide a valuable source of orientation. Even if repeatable, manuscript submission is innately individual, with no natural relation or interaction between authors of different manuscripts handled by the same editor. (Large coauthorships are also very rare in mathematics.) This leaves an author with few accessible cues to contemplate what he can expect from a particular editor, or whether his experiences with him were the rule or the exception.
In addition, one could extend the capacity of the database to help editors identify copy-paste attacks. For a given referee report he receives, an editor can check how many textually similar reports are already on file against the same author (without journal or editor names in the output, but with dates).
One should by no means assume that similar reports automatically concern the same author. Indeed, report templates are posted publicly, and some could get 'trendy.' But random resemblances become far less likely when the targeted author is the same (and the dates are close). Also, with reports being confidential, there is no opportunity for mutual quotations.
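A similarity query of this kind could be sketched, for instance, with a standard sequence matcher; the similarity ratio and date window used here are invented thresholds for illustration, and the archive entries are fabricated examples:

```python
from datetime import date
from difflib import SequenceMatcher

def similar_reports(new_report, target_author, new_date, archive,
                    ratio=0.8, days=365):
    """Return dates of archived reports textually similar to `new_report`
    that target the same author within `days` of `new_date`.
    No journal or editor names appear in the output, only dates."""
    hits = []
    for text, author, when in archive:
        close = abs((new_date - when).days) <= days
        if author == target_author and close and \
           SequenceMatcher(None, new_report, text).ratio() >= ratio:
            hits.append(when)
    return hits

# Fabricated archive: (report text, targeted author, filing date).
archive = [
    ("The results are weak and the paper is not suitable.", "S", date(2021, 3, 1)),
    ("This paper makes a solid contribution.", "S", date(2021, 5, 1)),
]
report = "The results are weak and this paper is not suitable."
print(similar_reports(report, "S", date(2021, 6, 1), archive))
```

As argued above, the same-author and close-date filters are what keep accidental template resemblances from producing spurious matches.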
It is appropriate for database queries to have the prior knowledge or approval of the full journal board (excluding any editors who are authors of the submission, who are completely left out of the discussion). This prevents an editor from checking whether he can undetectedly propagate his own (review) assaults. Since some reviewer comments are directed to the editors only, it may be useful to allow editors to submit these themselves to the database, to be available for later queries. But such data should remain hidden from advisers. Also, in accordance with the concerns expressed in Section 3.1, it could help to display a journal/editor-anonymized version of records (which AI may potentially assist in creating) as evidential backup for a similarity-test result.
The letter sent to H (Section 1.4) pointed out, referring to S' complaints, that a reviewer feeling brought out of his comfort zone can always write some brief negative comment. This option is rarely pertinent, but some editors will imaginably still perceive it at least as more cultured than text-tampering flimflam. Ultimately, they have the opportunity, and the duty, to judge the substance of their reviewers' opinions. And the database enables the author to account for any tenuous decision he receives.
It should be emphasized that, desirably, reviewer names be completely left off the database. Editors have enough ways (we provided some) to vet their reviewers, and beyond this the responsibility is solely the editors'. It is thus unnecessary to burden the system's functioning with all the possible conflicts of interest associated with reviewers. But since advisers may join after having been reviewers, there should be an announcement period for new advisers, who will not be allowed to access data entered before the end of that period.
In general, an advisory duty should not exclude submitting to journals. Since everyone can file cases, an adviser has no distinct advantage, and neither does anything profoundly impede an editor handling his submission. Once editor-reviewer exchanges are not accessible to advisers, review similarity can also be queried unconcernedly. To be safe, one could allow advisers to file author cases into the database with decreased weight, or only with approval by the full advisory committee. That is, the advisers should collectively judge whether one of them authoring the submission influenced the editorial process so severely that it is unfair to count it towards the editor's database score. Such persuasive evidence will very rarely exist, though. There is no reason to reprieve the editor over far more common reactions such as robot message delivery.
The proposed provisions do not significantly change the publishing process on the authors' part. The right of journals not to make it easy for authors remains valid as well. Rather, there should be more accountability and fairness, and this is what the above suggestions aim at.

Correctness settlement
A further point is more serious attention to an administrative process dealing with correctness in mathematics - both ethical and scientific. According to a familiar source, the organization E publishing the monograph M has no channel for discussing bogus research, and I know of no similar provision at other mathematical institutions. But S' case is not isolated - as Hill's quoted reports witness (Hill, 2009, 2010) - and related structures in other sciences (see Triggle & Triggle, 2007, p. 47) lend potential further emphasis to such problems in mathematics, too. Post-publication review (see Horbach & Halffman, 2018, p. 8) seems to be a viable option. To further enhance the approval of scientific objections, an editorial board (or publisher) can, for instance, appoint a panel, whose composition is openly agreed on in advance by all disputing parties, to reach a verdict on the merits of the proposed correction. However, if "to wash his hands" becomes a widespread avidity of an editor, it could of course already qualify journals as a conflicting party in corrective disputes. (Editor G, who stereotyped the syndrome, still very much remains active in high profile.) This may warrant a centralized authority to oversee appeal evaluation. It may (have to) charge a non-exorbitant fee for filing contestations, but if it so rules, it should bind a journal to publish a correction (as well as to reimburse costs). A payment will necessitate journal-external jurisdiction, but it can also act as a barrier to tenuous demands. Assuming that most scientists using the service will be primarily interested in correct(ed) research, a euphoria of fame-seeking over minor errors is anyway not likely to erupt. It can also be countered by agreeing (or ordering) that less substantial alterations be published anonymously. And while it is hard to pursue (and sue) unyielding editors around the globe, one can, for instance, even openly, blacklist their journals at various ranking services and databases such as X (which is managed by the same people Editor G was working for). This also offers one useful role that electronic facilities can adopt to actively support, rather than subdue, consistency efforts. To any serious journal, public disgrace would hardly be a worthy price for stubbornness over a correction (or refund).
A role model is provided by the Sochung Commission of the Republic of Korea (Sochung), which performs a comparable arbitrative function for academic administration cases. As part of the government, it issues decisions that carry strong weight when challenged in the country's justice system. Given that such a process appears to be the best (however imperfectly) workable setup for dealing with disputed situations, an analogue for scholarly work may, in the end, be less absurd than it initially appears.
However, even more so in the publishing realm, such a regulatory organ must, in the first place, be run by scientists, not lawyers. It should be pointed out again, following Lang (1993, §V.3), that civil and academic responsibilities are two different things, which is why challenging scientific conduct through legal procedures carries not only enormous risks but also questionable prospects. The deficiency of scientific procedures thus creates the "legalistic morass" (ibid.) in which the idea thrives that everything one does in science is legitimized by scaring opponents off, or defeating them in, a court of law. Likewise, the justice system is not meant for any party's battle against a regulatory process. Oversight exists to improve the performance of journals and to make them respect their duties while exercising their rights. And others also deserve an opportunity to know whom they are working with.
Ultimately, the central issue is, as Lang writes in the "Conclusion" of his account (1993), that scientists "uphold the traditional standards of science." And in doing so, "[t]hey must rely on individual responsibility, and they must create an atmosphere and conditions under which scientists, both young and established, can exercise this responsibility without fear-fear of retaliation, fear for their careers, fear for their funding, fear for their publications, fear of the tensions which come from a challenge, fear of being uncollegial, whatever. Will they?" (ibid.) I honestly hope that this story does not end up as a temporary boost of noise, but that some will really start doing something.

A  professor emeritus at a prestigious Canadian university, first author of the monograph
B  tenured professor at a middle-tier US university, second author of the monograph
C  a journal where S had submissions rejected
D  an editor S had a dispute with regarding reviewers