ROGER MANTIE
University of Toronto Scarborough (Canada)
July 2025
Published in Action, Criticism, and Theory for Music Education 24 (4): 25–48. https://doi.org/10.22176/act24.4.25
Abstract: This article interrogates the ethics of the author-reviewer-editor relationship in academic publishing. In the first part of the article, I draw on experiences as author, reviewer, and editor to question why so much learning about the publishing process seemingly occurs through trial and error rather than through mentorship. In the second part of the article, I turn attention to matters of motive and trust in music education research, sharing examples of how other disciplines approach questions of research ethics. I conclude with modest suggestions for how our field might begin to have more frequent and transparent mentoring conversations on ethical issues in authorship, reviewership, and editorship.
Keywords: Research ethics, publishing mentorship, peer review, disclosure statements, funded research, Singapore Statement
Oscar Wilde reportedly said, “Experience is the name we give to our mistakes.” By this measure, I have a lot of experience. In the context of editorship, one of my earliest examples was in the 1990s, when I took on the role for a small practitioner newsletter. I understood my responsibility as soliciting content and compiling it into something considered “newsworthy.” In my naïveté, it didn’t occur to me that I also had the responsibility of ensuring the veracity of the content. I quickly learned my lesson after I allowed a piece to be published that was highly critical of the activities of some members in our practitioner community. I never thought to question the motives of the author, failing to recognize that what I took to be factual description was a one-sided attack that conveniently omitted salient information. Unsurprisingly, I faced heat for failing to properly vet the article. It turns out there is a difference between editor and copy-editor.
My first experience editing a scholarly publication came courtesy of an invitation from Lee Higgins, who encouraged me to guest-edit an issue of the International Journal of Community Music (IJCM), a publication for which I currently serve as Senior Editor. The experience of guest editing for IJCM taught me just how powerful the role of editor can be. Given that I wanted the special issue to be as good as possible, I did not send out all the initial manuscript submissions for immediate review.[1] I did a pre-screen and, for some of the manuscripts, corresponded directly with the authors, offering suggestions for improvement. Only after I deemed the manuscripts “ready” did I send them out for double-blind review. But even here I felt the tension. I saw the value of the articles and wanted them to be accepted. I wanted to believe that each manuscript was strong enough to survive the peer review process, but one can never safely predict such things. My selection of peer reviewers was guided primarily by expertise concerns, but it would be a lie to say that I did not consider whether potential reviewers might be more inclined to reject a given manuscript. In the end, I am proud that I maintained the integrity of the process and that everything was ultimately reviewed with the same rigor and standards as any other IJCM issue. But deep down I also know that if I had been less helpful, the outcome could have been very different. It turns out that the editor really can—to an extent at least—make or break the publication experience. I am glad to have had this early experience, but I wish I had known some things about editorship earlier.
I appreciate the value of experience—if understood as learning through our encounters with “life.” At the same time, not all encounters need occur against the backdrop of ignorance. Vygotsky’s “zone of proximal development,” for example, is predicated upon encounters with more knowledgeable others. Why bother with any kind of formal learning or education if we didn’t collectively believe that learning can take forms other than just trial and error? While there are many things in life, like riding a bike, that can only be learned by doing, I’m not sure that academic authoring, reviewing, and editing fall into that category of activities.
No doubt there are graduate programs in music education where mentorship of authorship, reviewership, and editorship occurs. Based on what I have witnessed of reviewing, responses to reviews, and editorial handling of the process, however, I am left with the impression that mentorship in our field needs to become more widespread than it currently is. I do not presume to offer mentoring knowledge or advice in this article. Instead, I share personal views and experiences in the hope of inspiring graduate programs in music education to embed more mentorship into their courses of study. The more we openly talk about and share our experiences of authoring, reviewing, and editing, the better chance we have of ensuring that future generations will not be left to trial and error, where “experience” is the name given to mistakes.
The Author-Reviewer-Editor Relationship
Like most academics, I first encountered the author-reviewer-editor relationship when I submitted my first manuscripts to journals (and, in one case, an edited book project). Lacking in awareness (because my mentors assumed I should just know?)—and lacking in the good sense to seek out the wisdom of others—I often made a mess of things. In my early innocence, I thought that peer review meant that author and reviewer were peers—equals. I failed to grasp that what peer review really means is that the reviewers are peers and that authors are, by design of process, subservient to reviewers. If authors were considered equals, more reviewers would write comments as though they were addressing their colleagues, not their students, and authors would be able to legitimately challenge the comments of reviewers as peers. In theory, authors can challenge reviewer comments. It is common with many journals for authors to submit a response to the reviewers’ comments as part of the revise and resubmit process. But in practice, the response to comments is understood to be an explanation of how the author has appeased the reviewers’ concerns, not a response that refutes or challenges the concerns. Done artfully and with sensitivity (ideally with the support of a mentor), an author can successfully push back on some reviewer comments, but there is always the risk that a response that refutes concerns might lead to rejection, not publication. Few authors, especially those pre-tenure, are likely to take that risk.
As an author, there is an inevitable frustration when review comments appear to have missed the mark. I have felt this on more than one occasion. In the beginning, I tried to resist. This was not always easy, however. Sometimes reviewers rely on the “weight of authority” to pass judgement. Comments in such cases do not offer evidence in support of their criticisms. Opinions are asserted without warrant. Reviewers presume that their opinion is unassailable by virtue of being a reviewer. This was certainly the case when I received a review that, in its entirety, stated, “I reject this manuscript because of methodology and anti-American bias.” That was it. That was the review. There was no explanation of what was problematic with the methodology or what, precisely, constituted anti-American bias. It was simply a pronouncement by a reviewer who felt entitled to write and submit such a review.[2] Another reviewer (long ago) once chastised me for referring to my interview transcripts as “data.” According to the reviewer, only numbers constitute data. The editor did not offer an opinion on the matter; the review was the review.
Jokes about “Reviewer No. 2” are legion in academia. Inevitably, it seems, one of the peer reviews on a manuscript stands out from the other(s) in terms of its negativity and/or harshness.[3] This was certainly my experience when a Reviewer No. 2 concluded that my manuscript was “pointless.”[4] I sheepishly confess that I have probably been Reviewer No. 2 on multiple occasions but I have never called anyone’s manuscript pointless. I have also never accused an author of “anti-American bias” or any other kind of bias—save for examples where I detect the clear presence of an undisclosed or unaddressed conflict-of-interest. I have no way of knowing if my reviews offend authors, but I can claim that I always strive to present evidence in support of my criticisms.
At an MENC Philosophy SRIG[5] presentation in Anaheim, California in 2010, Estelle Jorgensen presented a paper where she read out reviewer comments she had received, followed by passages from her manuscripts that illustrated poor judgement on the part of the reviewers. One vivid example that stands out in my memory was a reviewer comment that claimed her manuscript contained no research questions, followed by Jorgensen quoting directly from her manuscript, where she had written, “My research questions are ____.” Some people might criticize Jorgensen for acting unethically in sharing what is sometimes understood to be privileged correspondence. Given that her purpose was not to embarrass individuals (as everything was anonymous) but to shine a light on bad reviewing in music education, I would argue Jorgensen acted ethically in the best long-term interests of the profession. It was, in a sense, a form of mentoring the audience.
The preceding discussion raises an important question: Do editors have a responsibility to vet reviews? Given that journal editors have been authors themselves, it is surprising that editors do not intervene more often on behalf of authors in response to questionable peer review comments. No doubt some editors do, but that has not been my experience. Now that I am in the role of editor, I am more sympathetic to the problem. Peer reviewing—in our discipline and many others—is a service activity. It is uncompensated labour that counts for very little on annual activity reports. Many reviewers I have spoken with typically spend three to five hours per review. This is time not spent on one’s own research or any other activity that might actually “count.”[6] While there are some people willing to review manuscripts to advance their own prestige, too often these individuals are not the best reviewers—the kind who offer thoughtful, critical appraisals. Editors understand the value and importance of their review board; reviewers are, in many ways, the lifeblood of a journal. The last thing a journal editor wants to do is to alienate a good reviewer by critiquing their work. When push comes to shove, it is in the best interests of the editor to side with the reviewer rather than the author.[7]
The preceding discussion also raises questions about the quality of reviewing in our profession. In my original draft of this article, I wrote the following:
As a thought exercise, what if all reviews were subjected to a similar process of peer review, whereby reviews could be rejected for being substandard, and reviewers might, as a result, lose their right to serve as a reviewer in the future? I suspect the quality of peer reviews would go up.
One of the reviewers of my draft pointed out to me that, while reviews are not necessarily subjected to the exact process I described, the Sage journal system has a built-in “reviewer scorecard.” I discovered that I do in fact have a reviewer “score” based on my previous work for journals published by Sage.[8] I do not know if other publishing platforms have a similar reviewer scoring system. To the best of my knowledge, the current system used by Intellect Publishing (publishers of International Journal of Community Music) does not.[9]
The author-reviewer-editor relationship can be conceived in various ways. In my experience as an author submitting to journals, the editor often seems like an administrative go-between who, in most cases, merely communicates reviews to the author and, in cases of resubmission, communicates revisions back to the reviewers. My experience co-editing two Oxford handbooks highlighted for me a different kind of ethical relationship: that between the editor and author. While reviewers can be viewed as functioning as a form of gatekeeping and quality improvement mechanism, they typically remain anonymous and are not publicly accountable.[10] The editor, on the other hand, is not anonymous and is ultimately responsible for what gets published.
The two Oxford handbooks in question were intentionally international in their scope and intent, and included authors from various English language traditions and authors whose first language was not English. I was suddenly confronted with a dilemma: To what extent does an editor have a responsibility to ensure a sense of consistency in language usage? How does one balance the desire to honor (or honour, as I would spell it) the voice of the author with the desire to ensure the prose reflects the normative conventions of academic English expression (and whose norms should prevail)? Take, for example, an instance where an author’s written expression risks having them come across as less elegant, or perhaps even less intelligent. Should an editor intervene in such cases, believing they are ultimately acting in the best interests of the author (and reader), or should they take a hands-off approach, letting the chips fall where they may?
And what of authors? Do authors have a right to claim ultimate authority over their words, or do they have an ethical responsibility to readers of their work? Do authors not have an ethical responsibility to recognize that, because the buck ultimately stops with the editor and not the author in any given project, they have an obligation to regard the publishing process as dialogic rather than monologic? This need not mean capitulating to the authority of the editor but, rather, treating editorial suggestions with grace and as the basis for dialogue. In my experience with the two Oxford handbooks, several authors expressed gratitude for my editorial suggestions. One author, however, expressed displeasure with my editorial suggestions, and another felt so uncomfortable with my suggestions that they ultimately pulled out of the project. I have no way of knowing how all the other authors felt beyond what might be inferred from their acceptance of my suggested edits.
The Matter of Trust
The issue of author motivation is likely familiar to anyone who has ever written. Why write? For academics, this is something of a rhetorical question as most of us do not have much of a choice. It is typically an expectation of the job; we are supposed to conduct (and publish) research. The real question, however, is not “Why write?” but “Why write this rather than that?” Writing involves countless choices that are within our control as authors. We cannot alter the data—at least we cannot if we are acting ethically[11]—but we can put our foot on the scale, not only in terms of the data we choose to generate and how we choose to generate it, but in terms of how we choose to interpret it and report on it.[12] This inevitably occurs when we research and write about something we care about or believe in. The pages of our music education journals are filled with studies and articles that report on and extol the benefits and virtues of music learning and teaching. How many articles report on or argue the opposite? When reading through our music education journals I am often reminded of the lyrics to “Home on the Range”: Where seldom is heard, a discouragin’ word. Where are all the empirical studies on ineffective music teaching practices—the ones that point out the flaws and question the place of music in the curriculum? Prior to a recent wave of interest in “trauma” in music education, there were very few published studies on abusive practices in music education. Should not the lack of critical studies about music education raise questions about the motives of researchers? Why should readers trust what is published if everything is positive?
This is not to imply that critique does not exist in our field. The MayDay Group—and its flagship journal, ACT—were founded on critique.[13] Some academics in our field have fashioned their research agendas on critiquing music education practices. I would argue, however, that there is a difference between rationalism and empiricism. While some reviewers may be willing to give a green light to manuscripts offering criticisms based on an underlying goal of improving practice (though critically-oriented authors no doubt have stories about challenges in the peer review process[14]), it is highly unlikely—based on an appraisal of extant published research—that an empirical manuscript demonstrating negative or ineffectual impacts of music learning and teaching will see the light of day. I have no way of knowing how many—if any—manuscripts of empirical studies demonstrating negative or ineffectual impacts of music education have been submitted for peer review, but if they have and have been rejected, I suspect the result is that authors learn very quickly that it is in one’s best career interests to publish studies that reflect positively on music learning and teaching.
“Post-publication critique,” where authors interrogate published empirical studies and claims made in their name, is rare in music education. Though common in other fields (e.g., Hardwicke et al. 2022), I suspect that our field is too small and that few researchers (especially pre-tenure) are willing to risk becoming a pariah by publishing work critical of their colleagues. One of the few people to do this in a music education-related field is Stephen Clift, whose critiques of arts and health research (e.g., Clift, Phillips and Pritchard 2021) serve as a reminder that, no matter how badly one might want particular findings to emerge in support of our beliefs, our reputation as a field rests on researchers maintaining the integrity of the process.
There is another aspect to the ethics of motive, however—one that speaks to researchers protecting their long-term interests. I have wrestled with this issue over the years. While I have always attempted to bring criticality to my research, I am mindful of the “burning a bridge” phenomenon, where I may not be well-received by research participants in the future if I publish something that paints them in a negative light. This might be an acceptable risk if the group in question is, say, community band members (if I’m prepared to walk away from them as a research population in the future), but I might not be so willing to risk an ongoing relationship with music teachers. Imagine, for example, a case where a researcher in music education observes music classrooms or interviews music teachers and then publishes an article that is harshly critical of the people involved. Who would consent to be observed or interviewed by this researcher in the future? Put differently, researchers are motivated to maintain positive relationships with research participants so they do not jeopardize future research possibilities.[15] It is also respectful to avoid being critical: in most cases, research participants are uncompensated for their time and, unlike the researchers, receive no professional rewards for their volunteering. The least that researchers can do as a way of “giving back” is to publish something positive.
And then there are those situations where author motives may be influenced by external forces, the clearest example of which is research that is sponsored or commissioned by entities that are not at arm’s length.[16] Although this kind of funding is not widespread in music education, at least based on the infrequent mentions of such funding in article disclosure statements, it does occur. A non-profit entity that has appeared in music education in recent times, for example, is Music Will (until 2022 known as Little Kids Rock), an organization that, despite having no legal or jurisdictional standing in statutory schooling, somehow claims to run “the largest nonprofit music program in the US public school system.”[17] A Google Scholar search in December 2024 for music education and “Modern Band,” Music Will’s branded version of popular music in schools, returned no fewer than forty peer-reviewed articles. The majority of these articles have been authored by people who have (or have had) close connections with the organization (as employees, former employees, or as researchers Music Will has sponsored). These associations are almost never mentioned in the published author affiliation descriptions. If and when a disclosure statement is included, the author(s) almost never report a conflict of interest.[18] Those of us who know of the connections between authors and Little Kids Rock (Music Will) through personal acquaintance and insider knowledge can interpret the articles in this light; for readers without such knowledge, however, the omission represents a breach of trust. This becomes especially apparent when journals subsequently publish corrigenda to correct the failure to properly disclose conflicts of interest from the outset.[19]
To be clear: I am not suggesting that research funded by or written by those with close affiliations with Little Kids Rock (Music Will) or any other entity is invalid or compromised. Rather, I am highlighting a way in which the motives of researchers can be influenced by non-arm’s length sponsors or employers, and the importance of, at a minimum, disclosing these associations to readers. As should be obvious, researchers are incentivized to generate studies with positive findings to appease funders—something that may increase the possibility for future funding. One does not bite the hand that feeds, so to speak. In extreme cases, funders may even believe that they are entitled to research with positive findings because they have, in effect, purchased it.[20] Clearly, purchasing “results” is wrong. Period. And authors do need to disclose their relationships. Period. But, to my earlier point about the lack of critical empirical studies in our field, not all funded research is bad. Trust has to work both ways: authors need to be transparent in their disclosures, but reviewers and editors also need to be transparent about how disclosures are handled and how such disclosures affect the evaluation of a manuscript. Otherwise, the result is a gatekeeping mechanism that prevents the introduction of new ideas—especially those that challenge the status quo.
Although uncommon, another way researcher motives may be influenced is through coercion or intimidation—be it perceived or real. While many of us in music education no doubt regard most of what we do as benign or not of interest to many people outside of our relatively small circle, the stakes are much higher for non-governmental organizations (NGOs) or other entities whose operations depend on positive public perception. Consider, for example, something like research on El Sistema or some other large-scale program connected to powerful people. It is not impossible to imagine those with power and influence being displeased with critical research—perhaps to the point of directly or indirectly threatening a researcher or engaging in tactics to discredit their reputation. No matter how much I may want to act with integrity, I am not willing to gamble with my career at this point. Better to stay silent than potentially incur the wrath of an unhappy NGO.[21]
Ethical Guidance for Authorship
There are many sources of ethical guidance for authors. Many of these sources, however, are found in health care fields. I would argue that the contributions and statements by national educational research associations provide guidance with the closest disciplinary relevance for the field of music education. My internet search revealed several educational association websites that include statements and provide resources on the ethical conduct and reporting of research. The Australian Association for Research in Education’s “Code of Ethics for Research in Education,” for example, lays out “Four Basic Principles.”[22] These are notable for their framing, which speaks to the consequence of a given piece of research, the ethical expectation that research help to develop human goods, the risk of harm to an individual, and the respect for the dignity and worth of individuals and the welfare of students, research participants, and the public. The complete document is attractive for its concision, although the brevity comes at the cost of detail. There is, for example, but one brief mention of disclosure and no section on conflicts of interest.
Compared to the Australian Association for Research in Education, the British Educational Research Association (BERA) website is more extensive in its documents on research ethics. In addition to its “Ethical Guidelines for Educational Research,”[23] which contains thorough discussions of disclosure and conflicts of interest, the BERA has to date published three companion “case study” documents on emerging topics, a “charter” on research staff in education, and a document on “close-to-practice” research. The latter document is notable for its engagement with arguably the number one issue that distinguishes educational research from research in other fields: the practical and ethical challenges of researching educational settings in ways that can never fulfill research expectations for objectivity and distance.
The current website of the American Educational Research Association (AERA), while not the most user-friendly, contains broad-based information for authors at all career stages (though surprisingly, it lacks information on “close-to-practice” research found on the BERA website). The earliest AERA example on research ethics is a 1992 statement published in Educational Researcher (“Ethical Standards of the American Educational Research Association”). This original statement, specific to AERA journals (but which served as the inspiration for the BERA ethics statement), provides insight into a growing awareness of the importance of professional ethics in educational research. The current AERA webpages on research ethics and standards for research conduct are easily accessible sources.[24] The AERA web pages point to a number of downloadable resources, such as the current “Code of Ethics” (adopted in 2011, replacing the 1992 Ethical Standards), and a curious, nondescript PDF called “AERA General Guidelines for Reviewers.” Importantly, this PDF has links to “Standards for Reporting on Empirical Social Science Research in AERA Publications” and a companion contribution that draws attention to the existence of non-empirical research and its distinct standards of quality: “Standards for Reporting on Humanities-Oriented Research in AERA Publications.” These two documents contain a wealth of information that could inform mentoring efforts.
The current AERA research ethics page also contains a link to a 2014 AERA statement that endorses the Singapore Statement on Research Integrity. According to the website of the organization now known as World Conferences on Research Integrity, seven conferences have been held: Lisbon (2007), Singapore (2010), Montreal (2013), Rio de Janeiro (2015), Amsterdam (2017), Hong Kong (2019), and Cape Town (2022).[25] These conferences have produced a number of global statements on research integrity: the Singapore Statement on Research Integrity, the Montreal Statement on Research Integrity in Cross-Boundary Research Collaborations, the Amsterdam Agenda, the Hong Kong Principles for Assessing Researchers, and the Cape Town Statement on Fostering Research Integrity through Fairness and Equity. The latter is particularly notable for the inclusion of the section, “Indigenous Knowledge Recognition and Epistemic Justice.”[26]
The Singapore Statement appears to represent something of a landmark document. It is notable for its simplicity. Its four core principles read: Honesty in all aspects of research; Accountability in the conduct of research; Professional courtesy and fairness in working with others; Good stewardship of research on behalf of others. The subsequent fourteen responsibilities, while seemingly self-evident to those with research experience, provide an excellent summary resource for graduate students and early career researchers. Salient to my earlier point about trust in authorship is responsibility No. 9: “Conflict of Interest: Researchers should disclose financial and other conflicts of interest that could compromise the trustworthiness of their work in research proposals, publications and public communications as well as in all review activities.”
Mentorship for Reviewing
“Writing anonymous peer reviews is an academic ‘black art.’ Such assessments are vital to scholarly publishing but we receive no formal training in how to write one” (Haggarty 2012, np). Haggarty’s “First Person” perspective, published in The Chronicle of Higher Education, squares with my own experience. My internet search for literature on the mentorship of peer reviewing turned up several non-book sources, but not as many as I had anticipated. I found no sources in a disciplinary area related to education. Save for a journal editor’s introduction in the field of management (Carpenter 2009) and a published conference presentation in engineering (Benson et al. 2022), the articles I found were all from health care fields. Six articles specifically examined mentorship of reviewers (Hernandez et al. 2022; Kirby et al. 2021; Lala and Mentz 2022; Rodríguez-Carrio et al. 2018; Houry et al. 2012; Xu et al. 2016).
I did not discover a book on peer reviewing with mentoring in the title, but my search revealed four recent books specifically aimed at examining the issue of conducting academic peer reviews. Although one of these (Spyns and Vidal 2015) was disappointing (offering little beyond the stock advice to be honest, objective, and fair), books by Barczak and Griffin (2021), Paltridge (2017), and Starck (2017) all provide thoughtful and comprehensive discussions of academic peer reviewing. Any of these could serve as required reading for graduate students and early career researchers. Adopting a speech acts and corpus linguistics perspective, Paltridge’s The Discourse of Peer Review: Reviewing Submissions to Academic Journals is particularly notable for its clarity of structure. It classifies genres (e.g., grant applications, tenure files, books, book proposals, journal articles), and makes a distinction between discourse and content in reviews and how these might appear in “accept reviews,” “minor revisions reviews,” “major revisions reviews,” and “reject reviews.” It also discusses language issues, politeness, the evaluation of language found in reviews, and, importantly, mentoring and reviewer training. It is worth noting that journal publishers have taken it upon themselves to start offering guidance. One example I have recently come across is the Sage Journal Reviewer Gateway (https://us.sagepub.com/en-us/nam/journal-reviewer-gateway).
More on the Author-Reviewer Relationship
The bottom line of peer reviewing is, in most cases, offering judgement on whether or not a manuscript should be recommended for publication. But peer reviewing is (or should be) about more than that. The narrative nature of a typical peer review takes the form of a special kind of dialogue between author and reviewer. I say “special” kind of dialogue for four reasons. First, there is the anonymous aspect. Anonymity is usually claimed as important for the integrity of the review. It is, for example, supposed to allow the reviewer to provide an appraisal that is free from concerns over potential professional retribution. But, as with many instances of anonymous authorship, this removes the “human” element in dialogue. In some cases, this can lead to bad behaviour where a reviewer expresses thoughts differently than they would if they signed their review.[27] As Haggarty (2012) observes, “Anonymity can tempt people to write needlessly mean-spirited remarks” (np). Before submitting a review these days, I always ask myself if I would be willing to put my name at the end of it. If not, I edit.
The second reason I consider reviews a special kind of dialogue is that reviewers are not just offering a summative evaluative judgement (which, by definition, would not constitute dialogue) akin to what an adjudicator might offer on a musical performance, but a judgement intended to be formative. In this sense, articles should be the result of a collaborative effort between author and reviewers who ideally share the same goal (motive) of wanting the best possible published result. As Barczak and Griffin (2021) assert, “the reviewer’s role is to help authors craft a manuscript that clearly communicates rigorously executed research answering an important research question, and which contributes sufficient new knowledge to warrant publication” (31). Regrettably, this conception is not always shared by authors and reviewers, who sometimes take an adversarial stance or fail to recognize the formative function of reviewing. I have received reviews, for example, that, even if supportive, offer virtually nothing I can use to improve my work.[28] Conversely, I have occasionally encountered responses to reviews I have written where the author is resistant to—even insulted by—any suggestion that their manuscript could or should be improved. These authors apparently do not embrace a conception of reviewing-as-dialogue.
A third reason reviews are a special kind of dialogue is that they function in the context of gatekeeping and disciplinary evolution. As Haggerty (2012) unapologetically states, “Serving as a peer reviewer makes you a gatekeeper, as you put your own small stamp on the types of works that are recognized and rewarded” (np).[29] Ideally, reviewers are cognizant of the precedent-setting nature of research publication. Giving a green light to a manuscript means it will become part of the disciplinary base of the profession. This is a massive responsibility, and, judging by the number of questionable articles published in our field, I am not convinced all reviewers take it seriously enough. The mere absence of methodological errors is not sufficient reason for a study to be published. The citational aspect of research and scholarship means that studies are rationalized by citing existing published research. If a manuscript is greenlighted on the strength of the trivial studies it cites (because reviewers rarely have time to sufficiently appraise all the cited literature), the result is triviality built upon triviality ad nauseam.
Viewed from the perspective of gatekeeping, author and reviewers ideally share an understanding that, no matter how much an author might want (or “need”) a publication credit, it is professionally irresponsible to publish trivial research because it undermines the credibility of peer-reviewed research as a marker of trust (see Barczak and Griffin 2021). As mentioned in the editorial for this special issue, peer review is far from perfect. (The worst system except for all the others.) Publishing trivial research only makes the system weaker. How can we expect others to take us seriously if we don’t take ourselves seriously? Better mentorship might lead more reviewers to consider the long-term implications of greenlighting manuscripts with precious little to commend them beyond the fact that the author didn’t make any mistakes.[30]
I do recognize the challenge here. Thomas Kuhn (1962) long ago pointed out how disciplinary forces incline toward conservatism. As discussed earlier, authors in music education are more likely than not to conduct empirical studies that are positive about music learning and teaching—and reviewers are more likely than not to be favourable to manuscripts that contain positive findings about music learning and teaching and less favourable toward manuscripts that fail to provide positive findings. It is very difficult to disrupt the paradigm. But this conservatism goes beyond just favouring positivity in research findings. It extends to the paradigmatic aspect of what counts as research to begin with. Our research journals in music education, for example, continue to publish articles that bandy about words like “validity” and “reliability” in ways that many other fields left behind decades ago. There is a big difference between claiming a given piece of research is trivial and claiming that a piece of research does not qualify as research. How might an author and reviewers engage in dialogue that recognizes the importance of gatekeeping for the purpose of ensuring trivial research is not published while at the same time recognizing and acknowledging when gatekeeping is being weaponized to prevent “disagreeable” or “non-traditional” research from coming to light?
A fourth reason reviews are a special kind of dialogue is that they function as a socializing and normalizing practice. Lacking mentorship, reviewers inevitably learn by doing (see the contribution by Patrick Schmidt in this issue). But they do not only learn by writing reviews; they learn based on the reviews they receive. The problem is hopefully clear: if one receives reviews that make authoritative pronouncements, lack evidence or support for their opinions, or focus on copy-editing issues rather than issues of substance, one is socialized into believing that peer reviewing is a matter of expressing one’s personal opinion (without support or evidence) or commenting on commas. The dialogue between reviewers and authors matters because, in the absence of mentorship, it is how most of us learn how to do peer review.
Individual journal practices vary, but it appears (to me) to be increasingly common for journals to send individual reviewers the set of reviews for a given manuscript, either at the decision stage (accept, reject, resubmit) or upon resubmission, so that reviewers can assess the quality of revisions made in response to the initial set of reviews. I confess that I was initially fearful of this process, as it meant that other reviewers would see my reviews. (Would my review measure up to the other reviews? What if I missed something obvious that the other reviewers identified?) But I now see this practice as vital to the ongoing development of peer reviewing. It functions as a form of feedback for reviewers, who can compare their reviewing with others—hopefully recognizing when and where their own efforts have fallen short.
On Rubrics and Evaluating “Quality”
These days, several journals with which I am familiar use some form of rubric as part of the evaluation process. These were rarer when I first started peer reviewing but have become increasingly common. As in any evaluation process, rubrics can be a double-edged sword. On the one hand, rubrics help to provide a more level playing field in manuscript evaluation and offer authors a more explicit set of criteria. They help take some of the guesswork out of the equation. On the other hand, the codification and standardization endemic to rubrics risk reducing manuscripts to a formula. Codification and standardization also risk the rejection of manuscripts that do not conform to the rubric, by which I mean not the qualitative evaluation aspects but the categories themselves. The criteria are not just means of measurement; they define what is and is not considered research.
While it may be fair game to claim that each journal has its scope, one defined in some ways by the evaluation rubric it uses, this justification risks an ever-narrowing reduction in richness in the discipline. If, for example, an increasing number of journals employ a rubric system using empirical research categories such as “research questions,” “methodology,” “analysis,” and “findings,” and reviewers become more and more accustomed to these categories—ones central to social science research but not to humanities-oriented research—then over time these categories become the very definition of research. Anything that does not conform is considered either bad research or not research at all. Disciplinary approaches used in historical and philosophical research, for example, are pushed to the margins.[31] Or, consider the emergence of post-qualitative research (e.g., St. Pierre 2021), which, by definition, seeks to challenge and transcend the epistemological assumptions common to most empirical research. According to several peer review rubrics, post-qualitative research does not count as research. Because of the lack of widespread mentorship in peer reviewing, the socialization of reviewers who have only ever used rubrics risks producing a generation for whom the rubric categories are gospel. Without exposure to or awareness of the breadth of what might constitute research, the richness of research in our profession erodes over time.
A Modest Starting Place for Mentoring
At the risk of undercutting my own critique about rubrics (which I do not see as in any way bad if kept in perspective), I propose here—in the spirit of ACT (the A standing for Action), and with deep respect for the fine books that deal comprehensively with writing peer reviews—some basic guiding questions that might serve as a starting point for mentoring in authoring, reviewing, and editing. I imagine these questions, or variations of them, used in multiple ways and in multiple settings. The most obvious use may be in a graduate seminar class, but they could also serve as the basis for conversations between thesis supervisor and student, between thesis supervisors, between student authors, between established authors, between reviewers, between editors, between authors and reviewers, and between reviewers and editors—given that these questions should be considered at both the creation stage and the evaluation stage. In this sense, I consider mentorship not only vertically but horizontally. I imagine mentorship less about passing on wisdom (though that is certainly part of it) than about generating conversations that bring important issues out into the light of day rather than leaving them in some sort of black box as part of a “black art.”
- To what extent does the manuscript situate itself within an ongoing discourse in music education? How is it positioned in relation to what has come before and what lies ahead?
- To what extent does the manuscript articulate how it might be relevant to identified “stakeholders” (considered very loosely and broadly), and to what extent is the manuscript transparent about the author’s positionality and relationship to stakeholders? Who might this manuscript help and in what possible ways?
- To what extent is the manuscript clear and transparent about what it considers as source material about the world (“data” as broadly construed; concepts, theories, and ideas can be data) and how this source material has been regarded and treated in relation to ongoing discourses in music education?
- To what extent does the manuscript demonstrate an awareness of epistemology and ontology in terms of what it takes as “real” and what it claims as real for others?
- What potential does the manuscript hold for making more than a trivial contribution to the field?
These questions are intentionally broad, striving to accommodate a conception of research that, in light of AERA’s distinction between standards for empirical social science research and humanities-oriented research, is as open as possible. Admittedly, these questions demand a rich and rigorous background and understanding that may be initially beyond the grasp of some emerging (and maybe even some established) scholars. The first bullet point, for example, is not just a fancy way of saying “literature review.” If used sensitively, however, I can imagine a set of questions such as those proposed above being used as a way of opening up discussions about what it means to do research and how we might better go about making judgements about “quality” that are organic to form, genre, content, and so on. These discussions would include conversations—in the best sense of mentoring—about how we might better communicate in the peer review process so that authors, reviewers, and editors come to share a common goal of improving manuscripts in ways that are guided by ethical concerns at every level.
About the Author
Roger Mantie is Professor in the Department of Arts, Culture and Media at the University of Toronto Scarborough, with a graduate appointment at the Ontario Institute for Studies in Education. He enjoyed previous appointments at Arizona State University and Boston University. He is the author of Music, Leisure, Education: Historical and Philosophical Perspectives (2021, Oxford University Press), co-author of Education, Music, and the Social Lives of Undergraduates: Collegiate A Cappella and the Pursuit of Happiness (2020, Bloomsbury), co-editor of the Oxford Handbook of Technology and Music Education (2017) and the Oxford Handbook of Music Making and Leisure (2016), and Senior Editor of the International Journal of Community Music. Complete information at rogermantie.com
References
Barczak, Gloria, and Abbie Griffin. 2021. How to conduct an effective peer review. Edward Elgar Publishing.
Benson, Lisa, Kristina Edström, Karin Jensen, Gary Lichtenstein, Rebecca Bates, Cynthia Finelli, and Evan Ko. 2022. Equity and inclusion in peer reviewing: Grand challenges for engineering education researchers. In IEEE Frontiers in Education Conference (FIE), 1–2.
Carpenter, Mason A. 2009. Editor’s comments: Mentoring colleagues in the craft and spirit of peer review. Academy of Management Review 34 (2): 191–95.
Clift, Stephen, Kate Phillips, and Stephen Pritchard. 2021. The need for robust critique of research on social and health impacts of the arts. Cultural Trends 30 (5): 442–59. https://doi.org/10.1080/09548963.2021.1910492
Haggerty, Kevin D. 2012. How to write an anonymous peer review. The Chronicle of Higher Education. https://www.chronicle.com/article/how-to-write-an-anonymous-peer-review/
Hardwicke, Tom E., Robert T. Thibault, Jessica E. Kosie, Loukia Tzavella, Theiss Bendixen, Sarah A. Handcock, Vivian E. Köneke, and John P. A. Ioannidis. 2022. Post-publication critique at top-ranked journals across scientific disciplines: A cross-sectional assessment of policies and practice. Royal Society Open Science 9 (8): 220139. https://doi.org/10.1098/rsos.220139.
Hernandez, Lyndon V., Michael B. Wallace, Deborah E. Bowman, Stephanie Kinnan, Dominic Klyve, and Amandeep K. Shergill. 2022. Mentoring our young reviewers: Learning from our learning curve. Gastrointestinal Endoscopy 95 (1): 164–67.
Houry, Debra, Steven Green, and Michael Callaham. 2012. Does mentoring new peer reviewers improve review quality? A randomized trial. BMC Medical Education 12: 1–7.
Kirby, Alana, Daniel Garbin di Luca, and Christopher Goetz. 2021. Implementation and early outcomes of a peer reviewing education and mentoring program (2483). Neurology 96 (15). https://n.neurology.org/content/96/15_Supplement/2483.
Kuhn, Thomas. 1962. The structure of scientific revolutions. University of Chicago Press.
Lala, Anuradha, and Robert J. Mentz. 2022. The JCF reviewer mentorship program. Journal of Cardiac Failure 28 (1): 1–2.
Miksza, Peter. 2025. Forum. Journal of Research in Music Education 73 (1): 3–4. https://doi.org/10.1177/00224294251313597.
Paltridge, Brian. 2017. The discourse of peer review: Reviewing submissions to academic journals. Palgrave Macmillan.
Rodríguez-Carrio, Javier, Polina Putrik, Alexandre Sepriano, Anna Moltó, Elena Nikiphorou, Laure Gossec, Tore K. Kvien, and Sofia Ramiro. 2018. Improving the peer review skills of young rheumatologists and researchers in rheumatology: The EMEUNET peer review mentoring program. RMD Open 4 (1): e000619.
St. Pierre, Elizabeth. 2021. Post qualitative inquiry, the refusal of method, and the risk of the new. Qualitative Inquiry 27 (1): 3–9. https://doi.org/10.1177/1077800419863005
Spyns, Peter, and María-Esther Vidal. 2015. Scientific peer reviewing: Practical hints and best practices. Springer International Publishing.
Starck, J. Matthias. 2017. Scientific peer review: Guidelines for informative peer review. Springer Fachmedien Wiesbaden.
Xu, Jiayun, Kyounghae Kim, Melissa Kurtz, and Marie T. Nolan. 2016. Mentored peer reviewing for PhD faculty and students. Nurse Education Today 37: 1–2.
[1] At IJCM, the practice for special issues was for guest editors to operate outside the usual online journal submission system. This was, in part, to keep the special issue manuscripts from getting mixed up with the regular submissions, and in part because granting a guest editor access to the online journal system would involve a learning curve (as the former online system was not overly intuitive) and introduce potential ethical issues (because a guest editor would be able to view other aspects of the online management system, not just the manuscripts for the special issue). The online portal has recently changed.
[2] Fortunately, the other two reviewers felt differently. The article was published as “A Comparison of ‘Popular Music Pedagogy’ Discourse(s),” Journal of Research in Music Education 61 (3): 334–352. https://doi.org/10.1177/0022429413497235. Readers can judge for themselves whether it is methodologically sound or exhibits “anti-American bias.”
[3] See also: Lauren Kapalka Richerme, “Ode to Reviewer Two: What Could Philosophizing Be?” in Action, Criticism, and Theory for Music Education, 24 (1): 1–6. My understanding is that many journals used to solicit three reviews for each manuscript. It appears to be more common these days to solicit two—I assume due to the labour involved and the relative shortage of reviewers.
[4] The current editor of Journal of Research in Music Education, Peter Miksza, recently shared (on Facebook) his frustrations with a particularly unpleasant peer review he experienced. He subsequently published an editorial in JRME with mentoring suggestions for reviewers aimed at maintaining civility alongside high standards (Miksza 2025).
[5] MENC was the former name of the National Association for Music Education. SRIG is short for special research interest group.
[6] It should be noted that Web of Science (which appears to have taken over Publons) supports the relatively recent practice of publishing the names of reviewers who elect to be acknowledged for reviewing specific articles. This practice, which I understand as a mechanism to give credit for the unpaid labour of reviewers, entails the public acceptance of responsibility for a particular review but in so doing removes the anonymity factor—i.e., authors can see who their reviewers were. Some journals, such as those published by Frontiers, print the names of reviewers as part of the article’s official record. Frontiers is only single-blind, however (i.e., reviewers see the names of the authors).
[7] An argument could be made, I suppose, that if editors started to challenge bad reviews, it might weed out poorer reviewers. The problem is that there are not enough good reviewers to go around.
[8] Perhaps this is common knowledge, but I did not know about this scoring system. I wish I had better mentorship.
[9] In the interest of transparency, our editorial team at IJCM does keep an informal record of reviewer work, which we factor into reviewing assignments.
[10] Other than the recent practice where some journals have begun to publish the names of the reviewers of articles as part of the publishing record.
[11] There are unfortunate stories out there where researchers have fabricated data. See, for example, this disturbing story on behavioural ecologist Jonathan Pruitt: https://thewalrus.ca/a-rock-star-researcher-spun-a-web-of-lies-and-nearly-got-away-with-it/ (accessed May, 2025).
[12] I mean data in the most open-ended sense of research, including philosophical work where data might refer to what and who we choose to read.
[13] The MayDay Group, founded in 1993 based on the belief that music education was in distress (hence “mayday”; see http://www.maydaygroup.org/about-us/history), represents the outlier in the music education eco-system. Action, Criticism, and Theory for Music Education was created as an open access forum intended to respond to the MayDay Group’s “Action Ideals”—a set of action aims and beliefs intended to provide a platform to, presumably, challenge the status quo (http://www.maydaygroup.org/about-us/action-for-change-in-music-education/).
[14] Brent Talbot and I detailed our experience of a failed attempt to offer a critique of the status quo in “How Can We Change Our Habits if We Don’t Talk About Them?” Action, Criticism, and Theory for Music Education, vol. 14, no. 1, 128–53. Perhaps if we had had better mentorship, our initial effort may have been more successful.
[15] To be clear, I am drawing a distinction between authors who are critical of teaching practices generally speaking, as one often finds in ACT, and empirical studies that are openly critical of teachers who are research participants—the latter of which one almost never finds in the literature.
[16] By arms-length I am thinking of examples such as government grants with no “strings” attached.
[17] See https://musicwill.org/about/ (accessed December, 2024). Despite the fact that Music Will is not an accredited credentialing or licensing body, it describes school teachers as “our teachers.”
[18] The Journal of Popular Music Education, in which many “modern band” articles appear (and for which I serve on the editorial board), does not, at time of writing, have a disclosure statement as part of its publishing practice.
[19] See, for example, https://journals.sagepub.com/doi/10.1177/02557614231188590.
[20] I have experienced this directly. My work on a funded project was vetted, with my words altered to suit the desires of the funder. In addition, an entire section of my team’s research (over 3000 words) was removed by the funder to avoid its publication, something they felt justified in doing because they had paid for the research (even though the money was not technically theirs, but rather, funding supplied by a philanthropic donation).
[21] I am not posing this rhetorically. I am choosing to stay silent about some aspects of a particular NGO rather than risk potential consequences that might arise if I were to pursue publication about it.
[22] https://www.aare.edu.au/assets/documents/policies/AARE-Code-of-Ethics.pdf, accessed May, 2025.
[23] This document is available in Spanish, Portuguese and, for reasons that are not apparent, Ukrainian.
[24] https://www.aera.net/About-AERA/AERA-Rules-Policies/Professional-Ethics and https://www.aera.net/Publications/Standards-for-Research-Conduct, accessed May, 2025. Regrettably, the link for “AERA PRE (Peer Review Evaluation) Resources” was dead.
[25] https://www.wcrif.org/origin-and-ojectives, accessed May, 2025.
[26] https://www.singaporestatement.org/guidance/singapore-statement, https://www.singaporestatement.org/guidance/montreal-statement, https://www.singaporestatement.org/guidance/amsterdam-agenda, https://www.singaporestatement.org/guidance/hong-kong-principles, https://www.singaporestatement.org/guidance/cape-town-statement. All URLs accessed May, 2025.
[27] One is, of course, reminded of vitriolic exchanges on the internet where people, only known by their generic usernames, express themselves in ways they likely never would in a face-to-face encounter.
[28] To be clear, I am not suggesting that reviewers take on the labour of “teaching” authors how to do research or write a manuscript.
[29] For an excellent philosophical perspective on gatekeeping in music education, see Lauren Kapalka Richerme, “Uses and Abuses of Gatekeeping: A Call for Affect, Ethics, and Rigor” in Philosophy of Music Education Review 33, no. 1 (2025): 22–36, https://dx.doi.org/10.2979/pme.00020. Richerme offers a rich discussion of the ethics of peer reviewing. Regrettably, her article was published after this one was written and typeset and there was not time to incorporate more of her arguments.
[30] I have avoided discussing what might be considered the elephant in the room: the insatiable desire of neoliberal-infected institutions to quantify and measure—a situation that pressures junior faculty (and non-junior faculty seeking advancement) to achieve publication credits by any means necessary. The inevitable result is research that is rationalized according to the time and energy required and the “time to publication.” What junior faculty member is going to opt for a rigorous, time-intensive study rather than a quick-and-easy study (let alone a longitudinal study!)? Why take the time to conduct and analyze thirty interviews when you can get away with doing three? What junior faculty member is going to value the importance of the disciplinary knowledge base over their own self-interest in surviving the tenure review process? Mentorship may not solve this problem, but at least it might foster awareness of and sensitivity to the issue of not publishing trivial research.
[31] The emergence of specialized journals, such as Philosophy of Music Education Review or the Journal of Historical Research in Music Education, becomes a double-edged sword: they provide a forum for publication but also provide an excuse for other journals to claim “out of scope” for any manuscript that is philosophical or historical in nature.