JOSEPH MICHAEL ABRAMO
University of Connecticut, Storrs (USA)
July 2025
Published in Action, Criticism, and Theory for Music Education 24 (4): 132–39. https://doi.org/10.22176/act24.4.132
Guest editors Mantie and Ruthmann have compiled a thought-provoking collection by astute authors on editorship in the twenty-first century, noting challenges and possible avenues forward. In response, I want to situate some themes in this issue within the changing landscape of technology. I, like others, believe that generative artificial intelligence (GenAI) and other technologies require those involved in publishing to adapt their approaches (Naddaf 2025). I write as an author, reviewer, and editor. I have served as a reviewer on eight editorial boards, and I held the position of Senior Editor of Visions of Research in Music Education, an open-access journal. I currently serve as Editor of the Bulletin of the Council for Research in Music Education, along with Dr. Patrick Schmidt, who has written a piece in this issue.
A theme I detect throughout this issue is what Mantie and Ruthmann in the editorial identify as conservativism within peer review. Writing about practitioner-based publications, Kaschub and Strand (this issue) note, “peer reviews may often fail to recognize and promote excellence and/or innovation, rejecting innovative (even radically new) ideas if they can only think about the manuscript’s topic more conventionally.” I believe this extends to more research-focused publications as well. As the authors in this special issue argue, this conservativism is the result of a lack of transparency (Schmidt), including a lack of education in doctoral programs (Schmidt; Mantie) and the privileging of certain epistemologies (Schmidt) and forms of presentation (Kaschub and Strand). This creates a “closed loop” (Kaschub and Strand), where those who have succeeded in the system are deemed worthy of serving as reviewers, thus enforcing normative practices.
Newer technologies now have the capacity to automate, expand, and accelerate the closed loop of conservativism. The hallmark of current GenAI is the large language model, which takes large data sets and uses that data to create new material that is distinctly different yet similar (Narayanan and Kapoor 2024; Wiggins and Jones 2023). As some have noted (Bender et al. 2021), this process has the effect of recreating and amplifying norms. GenAI uses these large data sets to produce more data, which is then placed within the data set it draws upon to create new material. The result is a feedback loop, where the data get bigger and increasingly similar. Artifacts, such as writing, images, podcasts, teaching, or anything else that humans and GenAI can create, will increasingly look the same in a GenAI world.
Now, perhaps more than ever, is the time to reimagine research as a product of something that cannot be created without humans. It might require writers to eschew common research formulae, such as the post-positivistic form (background, literature review, method, findings, and discussion), for newer, experimental ventures. Via large language models, GenAI can create “more of the same”; it can create writing in predictable forms better than humans because, in a matter of seconds, it draws on language data sets larger than humans could consume in a lifetime of reading. Perhaps, when the creation of research in common forms can be outsourced to machines, the evaluation of human-made research might prize creativity. This does not mean that humans should not use GenAI in their artistic and scholarly output, just that research communities might begin to ask: What is the value added of having a human create or do anything? What can they do that a machine cannot?
Another theme in this issue is how the peer review process can become adversarial. As Aróstegui (this issue) notes, some authors “do not embrace a conception of reviewing-as-dialogue.” In the editorial to this issue, Mantie and Ruthmann quote Koner and Gee (2024), who report an author’s view of the peer-review process: “a large amount of suggested revisions … don’t actually make a huge difference” (13). Some authors do not find that the process improves their work. On the other side, reviewers might become frustrated with some authors’ perceived recalcitrance. As Mantie mentions, the infamous “Reviewer 2” finds fault and looks to reject all submissions that come their way, using the review process to sadistically shame people and work out their own personal demons. Reviewer 2s turn a potentially collaborative and constructive process into something less positive.
I think it is important to situate these adversarial positionings within systemic issues rather than blame them on stubborn authors and Reviewer 2s. As Aróstegui notes at the beginning of his article, the system is set up so that review is not a dialogue—it is a process where authors must make the changes or risk rejection. Rejection can have serious consequences, including for securing employment and receiving tenure. Similarly, as Schmidt notes, reviewers labor for free while large companies, like Sage and Taylor & Francis, profit.
Despite peer review issues being systemic, they can feel personal. Similar to what Mantie (this issue) notes in his article, I have also felt that reviewers have misunderstood my submissions. And, like him, I am afraid to admit that some authors have probably felt I misunderstood the papers of theirs that I reviewed. As an author, reviewer, and editor, I have seen these misunderstandings play out through the politics of citation. I have seen reviewers be inconsistent about what counts as sufficient evidence to support a claim or idea. I have seen hostile reviewers essentially run interesting and innovative submissions into the ground through repeated demands to cite more. Some reviewers require that authors’ every step and claim be “supported” with citation and evidence. Conversely, I have seen claims that fit into existing paradigms, written without justification, pass without comment from reviewers.
As an author, I have seen this inconsistency in the difference in reviewers’ comments when I write about identity politics or class. As Mark Fisher (2013) notes, “The petit bourgeoisie which dominates the academy … has all kinds of subtle deflections and pre-emptions which prevent the topic [of class] even coming up, and then, if it does come up, they make one think it is a terrible impertinence, a breach of etiquette, to raise it” (n.p.). It appears that writing about class is “a terrible impertinence” to some, and one of the “subtle deflections” that some reviewers employ in response to that perceived impertinence is “show me the evidence.” Conversely, when writing about other equally important issues of inequity, such as race, sexuality, disability, and gender, I have been afforded much more leniency to make claims without citation or support. To me, this reveals a perhaps odd conservativism, one where the once-radical discourses of equity have ossified into a set of orthodoxies.
I am, of course, apprehensive to write the paragraph above because it might appear woefully unself-aware and biased. We may not be the best judges of our own writing, and the review process is there in part to account for that. I also know that a common rule of academic writing is that bold claims require robust evidence and justification, and reasonable readers could take umbrage at the previous paragraph. Also, as an author, I am probably prone to under-citing. However, my position is a product of a nagging feeling that has increased with the proliferation of information and the more widespread use of GenAI. Fisher (2014/2022) noted that technology, for more than two decades now, has made vast bodies of knowledge available to the average consumer. “In conditions of digital recall, loss is itself lost” (2); information “disappearing” has become rarer. Everything is or can be documented on the internet. Journal volumes and books at one time disappeared. Where journals once hid in a library somewhere (if not destroyed altogether) and could only be accessed by going to a physical location, studies now live on forever, available at any computer; loss itself is lost.
Those involved in publishing might contend with the loss of loss when it comes to citation as well as what counts as evidence and quality. I have struggled to conceptualize my role as an author, reviewer, and editor in a GenAI-ridden, post-truth world. Why should I synthesize research and cite sources when GenAI can do it better and there is so much information that any list of citations will never fully capture the completeness of thought? If citation is there to lend evidence that a statement is true, how am I to contend with the fact that there is some piece of quality information out there to support any claim? How am I, as a reviewer and editor, supposed to put a stamp on “quality” when information is so readily available, and the public goes where they want to validate their already-held perspectives? Why am I participating in a process that takes months if not years, slowing down the spread of information, when information via other media disseminates so rapidly? This last question seems particularly pressing for journals such as ACT and others whose focus is on critical discourse. As the types of critiques—antiracism, anticolonialism, queer theory, etc.—that have been central to the frameworks of articles in ACT for decades have been adopted in the mainstream, the difference between critical scholarship and “think-piece” blogs and traditional news has blurred. While academic pieces take months or years to become public, think pieces are sometimes disseminated in a matter of hours. In this landscape, the protracted process of peer review seems insufficient to create impact.
Many of the questions and concerns of the previous paragraph are often answered with pro-peer review logic: reviewers are there to make sure that claims are true and that citations are not cherry-picked to make unfounded claims. The truth requires deliberation and should not be rushed. These are the strengths of peer review. However, this logic is based on old forms of dissemination, which assume that researchers have access to relatively small bodies of knowledge that the general public does not. In the past, dialogue transpired within and through journals, which only “experts” read. Studies were published for researchers, and other researchers responded to these studies with further publications within the journals. Aróstegui (this issue) provides evidence of how publications in music education have increased in the last decade. No longer is there a closed loop of experts conversing with themselves for themselves in a small number of journals. These closed systems have opened up as more journals and the internet provide many people the ability to access information (even if much of it is behind a paywall).
Aróstegui makes a strong argument not to abandon our pursuit of the (scientific) truth, and I too am not arguing for abandonment. Instead, I am suggesting that we need to adapt to the changing landscape of epistemology that technology plays a part in creating. Our beliefs about quality research and standards of argument and evidence have not evolved with how information is accessed, by whom, and how quickly people need that information.
Technology’s influence resides not only in epistemology and the speed of publication but also in the form publication takes. As Kaschub and Strand (this issue) note, “Reviewers [in practitioner-based writings] may vote to reject manuscripts with colloquial writing even when they contain content that would benefit teachers with similar experiences as the author.” I believe we need to expand that insightful observation about tone of writing to the media through which we choose to disseminate ideas. Again, I turn to Fisher (2009); while he notes how technology has influenced how his university students consume readings, I believe this is applicable to the broader public, including practicing teachers:
Ask students to read for more than a couple of sentences and many—and these are A-level students mind you—will protest that they can’t do it. The most frequent complaint teachers hear is that it’s boring. It is not so much the content of the written material that is at issue here; it is the act of reading itself that is deemed to be ‘boring’. What we are facing here is … the mismatch between a post-literate ‘New Flesh’ that is ‘too wired to concentrate’ and the confining, concentrational logics of decaying disciplinary systems. To be bored simply means to be removed from the communicative sensation-stimulus matrix of texting, YouTube and fast food; to be denied, for a moment, the constant flow of sugary gratification on demand. Some students want Nietzsche in the same way that they want a hamburger; they fail to grasp—and the logic of the consumer system encourages this misapprehension—that the indigestibility, the difficulty is Nietzsche. (22–23)
Like Fisher, and I believe many academics, I like the difficulty of texts; I chose an academic life because I delight in getting inside writings’ multiple meanings, ambiguities, and tensions. However, I believe academics must acknowledge that these desires are not those of most people. Rather than take on the sense of disdain that I believe Fisher displays here, I think it more productive to acknowledge that people consume media differently than in the past. Kaschub and Strand (this issue) make a strong case for practitioners having more say in the content of articles in practitioner journals. My suspicion is that if we asked this same constituency which forms they would like to receive that content in, written texts would not be at the top of the list. It appears that print is no longer the preferred form of media consumption for the population in general or for music practitioners (Simpson 2023). Increasingly, teachers are going to social media for professional development (O’Leary and Bannerman 2025). “Edfluencers” create podcasts and TikTok and Instagram accounts that provide unreviewed content (O’Leary 2023), and this seems to resonate with teachers more than the journal articles scholars write.
Again, some might use pro-peer review logic here to argue against these media: They have not been reviewed, and so their quality is unknown. However, we might acknowledge the possibility that practicing teachers prefer hamburgers to Nietzsche, so to speak. If we in academic publishing question that preference and the new literacies that practicing teachers possess, construing them as inferior to “serious” research and consumption through written media, we risk not reaching the constituencies we purportedly aim to influence.
These are the challenges of editorship moving forward: aiming for innovation in an environment where GenAI will amplify conservativism and conventional forms; addressing the systemic issues that can make the process adversarial; redefining research quality in a time of increased automation; and imagining paradigms that are nimble and take forms that will reach a broad constituency of researchers and practitioners. If editors face these changes, they can help research evolve and turn these challenges into possibilities. The pieces in this issue provide us with questions and answers to move forward.
References
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021, March. On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), 610–23. https://doi.org/10.1145/3442188.3445922
Fisher, Mark. 2009. Capitalist realism: Is there no alternative? John Hunt Publishing.
Fisher, Mark. 2013, November 22. Exiting the vampire castle. The North Star. https://www.opendemocracy.net/en/opendemocracyuk/exiting-vampirecastle/
Fisher, Mark. 2014/2022. Ghosts of my life: Writings on depression, hauntology and lost futures. John Hunt Publishing.
Koner, Karen, and Jennifer Gee. 2024. Publishing preparation, experiences, and expectations of music education faculty in higher education. Journal of Research in Music Education. Advance online publication. https://doi.org/10.1177/00224294241285323
Naddaf, Miryam. 2025, March 26. AI is transforming peer review—and many scientists are worried. Nature. https://www.nature.com/articles/d41586-025-00894-7
Narayanan, Arvind, and Sayash Kapoor. 2024. AI snake oil: What artificial intelligence can do, what it can’t, and how to tell the difference. Princeton University Press.
O’Leary, Emmett J. 2023. Music education on YouTube and the challenges of platformization. Action, Criticism, and Theory for Music Education 22 (4): 14–43. https://doi.org/10.22176/act22.4.14
O’Leary, Emmett J., and Julie K. Bannerman. 2025. Online curriculum marketplaces and music education: A critical analysis of music activities on TeachersPayTeachers.com. International Journal of Music Education 43 (1): 39–53. https://doi.org/10.1177/02557614241307242
Simpson, Rhiannon. 2023. Delete the PDF and start again? Exploring the potential for innovative dissemination methods of music education scholarship. Action, Criticism, and Theory for Music Education 22 (4): 159–84. https://doi.org/10.22176/act22.4.160
Wiggins, Chris, and Matthew L. Jones. 2023. How data happened: A history from the age of reason to the age of algorithms. W. W. Norton and Company.