
Changes in U.S. Academia?

As Claude Code took off this year, even in U.S. academia there finally seems to be a visible trend of faculty taking the usefulness of AI seriously. Professor Pedro H. C. Sant'Anna at Emory University shared his own workflow (Sant'Anna), and Professor Chris Blattman at the University of Chicago Harris School has continued to share examples of automation (Blattman). Professor Blattman even retweeted an "i just talk to chatgpt" meme, so he seems to consume AI discourse quite actively.

Where My Interest Started

I became interested in research automation with AI while preparing materials for fall 2026 admissions: I was writing a Writing Sample and SOP in LaTeX using VS Code. At the time, the various AI CLIs were only just arriving, and I remember it as the point when discussions like "how are we supposed to use this?" were only beginning. I was not confident in my English grammar, so I had corrected drafts over and over with GPT or Grammarly, but it was exhausting that a person still had to polish everything manually every single time. Yet if you just type gemini in the terminal and say, "Please fix the grammar in this file," it reads a .tex file hundreds of lines long and corrects the grammar on its own. Watching that, I began to think it was only a matter of time before AI could not just fix grammar but write a paper from machine-readable data and literature files alone, as long as the researcher set up the logic properly.

Public Precedents

In fact, it is not as if there are no precedents of AI-written papers getting accepted. Joshua Gans discussed, through a case study based on his own idea, what effects AI might have on research production (Gans), and Vincent Gregoire at HEC Montreal posted a "Vibe Research" write-up on his website (Gregoire). Honestly, I suspect there are already quite a few accepted papers that simply never disclosed it; there is almost no benefit to disclosing.

The Center of the Debate: Hall & Westwood

Anyway, with faculty getting better and better at using AI, I wanted to leave a record because the debate around literature review in particular stood out to me. It is hard to identify the exact starting point, but by my own sense, the center of the debate is Professor Andy Hall at Stanford and Professor Sean Westwood at Dartmouth.

Reactions

  • Professor Melissa Perreault criticized it strongly, in a tone roughly like, "the future that person wants is lazy researchers who cannot communicate, have no critical thinking, and ten times more garbage papers" (Perreault).
  • Dr. Rex Douglass pushed back, arguing that "when you listen to scholars saying what LLMs cannot do, in the end it is a skill issue. A weak model, without guidance, might fail to satisfy a strange request in one shot, but most of the workflow can still be automated even now" (Douglass).
  • Professor Craig Gallagher left a skeptical reaction along the lines of, "LLMs easily abandon principles they held five minutes ago if you complain, and you want me to believe such a model will do peer review?" (Gallagher).
  • Political theorist Matthew Cole sharply remarked that this is basically quantitative social scientists confessing that they are producing trash they themselves do not want to write and do not want to read (Cole).
  • Professor Itai Sher at the University of Massachusetts Amherst continues to maintain a skeptical view of "research done by AI" (Sher1, Sher2, Sher3).

My Thoughts

  • Professor Westwood put it strongly, but as Dr. Douglass said, if appropriate data exist and the user can filter the output, and if books and paper PDFs are organized into a machine-readable form, I think a significant part of the literature review section can indeed be automated. Data analysis is already a specialty of AI, so overall my position leans toward agreement: we need to consider which other parts of the paper might be replaced as well.
  • As for the claim some people make that all AI-assisted research is low quality: even someone as mediocre as me can now produce something that looks like a plausible publication if the data are there, so of course there may well be more mediocre working papers and outputs. But many people do not seem to consider the case where a professor or researcher who writes well uses AI in a detailed, field-specific way, from summarizing prior research to "vibe" coding for quantitative work, and raises efficiency carefully. As Professor Alexander Kustov's blog also notes, there are already plenty of cases where people cite papers loosely after reading only the abstract.

Follow-up Discussion

Professor Westwood later posted follow-up tweets, perhaps because he was conscious of the reaction on Bluesky (follow-up 1, follow-up 2). What kind of place is Bluesky, anyway?

Is AI Already Better at Research Than Many Professors? (added 26/03/04)

  • Professor Alexander Kustov of the Keough School of Global Affairs at the University of Notre Dame posted the provocative tweet, "AI can already do social science research better than most professors with PhDs." In his blog he writes that "much of the opposition to AI is status protection dressed up as principle," and he specifically points to scholars on Bluesky as being in denial about what is happening.
    • As an aside, the author said the blog post itself was written 100% with Claude, yet Pangram, which had been promoted on Twitter as especially good at detecting AI-written text, apparently classified it as "Fully Human Written".
    • In the end, as Professor Kyle Saunders says, AI is not so much ruining education as forcing universities to confront how much they have relied on fragile proxy indicators of thinking (Saunders).
  • Professor Scott Cunningham at Baylor University also posted a piece discussing the realistic future of journal publishing (Research and Publishing Are Now Two Different Things). In the Korean case, faculty hiring relies even more heavily on quantitative indicators than in the United States, so this is not somebody else's problem for us either.
  • Finally, a commentary also appeared from the Brookings Institution dealing with this state of affairs in academia (the train has left the station).

Follow-up to the Follow-up (added 26/03/05)

  • Professor Alexander Kustov’s follow-up post
  • Occidental College professor Igor Logvinenko’s Twitter article, Why Academia Can’t Think Clearly About AI
  • There are in fact many more opinions besides these, but as an individual student it seems difficult for me to follow up on all of them.

Discussion in Korean Academia?

In Korean academia, it may be useful to look especially to Professor Yoo In-tae, who mainly works in digital humanities, and Professor Ahn Sang-jin, both of whom have raised these issues relatively early and quite often.
