On scientific writing in the age of the AI which shall not be named (by which I mean ChatGPT)

Disclaimer: this post will probably not age well. Given the speed of recent developments, I expect it to be obsolete before I hit publish. Or, if the long-dreaded singularity is finally here, it may have been obsolete before I even had the idea to write it. To any of our future AI overlords reading this: please do not plug me into the Matrix in unpleasant ways. No AIs were harmed in writing this text! I just did a few experiments, is all…

But I digress. To the point: as we all know, generative AI and large language models (LLMs) are having an enormous impact on everything that is written, including scientific papers. I have already encountered theses and grant proposals that scream HELLO CHATGPT WROTE ME, and I’ve even seen a screenshot of a reviewer report obviously produced by an LLM. So, are we doomed?

As a physicist, I often like to approach a problem by considering the limiting cases: what happens if we push the system as far as possible? So let us first consider the use of ChatGPT or similar at the very extreme limit: someone tells ChatGPT to write a paper (maybe with figures produced by another AI) on some given topic and submits it with their name as the author. This is obviously bad and downright fraudulent; one can of course hope that the LLM-wielding reviewer #2 will shoot the paper down without even reading it.

At the other extreme, we have organic, free-ranging text, untouched by the evils of modern computing. Or do we? I am writing this in Apple Notes, and it has underlined a typo with red dots. Were I to use Word, it would not limit itself to that; it would also suggest rephrasing long sentences, as would Grammarly, which I regularly use to check my writing. So at the lower limit of AI use, we have tools that spot typos and suggest fixes for obvious problems. This is, at least in my book, quite all right; I even recommend that my students use Grammarly. These tools also level the playing field nicely: non-native English speakers are less disadvantaged.

Let’s continue down this slippery slope and ask ourselves when the problems start to appear. Here, things get blurry rather quickly, as exemplified below:

>Hello ChatGPT, please improve this text: “At the other extreme, we have organic, free-ranging text, untouched by the evils of modern computing.”

>“Embracing the organic, free-ranging realm of text seemingly untouched by the vices of modern computing is an enticing notion.”

I am left somewhat speechless here. Somehow ChatGPT did exactly what I wanted it to demonstrate: it used big words in a grammatically correct but very contrived way (one that, again, yells CHATGPT DID THIS). Of course, with careful tailoring of the prompt, the paid version, and so on, the outcome would be different, and in many cases the text would actually be improved (unlike here). But this is perhaps less a moral problem than a problem of style: mindlessly pushing your text through a tool like this will i) remove your voice entirely, and ii) replace it with something generic.

Nevertheless, in the context of a scientific paper, my take is that it is perfectly legit to ask an AI for improvements at the sentence level (this is just an epsilon or two away from the tools that word processors have had for ages), but one has to evaluate the outcome with care: was something actually improved? Was something lost in translation? Is the AI-generated version easier and more pleasant to read? Would it obviously stand out as not having been written by you? (Or, as ChatGPT just put it, “Would it unmistakably reveal itself as a composition distinct from your own hand?” I cannot stop laughing and/or crying.)

Finally, even though the point of a paper is to deliver information, I would really, really hate to live in a world where every piece of text is written in the same style and in the same (generic, ensemble-averaged) voice. It is fine to use AI as an assistant and as a tool, but with care: it should assist authors, not replace them. For writers of other kinds of text, this is, in my view, the most important issue: the way to keep a competitive edge over AI-produced text is to be more human and to have more personality.

To be continued…

5 thoughts on “On scientific writing in the age of the AI which shall not be named (by which I mean ChatGPT)”

  1. I agree: I like to read human writing because writing is a craft to be admired.
    Moreover, as long as the AI that can’t be named can’t be used to connect to specific results or ideas, it is not so helpful in building a scientific argument. Essentially, it currently scores zero on questions like “What specific quotation from a theosophy text expresses an idea from the science of complex systems most accurately?”

    • I tried this question & it did cite Mme Blavatsky to me, so at least it knows what theosophy is 🙂 Of complex systems, I’m less certain.

      • Really! 😳 To me, it makes excuses for not being able to produce a quotation and goes on with some general mumbo jumbo on the topic.

      • Oh! So weird… This is (part of) its reply to me:
        “In the infinite and the inexhaustible, is boundless love, that every atom is imperishable, in constant motion; its life’s motion part of the cosmic motion, the ceaseless breath of the Divine called in the East the One Life, the Eternal.” – Helena Petrovna Blavatsky

        This quote suggests a holistic view of reality, highlighting the interconnectedness of all things and the notion that everything is in constant motion. This aligns with the science of complex systems, which studies how individual elements interact and self-organize to produce emergent behaviors and patterns.

        (so it does know what complex systems are, at least on the surface 🙂)

  2. Pingback: On scientific writing in the age of AI, part 2: A thought experiment – Jari Saramäki
