In the spirit of my post last week, let us continue figuring out the role of AI in scientific writing through a Gedankenexperiment. Where we left off was the use of AI as an assistant — a virtual editor, if you like — to suggest improvements to one’s text instead of churning out autogenerated content. Think Grammarly++, or similar. This is, at least to me, perfectly fine. However, I would appreciate it if the text still retained its voice and human touch, lest everything sound exactly the same.
Now, fast forward to the future. If people still write science 25 years from now, how will they use AI tools? What are those tools capable of?
Here is where I feel science — at least natural science — might diverge from more creative forms of writing, as the purpose of written science is ultimately to transmit information. It might even become desirable to have AI write up our results.
Consider the following: suppose I have carried out an experiment and want to write a paper on its results. I feed my plots, perhaps together with a few lines of text about background, impact, and so on, to my virtual writing assistant, and off it goes, returning with a complete manuscript. As my virtual assistant has been taught to write in my voice, the manuscript actually sounds like me. I read the manuscript, find it factually correct, and submit it to a journal.
Now, if the information in this paper is factually correct and it is written in a way that is appreciated by human readers, how should we feel about this? Is this ethical or unethical? Is this a future we’d like to see or not?
For this to be ethical, it should be done openly and the use of AI acknowledged — which is, of course, very easy to do. Maybe this will become common: maybe most papers will be written by AIs fed with original research results.
Beyond ethics, is this good or bad? That, I guess, depends. If all papers sound the same, it is bad. But what if the papers are indistinguishable from human writing, considering that everyone trains their own AI to write in their voice? What might be lost here is the finesse of argumentation, nuances, deep thoughts, and all those things that make famous writers/academics famous. On the other hand, perhaps this loss would be compensated by far fewer crappy, incomprehensible papers… just maybe.
It may also be that written scientific papers will become obsolete, or at least obsolete as stand-alone products (this is already happening with all the Jupyter notebooks, SI data sets, and so on). Some fields (e.g., biomedicine) already have paper formats that leave very little room for creative writing; these are mostly just data containers.
Perhaps scientific papers will in the end not be structured for human readers, but for other AIs that can then better pick up their arguments to propose new theories, experiments, and so on — in other words, replace us, scientists. But I have my doubts about this, as I at least hope that science requires creativity beyond mere statistics of words. Let us hope that humans can still out-weird AIs in the years to come (is that even a word?)!
To be continued, I think…