Illustration by Nikita Bhor
A recent reader poll in Nature revealed that 44% of the 1,600 respondents find AI a useful, time-saving tool for writing research grant proposals, while 36% said it has helped them identify flaws in their study designs. However, 17% consider it a form of outright cheating, one that copies and infringes on the intellectual property of content freely available on the internet.
My own perspective on ChatGPT and similar tools, such as DALL·E, has swung from awe to feeling cheated by their creators, and I now agree that they breach intellectual property rights.
My views changed completely after a conversation with a fellow research scholar. I encourage readers to look at excerpts from that conversation in ‘Harnessing the Power of ChatGPT and AI.’ My key takeaway from the dialogue was that ChatGPT can indeed reduce your workload, provided it is used ethically. Such applications include editing grammar, coding, summarizing scattered thoughts on a topic, and discussing scientific questions. They require you to provide a complete idea rather than a simple prompt, allowing ChatGPT to expand coherently on your input. Isn’t this essentially like briefing a content writer: you supply all the necessary pointers and convey the entire idea clearly, rather than asking them to write freely on a topic like AI?
In the August 2023 Research Highlights, half of the research summaries were generated by ChatGPT as an experiment in comparing quality, a practice that will not continue. In the September 2023 Research Highlights of this issue, one summary was written by ChatGPT and then edited by our team. I invite our readers to share their opinions and try to identify the summary produced by ChatGPT. Be warned, though, that ChatGPT itself is of no help here, as it claims to have authored everything. I wonder who is fooling whom!
But as Noam Chomsky says, “AI models, including those based on statistical language processing, focus on performance but do not truly grasp the underlying structure of language as humans do.”
In my own experiments with ChatGPT, I found something interesting. Distressed after a difficult conversation with a colleague, I asked ChatGPT to make sense of the exchange; it not only validated my feelings but responded just as a therapist would. So, if ChatGPT picked up our biases and all the filtered information on the World Wide Web, how did it end up imbibing the essence of human empathy? I will leave this question to cognitive neuroscientists and psychologists, who may find applications for it in criminology.
Disclaimer: We have used ChatGPT to edit the grammar of our content, but not to write it.
Poorti Kathpalia, a scientist by training, is now pursuing her passion for making science fun and accessible through her sci-comm activities.