Rapid advancements in AI: How to stay ahead of the game?

AI advancements are rapid. The question is not if, but rather how and when AI will be adopted in education and research. We have to stay ahead of the game by promoting critical thinking in our students, before the Dead Internet Theory becomes a reality.

AI is all around us

Since the launch of ChatGPT in November 2022, the use of AI has exploded. The technology itself was not distinctly innovative; what was new was that it was publicly available, allowing a large audience to interact with and experience AI for the first time. ChatGPT gained over a million users within five days of its launch, marking a turning point not in AI technology per se, but in the awareness of AI and its integration into everyday tasks. That was also the point at which I became more aware of all the AI technology that surrounds us.

AI is used by postal services to automate package sorting and optimize routes; by supermarkets to predict stock levels and apply dynamic pricing; and by airports to handle baggage, predict maintenance, manage crowds, and predict the required number and type of meals per flight. Our neighbors at the Leiden University Medical Center (LUMC) already use a myriad of AI applications (e.g., for triage at admission, in radiology, and in various decision-making processes), and the LUMC’s strategic plan for 2024-2028 emphasizes the integration of AI across all its operations, aiming to expand AI use even further in the near future.

In sum, we are already – whether we are aware of it or not – surrounded and influenced by AI on a daily basis. It is therefore not a question of if, but rather how and when AI will be adopted in our research and education in the future.

Promoting critical thinking: Shit in = Shit out

What has surprised me is how slowly we are responding to this. This year, a questionnaire was sent out to Leiden University’s employees about the use of AI by students. The survey included questions about how we could detect the use of AI or how we would counteract it. I believe we’re asking the wrong questions here. We should not punish the use of AI. If there is one thing I have learned during the last few years of using large language models (LLMs), it is ‘Shit in = shit out’. In my view, the focus should be on teaching students how to use AI responsibly. This includes asking LLMs like ChatGPT to provide you with sources, then checking those sources yourself, thinking about how to fine-tune your questions, and obtaining information from other sources to cross-check. If we teach students to use AI in a thoughtful way, we will actually promote critical thinking, which is one of the skills we feared losing when these LLMs were launched.
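As an aside, for readers who want to try this source-checking habit programmatically rather than in the chat window, the snippet below is a minimal sketch. It assumes the openai Python package and an API key in the OPENAI_API_KEY environment variable; the model name and the wording of the instruction are my own illustrative choices, not a prescribed workflow, and the returned sources still have to be verified by hand.

```python
# Minimal sketch: ask an LLM to list its sources so they can be checked by hand.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

question = "What is the replication crisis in psychology?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Answer concisely and list the sources you relied on, "
                "so each one can be verified independently."
            ),
        },
        {"role": "user", "content": question},
    ],
)

# The listed sources may be fabricated ('shit in = shit out'),
# so look each one up yourself before citing it.
print(response.choices[0].message.content)
```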

Of course, teaching students about responsible and thoughtful use of AI will be challenging. We need to be aware of the limitations. For example, LLMs aim to provide answers even to ill-posed questions. They may fabricate data, hallucinate events, make up references, etc. In addition, LLMs may reproduce biases that were present in the human data they were trained on.

In light of these limitations, using AI can be frustrating. For example, when trying to create a schematic figure for a review paper, I spent hours refining the prompt, but the figure still looked far from what I envisioned, so I ended up making it manually after all. Similarly, LLMs sometimes switched between programming languages, forgot about issues we had solved a few messages earlier and reintroduced errors into my code, or reached opposite conclusions when I formulated my question slightly differently. In other words, getting something useful out of AI is not as straightforward as it may seem. We and our students should be aware of these limitations, but should also aim to stay ahead of the game: AI is constantly developing, so many of the issues we currently encounter will likely be solved in the near future.

Dead Internet Theory

These advancements are extremely rapid. A year ago, it was relatively easy to differentiate between AI and non-AI images: AI-generated images often included strange shadows, jumbled faces, or extra fingers or hands. I recently did an “AI or not” test and scored only 62% correct when trying to differentiate between pictures that were AI-generated and pictures that were real. The same holds true for AI-written messages on social media, which often get tons of likes or upvotes, as AI is extremely good at cracking the algorithm. How can we continue to fact-check when we have reached the stage where the Dead Internet Theory – i.e., the conspiracy theory that the internet mainly consists of automatically generated content – has become reality?

This is extremely worrisome because, as recent fake news events around the world have shown, the same principle applies to us humans: shit in = shit out. For now, I hope we continue to emphasize critical thinking in the next generation. I, for one, have already downloaded Wikipedia (it’s less than 100 GB), so that in 20 years I can still fact-check my statements!
