ChatGPT and co: how will AI change our lives?
Launched in November 2022, the AI-based chatbot ChatGPT can answer almost every question and is able to formulate its responses in a natural, conversational way. Whether - and in which genres - the texts generated can match those produced by humans is a matter of debate. Commentators discuss how to handle this new and powerful tool.
Humans more necessary than ever
NRC Handelsblad warns against blindly trusting ChatGPT:
“This must remain a collaboration. Human judgment, creativity and moral considerations will only gain importance with this new technology. Be critical of the content, of who owns and controls this application, and of the potential social and economic consequences: lawmakers and regulators need to be more vigilant than they were in the first days of social media, as do all those who work with it. Precisely because the answers the programme gives are so impressive, there is a danger that people will trust it blindly.”
Make the best of opportunities
Europe must not allow itself to be driven by fear, Delo urges:
“For artificial intelligence to be truly accepted as an engine of the economy, confidence in the way it functions must be built up. ... The use of AI can promote progress in a wide range of activities: in the fight against climate change, in the optimisation of industrial processes, in a more rational use of resources, in healthcare, in infrastructure, in care for the elderly. So European policy should not be guided by fear, but by the use of opportunities that arise while respecting fundamental rights.”
EU regulation is inadequate
The European Commission's artificial intelligence regulation doesn't properly address the risks posed by tools like ChatGPT, IT security experts warn in Le Monde:
“ChatGPT would only fall into the last category of risky AI, called 'limited risk': AI that interacts with humans. According to the regulation, humans only have to be warned about the use of this form of AI, without any further explanation. Is this enough to develop a critical attitude towards a new way of accessing knowledge?”
The end of writing
The Spectator fears an existential crisis for an entire profession:
“That's it. It's time to pack away your quill, your biro, and your shiny iPad: the computers will soon be here to do it better. ... The machines will come for much academic work first - essays, PhDs, boring scholarly texts (unsurprisingly it can churn these out right now). Fanfic is instantly doomed, as are self-published novels. Next will be low-level journalism ... then high-level journalism will go, along with genre fiction, history, biography, screenplays. ... 5,000 years of the written human word, and 500 years of people making a life, a career, and even fame out of those same human words, are quite abruptly coming to an end.”
A game changer in schools
ChatGPT will pose a major challenge for teachers in particular, the Irish Examiner predicts:
“In the UK, lecturers have been urged to review forms of course assessment in the context of this new tool, which has the potential to produce credible and high-quality content with minimal human input. ... Teachers must decide whether they will harness this technology and find different forms of assessment, or spend their time trying to identify transgressors. No one pursued a career in education to do that.”
Bots no good in oral exams
El País suggests a way for schools to avoid cheating with AI:
“Any teacher can now find out if a student's work is just cut-and-paste. All it takes is a Google search to detect plagiarism. ... All this simplifies access to information, but makes teaching and learning more difficult. Eventually, artificial intelligence will be incorporated into the classroom, but we will have to know how to handle it. It is possible that in the end innovation will paradoxically lead us back to orality. It will be the only way to assess students. Let them use all the tools at their disposal to search for information, but they should be able to explain the result. Personally, and in their own words.”
Man is not a machine
The danger lies not in AI itself but in the debate about it, warns philosopher Thomas Robert in Le Temps:
“The priority now is to regain control of the narrative used by the proponents of AI. The danger is that they will convince us that intelligence is nothing more than the accumulation of knowledge, or that creativity is reducible to the most likely answer. In other words, the greatest danger is not in the advances of AI, but in the accompanying discourse that tends towards a puny definition of humans. Three centuries after the fact, Silicon Valley programmers are once again tinkering with [Julien Offray de La Mettrie's theory] of man as a machine. It is up to us whether we succumb to it or not.”
Negotiate international rules
Ethical rules are important but difficult to implement, economist Inês Domingos points out in Observador:
“The lack of critical thinking in machines is forcing AI developers to integrate ethical boundaries into their programmes, which is a growing concern for public policy, not least because of the discrepancies between ethical values in different regions of the world. The EU produced a set of guidelines for the ethical use of AI in 2018, as did China in 2019. ... But the implementation of these principles, and the values of the two regions, are very different, especially with regard to privacy.”