Nobel Prize in Physics: how explosive is AI?
John Hopfield and Geoffrey Hinton have been awarded the Nobel Prize in Physics for their pioneering work on Artificial Intelligence. However, the scientists themselves have voiced concerns about the potential consequences of their research. Europe's press reflects.
A linguistic nuclear bomb
Paolo Benanti, consultant to the Pope, warns in La Repubblica:
“The result could be a bomb that, instead of destroying buildings or cities, acts as a cultural explosive in the linguistic form of GPTs and other generative AIs, shattering the cultural bond that allows us to live together when they are used to create rifts and fake news. Perhaps this is also the reason why Geoffrey Hinton left Google in 2023, after a decade with the company. ... When he announced his resignation he voiced concerns that AI could be used by malicious actors for harmful purposes, such as manipulating voters or winning wars, comparing his situation to that of Robert Oppenheimer.”
Stockholm is doing what it can
It's not often that a scientist who has warned against the potential consequences of his own work receives the prize, Der Tagesspiegel stresses:
“Paul Berg, who received the award in 1980 and passed away at a ripe old age last year, ultimately helped to ensure that genetic research and applications are now heavily regulated, and that people in most countries use them with an informed critical eye. For artificial intelligence, this ship has clearly long since sailed. The applications are everywhere and the leading companies are among the most highly valued on stock markets worldwide. ... With this prize, Stockholm is doing what it can to draw the public's attention not only to the opportunities but also the risks of artificial intelligence.”
Don't lose sight of the huge advances
La Stampa puts things in perspective:
“Alarm bells ringing when a new technology emerges are nothing new. They've always rung out when something potentially revolutionary comes onto the market and becomes part of our culture and social life. It happened with the car, electricity and the Internet. ... A debate based on fear cannot help but be one-sided. It runs the risk of overlooking the positive aspects. ... AI powered by deep learning algorithms – like those researched by Hinton and Hopfield – has been successfully applied in cancer screening and is already used to prevent diseases such as breast, liver and prostate cancer.”
Where's the imagination?
NRC cites a cautionary example from the Netherlands, where the book publisher VBK asks its authors to help with AI-generated translations into English:
“The problems are of an ethical and an aesthetic nature. Translations using AI do not broaden horizons, but narrow them. AI can perhaps be used to enhance human creativity, but not to replace it. Translating a text into another language is not just a matter of words; it also involves a cultural shift. AI does not recognise stylistic devices; neologisms are flattened into something recognisable. Everyone would probably agree that literature has something to do with imagination, which is precisely what AI lacks.”