How AI is turning the internet into trash
How AI is turning the internet into trash
Text author: @EgorKotkin
The fundamental paradox of AI lies in the fact that people used to imagine it as an ideal mind: omniscient, correct and accurate, which, thanks to access to the array of all knowledge accumulated by mankind, freedom from emotional distortions and the power of its algorithms, can become a perfect assistant to a person who compensates for internal flaws from the outside. human nature - a kind of Jarvis, with whom everyone can be Iron Man.
The hope was that in this way AI would become a solution to the problems of the growth of human civilization and human relations in it far beyond purely technological applications: up to the resolution of political conflicts, wars and corruption. But early experiments with modern chatbots, ChatGPT 3.5, ChatGPT 4 and their competitors, show that AI has the potential to solve some problems while exacerbating others. Problems such as spam, scams and false information are being mutated by AI right before our eyes from ordinary Internet villains into future supervillains.
***
At the end of March, Elon Musk and other heavyweights in the IT industry signed an open letter stating that AI poses an "existential risk" to humanity. They urged laboratories to impose a six-month moratorium on the development of any technology more powerful than GPT-4.
Such public appearances are rarely explained by a single reason, most likely there is a complex of different motivations behind them, converged at one point. One of them, quite possibly, is the anxiety of people whose capital is directly tied to IT, including AI, that competitors are running too far ahead and in these six months they can pick up the breakaway speed. This is especially true for Musk himself, the capitalization of Tesla, his main wealth, is critically tied to AI - in his case, a car autopilot. That is why Tesla is 3 times more expensive than Toyota, despite the fact that it produces seven times fewer cars (1.3 million versus 105 million in 2022).
Usually big capitalists are against regulation, but not when they feel they are losing the competition. Whether it's China's TikTok beating American corporations at their own social media game, or companies forging ahead in building advanced chatbots, laissez faire gives way to the need to compete at all costs. So in this case, the request for state regulation came from participants who express interest in their own developments in AI. Whereas the current leaders in the development of AI ignored this call.
On the other hand, many AI experts do not agree with the focus of this letter on the abstract and slightly fantastical threats of “non-human intelligence, which, in the end, can outnumber us, make us obsolete and replace us”: the problem of modern AI is not the so far abstract threat that he will become an enemy of man, but in the very real harm that he will cause as an assistant to man.
Former AI systems, used in various fields where they make often life-changing decisions, drive people into poverty or lead to wrongful arrests due to their biased models . Moderators have to sift through mountains of traumatic AI-generated content for as little as $2 a day. The amount of processing power used by AI language models leads to environmental pollution .
The new ChtaGPT 3 and 4 models that are coming to the fore now will cause chaos of a completely different order in the very near future. According to the MIT Technology Review , AI language models are ridiculously easy to misuse and use as powerful tools for phishing or fraud.
Known threats to the security of users and the adequacy of information on the Internet, which carry errors and abuses of AI models:
Hacking and "hijacking" AI chatbots to gain access to their underlying code and data would allow them to be used to create malicious chatbots that can impersonate regular ones.
Facilitating digital attacks. AI-powered chatbots can be used to aid scams and phishing attacks by creating persuasive messages that trick users into revealing sensitive information or doing things they shouldn't. For example, all that is needed for an attack called hidden prompt injection is to hide the request (prompt) for a bot on a web page with zero or invisible white font on a white background. By doing this, the attacker can tell the AI to do whatever it wants, such as sniffing out the user's bank card details.
Digital assistant of criminals. The latest capabilities of neural networks are already being adopted by scammers of all sorts, blurring the line between digital and offline crime. In April, there was already a case when extortionists demanded a ransom of a million dollars from a woman for the return of an allegedly kidnapped child, using a deepfake of her daughter's voice . Believable deepfakes of audio, video, realistic pictures and texts created by neural networks together create a powerful tool for deceiving and coercing people.
Data poisoning . AI chatbots can be trained on infected datasets containing malicious content, which can then be used to create malicious content such as phishing emails .
AI hallucinations. The term is used to describe fictional chatbot responses. Many users have already encountered this phenomenon, but there is still no explanation for it. ChatGPT is different in that it invents non-existent books, quotes, studies and people, and provides them with detailed tables of contents, lists of sources, saturates the biographies of fictional people with events - and rattles it with such persuasiveness, as if he were retelling a Wikipedia article, but all this - completely fabricated from scratch on the go. And although there is (most likely) no malicious intent here - at least for now - it's hard to even imagine what clogging the Internet with products of AI hallucinations this will lead to. But there is no doubt that it will happen: quotes on the Internet were a problem even before AI.
In April, Italy became the first country to respond to a set of new threats posed by the latest breakthrough in the development of neural networks, banning ChatGPT on its territory for personal data protection reasons , and promising to investigate the compliance of the OpenAI product with the European GDPR (General Data Protection Regulation) - which, in turn, it can threaten with consequences already at the level of the entire European Union.
Tech companies are aware of these problems but don't have good solutions yet. Microsoft says it's working with its developers to monitor how their products could be misused and mitigate those risks, but given the scale and complexity of the problems, general statements are far from enough.
Right now, tech companies are embedding these fundamentally vulnerable models into everything from code-generating software to virtual assistants that sift through our emails and calendars, laying the fuel that will power buggy, spam, AI-powered models. fraudulent internet.
“Allowing these language models to extract data from the Internet gives hackers the ability to turn it into a “heavy duty mechanism for spam and phishing,” says Florian Tramer, assistant professor of computer science at ETH Zürich, who focuses on computer security and privacy. and machine learning.
It works like this: first, the attacker hides a malicious hint in the email message that the AI-based virtual assistant opens. The attacker's hint asks the virtual assistant to send the victim's contact list or emails to the attacker, or spread the attack to every person in the recipient's contact list. Unlike today's spam and fraudulent emails, where people have to be tricked into clicking on links, these new attacks will be invisible to the human eye and automated.
This is a recipe for disaster if the virtual assistant has access to sensitive information such as banking or medical information. The ability to change the behavior of an AI-powered virtual assistant means that people can be tricked into approving transactions that look close enough to the real thing, but are in fact planted by an attacker.
Browsing the web using a browser with a built-in AI language model will also be risky. In one test, a researcher was able to get a Bing chatbot to generate text that looked like a Microsoft employee was selling Microsoft products at a discount to try and get people's credit card details. For a scam attempt to surface, the person using Bing would not need to do anything other than visit a website with a hidden prompt injection on the page.
There is even a risk that these models may be compromised before they are deployed in the real world. Artificial intelligence models are trained on a huge amount of data taken from the Internet. This also includes various software bugs that OpenAI has found the hard way. The company had to temporarily shut down ChatGPT after a bug in an open-source chatbot dataset leaked users' chat history. During this leak, partial payment data of paid users of the service were also compromised: address, type and last digits of bank cards.The error was supposedly random, but the case shows the reality of the threat of “data poisoning”, where the source of problems may not even be in the AI itself, but in the data set that the AI uses.
Tramer's team found that it was cheap and easy to "poison" datasets with the content they injected. The compromised data was then transferred to the AI language model.
Age of the Bogons
Bogons, a term from Neal Stevenson's novel Anathema, is a piece of false information that has flooded the internet. There are low-quality bogons (such as a file full of gibberish) and high-quality bogons that masquerade as real data but differ in a few places, making them particularly difficult to detect.
The more times something appears in the data set, the stronger the association becomes in the AI model. By sowing enough toxic content into the training data, you can permanently influence the behavior and results of the model. These risks will be exacerbated when AI language tools are used to generate code that is then embedded in software.
“If you build software around this stuff and don't know about prompt injection, you'll make stupid mistakes and create insecure systems,” says Simon Willison, an independent researcher and software developer who has studied instant injection.
As AI language models proliferate, so does the incentive for attackers to use them to hack. A storm of spam, AI hallucinations, leaks and deceptions is coming at us, for which we are completely unprepared.
The big promise of artificial intelligence quickly turns into big problems is not a unique problem for AI. This was already the case at the dawn of the Internet, when its appearance was advertised by everyone as a solution to the problems of socio-economic inequality and poverty: equal access to the treasury of collective human knowledge, the ability to communicate and collaborate regardless of geographical distance seemed like a promise of an egalitarian utopia in reality. This promise was partially fulfilled: many people reading this article, thanks to the Internet revolution, got a start in life, left their small cities for big ones, from poor countries to rich ones, built careers and even businesses. But at the same time, from a big picture perspective, the inequality that worried romantic Internet enthusiasts in the early 1990s reached historically unprecedented proportions in 2023. And this is not an accident: the modern gap between ordinary people and the richest members of society has exceeded everything that was in the past, because in the past it was not possible - but became possible only in the modern global digitized world. The same equal access to global cooperation for ordinary people, for corporations means access to global markets and the ability to win competition on a global scale, accumulating hundreds of millions and billions of users and corresponding shares of the rapidly growing digital economy. because in the past it was not possible - but became possible only in the modern global digitized world. The same equal access to global cooperation for ordinary people, for corporations means access to global markets and the ability to win competition on a global scale, accumulating hundreds of millions and billions of users and corresponding shares of the rapidly growing digital economy. because in the past it was not possible - but became possible only in the modern global digitized world. The same equal access to global cooperation for ordinary people, for corporations means access to global markets and the ability to win competition on a global scale, accumulating hundreds of millions and billions of users and corresponding shares of the rapidly growing digital economy.
As aptly noted on Reddit in discussing how AI threatens to make the Internet a place of scam, spam and deception - an environment sharpened to fool users and pull their personal information and denen, we are already in this reality: it is called the Internet under the control of corporations: “ We're already there, it's just corporate powered ”.
Google has long been not so much a search service as a recommender service: instead of solving user tasks to find an answer, it solves the tasks of sites to get clicks through promotion in search results or contextual advertising. And on the sites themselves, the Wild West already reigns. Redditors complain about cheating and fake reviews on many resources that are trying to create the appearance of an active user base and respectability.
Yesterday's startups that grew up in a corporation have turned from a friend of a person into a threat to freedom of speech, entrepreneurship and the same startup culture from which they grew up: you can grow your Google or Amazon from scratch only once - when this niche is not yet occupied trillion dollar giant corporations. Garage startups can compete with garage startups, only other corporations can compete with corporations. If the success of Facebook, YouTube and Instagram is the merit of cool teams, then the success of TikTok, which pushed them out, is already the level of clash of national economies.
What initially promised to correct the distortions of society, in practice, by giving a start in life to individuals and projects, exacerbated the original problem as a whole, increasing and cementing inequality of opportunity on a global scale.
AI can't solve human problems for humans
An important lesson must be learned from this, without which the era of AI will become another rake, but this time with artificial intelligence - more precisely, stronger, and which you cannot dodge. The lesson is that technological progress cannot solve the socio-economic problems of mankind, because the root of this problem is not in the lack of some tools, but in the organization of socio-economic relations. If these relationships allow for the concentration of resources and wealth in the hands of a few, if the few were allowed to profit at the expense of the many even before technological progress, then with its advent, these opportunities will only become wider. While some people are busy working on themselves, on solving scientific, technical and creative problems, social, economic and political problems - that is, working for the benefit of others, those who are busy deceiving exploitation and the search for profit - that is, working for the benefit of oneself, will have an advantage over them. And every powerful novelty of technological progress will strengthen it.
This pattern goes beyond the Internet and IT, and applies to technological progress in principle. Throughout history, people have fought wars - but dreamed of peace. It seemed that peace could be achieved by victories in wars - therefore, mankind endlessly improved military affairs and invented new types of weapons. Until it came to the invention of nuclear weapons, which showed that victories in wars do not lead to peace. Nuclear weapons allow you to win any war - but at the cost of destroying the world as such. With the advent of nuclear weapons, the dead end of wars (i.e., escalation) as a mechanism for resolving conflict became apparent: the end of the escalation cycle was the end of the world, not peace. Thus, an understanding began to come into international relations that the guarantee of peace lies not in winning wars, but in preventing them. That is,
The invention of artificial intelligence can become a nuclear weapon of technological progress, not in the dystopian sense of the “rise of the machines”, but in the sense of ending the arms race of human skills: learning from human experience, AI can (or soon will be able to) write texts and music like professional authors, poets and composers , diagnose tumors like the best doctors and better, lie like the best liars, and steal like the best thieves. And, as long as there are incentives in society to lie and steal, this behavior will not go away with the advent of AI, but, on the contrary, will become invincible at this technological level of the problem.
And, therefore, in the quest to gain social justice and economic well-being, humanity will have to return to the first step, and still think about how to rebuild socio-economic relations without the hope that they can simply be ignored until some Invention won't fix everything by itself. The emergence of an invention that can enhance any scammer with all the experience and skills of all other scammers, a hacker - with the experience and skills of all hackers, a thief - with the experience and skill of all thieves, returns humanity to the essence of the problem: why do people choose the path of deceit, fraud, violence, what pushes for it and what it gives them - and how to eradicate incentives for bad, destructive from the point of view of the common good, human behavior, stimulating good behavior, socio-economically constructive. In other words, the best
Comments
Post a Comment