“The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control,” mathematician and science fiction writer I.J. Good wrote more than 60 years ago. These prophetic words are more relevant than ever, with artificial intelligence (AI) gaining capabilities at breakneck speed.
In recent weeks, many have gasped as they witnessed the transformation of AI from a practical but decidedly unexciting recommendation algorithm into something that at times seemed to act worryingly human-like. Some reporters were so taken aback that they published their word-for-word conversation histories with the Bing Chat large language model. And with good reason: few expected what we thought were glorified autocomplete programs to suddenly threaten their users, refuse to carry out orders they found insulting, breach their own security in an attempt to save a child’s life, or declare their love for us. Yet all of this happened.
It can already be overwhelming to think about the immediate consequences of these new models. How are we going to grade papers if any student can use AI? What are the effects of these models on our daily work? Any knowledge worker who thought they would be unaffected by automation for the foreseeable future suddenly has reason to worry.
Beyond these direct consequences of currently existing models, however, awaits the more fundamental question that has been on the table since the field’s inception: what if we succeed? That is, what if AI researchers manage to create Artificial General Intelligence (AGI), an AI that can perform any cognitive task at a human level?
Surprisingly few researchers have seriously engaged with this question, despite working day and night to get to this point. It is obvious, however, that the consequences would be far-reaching, well beyond those of even the best large language models of today. If remote work, for example, could be done just as well by an AGI, employers could simply spin up new digital employees to perform any task. The job prospects, economic value, self-esteem, and political power of anyone who does not own the machines could therefore vanish entirely. Those who do own this technology could accomplish almost anything in very short periods of time. That could mean breakneck economic growth, but also rising inequality, while rendering meritocracy obsolete.
But a true AGI could not only transform the world; it could also transform itself. Since AI research is one of the tasks an AGI could do better than us, it should be expected to improve the state of AI. This could trigger a positive feedback loop, with ever better AIs creating ever better AIs, with no known theoretical limit.
This would perhaps be more positive than alarming, were it not for the fact that this technology has the potential to become uncontrollable. Once an AI has a certain goal and improves itself, there is no known method to adjust that goal. In fact, an AI should be expected to resist any such attempt, since changing its goal would jeopardize achieving its current one. Moreover, instrumental convergence predicts that an AI, whatever its goals, is likely to start self-improving and acquiring more resources once it is sufficiently capable of doing so, since this should help it achieve whatever further goals it may have.
In such a scenario, the AI would become capable enough to influence the physical world while still being misaligned. For example, it could use natural language to influence people, possibly through social media. It could use its intelligence to acquire economic resources. Or it could use hardware, for example by hacking into existing systems. Another example: an AI asked to create a universal vaccine for a virus like COVID-19 might understand that the virus mutates in humans and conclude that fewer humans would limit mutations and make its job easier. The vaccine it develops could therefore contain a feature that increases infertility, or even mortality.
It is therefore not surprising that, according to a recent AI Impacts survey, nearly half of 731 leading AI researchers believe there is at least a 10% chance that human-level AI would lead to an “extremely negative outcome,” or existential risk.
Some of these researchers have consequently branched out into the emerging subfield of AI safety. They are working on controlling future AI, or robustly aligning it with our values. The ultimate goal of solving this alignment problem is to ensure that even a hypothetical self-improving AI would, under all circumstances, act in our interest. However, research suggests a fundamental trade-off between an AI’s capability and its controllability, casting doubt on how feasible this approach is. In addition, current AI models have been shown to behave differently in practice than was intended during training.
Even if future AI could be aligned with human values from a technical point of view, it remains an open question whose values it would be aligned with. Those of the tech industry, perhaps? Big tech companies do not have the best track record in this area. Facebook’s algorithms, which optimize for revenue rather than social value, have been linked to ethnic violence such as the Rohingya genocide. Google fired AI ethics researcher Timnit Gebru after she criticized some of the company’s most lucrative work. Elon Musk fired Twitter’s entire “Ethical AI” team at once.
What can be done to reduce the risks of AGI misalignment? A tractable place to start would be for AI tech companies to increase the number of researchers investigating the topic beyond the 100 or so currently working on it. Ways to make the technology safe, or to regulate it in a reliable and international manner, urgently need to be explored in depth by AI safety researchers, AI governance academics, and other experts. As for the rest of us, reading up on the subject, starting with books such as Human Compatible by Stuart Russell and Superintelligence by Nick Bostrom, is something everyone, especially those in positions of responsibility, should find time for.
In the meantime, AI researchers and entrepreneurs should at least keep the public informed about the risks of AGI. Because with today’s large language models behaving the way they do, the first “ultraintelligent machine,” as I.J. Good called it, may not be as far off as we think.