Are We Ready For AGI? According to the AI Safety Clock, No.

The new AI Safety Clock launched by IMD to assess AGI safety shows that we’re 29 minutes to midnight. What does this mean for us?

The AI Safety Clock measures the risk of an uncontrolled AGI; Source: IMD

Remember the Doomsday Clock? Researchers at IMD have now developed a similar AI Safety Clock, set to 29 minutes before midnight: 29 minutes to the point where an Uncontrolled Artificial General Intelligence (UAGI) could pose a significant threat to humanity.

But what does all this even mean? 

Generative AI already significantly impacts our everyday lives—from large language models (LLMs) to social media algorithms to robots being developed to relieve the human workforce of critical tasks. So far, AI is manageable. But AI is also becoming more powerful by the day.

Governments and AI developers across the globe are now preparing for a future of AI in the form of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI). AGI differs from the GenAI we have at the moment in that it would match or even exceed human performance in most tasks.

AGI is the next great milestone expected to be achieved in AI research. Meanwhile, ASI—better than any human at virtually everything—is still far away. And nobody knows for sure when AGI will be reached. A decade ago, most AI researchers agreed that AGI would be achieved by 2075.

With every year, this estimate has come closer to the present time. 

By now, the general consensus is that AGI will arrive before 2050. And some believe it might be closer than expected, with estimates ranging from the mid-2040s to within this decade.

Whether or not researchers are being too optimistic, it’s clear that AGI lies ahead of us. However, as the AI Safety Clock shows, we are not ready for it. 

Our current, narrow AI already faces a lot of problems—discrimination, bias, privacy violations. But the problems an Uncontrolled AGI (UAGI) could present are far worse.

As one of the researchers who developed the AI Safety Clock, Michael Wade, wrote for Time: an AGI that works independently without human interference could decide to deploy military weapons or start large-scale misinformation campaigns. 

We already saw the power of AI during this year’s presidential election in the US. Deepfake photos and audio were used to misrepresent candidates and push public opinion in a particular direction.

Sound familiar? The same techniques used during the 2016 US Presidential Election are being exploited all over again—and it has become much more difficult to distinguish between what’s real and what’s fake; the line between reality and cyberspace keeps blurring.

But all these misinformation campaigns have been instigated by humans. A UAGI would do this without first having to ask for human permission; it would simply act.

AGI offers a lot of opportunities to improve the world we live in. A controlled AGI would benefit us all, but a UAGI could indeed make AI humanity’s last invention.

But let’s not be too pessimistic. 

As the past has shown, researchers have long been over-optimistic about the pace of AI development—and if we manage to build an AGI that stays safely fenced in, we will win once more.

Thank you for reading my article! If you would like to support a young writer, check out my Ko-Fi!
