28 December 2024
British-Canadian computer scientist Prof. Geoffrey Hinton, often hailed as the “godfather” of artificial intelligence, has revised his estimate of the risk of AI triggering human extinction within the next three decades. In an interview with BBC Radio 4’s Today programme, Hinton said he now believes the odds of such a catastrophic outcome are between 10% and 20%, up from his previously stated figure of around 10%.
Hinton’s revised assessment comes as he acknowledges that the pace of AI development is “much faster” than he initially anticipated. He noted that there are very few examples of a more intelligent entity being controlled by a less intelligent one, a parent being influenced by a child among the rare exceptions, and emphasized the vast intelligence gap between humans and future highly advanced AI systems.
Compared to such systems, it is humans who will be the less capable party: Hinton likened people to three-year-old toddlers standing next to a far more intelligent adult, with future AI in the adult’s role.
Hinton’s concerns are rooted in the risks of unconstrained AI development. He has long advocated stricter regulation, citing the danger of “bad actors” exploiting these systems to harm others. The creation of artificial general intelligence (AGI) – systems that surpass human intelligence – poses a particular threat, as such systems may evade human control.
Reflecting on his earlier predictions about AI’s trajectory, Hinton notes that he underestimated the pace of progress: most experts in the field now expect AGI to be developed within the next 20 years, far sooner than he once thought possible. This acceleration has led him to stress the need for government regulation of AI.
The case for regulation, in his view, is clear: while companies may prioritize profit over safety, governments can enforce stricter guidelines and compel research into AI’s risks. In this way, regulation could mitigate the dangers posed by unconstrained AI development.