Warning of potential risks, an OpenAI co-founder suggests AGI could arrive sooner than expected

The race for Artificial General Intelligence (AGI) is heating up. OpenAI’s CEO, Sam Altman, recently caused a stir by advocating for aggressive AGI development, regardless of the financial burden. This comes on the heels of advancements like ChatGPT, which sparked renewed discussions about AGI’s potential. While some, like AI pioneer Geoffrey Hinton, caution about potential dangers, others believe AGI can be harnessed for good, with proper safeguards.

OpenAI co-founder John Schulman predicts a rapid arrival of artificial general intelligence (AGI) and urges tech companies to collaborate on safety measures before deploying this powerful technology.

A rapid and unexpected arrival of AGI would necessitate prioritizing safety measures. We might need to slow down development and deployment to ensure we can handle it responsibly. A deeper understanding of its workings and potential impact is essential. Our current knowledge base is simply not sufficient for such a powerful technology.


Schulman explained that even when companies train more advanced versions of this technology, limitations remain: they cannot guarantee the systems will always be safe or perfectly trained. Companies therefore need to be supremely cautious and mindful of the scale at which they deploy the technology.

Schulman offered a surprising timeline for the arrival of AGI, suggesting it is closer than many anticipate. While experts have traditionally predicted a timeframe of five to ten years, Schulman believes it could happen within the next two to three years.

Given the potential for rapid progress, Schulman stressed the urgency of industry coordination. He argued that companies need to come together and establish reasonable limitations on both deploying AGI and further research on even more advanced versions. Without such collaboration, a dangerous “arms race” could erupt, with each company prioritizing staying ahead of the competition at the expense of safety. To mitigate this risk, Schulman proposes some form of coordinated effort among the major players involved in AGI development.


Schulman outlined an ideal scenario for AI development. He envisions companies gradually releasing improved AI systems with a relentless focus on safety. This measured approach would allow for close monitoring and intervention if progress takes a concerning turn. In his words, “We’d be releasing improvements incrementally, each one building on the last while prioritizing safety. If things ever seem to be getting out of hand, we could hit the brakes and slow things down.” This cautious, collaborative approach is Schulman’s hope for the future of AI.
