The Future of GPT: Ensuring Safety and Aligning with Society's Interests
As ChatGPT continues to make waves in the AI community, there is much speculation about the future of GPT. Greg Brockman, President and Co-founder of OpenAI, recently shared his insights on the matter in a tweet, and it’s worth taking a closer look at what he had to say.
➡️ Safety, safety, safety
Safety is undoubtedly the elephant in the room. From preventing misuse and self-harm to mitigating bias and even existential risk, it is not a topic to be taken lightly.
In fact, a recent survey of AI researchers found that roughly half of respondents put the chance of humans going extinct from our inability to control AI at 10% or higher. To put it simply: if half of all airplane engineers believed there was a 10% chance of everyone on board dying, would you get on that plane?
One example of GPT-4's remarkable capabilities is its ability to conduct scientific research autonomously and propose novel chemical compounds in an emergent way. While this sounds impressive, it also raises concerns about misuse: GPT-4 could, in principle, be turned to producing harmful chemical compounds, like a modern-day "Mr. White", or worse.
Even though these examples may not be fully representative, they do highlight the need to take safety concerns seriously. It is also worth asking whether these risks are immediate, or whether exaggerating them puts us at risk of stifling genuine technological breakthroughs.
Brockman emphasises that alignment and governance are essential to ensure that the advancement of AI benefits society as a whole. OpenAI has made efforts to make GPT safer throughout its training process. However, it is critical to remain vigilant and continue monitoring the development of AI. As Brockman reminds us, "It's a special opportunity and obligation for us all to be alive at this time, to have a chance to design the future together".
➡️ A continuum of incrementally-better AIs
Future releases of GPT will happen in a continuous fashion, by deploying successive checkpoints of a given training run rather than through big-bang releases like previous versions. Letting people interact with each checkpoint means errors can be caught early and safety can be improved incrementally. Each subsequent checkpoint of a run is also a little more capable than the last, much like how one notices new nuances when rereading a book. Sébastien Bubeck's experiment of asking GPT-4 to draw a unicorn at successive stages of training, shown in his talk on Sparks of AGI, is a fascinating example of this growth.
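Neither the tweet nor OpenAI spells out the exact mechanics of such a rollout, but the pattern is easy to sketch. Below is a minimal, hypothetical Python illustration of gating each checkpoint of a run on safety evals before release; all names (`Checkpoint`, `passes_safety_evals`, `incremental_rollout`) and the threshold value are invented for illustration, not OpenAI's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    """A snapshot of model weights taken partway through one training run."""
    step: int
    safety_score: float  # e.g. pass rate on a red-teaming eval suite (hypothetical)

def passes_safety_evals(ckpt: Checkpoint, threshold: float = 0.95) -> bool:
    # Gate each checkpoint on safety evals before it reaches users.
    return ckpt.safety_score >= threshold

def incremental_rollout(checkpoints: list[Checkpoint]) -> None:
    # Release checkpoints one by one instead of a single big-bang launch,
    # holding back any that fail the safety gate for further review.
    for ckpt in checkpoints:
        if passes_safety_evals(ckpt):
            print(f"deploying checkpoint at step {ckpt.step}")
        else:
            print(f"holding back checkpoint at step {ckpt.step} for review")

if __name__ == "__main__":
    # Later checkpoints of the same run tend to be more capable;
    # each one is evaluated and released (or held) individually.
    run = [Checkpoint(10_000, 0.91), Checkpoint(20_000, 0.96), Checkpoint(30_000, 0.97)]
    incremental_rollout(run)
```

The appeal of this design is that feedback from each deployed checkpoint can shape how the next one is evaluated and released, rather than discovering problems all at once at launch.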
➡️ Confident but incorrect predictions
Predicting the behaviour of a complex system like AI is often a futile exercise. Even AI experts, the people most familiar with exponential curves, can make confidently wrong predictions about AI progress, as highlighted in The A.I. Dilemma by Tristan Harris and Aza Raskin. Therefore, it's essential to remain curious, humble, and cautious when dealing with AI.