Former President Obama warns that ‘disruptive’ artificial intelligence may require rethinking jobs and the economy
This week The Verge's podcast Decoder interviewed former US President Barack Obama to discuss "AI, free speech, and the future of the internet."
Obama warns that future copyright disputes are only part of a larger problem. "If AI turns out to be as pervasive and as powerful as its proponents expect — and I have to say the more I look into it, the more I think it's going to be that disruptive — we're going to have to think not just about intellectual property; we're going to have to think about jobs and the economy differently."
Specific issues could include the length of the work week and the fact that health insurance coverage is currently tied to employment, but it goes further than that:
The broader question is what happens when 10% of existing jobs can eventually be performed by a large language model or some other variant of AI? Will we have to reexamine how we educate our kids and what jobs are going to be available...?
The fact of the matter is that during my presidency, I think there was a little bit of naivete, where people would say, you know, "The answer to lifting people out of poverty and making sure they have high enough wages is we're going to retrain them and we're going to educate them, and they should all become coders, because that's the future." Well, if AI's coding better than all but the very best coders? If ChatGPT can generate a research memo better than a third- or fourth-year associate — maybe not the partner, who's got a particular expertise or judgment? — now what are you telling young people coming up?
While Obama believes in the transformative potential of AI, "We have to be a little more intentional about how our democracies interact with what is primarily being generated out of the private sector. What rules of the road are we setting up, and how can we make sure we maximize the good and maybe minimize some of the bad?"
Obama believes the impact of AI will be a global problem, one that may require "cross-border frameworks, standards, and norms." (He expressed hope that governments can educate the public on the idea that AI is "a tool, not a buddy.") During the 44-minute interview, Obama also predicted that AI will eventually force a "more robust" public conversation about the rules needed for social media — and that at least some of that pressure could come from how consumers interact with companies. (Obama also argues there will still be a market for products that don't just show you what you want to see.)
"One of Obama's worries is that the government needs insight and expertise to properly regulate AI," The Verge's editor-in-chief wrote in an article about the interview, "and you'll hear him make the pitch that people with that expertise should do a tour of duty in government to make sure we get these things right."
You'll also hear me get excited about a case called Red Lion Broadcasting Co. v. FCC, a 1969 Supreme Court decision that said the government could impose something called the fairness doctrine on radio and television broadcasters because the public owns the airwaves and can thus set requirements for how they're used. There's no similar framework for cable television or the internet, which don't use public airwaves, and that makes regulating them much more challenging, if not impossible. Obama says he disagrees with the idea that social networks are so-called "common carriers" that should distribute all information equally.
Obama also praised the White House's executive order on AI issued last month, a 100-page document he described as important as "the beginning of building out a framework."
We don't know all the problems that are going to arise out of this. We don't know all the promising potential of AI, but we're starting to put together the foundations for what we hope will be a smart framework for dealing with it. The safety protocols and the testing regimens may not be where they need to be yet. I think it's entirely appropriate for us to plant a flag and say, "All right, frontier companies, you need to disclose what your safety protocols are to make sure that we don't have rogue programs going off and hacking into our financial system," for example. Tell us what tests you're using. Make sure that we have some independent verification that right now this stuff is working.
But that framework can't be a fixed framework. These models are developing so quickly that oversight and any regulatory framework will have to be flexible, and it's going to have to be nimble.