Eric Schmidt: AI Could Worsen Our Misinformation Problem - The Atlantic

For years now, artificial intelligence has been hailed as both a savior and a destroyer. The technology really can make our lives easier, letting us summon our phones with a “Hey, Siri” and (more importantly) assisting doctors on the operating table. But as any science-fiction reader knows, AI is not an unmitigated good: It can be prone to the same racial biases as humans, and, as in the case of self-driving cars, it can be forced to make murky split-second decisions that determine who lives and who dies. Like it or not, AI is only going to become a more pervasive force: We’re in a “watershed moment” for the technology, says Eric Schmidt, the former Google CEO.

Schmidt is a longtime fixture in a tech industry that seems constantly to be in a state of upheaval. He was the first software manager at Sun Microsystems, in the 1980s, and the CEO of the former software giant Novell in the ’90s. He joined Google as CEO in 2001, then was the company’s executive chairman from 2011 until 2017. Since leaving Google, Schmidt has made AI his focus: In 2019, he wrote in The Atlantic about the need to prepare for the AI boom, along with his co-authors Henry Kissinger, the former secretary of state, and the MIT dean Daniel Huttenlocher. The trio have followed up that story with The Age of AI, a book about how AI will transform human experience, coming out in November.

During The Atlantic Festival today, Schmidt spoke with Adrienne LaFrance, The Atlantic’s executive editor, about the future of AI and how to prepare for it. Their conversation below has been lightly edited for length and clarity.


Adrienne LaFrance: For decades, we’ve been waiting for artificial intelligence to catch up with our imagination. And now the technology really is advancing incredibly rapidly. How different do you think life will be a year from now, five years from now, 10 years from now as a result?

Eric Schmidt: Well, we think it’s going to be quite different. And the reason we think it’s quite different is that this is the beginning of a new epoch of human civilization. Frankly, when the Renaissance happened 500 years ago, we collectively developed the notion of reason, the notion that things did not just descend from God, that independent actors could criticize each other. By the same argument, the arrival of a different intelligence, one that is not human but is similar to us, is such a watershed moment.

LaFrance: So you’re invoking the Scientific Revolution, the Age of Enlightenment. We’ve all just lived through a triple revolution with the internet, smartphones, and the social web—will the AI revolution be bigger than all three of those?

Schmidt: I do think it will be bigger, and the authors collectively disagree on how positive or negative it will be, but we articulate these points in the book. If you look back 20 years, people were talking about social networks. No one had any idea that social networks would become so important and would shape political discourse, elections, and how people are treated. They would give a voice to people who are underrepresented, but also to people we don’t necessarily want to hear from. And we didn’t, at the time, understand the implications of putting everyone on the same network. We need to think now about what happens when artificial intelligence is co-resident with us in the world. It lives with us; it watches us; it helps us, maybe interferes with us occasionally. We don’t really know.

I’ll give you a good example: If you imagine a child born today, you give the child a baby toy or a bear, and that bear is AI-enabled. And every year the child gets a better toy. Every year the bear gets smarter, and in a decade, the child and the bear, who are best friends, are watching television, and the bear says, “I don’t really like this television show.” And the kid says, “Yeah, I agree with you.” What do we think when humans and these AI systems become best friends? Do we lose our communications and our warmth among humans, or does it get stronger? We don’t know.

LaFrance: You’re posing it as a question: What do we think of this? I can tell you, I find it mildly creepy, the bear-best-friend example. Would you want your children or your grandchildren to be raised with an AI best friend? Does that concern you, or is it exciting to you?

Schmidt: I think a lot depends on how it actually works. If that AI best friend is also the best teacher in the world, the best mentor in the world, the warmest possible version of what a child needs, then I’m all for it. But let’s imagine that this bear has a hidden bug in it, inserted by an evil person, such that the bear is slightly racist, which is a value that I do not want my child exposed to. That’s not okay. Or let’s reverse the scenario and imagine that I am in fact a racist, which I’m not, and I want my bear to be racist. And I want to program the racist bear. We haven’t figured out what the rules are. These are systems that will be interacting with humans every day; they will have an outsize impact on people’s experience of daily life.

The information space, the social world that we all live in now, is so governed by online media and, frankly, by many of the attributes of clickbait and all those other kinds of things. It’s really disturbing. Now, as you know, Dr. Kissinger wrote an article in 2018 entitled “How the Enlightenment Ends,” and we followed up with an article in The Atlantic called “The Metamorphosis.” We’ve been working on these ideas for a while, trying to frame them in a context that is not about the technology, which hopefully we will describe accurately, but about how society will react to this, and it’s coming fast. Dr. Kissinger says his concern is that we technologists are building this stuff without a historical record and without an understanding of the deep issues in humanity that we are touching with our technology.

LaFrance: You chaired the 2021 National Security Commission on AI, and one of its recommendations was that President Biden reject a ban on autonomous weapons systems. That recommendation runs against an argument that many of your peers have made. They fear a world in which AI weapons would make battle decisions that we can’t understand or explain. Talk a little bit about your position.

Schmidt: When I speak about AI, most of my friends think I’m talking about a robot that has gone amok and a woman scientist who slays the robot. But we’re not talking about that. What we’re talking about are information systems and knowledge systems that are with us every day. And today it’s fair to say that those systems are imprecise. They’re dynamic, in that they change all the time. They’re emergent, in that when you combine them, you can get unexpected behaviors, some good, some bad. And they’re capable of learning. Well, it’s the combination of those four points that makes these AI systems both problematic and very powerful. So if you then apply them to a life-critical decision—and I can think of no more life-critical decision than the launch of a very, very deadly weapon—how would we know that the computer was making the right decision? We can’t prove it. And if it makes mistakes, it’s a mistake that’s intolerable.

For people who think this is an easy problem, let me give you another example. You’re on a ship and there’s a missile coming in that you can’t see. And the AI system is telling you about this and says, “You will be dead if you don’t press this button within 20 seconds.” Now, the rule in the military is that the human needs to be in control, and there is no question the human has to press the button. What are the odds of that human not pressing the button? Pretty low. The notion of the compression of time in the context of these life-critical decisions, especially in military defense, is really important. If you go back to the original RAND study, it’s about automatic defense. And if you go back to the movie with Peter Sellers where they threaten to launch a nuclear weapon and the doomsday machine automatically launches one in return, which is retaliatory—that’s a good proxy for what we’re talking about. I’ll give you another example: We’re collectively, as an industry, working on general intelligence, and general intelligence will emerge in the next 10 or 20 years. And depending on how you define it, those systems will be enormously powerful, and those systems will be enormously dangerous in the wrong hands. So they’re going to have to be treated in a way similar to nuclear weapons, but with a new doctrine, because we don’t understand proliferation and we don’t understand how to do containment with these new weapons systems.

LaFrance: I want to go back to that in a minute, the question of what happens to our theories of deterrence with these new kinds of weapons. But first the geopolitical context: What is the U.S. position in all of this, relative to what we imagine China might do or Russia might do? Talk about the extent to which your position that we should reject a ban on autonomous weapons is about making sure that the United States is strong relative to other superpowers.

Schmidt: The core argument that we’re making is that there will be a small number of countries that will master this technology, and it’s important that there be discussions about how to limit them. It’s important that those discussions be bilateral; the number of countries that will be able to field systems at this level will be a mere handful. So a reasonable expectation is that these kinds of systems, when they eventually emerge, will be under the control of large national governments such as the United States and China, and maybe a few others. We feel quite strongly that the discussions about where the limits and lines should be need to start now. We don’t want a situation where there has to be an explosion before there’s a treaty. If you go back to 1945, with the use of nuclear weapons in World War II, which ended the war, there was no treaty at the time banning or limiting their use. We spent the next 15 years developing the concepts of containment, mutually assured destruction, balance of power, and so forth. We’re going to need to rediscover those in this new age of dangerous weapons.

LaFrance: With nuclear weapons, so far, thankfully deterrence has worked. But when you look at AI weapons, you have the possibility for targeted drone killings carried out with extreme precision and accuracy, and not necessarily by nations but by terrorists. The nature of these attacks could be very hard to trace. This breaks our entire sense of how war and deterrence might work. So you say we need a doctrine or framework, but where do we start?

Schmidt: Well, many people believe that the best way to solve this problem is to create the equivalent of a Los Alamos, a center where the United States invests its many great people and its money to build and advance in this area. The question is: How long, how much of an advantage would that give us? In 1945 we did this, and we had an advantage of about four years before the Soviet Union was able to do something similar. We estimated in my AI commission that the gap between the United States and China is already far shorter, and the reason is that software and the knowledge behind these algorithms proliferate very quickly. With nuclear weapons, we have had the capability to limit the proliferation of the actual fissionable material, even though the underlying knowledge of how to build such a weapon has been broadly known for at least 50 years pretty much everywhere. We don’t have the equivalent of uranium-235 to restrict in software; we have to find other ways to avoid the proliferation of the tools you just described into the hands of every evil player.

LaFrance: You’ve talked before about how AI is here, we can’t uninvent it, so we have to figure out how to live with it. When you think about the arc of technological history, is there a technology that you wish we could uninvent, one where humans would be better off if it never existed?

Schmidt: Certainly nuclear weapons and biological weapons would be where we would start. If you just measure by deaths from conflict, famine, things like that, I would find a way to reverse whatever technology enabled them. I think that the impact of AI, in the way you’re describing, will be felt much more broadly and will be much harder to identify. I’ll give you a good example. AI systems are particularly good at finding patterns and finding exceptions, and they’re particularly good at looking at you and your life pattern and coming up with things that are useful for you. So, for example, if we know your search history, we can suggest your next search. If we know your ad history, we can suggest the next ad. Those are, in my view, benign uses of technology within a controlled space. But now let’s imagine that we build an AI system that looks at human behaviors and figures out my weaknesses. Let’s say, for example, that I’m susceptible to the first thing I’m told: Whatever you say first, I believe. This is called anchoring bias. And let’s imagine that the computer system can determine how to anchor me on a falsehood. Then the current problem of misinformation will be child’s play compared with many people trying to manipulate the impression I have of the world around me using this technology. This is a problem to be solved. We’re all online. All of these systems will be learning what we care about, and we could then be manipulated. That will have as big an impact as the discussions we’re having about, for example, conflict.
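To make that pattern-finding concrete: the “suggest your next search” behavior Schmidt describes can be approximated, at its crudest, by counting which query tends to follow which in a user’s history. The Python sketch below is a hypothetical, deliberately minimal illustration of that idea, not how Google’s systems actually work; the function names and sample history are invented.

```python
from collections import Counter, defaultdict

def build_suggester(history):
    """Count, for each query, which query most often follows it."""
    followers = defaultdict(Counter)
    for prev_query, next_query in zip(history, history[1:]):
        followers[prev_query][next_query] += 1
    return followers

def suggest_next(followers, last_query):
    """Return the most common follow-up to the last query, or None."""
    if last_query not in followers:
        return None
    return followers[last_query].most_common(1)[0][0]

# Invented sample history for illustration only.
history = [
    "flights to tokyo", "tokyo hotels",
    "flights to tokyo", "tokyo hotels", "tokyo weather",
]
model = build_suggester(history)
print(suggest_next(model, "flights to tokyo"))  # -> "tokyo hotels"
```

Real recommenders replace these raw counts with learned models and ranking signals, but the core move, predicting your next action from your past actions, is exactly the capability Schmidt flags as both useful and manipulable.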

LaFrance: I’m glad you brought up misinformation. When you were Google’s CEO, in 2006, Google acquired YouTube. It was a very, very different internet then, and now YouTube is one of the places—among many other platforms, including Google itself, for that matter—where we see the threat of extremism and misinformation. What do you wish you had known then that you know now?

Schmidt: I can tell you, in the 2000s, our view, and I think I speak for the company during that time, was the American view that the solution to bad information is more information. And so we took a position that was quite aggressively against removing content, even if it was harmful. And we viewed that as both the right thing technologically and the right thing culturally. We were criticized heavily in other countries for this. To me, the most interesting thing about COVID online has been that platforms like Twitter and Facebook, which had resisted any form of takedown, have worked aggressively, or have claimed to be working aggressively, to take down COVID misinformation. COVID is an example of something where falsehoods are on the other side of the line. I can tell you today, I’m not involved with it—I left Google—but YouTube has a team that makes these decisions every day and tries to figure out where the line is. You get to decide where you think that line is. Many people I know believe that the lying and misinformation that go on on social media are on the other side of the line and should be prohibited, but I don’t know how to prohibit them in a systematic way. And I furthermore believe that the people who are spreading misinformation will get access to these tools, which will make it even easier for them to spread misinformation. The industry has got to figure out a way to stop the spread of misinformation. One way, by the way, just to digress technically, is that this information is not watermarked in such a way that we know its origin. It would be relatively easy for the industry as a whole to start by asking, “Where did this information actually come from? Where did this picture or this text or video or whatever—where did it enter all of our systems, and then who modified it?” That alone would help us at least understand the source of the manipulation.
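Schmidt’s watermarking digression points toward content provenance: recording where a piece of media entered a system and who modified it afterward. One possible shape for such a record, offered only as a sketch and not as any platform’s actual mechanism, is a hash-chained log; every field name and identifier below is hypothetical.

```python
import hashlib
import json
import time

def fingerprint(data: bytes) -> str:
    """Content-addressed ID: identical bytes always hash to the same value."""
    return hashlib.sha256(data).hexdigest()

def record_event(log, content: bytes, actor: str, action: str):
    """Append a provenance event chained to the previous event's hash,
    so later tampering with the history is detectable."""
    event = {
        "content_hash": fingerprint(content),
        "actor": actor,
        "action": action,  # e.g., "created" or "modified"
        "timestamp": time.time(),
        "prev_event_hash": log[-1]["event_hash"] if log else None,
    }
    event["event_hash"] = fingerprint(
        json.dumps(event, sort_keys=True).encode()
    )
    log.append(event)

log = []
record_event(log, b"original video bytes", "uploader@example.com", "created")
record_event(log, b"edited video bytes", "editor@example.com", "modified")
print(json.dumps(log, indent=2))
```

A chain like this answers the two questions Schmidt poses, where the content came from and who touched it, though making it trustworthy across competing platforms would additionally require cryptographic signing and shared infrastructure well beyond this sketch.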

LaFrance: Do you think it’s possible that these platforms are just too big and humans shouldn’t be connected at this scale? I think Facebook has 2.9 billion users. Is it possible that this is just not how humanity is meant to be organized?

Schmidt: It’s pretty clear that there are things that humans are not very good at, and one is handling a cascade of information. We’re not very good with rapid change and lots of information—it just causes our brains to go crazy, with maybe a few notable exceptions. For normal people it’s like, Oh my God, I just can’t take it anymore. And it leads to spikes in cortisol, stress, perhaps mental illness, God knows. So we have a problem: The amount of information coming at you is overwhelming. In our book, what we talk about is that this coexistence with an information resource driven by features that are not human is a profound change in the information space. We worry a lot in the book about how you regulate it. We’re not in favor of censorship; we’re not trying to filter people’s opinions. So what are the appropriate restrictions on the misuse of this new technology?

Now, you haven’t asked me about all the great benefits of all of this. Let me give you an example of why we should fight for AI. The people I talk to in science, which is where I spend most of my time now, would love to have an AI that would just read everything, because the explosion of knowledge in science is overwhelming their brilliant brains. You can imagine that an AI system that sees everything and can summarize it could be used to move biology, drug discovery, and so forth forward at enormous rates. In the introduction to our book, we use three reference points. The first is AlphaGo, which beat the Korean and Chinese champions and invented moves that had never been seen in 2,500 years. I know because I was physically present when this happened. We talk about a drug called halicin, where the computer took 100 million compounds, figured out how they work, and figured out how to assemble a new broad-class antibiotic, which had never been conceived of and which is in trials now. And we also talk about a technology called GPT-3, which is representative of what are called universal models, where they read everything and then you try to figure out what they understand and what they can do. The results are miraculous. We’re just at the beginning of this ramp that I’m describing. And now is the time to say, “Let’s figure out how we want to handle these things.” So, for example, is it okay—I use my facetious example of the bear. Is it okay to have a racist bear? Maybe people will think that’s okay. I don’t think it’s okay, but that’s an appropriate debate to have.
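For readers who want to poke at a “universal model” directly, a small open model can stand in for GPT-3, which Schmidt references and which is available only through OpenAI’s hosted service. The snippet below is a sketch using the open-source Hugging Face transformers library with GPT-2 as the stand-in; it shows the prompt-in, continuation-out interaction pattern, not the system the book describes.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# GPT-2 stands in here for the much larger GPT-3-class models Schmidt
# mentions; the interaction pattern (prompt in, continuation out) is the same.
generator = pipeline("text-generation", model="gpt2")

prompt = "The explosion of knowledge in science means that"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```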

LaFrance: You’ve written before that the “intelligence” in artificial intelligence is needlessly anthropomorphic, and that we need to reframe the way we think about what knowledge is. What does human intelligence then become, if machines can do all the things we can do, and do them better? Is there any form of intelligence that remains uniquely human, and should we worry if there isn’t? What realm remains ours?

Schmidt: I could speculate that in a few hundred years there will probably be no place where computer intelligence is not as good as human intelligence. But certainly for the rest of all of our lifetimes, there will be things that are uniquely human, things that involve the synthesis of ideas out of left field: truly new ideas, true discovery, true innovation. If you look at the pattern of AI, it’s not replacing the brilliant lawyers, columnists, writers, teachers, and researchers; it’s replacing menial jobs. What happened with AI was that it started working on vision, and an awful lot of people just look at and monitor things, which is pretty mind-numbing. Now computers are better at vision than humans, which frees humans up to do higher-order things. So for the foreseeable period, and certainly for our lifetimes, there will be a place for humans. Hopefully that place is using what we do best: our creativity, our sense of morals, our sense of spirit. Those are going to be very difficult for AI to replicate, given the current tasks and current focus of how AI is evolving, at least in this generation of invention.
