Washington, D.C. – Senate Majority Leader Chuck Schumer (D-NY) and Senator Mike Rounds (R-SD) held a pen and pad briefing to discuss yesterday’s inaugural AI Insight Forum:
Senator Rounds: You know, we've talked a lot in the last couple of weeks about how this AI is not going away. And what we did yesterday, we hope to be able to do again in the future, probably with the doors open next time around. But the goal here was to try to get as much information as we could from folks that are actively engaged in the business. This is not something that's just going to be a flash in the pan. It is real right now. It's on every battlefield. It is a real competition between the United States and China, and Russia to some degree, but it's also going to change the way we do business in this country and it's going to change the quality of life in this country. Personally, I think, for the better, and I think if we do this on a bipartisan basis, where we can put in a framework that could survive the test of time, that stability will continue to bring more people into the United States that want to make AI their business. And if that happens, then we win this battle. We win the battle with China, but the quality of life goes up because of what AI can do, whether it's health care, whether it's education. This is not like you create a bomb and, once you've created the bomb, such as in the Manhattan Project, it kind of goes away. This is going to continue on, and it's going to continue on for generations to come. And I have appreciated the bipartisan approach.
Leader Schumer: Well, thank you. Yesterday's hearing, as Mike indicated, was just unique. In some ways it was historic. And that's because AI is probably going to affect this generation and the next generation more than just about any issue that we will have to deal with, and it's very hard to deal with, for a bunch of reasons.
First, it's very complicated. One of the people at the hearing yesterday said he doesn't understand all the algorithms and it was Eric Schmidt, no slouch.
Second, it affects just about every aspect of society. It's not just one area or another area or another area.
And third, it keeps changing. So the normal reaction of a legislative body would probably be to say, it's too hard, let someone else do it, or let's kick the can down the road.
But we can't afford to do that. AI is too important to leave alone. And the good news is we did have some fundamental early consensuses at the hearing.
One of the most important moments was when I asked all the participants to raise their hands if they thought government had to be involved in managing AI, and every single participant, and there was a broad range of participants from all sides, raised their hand. And as the discussion went forward, a couple of other things became clear: that we need government involvement on sort of both sides of the ledger.
You know, there are two sides to AI. One is the stuff Mike mentioned. The amazing potential, you can't just say let's ignore it. If we could cure cancer, and Mike talked very touchingly of his wife's passing of cancer, and how if AI could find a cure for cancer, other people wouldn't have to suffer like that. If we can feed the world, which one of the participants, who oversaw millions, billions I guess, of starving people, thought was a very real potential with AI. We could strengthen our national security and make sure Russia or the Chinese government doesn't take advantage of us. If we could improve education dramatically. These are huge benefits that you don't want to look away from.
And there's an agreement that you really need government involvement. One of the participants said you needed $32 billion, just on that side of the ledger. Because you know, not every company can create these huge databases or use them for the right purposes. This is a very expensive thing to be done. But you also, maybe even more importantly, need guardrails on the other side. To prevent the negatives, and we had people in the room whose constituencies had real world impacts. We had labor there. How do you make sure new jobs are created? And what do you do with old jobs that are lost? What about bias? There's plenty of bias in society, and if you just take the systems of using all the data and numbers, those biases get incorporated there. How do you deal with intellectual property? We had someone from the Writers Guild there. And how do you make sure that the things, frankly, that you write and that all these creators write are not just used by AI without some kind of recompense? And, you know, something that did get some attention: what about the doomsday scenarios? What if, you know, instead of curing cancer, you have some kind of virus that could destroy or greatly hurt humanity?
So you need these guardrails and only government can do it. There are some companies who want to try to make their systems better. Good, they should deal with these impacts. But there'll always be outlier companies and outlier countries that don't do it. And so the final place we came to in the morning was there ought to be some balance, you need both. You can't just focus on one without the other, and how much push and pull is there on that balance? So it was exciting. It was exciting.
Rounds: You make the point and I think sometimes we think when we talk about regulation, it's a matter of it's gonna be new regulations and so forth. But really what we're talking about is an extension of a lot of those regulatory policies that are already in place, even if you didn't have AI involved. And the one that comes to mind for me right away is copyrights and patents. We know right now that that regulatory body of work has developed over the years. That's helped us to develop an economy here. People know that if they create something new, they get paid for it, and they get protected. And then we have discussions about how long you protect a patent and so forth. What if it's an AI developed patent or what if it's something that builds off of a patent that's already in existence? Those are the types of regulatory things that we're talking about. That if we fix them early on, we can actually help to develop and expand on AI capabilities here in this country. And if we do that, suddenly those folks want to be here and we can share a lot of that with the rest of the world. But we want that development here. And the way that we approach the regulatory process here, if we do it in a fashion that actually promotes AI development here, and provides protections for those folks, that's a positive thing and we will be long term the winner in this process.
Schumer: Why don’t we take questions?
Reporter: I have a quick question. There's been a lot of criticism of Congress over the way that social media kind of got away from you guys and you weren't able to really regulate it and still haven't been able to. Have you completely ruled out creating a regulatory agency to handle this or do you want this to come from Congress?
Schumer: Yeah, that was one of the discussions we had. A few people thought there should be a new regulatory agency. A few people thought we ought to enhance the power of some existing regulatory agencies. Yesterday's Forum was supposed to first come to a conclusion as to whether government should be involved. That's an overwhelming yes. And then, what are the big questions that we have to answer? We didn't expect to answer any of them in that three-hour period. These are so complicated, but a lot of them were laid on the table, and that was one of them. What kind of agencies should do the regulation? Should it be existing agencies? Should we create a new agency? Could it be sort of a combination of existing agencies in some areas and a new agency in others? And then, related, was the question of how do you deal with this internationally? And a number of people brought up, it would be very good to have sort of what we did, like after the movie Oppenheimer, if you saw the movie Oppenheimer, they created, I think I have the right letters, the IAEA. Is that right? I'm getting a thumbs up over there. For this, now, that's a good thing. But at the same time, you don't want to wait until you have half the countries of the world for it, because the horse may be out of the barn.
Rounds: It's there, I mean, we're already dealing with AI and we're already seeing some of the after-effects of it, even today. But they talked about Section 230 and about the fact that that probably wasn't a helpful item with regard to AI development. One thing that was of interest to me, and that I can't get out of my mind, is they talked about referees. And then they said, you know, it's one thing for Congress to say we want to do this, but it means you've got to have the expertise to actually do this correctly. And where do we find the expertise? Well, if you've got lots of different agencies that currently are going to be impacting AI development, and each of the committees is going to be looking at their portion of it, you know, with regular order in the Senate - you're going to have the Commerce Committee, the Judiciary Committee, the Armed Services Committee, Intel, Banking - where do they go to get this body of expertise, specifically with regard to AI? One of the suggestions was maybe you should be developing the experts that can then advise all of these agencies or these committees on what's appropriate, or at least the guidelines, even the definitions of what it is and what it means, so that you don't do something by accident that could really harm the development. And, you know, we started out with NIST, the National Institute of Standards and Technology, I believe, and they're saying, you know, that's one area. The other areas are the national labs, which have experts. A lot of the universities have experts. Is there a way we can put together a body of these folks to actually help us understand it as we look at the policy implications, committee by committee? And then you've got this built-in group of people that can really be the experts, and then you don't have to have a group of experts in every single one, but rather a really high-quality group to help us develop some of this legislation on an ongoing basis.
Schumer: You know, this is one of the hardest things that we're ever going to undertake. And I don't want people to think, oh, it's going to be easy. It isn't. It's just that the need to do it was so great that we have to, and that's why we came up with a unique type of forum yesterday, which usually doesn't happen, because this is so different than anything we've tackled. Yes.
Reporter: Congress used to have its own body, the Office of Technology Assessment, that would look at these emerging technologies, with deep, true experts on how legislation should tackle them. That was eliminated 25 years ago, and then we saw the social media boom. Do you think some of those collaborations that you're talking about, bringing people together, is there a place for that here on Capitol Hill, so you have resources at your hands, whenever you need them, as you're putting together proposals?
Schumer: Again, we're at the beginning here, and there are lots of ideas, some ideas already out there. There'll be many more new ideas. It's something we should look at; no one's made a determination. The problem may be that this is so complicated and so deep that we may need to go beyond just an OTA.
Reporter: One of the things you mentioned yesterday as a priority, and you seemed particularly urgent about it given the timeline, is dealing with the use of AI in elections, things like deepfakes. You guys have seen dirty tricks in your campaigns. This could potentially supercharge that right before an election. Do you think that there's, sort of, bipartisan willingness to take on that issue, and that you can actually get something done in a timeframe where it's actually going to affect next year's election?
Rounds: I see that as a bipartisan issue that could be addressed. The Federal Election Commission - where's their expertise on it and what can they do? Once again, I think you hit it on the head: can we do it in a timely fashion? But this is my own personal opinion. I haven't talked with other members on either side about it, but to me, that would be one of the priorities that I would like to see as one of the first steps out there, because the election is coming up on us quickly, and clearly there are going to be bad people out there and other countries that would love to make life miserable because they don't want democracy to succeed. And if they could impugn it - if Putin could stand up and say, see, look at what they're doing, our approach to this, using authoritarian approaches, is better than that old-fashioned democracy that doesn't work - we've got to be able to say that we can make democracy, you know, a republic, work, and we're going to have to do everything we can to make sure that those elections absolutely are fair and that misinformation is identified. How you go about doing that, with an agreement by both sides, is going to be the real challenge. First Amendment rights are critical. But make sure that if somebody's going to implicate or play games, or make, you know, clearly illusory messages, there's got to be a way in this society to address it.
Schumer: And in terms of the timing question. Look, we'd like to do everything at once. But it may be some areas are a lot harder to get our arms around than others, because of technical knowledge, or because maybe there are divisions, or whatever. So we may have to do some things sooner and other things later. Our preference would be to do it all, but obviously the difficulty and enormity of the task may not allow us to do that. Some things may have to go sooner than others, and elections is one of the things that we may have to try to do soonest.
Reporter: Leader Schumer, so even if you had relative consensus among Senators on a bipartisan level, you still do have another chamber in Congress.
Schumer: What’s that called?
Reporter: I can't remember. So how do you come up with something that can be addressed also in the House, particularly as they have their own challenges that they're dealing with? And do you believe that there's enough inter-chamber communication right now about this issue?
Schumer: I’ve talked to Leader McCarthy, Speaker McCarthy rather, about the fact that we were doing this, and he was very encouraging. And I think one of the best things we can do is set an example. If it's truly bipartisan here, and it's off to a very good bipartisan start, it will help importune the House to be more bipartisan. This issue, maybe in a few areas, is more partisan, but in almost all areas it doesn't lend itself to that. To cure cancer or to stop disease or whatever, that's not a partisan issue. So I think we can keep it bipartisan here in the Senate, and hopefully that sets an example in the House. There are some people in the House who have already called us and talked to us and want to talk to us. They're busy with other things right now, but hopefully, they'll get on it too.
Rounds: Let me be clear. Senator McConnell and his team have been very supportive of our efforts in moving this forward.
Reporter: Is it important to set deadlines? You know, people look at Congress and just think, they’ll keep talking about it and talking about it and talking about it. But is it important for you, especially as the person who’s in charge of the floor, to start setting deadlines for committees?
Schumer: Well, we've told committees that we want them to move. You know, there's two sides to that coin. If you move too slowly, the horse is out of the barn, and obviously there are certain set dates, like elections, that we’ve got to deal with. But if you move too quickly, you may screw it up. And if you look at the European Union, they moved very, very fast, and now everyone's having to go back to the drawing board. So it's a balance. It's a very hard issue. We've already - and Mike has stressed this with his caucus as I have with mine - the committees are going to be where the action is in terms of drawing up legislation. Now, we may have to coordinate, we may have to push, we may have to do other things, but I don't think a set deadline - on something as bold and brilliant as this - you can't just set a deadline and say we have to have everything by November 15.
Rounds: I know Senator Thune has been working on stuff on our side. I know Senator Cruz has been working, but they're doing it through the committee process. That's the best way to do it, because that's where the expertise is, and those are the committees that work on the regulatory end of this thing to begin with. But let's not lose them in this process, to where they don't feel like they've been a part of it from the beginning.
Reporter: Senator Schumer, you mentioned Section 230. Do you have any intention to bring to the floor legislation restricting or addressing Section 230? In particular, the Judiciary Committee passed a bill on CSAM. It passed unanimously.
Schumer: That's not on this subject; we’re staying on this subject. But we are working very, very hard on child privacy legislation, and it involves both the Judiciary Committee and the Commerce Committee.
Reporter: Leader Schumer, data privacy - this is related, I swear. Data privacy has been noted as sort of a precursor issue to a lot of the AI legislation, but we don’t have the tangible pieces on that, which is a pretty primary issue, or a key committee on this. So the question is basically, are we just trying to wrap data privacy stuff into AI? Or are we trying to deal with that, like, what's the –
Schumer: You can't deal with AI without dealing with certain privacy issues. And some of the privacy issues that affect social media and things like that will probably be wrapped up in what we do. But data, you know, AI by definition involves these huge databases, and there's been very few privacy rules involving them. So it's something you have to look at very carefully. It's an important issue to discuss.
Reporter: Can you illuminate some of the upcoming forums? I understand you’ll hold many of them over the rest of this year. Will some of them be open to the press? Open to the public? What will they look like?
Schumer: Yeah, we haven't decided yet. We talked about how the next forum should focus on innovation, because that cuts across both sides of the ledger. What we call transformational innovation, all the positive stuff, and sustainable innovation, dealing with the guardrails and the negatives. So that'll be our next one. We'll be setting a date shortly on that, and then there'll be a bunch of others.
Reporter: Will any of them be open?
Schumer: So we're deciding that. Some will be open, some will not. We'll be making those decisions. We thought yesterday's was a big success. And obviously, by the questions you're asking, lots of the information came out. But at the same time we had 70 senators listening to all sides talk in an unvarnished way to one another. And the reaction among the Senators has been, on both sides of the aisle, extremely positive about how we did it.
Rounds: One thought on that. One of them will be on health, we know that. But think about this: we actually had a group of experts, more than two years ago, begin the process of looking at AI and all of its implications. You've all had access to the original AI report for DOD purposes. But there was also an extremely highly classified portion of that, that even a number of the members of the Senate couldn't see. And because of the issues surrounding health and what could happen if the wrong types of biologics were released and so forth, they simply couldn't publicly discuss it. But what we can do with health care is something that should improve the quality of life in this country and around the world like we have never seen before. I mean, this is something that I think needs to be opened up, but talking about it openly can be extremely sensitive, because people can take it the wrong way in terms of what can happen when you use AI to develop new types of biologics and so forth. But the upside of this, the opportunities here, are so great that we can't not pursue it.
Schumer: So, I conclude by saying we believe we're off to a great start on one of the hardest issues we will ever tackle. Thanks everybody.