There are a number of audience questions coming in that I want to get to as well, but we talked about
regulation a bit earlier, and I'd love to get both of your takes on this, because, of course, here in the
US President Biden has just made history, right, signing the most expansive regulatory attempt yet
with a sweeping artificial intelligence order at the end of October. He's essentially invoking broad
emergency powers to harness the potential and to tackle what he says are the risks of what he
called "the most consequential technology of our time". Michael, let's just start with you and your
reaction to that executive order. In the US in particular, is that kind of regulation necessary at
this stage of development, and what do you think will be its impact?
28:54
Michael Kratsios
Yeah, it was an interesting order. I think the key question the Biden administration tried to solve
for, right out of the gate and most prominently, was what to do about the existential risk posed by these
large language models. So, the question of whether a bad actor could potentially use one of these models to
ask a simple question like, "How can I create a biological weapon using materials I can purchase on
Amazon?" Things like that are low-probability but very high-impact
risks. And the question is, can the government, and should the government, be in a position
to work with these companies to ensure that type of risk is minimized?
The Biden administration took the strong position that they should be involved, and all the major
large language model providers are now required to share their testing data with the federal
government. The second piece of the executive order thinks a lot about how you can actually start
building a regulatory regime around these large language model use cases. It did not call for a
new AI agency of any sort, but it did call for a number of agencies to begin the hard work of
thinking through what a regulatory regime would look like. Most prominently, the standards agency
of the federal government, NIST, has been tasked with creating a set of standards, if you will,
for testing and evaluation of these models. These, I think, could be very valuable for the larger
industry, so that we can all be singing from the same song sheet when it comes to what you test
before you deploy something. But I think there is a bit of a question in some parts
of the industry about whether this is a little too early. Think about the
general takes that the US and Europe have had. Europe has always taken this precautionary
principle, the idea that you have to think about the harms and maybe create regulations
to try to minimize them before those harms have materialized.

And in some sense, I think many people are viewing this executive order as leaning a little in
that direction: this is great technology, we know some things may go wrong, so let's start
thinking about those now and regulate it. So, there's a big push and pull. But I think the big fact is
that these large language model providers are now required to disclose certain things to the government, which
is a pretty big step forward.
31:06
Amna Nawaz
Is it fair to say the US is far out in front of everybody else when it comes to this development?
31:11
Michael Kratsios
At least on the regulatory front, the European Union has been working on an AI Act for many years
now. So, it almost felt like the US was trying to catch up with it. If you're in a race to who