or a citizen of the world, but also just worried about how comprehensively it could upend their
enterprise.
And so, for instance, I was talking to the chief executive of a large retail chain recently, who said that he'd decided that, although he couldn't know for certain, within the next five years he needed to assume that 30% of the jobs currently being done in his enterprise would no longer be needed. And he was going to use that as an assumption, not because it was going to be precisely right, but because directionally it was going to be right. And how could he retrain people to do some of the higher-value jobs? How could he think about reconfiguring his business around that assumption?
And he is now very much doing it, looking at the data sets they have as a business and asking: what are the unmined pools of value in that data, and how, by exploring them with generative AI, can they create new ways of imagining revenue? Of course, every chief executive is serving at the altar of growth, because revenue growth is the most direct proxy for equity growth. And so, how can they use the data they have, and the new capabilities they will have access to through these large language models, to create streams of revenue growth they don't have today, whilst at the same time making some assumptions about how their costs will be configured, particularly around headcount? So, I think, honestly, this is something that keeps every chief executive awake most nights.
48:24
Amna Nawaz
Michael, how do you look at this in particular? In the conversations in Washington, are the concerns they're raising valid from your viewpoint, or are they largely just tied up in political tensions right now?
48:36
Michael Kratsios
I think what I have seen is that there have been these stories, these vignettes, about the way that large language models could be used to create catastrophic results or pose dramatic national security risks. They get a lot of clicks, and people love to talk about them on the Hill. My general perspective is that these are sort of low-probability, high-impact events. And if you're a government and you're responsible for taking care of the country and ensuring everyone is safe and secure, yes, absolutely, there should be people working on this and figuring out the best way to minimize the risk that bad actors use these technologies to do horrible things.
But one thing that I will say, though, is that you can't allow the entire conversation to be dominated by those concerns. The reality is that there are immense benefits this technology can bring to the American people and the way of life of everyday Americans. And if we're too caught up in dealing with these low-probability events, we're not focusing our energy on the ways we can potentially deploy this technology safely in a wide variety of industries. So, to me, there's an important balance that needs to be struck, and hopefully that can continue, and we don't get too caught up in this sort of fear-mongering, extreme-risk conversation.
49:53
Amna Nawaz
Let me make sure I get to some of these audience questions that are coming in, and keep them coming in - we've got a few more minutes, and I will try to work them all in. One person is asking specifically about the impact on the consulting sector. So, Simon, let me put this to you - "In light of