working, where it's not working, what areas, where it's not performing, where it is performing. And
then from there you can go out and generate more data to help train the model in the right
direction. But I think the top-level, 30,000-foot view is trying to find use cases that are important for
the business but aren't going to create a huge headache if they go wrong. As a basic example, you
wouldn't want to create a large language model for an outside, customer-facing solution out of the
gate. You'd probably want something internal for your own teams to be using first, and then slowly
grow into something more external.
21:42
Amna Nawaz
Can I ask you something purely from a consumer standpoint? We hear this a lot from people
with consumer-facing products in particular: this kind of development model sort of relies on
things going wrong first before they get fixed, right? If the inputs haven't already been implemented
into the model. And I wonder how we should think about that outside of business leaders, because
as we keep mentioning, the platforms are only as good as the inputs. So, it feels a little bit like kind
of building the plane as you're flying it. Is that fair?
22:13
Michael Kratsios
A little bit, yeah. I think both when it comes to regulation and even when you're trying to implement
these in a corporate setting, it's all about taking a risk-based approach to giving the green light for
deployment. There are some very simple, basic, easy things you can do where the world's not
going to end, everything's going to be fine if it doesn't work out, and there are other ones where
that's not the case. So you, as a leader in a corporate setting, have to be very thoughtful around trying to create a
risk framework yourself on what are the use cases and how do they fit within sort of the importance
of the business. And then from there, you can figure out how much testing and evaluation
preparation you need to do before you hit the deploy button.
22:52
Amna Nawaz
Related to that, there is this issue of internal biases, how they get implemented unintentionally,
unwillingly, sometimes into the platforms and the work that we put out. There's actually an
audience question that just came in that helps me make this pivot. Someone watching, thank you for
your question, has written, "How are leaders looking to mitigate any biases that can arise in the
LLMs?" Simon, let's just start with this. Is this part of the conversation? Is this something that leaders
are cognizant of among the CEOs you talk to? We keep hearing that AI can fuel efficiency, that it
can boost competitiveness. But the consideration of impact, and potentially biased impact, on
consumers, are they talking about that?
23:37
Simon Freakley
They absolutely are and, of course, it's not as if they've just started thinking about it with the advent
of generative AI. The whole question of whether there is systemic bias in all sorts of processes,
from the obvious one of recruitment, for instance, all the way through the different business processes,
how does one spot bias? How does one adjust for bias? This has been a very live and important
discussion for quite some years now. So, I think the discussion that we're now
having on the back of how large language models are programmed to ensure that they don't display
or inherit the bias that's there in the data, but actually further cleanse the bias out of the system, is a