Episode Transcript
[00:00:01] Speaker A: Welcome to 1810, a podcast produced by the Lawrenceville School. In 18 minutes and 10 seconds, we explore the future of education with insights from bright-minded individuals, inspiring new ways of thinking.
So school as we know it has been around for about 200 years, but on November 30, 2022, everything changed when OpenAI released ChatGPT, and the question then emerged: is it time for us to reimagine education? We're on the campus of the Lawrenceville School in Lawrenceville, New Jersey.
[00:00:31] Speaker B: Our topic today is artificial intelligence or AI.
[00:00:34] Speaker A: My name is Jennifer Parnell and I'll be your host. I'm the Director of Innovation and Student Projects at the Lawrenceville School, a grades 9 to 12 boarding school in the Northeastern United States. I teach in the history department, I coach track and field, and I serve in one of our residential houses. My background is not technical, but I have a keen interest in the convergence of technology and learning. And I've invited a professional colleague here today, Mr. A.J. Dahl, who's Senior Vice President of Global Data Solutions and Applied AI for a Fortune 500 consumer products company and a parent of a recent graduate. And actually, today's conversation, today's podcast, is really just the most recent iteration of a series of conversations AJ and I have had since that fall of 2022.
I think for me, one of the biggest things is we've been able to talk really about the intersection of AI and education, connections between Lawrenceville and the world beyond the gates, and really the blending of past and future for our students. So, AJ, welcome to the Lawrenceville campus.
[00:01:35] Speaker C: Thank you, Jennifer. It's lovely being back on campus. It's lovely seeing you and I look forward to the conversation today.
[00:01:41] Speaker A: So, AJ, let's give our listeners a summary of what to expect for the first part of today's episode of 1810. We really hope to provide contextualization for our audience and then discuss the relevance and exponential growth of AI. Following that, we're going to move our conversation to more specific discussions about the corporate experience and then balance that with some different issues and perspectives in the debate swirling around AI development. For me, all of this started with a cup of coffee.
One of the seniors in my Honors Government class said, "My dad is really into AI. You're really into AI. You should connect, because you're both always talking about AI." And so I distinctly remember our first conversation at a coffee shop in Princeton, and one of your first suggestions was to start an incubation group. Do you remember that?
[00:02:28] Speaker C: I clearly do. I remember that first coffee chat, and I remember we talked about knowledge management.
We talked about how do you get some initial proof points so you start getting people familiarized with AI. And then I also talked about how do you not do this alone and instead you do this in partnership between academia and industry.
[00:02:52] Speaker A: I think for me, that's been one of the most important aspects of this, really: the opportunity to collaborate. We get to marvel at the changes, we get to commiserate at the relentless pace of AI, we get to share the joy of being lifelong learners. And it's frankly almost a bit of therapy as we all try to adapt to the ubiquitous pace and space of AI. So I'm really grateful for this collaboration.
And I want to start with a quote from an MIT AI conference this past weekend in New York City.
It was called AI at the Crossroads, and Ethan Mollick, an associate professor at Wharton, commented that one of the key things to remember about AI is that nobody knows anything; everyone is learning. The key is just to start using it, get over the barriers to use, and then get others to try it. To what extent does that describe your journey with AI?
[00:03:47] Speaker C: I grew up in two very impressive, very successful Fortune 500 companies, but they were the traditional legacy companies. They were not digitally native like Facebook or Google. And so my only interaction with technology was frankly through my laptop. So a bit like you, not very technology fluent. But over the last four years, I've had to really reinvent myself through AI and I've become a student of AI. I'm a practitioner of AI and I've really surrounded myself with bright minds like yourself and many outside in the field, in academia, in industry, to make sure that this journey happens in a much more collaborative fashion.
[00:04:29] Speaker A: I think that's really important for so many of us. This is a profound change in our professional journeys. It's not what we expected, but it's here and we're trying to embrace it. And I think it's important that we start with a quick explanation of sort of how AI affected your life right from the beginning. At least for me, we introduced it in my classroom. We put up some questions and the students were just amazed; the applications were immediately apparent. Was that your first interaction with an AI model?
[00:05:02] Speaker C: You know, my first interaction was when ChatGPT broke, and that happened in late 2022. And like everybody else in the world, I went and started prompting it. Frankly, I was doing what I think in hindsight was essentially a Google search. I was having ChatGPT do my workout regimens, cooking recipes. But I think the point was it began to get me familiarized with this new world of, call it, GPT.
[00:05:30] Speaker A: I think the demystification is very important, and let's start there just briefly for our listeners. I think one way to think about AI is really to think about it as an integration of mathematical and computational concepts that allows a machine to process and generate language. It uses linear algebra and calculus to optimize its understanding and generation of language, and then it uses computational algorithms to scale those processes to handle really large volumes of text efficiently, basically by simulating a simplified, abstracted version of the brain's structure and learning process.
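For readers who want to see that idea concretely, here is a minimal, purely illustrative sketch; the five-word vocabulary, the training pairs, and all of the numbers are invented for this transcript rather than drawn from the episode. It shows a single weight matrix (the linear algebra) being adjusted by gradient descent (the calculus) so that the probability of the observed next word goes up.

```python
# Toy illustration: a one-layer "language model" over a five-word vocabulary.
# A weight matrix (linear algebra) scores possible next tokens, and gradient
# descent (calculus) nudges the weights so observed word pairs become more likely.
import numpy as np

vocab = ["the", "students", "learn", "with", "ai"]
V = len(vocab)
idx = {w: i for i, w in enumerate(vocab)}

# Tiny "corpus": (current token, next token) pairs from one invented sentence.
pairs = [("the", "students"), ("students", "learn"), ("learn", "with"), ("with", "ai")]

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # model weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

learning_rate = 0.5
for _ in range(200):
    for cur, nxt in pairs:
        x = np.zeros(V)
        x[idx[cur]] = 1.0                 # one-hot encoding of the current token
        probs = softmax(W.T @ x)          # predicted distribution over next tokens
        grad = np.outer(x, probs)         # gradient of cross-entropy loss w.r.t. W
        grad[idx[cur], idx[nxt]] -= 1.0
        W -= learning_rate * grad         # one gradient-descent step

# After training, the model predicts "learn" as the most likely word after "students".
x = np.zeros(V)
x[idx["students"]] = 1.0
print(vocab[int(np.argmax(softmax(W.T @ x)))])
```

Scaled up to billions of weights and trillions of tokens, this same loop of predicting the next token and adjusting the weights is essentially what produces systems like ChatGPT.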
What do you feel has been, at least for you, the most foundational aspect of AI in terms of how you think it's transformative in a business context?
[00:06:17] Speaker C: We are actively experimenting.
I think we've identified three big areas where we are really leveraging AI. One is the predictive power. A great example of this would be my ability to predict, at the retail shelf, the presence of my products and brands, so that when a shopper comes in, it's in stock and they can actually pick up the product they've come in for. The second piece is we're a brand marketing company, and so we create a lot of marketing content. So the ability to leverage generative AI for marketing content creation, that's a very big, call it, use-case capability that we're maturing. Then last but not least, what's monumental is to take institutional knowledge in the company, ingest that data, and then be able to do a knowledge search on it to quickly glean insights, with the end game, obviously, of taking those insights and executing them into action with great velocity.
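As a side note for technically minded listeners, a minimal sketch of that third "knowledge search" idea might look like the following; the documents, the query, and the library choice are hypothetical stand-ins rather than the company's actual stack, with a TF-IDF vectorizer filling in for the embedding model a production system would likely use.

```python
# Illustrative "knowledge search": ingest a handful of invented internal
# documents, then rank them against a question. A TF-IDF vectorizer stands in
# for the embedding model a production system would likely use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Q3 shelf audit: out-of-stock rate for sparkling water rose 4% in the Northeast.",
    "Brand guidelines: campaign imagery must use the updated logo and tagline.",
    "Supply chain memo: the new distribution center cuts replenishment time by two days.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)  # "ingest" the knowledge base

def knowledge_search(question: str, top_k: int = 1):
    """Return the top_k documents most relevant to a natural-language question."""
    query_vector = vectorizer.transform([question])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [(documents[i], round(float(scores[i]), 3)) for i in ranked]

# The shelf-audit document ranks first for a question about on-shelf availability.
print(knowledge_search("Which products had out-of-stock problems at the shelf?"))
```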
[00:07:27] Speaker A: I think for me, one of the most remarkable things has been just looking at the growth of AI. Originally, at least for us, it started out with simple list creation, data summarization, text extraction, and maybe some coding and mathematical problems. But for me, what's been the most amazing thing is looking at what it can do now as compared to what it could do just a mere 18 months ago in terms of image and video generation and the multimodal capabilities. Now models can handle text, images, and sound simultaneously; they can handle translation; they can even analyze sentiment from different sources.
Obviously we have the autonomous robotics and navigation.
We have a complexity that we hadn't seen earlier in terms of code generation, and especially with digital personas. So I think this idea of AI agents is really at the frontier.
How do you feel this is going to affect your particular aspects of the business, in terms of finance, retail, and manufacturing?
[00:08:29] Speaker C: I envision these machines will begin talking to each other. It's actually happening as we speak.
And I think we'll move from mere insights from these machines to insights that can actually be executed and actioned by multiple machines, all connected to each other. So I think that's where we're evolving to: how do you take big processes and begin to create the mechanics where you get the insights, you connect the machines, and it's all happening with, frankly, very limited human intervention. What's really taking place is you're going from insights into tasks and into actions with much greater velocity.
And why that's important is because, with the advent of e-commerce and of social commerce, businesses need the ability to respond very quickly. We're all looking for how to do this with a muscle of resilience and a muscle of speed.
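To make that "insights into tasks into actions" chain concrete, here is a deliberately simplified sketch; the agent names, the product, and the quantities are invented for illustration, and a real deployment would presumably run on message queues, APIs, approvals, and monitoring rather than direct function calls.

```python
# Simplified sketch of automated components handing work to one another:
# a predictive "insight" step, a planning step, and an execution step.
# All names, products, and quantities here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Insight:
    product: str
    store: str
    signal: str

@dataclass
class Task:
    action: str
    product: str
    store: str
    quantity: int

def insight_agent() -> Insight:
    """Stands in for a predictive model watching shelf-level sales and stock data."""
    return Insight("sparkling water 12-pack", "Store 114",
                   "projected out-of-stock within 48 hours")

def planning_agent(insight: Insight) -> Task:
    """Turns the insight into an executable replenishment task."""
    return Task("replenish", insight.product, insight.store, quantity=40)

def execution_agent(task: Task) -> str:
    """Stands in for the downstream system that would actually place the order."""
    return f"Order placed: {task.quantity} units of {task.product} for {task.store}"

# Insight flows into a task, which flows into an action, with no human step in
# between; that is why the human-in-the-loop guardrails discussed later matter.
print(execution_agent(planning_agent(insight_agent())))
```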
[00:09:31] Speaker A: At least in the education world, some of that is actually what's alarming, because what we see as efficiency is also complicated by the fact that it's not necessarily transparent. AI is arguably incredibly efficient, but it can train itself in ways that are not easily understood. And so there is this understanding that AI can be either transparent or efficient, but not both simultaneously.
What are your thoughts on that complication?
[00:10:02] Speaker C: I think there's some truth to it. What I would say is a lot of the space is evolving, and the bigger piece that we're all looking for is transparency of these models, security of the models, confidentiality of the information, privacy.
So I think it's all, call it, tangled together: how do we go on this journey in a much more responsible way, while being very mindful of all of these aspects we've just talked about?
[00:10:32] Speaker B: It's this approach of be curious, but be careful.
[00:10:35] Speaker C: Indeed, indeed.
[00:10:37] Speaker A: And I think, at least on campus, that really manifests itself in terms of what it can do well and what it can't. In some ways, some of the models are exceptionally good at chemistry, at mathematics, at physics, mostly theoretical physics, but the AI struggles with related rates, struggles with word problems, and it's not correct all the time.
Sometimes it just makes up sources and it can't solve simple reasoning queries. Where do the weaknesses emerge in the business world?
[00:11:04] Speaker C: Yeah. So I think we've been very declarative that we want to have the human in the loop.
We recognize these models are not perfect.
I cannot see anytime in the near future that marketing content gets published on behalf of the company just through a machine, or, for that matter, any actions taking place in the absence of a human overseeing, call it, the output of these GPT machines.
So I think those are the guiding principles for us: a recognition that the technology is maturing.
How do we become students of it? How do we nudge it? How do we push it?
How do we leverage the latest developments but go through a journey that is very responsible, that's leveraging the technology advances, but very much with the human in the loop?
[00:11:58] Speaker A: I would agree. I don't see AI models becoming your child's teacher anytime soon.
One of the other questions I want to pivot to is this idea of: is AI today the best or the worst that it will be? In other words, it's improving all the time, so therefore it's arguably the worst it will ever be.
But on the flip side, obviously AI is starting to use AI-generated data itself, which adversely affects the entire model. And if it's both the consumer and the producer of data, the content might become unusable. I think The New York Times and The Atlantic both recently reported on the cannibalization of data.
To what extent will this render models less useful?
[00:12:43] Speaker C: My personal view is that I think we're going to quickly run out of data in the public domain.
I think that's already happening.
What I see as the possibility and the opportunity is the data that sits within enterprises, within universities, within academia. That data is the intellectual property of the enterprise, and the ability to actually harness that information and get insights from it, I think that's the big opportunity that's still forthcoming.
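As an aside, the cannibalization concern Jennifer raised above can be illustrated with a toy simulation, invented for this transcript rather than taken from the reporting she cites: if each new model is fit only to samples generated by the previous model instead of fresh real-world data, sampling noise compounds and the learned distribution tends to drift and narrow over generations.

```python
# Toy simulation of training on self-generated data: each "generation" fits a
# simple model (a mean and a standard deviation) to samples produced by the
# previous generation's model instead of to fresh real-world data.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_generations = 20, 200

data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # generation 0: "real" data

for generation in range(1, n_generations + 1):
    mu, sigma = data.mean(), data.std(ddof=1)      # fit the model to the current data
    data = rng.normal(mu, sigma, size=n_samples)   # next generation sees only synthetic data
    if generation % 40 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}  std={sigma:.3f}")

# The standard deviation shrinks toward zero over the generations: each round
# preserves less of the diversity of the real data the process started from.
```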
[00:13:18] Speaker A: That's interesting, because I'd like to segue into where the growth of AI is going to happen.
Right now, ChatGPT, according to Reuters, has about 180 million unique users.
If we look at the global AI market, currently valued at around $279 billion, that's up about $80 billion just from 2023, growth of roughly 40 percent year over year. We're looking at some other statistics from Forbes, which mentioned in a recent article that in 2015 only 10% of companies really had AI in their plans; now more than 83% have AI in their plans. And PricewaterhouseCoopers reported that AI could add more than $15 trillion to the global economy by the end of this decade. Does that translate to tangible business gains in the real world?
[00:14:12] Speaker C: Yeah. What I would say is that clearly there's lots of business opportunity. The promise of AI is real.
Most companies are on a spectrum from early proofs of concept all the way to full-scale adoption and monetization, depending on the industry, the type of company, and frankly the level of executive engagement on this journey.
So I would say that this is progressing, right? It is going to progress, and I think it's going to progress much faster. I will also tell you that today the overall value realization, the value that's actually been captured, is less than the investment that's being made, because we're still really maturing the technology.
[00:14:57] Speaker A: Which is the case for most technologies, isn't it? There's going to be a cycle that has to occur. And at least within that cycle, we're also starting to see some newer perspectives that really look at the dire warnings, and these are the headlines that most people are familiar with: everything from job losses, ethical questions, copyright issues, data privacy, and deepfakes, to the algorithmic bias that's going to be magnified over time.
And I think, at least for me, it's a bit unnerving when models provide erroneous answers with such astonishing confidence.
And layered on top of this are a lot of the issues of implementation within an organization: adoption, accessibility, and to some extent over-reliance. How do you feel that your company is starting to address these issues?
[00:15:47] Speaker C: Yeah, I would say the human friction is real.
I think there's a very pervasive, call it, fear of technology.
What we are doing is being very intentional in terms of upskilling, in terms of training our talent, and frankly in terms of demystifying this through small incubation use cases. I think it's what Ethan Mollick says: you just have to lean in and start working with it. And I think over time what we're finding is it's like lighting a match. Once you light it, the fire begins.
And it happens across different, call it, demographic segments. You can clearly see that some of the more digitally native employees are obviously embracing this a lot quicker than, what I would say is, people that did not grow up in a lot of digital technology. So I think it's a journey, and I think it's happening across the different companies and industries in terms of this massive push to upskill and create, call it, all the learning methods to get people to embrace this.
[00:16:52] Speaker A: I think that's a very important part of it. I think the other pushback that I've heard quite extensively is the massive energy use that these models require. Beyond just the regularly acknowledged concerns about safety or the future of work or creativity, we're looking at a trade-off in terms of energy use. AI is one of the most energy-intensive modern IT undertakings, and I think it was an article in Forbes that said a world concerned with carbon emissions may not actually be ready.
What are your thoughts on energy use as a limiting factor?
[00:17:29] Speaker C: Yeah, I think that's real. I think today it is a limitation, and that's why you've got many bright minds, companies, investments taking place in the space of how do you create quantums of new energy sources? And you can see that playing out in the US through the resurgence of the nuclear industry.
[00:17:49] Speaker A: So certainly a myriad of issues in the AI space. AJ, thank you for your time today. To our listeners, we hope you enjoyed today's session. Check back soon as the conversation continues in episode two, where we do a deeper dive into AI and education. That's 1810 for today: inspiring ideas from Lawrenceville to you. We look forward to our next exploration.