
Energy in 30: Harnessing the power of generative AI
Tune in to Energy in 30, hosted by Joan Collins and David Meisegeier. In this episode, hear from ICF innovation strategy and services lead Nick Lange. Together, they discuss generative artificial intelligence (GenAI) and its impact on the energy industry.
Topics in today’s episode include:
- Three reasons why ChatGPT went viral in 2023
- Is GenAI trustworthy? How to assess the reliability of GenAI output
- Working to be good stewards of powerful responsibility
- What does a GenAI-fueled future look like?
Full transcript below:
Joan: Welcome to Energy in 30. We'll use the next 30 minutes to explore how utilities and the industry are reacting to forces that are shaping new offerings for customers.
David: If you are a utility manager, consultant, technology provider, or just curious about energy, we hope to push your thinking about the changes that are happening in the energy industry with me, David Meisegeier.
Joan: And me, Joan Collins. And David, what more topical subject than to push our thinking around generative artificial intelligence—that's a mouthful; GenAI, I think, is how people are referring to it—and the impact that it's having on our industry. I don't know about you, but all summer, I felt like every turn I took, I was talking about ChatGPT [Chat Generative Pre-trained Transformer]. Did you experience the same?
David: It's showing up everywhere: "How to make $100 an hour using ChatGPT."
Joan: "How to create book titles."
David: Yes, it is everywhere.
Joan: It really is. So we've invited our colleague, Nick Lange, to dive into this world and share some of the practical advantages of GenAI and some of the testing he's been a part of with some of our industry partners.
David: Nick's an innovation strategy and services lead at ICF. He has 20 years of experience at the leading edge of energy policy, program, product, and people-based solutions. He started as an engineer, and his journey has evolved to include the emergent space of the social sciences. And today, he's working intensely on solutions that leverage generative AI. So, Nick, welcome to the show.
Nick: Thanks, David. And thanks, Joan. It's great to be here.
Joan: We are so glad to have you here because AI has been around for a while, but it's like, what happened? Why is this all of a sudden such a big deal?
Three reasons why ChatGPT went viral in 2023
Nick: Yeah, you talk about hearing about it all summer; I've been hearing about it all year. And I think the big tipping point was this past February. You might've seen news that it was the fastest-growing tool in history, reaching 100 million users. And the question is: Why did it go viral? Those who have been watching this closely understand that under the hood, it didn't seem like much had changed, but there was a lot that came together at the right time.
There were three big reasons that people have identified as to why ChatGPT created such an impact, even though technology-wise it wasn't that big a leap. AI, as you mentioned, has been around for a while, but there was something different about ChatGPT, and it really comes down to three big things.
The first is that there was a significant jump in capabilities. I'll boil that down: it seemed like a line was crossed where this tool could be as good as or better than many humans in a wide array of areas, and that surprised a lot of people. So capabilities was one big reason.
The next big reason was ease of use. With a lot of the historical AI that had been around, you had to have a Ph.D., or a million dollars, or access to people with millions of dollars and Ph.D.s to get to play with it. And here was a chatbot that seemed to be this good—the cutting edge, the best of the best—and it was available for free on a public website, and you could do something [with it] and share it with friends. Word of mouth is mostly how they got to 100 million users as quickly as they did, within a month, frankly.
So capabilities was number one. Number two was how easy it was. And the last big piece was cost. I mentioned it was free to use, and early on, it wasn't just locked up in a laboratory. Developers and the industry could start to tap into these powerful new capabilities that were easier to use than ever.
So all of that came together in one moment. Any one of those things would've been big news on its own, but really, the reason why you heard about it so quickly and why we're still hearing about it is what happens when all of those come together.
David: And it is not just ChatGPT; we're seeing a lot of companies coming out with their own variations of generative AI. Is that leveraging the same foundational technology?
Nick: Mostly, yes. If you want to go very far into the weeds, there are really great resources available online. But there was a breakthrough three to five years ago in a new architecture, and the key to that was just how far you can get if you have a really good model for language. If we think about it, that's the way we get around the world, the way we understand what's in pictures, the way we understand computer code. There are a lot of different things that have the structure of language: names for things, relationships between things, how things work together.
Language encodes quite a lot about our world. And that new model was hooked up to more data than ever before, in terms of how much language was fed into it. So the relationships that the model could learn about the world and how it operates: that's really the fundamental underpinning of many of the advancements you've been hearing about. Whether it's OpenAI or Google, these are [the GenAI tools] using that new architecture.
David: And ICF has a version now as well. Is that right?
Nick: That's exactly right.
One of the things that's important to recognize is that there's a limit to what was used to train these models. When I talked about [new GenAI models] hooking up to more [data] than ever before, there's also a lot that they weren't trained on. There are a lot of humans that out-of-the-box AI is not better than, and that's about these niche areas—or sequestered areas—where the data isn't part of a publicly available resource, say, on a webpage, but sits with expert scientists in particular areas that aren't well represented in the training data sets. We've been tapping into the same architectures and extending these models to be able to work with private data, tapping into these same intelligent capabilities but working with a different resource.
WYSIWYG is now more about what you say than what you see
Joan: We've heard a lot about this as almost revolutionary, that it's democratizing access, and I think that's so powerful. And when you look at this, it's being used in so many different industries. I think what would be really interesting to those listening, and even just to me, is: How is it affecting the energy industry? How are we using this? And I think you are really in a nice position to talk a little bit about that.
Nick: Yeah, it's a good segue. Briefly, the ease of use is probably [more revolutionary]—even more so than the capability leap—being able to ask AI for what you want. For those of us who are old enough to remember the earlier days of computers, there was a time when you didn't have windows to click on, and you had to type it all into what's called a command prompt. There was a big change with “what you see is what you get”—WYSIWYG—which opened up computers to a lot of people, so you didn't have to be a coder to word process or do spreadsheets. That was significant.
A lot of people are talking about this new era as stepping into “what you say is what you get.” And this ability to work with experts who might not be data scientists, but to talk about the process and how we might use this AI—the interaction as we talk about the way we're looking at using this—is very similar to the way you talk to a new hire about understanding the process and the way you do things. And now the new hire is a computer program, which opens up all sorts of questions.
Joan, I know you wanted to talk about the way we're using it [in the energy industry], but one thing I really want to stress is just how critical the approach you take to a new technology or tool is. The metaphor I've been using I borrowed from healthcare, frankly, not the energy industry: let's say there's a new wonder drug out, and we really want it to be able to cure illnesses. But first it's really important to make sure that it's safe and effective and approved for use. I think AI raises a lot of concerns around safety, yes, and I think we should talk about that. And effectiveness. And separating those is important. Safety, first and foremost, is: Can you trust these things? For many people, this new powerful architecture is a bit of a black box.
We've probably all heard over the summer about how the language went off the rails and it started talking crazy talk. Safety is also about things we can trust in society: if change happens too quickly, it can be very disruptive, and there can be unintended side effects. Like a drug, there are side effects, and if you want to treat an illness, you want to make sure that the cure is not worse than the illness itself. So we're thinking about that a lot when we talk about what types of projects make sense to start, and we're being very incremental about where we're looking to deploy this, watching closely as we test its effectiveness.
Is GenAI trustworthy?
David: I mean, biases are a concern, because depending on what this was trained on, [the output] could result in favoritism. So that goes to the accuracy and safety, right?
Nick: Absolutely. And this is a key issue when you think about what a lot of these models were trained on. They're trained on what was available, and there [may be] misrepresentation or overrepresentation. Let's say you ask for “a successful professional”: it's going to give you what is in its data, what's associated with those words. Those are key barriers to using this for productive work. And that's where I'm very excited to say we have some early results to talk about and what that looks like. That's a huge concern, yeah.
Joan: Can you expand?
Nick: Gladly. One thing, though: it's really fun to ask the AI to just answer questions for you, and it will gladly answer them, but that's not the way we're starting to use it. I'll give one example. As an industry partner, we often are drowning—or swimming, at least, before we drown—in all the different overlapping policies or programs; there's a lot going on. That's sort of a good problem to have in our industry—that there is a lot of investment in innovation and new equipment, new measures or new rules, and making sense of that and informing strategic plans is a big, hard problem. And our current way of doing that is having a lot of good experts read all of those materials and make sense of them. For this project in particular, we wanted to know if we could help our experts by doing some of that initial reading, in the same way that, if you're in a law clerk's office, you might hire an intern to help organize information so that you, as the expert, can come in and have easier access to it.
We use these technologies to read and look for specific indications of, say, heat pumps or new regulatory standards, to crosswalk where those showed up across all these different data sets, and to organize that into a research aid so that when we come to a larger set of knowledge, we can look it up in that field. And so we did some early tests on how well this would work and how susceptible to hallucination it was. We included citations for everything and basically asked the AI to read through and then restructure key bits of information as it related to key themes and key questions we had. Then we had our experts work on that. It saved probably at least a few weeks of time when we didn't have weeks of time. The tight timeframe is really why we turned to this, as a way to look at how AI could be part of our team as we sought to tackle some of these challenges.
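The crosswalk Nick describes can be pictured with a small sketch. Everything below is invented for illustration: the document names and themes are stand-ins, and a simple keyword match fills in for the GenAI reading step, which in practice would be a model prompted with the experts' questions.

```python
# A toy crosswalk: scan a set of documents for key themes and record where
# each theme appears, with a citation back to the source sentence so an
# expert can verify every entry. A keyword match stands in here for the
# model-driven extraction; all names and text are hypothetical.
from collections import defaultdict

documents = {
    "StateA_plan.txt": "The plan expands heat pump incentives. New codes apply in 2025.",
    "StateB_filing.txt": "Utilities must report efficiency savings. Heat pump rebates double.",
}
themes = {
    "heat pumps": ["heat pump"],
    "standards": ["codes", "report"],
}

# Build theme -> list of (document, sentence) citations.
crosswalk = defaultdict(list)
for doc_name, text in documents.items():
    for sentence in (s.strip() for s in text.split(".") if s.strip()):
        for theme, keywords in themes.items():
            if any(k in sentence.lower() for k in keywords):
                crosswalk[theme].append((doc_name, sentence))

for theme, hits in crosswalk.items():
    print(theme, "->", hits)
```

The structured rows are exactly what gets packaged into a filterable spreadsheet for the experts, with each row traceable back to its source.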
Joan: Okay. So does that fall in Å·²©ÓéÀÖ category of analyzing or organizing, or both?
Nick: In this case, it was both. We did a lot of close co-creation with our experts. We worked with our analysts, who were not developers or software writers, but we asked them how they would do the job. We asked them what types of questions they would ask of each document, what sorts of things they would be looking for. And we took a lot of their expertise, and some of what we brought to the actual programming side came from stakeholder interviews.
So it was a really wonderful blend of qualitative research from our experts, tapping some of their professional judgment of the way they do their work, and trying to embed that as instructions for the AI of what it should be looking for in these documents. And then, once it found them, we also asked the team: okay, once you have it, what do you want to do with it? So we also worked to structure the output in a way that could be a sort of atlas of cross-linked connections around the area of inquiry. So again: getting lists of questions and key themes, learning what would be useful to them, and packaging that up. It turned out to be an Excel spreadsheet with filters and tables, but it was custom to their need.
How to assess the reliability of GenAI output
David: So how did you know that the output was acceptable or good?
Nick: This is one of the most important questions we asked early on, because these tools can look like they're doing a good job. Early on, we tried to really pin down measures of success: criteria and testing standards to make sure that what it's producing is going to be reliable. This is the safety and the effectiveness, right? It's unsafe if it's not a reliable agent producing things. So we developed some tests early on. We always worked on one initial document, or a few, to make sure, before we consumed all of it. At some point, though, you do worry whether, if you spend all this human time making sure the AI is doing well, you've saved yourself any effort.
Each time, I think the standard for success and quality assurance is distinct to the use case, but we are building out processes to check against some of the issues and concerns you mentioned. Ultimately it begins as a pretty closely scrutinized assessment of early outputs, and then always having the non-AI source linked, too. So, citations in a text: as in a term paper, when a claim is made, it's cited, and we check those citations. If we're not getting 100%, we go back and tune it up a little more. That basically becomes the standard of acceptability for the outputs of these models.
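The citation check Nick describes can be sketched in a few lines. The claims, snippets, and source text below are made up for illustration; the idea is simply that any cited snippet that can't be found in its source flags the output for another round of tuning.

```python
# A minimal citation verifier: every model claim carries a citation
# (source document, quoted snippet), and we only accept output when the
# snippet is actually present in the source. All data here is invented.
sources = {
    "report_2023.txt": "Program savings rose 12% year over year under the new measures.",
}
model_output = [
    {"claim": "Savings increased 12%.", "cite": ("report_2023.txt", "savings rose 12%")},
    {"claim": "Costs fell by half.", "cite": ("report_2023.txt", "costs fell by half")},
]

def verify(claims, sources):
    """Return the fraction of claims whose cited snippet appears verbatim
    (case-insensitively) in the cited source, plus the failing claims."""
    failures = []
    for item in claims:
        doc, snippet = item["cite"]
        if snippet.lower() not in sources.get(doc, "").lower():
            failures.append(item["claim"])
    passed = len(claims) - len(failures)
    return passed / len(claims), failures

score, failures = verify(model_output, sources)
print(score, failures)  # anything below 100% goes back for tuning
```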
Joan: So you're still involved, humans are still in Å·²©ÓéÀÖ loop?
Nick: That is essential at this stage. Maybe someday in the future, we will get to a place where we feel comfortable stepping back in a few key ways. But for all of our work—not just to know that we're doing a good job—again, the upside of democratizing access to these capabilities rests on the expertise of experts. Who knows what's good? These people do. So the experts, wherever we're partnering, stay involved at the beginning, in the middle, and at the end, and we do, too, just to make sure that we're really not outsourcing very much other than the drudgery of these tasks at this stage.
David: In the energy industry, the initial CFL [compact fluorescent lamp] experience was overwhelming. It took years to overcome that and to convince customers that the new CFLs were in fact the quality that they should have been in the beginning. Do you see a similar thing happening here? Everybody's jumping in on this, and as you said, you're spending quite a bit of time making sure it's doing what it's supposed to be doing. Do you fear that we might experience a little pushback? Or do you think that, just as it almost seemed like a switch was thrown in February and all of a sudden this new algorithm did something we couldn't do before, it'll just keep getting better, faster and faster, and we'll get to where it needs to be incredibly quickly? Which way do you think it's going to go?
Working to be good stewards of powerful responsibility
Nick: There's a lot in that question. I'm really glad you asked it, and if I miss a few key parts, please let me know. The short answer is that I'm very concerned about that. You mentioned CFLs and that painful lesson learned, and perhaps forgotten by some: efficiency measures need to, at the beginning at least, be as good as whatever we're saying they should replace. I mentioned safe and effective drugs in my medical analogy earlier; FDA approval seems reasonable, but let's say you have a person about to give you an injection of an FDA-approved drug. There's something called the hygiene factor, which is: is the needle clean? I'm very concerned that even if the technology is capable, there will be people who misuse it. In a competitive race, people will do different things. It's so accessible, and we've already seen a number of examples.
We take the responsibility for the future opportunity of this technology to help us do what we do very seriously. That is why our approach is this co-creation with experts. A lot of people are worried, perhaps rightfully so, about disruption to their current work and how it operates. Rather than sticking our heads in the sand or fingers in our ears, being afraid of that, we feel the best way is to try to make that future and to try to be good stewards of the powerful responsibility, which includes not tainting it. There have been a lot of learnings already about what these are good for, and we're trying to protect ourselves, our clients, and our partners from "oopsies."
Humans make mistakes too, but especially when it comes to AI, I think everyone's on a hair trigger for an expectation of what that could look like, especially the mistakes we might not see coming, the side effects that might take a little while to show up. We haven't trademarked this, but a lot of us take a human-centered design approach to the work we do: this idea of working incrementally, cross-functionally, and collectively on a problem, testing it, and then coming back and observing what happens. We are borrowing a lot of those methods in our early applications here and going step by step, making sure we understand how it works, and always, always, always having humans very much in the loop at these different stages to try to catch those mistakes.
Joan: And I would think, too, that for companies, customizing also helps a little bit with that. It seems like adoption of customization has increased over the last year.
Nick: Yeah. And actually, one way that shows up: I mentioned the assistance we wanted to give to our team. A lot of our early efforts that we can be confident will be useful are in this sort of assistance role. I mentioned people are worried about the disruption to jobs that AI can have and [question] whether AI will take your job away. It's been said a lot, but I think it's true: it's not that AI will take your job away, but that people using AI will outcompete you. And the customization is being able to leverage these tools to help you do what you already do now, but better or faster or more cost-effectively. That's really what we're trying to do.
So when we talk about safe and effective: safety we've discussed a little bit now, but how are we looking at effectiveness, and what does it mean to have quality and value through these tools? We're working very closely with the people who are feeling those pains most acutely right now. These conversations often begin where there's the most constrained resource within an existing area. Where can we relieve some of that pain by designing some of these tools to work with that brain, but customized to your use case? The affordability, the flexibility, and the ease of use we talked about earlier all allow us to build those custom solutions much faster and much more cost-effectively than we could before. And that's the language side of it.
What does a GenAI-fueled future look like?
David: I read one person's theory that we'll see variations of generative AI that are specific to applications. So maybe there's a healthcare generative AI, maybe there's an energy one. Do you think that it'll get down to a company level? IBM had Watson, maybe still has Watson. Would there be an ICF one that we name, and our competitors have the same thing? Will it be at a company level, is what I'm trying to get at, or will it be more at a sector level?
Nick: It'll be an ecosystem. We're already seeing, in these early days, custom applications at those levels. We have them internally at the enterprise level, but also at the project level and at the market level. Where there's data, and where there are applications where a tuned approach would be better than a general approach, we'll see those specialized solutions thrive.
I'm reminded that people who predict the future often get it wrong. In the early days of computers, who could ever imagine wanting a computer on your desk, back when computers took up an entire room? And now computers are everywhere, probably in places many of us don't even suspect, in part because cost came down, but a lot of it was specialization. And with the combined effect of cycles of iterative improvements, I see no reason to think it's not absolutely the same sort of recipe we'll see here, where we'll have many different types of agents working on many different types of data sets.
Some of those need to stay private, secure, and focused, and some may be more generalized. We'll be seeing a proliferation and explosion of different applications for the reasons I talked about: the combination of capability, ease of use, and affordability. And we are still in very early days.
It's very intimidating to think about that, and I've been humbled when I've been asked to predict the future, but our strategy is to be humble, to explore, and to safely validate the value to help ourselves understand this as we go. The pace of change can be too much if you're not surfing that wave as it's rising. And that's really the only approach I think we can take right now, and that includes knowing where it's not yet safe to use. So we're not in a hurry to create things quickly that don't have humans in the loop.
I think we've got good reasons for that. It's a matter of time before we invite these tools into other parts of our lives in different ways, but there's a lot of good they're doing already, internal to our own organization right now. So I'll share one more case study. We are a significant organization with resources, but we have constraints. Legal resources for contract review are a pain point for us. We want to give high-quality review. We want to make sure that data is protected and secure. One of the ways we thought we could make that even better was by adding an extra layer of review, again by AI and trained by experts, to ask, interrogate, and then surface things for us.
We still have humans playing that role, but it helps us catch what we might otherwise not be able to spend as much time with. So every contract that comes through is now being treated in that way. And there are lots of other examples of that: one already in place, and more to come. So wherever there's a painful, high-value effort where we can maintain some level of quality, it's ready today, I think, for examination.
Joan: Nick, I love your viewpoint on the ecosystem, and I think initiating conversations with stakeholders in that ecosystem is such a great start. You talked about the evolution of this, and it seems it starts with those conversations: identifying those pain points and figuring out a way to maybe shortcut the system to get some sort of resolution. It seems like a really good, reasonable approach.
Embrace it: GenAI is accessible and workable for everyone
Nick: I hope this conversation is helpful in starting other conversations. I think that's right, Joan. We've had a lot of valuable early conversations that helped demystify, that take away some of the fear. A lot of people don't know just how workable this space is. They think you have to be a research lab or a data scientist. That's not the case. But they also might think that the future when it could be relevant to them is far away. So we're trying to have these conversations, help people build awareness, and also surface those concerns around safety. What are those issues? And then step into them together to look at what tests we can perform to explore in these early days, before the complexity makes it harder to do.
Right now, as sophisticated as the underlying AI is, it's really relatively basic when we try to apply it and use it. So we shouldn't feel intimidated; we should have those conversations, and we should raise those concerns now, because there are things we can do about them and there is progress we should be making and using to inform our future efforts.
David: I love your approach of starting with the conversation, right? Understand where the pain points are, and then start to explore how generative AI might play a role in solving some of them. It doesn't have to solve all of it, but even if it can add some value, it's learning, right? As you said, stepping stones, it's taking you in the right direction.
Joan: Well said.
Nick: Exactly. Yeah. And learning by doing; some people might be able to read a book and fully understand physics, but a lot of us, as toddlers, dropped blocks to learn about gravity. That type of incremental learning as organizations, as industries, as markets—I would encourage all of us to embrace it, perhaps with some trepidation. What does this mean for us? Early on in February, it became most of my day job to wrestle with the question of the implications of this. And it wasn't abundantly clear that we could necessarily see all of those. But what is clear is that this is here; this proverbial genie is out of the bottle. And in many ways, I think that's a wonderful thing. I think it's a call for all of us to understand the responsibility that comes both in the form of action and of inaction.
And I think when a new capability comes along, it's incumbent upon us to talk to others. What are you doing with this? How are you looking at this? What about this concern? A lot of that is the right type of conversation to have. This stuff is moving quickly, but that's not a reason not to act. Going back to CFLs for a moment, David: back in my earlier life, I remember a lot of people didn't want to replace their incandescents because they had heard about these LEDs that were coming along in a little while. "They're too expensive now, but I don't want to replace my incandescents now because I'm just going to replace my CFLs once LEDs come around."
And the math said, well, that's foolish, because the CFLs will pay for themselves ten times over. And at that time, CFLs had good enough color quality, and they were very affordable. So working through that hesitation to act because "it's only going to get better" is one of those myths we want to work through, in the same way we have in the energy industry for a while. And in many ways, if we do the design right, we can avoid some of the missteps of a light bulb that was too big, that lampshades didn't fit on, that took five minutes to warm up, and that made your clothes look funny.
Joan: Oh, we're so fortunate to have you out front on this, Nick. You've got all the years of experience and challenges behind you. If there were one thing you could do, though, to change the future, to change the industry or GenAI and its impact, no limits, what would you do?
Nick: That is a huge question. Again, I'm getting older, right? I've been doing this for two decades now, and I've had the opportunity to get to know a lot of great people in many different parts of the industry. And the one thing that's occurring to me now, when I think about the challenges we're facing at this moment—all the different types of challenges—is how hard those are, and how much harder they get with some of the infighting, the tensions, and the mistrust; some of it's earned. We talked about the lessons of CFLs. But really what I would change is the extent to which we avoid getting in the same room, the extent to which we avoid collective problem-solving efforts because we don't attribute good intentions to different stakeholders, or how we navigate that as individuals.
I think what I would change is to try to get us on the same team and recognize there's a lot of significant upside we can find by working together collectively. And now we've got some bigger tools that might help us look at it differently. Innovation often comes when one constraint is relaxed. And in this category we've been talking about, that relaxation is significant, and we don't yet see all the different ways it applies. So collectively looking at some of our common problems in new ways, cutting through the things that get in the way of that, is what I would change.
Joan: It's real, Nick. It's here.
David: That's awesome. Nick, thank you so much for taking the time to talk with us. This has been a fascinating conversation, and I can't wait to see how we apply this to new problems. So I'm looking forward to continuing to work with you.
Nick: Thank you very much. I've really enjoyed it, and I'm sure we'll have a number of opportunities in the near future to keep this going.
Joan: Agree, absolutely. More discussions to come. Nick, thanks so much.
David: If you've enjoyed this conversation, we'd sure appreciate you liking, sharing, and even subscribing to our podcast.
Joan: And thanks to you all for listening in to this episode. And here's to our next Energy in 30.