John Donovan on the Potential of Large Language Models in the Enterprise

Last week, we hosted a webinar diving into the different types and uses of Large Language Models, and we were lucky to be joined by John Donovan: technologist, former CEO of AT&T Communications, and Cresta board member.

Cresta’s co-founder and COO, Tim Shi, laid out the technical pros and cons of public LLMs – like ChatGPT, Bard, and others – as well as domain-specific and fine-tuned models. You can read more in last week’s blog post, Does One Large Language Model Fit All?

During the webinar, Cresta CMO Scott Kolman and John discussed new research from a joint study by Fortune and Deloitte: more than half (55%) of CEOs globally are experimenting with generative AI. An overwhelming 79% of them believe the technology will ultimately increase efficiency, and 52% foresee tremendous opportunities for growth.

This week, we’re digging into John’s insights from his decades of experience as a C-level executive making pivotal evaluations and decisions about new and emerging technologies. 

As John described, the advent of ChatGPT has brought the potential of implementing AI to the forefront of the C-suite and the board. John shared the key questions that these executives are asking when it comes to deploying large language models: 

  • What is our organization specifically trying to accomplish with AI? 
  • How do we make it safe? 
  • What are the risks? 
  • What will the costs be? 
  • What are the consequences of inaccuracy or inaction? 

These may feel like familiar questions being debated in your own organization. So how do you navigate them and arrive at a plan of action that is measurable, sustainable, and well-suited to your company? John advises addressing these three drivers:

  1. Accuracy – How accurate are the models you’re working with? What accuracy threshold must they meet before deployment? 
  2. Efficiency – How much will this cost to fully deploy? To maintain? To scale? How much will it generate in cost savings down the line? 
  3. Explainability – How do you explain results when they aren’t what you anticipated? If the results aren’t consistent with what the humans in your organization are already delivering with the existing tech stack, how do you adjust and pivot accordingly? 

To hear more from John on his C-level perspective, including how he believes companies should approach securing funding for new technologies and how to build innovation into the core budget, download the webinar replay and share it with your team now.
