In part two of this series on adopting new technologies, we dive into a realistic scenario showing how cross-functional teams can effectively vet tools such as LLMs and generative AI. Following a university director’s journey to investigate AI’s benefits and risks, the article highlights the necessity of a formal team charter and executive buy-in. By leveraging diverse expertise—from engineering to industrial process modeling—the team identifies critical risks, such as "hallucinations," that could damage donor relations. Ultimately, the story demonstrates how disciplined, collaborative research leads to strategic recommendations that prioritize institutional reputation over media hype.
Not-for-profits are increasingly interested in how Large Language Models (LLMs) and generative AI can help them achieve more with finite resources. This Connections article explores the use of cross-functional "improvement teams" to vet these novel technologies. By leveraging a formal charter, senior-leader champions, and subject matter experts, organizations can move past the hype to find objective, data-driven solutions for their specific missions.
Struggling to explain prospect research to a new hire or wondering where to start with a donor record? Read the latest from Connections as Laurie Holmes previews Apra Fundamentals: Prospect Research, taking place April 7–9. This virtual program demystifies the research lifecycle—from capacity ratings to ethics—with practical, jargon-free instruction. Don’t miss the interactive courtroom-style debate on the ethics of AI! Register now to build your foundation.