How Fundraisers Can Use Generative AI Ethically and Responsibly
By Allison Fine and Beth Kanter | December 07, 2023
Fundraisers are turning to generative artificial intelligence (AI) tools — like ChatGPT and others — to help write donor communications and craft grant proposals. Users report significant time savings and improvements in content quality. From writing donor thank-you notes, newsletters, press releases and donor personas to developing fundraising campaign images and content, users are raising more money in less time. One fundraiser recently shared with us that these tools cut the time spent on such tasks in half, while a grant writer attributed the funding of three proposals to AI-enhanced editing.
Other types of AI are taking hold beyond the development office. Chatbots — or conversational interfaces using large language models (LLMs) — like ChatGPT can answer questions from website users and donors or provide internal information to nonprofit staff on demand. AI-powered screening tools are assessing potential job applicants for fundraising positions. There has also been an explosion of AI-driven database tools to help with donor prospecting and managing donor communication. Fully integrating AI into the nonprofit workplace will likely take years, as a recent McKinsey study indicates, but the process is already underway.
Used carefully and skillfully, AI has the potential to save fundraisers thousands of hours of rote, time-consuming work a year. This frees users up to do the kinds of things only people can do, like building better relationships with donors.
Approaches like “co-boting” — or using AI tools to augment rather than replace human jobs — can bring out the best in bots and fundraisers while yielding the dividend of time. This can lead to improved donor retention rates and increased dollars raised with less stress.
Risks and Ethical Complexities of Integrating AI
Generative AI raises a host of ethical questions and complexities. Public LLMs like ChatGPT are trained on content from the internet, including sites like Wikipedia, Twitter and Reddit. These data sets often overrepresent white supremacist, misogynistic and ageist views — meaning generative AI can amplify biases and harms. ChatGPT can also provide false information, or what researchers call “hallucinations,” responding to user prompts or questions with very persuasive but inaccurate text.
There's also the risk of inadvertently exposing sensitive donor information when using public AI models, as well as ethical storytelling, transparency and intellectual property (IP) issues. The Toronto-based nonprofit Furniture Bank, for example, switched to AI-generated but realistic-looking images meant to evoke sympathy in its 2022 holiday campaign, raising donations but also questions about ethical storytelling and transparency.
Applying a Responsible Approach to AI
We urge fundraisers and their organizations to create a written ethical and responsible use policy based on the steps and discussion of ethical quagmires below. Such a policy will provide some guidance on ethical decision-making and a framework for ongoing exploration, experimentation and growth.
Addressing anxiety and fears: Perhaps the greatest fear about AI is that it will take away jobs. However, the likely reality, according to the World Economic Forum's Future of Jobs report, is that jobs will change over time rather than disappear immediately and completely. Fundraising team leaders can mitigate staff anxiety around changes to jobs and work by having open and honest conversations about the use of the technology and the steps the organization is taking to ensure that its use is values-aligned and human-centered.
Leaning into learning: Resist the temptation to assign your office intern the task of using generative AI. Senior fundraising team members need to understand how AI tools function and how they are being integrated into their development office's work. This is very different from how we approached social media, where we rushed in, experimented and often failed, and no harm was done; the stakes with AI are higher.
Staying human-centered: Before adopting AI — and as part of your ethical and responsible use statement — create a written pledge explaining that these tools will only be used in human-centered ways. It should state that people will always oversee the technology and make final decisions on its use, and that the use of AI will not create or exacerbate biases. As Amy Sample Ward, CEO of the Nonprofit Technology Enterprise Network (NTEN), shares, “I think one of the simplest and most important guidelines is that tools should not make decisions. That's been a core part of NTEN's internal approach.”
Mitigating risks and biases: The cybersecurity field uses a process called threat modeling to envision what could go wrong before it happens. Nonprofits should apply a similar risk-based planning approach, including discussing and identifying worst-case scenarios, to ensure AI use is safe and unbiased as a use case moves from prototype to pilot to full implementation.
The nonprofit Best Friends Animal Society, for example, wanted to use a chatbot to help potential adopters select cats during its Black Cat Adoption Week campaign. During testing, however, the team discovered that the chatbot could easily be prompted to repeat racial slurs or inappropriate sexual innuendo. Even within a controlled pilot, it was almost impossible to train the bot on what not to say. The programmers' own favorite cats also skewed its recommendations, introducing bias, so the team ultimately decided to pursue other strategies.
Using data safely: The nonprofit sector can and should raise the bar on the safe and ethical use of data. Leaders must be able to answer yes to questions like: Are we giving our donors and other supporters the opportunity to be forgotten, as the European Union's General Data Protection Regulation requires? When it comes to generative AI, your ethical use policy should include guidance about never sharing donor information with public models; a simple scrubbing step like the one sketched below can help staff follow that rule.
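To make that guidance concrete, here is a minimal sketch in Python of the kind of scrubbing step a policy might require before any text is pasted into a public model. The patterns, placeholder labels and `redact` helper are hypothetical and deliberately simple; a real workflow would pair this with human review and a vetted redaction library.

```python
import re

# Minimal, hypothetical redaction helper: scrub obvious donor identifiers
# from text before it is pasted into a public generative AI tool.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str, donor_names: list[str]) -> str:
    """Replace emails, phone numbers and known donor names with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    for name in donor_names:
        text = text.replace(name, "[DONOR]")
    return text

draft = "Thank Jane Smith (jane@example.org, 410-555-0137) for her $5,000 gift."
print(redact(draft, ["Jane Smith"]))
# Prints: Thank [DONOR] ([EMAIL], [PHONE]) for her $5,000 gift.
```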
Identifying the right use cases: Fundraising teams should begin by using AI to solve exquisite pain points: the bottlenecks and problems keeping them from cultivating donors and getting good results. For example, after the National Aquarium in Baltimore recognized it was spending enormous amounts of time on prospect research, development staff turned to an AI-driven donor database that fully integrated wealth screening data — in part by automating the transfer of data and research from different sources that staff had been copying and pasting by hand.
Generative AI can help make any part of the writing process for fundraising materials less painful, whether helping with the first draft or editing. Other pain points might include frustration and wasted time searching for documents or fielding the same questions from donors. Tasks that are both time-consuming and extremely repetitive are often good candidates for AI automation or augmentation.
Piloting the use of AI: When getting started with generative AI, fundraisers should begin with small, time-limited experiments on specific writing tasks to learn how to write good prompts. At first this will be an iterative process, with staff evaluating the responses the AI generates. As fundraisers incorporate these tools into their workflows, they should also evaluate how the tools are affecting their jobs, their relationships with donors and their time. In addition to measuring these impacts, organizations need to carefully and thoroughly check results for accuracy and correct any errors before using them; a simple shared log, sketched below, can make that review habit concrete.
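As one sketch of what that record keeping might look like, the short Python example below appends each reviewed trial (task, prompt, reviewer and verdict) to a shared CSV file. The file name and field names are illustrative assumptions, not a standard; a spreadsheet the team already uses would work just as well.

```python
import csv
from datetime import date

# Hypothetical pilot log: record each task, the prompt used and a staff
# reviewer's verdict so the team can track accuracy and prompt quality.
FIELDS = ["date", "task", "prompt", "reviewer", "accurate", "notes"]

def log_trial(path: str, row: dict) -> None:
    """Append one reviewed AI trial to the shared pilot log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new file: write the header row first
            writer.writeheader()
        writer.writerow(row)

log_trial("ai_pilot_log.csv", {
    "date": date.today().isoformat(),
    "task": "donor thank-you note",
    "prompt": "Write a warm 100-word thank-you for a first-time $250 gift.",
    "reviewer": "development associate",
    "accurate": True,
    "notes": "Good tone; fixed one incorrect program detail before sending.",
})
```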
Practicing, learning and job redesign: Organizations should create a shared generative AI playbook of best practices, with guidance on things like how to craft prompts, how to train the tools on your organization's writing style and when to disclose AI-generated images; one way to seed it is sketched below. As AI frees up staff time and allows them to take on other fundraising functions, job descriptions will need updating. Some staff will need upskilling to effectively oversee tools such as ChatGPT and reap their benefits, or to take on other types of tasks with the dividend of freed-up time.
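One lightweight way to seed such a playbook is a shared prompt template that encodes the organization's voice once, so every staff member starts from the same instructions. In this Python sketch, the style rules and the `build_prompt` helper are invented examples, not a recommended standard.

```python
# Hypothetical house-style prompt template for a shared AI playbook.
STYLE_GUIDE = (
    "Write in a warm, direct voice. Use second person ('you'). "
    "Keep sentences under 20 words. Never overstate impact."
)

def build_prompt(task: str, details: str) -> str:
    """Combine the house style guide with a specific writing task."""
    return f"{STYLE_GUIDE}\n\nTask: {task}\n\nDetails: {details}"

print(build_prompt(
    "Draft a 100-word thank-you note.",
    "First-time donor, $250 gift to the food pantry program.",
))
```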
Preparing for Ethical Use of AI
AI is going to find its way into every aspect of fundraising over the next few years and become an indispensable tool for fundraisers to raise more money in less time. Fundraisers should prepare by drafting robust ethical and responsible use policies to ensure their organizations stay human-centered. Responsible, thoughtful engagement will not only reap productivity benefits and improve the work experience for nonprofit staff, but also help nonprofits attract and retain more donors.
Allison Fine
President, Every.org
Allison Fine is a trailblazing force in the realm of technology for social good. Her expertise and captivating speaking style have made her a sought-after keynote speaker at conferences around the world. Her engaging presentations inspire audiences to embrace technology as a tool for positive change and provide actionable strategies for harnessing its potential.
She is the co-author, with Beth Kanter, of The Smart Nonprofit, a book about the use of AI by nonprofit organizations.
Fine currently serves as the president of Every.org, a nonprofit that enables any nonprofit organization to accept any kind of donation payment method online without charging platform or direct transaction fees.
Beth Kanter
Author, Facilitator and Trainer
Beth Kanter is an internationally recognized thought leader and trainer in digital transformation and well-being in the nonprofit workplace. She is the co-author of the award-winning The Happy, Healthy Nonprofit: Strategies for Impact without Burnout and co-author with Allison Fine of The Smart Nonprofit.
Named one of the most influential women in technology by Fast Company and recipient of the NTEN Lifetime Achievement Award, she has over three decades of experience in designing and delivering training programs for nonprofits and foundations. As a sought-after keynote speaker and workshop leader, she has presented to thousands of nonprofits at conferences around the world. Learn more about Kanter at www.bethkanter.org.