
Abstract: In the past five years, AI-for-good initiatives have emerged to “create a better world through AI” or to “create intelligent, ethical and scalable technologies for a better, fairer world.” AI for good has it all: an innovative and impressive way to “do good” and to shift the paradigm of for-purpose initiatives. Observing this trend, I have a few comments:
There are few strong cases of nonprofits developing AI themselves. The typical application of the technology is a collaboration with an existing AI initiative, in which the third party offers to apply its technology to the charity's work. Among the known examples:
The Children's Society used Microsoft Translator, a service developed by Microsoft to translate text and speech in real time, to provide another way of interacting with their non-English-speaking beneficiaries. The British charity was able to help them access their rights and entitlements in the UK. (link) Microsoft has developed many other partnerships with nonprofits using its suite of frontier technologies, initially developed for for-profit purposes and now applied to the social sector.
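To make this kind of integration concrete, here is a minimal sketch of calling Microsoft Translator's documented v3 REST endpoint from Python. It is illustrative only, not The Children's Society's actual integration; the subscription key, region, and sample text are placeholders.

```python
# Minimal sketch: translating a string with the Microsoft Translator v3 REST API.
# The key, region and text below are placeholders, not real credentials.
import requests

ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"
SUBSCRIPTION_KEY = "YOUR_KEY_HERE"   # placeholder: issued by an Azure Translator resource
REGION = "YOUR_REGION_HERE"          # placeholder, e.g. "westeurope"

def translate(text: str, to_lang: str) -> str:
    """Translate a single string into the target language."""
    response = requests.post(
        ENDPOINT,
        params={"api-version": "3.0", "to": to_lang},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Ocp-Apim-Subscription-Region": REGION,
            "Content-Type": "application/json",
        },
        json=[{"Text": text}],
    )
    response.raise_for_status()
    # The API returns one result per input item, each with a list of translations.
    return response.json()[0]["translations"][0]["text"]

print(translate("Where can I find housing support?", "fa"))  # e.g. into Farsi
```

A caseworker-facing tool would wrap a call like this in a chat or form interface; the point is that the charity consumes a hosted service rather than building translation models itself.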
Another tech actor in this space is Google. Google AI partnered with Fisheries and Oceans Canada (DFO) "to apply machine learning to protect killer whales in the Salish Sea." (link) DFO provided 1,800 hours of underwater audio and 68,000 labels identifying the origin of each sound, which Google used to train a model to spot the presence of a killer whale.
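The shape of that task is supervised audio classification: labelled hydrophone clips are turned into spectrogram features and a classifier learns to flag orca calls. The sketch below illustrates that pipeline on synthetic data; it is not Google's actual model, and the sampling rate, features, and classifier are all stand-in assumptions.

```python
# Illustrative sketch of supervised audio classification in the spirit of the
# DFO/Google project: labelled clips -> spectrogram features -> binary classifier.
# NOT Google's pipeline; the data here is synthetic and the model is a stand-in.
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

SAMPLE_RATE = 4000   # Hz; hypothetical hydrophone sampling rate
CLIP_SECONDS = 2     # each labelled clip is a short audio window

def synthetic_clip(is_orca: bool, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for a labelled hydrophone clip (real data would be audio files)."""
    t = np.linspace(0, CLIP_SECONDS, SAMPLE_RATE * CLIP_SECONDS, endpoint=False)
    noise = rng.normal(0.0, 1.0, t.shape)              # ocean background noise
    if is_orca:
        # crude proxy for an orca call: a tone sweeping upward in frequency
        sweep = np.sin(2 * np.pi * (300 + 100 * t) * t)
        return noise + 2.0 * sweep
    return noise

def features(clip: np.ndarray) -> np.ndarray:
    """Log-spectrogram averaged over time -> fixed-length feature vector."""
    _, _, sxx = spectrogram(clip, fs=SAMPLE_RATE, nperseg=256)
    return np.log1p(sxx).mean(axis=1)

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=400)                  # 1 = orca call present
X = np.array([features(synthetic_clip(bool(y), rng)) for y in labels])

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

In the real project, DFO's 68,000 labels play the role of the synthetic labels here, and a deep model over spectrograms would replace the logistic regression.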
Other examples include applications from AWS, IBM, Facebook, and smaller AI companies.
Most of the known successful cases of AI in the sector are the result of collaboration with an external tech company rather than internal development. The social sector is dependent on larger tech companies being willing partners. While these partnerships have shown impressive results, there are two problems with this situation:
The design of the algorithms: the complexity of the choices underlying an AI implementation does not always allow the organisation's employees to be at the heart of the design of these technologies. There is complexity not only in the technology itself but also within the nonprofit, and that domain knowledge may not be fully leveraged if the AI is to answer the nonprofit's needs. Because of the potential impact these AI initiatives can have on beneficiaries, it is important to build strong knowledge bridges between both parties. So far, there has been little evidence of such investment despite the surge in AI-for-good applications.
The actual impact: as a consequence of the above, there is little evidence at the moment of the actual impact of these AI programs. The examples showcased have focused on activities rather than outcomes. We don't know whether these AI initiatives had any impact at all or, more dangerously, whether they could have a negative effect on the overall work of the charities. Social impact measurement is a field in its own right and, while it accompanies most social initiatives, it hasn't followed AI-for-good applications.
After the session you will have a better understanding of the pitfalls and opportunities of practical applications of AI in the social impact world. I will go through two or three of the case studies, explain why each worked (or didn't), and show how to better design AI projects to create the desired impact.
Bio: Ethel Karskens is the founder and president of Civita, a not-for-profit that "releases the power of data to everyone". She has worked in the corporate, tech, and not-for-profit worlds as a data analyst. Today, she helps restore data sovereignty to Indigenous communities through her work with Blak Impact and spreads data equity with Civita.

Ethel Karskens
CEO, Data Lead | Civita, Blak Impact
