Share it and Latitudes, in partnership with Data for Good and Bayes Impact, are developing a program to build familiarity with generative AI and implement it concretely in the service of the general interest.
We interviewed Théo Alves Da Costa, Head of AI for Sustainability & Climate at Ekimetrics and co-president of Data for Good, about generative AI: its definition, its potential, and its link with the common good.
AI has existed for a long time: artificial neural networks (one of the earliest AI approaches) were first proposed in 1943, and the discipline really took off in the 1950s. Arthur Samuel, an AI pioneer at IBM, was already defining machine learning as "giving computers the ability to learn from data rather than being explicitly programmed."
After several phases, and with the explosion of available data and computing power, the 2010s saw a new wave of artificial intelligence take off in research, industry, and consumer applications. Social media algorithms, Google search, and even banking or marketing targeting algorithms are already full of artificial intelligence working on structured data (spreadsheets) or unstructured data (images, text, sound, etc.).
Around this time, the first convincing generative AI algorithms appeared, with GANs (generative adversarial networks) and variational autoencoders, which already made it possible to generate realistic images and began to raise societal questions about deepfakes.
In June 2017, the Transformer architecture was introduced in the Google scientific paper "Attention Is All You Need". A few months later, this innovative algorithm for semantic understanding (text analysis) was put into production on Google search.
Five years later, in November 2022, that same Transformer became the T in ChatGPT, which upon release became the most widely used digital tool in history, reaching 1 million users in 5 days and more than 200 million daily users a few months later.
Generative AI is therefore a subfamily of the broad field of AI that aims to produce content based on examples previously shown to the algorithm. The field is not new, but it has exploded in popularity and accessibility, accelerating very strongly over the last two years.
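To make the definition above concrete, here is a minimal, purely illustrative sketch of the generative principle (learn from examples, then produce new content): a toy character-level Markov chain. This is a hypothetical teaching example only; real generative models such as those behind ChatGPT are vastly larger neural networks, not Markov chains.

```python
import random
from collections import defaultdict

def train(examples, order=2):
    """Count which character follows each context of `order` characters
    in the example texts (the 'examples shown to the algorithm')."""
    model = defaultdict(list)
    for text in examples:
        padded = "^" * order + text + "$"  # ^ = start marker, $ = end marker
        for i in range(len(padded) - order):
            context = padded[i:i + order]
            model[context].append(padded[i + order])
    return model

def generate(model, order=2, max_len=20):
    """Produce new content by sampling one character at a time,
    conditioned on the last `order` characters generated so far."""
    context = "^" * order
    out = []
    while len(out) < max_len:
        nxt = random.choice(model[context])
        if nxt == "$":  # end marker reached
            break
        out.append(nxt)
        context = context[1:] + nxt
    return "".join(out)

# Train on a few example strings, then generate a new one.
model = train(["data", "dato", "dado"])
print(generate(model))
```

With these three training strings, the output recombines the patterns seen in the examples, which is, in miniature, what "producing content based on previously shown examples" means.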
Artificial intelligence is now integrated into many aspects of our daily lives, manifesting itself in various ways that are sometimes rather discreet.
- In our digital interactions, of course: from social media feeds to the AI algorithms on Netflix that personalize movie recommendations and visuals and influence production planning. On platforms like Amazon, AI adjusts product suggestions and prices based on users' buying behavior. Our GPS systems calculate routes and estimate travel times using AI. Our smartphone keyboards predict the next word, the spam filters in our inboxes sort out unwanted messages, and search engines optimize results according to our queries.
- Without us realizing it, AI is also used beyond our digital lives across society: for example, the CAF (the French family benefits agency) uses AI to assess entitlements to benefits, while banks use it to detect fraud attempts. In medicine, artificial intelligence helps detect diseases through the analysis of medical images and patient data.
Something special is happening right now with the explosion of generative AI. It is not true that AI will inevitably be everywhere, as if automatically. If AI is taking root in public debate as much as in our tools, it is the result of political and economic choices and interests that are pushing it very strongly.
First you have to ask yourself whether associations are "concerned", and only then whether they should use it. This acceleration of AI is influencing society and political and economic orientations, which of course affects the context and the mission of associative actors and of the general interest. For example, an association working on inclusion in the suburbs may be affected by the acceleration of the imagery conveyed by Midjourney. A non-profit working on gender equality must take an interest in discriminatory algorithms, while a structure working on education cannot ignore the time children spend on screens and algorithms. To understand the world that is coming (and is almost imposed on us), you must therefore be concerned by AI, take an interest in it, and understand what is behind it, its impacts, and its potential.
Because there is indeed potential, in the neutral sense of the term: these algorithms are powerful and they work, which is why they are interesting for many industrial and everyday applications. Should associations, which have limited resources and time constraints, therefore use AI in their operations?
It is not easy, and sometimes you have to have the humility not to know how to answer. There is a world of difference between translating a text with DeepL, doing a Google search, using Canva's automatic editing, writing a funding request with ChatGPT, and generating a visual with Midjourney. In reality, these are several distinct AI systems, not a single "unique" AI, each with its own potential and its numerous impacts (copyright, bias, over-technologization, creativity, energy consumption, etc.).
At the moment I have three intuitions:
- Putting (generative) AI everywhere will never be a good idea, and doing so should never be automatic.
- It is essential to learn and train on this subject in order to at least understand it, with lucidity and without being afraid of it either.
- We must feel concerned, question ourselves and participate in shaping this transformation in the name of the general interest and among citizens.
In this sense, a support program for actors in the general interest seems key to me in order to provide them with all the resources they need to adopt AI.

To go further: Data for Good's one-hour webinar on the main challenges of generative AI.