Artificial intelligence (AI) is increasingly present in many fields, from social and environmental work to knowledge sharing. Wikimedia France, a key player in the dissemination of free and collaborative knowledge, is a prime example of this growth. Through high-impact initiatives, such as a tool to combat misinformation and an audit of LLMs to assess the reliability and neutrality of AI models, Wikimedia France stands at the forefront of ethical AI, committed to the transparency and reliability of information.
Founded in 2004, Wikimedia France supports iconic projects like Wikipedia and Wiktionary, which offer free and open access to knowledge. The non-profit has 13 employees and mobilizes more than 200 regular volunteers within an active community of several thousand contributors per month. Its actions include partnerships with cultural institutions and schools, as well as advocacy initiatives to protect free and open content.
Wikimedia isn't just maintaining knowledge; it's also exploring new technologies like AI to support its missions. Faced with today's challenges of misinformation and content bias, Wikimedia France is committed to innovative projects that promote a more reliable and transparent Internet.
Detecting misinformation: a major European project
The fight against misinformation is one of the priorities of the Wikimedia movement. To this end, the organization is partnering with a project funded by the European Commission to develop an AI model capable of detecting misinformation attempts on social networks and Wikipedia. The project brings together 18 universities, OPSCI, a company specializing in AI, the media outlet Euractiv, and other actors. Covering eight languages, the initiative aims to provide an automated tool that helps moderators identify and manage questionable information effectively.
Carrying out such a project requires substantial investment. The initiative mobilizes three members of the Wikimedia France team, including one part-time employee, as well as committed volunteers, and has an overall budget of 2 million euros. Wikimedia France's role is to build a corpus of examples of disinformation, that is, a set of falsely informative or misleading content that has circulated. The platform's moderators will also be consulted so the tool can be adjusted to their specific needs.
LLM audit: towards ethical AI
The rapid evolution of AI language models, or LLMs, raises crucial ethical questions, especially around bias and the representativeness of information. Wikipedia, which makes up to 25% of the training corpora of certain models such as Copilot, plays a central role in these issues. If Wikipedia articles contain biases, LLMs trained on this data risk amplifying them, potentially reproducing misogynistic, discriminatory, or unreliable content. To guard against these risks, Wikimedia France will collaborate with OPSCI on a project funded by the BPI to audit the biases present in Wikipedia articles, preserving the quality and integrity of information that is crucial to the AI ecosystem.
With its launch scheduled for the beginning of 2025, this audit of LLMs, backed by total funding of 1.15 million euros (including €150,000 for Wikimedia), shows that ambitious initiatives are needed to develop responsible and ethical AI.
Internal challenges: supporting change
The adoption of AI within Wikimedia is eliciting mixed reactions among contributors. Some fear that AI could compromise Wikipedia's reliability by amplifying unsourced content, or even by using contributors' work without credit. Copyright and attribution issues continue to generate discussion in the community, motivating advocacy actions to protect authors' rights and ensure that sources are recognized in AI models.
To address these concerns, Wikimedia insists on the responsible use of AI: it must support the reliability and transparency of information, respect the principles of collaboration, and ensure that content authors are recognized and credited for their work.
A plea for ethical AI
Wikimedia France is actively involved in European discussions to frame the use of AI technologies. In collaboration with experts in digital ethics and international organizations, it campaigns for laws that guarantee responsible practices, such as the attribution of sources and the recognition of human work. These actions are part of a global effort to defend ethical AI.
Perspectives: awareness and innovation
Wikimedia France has joined the IA Coffee program, supported by the National Digital Council (CNUM), which aims to raise public awareness of the impacts of AI by creating accessible, collaborative spaces for exchange and reflection. Thanks to its network of internal experts and its collaboration with partners such as OPSCI, the non-profit is able to carry out such an initiative, which requires financial and human resources as well as technical and educational expertise.
General interest, collaboration, investment: the Wikimedia movement is proof that an AI "for Good" will not be built alone or overnight. Want to support these initiatives? We are always happy to answer your questions about AI for the common good. :)