While AI is taking charge of everything from images to emails, Wikipedia offers a lovely counterpoint: it is embracing artificial intelligence not to replace its editors but to assist them. On Wednesday, the Wikimedia Foundation revealed a three-year AI roadmap, signaling that its community-driven model will remain human-first even in the age of large language models.
Rather than ceding encyclopedic authority to algorithms, Wikipedia will use AI to eliminate technical hurdles and lighten the behind-the-scenes workload carried largely by its army of volunteers. The ultimate goal is to free up more time for decision making and consensus building, which are the essence of Wikipedia.
Automation Where It Is Needed
Rather than replacing editors as writers, AI will be applied to the more mundane and repetitive maintenance work behind the world’s largest open-source knowledge platform. Think less robot writer and more smart assistant: AI-enabled workflows that automate routine moderation tasks, translation aids to support multilingual editors, and intelligent search to improve the discoverability of information.
AI upgrades will also help orient new volunteers, easing their entry into Wikipedia’s policies and editing practices. The aim of these improvements is to lower the technical barriers that deter participation without detracting from the ideals of open access and collaborative editing that define the platform.
Human-Centric Approach
Chris Albon, Director of Machine Learning at the Wikimedia Foundation, and Leila Zia, Director and Head of Research at Wikimedia, write:
“We believe that our future work with AI will be successful not only because of what we do, but how we do it.”
They emphasized a human-centered approach that respects transparency, open-source development, privacy, and human rights.
AI models are notorious for hallucinating facts, so Wikipedia’s approach appears not only responsible but genuinely imperative. The organization intends to further strengthen the integrity of its content by focusing human effort on high-level decisions while letting machines handle the routine work.
The blog post adds:
“Our efforts will use our long-held values, principles, and policies (like privacy and human rights) as a compass: we will take a human-centered approach and will prioritize human agency; we will prioritize using open-source or open-weight AI; we will prioritize transparency; and we will take a nuanced approach to multilinguality, a fundamental part of Wikipedia.”
Preservation of Trust
Wikipedia’s AI strategy stands in stark contrast to that of the many tech platforms rushing toward AI-driven content production, positioning the encyclopedia as a source of trust and accuracy. It is a reminder that even in a high-tech future, some things are better done by humans. At a time when trust in information is both scarce and valuable, betting on human beings, aided by AI, may prove to be the most radical move of all.