AI Enhances Reliability of Wikipedia, New Study Suggests

Wikipedia, despite being one of the most widely consulted reference sites on the internet, has long faced concerns about its reliability. Its open editing system allows anyone to contribute, which raises the risk of inaccurate or poorly sourced information. A new study suggests that artificial intelligence (AI) could help address this problem by improving the quality of the site’s references.

London-based AI company Samaya AI has been working to strengthen Wikipedia’s reference system. Its technology scrutinizes cited sources, distinguishing reliable ones from questionable ones, and then suggests better references of its own. Fabio Petroni, co-founder of Samaya AI, emphasizes that AI can help locate superior citations thanks to its command of language and its skill at searching the web.

Samaya AI’s model, known as SIDE, was trained on a vast dataset of Wikipedia entries. It was then applied to articles it had not seen before, analyzing their sources and proposing alternative references, and the results were judged by Wikipedia users.
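The study does not publish SIDE’s training code, but the general recipe it describes, learning from citations that editors have already vetted, can be illustrated. The following is a minimal Python sketch under that assumption: every name here (CitationExample, build_training_pairs) is hypothetical, and a toy random negative stands in for the large-scale neural retrieval and ranking SIDE actually uses.

```python
# Minimal sketch of how a verification model can learn from existing
# Wikipedia citations. All names here (CitationExample,
# build_training_pairs) are hypothetical; SIDE itself trains neural
# retrievers and rankers at a much larger scale.
from dataclasses import dataclass
import random

@dataclass
class CitationExample:
    claim: str    # the Wikipedia sentence that carries the citation
    passage: str  # text drawn from the cited source
    label: int    # 1 = passage supports the claim, 0 = it does not

def build_training_pairs(citations, corpus):
    """Pair each claim with its cited passage (positive example) and
    with a random unrelated passage (negative example), a standard
    contrastive setup for training a support classifier."""
    pairs = []
    for claim, cited_passage in citations:
        pairs.append(CitationExample(claim, cited_passage, 1))
        pairs.append(CitationExample(claim, random.choice(corpus), 0))
    return pairs

# Toy data: one vetted citation plus an unrelated passage pool.
citations = [("Paris is the capital of France.",
              "Paris has served as the capital of France since 987 AD.")]
corpus = ["The Nile is the longest river in Africa."]

for ex in build_training_pairs(citations, corpus):
    print(ex.label, "|", ex.claim, "->", ex.passage)
```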

According to the study, when SIDE classified a Wikipedia source as unverifiable and offered its own recommendation, users preferred SIDE’s suggestion roughly 70% of the time. Notably, in around half of the cases, SIDE’s top suggestion was the very source already cited on Wikipedia.

“We demonstrate that existing technologies have reached a stage where they can effectively and pragmatically support Wikipedia users in verifying claims,” said Petroni. He added that future research will look at verifying Wikipedia references beyond online text, including images, videos, and printed publications.

The study underscores verifiability as one of Wikipedia’s core content policies and argues that better tools are needed to help human editors maintain the quality of references on the platform.

SIDE identifies citations on Wikipedia that may not fully support their claims and suggests more reliable alternatives from the web. Because the model was trained on existing Wikipedia citations, it effectively leverages the accumulated judgment of the many editors who chose them.
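As an illustration only, the sketch below mimics that flag-and-suggest behavior, with a crude word-overlap score standing in for SIDE’s trained verification models. The function names (support_score, suggest_citation) and the threshold value are invented for this example and do not come from the study.

```python
# Illustrative flag-and-suggest step. The word-overlap scorer below is
# a crude stand-in for SIDE's trained verification models, and every
# name (support_score, suggest_citation, threshold) is invented here.
import string

def _words(text: str) -> set:
    """Lowercase the text and strip punctuation from each token."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def support_score(claim: str, passage: str) -> float:
    """Toy proxy for a verification model: the fraction of words in
    the claim that also appear in the candidate passage."""
    claim_words = _words(claim)
    return len(claim_words & _words(passage)) / max(len(claim_words), 1)

def suggest_citation(claim, current_source, web_candidates, threshold=0.5):
    """Flag the current source if it scores below the threshold and an
    alternative from the web scores higher; otherwise keep it."""
    current = support_score(claim, current_source)
    best = max(web_candidates, key=lambda p: support_score(claim, p))
    if current < threshold and support_score(claim, best) > current:
        return "flagged", best
    return "kept", current_source

claim = "The Amazon rainforest produces about 20 percent of Earth's oxygen."
print(suggest_citation(
    claim,
    "A travel blog post about hiking trails in Peru.",
    ["An overview of oxygen production by the Amazon rainforest on Earth."]))
```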

Through crowdsourcing, the study found that for the 10% of citations the model judged most likely to be unverifiable, human users preferred SIDE’s alternatives in 70% of cases. In a demonstration with the English-speaking Wikipedia community, users favored SIDE’s first citation suggestion twice as often as the existing Wikipedia citation for claims the model flagged as potentially unverifiable. This points to the potential of AI systems working alongside humans to strengthen Wikipedia’s credibility.

The study’s findings hold promise for improving Wikipedia’s reliability and helping the platform remain a trusted source of factual information.
