A new study found that using artificial intelligence to police entries in open-source encyclopedias like Wikipedia could result in a more reliable source of information for users.
The study, published in Nature Machine Intelligence earlier this month, found that AI tools can help police some of the inaccurate or incomplete references that often plague online encyclopedias like Wikipedia, which could lead to improved quality and reliability, according to a report in Scientific American.
Researchers for the study developed an AI system called “SIDE,” which analyzed Wikipedia references to find missing or broken links and to check whether the references actually supported the claims in an article. For references that didn’t meet the standard, the system was able to suggest better alternative references that would be more useful.
Christopher Alexander, the chief analytics officer of Pioneer Development Group, told Fox News Digital there could be some advantages to the use of AI in encyclopedias, including the idea of entirely AI-created resources.
“The principal advantage is removing human bias. An AI could look at multiple interpretations, verify facts and constantly monitor the research or reporting around an entry,” Alexander said. “Another advantage is that the AI works 24 hours a day, seven days a week and will not tire. Humans simply cannot keep up with that sort of productivity.”
But there are also pitfalls, Alexander noted, including proprietary algorithms whose inner workings would be difficult for users to understand.
“The second disadvantage is found in the current state of AI platforms,” Alexander said. “Presently, the AI wants, more than anything else, to make you happy. Accuracy is secondary to being useful. This can lead to AIs providing inaccurate information.”
Putting the study’s system to the test, the researchers had it check Wikipedia’s featured articles and suggest references; in nearly 50% of cases, the system’s top suggestion was a reference already cited by the article. In cases where the system suggested alternatives, the researchers found that 21% of users preferred the AI-generated citations, compared to 10% who preferred the existing human-chosen citations. Another 39% expressed no preference.
Samuel Mangold-Lenett, a staff editor at The Federalist, told Fox News Digital that an encyclopedia run by AI could produce superior results to human-run versions.
“AI is perfect for developing a so-called ‘better’ version of Wikipedia,” Mangold-Lenett said. “This would likely be another predictive text language model similar to ChatGPT, and whenever someone searches a topic, like Wikipedia, a biographical-topical article is generated.”
Mangold-Lenett also noted that an AI-run Wikipedia would allow “for the rapid processing of massive datasets, theoretical utilization of every relevant available source,” something that would result in “ironclad fact-checking” and potentially eliminate “human error and bias.”
“An ‘AI Wikipedia’ would have more information and potentially less narrative bias,” Mangold-Lenett said.
Phil Siegel, the founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), argued that what would be considered a “better” version of Wikipedia could be up for debate, though he noted AI would result in a “more comprehensive” product.
“It would also probably be more grammatically correct and would be better at generating and managing links across entries,” Siegel told Fox News Digital. “It would also generate entries for more obscure information as long as it was quality controlled.”
But Siegel also noted an AI-run encyclopedia would be less timely and would require a “very good prompt management process that could update it quickly with new news.”
“For entries that don’t ever really change, that’s OK. But you wouldn’t want to count on that, for example, for an entry for a currently famous person,” Siegel said.
Ultimately, Siegel argued, AI could help supplement the tasks carried out by humans.
“I would still have humans edit and quality control this information in order to make sure it was complete, up to date and not hallucinating,” Siegel said. “More of a human-artificial intelligence ‘partnership.’”