Artificial Intelligence and Truth

#1
Researchers at institutions and companies are examining how AI technology can improve the reliability of information in numerous domains, from health care to economics and law. This forum is for topics and projects that may be relevant to the Politician Truth Ratings Project and Candle.
 
#2
Knowhere News https://knowherenews.com/

This company is attempting to address the fake news problem "by incorporating machine learning methods into the journalistic process." "Knowhere collects a lot of data in order to write completely unbiased news." After prioritizing current topics, their system aggregates the information to "build a tree of facts — that is, mapping facts and how facts relate to one another. ... Based on that, the system finds the most coherent and salient path to write the most coherent and salient article." Knowhere's system takes headlines from both left- and right-leaning publications (so it doesn't cater to either side politically) and creates a headline that tells the facts as plainly as possible without imparting an opinion.

Their "team of journalists have the unprecedented tools required to identify every source, analyze every bias and form the most accurate and impartial account possible of the events shaping our lives." https://knowherenews.com/about/tech

They run a membership program to find people who want to contribute and to help spread the word. This might be an interesting model for Thwink, or an opportunity for us to connect with them by helping out.

The technology they use to build a tree of facts may help with adding content to the Truth Ratings System (TRS) and Candle. If it can be adapted or replicated to identify claims within an article and connect them, the Thwink team could focus on adding the rules and verifying that the algorithm did its job properly.
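To make that concrete, here is a minimal sketch in Python of what an adapted "tree of facts" could look like when applied to claims: extract the claims from an article, then link the ones that support or contradict each other. Everything here (Claim, link_claims, the related() test) is a hypothetical illustration, not Knowhere's actual technology.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A single checkable statement extracted from an article."""
    text: str
    source_url: str
    supports: list["Claim"] = field(default_factory=list)     # claims this one backs up
    contradicts: list["Claim"] = field(default_factory=list)  # claims this one disputes

def link_claims(claims: list[Claim], related) -> None:
    """Connect pairs of claims using a caller-supplied relatedness test.

    related(a, b) returns "supports", "contradicts", or None. In practice
    that test is the hard NLP step; here it is just a parameter.
    """
    for a in claims:
        for b in claims:
            if a is b:
                continue
            relation = related(a, b)
            if relation == "supports":
                a.supports.append(b)
            elif relation == "contradicts":
                a.contradicts.append(b)
```

The hard part is the related() test itself; that is where the NLP work would live, leaving the Thwink team to write the rules and spot-check the links the algorithm produces.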
 
#4
The Potential of AI regarding truth seeking / knowledge making

Much of what passes for civil society in the developed world is underpinned by oral argument – whether it be by politicians debating on TV, or lawyers arguing before the Supreme Court. If a computer can perform these functions better than a human, and after Project Debater it appears they someday could, then there is every reason to believe they’ll begin muscling out the human competition. So long as there’s a vibrant civil discourse producing well-reasoned articles and supporting statistical data that these AIs are trained upon, then we’re probably better off letting a computer search and summarize those positions for us. They will do so better, and more efficiently, than a human can.
IBM’s Project Debater to Set Stage for New Kind of Civil Society
 
#5
The Risks of AI regarding truth seeking / knowledge making
Whatever can be said for the positions espoused by Project Debater during oral argument, one thing is sure: It wasn't without bias. Much of that bias came from the humans who generated the data on which it was trained. Garbage in, garbage out, as they say in programming circles. If the statistical datasets on which it formed its opinions weren't gathered with care, or if the human generated articles it read contained erroneous logic or other fallacies, then that would be reflected in the sentences it composed. ... Project Debater seemed an extension of IBM's Watson platform – searching over and summarizing thousands of human-generated articles. That's no small feat, and already I believe the repercussions of this limited capability will extend far and wide, potentially toppling our entire method of public discourse.
IBM’s Project Debater to Set Stage for New Kind of Civil Society

Any system - technological, social, or otherwise - can be hacked. A system that cannot determine the validity of claims will undermine society's ability to discern the truth and cripple its ability to solve problems or even to function. Our (apparently innate) tendency to put faith in technology when it appears to work reliably is a significant risk factor, since our ability to determine the validity of information is already sketchy at best. A centralized system such as Candle amplifies this risk if it becomes the primary source of information for a society.
 

Jack Harich

Administrator
Staff member
#6
Tremendous! Thanks Scott.

I was surprised and pleased at how far along Knowhere News is today and how far it's trying to go tomorrow. Their fancy website is short on details, which makes me wonder how much of the AI is actually implemented. Still, it's a very impressive vision. This article, A New AI "Journalist" Is Rewriting the News to Remove Bias, has some impressive summary info:

Here’s how it works. First, the site’s artificial intelligence (AI) chooses a story based on what’s popular on the internet right now. Once it picks a topic, it looks at more than a thousand news sources to gather details. Left-leaning sites, right-leaning sites – the AI looks at them all.

Then, the AI writes its own “impartial” version of the story based on what it finds (sometimes in as little as 60 seconds). This take on the news contains the most basic facts, with the AI striving to remove any potential bias. The AI also takes into account the “trustworthiness” of each source, something Knowhere’s co-founders preemptively determined. This ensures a site with a stellar reputation for accuracy isn’t overshadowed by one that plays a little fast and loose with the facts.
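A rough sketch of the kind of trust weighting that quote describes might look like the following; the source names, scores, and scoring rule are illustrative assumptions, not Knowhere's actual algorithm.

```python
from collections import defaultdict

# Hypothetical, hand-assigned trust scores (0.0 - 1.0), standing in for the
# predetermined "trustworthiness" ratings mentioned in the article.
SOURCE_TRUST = {
    "wire-service.example": 0.9,
    "left-leaning.example": 0.6,
    "right-leaning.example": 0.6,
    "tabloid.example": 0.2,
}

def rank_facts(reports):
    """Score each extracted fact by the total trust of the sources reporting it.

    reports is a list of (source, fact) pairs; facts backed by several
    trustworthy sources rise to the top, so one outlet that plays fast and
    loose with the facts cannot overshadow the rest.
    """
    scores = defaultdict(float)
    for source, fact in reports:
        scores[fact] += SOURCE_TRUST.get(source, 0.1)  # unknown sources count for little
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    reports = [
        ("wire-service.example", "The bill passed 52-48."),
        ("left-leaning.example", "The bill passed 52-48."),
        ("tabloid.example", "The bill is doomed."),
    ]
    print(rank_facts(reports))  # the widely corroborated fact scores highest
```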
Their actual articles are pretty dry and often very short, with no links or photos. But still, if they are over 90% NLP (natural language processing) generated, that's very good work.

And it sure dovetails with what we're trying to do. But our strategic goals are very different. We are trying to raise truth literacy. They are trying to raise the level of truth in news media. I thwink I know which is the higher leverage point.

IBM's Project Debater work is also impressive.


Any system - technological, social, or otherwise - can be hacked. A system that cannot determine the validity of claims will undermine society's ability to discern the truth and cripple its ability to solve problems or even to function. Our (apparently innate) tendency to put faith in technology when it appears to work reliably is a significant risk factor, since our ability to determine the validity of information is already sketchy at best. A centralized system such as Candle amplifies this risk if it becomes the primary source of information for a society.
Beautiful. I agree. I wonder how well our project is addressing this risk.

Thanks, Scott!
 
#7
I finally took the time to read Scott's posts in this thread. Knowhere News shows how much interest there is out there in unbiased and reliable news.

WikiTribune is trying to produce objective news by building a community to hold each other accountable.
Knowhere News is trying to produce objective news using AI as the tool to achieve it.
Thwink is trying to produce objective and educational news using the Truth Rating System as a tool to achieve it.

All three share the common goal of producing objective news, but I think the greatest difference between the other two and us is (as Jack already mentioned) that we're doing it as part of a broader vision of raising truth literacy.

Still, I think there are great collaboration opportunities with them, considering that both WikiTribune and Knowhere News are still small projects with small teams behind them (i.e. still low change resistance). I think the main contribution Thwink could make would be to share our (analysis-based) vision with them and get them to adopt it in their projects. That could make WikiTribune the first claim-checking news outlet, and Knowhere News the first claim-checking news outlet using AI.
 

Jack Harich

Administrator
Staff member
#8
Nice post. I like the way you listed the three types of action/strategy each group uses. Yes, the greatest difference is our broader, or deeper, vision.

"I think there are great collaboration opportunities with them" - Yes!

And it's really cool how you are looking ahead to what could happen as a result of that collaboration: "That could make WikiTribune the first claim-checking news outlet, and Knowhere News the first claim-checking news outlet using AI." Not to mention the work you showed us at the meeting on how WikiTribune's Slack channels look. That, plus your understanding of what they see as their own needs (in the FAQ), sure looks like the royal road to successful collaboration. Thanks!

Related to this, I was just talking to Martha about our work. I said we have a layered vision:
  1. The top, easy-to-see layer is claim checks and structured argument analysis. Creating the tool to allow this and running experiments to "prove" it works is our first crucial project. If we can get this far, the rest should be so much easier.
  2. The next layer is the reason we're doing this, to produce Politician Truth Ratings.
  3. The next layer, much harder to see, grasp, and accept, is that this work arises from a root cause analysis of the difficult large-scale social problems our world has been unable to solve, but must if it is to avoid looming catastrophe. That's the fundamental layer, where the root causes lie. The Truth Ratings System's ultimate goal is to raise political truth literacy enough to resolve the main root cause of systemic change resistance to solving problems whose solution would benefit the common good.
  4. And finally, we have an even deeper, much broader layer in our vision. A database of rules, facts, and reusable claims, all with a known truth confidence level, is an entirely new form of knowledge. Nothing like it, at least on a large-scale basis, has ever happened before. Imagine the beneficial impact of more and more public knowledge being truth-rated. It's kind of a truth-rated Tree of Knowledge. (This last one has much of the Candle aspect in it.)
Wow. What a vision!
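To give a feel for that fourth layer, one entry in such a truth-rated knowledge base might look something like the sketch below. The fields and the 0-to-1 confidence scale are illustrative assumptions, not a settled design.

```python
from dataclasses import dataclass, field

@dataclass
class RatedClaim:
    """One entry in a hypothetical truth-rated knowledge base."""
    claim: str         # the reusable statement itself
    confidence: float  # 0.0 (shown false) to 1.0 (well established)
    evidence: list[str] = field(default_factory=list)    # links or IDs of supporting analyses
    depends_on: list[str] = field(default_factory=list)  # IDs of rules/facts this claim relies on

example = RatedClaim(
    claim="Smoking increases the risk of lung cancer.",
    confidence=0.99,
    evidence=["https://example.org/meta-analysis"],
)
```

Reusable entries like this are what would let claim checks compound over time instead of being redone from scratch for every article.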