Our applied research project, Politician Truth Ratings, has reached what appears to be the crux of the entire project. How can a claim-check measure the truth of a claim accurately and precisely?
What makes this problem hard is that it's a classic problem of philosophy that has never been solved. What is "truth"? Isn't it like beauty, in the eye of the beholder? And isn't truth therefore ultimately a subjective judgment that inherently cannot be measured?
Let's define our terms. According to Wikipedia:
In the fields of science and engineering, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's true value. The precision of a measurement system, related to reproducibility and repeatability, is the degree to which repeated measurements under unchanged conditions show the same results. Although the two words precision and accuracy can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method.
The field of statistics, where the interpretation of measurements plays a central role, prefers to use the terms bias and variability instead of accuracy and precision: bias is the amount of inaccuracy and variability is the amount of imprecision.
A measurement system is considered valid if it is both accurate and precise.
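The bias/variability framing above can be made concrete with a short numeric sketch. The numbers below are invented for illustration: a device that is precise but not accurate shows little scatter across repeated readings, yet its average sits well off the true value.

```python
import statistics

# Illustrative numbers only (not project data): five repeated
# measurements of a quantity whose true value is 10.0.
true_value = 10.0
measurements = [10.4, 10.5, 10.3, 10.6, 10.4]

# Bias (inaccuracy): how far the average measurement sits from the true value.
bias = statistics.mean(measurements) - true_value

# Variability (imprecision): how much repeated measurements scatter around
# their own mean, expressed here as the sample standard deviation.
variability = statistics.stdev(measurements)

print(f"bias = {bias:+.2f}")               # consistently reads ~0.44 too high
print(f"variability = {variability:.2f}")  # yet the readings cluster tightly
```

In this toy case the instrument is precise (low variability) but not accurate (large bias), so by the definition above it is not a valid measurement system.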
No. The hard sciences have figured out how to accurately and precisely measure countless things that once could not be measured, such as weight, distance, and color. The social sciences have done the same for how sleepy a person is, how depressed a person is, and how productive an economy is. We're not trying to define truth philosophically; we're trying to capture how people determine the level of truth of a proposition, so that they can make rational decisions based on that knowledge.
What we have here is just one more case of inventing a new form of measurement. Montserrat has found that this research area is labeled "instrumentation." Or, as Scott Collison put it, what we're trying to do is "operationalize the truth."
Let's see if we can pinpoint our knowledge gap. Then we can focus on filling the gap.
We are attempting to measure the truth of a claim in a claim-check. What exactly is the truth here? It's the calculated truth confidence level of the argument's claim.
That truth depends on all the numbers and their relationships used in the calculation. Therefore, if we can improve Structured Argument Analysis so that it can help users accurately and precisely set each of those numbers and relationships, then we have a tool for accurately and precisely measuring the truth of a claim. The tool must support these categories of decisions:
- Setting the confidence level of facts.
- Setting the confidence level of rules.
- Selecting the correct fact or reusable claim.
- Selecting the correct rule.
- Setting the weights used in rule inputs.
- Defining the argument tree relationships.
How accurate and precise do these settings need to be? Close enough that the societies using the tool can come reasonably close to the goal of democracy: optimizing the long-term common good of all.
Over time, what we're trying to do is illustrated in this diagram:
