Here are some doubts about the claim-check experimentation that came to mind while using the prototype:
1) For the claim-check tests, I understand we will pick a fact-check article and, based on it, each of us will write their own claim-check article. I was thinking about this, and I believe it would be important to collect and analyze data on the articles we write, as well as on the argument maps each of us creates. I'm still not sure exactly how this should be done or what exactly should be measured, but the point is that someone could arrive at (let's say) a very similar claim confidence level while having done a rather different analysis (i.e. argument map) behind the scenes. I don't know if this is too obvious and something you (@Jack Harich) were already expecting, or something you don't consider necessary, but in my opinion, producing similar argument maps is part of producing an accurate measurement. What do you think?
2) It is very likely that at the beginning you (@Jack Harich) will be an outlier, regardless of the protocol we're using, simply because of your experience and because, as you've said, you already think very much in terms of Structured Argument Analysis. It would probably make sense for all of us to practice claim-checking an article from scratch together with you before starting the actual experimentation, to level everyone's starting point a little and avoid big differences caused by very basic mistakes. Is this something you were already considering? If not, do you think it's necessary?