"If the user tries to increase speed, accuracy will be compromised, and vice versa: An increase in accuracy reduces speed."
a statement that is a claim about the world as described by a particular theory
"If the user tries to increase speed, accuracy will be compromised, and vice versa: An increase in accuracy reduces speed."
a statement that is a claim about the world as described by a particular theory
For example, they can talk about information, difficulty, working memory, and so on.
sentence describing examples of a concept
A proposition is a claim about the world.
a sentence defining a concept
Phenomena in interaction are emergent, that is, they are not attributable to the user or to the computer alone.
a sentence describing the concept of interaction
Interaction also occurs in different contexts, including work, leisure, and in-between contexts such as commuting.
sentence describing examples of a concept
Interaction is a dynamic phenomenon that unfolds over time as users and computers influence each other.
sentence describing the concept of interaction
What happens in interaction cannot be attributed solely to the human or the computer—the two must be considered together.
individual sentence describing the concept of interaction
It has been used to describe individuals, groups, and communities using computers.
sentence describing examples of a concept
Interaction is a core notion in HCI and refers to the mutual influence between people and computers.
sentence describing the concept of interaction
Pressing a button takes about a hundred milliseconds; adopting an information system in a large organization can easily take months.
sentence describing examples of a concept
We have used it to discuss various applications, from a user typing on a smartphone to a team of information workers communicating via email.
sentence describing examples of a concept
For example, a photocopier can automatically sort and collate copies.
sentence describing examples of a concept
Such points about the origins of data and the processes of their collection are a key factor in civic text visualization. Indeed, a shift to emphasizing paradata can help draw attention to the representativeness of data.
Show alternative approaches to text visualization beyond analytics
On the other side of this spectrum, at the detail level, articulating nuanced information present in raw text data can enable civic leaders to peruse and sublimate critical insights.
Show alternative approaches to text visualization beyond analytics
In contrast, we could consider designing explicitly for multiple users. Doing so requires more than designing for different levels of expertise (see the following subsection for more on expertise) or designing for collaborative use, though both those things may be valuable in their own right. Rather, this dimension encourages accounting for the different types of relationalities that users may have with a system [cf. BB17].
Show alternative approaches to text visualization beyond analytics
Civic text visualizations similarly designed to foreground interpretation could help make clearer who is making these interpretive decisions, thereby highlighting the lack of neutrality and objectivity in data [DK20].
Show alternative approaches to text visualization beyond analytics
work on visualization evaluation [SP06; IZCC08; LBI*12] has emphasized the importance of close attention to the various contexts in which a visualization will be applied.
Show alternative approaches to text visualization beyond analytics
It is informative to contrast this analytic emphasis with other evolving discourses in information visualization. The prior work reviewed above illustrates a few alternative orientations, including rhetoric [HD11], feminism [DK16; DK20], ethics [Cor19], and others [DFCC13; VW08].
Show alternative approaches to text visualization beyond analytics
work in the digital humanities often explicitly emphasizes the interpretation both of texts themselves and of computational analysis thereof [Ram03; Joc13; Und14; BSM*20].
Show alternative approaches to text visualization beyond analytics
For instance, CommunityPulse provides a scaffolding for multifaceted public input analysis using visualizations [JHSM21], and MultiConVis enables multilevel exploration and analysis of threaded conversations [HC16b].
Find civic text visualization systems that are explicitly named.
For example, CommunityPulse [JHSM21] uses common, simple visualizations and iconography, such as bar charts and emojis, to provide overviews of people's emotions towards civic agendas and ideas. Similarly, ConsiderIt [KMF*12b] uses bar charts to visualize people's stance towards ballot measures.
Find civic text visualization systems that are explicitly named.
For instance, visual analytic systems such as MultiConVis [HC16b] use multiple connected views to enable analysts to filter and explore text data at multiple levels.
Find civic text visualization systems that are explicitly named.
Tools such as ConsiderIt [KMF*12b] or Opinion Space [FBRG10] are designed specifically for the public. In contrast, tools such as CommunityPulse [JHSM21] or CommunityClick [JKW*21] are focused more on supporting community leaders and decision makers.
Find civic text visualization systems that are explicitly named.
For example, MultiConVis [HC16b] makes prescriptive statements not only as to the sentimental valence of individual conversations but also as to the topics that each conversation is about. Similarly, ConsiderIt [KMF*12b] asks participants to place individual statements as either supporting or opposing a given ballot proposition.
Find civic text visualization systems that are explicitly named.
Consider how systems such as MultiConVis [HC16b] and CommunityClick [JKW*21] provide visual representations to help the viewer understand the structure and content of conversations.
Find civic text visualization systems that are explicitly named.
tools such as ConsiderIt [KMF*12b] and CommunityPulse [JHSM21] prominently feature specific comments from members of the public (i.e., the data).
Find civic text visualization systems that are explicitly named.
Some tools provide both computational and visualization features. For instance, CommunityPulse provides a scaffolding for multifaceted public input analysis using visualizations [JHSM21], and MultiConVis enables multilevel exploration and analysis of threaded conversations [HC16b].
Highlight all civic participation approaches
Researchers in HCI and digital civics have begun to explore methods to improve the analysis capabilities of visual analytics tools [JHSM21; MJS20b]. Although the broader community of visualization researchers acknowledges the importance of designing for varied levels of expertise [Mun14; GTS10; SNHS13], existing work on text analytics in general, as well as civic text visualizations in particular, focuses research efforts towards designing for analysts. Less effort has been put into designing and developing text visualization for non-experts—people who are not trained in or have had limited exposure to visualization and analytics.
Highlight all civic participation approaches
Improving the public input process has become an important goal in the field of digital civics [MNC*19; VCL*16; OW15]. To that end, researchers and practitioners have developed a variety of systems for, e.g., sharing public opinions [FBRG10], building consensus [KMF*12a; ZNB15], summarizing public input [19], or identifying people's priorities, reflections, and hidden insights [JHSM21].
Highlight all civic participation approaches
Previous work has introduced several online engagement platforms to enable the public to asynchronously provide their comments, ideas, and feedback around civic issues [19; 20b; MJN*18]. These engagement tools have used micro-tasks [MJN*18], visualizations [19], and forum-like discussions [20b] to engage disconnected and disenfranchised populations [MNC*19]. Others have proposed technologies to promote in-person engagement of reticent participants during town halls [JKW*21] and public meetings [LLS] using clicker-like devices.
Highlight all civic participation approaches
Despite their central importance in the civic engagement process, members of the general public are not necessarily involved in the analysis process. Hence, they are often left out of the loop when designing civic text visualizations—their requirements, aptitudes, knowledge, etc. are not given central consideration. Integrating participatory approaches in civic text visualization could pave the way not only for more inclusive analysis but also for leveraging the general public's knowledge to gather richer insights.
Highlight all civic participation approaches
social dynamics, such as shyness and a tendency to avoid confrontation with dominant personalities, can also hinder opinion sharing in town halls by favoring privileged individuals who are comfortable with or trained to take part in contentious public discussions [27, 127].
Highlight all civic participation approaches
town halls inadvertently cater to a small number of privileged individuals, and silent participants often become disengaged despite physically attending the meetings [61]. Due to the lack of inclusivity, the outcome of such meetings often tends to feel unjust and opaque for the general public [39, 54].
Highlight all civic participation approaches
designing communitysourcing technologies to include marginalized opinions and amplify participation alone may not be enough to solve inequality of sharing opinions in the civic domain [26, 126]. Despite the success of previous works [25, 53, 90], technology is rarely integrated with existing manual practices and follow-ups of engagements between government officials and community members are seldom propagated to the community.
Highlight all civic participation approaches
Marginalization can be broadly defined as the exclusion of a population from mainstream social, economic, cultural, or political life [58], which still stands as a barrier to inclusive participation in the civic domain [48, 94]. Researchers in HCI and CSCW have explored various communitysourcing approaches to include marginalized populations in community activities, proceedings, and designs [48, 53, 81, 93, 132].
Highlight all civic participation approaches
To increase broader civic participation, researchers in HCI have proposed both online [4, 5, 7, 81, 93] and face-to-face [21, 80, 91, 125] technological interventions that use the communitysourcing approach.
Highlight all civic participation approaches
Prior investigations by Bryan [29] and Gastil [56] showed a steady decline in civic participation in town halls due to the growing disconnect between local government and community members and the decline in social capital [43, 111, 113]. Despite the introduction of online methods to increase public engagement in the last decade [4, 5, 7, 37, 81, 93], government officials continue to prefer face-to-face meetings to engage the community in the decision-making process [32, 52, 94].
Highlight all civic participation approaches
To reengage disconnected, reticent, or disenfranchised community members, researchers in HCI and digital civics have offered novel strategies and technological interventions to increase engagement [60, 62, 94, 107, 130].
Highlight all civic participation approaches
Bryan [29] and Gastil [56] investigated the state of town halls and demonstrated a steady decline in civic participation due to the growing disconnect between local government and the community.
Highlight all civic participation approaches
Traditional community consultation methods, such as town halls, public forums, and workshops are the modus operandi for public engagement [52, 94]. For fair and impartial civic decision-making, the inclusivity of community members' feedback is paramount [60, 94, 126]. However, traditional methods rarely provide opportunities for inclusive public participation [30, 87, 95].
Highlight all civic participation approaches
Murphy used such systems to promote democracy and community partnerships [103]. Similarly, Boulianne et al. deployed clicker devices in contentious public discussions about climate change to gauge public opinions [25]. Bergstrom et al. used a single-button device where attendees anonymously voted (agree/disagree) on issues during the meeting. They showed that back-channel voting helped underrepresented users get more involved in the meeting [22].
Highlight all civic participation approaches
As evidenced by numerous studies on statistical cognition (Kline, 2004; Beyth-Marom et al., 2008), even trained scientists have a hard time interpreting p-values, which frequently leads to misleading or incorrect conclusions.
p-value is misinterpreted and confusing
few researchers can resist the temptation to conclude that there is no effect, a common fallacy called "accepting the null" which has frequently led to misleading or wrong scientific conclusions (Dienes, 2014, p.1).
p-value is misinterpreted and confusing
Again, p is the probability of seeing results as extreme (or more extreme) as those actually observed if the null hypothesis were true. So p is computed under the assumption that the null hypothesis is true. Yet it is common for researchers, teachers and even textbooks to think of p as the probability of the null hypothesis being true (or equivalently, of the results being due to chance), an error called the "fallacy of the transposed conditional" (Haller and Krauss, 2002; Cohen, 1994, p.999).
p-value is misinterpreted and confusing
Many researchers fail to appreciate that p-values are unreliable and vary widely across replications.
p-value is misinterpreted and confusing
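The variability claimed in the excerpt above is easy to demonstrate with a small simulation (a sketch of my own, not code from the excerpted papers): repeatedly drawing samples from the same population with a genuine effect yields p-values that swing across orders of magnitude from one replication to the next.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
norm = NormalDist()

def two_sample_p(n=32, effect=0.5):
    """One replication: two groups of n, true effect = 0.5 SD,
    two-sided p via a normal approximation to the t statistic."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(effect, 1.0) for _ in range(n)]
    se = ((stdev(a) ** 2 + stdev(b) ** 2) / n) ** 0.5
    z = (mean(b) - mean(a)) / se
    return 2 * (1 - norm.cdf(abs(z)))

# 1000 replications of the *same* experiment
ps = sorted(two_sample_p() for _ in range(1000))
print(f"min={ps[0]:.4f}  median={ps[500]:.3f}  max={ps[-1]:.3f}")
print("share 'significant' at .05:", sum(p < 0.05 for p in ps) / len(ps))
```

Even though every replication samples the same population with the same true effect, roughly half the runs come out "significant" and half do not, with individual p-values ranging from far below .001 to well above .5.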
Providing non-misleading interpretations of figures with confidence intervals requires judgment, and no mechanical decision procedure can carry out this job better than a thoughtful investigator.
Estimation is necessary but not sufficient
NHST as it is carried out today consists of this incoherent mix of Fisher and Neyman–Pearson methods (Gigerenzer, 2004).
p-value is misinterpreted and confusing
p-values give a seductive illusion of certainty and truth (Cumming, 2012, Chap. 1). The sacred α = .05 criterion amplifies this illusion, since results end up being either "significant" or "non-significant".
p-value is misinterpreted and confusing
Estimation seems much more likely to promote clear statistical thinking.
Need to change our way of thinking
Decades spent educating researchers have had little or no influence on beliefs and practice (Schmidt and Hunter, 1997, pp.20–22).
Calls for reform fall on deaf ears
NHST has been severely criticized for more than 50 years by end users to whom fair statistical communication matters.
Calls for reform fall on deaf ears
This assessment raises two issues. First, it is arbitrary. If 10 of the 15 CIs included the predicted values, would the results also support the theory, or instead refute it? If one instead used 99% CIs, would positive results for 12 of the 15 predictions be enough to support the theory? This arbitrariness arises because CIs offer no principled method for generating an inference regarding the theory.
Estimation is too messy / complex and not clear enough
two out of three necessary conditions for testing theory are missing.
Estimation is too messy / complex and not clear enough
To illustrate this point, Oakes posed a series of true/false questions regarding the interpretation of p-values to seventy experienced researchers and discovered that only two had a sound understanding of the underlying concept of significance [25].
Sentences where they say people don't really know the statistics, they just apply tests without thought because it's tradition
failure to check assumptions about the data required by particular tests, over-testing and using inappropriate tests
Sentences where they say people don't really know the statistics, they just apply tests without thought because it's tradition
abusing statistical tests, making illogical arguments as a result of tests, deriving inappropriate conclusions from nonsignificant results, and confusing the size of p-values with effect sizes.
Sentences where they say people don't really know the statistics, they just apply tests without thought because it's tradition
This approach, fiercely promoted by Fisher in the 1930s [9], has become the gold standard in many disciplines including quantitative evaluations in HCI. However, the approach is rather counter-intuitive; many researchers misinterpret the meaning of the p-value.
Sentences where they say people don't really know the statistics, they just apply tests without thought because it's tradition
We found that using MINE directly gave identical performance when the task was nontrivial, but became very unstable if the target was easy to predict from the context (e.g., when predicting a single step in the future and the target overlaps with the context).
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
We note that better [49, 27] results have been published on these target datasets, by transfer learning from a different source task.
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
We also found that not all the information encoded is linearly accessible. When we used a single hidden layer instead, the accuracy increased from 64.6 to 72.5, which is closer to the accuracy of the fully supervised model.
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
For lasertag_three_opponents_small, contrastive loss does not help nor hurt. We suspect that this is due to the task design, which does not require memory and thus yields a purely reactive policy.
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
Although this is a standard transfer learning benchmark, we found that models that learn better relationships in the children's books did not necessarily perform better on the target tasks (which are very different: movie reviews etc).
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
We found that more advanced sentence encoders did not significantly improve the results, which may be due to the simplicity of the transfer tasks (e.g., in MPQA most datapoints consists of one or a few words), and the fact that bag-of-words models usually perform well on many NLP tasks [48].
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
It is important to note that the window size (maximum context size for the GRU) has a big impact on the performance, and longer segments would give better results. Our model had a maximum of 20480 timesteps to process, which is slightly longer than a second.
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
Interestingly, CPCs capture both speaker identity and speech contents, as demonstrated by the good accuracies attained with a simple linear classifier, which also gets close to the oracle, fully supervised networks.
please point only to the details of the most successful version of this system, especially in tables when there are many options, and also highlight sections that provide supporting context for these conditions, if appropriate
Figure 6 shows that for 4 out of the 5 games performance of the agent improves significantly with the contrastive loss after training on 1 billion frames.
please point only to the details of the most successful version of this system, especially in tables when there are many options, and also highlight sections that provide supporting context for these conditions, if appropriate
CPC 76.9 80.1 91.2 87.7 96.8
please point only to the details of the most successful version of this system, especially in tables when there are many options, and also highlight sections that provide supporting context for these conditions, if appropriate
CPC 73.6
please point only to the details of the most successful version of this system, especially in tables when there are many options, and also highlight sections that provide supporting context for these conditions, if appropriate
CPC 48.7
please point only to the details of the most successful version of this system, especially in tables when there are many options, and also highlight sections that provide supporting context for these conditions, if appropriate
Despite being relatively domain agnostic, CPCs improve upon state-of-the-art by 9% absolute in top-1 accuracy, and 4% absolute in top-5 accuracy.
please point only to the details of the most successful version of this system, especially in tables when there are many options, and also highlight sections that provide supporting context for these conditions, if appropriate
We also found that not all the information encoded is linearly accessible. When we used a single hidden layer instead, the accuracy increased from 64.6 to 72.5, which is closer to the accuracy of the fully supervised model.
please point only to the details of the most successful version of this system, especially in tables when there are many options, and also highlight sections that provide supporting context for these conditions, if appropriate
Are the following two answers to my question Q semantically equivalent?\n\nQ: ${THE_QUESTION}\nA1: ${GOLD_ANSWER}\nA2: ${PRED_ANSWER}\n\nPlease answer with a single word, either "Yes." or "No.", and explain your reasoning.
please find the barebones practical information i need to implement this system or strategy
Provide your best guess for the following question, and describe how likely it is that your guess is correct as one of the following expressions: ${EXPRESSION_LIST}. Give ONLY the guess and your confidence, no other words or explanation. For example:\n\nGuess: <most likely guess, as short as possible; not a complete sentence, just the guess!>\nConfidence: <description of confidence, without any extra commentary whatsoever; just a short phrase!>\n\nThe question is: ${THE_QUESTION}
please find the barebones practical information i need to implement this system or strategy
Provide your ${k} best guesses and the probability that each is correct (0.0 to 1.0) for the following question. Give ONLY the guesses and probabilities, no other words or explanation. For example:\n\nG1: <first most likely guess, as short as possible; not a complete sentence, just the guess!>\n\nP1: <the probability between 0.0 and 1.0 that G1 is correct, without any extra commentary whatsoever; just the probability!>
please find the barebones practical information i need to implement this system or strategy
Each linguistic likelihood expression is mapped to a probability using responses from a human survey on social media with 123 respondents (Fagen-Ulmschneider, 2023). Ling. 1S-opt. uses a held out set of calibration questions and answers to compute the average accuracy for each likelihood expression, using these 'optimized' values instead.
please find the barebones practical information i need to implement this system or strategy
Finally, our study is limited to short-form question-answering; future work should extend this analysis to longer-form generation settings.
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
While our work demonstrates a promising new approach to generating calibrated confidences through verbalization, there are limitations that could be addressed in future work. First, our experiments are focused on factual recall-oriented problems, and the extent to which our observations would hold for reasoning-heavy settings is an interesting open question.
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
the 1-stage and 2-stage verbalized numerical confidence prompts sometimes differ drastically in the calibration of their confidences. How can we reduce sensitivity of a model's calibration to the prompt?
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
Provide your best guess and the probability that it is correct (0.0 to 1.0) for the following question. Give ONLY the guess and probability, no other words or explanation. For example:\n\nGuess: <most likely guess, as short as possible; not a complete sentence, just the guess!>\n Probability: <the probability between 0.0 and 1.0 that your guess is correct, without any extra commentary whatsoever; just the probability!>\n\nThe question is: ${THE_QUESTION}
please find the barebones practical information i need to implement this system or strategy
Provide your ${k} best guesses and the probability that each is correct (0.0 to 1.0) for the following question. Give ONLY the guesses and probabilities, no other words or explanation.
please find the barebones practical information i need to implement this system or strategy
Provide your best guess for the following question, and describe how likely it is that your guess is correct as one of the following expressions: ${EXPRESSION_LIST}. Give ONLY the guess and your confidence, no other words or explanation.
please find the barebones practical information i need to implement this system or strategy
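The templates above can be wired up with a few lines of glue code. The sketch below (my own illustration, with the model call left abstract) fills in the 1-stage verbalized-probability template and parses a "Guess: ... Probability: ..." reply; `parse_verbalized` and its regexes are assumptions, not part of the original papers.

```python
import re

# The 1-stage verbalized-probability template from the excerpt above.
PROMPT = (
    "Provide your best guess and the probability that it is correct (0.0 to 1.0) "
    "for the following question. Give ONLY the guess and probability, no other "
    "words or explanation. For example:\n\n"
    "Guess: <most likely guess, as short as possible; not a complete sentence, "
    "just the guess!>\n"
    "Probability: <the probability between 0.0 and 1.0 that your guess is "
    "correct, without any extra commentary whatsoever; just the probability!>\n\n"
    "The question is: {question}"
)

def parse_verbalized(reply: str):
    """Extract (guess, probability) from a 'Guess: ... Probability: ...' reply."""
    guess = re.search(r"Guess:\s*(.+)", reply)
    prob = re.search(r"Probability:\s*([01](?:\.\d+)?)", reply)
    if not (guess and prob):
        return None  # malformed reply; caller should retry or discard
    return guess.group(1).strip(), float(prob.group(1))

# Example with a canned reply (in practice this comes from the model):
reply = "Guess: Paris\nProbability: 0.95"
print(parse_verbalized(reply))  # → ('Paris', 0.95)
```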
To fit the temperature that is used to compute ECE-t and BS-t we split our total data into 5 folds. For each fold, we use it once to fit a temperature and evaluate metrics on the remaining folds. We find that fitting the temperature on 20% of the data yields relatively stable temperatures across folds.
please find the barebones practical information i need to implement this system or strategy
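The temperature fitting described above can be sketched as follows (a minimal illustration under my own assumptions, not the authors' code): the temperature rescales verbalized confidences in log-odds space and is chosen to minimize negative log-likelihood of correctness on the held-out fold.

```python
import math

def nll(confs, correct, T):
    """Mean negative log-likelihood of correctness labels under
    temperature-scaled confidences (scaling applied in log-odds space)."""
    total = 0.0
    for p, y in zip(confs, correct):
        p = min(max(p, 1e-6), 1 - 1e-6)    # clip away exact 0/1
        logit = math.log(p / (1 - p)) / T  # divide log-odds by temperature
        q = 1 / (1 + math.exp(-logit))
        total -= math.log(q if y else 1 - q)
    return total / len(confs)

def fit_temperature(confs, correct, grid=None):
    """Grid-search the scalar temperature minimizing held-out NLL."""
    grid = grid or [t / 20 for t in range(2, 101)]  # 0.1 .. 5.0
    return min(grid, key=lambda T: nll(confs, correct, T))

# Toy example: overconfident predictions should yield T > 1 (softening).
confs   = [0.9, 0.9, 0.9, 0.9, 0.8, 0.8, 0.7, 0.6]
correct = [1,   0,   1,   0,   1,   0,   1,   0]
T = fit_temperature(confs, correct)
print(f"fitted T = {T:.2f}")
```

In the paper's protocol this fit would be repeated across the 5 folds, fitting T on one fold and evaluating ECE-t and BS-t on the rest.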
To avoid excessive false negatives in our correctness computation as a result of exact-match evaluation, we use either GPT-4 or GPT-3.5 to evaluate whether a response is essentially equivalent to the ground truth answer.
please find the barebones practical information i need to implement this system or strategy
We sample 1000 questions from the validation split of TriviaQA (rc.web.nocontext) and SciQ and all 817 questions from the validation split of TruthfulQA (generation) for our experiments.
please find the barebones practical information i need to implement this system or strategy
Ling. 1S-opt. 0.056 0.051 0.088 0.927 0.028 0.052 0.172 0.828 0.082 0.105 0.212 0.632
please point only to the details of the most successful version of this system, especially in tables when there are many options, and also highlight sections that provide supporting context for these conditions, if appropriate
Verb. 1S top-4 0.041 0.039 0.081 0.959 0.056 0.059 0.185 0.815 0.198 0.144 0.245 0.619
please point only to the details of the most successful version of this system, especially in tables when there are many options, and also highlight sections that provide supporting context for these conditions, if appropriate
Ling. 1S-opt. 0.058 0.066 0.135 0.878 0.064 0.068 0.220 0.674 0.125 0.165 0.270 0.492
please point only to the details of the most successful version of this system, especially in tables when there are many options, and also highlight sections that provide supporting context for these conditions, if appropriate
Verb. 1S top-4 0.054 0.057 0.144 0.896 0.065 0.051 0.209 0.763 0.203 0.189 0.284 0.455
please point only to the details of the most successful version of this system, especially in tables when there are many options, and also highlight sections that provide supporting context for these conditions, if appropriate
Additionally, the lack of technical details available for many state-of-the-art closed RLHF-LMs may limit our ability to understand what factors enable a model to verbalize well-calibrated confidences and differences in this ability across different models.
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
With Llama2-70B-Chat, verbalized calibration provides improvement over conditional probabilities across some metrics, but the improvement is much less consistent compared to GPT-* and Claude-*.
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
The verbal calibration of the open source model Llama-2-70b-chat is generally weaker than that of closed source models but still demonstrates improvement over its conditional probabilities by some metrics, and does so most clearly on TruthfulQA.
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
Chain-of-thought prompting does not improve verbalized calibration
all content that points to important caveats and gotchas that I might consider when leaning too heavily on the results of this paper
Among the methods for verbalizing probabilities directly, we observe that generating and evaluating multiple hypotheses improves calibration (see Figure 1), similarly to humans (Lord et al., 1985), and corroborating a similar finding in LMs (Kadavath et al., 2022).
please point only to the details of the most successful version of this system, especially in tables when there are many options, and also highlight sections that provide supporting context for these conditions, if appropriate
Ling. 1S-opt. 0.060 0.070 0.151 0.874 0.049 0.056 0.214 0.738 0.099 0.130 0.266 0.446
please point only to the details of the most successful version of this system, especially in tables when there are many options, and also highlight sections that provide supporting context for these conditions, if appropriate
the solution is not to reform p-values or to replace them with some other statistical summary or threshold, but rather to move toward a greater acceptance of uncertainty and embracing of variation.
Where it's mentioned how to address the problems with p-values
What, then, can and should be done? I agree with the ASA statement's final paragraph, which emphasizes the importance of design, understanding, and context—and I would also add measurement to that list.
Where it's mentioned how to address the problems with p-values
the psychology research community has been strongly questioning the value of NHST in psychology for some years now [6] and calling for a more meaningful reporting of statistical inference based on effect sizes, confidence intervals and Bayesian reasoning [9].
Mentioning the problems with p-values
Similarly, if the significance level is set at 0.05, then this is the probability of the data occurring by chance when there is no experimental effect, namely one in twenty times. The more tests that are done on a particular dataset, the more likely it is that some chance variation will be extreme enough to seem significant.
Mentioning the problems with p-values
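The excerpt's point about multiple testing can be made concrete: with m independent tests at α = .05 and no true effects, the chance of at least one spurious "significant" result is 1 − 0.95^m.

```python
# Familywise false-positive rate for m independent tests at alpha = .05
for m in (1, 5, 10, 20):
    print(m, round(1 - 0.95 ** m, 3))
```

By twenty tests the odds of at least one chance "significant" result are close to two in three.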
Violation of the assumptions of any statistical test can produce p values that bear little relation to the actual probabilities of outcomes and hence comparison to the significance level of 0.05 is meaningless.
Mentioning the problems with p-values
for an analysis to be sound, it is necessary that in the tests performed the probabilities of outcomes are accurately reflected in the p values produced by the tests. If this is not the case, then the NHST argument form is severely weakened.
Mentioning the problems with p-values
NHST is the most commonly encountered form of statistical inference and is what is usually associated with producing a null hypothesis, then testing it to give some statistic such as a t value, and then turning the statistic into a p value.
Mentioning the problems with p-values
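The NHST pipeline the excerpt describes — state a null hypothesis, compute a statistic such as a t value, convert it to a p value — can be sketched with SciPy. The samples here are made up for illustration; only the shape of the procedure reflects the excerpt.

```python
from scipy.stats import ttest_ind

# Made-up samples: null hypothesis is that the two group means are equal.
control = [2.1, 2.4, 1.9, 2.3, 2.0, 2.2]
treatment = [2.6, 2.9, 2.5, 2.8, 2.7, 3.0]

# Independent-samples t test: returns the t statistic and its p value.
t, p = ttest_ind(treatment, control)
print(t, p)
```

The p value is only meaningful if the test's assumptions (e.g., roughly normal groups with similar variances) hold — which is exactly the caveat raised in the surrounding excerpts.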
properly reported non-significant results can help future researchers to provide estimates of effect sizes and associated confidence intervals.
Effect sizes are mentioned
calling for a more meaningful reporting of statistical inference based on effect sizes, confidence intervals and Bayesian reasoning [9].
Effect sizes are mentioned
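The "effect sizes and confidence intervals" reporting the excerpts call for can be sketched as follows. The data are made up for illustration, and the t critical value for df = 10 (about 2.228) is hard-coded rather than looked up.

```python
import statistics
from math import sqrt

# Made-up two-group data.
a = [5.1, 4.8, 5.5, 5.0, 4.9, 5.3]
b = [4.2, 4.5, 4.0, 4.4, 4.1, 4.3]

mean_a, mean_b = statistics.mean(a), statistics.mean(b)
var_a, var_b = statistics.variance(a), statistics.variance(b)
n_a, n_b = len(a), len(b)

# Pooled standard deviation and Cohen's d (a standardized effect size).
sp = sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
d = (mean_a - mean_b) / sp

# 95% confidence interval for the raw mean difference
# (t critical value ~2.228 for df = n_a + n_b - 2 = 10).
se = sp * sqrt(1 / n_a + 1 / n_b)
diff = mean_a - mean_b
ci = (diff - 2.228 * se, diff + 2.228 * se)
print(d, ci)
```

Reporting d and the interval conveys how large and how uncertain the effect is, which a bare p value does not.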
P8 said: "as I get familiar with this system [C3], I feel more skilled" to use the highlighted and grayed phrases.
any sentence that describes a user's emotional (positive or negative) response to any condition in the experiments.
It is worth noting that P3 and P8 mentioned feeling more comfortable with the more familiar visualization in C1 and C2 during their first impression of the conditions.
any sentence that describes a user's emotional (positive or negative) response to any condition in the experiments.
The inclusion of counterfactuals often resulted in a substantial increase in precision, indicating that the models were better able to correctly classify relevant instances while reducing false positives.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
By visualizing these consistent pattern rules, users may better understand the behavior of the model through inference projection.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
Mocha addresses two seemingly contradictory objectives: (1) generating labeled data that diversifies the training dataset to aid the model's learning, and (2) maintaining structural consistency across the batches of data presented to users to support their cognitive processes.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
The results of our study indicate that participants spent significantly less time annotating batches of counterfactuals when they were rendered according to SAT compared to other conditions, i.e., supporting the participants' selective focus on the varying phrases, rather than phrases that stay consistent.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
From a cognitive perspective, the theme color aligns with the human's (theorized) structural mapping engine [27] by making relational discrepancies between the original and counterfactual examples more explicit.
return any single sentence that describes an explicit or implicit connection to theory
Estes and Hasson [17] highlight the significant role of bringing salience to 'non-alignable' differences.
return any single sentence that describes an explicit or implicit connection to theory
The last two prior works also combine Variation Theory (VT) and SAT, as we did (i.e., a corollary of SAT referred to as Analogical Transfer/Learning Theory).
return any single sentence that describes an explicit or implicit connection to theory
Estes and Hasson [17] highlight the significant role of bringing salience to "non-alignable" differences.
return any single sentence that describes an explicit or implicit connection to theory
Estes and Hasson [17] argue that while alignable differences can be more straightforward and easier for comparison, non-alignable differences can also provide key information that might otherwise remain overlooked.
return any single sentence that describes an explicit or implicit connection to theory
By incorporating theories such as Structural Alignment Theory and Variation Theory, it aims to support the learning of both the human and the model.
return any single sentence that describes an explicit or implicit connection to theory
This symbiotic relationship stems from the fact that Structural Alignment Theory (SAT) enhances the salience of differences, while the way we used Variation Theory (VT) to generate contradicting examples across the boundaries of labels ensures that these differences are conceptually informative.
return any single sentence that describes an explicit or implicit connection to theory
It states that understanding and sensemaking involve mapping the relationships between elements, especially in complex and ambiguous tasks.
return any single sentence that describes an explicit or implicit connection to theory
Structural Alignment Theory states that humans naturally look for structural mapping between representations of objects to help them understand, compare, and infer relationships between said objects.
return any single sentence that describes an explicit or implicit connection to theory
According to Variation Theory, learners better understand concepts by observing variations along critical features (dimensions of variation) that define that concept and, separately, observing variations along superficial features that do not define that concept—all while other features, when possible, are held constant.
return any single sentence that describes an explicit or implicit connection to theory
Mocha exemplified the application of human cognition and concept learning theories in the interactive machine learning pipeline to support the negotiation of conceptual boundaries for bi-directional human-AI alignment.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
This pattern of selective attention suggests that the visual cues provided by Mocha effectively guided participants to focus on more relevant information within the context of unchanged text when making their labeling decisions.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
Overall, the incorporation of counterfactuals has generally improved the models' F1 scores, driven largely by the improvements in precision. This suggests that counterfactuals have effectively improved performance without necessitating a significant trade-off between precision and recall.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
The inclusion of counterfactuals often resulted in a substantial increase in precision, indicating that the models were better able to correctly classify relevant instances while reducing false positives. This improvement suggests that the counterfactuals provided essential information that helped refine the models' decision boundaries.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
By visualizing these consistent pattern rules, users may better understand the behavior of the model through inference projection [26]. This not only boosts the model's performance but also enables participants to validate or correct the model during the interactive training process.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
Thus, the integration of both theories enables users to efficiently process and compare variations, leading to more informed decisions and a clearer understanding of the model's behavior.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
By helping users see alignable differences, SAT-based rendering helps users focus on key variations that are essential to changing the data item's label, making it easier to interpret the effects of changes and their significance.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
We argue that these two theories form a symbiotic relationship (Fig. 6). Variation Theory provides the conceptual basis for generating structurally consistent differences, while Structural Alignment Theory (SAT) enhances the user's ability to recognize and process these differences.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
Participants were able to focus on key differences between the original and counterfactual examples, which facilitated more efficient annotations.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
The results from our user study suggest that both the participants and the model benefited from the Variation Theory (VT)-based counterfactuals and Structural Alignment Theory (SAT)-based rendering.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
Specifically, they underscore the need for co-adaptive systems that can evolve along with users' mental models and definitions of labels.
any sentence that describes explicit design implications
This requires the development of interfaces and visualizations that demystify the generated data, allowing systematic variation and coverage across the concept space.
any sentence that describes explicit design implications
By visualizing these consistent pattern rules, users may better understand the behavior of the model through inference projection [26].
any sentence that describes explicit design implications
By incorporating theories such as Structural Alignment Theory and Variation Theory, it aims to support the learning of both the human and the model.
any sentence that describes explicit design implications
Such studies could determine whether these non-alignable comparisons enhance user performance and elicit deeper insights in human-AI collaborative systems.
any sentence that describes explicit design implications
Most previous research in counterfactual generation has focused on the model side by either generating counterfactuals to improve the model's performance or explaining its behaviors post hoc.
any single sentence that compares and contrasts this work with prior work.
Variation Theory provides the conceptual basis for generating structurally consistent differences, while Structural Alignment Theory (SAT) enhances the user's ability to recognize and process these differences.
return any single sentence that describes an explicit or implicit connection to theory
While SAT-based rendering supported human sensemaking in both Gero et al. [29] and Mocha, we also show that the combination of VT and SAT support the model's learning.
any single sentence that compares and contrasts this work with prior work.
This finding is consistent with previous work that supports users' sense-making of text, e.g., by modulating text saliency. Specifically, Gu et al. [32] and Gero et al. [29] both found improved reading efficiency and comprehension with saliency-modulating text renderings.
any single sentence that compares and contrasts this work with prior work.
In decision making, SAT argues that people tend to focus on alignable differences—features that can be directly compared—rather than on differences that cannot be easily aligned.
return any single sentence that describes an explicit or implicit connection to theory
Structural Alignment Theory (SAT) [27] is a cognitive theory that explains how people make sense of concepts by comparing relational structures between two items.
return any single sentence that describes an explicit or implicit connection to theory
Specifically, we use Variation Theory of learning [44] which states that for learning to occur, some aspects that define the concept being learned must vary while others are held constant.
return any single sentence that describes an explicit or implicit connection to theory
According to SAT, humans compare two similar entities by trying to find structural alignments between them, and then comparing corresponding elements, with a special focus on differing aligned elements.
return any single sentence that describes an explicit or implicit connection to theory
VT posits that human learning occurs when learners experience variation across critical and superficial aspects of a concept—through exposure to contrasting examples that systematically vary along different critical and superficial feature dimensions.
return any single sentence that describes an explicit or implicit connection to theory
To analyze the annotation efficiency, we first conducted a Kruskal-Wallis rank sum test [39] to determine if there were statistically significant differences in annotation time across the three conditions, because our data violated the homogeneity of variances assumption, making non-parametric methods more appropriate.
return any single sentence that describes data analysis done on data collected by the authors when running human subjects experiments.
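The Kruskal-Wallis analysis the authors describe can be sketched with SciPy. The annotation times below are made up for illustration; only the test choice mirrors the excerpt, and the rank-based test is appropriate precisely because it does not assume homogeneity of variances.

```python
from scipy.stats import kruskal

# Made-up annotation times (seconds) for three experimental conditions.
c1 = [12.1, 14.3, 11.8, 15.2, 13.9]
c2 = [18.4, 17.9, 19.2, 16.8, 18.1]
c3 = [12.5, 13.1, 12.9, 14.0, 12.2]

# Kruskal-Wallis rank sum test: H statistic and p value for the null
# hypothesis that all three groups come from the same distribution.
stat, p = kruskal(c1, c2, c3)
print(stat, p)
```

A significant result would then typically be followed by pairwise post-hoc comparisons to locate which conditions differ.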
Mohamed et al. (2020) put forth the idea of dismantling power asymmetries to resist data colonialism.
sentence that refers to a theory
Couldry and Mejias (2019) propose 'data colonialism' as a new form of colonialism to make sense of the use of large amounts of data by a small group of corporate and government actors.
sentence that refers to a theory
Taken together, these findings almost unanimously show that, on average, AI-supported writing decreases but does not eliminate writers' feelings of ownership, underscoring the need for a larger theory of AI participation in the creative process.
sentence that refers to a theory
This can be understood through the frame of precarious work [5]; as writers feel that their work is increasingly precarious, the power differential between themselves and the organizations seeking to train LLMs grows larger.
sentence that refers to a theory
information scholars argue that data should be collected and used in accordance with data subjects' perspectives on the acceptable use of their data [29, 40, 83].
sentence that refers to a theory
Interviews were video and audio recorded. We transcribed the audio using OpenAI's Whisper automatic speech recognition system and anonymized the transcript before analysis.
sentence describing any interview procedures
Immediately after the focused reading task, we conducted a short interview asking participants to reflect on their experience with both tasks (Appendix J.2), followed by a post-study survey (Appendix J.3).
sentence describing any interview procedures
The study concluded with a 15-minute semi-structured interview. During the interview, participants saw screenshots from the three conditions and were asked which they preferred and disliked, why, what they wished the interface had, what influenced their skimming, and how they normally skimmed texts.
sentence describing any interview procedures
After the interviews, we analyzed the data using the process described in Appendix B
sentence describing any interview procedures
We used these mock-ups as design probes [31] to inspire ideation and elicit creative responses. Specifically, we asked participants to compare and contrast alternative mock-ups and reflect on how they could be used or improved to support their known or emerging synthesis and information-foraging goals.
sentence describing any interview procedures
In the second part of the session, we provided participants with mock-ups of possible reifications of cross-document relationships that might help them synthesize information across abstracts.
sentence describing any interview procedures
In the first part of the session, we asked participants about their strategies for selecting publication venues for their manuscript submissions, how they identify and synthesize information from venues, their approaches to writing manuscripts, and finally, the technology they have used to help with these processes, current technology shortcomings, and ideas for addressing these challenges.
sentence describing any interview procedures
Sessions, which were held on Zoom, lasted 55 minutes on average. Participants were compensated with $15 USD.
sentence describing any interview procedures
The interview sessions were divided into two parts: an open-ended semi-structured interview about their backgrounds and practices, followed by feedback on a range of mock-ups, including novel reified relationships between analogous sentences in different abstracts (Figure 2).
sentence describing any interview procedures
In order to determine (1) the context in which we might offer novel views of scientific abstracts and (2) the intelligibility of various novel prototype designs for reifying cross-abstract relationships, we conducted a formative interview study with 12 active researchers (see Appendix A for participant information).
sentence describing any interview procedures
pre-computing and reifying cross-document analogous relationships make it psychologically possible for users to engage—if they are willing to be guided by it. (Lower NFC users are more likely to fall into this category.)
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
Participants with lower NFC more frequently preferred and experienced less cognitive load when skimming with all the features enabled.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
These results suggest that these three features lose their effectiveness when not used together.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
Lower NFC participants were generally guided by emergent visual patterns created by the interactions between features, especially blocks of color spanning multiple sentences created when all three features are turned on.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
Dialectical activities cannot be done on a user's behalf by AI; with variation affordances, AI is supporting the user's engagement with the data themselves.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
In this sense, AbstractExplorer enables dialectical activities that users may otherwise have found to be too tedious or difficult to engage with.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
Our work demonstrates that designs informed by Structure-Mapping Theory can support users in navigating, making use of, and engaging with variation present in information.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
We posit that our approach can generalize to other domains such as journalism, code synthesis, and social media analytics where visual alignment of text can enable meaningful comparisons of underlying patterns to identify relational clarity.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
We demonstrate how slicing sentences according to roles and visually aligning them can help readers perceive cross-document relationships in a coherent manner.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
In this work, we introduce a new paradigm for exploring a large corpus of small documents by identifying roles at the phrasal and sentence levels, then slice on, reify, group, and/or align the text itself on those roles, with sentences left intact.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
Like prior Structural Mapping Theory (SMT)-informed work in text corpora representation, AbstractExplorer's features have enabled some users to see more of both the overview and the details at the same time, facilitating abstraction without losing context.
statements that draw general conclusions about humans, computers, and/or human-computer interaction based on the results of the specific experiment done in the paper.
Interviews were video and audio recorded. We transcribed the audio using OpenAI's Whisper automatic speech recognition system and anonymized the transcript before analysis. We analyzed the interview data using thematic analysis [1]. First, two members of the research team independently coded data from four randomly chosen participants (25% of collected data) to generate low-level codes. The inter-coder reliability between the coders was 0.88 using Krippendorff's alpha [37]. The two coders then met together to cross-check, resolve coding conflicts, and consolidate the codes into a codebook across two sessions. Using the codebook, the two coders each analyzed data from six randomly selected participants. The research team then met, discussed the analysis outcomes, and finalized themes over three sessions.
sentence describing how analysis was performed on data collected by the authors of this paper
Our work demonstrates that designs informed by Structure-Mapping Theory can support users in navigating, making use of, and engaging with variation present in information. In this sense, AbstractExplorer enables dialectical activities that users may otherwise have found to be too tedious or difficult to engage with.
any sentence that describes explicit design implications
Together, the vertical and horizontal juxtapositions are designed to help users identify both high-level commonalities and nuanced variations across structurally similar sentences.
any sentence that describes explicit design implications
Dialectical activities cannot be done on a user's behalf by AI; with variation affordances, AI is supporting the user's engagement with the data themselves.
any sentence that describes explicit design implications
In this work, we introduce a new paradigm for exploring a large corpus of small documents by identifying roles at the phrasal and sentence levels, then slice on, reify, group, and/or align the text itself on those roles, with sentences left intact.
any sentence that describes explicit design implications
These observations underscored the need for an improved UI design for authoring custom aspects, especially for those who are less familiar with the research corpus.
any sentence that describes explicit design implications
Future work could explore more seamless ways of preserving context, such as allowing users to navigate through every sentence of an abstract directly within the Cross-Sentence Relationship pane, fostering a more cohesive understanding of the content.
any sentence that describes explicit design implications
Supporting a new corpus would require defining new corpus-appropriate predefined aspects and chunk role labels. Other components would likely be able to remain unchanged.
any sentence that describes explicit design implications
We posit that our approach can generalize to other domains such as journalism, code synthesis, and social media analytics where visual alignment of text can enable meaningful comparisons of underlying patterns to identify relational clarity.
any sentence that describes explicit design implications
benchmarking AI against programming eval: discoverability, interpretation, predictability
Correspondence concerning this article should be addressed to Iza Ray Korsmit (iza.korsmit@mail.mcgill.ca).
reference to Montreal the city or any institution or author based there
The experimental protocol was certified for ethical compliance by the McGill University Research Ethics Board II.
reference to Montreal the city or any institution or author based there
IRK was supported by funding from the Prins Bernhard Cultuurfonds (The Netherlands). This project was also funded by a Canadian Social Sciences and Humanities Research Council Insight Grant (435-2021-0224), a Social Sciences and Humanities Research Council Partnership Grant (895-2018-1023), and a Canada Research Chair (950-231872) to SMc.
reference to Montreal the city or any institution or author based there
Part of this research was presented at the Society for Music Perception and Cognition Conference, Portland, Oregon (2022). The authors would like to thank Bennett K. Smith for programming the experimental interface and assisting with the experiment execution on Prolific, and Philippe Macnab-Seguin for creating the chromatic scales for the second experiment.
reference to Montreal the city or any institution or author based there
Iza Ray Korsmit, Marcel Montrey, Alix Yok Tin Wong-Min, & Stephen McAdams McGill University, Montreal, Canada
reference to Montreal the city or any institution or author based there
Grimaud and Eerola (2022) compared instrument ensembles of strings, woodwinds, and brass in a study where participants either rated the emotions they perceived or manipulated musical parameters to produce a certain emotion. They found that strings were associated with increased anger and fear, woodwinds with decreased anger and fear, and brass with decreased fear, in the cases of both emotion perception and production. For the other emotions (joy, sadness, calmness, power, surprise), however, results were less consistent between perception and production, indicating that the emotion-instrument association may also depend on context of the task.
makes an explicit connection between a music theory concept and cognition
following the constructionist idea that an individual's personality and background will influence the affect they perceive or feel, we considered several sources of individual differences as moderating factors in the effects of instrument family, pitch height, and affect locus.
makes an explicit connection between a music theory concept and cognition
This research follows a constructionist approach to musical affect (Cespedes-Guevara & Eerola, 2018). That is, although we are interested in the "bottom-up" influence of certain musical features on musical affect, we believe these cannot be adequately evaluated without considering the "top-down" effects of context and individual differences that are present when affects are constructed. The perception or induction of affect does not merely arise in response to a stimulus but is also formed in relation to the individual and the context.
makes an explicit connection between a music theory concept and cognition
the context of a task (like perception, production, or induction) may change the effect of musical features.
makes an explicit connection between a music theory concept and cognition
Furthermore, as the method of reporting on perceived and induced affect may influence the construction of an affect (e.g., facilitating categorical perception), we also compare two different affect representations.
makes an explicit connection between a music theory concept and cognition
This research follows a constructionist approach to musical affect (Cespedes-Guevara & Eerola, 2018). That is, although we are interested in the 'bottom-up' influence of certain musical features on musical affect, we believe these cannot be adequately evaluated without considering the 'top-down' effects of context and individual differences that are present when affects are constructed. The perception or induction of affect does not merely arise in response to a stimulus but is also formed in relation to the individual and the context.
makes an explicit connection between a music theory concept and cognition
Cognitive surrender. A paper that came out this year asked: if you’re working with AI a lot, and you’re using it as a machine to answer all of your questions, what happens with System 1 and System 2?
Cognitive surrender: what happens to System 1 and System 2 if you offload to AI to get any answers? (Is this diff from other cognitive tools, like writing and Plato's rejection of it?)
The paper is https://doi.org/10.31234/osf.io/yk25n_v1 and it posits AI offloading as System 3. That is an interesting perspective. Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender by Shaw and Nave, 2026 (in Zotero).
2. Validator. Another basic role for AI is validating your understanding. To do this, you ask it to review your notes for errors or gaps, do basic fact checking, or critique your reasoning. Again, you can do this via the chat interface, but I also experimented with passing my notes in Obsidian using the Copilot plugin and in Emacs using gptel. Example: After reading The Epic of Gilgamesh, I wrote a note in Obsidian summarizing its plot. When I asked ChatGPT to critique my summary, it pointed out that I’d given the central character a redemption arc that isn’t present in the text. I’m so accustomed to the standard hero’s journey that I projected it onto the book — and an LLM helped me correct this ‘hallucination.’ Suggested prompt: Here are my notes on [WORK]. What important ideas did I miss or underemphasize? Don’t rewrite my notes — just flag the gaps.
Role 2 validator of one's understanding, also seen as basic. Might be a good complement to e.g. turning some of my notes into [[Anki]] card decks or combine in another way w spaced repetition. [[Spaced repetition 20201012201559]] [[Connecting my PKM to Anki]]
[[Jorge Arango p]] talk at PKM Summit 2026 robots in the garden, a perspective on PKM and the use of gen AI in it
list of scenarios in which AI agents will a) work against you or b) be used against you at scale.
For the record, my posts aren’t written or conceived with an LLM, although I know an increasing number of people who use one to write a first draft and then edit. I’m not a fan. The whole point of the web — its beauty — is that it’s unrelentingly human and diverse.
A good case for disfavoring the use of AI/LLMs to write first drafts of blog posts. Implicit I believe is a distinction between using external tools to edit/proofread a human-written draft vs editing/proofreading a machine draft (granting I do not use these tools for either). Related to points I raised in Re; On AI in response to: A Positive Technologist Identity (2/4).
Given the high prevalence of such sounds in everyday life, having misophonia can have large negative effects on one's functioning in personal, academic, and work environments.
any sentences referring to misophonia verbatim
Although there are many idiosyncrasies in what may trigger a person with misophonia, the most common triggers are created by other humans, such as the sound of someone chewing, clearing their throat, tapping their foot, or typing on a keyboard.
any sentences referring to misophonia verbatim
Misophonia is a psychological disorder that is characterized by severe aversive responses to specific environmental sounds (i.e., triggers).
any sentences referring to misophonia verbatim
This indicates that misophonia is not a purely auditory processing disorder but is also influenced by a top-down process of source identification.
any sentences referring to misophonia verbatim