352 Matching Annotations
  1. Apr 2022
    1. Before 2009, Facebook had given users a simple timeline––a never-ending stream of content generated by their friends and connections, with the newest posts at the top and the oldest ones at the bottom. This was often overwhelming in its volume, but it was an accurate reflection of what others were posting. That began to change in 2009, when Facebook offered users a way to publicly “like” posts with the click of a button. That same year, Twitter introduced something even more powerful: the “Retweet” button, which allowed users to publicly endorse a post while also sharing it with all of their followers. Facebook soon copied that innovation with its own “Share” button, which became available to smartphone users in 2012. “Like” and “Share” buttons quickly became standard features of most other platforms. Shortly after its “Like” button began to produce data about what best “engaged” its users, Facebook developed algorithms to bring each user the content most likely to generate a “like” or some other interaction, eventually including the “share” as well. Later research showed that posts that trigger emotions––especially anger at out-groups––are the most likely to be shared.

      The Firehose versus the Algorithmic Feed

      See the related discussion in The Internet Is Not What You Think It Is: A History, A Philosophy, A Warning, though the treatment here goes into more depth.
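
      A minimal sketch of the contrast in Python (not Facebook's or Twitter's actual ranking code; the Post fields and the predicted_engagement score are hypothetical stand-ins for a trained engagement model):

      ```python
      # Toy sketch (not Facebook's or Twitter's actual ranking code) contrasting a
      # chronological "firehose" with an engagement-ranked feed.
      from dataclasses import dataclass
      from typing import List

      @dataclass
      class Post:
          author: str
          text: str
          timestamp: float             # seconds since epoch
          predicted_engagement: float  # hypothetical model output: P(like/share/comment)

      def firehose(posts: List[Post]) -> List[Post]:
          """Pre-2009-style feed: newest first, no re-ranking."""
          return sorted(posts, key=lambda p: p.timestamp, reverse=True)

      def algorithmic_feed(posts: List[Post]) -> List[Post]:
          """Engagement-optimized feed: whatever the model predicts will get a reaction."""
          return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

      posts = [
          Post("friend_a", "photo of lunch", 1_650_000_000, 0.02),
          Post("friend_b", "angry take about an out-group", 1_649_990_000, 0.61),
          Post("friend_c", "life update", 1_649_995_000, 0.10),
      ]

      print([p.author for p in firehose(posts)])          # ['friend_a', 'friend_c', 'friend_b']
      print([p.author for p in algorithmic_feed(posts)])  # ['friend_b', 'friend_c', 'friend_a']
      ```

      The set of posts is identical in both feeds; only the sort key changes, which is why the same mechanism that surfaces "engaging" posts also tends to surface out-group anger.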

    1. Algorithms in themselves are neither good nor bad. And they can be implemented even where you don’t have any technology to implement them. That is to say, you can run an algorithm on paper, and people have been doing this for many centuries. It can be an effective way of solving problems. So the “crisis moment” comes when the intrinsically neither-good-nor-bad algorithm comes to be applied for the resolution of problems, for logistical solutions, and so on in many new domains of human social life, and jumps the fence that contained it as focusing on relatively narrow questions to now structuring our social life together as a whole. That’s when the crisis starts.

      Algorithms are agnostic

      As we know them now, algorithms—and [[machine learning]] in general—do well when confined to the domains in which they started. They come apart when dealing with unbounded domains.

    1. The way technologies like fMRI are applied is a product of our brainbound orientation; it has not seemed odd or unusual to examine the individual brain on its own, unconnected to others.

      In part because brain-imaging modalities like fMRI produce images of a single individual's head, we focus too much, and too exclusively, on single brains bound to individuals rather than on brains working in concert.

      Greater flexibility in tools and methods should make it easier to study humans working in concert.


      Link this to the anecdote:

      I recall a radiology test within a medical school setting in which students were asked to diagnose an x-ray of a human patient's skull. Most either guessed small hairline fractures in the skull or that there was nothing wrong with the patient.

      Can you diagnose the patient?

      Almost all the students failed the question and, worse, felt like idiots when the answer was revealed: the patient must be dead, because the spinal column and the rest of the body are not attached. Compare:

  2. Mar 2022
    1. computers might therefore easily outperform humans at facial recognition and do so in a much less biased way than humans. And at this point, government agencies will be morally obliged to use facial recognition software since it will make fewer mistakes than humans do.

      Banning it now because it isn't as good as humans leaves little room for a time when the technology is better than humans, a time when the algorithm's calculations are less biased than human perception and interpretation. So we need rigorous methodologies for testing and documenting algorithmic machine models, as well as psychological studies of human performance, to know when the boundary of machine-better-than-human is crossed.
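
      One way to make that boundary concrete is to compare error rates per demographic group rather than overall, since an aggregate accuracy edge can hide subgroup disparities. A minimal sketch, assuming labeled evaluation records with hypothetical field names ("group", "truth", "model_prediction", "human_prediction"):

      ```python
      # Hedged sketch of a subgroup error-rate comparison; the record fields
      # ("group", "truth", "model_prediction", "human_prediction") are hypothetical.
      from collections import defaultdict

      def error_rates_by_group(records, prediction_key):
          """Fraction of incorrect predictions within each demographic group."""
          totals, errors = defaultdict(int), defaultdict(int)
          for r in records:
              totals[r["group"]] += 1
              if r[prediction_key] != r["truth"]:
                  errors[r["group"]] += 1
          return {g: errors[g] / totals[g] for g in totals}

      def machine_better_than_human(records) -> bool:
          """Require the model to match or beat the human baseline in *every* group,
          not just in aggregate, before declaring the boundary crossed."""
          model = error_rates_by_group(records, "model_prediction")
          human = error_rates_by_group(records, "human_prediction")
          return all(model[g] <= human[g] for g in model)
      ```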

    1. In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible.

      Although the model was driven "towards compounds such as the nerve agent VX", it found not only VX but also many other known chemical warfare agents, along with many new molecules "that looked equally plausible."

      AI is the tool. The parameters by which it is set up make the outcome "good" or "bad".
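
      The point can be reduced to a sign flip in the scoring objective: the same generate-and-score pipeline rewards or punishes predicted toxicity depending on how it is configured. A toy sketch (not the authors' actual code; both predictors below are random stand-ins for trained models):

      ```python
      # Toy sketch (not the authors' code): the same generate-and-score loop becomes
      # benign or harmful depending on the sign applied to predicted toxicity.
      import random

      def predicted_activity(mol: str) -> float:
          """Hypothetical stand-in for a trained bioactivity predictor."""
          return random.random()

      def predicted_toxicity(mol: str) -> float:
          """Hypothetical stand-in for a trained toxicity predictor."""
          return random.random()

      def score(mol: str, penalize_toxicity: bool = True) -> float:
          # Normal drug-discovery setup: reward activity, penalize toxicity.
          # Flipping the sign rewards toxicity instead: same pipeline, inverted intent.
          sign = -1.0 if penalize_toxicity else 1.0
          return predicted_activity(mol) + sign * predicted_toxicity(mol)

      candidates = [f"mol_{i}" for i in range(1000)]  # placeholder for a generator's output

      benign_ranking = sorted(candidates, key=lambda m: score(m, penalize_toxicity=True), reverse=True)
      harmful_ranking = sorted(candidates, key=lambda m: score(m, penalize_toxicity=False), reverse=True)
      ```

      Nothing in the generator changes between the two rankings; only the objective does, which is exactly the point about parameters above.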

    1. The study’s authors suggest that this discrepancy may emerge from differences in boys’ and girls’ experience: boys are more likely to play with spatially oriented toys and video games, they note, and may become more comfortable making spatial gestures as a result. Another study, this one conducted with four-year-olds, reported that children who were encouraged to gesture got better at rotating mental objects, another task that draws heavily on spatial-thinking skills. Girls in this experiment were especially likely to benefit from being prompted to gesture.

      The gender-based disparity in spatial-thinking skills between boys and girls may result from the fact that, at an early age, boys are more likely to play with spatially oriented toys and video games. Encouraging girls to do more spatial gesturing at an earlier age may help close this spatial-thinking gap.

    1. Newton arranged an experiment in which one person — a “tapper” — was asked to tap out the melody of a popular song, while another person — the “listener” — was asked to identify it. The tappers assumed that their listeners would correctly identify about 50% of their melodies; they were amazed to learn that the listeners only got about one out of 40 songs correct. To the tappers, their melodies sounded perfectly clear and obvious, but the listeners heard no music, no instrumentation in their heads — only the muffled noise of a finger tapping on a table.

      An example of the curse of knowledge effect.

  3. Feb 2022
    1. The velocity of social sharing, the power of recommendation algorithms, the scale of social networks, and the accessibility of media manipulation technology has created an environment where pseudo events, half-truths, and outright fabrications thrive.

      As Daniel Kahneman has stated, we are all "cognitively lazy." This is a very telling statement that helps reveal why we are in a world full of "half-truths" and, deeper than that, why we all continue to accept these half-truths. Much of the time we do not want to take the time necessary to evaluate information, and instead simply accept things as true.

    1. Deepti Gurdasani. (2022, January 10). Lots of people dismissing links between COVID-19 and all-cause diabetes. An association that’s been shown in multiple studies- whether this increase is due to more diabetes or SARS2 precipitating diabetic keto-acidosis allowing these to be diagnosed is not known. A brief look👇 [Tweet]. @dgurdasani1. https://twitter.com/dgurdasani1/status/1480546865812840450

    1. Read for Understanding

      Ahrens goes through a variety of research on teaching and learning as it relates to active reading, escaping cognitive biases, creating understanding, progressive summarization, elaboration, revision, etc., as a means of showing and summarizing how these all dovetail nicely into a fruitful long-term practice of using a slip box as a note-taking method. This makes the zettelkasten not only a great conversation partner but an active teaching and learning partner as well. (Though he doesn't mention the first part in this chapter or make this last part explicit.)

    2. Reading, especially rereading, can easily fool us into believing we understand a text. Rereading is especially dangerous because of the mere-exposure effect: The moment we become familiar with something, we start believing we also understand it. On top of that, we also tend to like it more (Bornstein 1989).

      The mere-exposure effect can be dangerous when rereading a text because we are more likely to falsely believe we understand it. Robert Bornstein's research from 1989 indicates that we will tend to like the text more, which can pull us into confirmation bias.

      Bornstein, Robert F. 1989. “Exposure and Affect: Overview and Meta-Analysis of Research, 1968-1987.” Psychological Bulletin 106 (2): 265–89.

    3. The linear process promoted by most study guides, which insanely starts with the decision on the hypothesis or the topic to write about, is a sure-fire way to let confirmation bias run rampant.

      Many study and writing guides suggest starting one's writing or research work with a topic or hypothesis. This is a recipe for succumbing to confirmation bias, as one is more likely to seek out confirming evidence rather than counterarguments. Better to start with an interesting topic and collect ideas from there, which can then be pitted against each other.

    4. “I had [...] during many years followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from the memory than favorable ones. Owing to this habit, very few objections were raised against my views, which I had not at least noticed and attempted to answer.” (Darwin 1958, 123)

      Charles Darwin fought confirmation bias by writing down contrary arguments and criticisms and addressing them.

    5. psychologists call the mere-exposure effect: doing something many times makes us believe we have become good at it – completely independent of our actual performance (Bornstein 1989). We unfortunately tend to confuse familiarity with skill.

      The mere-exposure effect leads us to confuse familiarity with a process with actual skill.

    6. Our brains work not that differently in terms of interconnectedness. Psychologists used to think of the brain as a limited storage space that slowly fills up and makes it more difficult to learn late in life. But we know today that the more connected information we already have, the easier it is to learn, because new information can dock to that information. Yes, our ability to learn isolated facts is indeed limited and probably decreases with age. But if facts are not kept

      isolated nor learned in an isolated fashion, but hang together in a network of ideas, or “latticework of mental models” (Munger, 1994), it becomes easier to make sense of new information. That makes it easier not only to learn and remember, but also to retrieve the information later in the moment and context it is needed.

      Our natural memories are limited in their capacities, but it becomes easier to remember facts when they have an association with other things in our minds. The building of mental models makes it easier to acquire and remember new information. The downside is that it may make it harder to dramatically change those mental models and re-associate knowledge with them without additional work.


      The mental work involved here may be one of the reasons for some cognitive biases and the reason why people are more apt to stay stuck in their mental ruts. An example would be people not changing their minds about ideas of racism and inequality, because it's easier to keep their pre-existing ideas and biases than to do the necessary work to change their minds. Similar things come into play with respect to tribalism and political party identification as well.

      This could be an interesting area to explore more deeply. Connect with George Lakoff.

    7. Just follow your interest and always take the path that promises the most insight.

      What specific factors does one evaluate for determining what particular paths will provide actual (measurable) insight?

      Most people have a gut reaction about which directions to go in heuristically, but can these heuristics be broken down explicitly so they can be evaluated more rigorously? How can they be used to avoid cognitive biases?

    1. Deepti Gurdasani. (2022, January 30). Have tried to now visually illustrate an earlier thread I wrote about why prevalence estimates based on comparisons of “any symptom” between infected cases, and matched controls will yield underestimates for long COVID. I’ve done a toy example below here, to show this 🧵 [Tweet]. @dgurdasani1. https://twitter.com/dgurdasani1/status/1487578265187405828

  4. Jan 2022
    1. An over-reliance on numbers often leads to bias and discrimination.

      By their nature, numbers can create an air of objectivity which doesn't really exist, and this can be hidden by the cultural context one is working within. Be careful not to over-rely on numbers. Particularly in social and political situations, this reliance on numbers and related statistics can dramatically increase bias and discrimination. Numbers may paint part of the picture, but what is being left out or not measured? Do the numbers you have really tell the whole story?

    2. Current approaches to improving digital well-being also promote tech solutionism, or the presumption that technology can fix social, cultural, and structural problems.

      Tech solutionism is the presumption that technology (usually by itself) can fix a variety of social, cultural, and structural problems.

      It fits the category of problem summed up by the adage that when one's only tool is a hammer, every problem looks like a nail.

      Many of the problems tech solutionism targets are likely ill-defined to begin with. Many are also incredibly complex and difficult, which tends to encourage bikeshedding, and that is unlikely to lead us to appropriate solutions.

    1. Most of us simply take it for granted that ‘Western’ observers, even seventeenth-century ones, are simply an earlier version of ourselves;

      It is probably a fair generalization that, from a historical perspective, those looking at people from the past tend to consider them simply earlier versions of ourselves.

      This sort of isocultural cognitive bias is something to be very cognizant of, particularly in cases without extensive context, as it is likely to cause massive context collapse.

    1. many people accept the scientific consensus on, say, vaccine effectiveness not because they value peer-reviewed research but because they are impressed by people in lab coats who use big words
    2. the fact that many Bitcoin enthusiasts say bizarre things does not, in itself, mean that cryptocurrencies are a bad idea

      Is this some kind of attribution bias?

    1. In the new film, she has been in the city for years, caring for her father (it’s hinted that he died), and she expresses, in a single line, a desire to go to college. Bernardo is now a boxer just beginning his career. Chino, an undefined presence in the original, is now in night school, studying accounting and adding-machine repair. But nothing comes of these new practical emphases; the characters have no richer inner lives, cultural substance, or range of experience than they do in the first film. Maria still has little definition beyond her relationship with Tony; she remains as much of a cipher as she was in the 1961 film.

      The writer purposely makes these characters seem very different while ignoring that the new film was made in a completely different era and relates more to today's problems than to those of 1961. The writer also fails to recognize that the movie is going to have a different look because it comes from a new producer.

    1. Always listen to your patients before running tests—they will tell you their diagnosis

      bias


  5. Dec 2021
    1. When we simply guess as to what humans in other times and places might be up to, we almost invariably make guesses that are far less interesting, far less quirky – in a word, far less human than what was likely going on.

      Definitely worth keeping in mind, even for my own work. Providing an evidential structure for claims will be paramount.

      Is there a well-named cognitive bias for the human tendency to see everything as nails when one has a hammer in their hand?

    2. ‘What is it about the ancients,’ Pinker asks at one point, ‘that theycouldn’t leave us an interesting corpse without resorting to foul play?’

      Part of their point here seems to be that Pinker is suffering from a bias toward the most sensational cases, which will tend to heighten the availability bias. (Is there a name for this sort of sensationalism effect?)

      Is there also some survivorship bias at play here?

      We don't have access to a wide statistical survey of dead bodies from a large swath of times and places, which makes it difficult to determine actual numbers.

    3. Now, this may seem counter-intuitive to anyone who spends much time watching the news, let alone who knows much about the history of the twentieth century.

      Are they suffering from the availability heuristic (a cognitive bias) here? Are they encouraging it in us? Just because we see violence on the news every day doesn't mean it's ubiquitous.

      Apparently we'll need real evidence here to provide actual indications.

      Does Steven Pinker provide archaeological evidence in his book? What are the per capita rates of violence and/or death over time?

    1. In a nutshell, then, there was never a time when humans uniformly lived in small, simple egalitarian hunter-gatherer societies, and a time when they started to switch to agriculture- thus inevitably switching to a  sedentary, hierarchical, and more complex life style. This is not because the correct trajectory is a different one, but because there was never a linear trajectory to begin with.

      Is there a reason or cognitive bias we've got that would tend to make us think that there's a teleological outcome in these cases?

      Why should it seem like there would be a foregone conclusion to all of human life or history? Why couldn't/shouldn't it just keep evolving from its current context to the next?

    1. A sharp rise in reported active volcanoes immediately post-WW II was followed by another steep increase in the early 1950s that has no obvious relationship to historic events.

      The claim of 'no obvious relationship to historic events' is blatantly inaccurate here. The US military was active in the Pacific for the entirety of this time frame, reestablishing power in the Pacific US colonies. It naturally follows that volcanic activity would be reported at higher rates as military vessels combed the area.

    1. Sean Phelan. (2021, November 26). Striking how some media coverage is assuming (without caveats) that the Belgian case brought the new variant “from” Egypt or Turkey.There’s no chance they picked it up after returning to Belgium of course. How could that happen..we only have a 7-day average of 17,000 cases a day [Tweet]. @seanphelan8. https://twitter.com/seanphelan8/status/1464252432033136659

  6. Nov 2021
    1. I know a number of my subs and viewers are in India and I've noticed on Twitter and on Abhijit Chavda's channel that there's quite a bit of controversy about the way Indian History is taught to Indian students. That interests me a lot, but what I'm PARTICULARLY interested in is, how World History surveys throughout the world cover world history. If part of this involves continuing the narratives introduced by colonizers, like the Aryan Invasion myth, that's relevant to my question.
  7. Oct 2021
    1. What the world is seeing now, through the window provided by reams of internal documents, is that Facebook catalogs and studies the harm it inflicts on people. And then it keeps harming people anyway.

      One of the flaws of Mark Zuckerberg's spectrum disorder is that he either has no sense of shame or his confirmation bias and loss aversion biases are incredibly large.

    1. There are many other more subtle biases of the evolved human brain—its tendency to focus on the thing that changes rather than the thing that’s constant,

      Is there a name for this bias?

    1. 02:18 So we gave people information and as a result it caused polarization, it didn’t cause people to come together.
  8. Sep 2021
    1. One last resource for augmenting our minds can be found in other people’s minds. We are fundamentally social creatures, oriented toward thinking with others. Problems arise when we do our thinking alone — for example, the well-documented phenomenon of confirmation bias, which leads us to preferentially attend to information that supports the beliefs we already hold. According to the argumentative theory of reasoning, advanced by the cognitive scientists Hugo Mercier and Dan Sperber, this bias is accentuated when we reason in solitude. Humans’ evolved faculty for reasoning is not aimed at arriving at objective truth, Mercier and Sperber point out; it is aimed at defending our arguments and scrutinizing others’. It makes sense, they write, “for a cognitive mechanism aimed at justifying oneself and convincing others to be biased and lazy. The failures of the solitary reasoner follow from the use of reason in an ‘abnormal’ context’” — that is, a nonsocial one. Vigorous debates, engaged with an open mind, are the solution. “When people who disagree but have a common interest in finding the truth or the solution to a problem exchange arguments with each other, the best idea tends to win,” they write, citing evidence from studies of students, forecasters and jury members.

      Thinking in solitary can increase one's susceptibility to confirmation bias. Thinking in groups can mitigate this.

      How might keeping one's notes in public potentially help fight against these cognitive biases?

      Is having a "conversation in the margins" with an author using annotation tools like Hypothes.is a way to help mitigate this sort of cognitive bias?

      At the far end of the spectrum, how do we prevent this social thinking from becoming groupthink, or the practice of thinking or making decisions as a group in a way that discourages creativity or individual responsibility?

  9. Aug 2021
    1. The Attack on "Critical Race Theory": What's Going on?

      https://www.youtube.com/watch?v=P35YrabkpGk

      Lately, a lot of people have been very upset about “critical race theory.” Back in September 2020, the former president directed federal agencies to cut funding for training programs that refer to “white privilege” or “critical race theory,” declaring such programs “un-American propaganda” and “a sickness that cannot be allowed to continue.” In the last few months, at least eight states have passed legislation banning the teaching of CRT in schools and some 20 more have similar bills in the pipeline or plans to introduce them. What’s going on?

      Join us for a conversation that situates the current battle about “critical race theory” in the context of a much longer war over the relationship between our racial present and racial past, and the role of culture, institutions, laws, policies and “systems” in shaping both. As members of families and communities, as adults in the lives of the children who will have to live with the consequences of these struggles, how do we understand what's at stake and how we can usefully weigh in?

      Hosts: Melissa Giraud & Andrew Grant-Thomas

      Guests: Shee Covarrubias, Kerry-Ann Escayg,

      Some core ideas of critical race theory:

      • racial realism
        • racism is normal
      • interest convergence
        • racial equity only occurs when white self-interest is being considered (e.g., Brown v. Board of Education, which helped portray the US in a better light with respect to the Cold War)
      • Whiteness as property
        • Cheryl Harris' work
        • White people have privilege in the law
        • myth of meritocracy
      • Intersectionality

      People would rather be spoon-fed than do the work themselves. Sadly, this is being encouraged in the media.

      Short summary of CRT: How laws have been written to institutionalize racism.

      Culturally Responsive Teaching (also has the initials CRT).

      KAE tries to use an anti-racist critical pedagogy in her teaching.

      SC: Story about the book Something Happened in Our Town.

      • Law enforcement got upset and the school district
      • Response video of threat, intimidation, emotional blackmail by local sheriff's department.
      • Intent versus impact - the superintendent may not have had a bad intent when providing an apology, but the impact was painful

      It's not really a battle about or against CRT; it's an attempt to further whitewash American history. (synopsis of SC)

      What are you afraid of?

    1. Named after Soviet psychologist Bluma Zeigarnik, in psychology the Zeigarnik effect occurs when an activity that has been interrupted may be more readily recalled. It postulates that people remember unfinished or interrupted tasks better than completed tasks. In Gestalt psychology, the Zeigarnik effect has been used to demonstrate the general presence of Gestalt phenomena: not just appearing as perceptual effects, but also present in cognition.

      People remember interrupted or unfinished tasks better than completed tasks.

      Examples: I've had friends remember where we left off in conversations months or years later, and we picked right back up.

      I wonder what things affect these memories/abilities? Context? Importance? Other factors?

  10. Jul 2021
    1. Prof Nichola Raihani on Twitter: “Submitted a paper reporting null results to a mid tier journal. Guess how it went. I literally don’t care at this point but I do feel bad for the first author (who I won’t name here). Https://t.co/sX5lTcEl29” / Twitter. (n.d.). Retrieved July 16, 2021, from https://twitter.com/nicholaraihani/status/1415308025179656194

    1. The researchers started with 140,000 hours of YouTube videos of people talking in diverse situations. Then, they designed a program that created clips a few seconds long with the mouth movement for each phoneme, or word sound, annotated. The program filtered out non-English speech, nonspeaking faces, low-quality video, and video that wasn’t shot straight ahead. Then, they cropped the videos around the mouth. That yielded nearly 4000 hours of footage, including more than 127,000 English words.

      The time and effort required to put together this dataset is significant in itself. So much of the data we need to train algorithms simply doesn't exist in a useful format. However, the more we need to manipulate the raw information, the more likely we are to insert our own biases.
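
      For a sense of where those manual choices enter, here is a hedged sketch of the kind of filtering cascade the passage describes; every detector named below (language, speaking face, quality, head pose) is a hypothetical stand-in, and each threshold is a point where the curators' judgment, and potentially their bias, gets baked into the dataset:

      ```python
      # Illustrative sketch only -- not the researchers' pipeline. Each predicate stands in
      # for a real detector; each one silently decides whose speech ends up in the data.
      def is_english(clip) -> bool:        # hypothetical language-ID model
          return clip.get("language") == "en"

      def has_speaking_face(clip) -> bool: # hypothetical face/lip-motion detector
          return clip.get("speaking_face", False)

      def is_high_quality(clip) -> bool:   # hypothetical resolution/blur check
          return clip.get("quality", 0.0) >= 0.8

      def is_frontal(clip) -> bool:        # hypothetical head-pose estimate, in degrees
          return abs(clip.get("yaw_degrees", 90.0)) <= 15.0

      FILTERS = [is_english, has_speaking_face, is_high_quality, is_frontal]

      def curate(clips):
          """Keep only clips that survive every filter; everything else is discarded."""
          return [c for c in clips if all(f(c) for f in FILTERS)]

      raw = [
          {"language": "en", "speaking_face": True, "quality": 0.9, "yaw_degrees": 5.0},
          {"language": "hi", "speaking_face": True, "quality": 0.95, "yaw_degrees": 2.0},
          {"language": "en", "speaking_face": True, "quality": 0.6, "yaw_degrees": 0.0},
      ]
      print(len(curate(raw)))  # only the first clip survives the cascade
      ```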

    1. at least two of the following symptoms:

      This means that by design the trial would MISS asymptomatic COVID-19 cases, which nevertheless make up a substantial proportion (about 60%) of COVID cases. But this was mitigated by taking monthly swabs from Category 2 participants.

    2. per-protocol population

      Why was the decision taken to use a per-protocol analysis if the study was blinded to the participants and the study personnel BUT unblinded to the analysts?

    3. Participants, investigators, study coordinators, study-related personnel, and the sponsor were masked to the treatment group allocation, and masked study nurses at each site were responsible for vaccine preparation and administration

      That still leaves open the role of unblinded data analysts. Why were they not blinded as well, or at least made agnostic as to the status of the allocation? How was allocation concealment achieved, and why was this not described here?

    4. adult volunteers 18 years

      How did they mitigate self-report bias?

  11. Jun 2021
    1. Betsch, C., & Sachse, K. (2013). Debunking vaccination myths: Strong risk negations can increase perceived vaccination risks. Health Psychology: Official Journal of the Division of Health Psychology, American Psychological Association, 32(2), 146–155. https://doi.org/10.1037/a0027387

    1. For example, if a miscalling occurs at the end of a hairpin in a top strand read, the bottom strand read would correctly basecall this sequence before the hairpin is encountered

      strand bias example

  12. May 2021
    1. Examples of this sort of non-logical behaviour used to represent identity can be found in fiction in:

      • Dr. Seuss' The Butter Battle Book (Random House, 1984), which is based on
      • the war between Lilliput and Blefuscu in Jonathan Swift's 1726 satire Gulliver's Travels, which was based on an argument over the correct end to crack an egg once soft-boiled.

      It almost seems related to creating identity politics as bikeshedding: because the real issues are so complex that most people can't grasp all the nuances, it's easier to choose sides based on some entirely different heuristic. Changing sides later on causes too much cognitive dissonance, so once on a path, one must stick to it.

    1. Rohrer, J. M., Schmukle, S., & McElreath, R. (2021). The Only Thing That Can Stop Bad Causal Inference Is Good Causal Inference. PsyArXiv. https://doi.org/10.31234/osf.io/mz5jx