36 Matching Annotations
  1. Last 7 days
  2. Sep 2024
    1. Therefore, similar to Ribes et al. in their study of domain [113], the epistemic positions we propose aim to provide conceptual tools for reasoning about different styles of organizing creativity-oriented research practices in HCI.

      David Ribes' work explores the definition of "domain" in computing and data science, and offers insight into how studying domains helps organize computational systems.

  3. Jul 2024
    1. A critique of the Mass Media... The problem is that the critics want the Mass Media system to operate on the code of "True/False" rather than "Known/Unknown"... But if it were to do so, it would not be Mass Media anymore, but rather the Science System.

      For Mass Media to be Mass Media, it needs to be concerned with selection and filtering, with condensing and making known, not with presenting "all the facts". Sure, it needs to be concerned with truth to a certain degree, but truth is not its primary priority.


      This is a reflection based on my knowledge of Luhmann's theory of society as functionally differentiated systems, as explained by Hans-Georg Moeller (Carefree Wandering) on YouTube.

  4. Apr 2024
    1. Either system can be started with a small list of captions and be increased scientifically.

      Scientific principles had bled so thoroughly into both culture and business that even 1930s advertising for business filing systems featured their ability to be used and expanded scientifically.

  5. Jul 2023
  6. Jun 2023
  7. Apr 2023
  8. Jan 2023
    1. A term recommended by Eve regarding an interdisciplinary approach that accounts for multiple feedback loops within complex systems. Need to consult the complex systems science literature to see whether ADHD is already addressed in that domain.

  9. Sep 2022
    1. When we talk about air in a room, we can describe it by listing the properties of each and every molecule, or we speak in coarse-grained terms about things like temperature and pressure. One description is more "fundamental," in that its regime of validity is wider; but both have a regime of validity, and as long as we are in that regime, the relevant concepts have a perfectly good claim to "existing."

      Another way of saying this is that temperature and pressure are emergent properties of the more fundamental properties of the molecules of air.

      The problem with applying this to free will, though, is that unlike temperature, we have no way to measure free will. If we can't measure it, I am quite comfortable in denying this analogy.
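
      To make the coarse-graining concrete: for an ideal gas, the macroscopic variables are defined as averages over the molecular ones (standard kinetic-theory relations, added here as a worked example):

      ```latex
      % Coarse-grained variables as averages over molecular degrees of freedom (ideal gas)
      \left\langle \tfrac{1}{2} m v^{2} \right\rangle = \tfrac{3}{2} k_{B} T
      \qquad\qquad
      p\,V = N k_{B} T
      ```

      Temperature and pressure carry no information about any individual molecule, yet within their regime of validity they are perfectly good variables in their own right.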

  10. Apr 2022
  11. Mar 2022
    1. As Professor Rangi Mātāmua, a Māori astronomy scholar, explains: Look at what our ancestors did to navigate here—you don’t do that on myths and legends, you do that on science. I think there is empirical science embedded within traditional Māori knowledge ... but what they did to make it meaningful and have purpose is they encompassed it within cultural narratives and spirituality and belief systems, so it wasn’t just seen as this clinical part of society that was devoid of any other connection to our world, it was included into everything. To me, that cultural element gives our science a completely new and deep and rich layer of meaning.
  12. Nov 2021
  13. Sep 2021
  14. Jun 2020
  15. May 2020
  16. Nov 2019
  17. Sep 2019
  18. Jan 2016
    1. Stupid models are extremely useful. They are useful because humans are boundedly rational and because language is imprecise. It is often only by formalizing a complex system that we can make progress in understanding it. Formal models should be a necessary component of the behavioral scientist’s toolkit. Models are stupid, and we need more of them.

      Formal models are explicit in the assumptions they make about how the parts of a system work and interact, and moreover are explicit in the aspects of reality they omit.

      -- Paul Smaldino

    2. Microeconomic models based on rational choice theory are useful for developing intuition, and may even approximate reality in a few special cases, but the history of behavioral economics shows that standard economic theory has also provided a smorgasbord of null hypotheses to be struck down by empirical observation.
    3. Where differences between conditions are indicated, avoid the mistake of running statistical analyses as if you were sampling from a larger population.

      You already have a generating model for your data – it’s your model. Statistical analyses on model data often involve modeling your model with a stupider model. Don’t do this. Instead, run enough simulations to obtain limiting distributions.
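
      A minimal sketch of that workflow in Python, with a hypothetical simulate() standing in for whatever model generated the data: run the model many times per condition and compare the resulting outcome distributions directly, rather than treating a handful of runs as a sample from some larger population.

      ```python
      import random
      import statistics

      def simulate(condition, seed):
          """Hypothetical stand-in for your model: returns one outcome per run."""
          rng = random.Random(seed)
          effect = 0.3 if condition == "treatment" else 0.0  # assumed toy effect
          return rng.gauss(effect, 1.0)

      def limiting_distribution(condition, n_runs=10_000):
          """Run the model enough times that the outcome distribution stabilizes."""
          return [simulate(condition, seed) for seed in range(n_runs)]

      baseline = limiting_distribution("baseline")
      treatment = limiting_distribution("treatment")

      # Compare the distributions themselves -- no null-hypothesis test needed,
      # because the generating process is known: it's your model.
      print(statistics.mean(baseline), statistics.mean(treatment))
      print(statistics.quantiles(baseline, n=10))
      print(statistics.quantiles(treatment, n=10))
      ```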

    4. A model’s strength stems from its precision.

      I have come across too many modeling papers in which the model – that is, the parts, all their components, the relationships between them, and mechanisms for change – is not clearly expressed. This is most common with computational models (such as agent-based models), which can be quite complicated, but also exists in cases of purely mathematical models.

    5. However, I want to be careful not to elevate modelers above those scientists who employ other methods.

      This is important for at least two reasons, the first and foremost of which is that science absolutely requires empirical data. Those data are often painstaking to collect, requiring clever, meticulous, and occasionally tedious labor. There is a certain kind of laziness inherent in the professional modeler, who builds entire worlds from his or her desk using only pen, paper, and computer. Relatedly, many scientists are truly fantastic communicators, and present extremely clear theories that advance scientific understanding without a formal model in sight. Charles Darwin, to give an extreme example, laid almost all the foundations of modern evolutionary biology without writing down a single equation.

    6. Ultimately, the theory has been shown to be incorrect, and has been epistemically replaced by the theory of General Relativity. Nevertheless, the theory is able to make exceptionally good approximations of gravitational forces – so good that NASA’s moon missions have relied upon them.

      General Relativity may also turn out to be a "dumb model". https://twitter.com/worrydream/status/672957979545571329

    7. Table 1. Twelve functions served by false models. Adapted with permission from Wimsatt.

      Twelve good uses for dumb models, William Wimsatt (1987).

    8. To paraphrase Gunawardena (2014), a model is a logical engine for turning assumptions into conclusions.

      By making our assumptions explicit, we can clearly assess their implied conclusions. These conclusions will inevitably be flawed, because the assumptions are ultimately incorrect, or at least incomplete. By examining how they differ from reality, we can refine our models, and thereby our theories, and so gradually become less wrong.

    9. the stupidity of a model is often its strength. By focusing on some key aspects of a real-world system (i.e., those aspects instantiated in the model), we can investigate how such a system would work if, in principle, we really could ignore everything we are ignoring. This only sounds absurd until one recognizes that, in our theorizing about the nature of reality – both as scientists and as quotidian humans hopelessly entangled in myriad webs of connection and conflict – we ignore things all the time.
    10. The generalized linear model, the work horse of the social sciences, models data as being randomly drawn from a distribution whose mean varies according to some parameter. The linear model is so obviously wrong yet so useful that the mathematical anthropologist Richard McElreath has dubbed it “the geocentric model of applied statistics,” in reference to the Ptolemaic model of the solar system that erroneously placed the earth rather than the sun at the center but nevertheless produced accurate predictions of planetary motion as they appeared in the night sky (McElreath 2015).

      A model that approximates some aspect of reality can be very useful, even if the model itself is flat-out wrong.

      But on the other hand, we can't accept a model's ability to approximate reality as hard proof that the model is correct.
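
      For reference, the specification being described, written out in standard notation (added here as a reminder, not quoted from the paper): each observation is drawn from a distribution whose mean is tied to a linear predictor through a link function g, with ordinary linear regression as the Gaussian special case.

      ```latex
      % Generalized linear model: outcome distribution with mean set by a linear predictor
      y_i \sim \mathrm{Dist}(\mu_i),
      \qquad g(\mu_i) = \beta_0 + \beta_1 x_{i1} + \dots + \beta_k x_{ik}

      % Gaussian special case: ordinary linear regression
      y_i \sim \mathcal{N}(\mu_i, \sigma^2),
      \qquad \mu_i = \beta_0 + \beta_1 x_i
      ```

      Every assumption in that specification (independent draws, a convenient outcome distribution, strictly additive effects) is "wrong" for most real data, which is exactly the geocentric point.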

    11. Unfortunately, my own experience working with complex systems and working among complexity scientists suggests that we are hardly immune to such stupidity. Consider the case of Marilyn vos Savant and the Monty Hall problem.

      Many people, including some with training in advanced mathematics, contradicted her smugly. But a simple computer program that models the situation can demonstrate her point.

      2/3 of the time, your first pick will be wrong. Every time that happens, the door Monty didn't open is the winner, so switching wins 2/3 of the time.

      http://marilynvossavant.com/game-show-problem/
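
      A sketch of such a program (my own quick version, not anything from the linked page): simulate many games under both strategies and count wins.

      ```python
      import random

      def monty_hall(switch, trials=100_000):
          """Estimate the probability of winning the car when you switch (or stay)."""
          wins = 0
          for _ in range(trials):
              doors = [0, 1, 2]
              car = random.choice(doors)
              pick = random.choice(doors)
              # Monty opens a door that is neither the player's pick nor the car.
              opened = random.choice([d for d in doors if d != pick and d != car])
              if switch:
                  # Switch to the one remaining closed door.
                  pick = next(d for d in doors if d != pick and d != opened)
              wins += (pick == car)
          return wins / trials

      print("stay:  ", monty_hall(switch=False))   # close to 1/3
      print("switch:", monty_hall(switch=True))    # close to 2/3
      ```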

    12. Mitch Resnick, in his book Turtles, Termites, and Traffic Jams, details his experiences teaching gifted high school students about the dynamics of complex systems using artificial life models (Resnick 1994). He showed them how organized behavior could emerge when individuals responded only to local stimuli using simple rules, without the need for a central coordinating authority. Resnick reports that even after weeks spent demonstrating the principles of emergence, using computer simulations that the students programmed themselves, many students still refused to believe that what they were seeing could really work without central leadership.
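
      As a concrete illustration, here is a toy sketch of my own (not Resnick's code) in the spirit of the termite wood-chip model: each agent wanders randomly, picks up an isolated chip it bumps into, and drops its chip beside another one. Nothing in the rules mentions piles, yet clumps of chips emerge.

      ```python
      import random

      SIZE, CHIPS, TERMITES, STEPS = 30, 150, 20, 100_000

      # chips[y][x] is True where a wood chip lies (toroidal grid).
      chips = [[False] * SIZE for _ in range(SIZE)]
      placed = 0
      while placed < CHIPS:
          x, y = random.randrange(SIZE), random.randrange(SIZE)
          if not chips[y][x]:
              chips[y][x] = True
              placed += 1

      # Each termite is [x, y, carrying_chip].
      termites = [[random.randrange(SIZE), random.randrange(SIZE), False]
                  for _ in range(TERMITES)]

      def wander(x, y):
          """One step in a random cardinal direction, wrapping around the edges."""
          dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
          return (x + dx) % SIZE, (y + dy) % SIZE

      for _ in range(STEPS):
          for t in termites:
              t[0], t[1] = wander(t[0], t[1])
              x, y, carrying = t
              if not carrying and chips[y][x]:
                  chips[y][x] = False          # local rule 1: pick up a chip you bump into
                  t[2] = True
              elif carrying and chips[y][x]:
                  nx, ny = wander(x, y)        # local rule 2: drop your chip beside another one
                  if not chips[ny][nx]:
                      chips[ny][nx] = True
                      t[2] = False

      # Clumps show up in the printout even though no rule refers to a pile.
      for row in chips:
          print("".join("#" if cell else "." for cell in row))
      ```

      No line of the program refers to a pile; the piles exist only at the aggregate level.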