 Jan 2023

en.wikipedia.org

A term Eve recommended for an interdisciplinary approach that accounts for multiple feedback loops within complex systems. Need to consult the complex systems science literature to see whether ADHD is already addressed in that domain.

 Jun 2020


Jazayeri, A., & Yang, C. C. (2020). Motif Discovery Algorithms in Static and Temporal Networks: A Survey. ArXiv:2005.09721 [Physics]. http://arxiv.org/abs/2005.09721

 Jan 2016

smaldino.com

Stupid models are extremely useful. They are useful because humans are boundedly rational and because language is imprecise. It is often only by formalizing a complex system that we can make progress in understanding it. Formal models should be a necessary component of the behavioral scientist’s toolkit. Models are stupid, and we need more of them.
Formal models are explicit in the assumptions they make about how the parts of a system work and interact, and moreover are explicit in the aspects of reality they omit.

Microeconomic models based on rational choice theory are useful for developing intuition, and may even approximate reality in a few special cases, but the history of behavioral economics shows that standard economic theory has also provided a smorgasbord of null hypotheses to be struck down by empirical observation.

Where differences between conditions are indicated, avoid the mistake of running statistical analyses as if you were sampling from a larger population.
You already have a generating model for your data – it’s your model. Statistical analyses on model data often involve modeling your model with a stupider model. Don’t do this. Instead, run enough simulations to obtain limiting distributions.
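The advice above can be sketched concretely. The example below is a hypothetical illustration (the biased random walk stands in for whatever model generated your data): instead of a significance test between two conditions, run enough replicates to read off each condition's limiting distribution directly.

```python
import random

def run_model(bias, steps, rng):
    """One replicate of a toy model: a biased random walk.
    (Stand-in for whatever simulation generated your data.)"""
    position = 0
    for _ in range(steps):
        position += 1 if rng.random() < bias else -1
    return position

rng = random.Random(0)

# Instead of a t-test between two "conditions", run enough replicates
# to see each condition's limiting distribution directly.
outcomes_a = [run_model(0.5, 100, rng) for _ in range(10_000)]
outcomes_b = [run_model(0.6, 100, rng) for _ in range(10_000)]

mean_a = sum(outcomes_a) / len(outcomes_a)   # unbiased walk: ~0
mean_b = sum(outcomes_b) / len(outcomes_b)   # biased walk: ~20
```

With 10,000 replicates, the difference between conditions is visible directly in the empirical distributions; no second statistical model of the model is needed.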

A model’s strength stems from its precision.
I have come across too many modeling papers in which the model – that is, the parts, all their components, the relationships between them, and mechanisms for change – is not clearly expressed. This is most common with computational models (such as agent-based models), which can be quite complicated, but also exists in cases of purely mathematical models.

However, I want to be careful not to elevate modelers above those scientists who employ other methods.
This is important for at least two reasons, the first and foremost of which is that science absolutely requires empirical data. Those data are often painstaking to collect, requiring clever, meticulous, and occasionally tedious labor. There is a certain kind of laziness inherent in the professional modeler, who builds entire worlds from his or her desk using only pen, paper, and computer. Relatedly, many scientists are truly fantastic communicators, and present extremely clear theories that advance scientific understanding without a formal model in sight. Charles Darwin, to give an extreme example, laid almost all the foundations of modern evolutionary biology without writing down a single equation.

Ultimately, the Newtonian theory of gravity has been shown to be incorrect, and has been epistemically replaced by the theory of General Relativity. Nevertheless, the theory makes exceptionally good approximations of gravitational forces – so good that NASA’s moon missions relied upon them.
General Relativity may also turn out to be a "dumb model". https://twitter.com/worrydream/status/672957979545571329

Table 1. Twelve functions served by false models. Adapted with permission from Wimsatt.
Twelve good uses for dumb models, William Wimsatt (1987).

To paraphrase Gunawardena (2014), a model is a logical engine for turning assumptions into conclusions.
By making our assumptions explicit, we can clearly assess their implied conclusions. These conclusions will inevitably be flawed, because the assumptions are ultimately incorrect, or at least incomplete. By examining how they differ from reality, we can refine our models, and thereby our theories, and so gradually become less wrong.

The stupidity of a model is often its strength. By focusing on some key aspects of a real-world system (i.e., those aspects instantiated in the model), we can investigate how such a system would work if, in principle, we really could ignore everything we are ignoring. This only sounds absurd until one recognizes that, in our theorizing about the nature of reality – both as scientists and as quotidian humans hopelessly entangled in myriad webs of connection and conflict – we ignore things all the time.

The generalized linear model, the workhorse of the social sciences, models data as being randomly drawn from a distribution whose mean varies according to some parameter. The linear model is so obviously wrong yet so useful that the mathematical anthropologist Richard McElreath has dubbed it “the geocentric model of applied statistics,” in reference to the Ptolemaic model of the solar system, which erroneously placed the earth rather than the sun at the center but nevertheless produced accurate predictions of planetary motion as it appeared in the night sky (McElreath 2015).
A model that approximates some aspect of reality can be very useful, even if the model itself is flat-out wrong.
Conversely, though, a model's success at approximating reality is not proof that the model is correct.
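To make the "wrong but useful" point concrete, here is a small illustrative sketch (my own, not from the paper): a straight line fit by least squares to data generated by a quadratic law. The linear model is the wrong functional form, yet it still recovers the direction and rough magnitude of the trend.

```python
# Least-squares fit of a straight line to data generated by y = x**2.
# The linear model is wrong (the truth is quadratic), but it still
# captures the direction and approximate size of the trend.
xs = [i / 10 for i in range(31)]   # x in [0, 3]
ys = [x ** 2 for x in xs]          # truth: quadratic, no noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x   # negative: the line undershoots at x=0
```

The fitted slope is positive and sizable, so the wrong model still supports the qualitatively correct conclusion that y increases with x; but a good fit over this range would not prove the relationship is linear.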

Unfortunately, my own experience working with complex systems and among complexity scientists suggests that we are hardly immune to such stupidity. Consider the case of Marilyn vos Savant and the Monty Hall problem.
Many people, including some with training in advanced mathematics, smugly contradicted her. But a simple computer program that models the situation can demonstrate her point.
Two-thirds of the time, your first pick will be wrong. Whenever that happens, the door Monty didn't open hides the prize, so switching wins two-thirds of the time.
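That verbal argument is easy to check by simulation. A minimal sketch (function names are my own) that plays the game many times under both strategies:

```python
import random

def play(switch, rng):
    """One round of Monty Hall; returns True if the player wins."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a door that is neither the player's pick nor the prize.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(42)
n = 100_000
wins_switch = sum(play(True, rng) for _ in range(n)) / n   # ~2/3
wins_stay = sum(play(False, rng) for _ in range(n)) / n    # ~1/3
```

With 100,000 rounds the win rates sit tightly around 2/3 for switching and 1/3 for staying, exactly as vos Savant argued.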

Mitch Resnick, in his book Turtles, Termites, and Traffic Jams, details his experiences teaching gifted high school students about the dynamics of complex systems using artificial life models (Resnick 1994). He showed them how organized behavior could emerge when individuals responded only to local stimuli using simple rules, without the need for a central coordinating authority. Resnick reports that even after weeks spent demonstrating the principles of emergence, using computer simulations that the students programmed themselves, many students still refused to believe that what they were seeing could really work without central leadership.
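Resnick's students used StarLogo; the sketch below is a far simpler stand-in of my own (not one of Resnick's models) showing the same idea: cells on a ring following a purely local majority rule organize into homogeneous domains with no central coordinator.

```python
import random

def majority_step(state):
    """Synchronous update: each cell adopts the majority of itself and
    its two neighbors on a ring. Purely local; no coordinator."""
    n = len(state)
    return [1 if state[(i - 1) % n] + state[i] + state[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def domain_walls(state):
    """Count adjacent disagreements (boundaries between domains)."""
    n = len(state)
    return sum(state[i] != state[(i + 1) % n] for i in range(n))

rng = random.Random(1)
state = [rng.randint(0, 1) for _ in range(60)]   # random initial noise
initial_walls = domain_walls(state)
for _ in range(30):
    state = majority_step(state)
final_walls = domain_walls(state)
```

Isolated cells are absorbed by their neighbors, so the number of domain boundaries never grows: order emerges from local rules alone, which is precisely the point the students found so hard to accept.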
