- Jun 2025
-
www.biorxiv.org
-
Reviewer #3 (Public review):
Summary:
The study investigates the development of reinforcement learning across the lifespan with a large sample of participants recruited for an online game. It finds that children gradually develop their abilities to learn reward probability, possibly hindered by their immature spatial processing and probabilistic reasoning abilities. Motor noise and exploration after a failure both contribute to children's subpar performance.
Strengths:
Experimental manipulations of both the continuity of movement options and the probabilistic nature of the reward function enable the inference of what cognitive factors differ between age groups. A large sample of participants is studied. The model-based analysis provides further insights into the development of reinforcement learning ability.
Weaknesses:
The conclusion that immature spatial processing and probabilistic reasoning abilities limit reinforcement learning here still needs more direct evidence.
-
Author response:
The following is the authors’ response to the original reviews
Overview of changes in the revision
We thank the reviewers for the very helpful comments and have extensively revised the paper. We provide point-by-point responses below and here briefly highlight the major changes:
(1) We expanded the discussion of the relevant literature in children and adults.
(2) We improved the contextualization of our experimental design within previous reinforcement studies in both cognitive and motor domains, highlighting the interplay between the two.
(3) We reorganized the primary and supplementary results to better communicate the findings of the studies.
(4) The modeling has been significantly revised and extended. We now formally compare 31 noise-based models and one value-based model, and this led to a different model from the original being the preferred model. This has to a large extent cleaned up the modeling results. The preferred model is a special case (with no exploration after success) of the model proposed in Therrien et al. (2018). We also provide examples of individual fits of the model, fit all four tasks and show group fits for all, examine fits vs. data for the clamp phases by age, provide measures of relative and absolute goodness of fit, and examine how the optimal level of exploration varies with motor noise.
Reviewer #1 (Public review):
Summary:
Here the authors address how reinforcement-based sensorimotor adaptation changes throughout development. To address this question, they collected many participants in ages that ranged from small children (3 years old) to adulthood (18+ years old). The authors used four experiments to manipulate whether binary and positive reinforcement was provided probabilistically (e.g., 30 or 50%) versus deterministically (e.g., 100%), and continuous (infinite possible locations) versus discrete (binned possible locations) when the probability of reinforcement varied along the span of a large redundant target. The authors found that both movement variability and the extent of adaptation changed with age.
Thank you for reviewing our work. One note of clarification: this work focuses on reinforcement-based learning throughout development but does not evaluate sensorimotor adaptation. The four tasks presented in this work are completed with veridical trajectory feedback (no perturbation).
The goal is to understand how children at different ages adjust their movements in response to reward feedback, not to evaluate sensorimotor adaptation. We now explain this distinction on line 35.
Strengths:
The major strength of the paper is the number of participants collected (n = 385). The authors also answer their primary question, that reinforcement-based sensorimotor adaptation changes throughout development, which was shown by utilizing established experimental designs and computational modelling.
Thank you.
Weaknesses:
Potential concerns involve inconsistent findings with secondary analyses, current assumptions that impact both interpretation and computational modelling, and a lack of clearly stated hypotheses.
(1) Multiple regression and Mediation Analyses.
The challenge with these secondary analyses is that:
(a) The results are inconsistent between Experiments 1 and 2, and the analysis was not performed for Experiments 3 and 4,
(b) The authors used a two-stage procedure of using multiple regression to determine what variables to use for the mediation analysis, and
(c) The authors already have a trial-by-trial model that is arguably more insightful.
Given this, some suggested changes are to:
(a) Perform the mediation analysis with all the possible variables (i.e., not informed by multiple regression) to see if the results are consistent.
(b) Move the regression/mediation analysis to Supplementary, since it is slightly distracting given current inconsistencies and that the trial-by-trial model is arguably more insightful.
Based on these comments, we have chosen to remove the multiple regression and mediation analyses. We agree that they were distracting and that the trial-by-trial model allows for differentiation of motor noise from exploration variability in the learning block.
(2) Variability for different phases and model assumptions:
A nice feature of the experimental design is the use of success and failure clamps. These clamped phases, along with baseline, are useful because they can provide insights into the partitioning of motor and exploratory noise. Based on the assumptions of the model, the success clamp would only reflect variability due to motor noise (excludes variability due to exploratory noise and any variability due to updates in reach aim). Thus, it is reasonable to expect that the success clamps would have lower variability than the failure clamps (which it obviously does in Figure 6), and presumably baseline (which provides success and failure feedback, thus would contain motor noise and likely some exploratory noise).
However, in Figure 6, one visually observes greater variability during the success clamp (where it is assumed variability only comes from motor noise) compared to baseline (where variability would come from: (a) Motor noise.
(b) Likely some exploratory noise since there were some failures.
(c) Updates in reach aim).
Thanks for this comment. It made us realize that some of our terminology was unintentionally misleading. Reaching to discrete targets in the Baseline block was done to a) determine if participants could move successfully to targets that are the same width as the 100% reward zone in the continuous targets and b) determine if there are age dependent changes in movement precision. We now realize that the term Baseline Variability was misleading and should really be called Baseline Precision.
This is an important distinction that bears on this reviewer's comment. In clamp trials, participants move to continuous targets. In baseline, participants move to discrete targets presented at different locations. Clamp Variability cannot be directly compared to Baseline Precision because they are qualitatively different. Since the target changes on each baseline trial, we would not expect updating of desired reach (the target is the desired reach) and there is therefore no updating of reach based on success or failure. The SD we calculate over baseline trials is the endpoint variability of the reach locations relative to the target centers. In success clamp, there are no targets so the task is qualitatively different.
We have updated the text to clarify terminology, expand upon our operational definitions, and motivate the distinct role of the baseline block in our task paradigm (line 674).
Given the comment above, can the authors please:
(a) Statistically compare movement variability between the baseline, success clamp, and failure clamp phases.
Given our explanation in the previous point we don't think that comparing baseline to the clamp makes sense as the trials are qualitatively different.
(b) The authors have examined how their model predicts variability during success clamps and failure clamps, but can they also please show predictions for baseline (similar to that of Cashaback et al., 2019; Supplementary B, which alternatively used a no feedback baseline)?
Again, we do not think it makes sense to predict the baseline, which, as we mention above, has discrete targets in contrast to the continuous targets in the learning phase.
(c) Can the authors show whether participants updated their aim towards their last successful reach during the success clamp? This would be a particularly insightful analysis of model assumptions.
We have now compared 31 models (see full details in next response) which include the 7 models in Roth et al. (2023). Several of these model variants have updating even after success (with so-called planning noise). We also now fit the model to the data that includes the clamp phases (we can't easily fit to the success clamp alone as there are only 10 trials). We find that the preferred model is one that does not include updating after success.
(d) Different sources of movement variability have been proposed in the literature, as have different related models. One possibility is that the nervous system has knowledge of 'planned (noise)' movement variability that is always present, irrespective of success (van Beers, R.J. (2009). Motor learning is optimally tuned to the properties of motor noise. Neuron, 63(3), 406-417). The authors have used slightly different variations of their model in the past. Roth et al (2023) directly compared several different plausible models with various combinations of motor, planned, and exploratory noise (Roth A, 2023, "Reinforcement-based processes actively regulate motor exploration along redundant solution manifolds." Proceedings of the Royal Society B 290: 20231475: see Supplemental). Their best-fit model seems similar to the one the authors propose here, but the current paper has the added benefit of the success and failure clamps to tease the different potential models apart. In light of the results of a), b), and c), the authors are encouraged to provide a paragraph on how their model relates to the various sources of movement variability and other models proposed in the literature.
Thank you for this. We realized that the models presented in Roth et al. (2023), as well as in other papers, are all special cases of a more general model. Moreover, in total there are 30 possible variants of the full model, so we have now fit all 31 models to our larger datasets and performed model selection (Results and Methods). All the models can be efficiently fit by a Kalman smoother to the actual data (rather than to summary statistics, which has sometimes been done). For model selection, we fit only the 100 learning trials and chose the preferred model based on BIC on the children's data (Figure 5—figure Supplement 1). After selecting the preferred model we then refit this model to all trials including the clamps so as to obtain the best parameter estimates.
The preferred model was the same whether we combined the continuous and discrete probabilistic data or examined each task separately, either for only the children or for the children and adults combined. The preferred model is a special case (no exploration after success) of the one proposed in Therrien et al. (2018) and has exploration variability after failure and motor noise, with full updating (incorporating any exploration variability) after success. This model differs from the model in the original submission, which included a partial update of the desired reach after exploration; this partial update was considered the learning rate. The current model suggests a unity learning rate.
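For readers who want a concrete picture of this preferred variant, a minimal simulation sketch is given below. It is a toy illustration only: the reward landscape, workspace units, starting position, and noise values are placeholder assumptions, not the task's actual settings or our fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward_prob(x):
    # Placeholder graded reward landscape peaking at x = 0.75 (illustrative only;
    # the real reward zones and gradients are defined in the Methods).
    return np.clip(1.0 - 6.0 * np.abs(x - 0.75), 0.0, 1.0)

def simulate(n_trials=100, sigma_motor=0.05, sigma_explore=0.08, x_plan=0.5):
    """Exploration/motor-noise model: motor noise on every reach, full (unity
    learning rate) update of the planned reach after success, Gaussian
    exploration of the plan after failure, no exploration after success."""
    reaches, rewards = [], []
    for _ in range(n_trials):
        x = x_plan + rng.normal(0.0, sigma_motor)             # executed reach = plan + motor noise
        r = rng.random() < reward_prob(x)                     # binary, probabilistic reward
        if r:
            x_plan = x                                        # adopt the rewarded reach
        else:
            x_plan = x_plan + rng.normal(0.0, sigma_explore)  # explore after failure
        reaches.append(x)
        rewards.append(r)
    return np.array(reaches), np.array(rewards)

reaches, rewards = simulate()
print(f"mean reward: {rewards.mean():.2f}, final reach: {reaches[-1]:.2f}")
```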
In addition, as suggested by another reviewer, we also fit a value-based model which we adapted from the model described in Giron et al. (2023). This model was not preferred.
We have added a paragraph to the Discussion highlighting different sources of variability and links to our model comparison.
(e) line 155. Why would the success clamp be composed of both motor and exploratory noise? Please clarify in the text
This sentence was written to refer to clamps in general and not just success clamps. However, in the revision this sentence seemed unnecessary so we have removed it.
(3) Hypotheses:
The introduction did not have any hypotheses of development and reinforcement, despite the discussion above setting up potential hypotheses. Did the authors have any hypotheses related to why they might expect age to change motor noise, exploratory noise, and learning rates? If so, what would the experimental behaviour look like to confirm these hypotheses? Currently, the manuscript reads more as an exploratory study, which is certainly fine if true; it should just be explicitly stated in the introduction. Note: on line 144, this is a prediction, not a hypothesis. Line 225: this idea could be sharpened. I believe the authors are speaking to the idea of having more explicit knowledge of action-target pairings changing behaviour.
We have included our hypotheses and predictions at two points in the paper. In the introduction, we modified the text to:
"We hypothesized that children's reinforcement learning abilities would improve with age, and depend on the developmental trajectory of exploration variability, learning rate (how much people adjust their reach after success), and motor noise (here defined as all sources of noise associated with movement, including sensory noise, memory noise, and motor noise). We think that these factors depend on the developmental progression of neural circuits that contribute to reinforcement learning abilities (Raznahan et al., 2014; Nelson et al., 2000; Schultz, 1998)."
In results we modified the sentence to:
"We predicted that discrete targets could increase exploration by encouraging children to move to a different target after failure.”
Reviewer #2 (Public review):
Summary:
In this study, Hill and colleagues use a novel reinforcement-based motor learning task ("RML"), asking how aspects of RML change over the course of development from toddler years through adolescence. Multiple versions of the RML task were used in different samples, which varied on two dimensions: whether the reward probability of a given hand movement direction was deterministic or probabilistic, and whether the solution space had continuous reach targets or discrete reach targets. Using analyses of both raw behavioral data and model fits, the authors report four main results: First, developmental improvements reflected 3 clear changes, including increases in exploration, an increase in the RL learning rate, and a reduction of intrinsic motor noise. Second, changes to the task that made it discrete and/or deterministic both rescued performance in the youngest age groups, suggesting that observed deficits could be linked to continuous/probabilistic learning settings. Overall, the results shed light on how RML changes throughout human development, and the modeling characterizes the specific learning deficits seen in the youngest ages.
Strengths:
(1) This impressive work addresses an understudied subfield of motor control/psychology - the developmental trajectory of motor learning. It is thus timely and will interest many researchers.
(2) The task, analysis, and modeling methods are very strong. The empirical findings are rather clear and compelling, and the analysis approaches are convincing. Thus, at the empirical level, this study has very few weaknesses.
(3) The large sample sizes and in-lab replications further reflect the laudable rigor of the study.
(4) The main and supplemental figures are clear and concise.
Thank you.
Weaknesses:
(1) Framing.
One weakness of the current paper is the framing, namely w/r/t what can be considered "cognitive" versus "non-cognitive" ("procedural?") here. In the Intro, for example, it is stated that there are specific features of RML tasks that deviate from cognitive tasks. This is of course true in terms of having a continuous choice space and motor noise, but spatially correlated reward functions are not a unique feature of motor learning (see e.g. Giron et al., 2023, NHB). Given the result here that simplifying the spatial memory demands of the task greatly improved learning for the youngest cohort, it is hard to say whether the task is truly getting at a motor learning process or more generic cognitive capacities for spatial learning, working memory, and hypothesis testing. This is not a logical problem with the design, as spatial reasoning and working memory are intrinsically tied to motor learning. However, I think the framing of the study could be revised to focus in on what the authors truly think is motor about the task versus more general psychological mechanisms. Indeed, it may be the case that deficits in motor learning in young children are mostly about cognitive factors, which is still an interesting result!
Thank you for these comments on the framing of our study. We now clearly acknowledge that all motor tasks have cognitive components (new paragraph at line 65). We also explain why we think our task has features not present in typical cognitive tasks.
(2) Links to other scholarship.
If I'm not mistaken, a common observation in studies of the development of reinforcement learning is a decrease in exploration over development (e.g., Nussenbaum and Hartley, 2019; Giron et al., 2023; Schulz et al., 2019); this contrasts with the current results which instead show an increase. It would be nice to see a more direct discussion of previous findings showing decreases in exploration over development, and why the current study deviates from that. It could also be useful for the authors to bring in concepts of different types of exploration (e.g. "directed" vs "random"), in their interpretations and potentially in their modeling.
We recognize that our results differ from prior work. The optimal exploration pattern differs from task to task. We now discuss that exploration is not one-size-fits-all; its benefits vary depending upon the task. We have added the following paragraphs to the Discussion section:
"One major finding from this study is that exploration variability increases with age. Some other studies of development have shown that exploration can decrease with age indicating that adults explore less compared to children (Schulz et al., 2019; Meder et al., 2021; Giron et al., 2023). We believe the divergence between our work and these previous findings is largely due to the experimental design of our study and the role of motor noise. In the paradigm used initially by Schulz et al. (2019) and replicated in different age groups by Meder et al. (2021) and Giron et al. (2023), participants push buttons on a two-dimensional grid to reveal continuous-valued rewards that are spatially correlated. Participants are unaware that there is a maximum reward available and therefore children may continue to explore to reduce uncertainty if they have difficulty evaluating whether they have reached a maxima. In our task by contrast, participants are given binary reward and told that there is a region in which reaches will always be rewarded. Motor noise is an additional factor which plays a key role in our reaching task but minimal if any role in the discretized grid task. As we show in simulations of our task, as motor noise goes down (as it is known to do through development) the optimal amount of exploration goes up (see Figure 7—figure Supplement 2 and Appendix 1). Therefore, the behavior of our participants is rational in terms of R230 increasing exploration as motor noise decreases.
A key result in our study is that exploration in our task reflects sensitivity to failure. Older children make larger adjustments after failure compared to younger children to find the highly rewarded zone more quickly. Dhawale et al. (2017) discuss the different contexts in which a participant may explore versus exploit (i.e., stick at the same position). Exploration is beneficial when reward is low as this indicates that the current solution is no longer ideal, and the participant should search for a better solution. Konrad et al. (2025) have recently shown this behavior in a real-world throwing task where 6 to 12 year old children increased throwing variability after missed trials and minimized variability after successful trials. This has also been shown in a postural motor control task where participants were more variable after non-rewarded trials compared to rewarded trials (Van Mastrigt et al., 2020). In general, these studies suggest that the optimal amount of exploration is dependent on the specifics of the task."
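To make the simulation argument in the first quoted paragraph concrete, a toy grid search in that spirit is sketched below: for each level of motor noise, it searches over exploration SDs for the one that maximizes mean reward under the preferred model. The reward landscape, parameter grids, and trial counts are placeholder assumptions; the analysis with the actual task geometry is in Figure 7—figure Supplement 2 and Appendix 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def reward_prob(x):
    # Placeholder graded reward landscape (illustrative only).
    return np.clip(1.0 - 6.0 * np.abs(x - 0.75), 0.0, 1.0)

def mean_reward(sigma_motor, sigma_explore, n_trials=100, n_sims=200):
    """Average success rate of the preferred model for one noise setting."""
    total = 0.0
    for _ in range(n_sims):
        x_plan, hits = 0.5, 0
        for _ in range(n_trials):
            x = x_plan + rng.normal(0.0, sigma_motor)
            if rng.random() < reward_prob(x):
                x_plan = x                                 # full update after success
                hits += 1
            else:
                x_plan += rng.normal(0.0, sigma_explore)   # explore after failure
        total += hits / n_trials
    return total / n_sims

explore_grid = np.linspace(0.0, 0.3, 13)
for sigma_motor in (0.02, 0.06, 0.10):   # low motor noise roughly corresponds to older participants (assumption)
    rewards = [mean_reward(sigma_motor, se) for se in explore_grid]
    best = explore_grid[int(np.argmax(rewards))]
    print(f"motor noise {sigma_motor:.2f}: best exploration SD around {best:.2f}")
```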
(3) Modeling.
First, I may have missed something, but it is unclear to me if the model is actually accounting for the gradient of rewards (e.g., if I get a probabilistic reward moving at 45°, but then don't get one at 40°, I should be more likely to try 50° next then 35°). I couldn't tell from the current equations if this was the case, or if exploration was essentially "unsigned," nor if the multiple-trials-back regression analysis would truly capture signed behavior. If the model is sensitive to the gradient, it would be nice if this was more clear in the Methods. If not, it would be interesting to have a model that does "function approximation" of the task space, and see if that improves the fit or explains developmental changes.
The model we use (similar to Roth et al. (2023) and Therrien et al. (2016, 2018)) does not model the gradient. Exploration is always zero-mean Gaussian. As suggested by the reviewer, we now also fit a value-based model (described starting at line 810) which we adapted from the model presented in Giron et al. (2023). We show that the exploration and noise-based model is preferred over the value-based model.
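For contrast, a generic value-based learner over a discretized reach axis might look like the sketch below (softmax choice over learned values with a delta-rule update). This is not the specific adaptation of the Giron et al. (2023) model described starting at line 810; it is only a generic illustration of the qualitative difference from the noise-based scheme, with placeholder names and parameter values.

```python
import numpy as np

rng = np.random.default_rng(3)

def reward_prob(x):
    # Placeholder graded reward landscape (illustrative only).
    return np.clip(1.0 - 6.0 * np.abs(x - 0.75), 0.0, 1.0)

n_bins, n_trials = 50, 100
centers = np.linspace(0.0, 1.0, n_bins)    # discretized reach positions
values = np.zeros(n_bins)                  # value estimate per position
alpha, beta, sigma_motor = 0.3, 5.0, 0.05  # learning rate, inverse temperature, motor noise (placeholders)

for _ in range(n_trials):
    probs = np.exp(beta * values)
    probs /= probs.sum()
    chosen = rng.choice(n_bins, p=probs)                  # softmax choice over current values
    x = centers[chosen] + rng.normal(0.0, sigma_motor)    # executed reach still has motor noise
    r = float(rng.random() < reward_prob(x))
    values[chosen] += alpha * (r - values[chosen])        # delta-rule update of the chosen bin

print(f"highest-valued position: {centers[np.argmax(values)]:.2f}")
```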
The multiple-trials-back regression was unsigned as the intent was to look at the magnitude and not the direction of the change in movement. We have decided to remove this analysis from the manuscript as it was a source of confusion and a secondary analysis that did not add substantially to the findings of these studies.
Second, I am curious if the current modeling approach could incorporate a kind of "action hysteresis" (aka perseveration), such that regardless of previous outcomes, the same action is biased to be repeated (or, based on parameter settings, avoided).
In some sense, the learning rate in the model in the original submission is highly related to perseveration. For example, if the learning rate is 0, then there is complete perseveration as you simply repeat the same desired movement. If the rate is 1, there is no perseveration, and values in between reflect different amounts of perseveration. Therefore, it is not easy to separate learning rate from perseveration. Adding perseveration as another parameter would likely make it and the learning rate unidentifiable. However, we now compare 31 models and those that have a non-unity learning rate are not preferred, suggesting there is little perseveration.
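Written schematically (our notation; the exact update rule is specified in the original submission's Methods), a learning-rate update of the planned reach toward the last executed reach takes the form

\[ x^{\mathrm{plan}}_{t+1} = x^{\mathrm{plan}}_{t} + \eta \left( x_{t} - x^{\mathrm{plan}}_{t} \right) \]

so that \(\eta = 0\) reproduces the previous planned reach exactly (complete perseveration), while \(\eta = 1\) sets the next plan to the last executed reach \(x_t\) (no perseveration); an additional additive perseveration parameter would therefore trade off against \(\eta\) and be hard to identify separately.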
(4) Psychological mechanisms. There is a line of work that shows that when children and adults perform RL tasks they use a combination of working memory and trial-by-trial incremental learning processes (e.g., Master et al., 2020; Collins and Frank 2012). Thus, the observed increase in the learning rate over development could in theory reflect improvements in instrumental learning, working memory, or both. Could it be that older participants are better at remembering their recent movements in short-term memory (Hadjiosif et al., 2023; Hillman et al., 2024)?
We agree that cognitive processes, such as working memory or visuospatial processing, play a role in our task and describe cognitive elements of our task in the introduction (new paragraph at line 65). However, the sensorimotor model we fit to the data does a good job of explaining the variation across age, which suggests that age-dependent cognitive processes probably play a smaller role.
Reviewer #3 (Public review):
Summary:
The study investigates reinforcement learning across the lifespan with a large sample of participants recruited for an online game. It finds that children gradually develop their abilities to learn reward probability, possibly hindered by their immature spatial processing and probabilistic reasoning abilities. Motor noise, reinforcement learning rate, and exploration after a failure all contribute to children's subpar performance.
Strengths:
(1) The paradigm is novel because it requires continuous movement to indicate people's choices, as opposed to discrete actions in previous studies.
(2) A large sample of participants were recruited.
(3) The model-based analysis provides further insights into the development of reinforcement learning ability.
Thank you.
Weaknesses:
(1) The adequacy of model-based analysis is questionable, given the current presentation and some inconsistency in the results.
Thank you for raising this concern. We have substantially revised the model from our first submission. We now compare 31 noise-based models and 1 value-based model and fit all of the tasks with the preferred model. We perform model selection using the two tasks with the largest datasets to identify the preferred model. From the preferred model, we found the parameter fits for each individual dataset and simulated the trial by trial behavior allowing comparison between all four tasks. We now show examples of individual fits as well as provide a measure of goodness of fit. The expansion of our modeling approach has resolved inconsistencies and sharpened the conclusions drawn from our model.
(2) The task should not be labeled as reinforcement motor learning, as it is not about learning a motor skill or adapting to sensorimotor perturbations. It is a classical reinforcement learning paradigm.
We now make it clear that our reinforcement learning task has both motor and cognitive demands, but does not fall entirely within one of these domains. We use the term motor learning because it captures the fact that participants maximize reward by making different movements, corrupted by motor noise, to unmarked locations on a continuous target zone. When we look at previous publications, it is clear that our task is similar to those that also refer to this as reinforcement motor learning: Cashaback et al. (2019) (reaching task using a robotic arm in adults), Van Mastrigt et al. (2020) (weight shifting task in adults), and Konrad et al. (2025) (real-world throwing task in children). All of these tasks involve trial-by-trial learning through reinforcement to make the movement that is most effective for a given situation. We feel it is important to link our work to these previous studies and prefer to preserve the terminology of reinforcement motor learning.
Recommendations for the authors:
Reviewing Editor Comments:
Thank you for this summary. Rather than repeat the extended text from the responses to the reviewers here, we point the Editor to the appropriate reviewer responses for each issue raised.
The reviewers and editors have rated the significance of the findings in your manuscript as "Valuable" and the strength of evidence as "Solid" (see eLife evaluation). A consultancy discussion session to integrate the public reviews and recommendations per reviewer (listed below) has resulted in key recommendations for increasing the significance and strength of evidence:
To increase the Significance of the findings, please consider the following:
(1) Address and reframe the paper around whether the task is truly getting at a motor learning process or more generic cognitive decision-making capacities such as spatial memory, reward processing, and hypothesis testing.
We have revised the paper to address the comments on the framing of our work. Please see responses to the public review comments of Reviewers #2 and #3.
(2) It would be beneficial to specify the differences between traditional reinforcement algorithms (i.e., using softmax functions to explore, and build representations of state-action-reward) and the reinforcement learning models used here (i.e., explore with movement variability, update reach aim towards the last successful action), and compare present findings to previous cognitive reinforcement learning studies in children.
Please see response to the public review comments of Reviewer #1 in which we explain the expansion of our modeling approach to fit a value-based model as well as 31 other noise-based models. In our response to the public review comments of Reviewer #2, we comment on our expanded discussion of how our findings compare with previous cognitive reinforcement learning studies.
To move the "Strength of Evidence" to "Convincing", please consider doing the following:
(1) Address some apparently inconsistent and unrealistic values of motor noise, exploration noise, and learning rate shown for individual participants (e.g., Figure 5b; see comments of Reviewers 1 and 3) and take the following additional steps: plotting R-squared values for individual participants, discussing whether individual values of the fitted parameters are plausible and whether model parameters in each age group can extrapolate to the two clamp conditions and baselines.
We have substantially updated our modeling approach. Now that we compare 31 noise-based models, the preferred model does not show any inconsistent or unrealistic values (see response to Reviewer #3). Additionally, we now show example individual fits and provide both relative and absolute goodness of fit (see response to Reviewer #3).
(2) Relatedly, to further justify if model assumptions are met, it would be valuable to show that the current learning model fits the data better than alternative models presented in the literature by the authors themselves and by others (reviewer 1). This could include alternative development models that formalise the proposed explanations for age-related change: poor spatial memory, reward/outcome processing, and exploration strategies (reviewer 2).
Please see response to public review comments of Reviewer #1 in which we explain that we have now fit a value-based model as well as 31 other noise-based models providing a comparison of previous models as well as novel models. This led to a slightly different model being preferred over the model in the original submission (updated model has a learning rate of unity). These models span many of the processes previously proposed for such tasks. We feel that 32 models span a reasonable amount of space and do not believe we have the power to include memory issues or heuristic exploration strategies in the model.
(3) Perform the mediation analysis with all the possible variables (i.e., not informed by multiple regression) to see if the results are more consistent across studies and with the current approach (see comments reviewer 1).
Please see response to public review comments of Reviewer #1. We chose to focus only on the model based analysis because it allowed us to distinguish between exploration variability and motor noise.
Please see below for further specific recommendations from each reviewer.
Reviewer #1 (Recommendations for the author):
(1) In general, there should be more discussion and contextualization of other binary reinforcement tasks used in the motor literature. For example, work from Jeroen Smeets, Katinka van der Kooij, and Joseph Galea.
Thank you for this comment. We have edited the Introduction to better contextualize our work within the reinforcement motor learning literature (see line 67 and line 83).
(2) Line 32. Very minor. This sentence is fine, but perhaps could be slightly improved. “select a location along a continuous and infinite set of possible options (anywhere along the span of the bridge)"
Thank you for this comment. We have edited the sentence to reflect this suggestion.
(3) Line 57. To avoid some confusion in successive sentences: Perhaps, "Both children over 12 and adolescents...".
Thank you for this comment. We have edited the sentence to reflect this suggestion.
(4) Line 80. This is arguably not a mechanistic model, since it is likely not capturing the reward/reinforcement machinery used by the nervous system, such as updating the expected value using reward predic tion errors/dopamine. That said, this phenomenological model, and other similar models in the field, do very well to capture behaviour with a very simple set of explore and update rules.
We use mechanistic in the standard use in modeling, as in Levenstein et al. (2023), for example. The contrast is not with neural modeling, but with normative modeling, in which one develops a model to optimize a function (or descriptive models as to what a system is trying to achieve). In mechanistic modeling one proposes a mechanism and this can be at a state-space level (as in our case) or a neural level (as suggested by the reviewer) but both are considered mechanistic, just at different levels. Quoting Levenstein: "... mechanistic models, in which complex processes are summarized in schematic or conceptual structures that represent general properties of components and their interactions, are also commonly used." We now reference the Levenstein paper to clarify what we mean by mechanistic.
(5) Figure 1. It would be useful to state that the x-axis in Figure 1 is in normalized units, depending on the device.
Thank you for this comment. We have added a description of the x-axis units to the Fig. 1 caption.
(6) Were there differences in behaviour for these different devices? e.g., how different was motor noise for the mouse, trackpad, and touchscreen?
Thank you for this question. We did not find a significant effect of device on learning or precision in the baseline block. We have added these one way ANOVA results for each task in Supplementary Table 1.
(7) Line 98. Please state that participants received reinforcement feedback during baseline.
Thank you for this comment. We have updated the text to specify that participants receive reward feedback during the baseline block.
(8) Line 99. Did the distance from the last baseline trial influence whether the participant learned or did not learn? For example, would it place them too far from the peak success location such that it impacted learning?
Thank you for this question. We looked at whether the position of movement on the last baseline block trial was correlated with the first movement position in the learning block. We did not find any correlations between these positions for any of the tasks. Interestingly, we found that the majority of participants move to the center of the workspace on the first trial of the learning block for all tasks (either in the presence of the novel continuous target scene or the presentation of 7 targets all at once). We do not think that the last movement in the baseline block "primed" the participant for the location of the success zone in the learning block. We have added the following sentence to the Results section:
"Note that the reach location for the first learning trial was not affected by (correlated with) the target position on the last baseline trial (p > 0.3 for both children and adults, separately)."
(9) The term learning distance could be improved. Perhaps use distance from target.
Thank you for this comment. We appreciate that learning distance defined with 0 as the best value is counterintuitive. We have changed the language to be "distance from target" as the learning metric.
(10) Line 188. This equation is correct, but to estimate what the standard deviation by the distribution of changes in reach position is more involved. Not sure if the authors carried out this full procedure, which is described in Cashaback et al., 2019; Supplemental 2.
There appears to be no Supplemental 2 in the referenced paper so we assume the reviewer is referring to Supplemental B which deals with a shuffling procedure to examine lag-1 correlations.
In our tasks, we are limited to only 9 trials to analyze in each clamp phase so we do not feel a shuffling analysis is warranted. In these blocks, we are not trying to 'estimate what the standard deviation by the distribution of changes in reach position' but instead are calculating the standard deviation of the reach locations and comparing the model fit (for which the reviewer says the formula is correct) with the data. We are unclear what additional steps the reviewer is suggesting. In our updated model analysis, we fit the data including the clamp phases for better parameter estimation. We use simulations to estimate s.d. in the clamp phase (as we ensure in simulations the data does not fall outside the workspace), making the previous analytic formulas approximations that are no longer used.
(11) Line 197-199. Having done the demo task, it is somewhat surprising that a 3-year-old could understand these instructions (whose comprehension can be very different from even a 5-year old).
Thank you for raising this concern. We recognize that the younger participants likely have different comprehension levels compared to older participants. However, we believe that the majority of even the youngest participants were able to sufficiently understand the goal of the task to move in a way to get the video clip to play. We intentionally designed the tasks to be simple such that the only instructions the child needed to understand were that the goal was to get the video clip to play as much as possible and the video clip played based on their movement. Though the majority of younger children struggled to learn well on the probabilistic tasks, they were able to learn well on the deterministic tasks where the task instructions were virtually identical with the exception of how many places in the workspace could gain reward. On the continuous probabilistic task, we did have a small number (n = 3) of 3 to 5 year olds who exhibited more mature learning ability which gives us confidence that the younger children were able to understand the task goal.
(12) Line 497: Can the authors please report the F-score and p-value separately for each of these one-way ANOVA (the device is of particular interest here).
Thank you for this request. We have added a supplementary table (Supplementary Table 1) with the results of these ANOVAs.
(13) Past work has discussed how motivation influences learning, which is a function of success rate (van der Kooij, K., in 't Veld, L., & Hennink, T. (2021). Motivation as a function of success frequency. Motivation and Emotion, 45, 759-768.). Can the authors please discuss how that may change throughout development?
Thank you for this comment. While motivation most probably plays a role in learning, in particular in a game environment, this was out of the scope of the direct focus of this work and not something that our studies were designed to test. We have added the following sentence to the discussion section to address this comment:
"We also recognize that other processes, such as memory and motivation, could affect performance on these tasks however our study was not designed to test these processes directly and future work would benefit from exploring these other components more explicitly."
(14) Supplement 6. This analysis is somewhat incomplete because it does not consider success.
Pekny and colleagues (2015) looked at 3 trials back but considered both success and reward. However, their analysis has issues since successive time points are not i.i.d., and spurious relationships can arise. This issue is brought up by Dhawale (Dhawale, A. K., Miyamoto, Y. R., Smith, M. A., & Ölveczky, B. P. (2019). Adaptive regulation of motor variability. Current Biology, 29(21), 3551-3562.). Perhaps it is best to remove this analysis from the paper.
Thank you for this comment. We have decided to remove this secondary analysis from the paper as it was a source of confusion and did not add to the understanding and interpretation of our behavioral results.
Reviewer #2 (Recommendations for the author):
(1) The path length ratio analyses in the supplemental are interesting but are not mentioned in the main paper. I think it would be helpful to mention these as they are somewhat dramatic effects.
Thank you for this comment. Path length ratios are defined in the Methods and results are briefly summarized in the Results section with a point to the supplementary figures. We have updated the text to more explicitly report the age related differences in path length ratios.
(2) The second to last paragraph of the intro could use a sentence motivating the use of the different task features (deterministic/probabilistic and discrete/continuous).
Thank you for this comment. We have added an additional motivating sentence to the introduction.
Reviewer #3 (Recommendations for the author):
The paper labeled the task as one for reinforcement motor learning, which is not quite appropriate in my opinion. Motor learning typically refers to either skill learning or motor adaptation, the former for improving speed-accuracy tradeoffs in a certain (often new) motor skill task and the latter for accommodating some sensorimotor perturbations for an existing motor skill task. The gaming task here is for neither. It is more like a decision-making task with a slight contribution to motor execution, i.e., motor noise. I would recommend the authors label the learning as reinforcement learning instead of reinforcement motor learning.
Thank you for this comment. As noted in the response to the public review comments, we agree that this task has components of classical reinforcement learning (i.e. responding to a binary reward) but we specifically designed it to require the learning of a movement within a novel game environment. We have added a new paragraph to the introduction where we acknowledge the interplay between cognitive and motor mechanisms while also underscoring the features in our task that we think are not present in typical cognitive tasks.
My major concern is whether the model adequately captures subjects' behavior and whether we can conclude with confidence from model fitting. Motor noise, exploration noise, and learning rate, which fit individual learning patterns (Figure 5b), show some quite unrealistic values. For example, some subjects have nearly zero motor noise and a 100% learning rate.
We have now compared 31 models and the preferred model is different from the one in the first submission. The parameter fits of the new model do not saturate in any way and appear reasonable to us. The updates to the model analysis have addressed the concern of previously seen unrealistic values in the prior draft.
Currently, the paper does not report the fitting quality for individual subjects. It is good to have an exemplary subject's fit shown, too. My guess is that the r-squared would be quite low for this type of data. Still, given that the children's data is noisier, it might be good to use the adult data to show how good the fitting can be (individual fits, r squares, whether the fitted parameters make sense, whether it can extrapolate to the two clamp phases). Indeed, the reliability of model fitting affects how we should view the age effect of these model parameters.
We now show fits to individual subjects. However, since this is a Kalman smoother, it fits the data perfectly by generating its best estimate of motor noise and exploration variability on each trial to fully account for the data; in that sense R² is always 1, so this is not helpful.
While the BIC analysis with the other model variants provides a relative goodness of fit, it is not straightforward to provide an absolute goodness of fit such as a standard R² for a feedforward simulation of the model given the parameters (rather than the output of the Kalman smoother). There are two problems. First, there is no single model output. Each time the model is simulated with the fit parameters it produces a different output (due to motor noise, exploration variability and reward stochasticity). Second, the model is not meant to reproduce the actual motor noise, exploration variability and reward stochasticity of a trial. For example, the model could fit pure Gaussian motor noise across trials (for a poor learner) by accurately fitting the standard deviation of motor noise but would not be expected to actually match each data point, so would have a traditional R² of 0.
To provide an overall goodness of fit we have to reduce the noise component, and to do so we examined the traditional R² between the average of all the children's data and the average simulation of the model (from the median of 1000 simulations per participant) so as to reduce the stochastic variation. The results for the continuous probabilistic and discrete probabilistic tasks are R² of 0.41 and 0.72, respectively.
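As a sketch of how this summary statistic can be computed, the snippet below builds the participant-averaged data and the participant-averaged model prediction (median over repeated simulations per participant) and then takes a traditional R² between the two. The array shapes and values are placeholders; with actual reach data and simulations from fitted parameters, the arrays would be filled accordingly.

```python
import numpy as np

def r_squared(y, y_hat):
    """Traditional R^2 between an observed and a predicted series."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Placeholder arrays (values are meaningless; only the shapes/procedure matter):
# data[i, t]    = reach position of participant i on learning trial t
# sims[i, k, t] = k-th simulation of the fitted model for participant i
rng = np.random.default_rng(2)
data = rng.normal(0.6, 0.1, size=(40, 100))
sims = rng.normal(0.6, 0.1, size=(40, 1000, 100))

median_sim = np.median(sims, axis=1)   # median over simulations, per participant and trial
avg_data = data.mean(axis=0)           # average across participants
avg_sim = median_sim.mean(axis=0)

print(f"R^2 = {r_squared(avg_data, avg_sim):.2f}")
```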
Note that variability in the "success clamp" does not change across ages (Figure 4C) and does not contribute to the learning effect (Figure 4F). However, it is regarded as reflecting motor noise (Figure 5C), which then decreases over age from the model fitting (Figure 5B). How do we reconcile these contradictions? Again, this calls the model fitting into question.
For the success clamp, we only have 9 trials to calculate variability which limits our power to detect significance with age. In contrast, the model uses all 120 trials to estimate motor noise. There is a downward trend with age in the behavioral data which we now show overlaid on the fits of the model for both probabilistic conditions (Figure 5—figure Supplement 4 and Figure 6—figure Supplement 4). These show a reasonable match and although the variance explained is 16 and 56% (we limit to 9 trials so as to match the fail clamp), the correlations are 0.52 and 0.78, suggesting we have a reasonable relation although there may be other small sources of variability not captured in the model.
Figure 5C: it appears one bivariate outlier contributes a lot to the overall significant correlation here for the "success clamp".
Recalculating after removing that point, the correlation in the original Fig 5C was still significant, and we feel the plots mentioned in the previous point add useful information to this issue. With the new model this figure has changed.
It is still a concern that the young children did not understand the instructions. Nine 3- to 8-year-old children (out of 48) were better explained by the noisy-only model than the full model. In contrast, ten of the rest of the participants (out of 98) were better explained by the noisy-only model. It appears that there is a higher percentage of the "young" children who didn't get the instructions than the older ones.
Thank you for this comment. We did take participant comprehension of the task into consideration during the task design. We specifically designed it so that the instructions were simple and straightforward. The child simply needs to understand the underlying goal to make the video clip play as often as possible and that they must move the penguin to certain positions to get it to play. By having a very simple task goal, we are able to test a naturalistic response to reinforcement in the absence of an explicit strategy in a task suited even for young children.
We used the updated reinforcement learning model to assess whether an individual's performance is consistent with understanding the task. In the case of a child who does not understand the task, we expect that they simply have motor noise on their reach, and crucially, that they would not explore more after failure, nor update their reach after success. Therefore, we used a likelihood ratio test to examine whether the preferred model was significantly better at explaining each participant's data compared to the model variant which had only motor noise (Model 1). Focusing on only the youngest children (age 3-5), this analysis showed that 43, 59, 65 and 86% of children (out of N = 21, 22, 20 and 21) for the continuous probabilistic, discrete probabilistic, continuous deterministic, and discrete deterministic conditions, respectively, were better fit with the preferred model, indicating non-zero exploration after failure. In the 3-5 year old group for the discrete deterministic condition, 18 out of 21 had performance better fit by the preferred model, suggesting this age group understands the basic task of moving in different directions to find a rewarding location.
The reduced numbers fit by the preferred model for the other conditions likely reflect differences in the task conditions (continuous and/or probabilistic) rather than a lack of understanding of the goal of the task. We include this analysis as a new subsection at the end of the Results.
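A minimal sketch of the per-participant test described above is given below, assuming the two models are nested (the motor-noise-only model is the preferred model with exploration fixed at zero) and using the standard chi-squared reference distribution; the log-likelihood numbers are placeholders, not values from our fits.

```python
from scipy.stats import chi2

def likelihood_ratio_test(ll_reduced, ll_full, df_diff):
    """Likelihood ratio test for nested models.
    ll_reduced: log-likelihood of the motor-noise-only model (Model 1)
    ll_full:    log-likelihood of the preferred model (adds exploration after failure)
    df_diff:    difference in the number of free parameters"""
    D = 2.0 * (ll_full - ll_reduced)
    p = chi2.sf(D, df_diff)
    return D, p

# Placeholder log-likelihoods for one participant (illustrative numbers only).
D, p = likelihood_ratio_test(ll_reduced=-152.3, ll_full=-146.8, df_diff=1)
print(f"LR statistic = {D:.2f}, p = {p:.3f}")
# A significant result indicates the participant's reaches are better explained
# by non-zero exploration after failure than by motor noise alone.
```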
Supplementary Figure 2: the first panel should belong to a 3-year-old not a 5-year-old? How are these panels organized? This is kind of confusing.
Thank you for this comment. Figure 2—figure Supplement 1 and Figure 2—figure Supplement 2 are arranged with devices in the columns and a sample from each age bin in the rows. For example, in Figure 2—figure Supplement 1, column 1, row 1 is a mouse-using participant aged 3 to 5 years, while column 3, row 2 is a touchscreen-using participant aged 6 to 8 years. We have edited the labeling on both figures to make the arrangement of the data more clear.
Line 222: make this a complete sentence.
This sentence has been edited to a complete sentence.
Line 331: grammar.
This sentence has been edited for grammar.
-
-
www.cs.toronto.edu (dqn.pdf)
-
Playing Atari with Deep Reinforcement Learning 19 Dec 2013 · Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller
The paper from 2013 that introduced the DQN algorithm for using Deep Learning with Reinforcement Learning to play Atari games.
-
-
mutabit.com
-
Pokemon,

The first generation of Pokémon, which began with the Game Boy games (Red, Green, and Blue in Japan), features 151 different species. These 151 Pokémon all originate from the Kanto region. The last Pokémon of this generation is Mew, number 151.
-
-
social-media-ethics-automation.github.io
-
Another change was that as computers became small enough for people to buy them for their homes, they became seen as toys for boys and not girls. The same transition is seen in video game consoles from being for the whole family to being for boys only [s64
While computers have definitely become more compact with laptops becoming more advanced over time, an industry like gaming is still dominated by men even today. While females are slowly gaining more traction with streamers and developers alike, games are viewed as much more neutral than they were 20 years ago in my opinion. I have plenty of female friends who play games and are in the Informatics or CS fields.
-
-
vsblog.netlify.app
-
Integrative modeling: by combining evolutionary game theory with agent-based simulations, the research provides a mechanistic account of how epistemic conditional strategies could have evolved naturally from simpler reactive behaviors, filling a gap in existing literature.
I would remove this.
-
Evolutionary game-theoretic modeling:
This is also a strong claim about the modeling.
-
Conceptual analysis: The dissertation begins by clarifying key concepts such as social conventions, game-theoretic equilibria, conditional strategies and social norms. This involves critical engagement with existing philosophical literature to establish a coherent conceptual framework that guides the subsequent modeling work.
I would remove this too; it is more a set of tools than a method.
-
The entire process can be modeled using evolutionary game theory, demonstrating the plausibility of a naturalistic emergence of epistemic correlation devices and proto-normativity without presupposing more advanced cognitive constructs like collective intentionality or explicit sanctioning mechanisms. This transition is not merely conceptual but can be computationally investigated through agent-based simulations that capture the dynamics of the cognitive evolution of correlation devices represented as ecological cues of different layers of abstraction
These are also strong claims; I would not put them forward for the defense, and in general I would remove all the models.
-
-
github.com
-
It supports offline login and viewing of any locally cached files.
offline first server with an installer
Game Changer
-
-
social-media-ethics-automation.github.io
-
Do you think this game was realistic?
The game was semi-realistic. I thought some of the questions and issues raised were accurate to real world problems. But, there are some aspects that obviously oversimplify issues for the sake of the game.
-
- May 2025
-
www.platformatichq.com
-
For larger projects with multiple interconnected components, monorepos can be a game-changer, providing efficient dependency management, atomic commits, simplified code sharing, and an improved developer experience.
-
-
fakepixels.substack.com
-
This may be the good news for those that didn’t dare to fully lean into what they love and want to do. What if the most game-optimal play in the new system is actually to become relentlessly, unapologetically you?
Be you
-
Leisure's opportunity cost skyrockets. When an hour of work generates what once took days, rest becomes luxury taxed by your own conscience. Every pause carries an invisible price tag that flickers in your peripheral vision. Productivity breeds new demand. Like efficient engines creating new energy uses, AI can create entirely new work categories and expectations. Competition intensifies. The game theory is unforgiving: when everyone can produce 10x more, the baseline resets, leaving us all running faster just to stay in place.
Consequences
-
-
www.sahilbloom.com
-
Life is a game of awareness and action: Awareness to understand something's importance and action to execute on that importance.
Awareness and action are key
-
-
snooplyrics.com
-
I’ve been hot for five years, ab karun paanch saal aur (Yeah)
- "Hot" here means successful, popular, or at the top of his game.
- "Ab karun paanch saal aur" is Hindi for "now I'll do five more years."
Literal meaning: So, he's saying he's been consistently successful for five years and he plans to keep that momentum going for at least another five years. It's a statement of ambition and confidence in his longevity.
-
-
snooplyrics.com
-
Pehla rap daala jab DHH wasn’t a thing
"Pehla rap daala jab DHH wasn’t a thing"
-
Raftaar is saying, "I've been in the game since before Desi Hip Hop was even popular, I was rapping when it was still new and underground."
-
With this line, Raftaar is asserting himself as a pioneer and veteran in the Indian hip-hop scene. He's making a strong statement about his longevity and his contribution to the genre's growth
Source: 1. LINK 2. LINK 3. [LINK](https://en.wikipedia.org/wiki/Raftaar#:~:text=Dilin%20Nair%20(born%2016%20November,into%20the%20mainstream%20music%20industry.)
-
-
Got the game from Kane, Wayne se bling
"Got the game from Kane, Wayne से bling"
-
"Kane" = likely refers to Big Daddy Kane, a pioneer of 80s–90s lyrical rap — Raftaar salutes OG lyrical technique here.

-
"Wayne से bling" = Lil Wayne, the bling era, punchlines, and swagger-filled wordplay — he took style from Wayne, substance from Kane.

-
Duality shown here: Knowledge from Kane, flash from Wayne — bars + buzz.
-
Subliminal flex: He didn’t just mimic desi rappers — he studied the game from the source.
-
-
Banna chahta tha main baller, jaise rappers in the game
"Banna chahta tha main baller, jaise rappers in the game"
Here, "Baller" means a wealthy and successful person who lives lavishly.
KR$NA desired the feeling that comes with fame and wanted to be a "baller".
-
Have the game on lock aur yahi tha plan mera (Plan mera)
"Have the game on lock aur yahi tha plan mera (Plan mera)"
-
KR$NA wants to completely control and dominate the rap industry, and achieving this level of power was always his deliberate intention and strategy.
-
He is also name-dropping "The Game", a prominent American rapper from Compton, California, who rose to fame in the mid-2000s and is largely credited with helping to revitalize the West Coast hip-hop scene.

-
-
-
social-media-ethics-automation.github.io
-
Index on Censorship. Interview with a troll. Index on Censorship, September 2011. URL: https://www.indexoncensorship.org/2011/09/interview-with-a-troll/
The interview with the anonymous troll really stood out to me. What surprised me most was how casually he talked about causing distress online—almost like it was a game. He admitted to targeting people not out of deep personal hatred, but just to provoke a reaction or gain attention. This made me think about how anonymity can remove a sense of responsibility, and how moderation online has to deal with behavior that’s intentionally disruptive but not always illegal.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Fold-It [p19] is a game that lets players attempt to fold proteins. At the time, researchers were having trouble getting computers to do this task for complex proteins, so they made a game for humans to try it. Researchers analyzed the best players’ results for their research and were able to publish scientific discoveries based on the contributions of players.
I remember playing a game very similar to this on Xbox as a kid.
-
Researchers analyzed the best players’ results for their research and were able to publish scientific discoveries based on the contributions of players.
I think it's really interesting that a video game like Fold-It helped scientists make real discoveries. It shows how powerful crowdsourcing can be, even for serious scientific problems. I’ve never thought of games being useful in science before, but now I wonder if more scientific research could be turned into fun challenges for regular people to help with.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
[p19] Foldit. September 2023. Page Version ID: 1175905648. URL: https://en.wikipedia.org/w/index.php?title=Foldit&oldid=1175905648 (visited on 2023-12-08).
Foldit is a game where players fold protein structures as well as possible, and the answers they create are checked against real-world proteins, since it's not possible to generate solutions automatically. This allowed solutions to protein structures to be crowdsourced, and many proteins were actually solved through this. It's a great example of crowdsourcing scientific knowledge in a way that makes use of things humans can do that computers couldn't. Now I believe that there is AI that can fold protein structures.
-
Mike Gavin. Canucks' staffer uses social media to find fan who saved his life.
I saw this a while ago when it initially came out. At a game, a fan pointed out a mole on the neck of a Canucks staffer and saved his life. He went to the doctor to check it out and it was cancer; the doctor told him he would've only had 4-5 years left if it had gone unchecked. The staff member ended up finding the fan through social media when his story went viral.
-
Kickstarter. URL: https://www.kickstarter.com/ (visited on 2023-12-08).
I am not too familiar with Kickstarter, but I do know that it is a great platform to fund various projects through crowdfunding. Some examples probably include technology, art, or film production. There have been some pretty cool projects that have come from Kickstarter too, such as the game "Exploding Kittens".
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
In what ways do you think you’ve participated in any crowdsourcing online?
I've participated in crowdsourcing on a Fandom Wiki for a game called Warframe. For context, Warframe is a game similar to Halo but more fast-paced and consisting of its own complex lore. The game still gets consistent updates to this day, updates that add to both the gameplay and lore. I've added a few of my own entries into the wiki involving the lore to help other people get a better idea of what-means-what, mainly due to how complicated and extensive the devs make it out to be.
-
-
dynomight.net dynomight.net
-
We all play the game we think we can do better at.
Is that actually true? Surely there are examples where people play the game that they're less suited for—where the decision is driven by desire?
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Spamming. December 2023. Page Version ID: 1187995774. URL: https://en.wikipedia.org/w/index.php?title=Spamming&oldid=1187995774 (visited on 2023-12-08).
One of the sources mentioned in the Wikipedia article is Spam Kings by Brian S. McWilliams (2005), which deals with the growth of spam operations and the individuals involved in them. Something that caught my attention among review and summary descriptions of this book is how spam is not merely a technical problem—it’s a human one. McWilliams tracks real-life spammers and anti-spammers alike, demonstrating the cat-and-mouse game that developed in the early 2000s. Something that surprised me in reading descriptions of this book is that a number of spammers lived with a sense of pride and even regarded themselves as entrepreneurs, not scammers. It challenged me to consider how ethics of online practice might differ profoundly based on perspective and how some individuals might justify nefarious digital practices as innovative or innocuous business tactics. It also relates back to coursewide topics of online regulation and the fuzzy line between “free enterprise” and exploitative practice online.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Have you ever reported a post/comment for violating social media platform rules? Have you ever faced consequences for breaking social media rules (or for being accused of it)? In unmoderated online spaces who has the most power and ability to speak and be heard? Who has the least power and ability to speak and be heard?
I have reported posts/comments for violating social media platform rules. I assume video games are a form of social media, and I have reported a lot of players for being toxic, racist, or harassing. I never got banned for saying inappropriate stuff, but I did get banned for quitting in the middle of a game, which technically counts as a violation of the rules. Whoever has the ability to ban others has the most power; everyone else has the least.
-
Have you ever reported a post/comment for violating social media platform rules?
Yes, I always do this in League of Legends, and I always get report feedback. Recently Riot has been getting serious about any comment that is rude. I have been banned from talking and typing in game because I lent my account to other people; they are less fluent in English and communicate more aggressively.
-
-
www.biorxiv.org www.biorxiv.org
-
Author response:
We will revise the statements of novelty in the introduction by more clearly emphasizing how our model addresses gaps in the existing literature. In addition, we will clarify the description of the dispersal process. Briefly, we use the same dispersal gene β to represent the likelihood an individual will either leave or join a group, thereby quantifying both dispersal and immigration using the same parameter. Specifically, individuals with higher β are more likely to remain as floaters (i.e., disperse from their natal group to become a breeder elsewhere), whereas those with lower β are either more likely to remain in their natal group as subordinates (i.e., queue in a group for the breeding position) or join another group if they dispersed. Immigrants that join a group as a subordinate help and queue for a breeding position, as does any natal subordinate born into the group. To follow the suggestion of the referee and more fully explore the impact of competition between subordinates born in the group and subordinate immigrants, we will explore extending our model to allow dispersers to leave their natal group and join another as subordinates, by incorporating a reaction norm based on their age or rank (D = 1 / (1 + exp(β<sub>t</sub> * t – β<sub>0</sub>))). This approach will also allow individuals to adjust their dispersal strategy to their competitiveness and to avoid kin competition by remaining as a subordinate in another group.
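As a purely illustrative aside, the proposed logistic reaction norm is easy to evaluate numerically. The sketch below is not the authors' code; the function and parameter names are assumptions made only to show how the dispersal probability D behaves as age or rank t increases.

```python
import math

def dispersal_probability(t: float, beta_t: float, beta_0: float) -> float:
    """Logistic reaction norm D = 1 / (1 + exp(beta_t * t - beta_0)),
    where t is the individual's age or dominance rank (illustrative only)."""
    return 1.0 / (1.0 + math.exp(beta_t * t - beta_0))

# With beta_t > 0, older or higher-ranked individuals are less likely to
# disperse and more likely to stay and queue for a breeding position.
for age in (1, 3, 5):
    print(age, round(dispersal_probability(age, beta_t=1.0, beta_0=2.0), 3))
```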
We apologize that there was some confusion with terminology. We use the term “disperser” to describe individuals that disperse from their natal group. Dispersers can assume one of three roles: (1) they can migrate to another group as "subordinates"; (2) they can join another group as "breeders" if they successfully outcompete other candidates; or (3) they can remain as "floaters" if they fail to join a group. "Floaters" are individuals who persist in a transient state without access to a breeding territory, waiting for opportunities to join a group in an established territory. Therefore, dispersers do not work when they are floaters, but they may later help if they immigrate to a group as a subordinate. Consequently, immigrant subordinates have no inherent competitive advantage over natal subordinates (as step 2.2. “Join a group” is followed by step 3. “Help”, which occurs before step 5. “Become a breeder”). Nevertheless, floaters can potentially outcompete subordinates of the same age if they attempt to breed without first queuing as a subordinate (step 5) when subordinates are engaged in work tasks. We believe that this assumption is realistic and constitutes part of the costs associated with work tasks. However, floaters are at a disadvantage for becoming a breeder because: (1) floaters incur higher mortality than individuals within groups (eq. 3); and (2) floaters may only attempt to become breeders in some breeding cycles (versus subordinate group members, who are automatically candidates for an open breeding position in the group in each cycle). Therefore, due to their higher mortality, floaters are rarely older than individuals within groups, which heavily influences dominance value and competitiveness. Additionally, any competitive advantage that floaters might have over other subordinate group members is unlikely to drive the kin selection-only results because subordinates would preferably choose defense tasks instead of work tasks so as not to be at a competitive disadvantage compared to floaters.
We note that reviewers also mention that floaters usually aren't high resource holding potential (RHP) individuals and, therefore, our assumptions might be unrealistic. As we explain above, floaters are not inherently at a competitive advantage in our model. In any case, empirical work in a number of species has shown that dispersers are not necessarily those of lower RHP or of lower quality. In fact, according to the ecological constraints hypothesis, one might predict that high quality individuals are the ones that disperse because only individuals in good condition (e.g., larger body size, better energy reserves) can afford the costs associated with dispersal (Cote et al., 2022). By adding a reaction norm approach to explore the role of age or rank in the revised version, we can also determine whether higher or lower quality individuals are the ones dispersing. We will address the issues of terminology and clarity of the relative competitive advantage of floaters versus subordinates, and also include more information in the Supplementary Tables (e.g., the number of floaters). As a side note, the “scramble context” we mention was an additional implementation that we decided to remove from the final manuscript, but we forgot to remove it from Table 1 before submission.
The reviewers also raised a question about asexual reproduction and relatedness more generally. As we showed in the Supplementary Tables and the section on relatedness in the SI (“Kin selection and the evolution of division of labor"), high relatedness does not appear to explain our results. In evolutionary biology generally and in game theory specifically (with the exception of models on sexual selection or sex-specific traits), asexual reproduction is often modelled because it reduces unnecessary complexity. To further study the effect of relatedness on kin structures more closely resembling those of vertebrates, however, we will create an additional “relatedness structure level”, where we will shuffle half of the philopatric offspring using the same method used to remove relatedness completely. This approach will effectively reduce relatedness structure by half and overcome the concerns with our decision to model asexual reproduction.
Briefly, we will elaborate on the concept of division of labor and the tasks that cooperative breeders perform. In nature, multiple tasks are often necessary to successfully rear offspring. For example, in many cooperatively breeding birds, the primary reasons that individuals fail to produce offspring are (1) starvation, which is mitigated by the feeding of offspring, and (2) nest depredation, which is countered by defensive behavior. Consequently, both types of tasks are necessary to successfully produce offspring, and focusing solely on one while neglecting the other is likely to result in lower reproductive success than if both tasks are performed by individuals within the group. We simplify this principle in the model by maximizing reproductive output when both tasks are carried out to a similar extent, allowing for some flexibility from the mean. In response to the reviewer's suggestion about making fecundity a function of work tasks and offspring survival a function of defensive tasks, these are actually equivalent in model terms, as it’s the same whether breeders produce three offspring and two die, or if they only produce one. This represents, of course, a simplification of the natural context, where breeding unsuccessfully is more costly (in terms of time and energy investment) than not breeding at all, but this approach is typically used in models of this sort.
The scope of this paper was to study division of labor in cooperatively breeding species with fertile workers, in which help is exclusively directed towards breeders to enhance offspring production (i.e., alloparental care). Our focus is in line with previous work in most other social animals, including eusocial insects and humans, which emphasizes how division of labor maximizes group productivity. Other forms of “general” help are not considered in the paper, and such forms of help are rarely considered in cooperatively breeding vertebrates or in the division of labor literature, as they do not result in task partitioning to enhance productivity.
How do we model help? Help provided is an interaction between H (total effort) and T (proportion of total effort invested in each type of task). We will make this definition clearer in the revised manuscript. Thank you for pointing out an error in Eq. 1. This inequality was indeed written incorrectly in the paper (but is correct in the model code); it is dominance rank instead of age (see code in Individual.cpp lines 99-119). We will correct this mistake in the revision.
There was also a question about bounded and unbounded helping costs. The difference in costs is inherent to the nature of the different task (work or defense): while survival is naturally bounded, with death as the lower bound, dominance costs are potentially unbounded, as they are influenced by dynamic social contexts and potential competitors. Therefore, we believe that the model’s cost structure is not too different to that in nature.
Thank you for your comments about the parameter landscape. It is important to point out that variations in the mutation rate do not qualitatively affect our results, as this is something we explored in previous versions of the model (not shown). Briefly, we find that variations in the mutation rates only alter the time required to reach equilibrium. Increasing the step size of mutation diminishes the strength of selection by adding stochasticity and reducing the genetic correlation between offspring and their parents. Population size could, in theory, affect our results, as small populations are more prone to extinction. Since this was not something we planned to explore in the paper directly, we specifically chose a large population size, or rather, a large number of territories (i.e. 5000) that can potentially host a large population.
During the exploratory phase of the model development, various parameters and values were also assessed. However, the manuscript only details the ranges of values and parameters where changes in the behaviors of interest were observed, enhancing clarity and conciseness. For instance, variation in y<sub>h</sub> (the cost of help on dominance when performing “work tasks”) led to behavioral changes similar to those caused by changes in x<sub>h</sub> (the cost of help in survival when performing “defensive tasks”), as both are proportional to each other. Specifically, since an increase in defense costs raises the proportion of work relative to defense tasks, while an increase in the costs of work task has the opposite effect, only results for the variation of x<sub>h</sub> were included in the manuscript to avoid redundancy. We will make this clearer in the revision.
Finally, following the advice from the reviewers, we will add the symbols of the variables to the figure axes, and clarify whether the values shown represent a genetic or phenotypic trait. In Figure 2, the x-axis is H and the y-axis is T. In Figure 3A, the subindex t in x-axis is incorrect; it should be subindex R (reaction norm to dominance rank instead of age), the y-axis is T. In Figure 3B, the x-axis is R, and the y-axis is T. All values of T, H and R are phenotypic expressed values (see Table 1). For instance, T values are the phenotypic expressed values from the individuals in the population according to their genetic gamma values and their current dominance rank at a given time point.
References
Cote, J., Dahirel, M., Schtickzelle, N., Altermatt, F., Ansart, A., Blanchet, S., Chaine, A. S., De Laender, F., De Raedt, J., & Haegeman, B. (2022). Dispersal syndromes in challenging environments: A cross‐species experiment. Ecology Letters, 25(12), 2675–2687.
-
-
pressbooks.library.torontomu.ca pressbooks.library.torontomu.ca
-
“Oh, you needs tuh learn how. ’Tain’t no need uh you not knowin’ how tuh handle shootin’ tools. Even if you didn’t never find no game, it’s always some trashy rascal dat needs uh good killin’,”
If you know you know
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Mike Masnick, Randy Lubin, and Leigh Beadon. Moderator Mayhem: A Content Moderation Game. URL: https://moderatormayhem.engine.is/ (visited on 2023-12-17).
This game is very interesting and gives an insight into what content moderation might be like. The time limit kept me stressed while trying to quickly read through the card prompts and waiting a few seconds to find out more info. Seeing how many requests were coming in also gave me more stress and forced me to be even sloppier with my job. I can see why it would be stressful to be a content moderator, aside from the NSFW/NSFL material that they may have to come across. The speed and accuracy that are needed are stressful and not something that can be perfected.
-
-
easternpeak.com easternpeak.com
-
The way these platforms adjust content in real time based on each student’s progress feels like a real game-changer for education
-
-
www.biorxiv.org www.biorxiv.org
-
Reviewer #3 (Public review):
Summary:
A very thorough technical report of a new standalone, open-source software for microscopy image processing and analysis (MorphoNet 2.0), with a particular emphasis on automated segmentation and its curation to obtain accurate results even with very complex 3D stacks, including timelapse experiments.
Strengths:
The authors did a good job of explaining the advantages of MorphoNet 2.0, as compared to its previous web-based version and to other software with similar capabilities. What I particularly found more useful to actually envisage these claimed advantages is the five examples used to illustrate the power of the software (based on a combination of Python scripting and the 3D game engine Unity). These examples, from published research, are very varied in both types of information and image quality, and all have their complexities, making them inherently difficult to segment. I strongly recommend the readers to carefully watch the accompanying videos, which show (although not thoroughly) how the software is actually used in these examples.
Weaknesses:
Being a technical article, the only possible comments are on how methods are presented, which is generally adequate, as mentioned above. In this regard, and in spite of the presented examples (chosen by the authors, who clearly gave them a deep thought before showing them), the only way in which the presented software will prove valuable is through its use by as many researchers as possible. This is not a weakness per se, of course, but just what is usual in this sort of report. Hence, I encourage readers to download the software and give it time to test it on their own data (which I will also do myself).
In conclusion, I believe that this report is fundamental because it will be the major way of initially promoting the use of MorphoNet 2.0 by the objective public. The software itself holds the promise of being very impactful for the microscopists' community.
-
Author response:
eLife Assessment
This work presents an important technical advancement with the release of MorphoNet 2.0, a user-friendly, standalone platform for 3D+T segmentation and analysis in biological imaging. The authors provide convincing evidence of the tool's capabilities through illustrative use cases, though broader validation against current state-of-the-art tools would strengthen its position. The software's accessibility and versatility make it a resource that will be of value for the bioimaging community, particularly in specialized subfields.
We would like to thank the editors and reviewers for their careful and constructive evaluation of our manuscript “MorphoNet 2.0: An innovative approach for qualitative assessment and segmentation curation of large-scale 3D time-lapse imaging datasets”. We are grateful for the positive assessment of MorphoNet 2.0 as a valuable and accessible tool for the bioimaging community, and for the recognition of its technical advancements, particularly in the context of complex 3D+t segmentation tasks.
The reviewers have highlighted several important points that we will address in the revised manuscript. These include:
- The need for a clearer demonstration that improvements in unsupervised quality metrics correspond to actual improvements in segmentation quality. In response, we will provide comparisons with gold standard annotations where available and clarify how to interpret metric distributions.<br /> - The potential risk of circular logic when using unsupervised metrics to guide model training. We now explicitly discuss this limitation and emphasize the importance of external validation and expert input.<br /> - The value of comparing MorphoNet 2.0 to other tools such as FIJI and napari. We will include a comparative table to help readers understand MorphoNet’s positioning and complementarity.<br /> - The importance of clearer documentation and terminology. We will overhaul the help pages, standardize plugin naming, and add a glossary-style table to the manuscript.<br /> - Suggestions for future developments, such as mesh export and interoperability with napari, which we will explore for the revision.
We appreciate the detailed feedback on both scientific and editorial aspects, including corrections to figures and text, and we will integrate all suggested revisions to improve the manuscript’s clarity and impact. We are confident that these changes will strengthen the manuscript and enhance the utility of MorphoNet 2.0 for the community.
Public Reviews:
Reviewer #1 (Public review):
The authors present a substantial improvement to their existing tool, MorphoNet, intended to facilitate assessment of 3D+t cell segmentation and tracking results, and curation of high-quality analysis for scientific discovery and data sharing. These tools are provided through a user-friendly GUI, making them accessible to biologists who are not experienced coders. Further, the authors have re-developed this tool to be a locally installed piece of software instead of a web interface, making the analysis and rendering of large 3D+t datasets more computationally efficient. The authors evidence the value of this tool with a series of use cases, in which they apply different features of the software to existing datasets and show the improvement to the segmentation and tracking achieved.
While the computational tools packaged in this software are familiar to readers (e.g., cellpose), the novel contribution of this work is the focus on error correction. The MorphoNet 2.0 software helps users identify where their candidate segmentation and/or tracking may be incorrect. The authors then provide existing tools in a single user-friendly package, lowering the threshold of skill required for users to get maximal value from these existing tools. To help users apply these tools effectively, the authors introduce a number of unsupervised quality metrics that can be applied to a segmentation candidate to identify masks and regions where the segmentation results are noticeably different from the majority of the image.
This work is valuable to researchers who are working with cell microscopy data that requires high-quality segmentation and tracking, particularly if their data are 3D time-lapse and thus challenging to segment and assess. The MorphoNet 2.0 tool that the authors present is intended to make the iterative process of segmentation, quality assessment, and re-processing easier and more streamlined, combining commonly used tools into a single user interface.
We sincerely thank the reviewer for their thorough and encouraging evaluation of our work. We are grateful that they highlighted both the technical improvements of MorphoNet 2.0 and its potential impact for the broader community working with complex 3D+t microscopy datasets. We particularly appreciate the recognition of our efforts to make advanced segmentation and tracking tools accessible to non-expert users through a user-friendly and locally installable interface, and for pointing out the importance of error detection and correction in the iterative analysis workflow. The reviewer’s appreciation of the value of integrating unsupervised quality metrics to support this process is especially meaningful to us, as this was a central motivation behind the development of MorphoNet 2.0. We hope the tool will indeed facilitate more rigorous and reproducible analyses, and we are encouraged by the reviewer’s positive assessment of its utility for the community.
One of the key contributions of the work is the unsupervised metrics that MorphoNet 2.0 offers for segmentation quality assessment. These metrics are used in the use cases to identify low-quality instances of segmentation in the provided datasets, so that they can be improved with plugins directly in MorphoNet 2.0. However, not enough consideration is given to demonstrating that optimizing these metrics leads to an improvement in segmentation quality. For example, in Use Case 1, the authors report their metrics of interest (Intensity offset, Intensity border variation, and Nuclei volume) for the uncurated silver truth, the partially curated and fully curated datasets, but this does not evidence an improvement in the results. Additional plotting of the distribution of these metrics on the Gold Truth data could help confirm that the distribution of these metrics now better matches the expected distribution.
Similarly, in Use Case 2, visual inspection leads us to believe that the segmentation generated by the Cellpose + Deli pipeline (shown in Figure 4d) is an improvement, but a direct comparison of agreement between segmented masks and masks in the published data (where the segmentations overlap) would further evidence this.
We agree that demonstrating the correlation between metric optimization and real segmentation improvement is essential. We will add new analysis comparing the distributions of the unsupervised metrics with the gold truth data before and after curation. Additionally, we will provide overlap scores where ground truth annotations are available, confirming the improvement. We will also explicitly discuss the limitation of relying solely on unsupervised metrics without complementary validation.
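For readers curious what such an overlap comparison could look like in practice, here is a minimal sketch (not MorphoNet's actual code; the array and function names are assumptions) that scores each ground-truth cell in a 3D label image by the best intersection-over-union achieved by any predicted mask:

```python
import numpy as np

def per_cell_best_iou(pred, gt):
    """For each ground-truth label, report the best IoU reached by any overlapping predicted mask."""
    scores = {}
    for gt_id in np.unique(gt):
        if gt_id == 0:  # 0 is background
            continue
        gt_mask = gt == gt_id
        best = 0.0
        for pred_id in np.unique(pred[gt_mask]):  # only predicted labels that overlap this cell
            if pred_id == 0:
                continue
            pred_mask = pred == pred_id
            inter = np.logical_and(gt_mask, pred_mask).sum()
            union = np.logical_or(gt_mask, pred_mask).sum()
            best = max(best, inter / union)
        scores[int(gt_id)] = best
    return scores

# Comparing curated vs. uncurated segmentations against a gold truth would then
# amount to comparing the two resulting score distributions.
```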
We would appreciate the authors addressing the risk of decreasing the quality of the segmentations by applying circular logic with their tool; MorphoNet 2.0 uses unsupervised metrics to identify masks that do not fit the typical distribution. A model such as StarDist can be trained on the "good" masks to generate more masks that match the most common type. This leads to a more homogeneous segmentation quality, without consideration for whether these metrics actually optimize the segmentation
We thank the reviewer for this important and insightful comment. It raises a crucial point regarding the risk of circular logic in our segmentation pipeline. Indeed, relying on unsupervised metrics to select “good” masks and using them to train a model like StarDist could lead to reinforcing a particular distribution of shapes or sizes, potentially filtering out biologically relevant variability. This homogenization may improve consistency with the chosen metrics, but not necessarily with the true underlying structures.
We fully agree that this is a key limitation to be aware of. We will revise the manuscript to explicitly discuss this risk, emphasizing that while our approach may help improve segmentation quality according to specific criteria, it should be complemented with biological validation and, when possible, expert input to ensure that important but rare phenotypes are not excluded.
In Use case 5, the authors include details that the errors were corrected by "264 MorphoNet plugin actions ... in 8 hours actions [sic]". The work would benefit from explaining whether this is 8 hours of human work, trying plugins and iteratively improving, or 8 hours of compute time to apply the selected plugins.
We will clarify that the “8 hours” refer to human interaction time, including exploration, testing, and iterative correction using plugins.
Reviewer #2 (Public review):
Summary:
This article presents Morphonet 2.0, a software designed to visualise and curate segmentations of 3D and 3D+t data. The authors demonstrate their capabilities on five published datasets, showcasing how even small segmentation errors can be automatically detected, easily assessed, and corrected by the user. This allows for more reliable ground truths, which will in turn be very much valuable for analysis and training deep learning models. Morphonet 2.0 offers intuitive 3D inspection and functionalities accessible to a non-coding audience, thereby broadening its impact.
Strengths:
The work proposed in this article is expected to be of great interest to the community by enabling easy visualisation and correction of complex 3D(+t) datasets. Moreover, the article is clear and well written, making MorphoNet more likely to be used. The goals are clearly defined, addressing an undeniable need in the bioimage analysis community. The authors use a diverse range of datasets, successfully demonstrating the versatility of the software.
We would also like to highlight the great effort that was made to clearly explain which type of computer configurations are necessary to run the different datasets and how to find the appropriate documentation according to your needs. The authors clearly carefully thought about these two important problems and came up with very satisfactory solutions.
We would like to sincerely thank the reviewer for their positive and thoughtful feedback. We are especially grateful that they acknowledged the clarity of the manuscript and the potential value of MorphoNet 2.0 for the community, particularly in facilitating the visualization and correction of complex 3D(+t) datasets. We also appreciate the reviewer’s recognition of our efforts to provide detailed guidance on hardware requirements and access to documentation—two aspects we consider crucial to ensuring the tool is both usable and widely adopted. Their comments are very encouraging and reinforce our commitment to making MorphoNet 2.0 as accessible and practical as possible for a broad range of users in the bioimage analysis community.
Weaknesses:
There is still one concern: the quantification of the improvement of the segmentations in the use cases and, therefore, the quantification of the potential impact of the software. While it appears hard to quantify the quality of the correction, the proposed work would be significantly improved if such metrics could be provided.
The authors show some distributions of metrics before and after segmentations to highlight the changes. This is a great start, but there seem to be two shortcomings: first, the comparison and interpretation of the different distributions does not appear to be trivial. It is therefore difficult to judge the quality of the improvement from these. Maybe an explanation in the text of how to interpret the differences between the distributions could help. A second shortcoming is that the before/after metrics displayed are the metrics used to guide the correction, so, by design, the scores will improve, but does that accurately represent the improvement of the segmentation? It seems to be the case, but it would be nice to maybe have a better assessment of the improvement of the quality.
We thank the reviewer for this constructive and important comment. We fully agree that assessing the true quality improvement of segmentation after correction is a central and challenging issue. While we initially focused on changes in the unsupervised quality metrics to illustrate the effect of the correction, we acknowledge that interpreting these distributions may not be straightforward, and that relying solely on the metrics used to guide the correction introduces an inherent bias in the evaluation.
To address the first point, we will revise the manuscript to provide clearer guidance on how to interpret the changes in metric distributions before and after correction, with additional examples to make this interpretation more intuitive.
Regarding the second point, we agree that using independent, external validation is necessary to confirm that the segmentation has genuinely improved. To this end, we will include additional assessments using complementary evaluation strategies on selected datasets where ground truth is accessible, to compare pre- and post-correction segmentations with an independent reference. These results reinforce the idea that the corrections guided by unsupervised metrics generally lead to more accurate segmentations, but we also emphasize their limitations and the need for biological validation in real-world cases.
Reviewer #3 (Public review):
Summary:
A very thorough technical report of a new standalone, open-source software for microscopy image processing and analysis (MorphoNet 2.0), with a particular emphasis on automated segmentation and its curation to obtain accurate results even with very complex 3D stacks, including timelapse experiments.
Strengths:
The authors did a good job of explaining the advantages of MorphoNet 2.0, as compared to its previous web-based version and to other software with similar capabilities. What I particularly found more useful to actually envisage these claimed advantages is the five examples used to illustrate the power of the software (based on a combination of Python scripting and the 3D game engine Unity). These examples, from published research, are very varied in both types of information and image quality, and all have their complexities, making them inherently difficult to segment. I strongly recommend the readers to carefully watch the accompanying videos, which show (although not thoroughly) how the software is actually used in these examples.
We sincerely thank the reviewer for their thoughtful and encouraging feedback. We are particularly pleased that the reviewer appreciated the comparative analysis of MorphoNet 2.0 with both its earlier version and existing tools, as well as the relevance of the five diverse and complex use cases we selected. Demonstrating the software’s versatility and robustness across a variety of challenging datasets was a key goal of this work, and we are glad that this aspect came through clearly. We also appreciate the reviewer’s recommendation to watch the accompanying videos, which we designed to provide a practical sense of how the tool is used in real-world scenarios. Their positive assessment is highly motivating and reinforces the value of combining scripting flexibility with an interactive 3D interface.
Weaknesses:
Being a technical article, the only possible comments are on how methods are presented, which is generally adequate, as mentioned above. In this regard, and in spite of the presented examples (chosen by the authors, who clearly gave them a deep thought before showing them), the only way in which the presented software will prove valuable is through its use by as many researchers as possible. This is not a weakness per se, of course, but just what is usual in this sort of report. Hence, I encourage readers to download the software and give it time to test it on their own data (which I will also do myself).
We fully agree that the true value of MorphoNet 2.0 will be demonstrated through its practical use by a wide range of researchers working with complex 3D and 3D+t datasets. In this regard, we will improve the user documentation and provide a set of example datasets to help new users quickly familiarize themselves with the platform. We are also committed to maintaining and updating MorphoNet 2.0 based on user feedback to further support its usability and impact.
In conclusion, I believe that this report is fundamental because it will be the major way of initially promoting the use of MorphoNet 2.0 by the objective public. The software itself holds the promise of being very impactful for the microscopists' community.
-
-
faculty.washington.edu faculty.washington.edu
-
There are two adjacent commands that do very different things.
I observed this design problem in several instances in different applications. Recently, I was playing a video game where I tried purchasing a new car and I was one click away from destroying my previous car that I worked hard for (you're only allowed to have one). There was just one warning, which looked like any other generic warning in the game, that this would happen, and I almost pressed yes, which would've been pitiful. It's really important when designing interfaces to pay attention to these little things that can make or break the user experience.
-
-
www.myaslashartistry.com www.myaslashartistry.com
-
Why Lash Strips Are a Game-Changer
-
-
docdrop.org docdrop.org
-
In schools where dance and cheer is given less status than football and basketball and young women’s bodies are perceived as fair game for commentary
There truly is a blueprint for how women are expected to act and look, more so than for men, because in my opinion society still suffers from treating women as "decor" next to a man to help him look better. Muscular or "bigger"-sized women should be normalized, not just in sports or similar groups, but also in the media and society as a whole.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
What experiences do you have of social media sites making particularly good recommendations for you?
One algorithm that I really enjoy is the daily music recommendation on Netease Music, and I know its development trajectory really well. The recommendation algorithm was first introduced into the app about 6-7 years ago, and it only had a "daily recommendation" feature of 30 songs based on the songs the user had listened to and added to playlists before. It wasn't a big feature and was not very advanced back then, but the algorithm evolved rapidly and started to hit more and more people's sweet spot by assessing both short-term data (like the songs the user listened to the day before) and long-term data (like the song genres the user has been interested in this month), and by adding more accurate and precise labels to songs. Sometimes the labels are so precise that the algorithm can deduce the game my songs come from and recommend other pieces from the same game. About 2 years ago, the algorithm received another great upgrade and started offering sub-category recommendations. For example, there are classical and J-pop recommendations that only provide those kinds of pieces based on my interests, yet the effectiveness of the recommendations is still top-tier. It's not an exaggeration to say that the recommendation algorithm of Netease Music perfectly addresses the pain of finding new songs that fit my taste. I also really appreciate the data privacy of the algorithm, since it does not ask for private information like location, contacts, etc. Even a newly created account with no info can use the algorithm.
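Purely as an illustration of the short-term/long-term blending described above, here is a toy sketch; it is not Netease's actual algorithm, and all names, weights, and data are made up.

```python
from collections import Counter

def recommend(recent_plays, monthly_genre_counts, song_labels,
              w_recent=0.6, w_long=0.4, top_n=30):
    """Rank candidate songs by mixing short-term signals (tags of recently played songs)
    with long-term signals (genre counts accumulated over the month)."""
    recent_tags = Counter(tag for song in recent_plays for tag in song_labels.get(song, ()))
    scores = {}
    for song, tags in song_labels.items():
        if song in recent_plays:  # don't re-recommend what was just played
            continue
        short_term = sum(recent_tags[t] for t in tags)
        long_term = sum(monthly_genre_counts.get(t, 0) for t in tags)
        scores[song] = w_recent * short_term + w_long * long_term
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Example usage with made-up data:
labels = {"song_a": {"j-pop"}, "song_b": {"classical"}, "song_c": {"j-pop", "game-ost"}}
print(recommend(["song_a"], {"classical": 5, "j-pop": 2}, labels))
```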
-
-
learnenglish.britishcouncil.org learnenglish.britishcouncil.org
-
quiz
a game or competition in which you answer questions 問答比賽,智力競賽
- UK A lot of pubs have quiz nights once or twice a week. 許多酒吧每週都會有一兩次的智力競賽之夜。
-
-
-
Mangione took a software programming internship after high school at Maryland-based video game studio Firaxis, where he fixed bugs on the hit strategy game Civilization 6, according to a LinkedIn profile. Firaxis’ parent company, Take-Two Interactive, said it would not comment on former employees.
I fail to see how this is relevant to the discussion. I feel like his software programming internship doesn’t explain why he ended up "killing" Brian Thompson.
-
-
hiphopdx.com hiphopdx.com
-
You going against Compton????
Kendrick automatically has the whole west coast on his side
-
loyalty to his hometown
loyalty is seemingly very important in Compton culture
-
-
www.theringer.com www.theringer.com
-
Jay and Nas needed to overcome each other to elevate themselves to the top of the pile.
Drake and Kendrick are undoubtedly among the top of the game right now
-
height of Drake’s and Kendrick’s respective careers?
Very debatable; both have grown a ton since 2013, only getting better. Kendrick has been putting out music less consistently and has been experimenting more, but that doesn't mean he's not at the top of his game now.
-
-
news.northeastern.edu news.northeastern.edu
-
“sneak dissing.”
an extremely popular tactic in today's rap game
-
Competition is “intrinsic to hip-hop” culture, Forman says.
it has been said before that hip hop is a game, aka the rap "game"
-
-
people.com people.com
-
Lamar denounced the notion of himself and Drake being on the same level.
Kendrick rapped, "motherfuck the big 3, it's just big me", the big 3 being Drake, Kendrick, and J. Cole, all of whom are considered to be the best rappers in the game by a long shot.
-
-
www.rollingstone.com www.rollingstone.com
-
evoking other people in a beef, especially a dead person, is fair game.
The article said earlier that "there are no rules in rap beef", which is a contradiction.
-
why artists are coming out against him
Drake has made a lot of enemies in his time in the rap game. His song "Fake Love" says, "i got fake people showing fake love to me", implying that he knows some of his friends aren't really his friends.
-
Drake has long been accused of being a culture vulture
A longstanding diss of Drake, who is light-skinned; in one of his songs he says, "i used to get bullied for being black and when i get here i'm not black enough", "here" being the rap game.
-
-
www.biorxiv.org www.biorxiv.org
-
Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.
Learn more at Review Commons
Reply to the reviewers
Reviewer #1 (Evidence, reproducibility and clarity)
*This study examines the reorganization of the microtubule (MT) cytoskeleton during early neuronal development, specifically focusing on the establishment of axonal and dendritic polarity. Utilizing advanced microscopy techniques, the authors demonstrate that stable microtubules in early neurites initially exhibit a plus-end-out orientation, attributed to their connection with centrioles. Subsequently, these microtubules are released and undergo sliding, resulting in a mixed-polarity orientation in early neurites. Furthermore, the study elegantly illustrates the spatial segregation of microtubules in dendrites based on polarity and stability. The experiments are rigorously executed, and the microscopy data are presented with exceptional clarity. The following are my primary concerns that warrant further consideration by the authors. *
-
Potential Bias in the MotorPAINT Assay: Kinesin-1 and kinesin-3 motors exhibit distinct preferences for post-translationally modified (PTM) microtubules. Given that kinesin-1 preferentially binds to acetylated microtubules over tyrosinated microtubules in the MotorPAINT assay, the potential for bias in the results arises. Have the authors explored the use of kinesin-3, which favors tyrosinated microtubules, to corroborate the observed microtubule polarity?
We thank the reviewer for the careful assessment of our manuscript. As the reviewer noted, it has indeed been demonstrated that kinesin-1 prefers microtubules marked by acetylation (Cai et al., PLoS Biol 2009; Reed et al., Curr Biol 2006) and kinesin-3 prefers microtubules marked by tyrosination in cells (Guedes-Dias et al., Curr Biol 2019; Tas et al., Neuron 2017); however, these preferences are limited in vitro, as demonstrated for example in Sirajuddin et al. (Nat Cell Biol 2014). When motor-PAINT was introduced, it was verified that purified kinesin-1 moves over both acetylated and tyrosinated microtubules with no apparent preference in this assay (Tas et al., Neuron 2017). This could be due to the more in vitro-like nature of the motor-PAINT assay (e.g. some MAPs may be washed away) and/or because of the addition of Taxol during the gentle fixation step, which converts all microtubules into those preferred by kinesin-1. We will clarify this in the text.
Planned revisions:
- We will clarify the lack of kinesin-1 selectivity in motor-PAINT assays in the text by adding the following sentence in the main text when introducing motor-PAINT: Importantly, while kinesin-1 has been shown to selectively move on stable, highly-modified microtubules in cells (Cai et al., PLoS Biol 2009; Reed et al., Curr Biol 2006), this is not the case after motor-PAINT sample preparation (Tas et al., Neuron 2017).
Axon-Like Neurites in Stage 2b Neurons: The observation of axon-like neurites in Stage 2b neurons, characterized by an (almost) uniformly plus-end-out microtubule organization, is noteworthy. Have the authors confirmed this polarity using end-binding (EB) protein tracking (e.g., EB1, EB3) in Stage 2b neurons? Do these neurites display distinct morphological features, such as variations in width? Furthermore, do they consistently differentiate into axons when tracked over time using live-cell EB imaging, rather than the MotorPAINT assay? Could stable microtubule anchoring impede free sliding in these neurites or restrict sliding into them? Investigating microtubule sliding dynamics in these axon-like neurites would provide valuable insights.
We thank the reviewer for highlighting this finding. Early in development, cultured neurons are known to transiently polarize and have axon-like neurites that may or may not develop into the future axon (Burute et al., Sci Adv 2022; Schelski & Bradke, Sci Adv 2022; Jacobson et al., Neuron 2006). In the absence of certain molecular or physical factors (e.g. Burute et al., Sci Adv 2022; Randlett et al., Neuron 2011), this transient polarization is seemingly random and as such, we do not expect the axon-like neurites in stage 2b neurons to necessarily become the axon. Interestingly, anchoring stable microtubules in a specific neurite using cortically-anchored StableMARK (Burute et al., Sci Adv 2022) or stabilizing microtubules in a specific neurite using Taxol (Witte et al., JCB 2008) has been shown to promote axon formation, but these stable microtubules have slower turnover (perhaps necessitating the use of laser severing as in Yau et al., J Neurosci 2016) and may not always bear EB comets given that EB comets are less commonly seen at the ends of stable microtubules (Jansen et al., JCB 2023).
Planned revision:
- We will add additional details to the text to clarify the likely transient nature of this polarization in agreement with previous literature and specify that they are otherwise not morphologically distinct.
- We will perform additional EB3 tracking experiments in Stage 2b neurons to examine potential differences between neurites.
*Taxol and Microtubule Sliding: Taxol-induced microtubule stabilization is known to induce the formation of multiple axons. Does taxol treatment diminish microtubule sliding and prevent polarity reversal in minor neurites, thereby facilitating their development into axons? *
We thank the reviewer for this interesting suggestion. Taxol converts all microtubules into stable microtubules. Given that the initial neurites tend to be of mixed polarity, having stable microtubules pointing the "wrong" way may impede sliding and polarity sorting. Alternatively, since it is precisely the stable microtubules that we see sliding between and within neurites using StableMARK, Taxol may also increase the fraction of microtubules undergoing sliding. Because of this, it is not straightforward to predict how Taxol affects microtubule (re-)orientation and sliding. Preliminary motor-PAINT experiments do suggest that the multiple axons induced by Taxol treatment all contain predominantly plus-end-out microtubules, as expected, and that this is the case from early in development. We will further develop these findings to include them in our manuscript.
Planned revision:
- We have already performed some experiments in which we treat neurons with 10 nM Taxol and verify that we observe the formation of multiple axons by motor-PAINT. We will perform additional experiments in which we add this low dose of Taxol to the cells and determine its effect on microtubule sliding dynamics.
*Sorting of Minus-End-Out Microtubules (MTs) in Developing Axons: Traces of minus-end-out MTs are observed proximal to the soma in both Stage 2b axon-like neurites and Stage 3 developing axons (Figure S4). Does this indicate a clearance mechanism for misoriented MTs during development? If so, is this sorting mechanism specific to axons? Could dynein be involved? Pharmacological inhibition of dynein (e.g., ciliobrevin-D or dynarrestin) could assess whether blocking dynein disrupts uniform MT polarity and axon formation. *
We indeed think that a clearance mechanism is involved for removing misoriented microtubules in the axon after axon specification. Many motor proteins have been implicated in the polarity sorting of microtubules in neurons and for axons, dynein is believed to play a role (Rao et al., Cell Rep 2017; del Castillo et al., eLife 2015; Schelski & Bradke, Sci Adv 2022). A few of these studies already employed ciliobrevin, noting that it increases the fraction of minus-end-out microtubules in axons (Rao et al., Cell Rep 2017) and reduces the rate of retrograde flow of microtubules in immature neurites (Schelski & Bradke, Sci Adv 2022). These findings are in line with the suggestion of the reviewer. Interestingly, however, as we highlight in the discussion, the motility we observe for polarity reversal is extremely slow on average (~60 nm/minute) because the microtubule end undergoes bursts of motility and periods in which it appears to be tethered and rather immobile. Given that most neurites are non-axon-like, we assume these sliding events are mostly not taking place in axons or axon-like neurites. These events may thus be orchestrated by other motor proteins (e.g. kinesin-1, kinesin-2, kinesin-5, kinesin-6, and kinesin-12) that have been implicated in microtubule polarity sorting in neurons. We do observe retrograde sliding of stable microtubules in these neurites at a median speed of ~150 nm/minute, which is again much slower than typical motor speeds and occurs in almost all neurites and not specifically in one or two axon-like neurites. It is thus unclear which motors may be involved, and it is difficult to predict how any drug treatments would affect microtubule polarity.
Dissecting the mechanisms of microtubule sliding will require many more experiments and will first require the recruitment and training of a new PhD student or postdoc. Therefore, we feel this falls outside the scope of the current work, which carefully maps the microtubule organization during neuronal development and demonstrates the active polarity reversal of stable microtubules during this process.
Planned revision:
- We will expand our discussion of the potential mechanisms facilitating polarity sorting in axons and axon-like neurites in the discussion.
Impact of Kinesin-1 Rigor Mutants on MT Polarity and Dynamics: Would the expression of kinesin-1 rigor mutants alter MT dynamics and polarity? Validation with alternative methods, such as microtubule photoconversion, would be beneficial.
It is important to note that StableMARK and its effects on microtubule stability have been extensively verified in the paper in which it was introduced (Jansen et al., JCB 2023). At low expression levels (where StableMARK has a speckled distribution along microtubules), StableMARK does not alter the stability of microtubules (e.g., they are still disassembled in response to serum starvation), alter their post-translational modification status or their distribution in the cell, or impede the transport of cargoes along them. Given that we chose to image neurons with very low expression levels of StableMARK (as inferred by the speckled distribution along microtubules), we expect its effects on the microtubule cytoskeleton to be minimal.
Planned revision:
- We will clarify the potential effects of StableMARK in the manuscript. We will perform experiments with photoactivatable tubulin to examine whether we still see microtubules that live for over 2 hours. We will furthermore examine whether it allows us to see microtubule sliding between neurites similar to work performed in the Gelfand lab (Lu et al., Curr Biol 2013).
*Molecular Motors Driving MT Sliding: Which specific motors drive MT sliding in the soma and neurites? If a motor drives minus-end-out MTs into neurites, it must be plus-end-directed. The discussion should clarify the polarity of the involved motors to strengthen the conclusions. *
We thank the reviewer for highlighting this point and will improve our discussion to clarify the polarity of the involved motors.
Planned revision:
- We will expand our discussion of the motors potentially involved in sliding microtubules when revising the manuscript.
Stability of Centriole-Derived Microtubules: Microtubules emanating from centrioles are typically young and dynamic. How do they acquire acetylation and stability at an early stage? Do centrioles exhibit active EB1/EB3 comets in Stage 1/2a neurons? If these microtubules are severed from centrioles, could knockdown of MT-severing proteins (e.g., Katanin, Spastin, Fidgetin) alter microtubule polarity during neuronal development? A brief discussion would be valuable.
We thank the reviewer for raising these interesting questions and suggestions. As suggested, we will include a brief discussion of these issues. What is known about the properties of stable microtubules is limited, so it is currently unclear how they are made. For example, we do not know if they are converted from labile microtubules or nucleated by a distinct pathway. If they are nucleated by a distinct pathway, do these microtubules grow in a similar manner as labile microtubules and do they have EB comets at their plus-ends (given that EB compacts the lattice (Zhang et al., Cell 2015, PNAS 2018) and stable microtubules have an expanded lattice in cells (de Jager et al., JCB 2025))? If they are converted, does something first cap their plus-end to limit further growth (given that EB comets are rarely observed at the ends of stable microtubules (Jansen et al., JCB 2023))?
We also do not know how the activity of the tubulin acetyltransferase αTAT1 is regulated. Is its access to the microtubule lumen regulated or is its enzymatic activity stimulated by some means (e.g., microtubule lattice conformation or a molecular factor)?
We find the possibility that microtubule severing enzymes release these stable microtubules from the centrioles very exciting and hope to test the effects of their absence on microtubule polarity in the future. We will discuss this in the manuscript as suggested.
Planned revision:
-
We will expand our discussion about the centriole-associated stable microtubules in the revised manuscript.

Minor Points
-
In Movies 3 and 4, please use arrowheads or pseudo-coloring to highlight microtubules detaching from specific points. In Movie 5, please mark the stable microtubule that rotates within the neurite. These annotations would enhance clarity.
Planned revision:
- We will add arrowheads/traces to the movies to enhance clarity.
The title states: 'Stable microtubules predominantly oriented minus-end-out in the minor neurites of Stage 2b and 3 neurons.' However, given that the minus-end-out percentage increases after nocodazole treatment but only reaches a median of 0.48, 'predominantly' may be an overstatement. Please consider rewording.
We thank the reviewer for catching this mistake and will adjust the statement to better reflect the median value.
Planned revision:
- We will reword this statement in the revised text.
*Please compare the StableMARK system with the K560Rigor-SunTag approach described by Tanenbaum et al. (2014). What are the advantages of StableMARK over the SunTag method? *
While the SunTag is certainly a powerful tool to visualize molecules at low copy number, we believe that StableMARK is more appropriate than the K560Rigor-SunTag tool for our assays due to two main reasons. Firstly, K560Rigor-SunTag is based on the E236A kinesin-1 mutation, while StableMARK is based on the G234A mutation. These are both rigor mutations of kinesin-1 but behave differently; the E236A mutant is strongly bound to the microtubule in an ATP-like state (neck linker docked), while the G234A mutant is also strongly bound, but not in an ATP-like state (Rice et al., Nature 1999). This means that they may have different effects on or preferences of the microtubule lattice. Indeed, while StableMARK (G234A) has been shown to preferentially bind microtubules with an expanded lattice (Jansen et al., JCB 2023; de Jager et al., JCB 2025), this may not be the case for the E236A mutant. In support of this, it has been shown that, while nucleotide free kinesin-1 can expand the lattice of GDP-microtubules at high concentrations (>10% lattice occupancy) in vitro (Peet et al., Nat Nanotechnol 2018; Shima et al., JCB 2018), kinesin-1 in the ATP-bound state does not maintain this expanded lattice (Shima et al., JCB 2018). Thus, we expect the kinesin-1 rigor used by Tanenbaum et al. (Cell 2014) to not be specific for stable microtubules (with an expanded lattice) in cells. In addition, given the dense packing of microtubules in neurites (not well-established in developing neurites, but with an inter-microtubule distance of ~25 nm in axons and ~65 nm in dendrites (Chen et al., Nature 1992)), the very large size of the SunTag could be problematic. The K560Rigor-SunTag tool from Tanenbaum et al. (Cell 2014) is bound by up to 24 copies of GFP (each ~3 nm in size), meaning that it may obstruct or be obstructed by the dense microtubule network in neurites.
Planned revision:
- Given that, unlike the K560Rigor-SunTag construct, StableMARK has been carefully validated as a live-cell marker for stable microtubules, we believe that the above discussion goes beyond the scope of the manuscript.
Microscopy data (Movies 2, 3, and 4) show microtubule bundling with StableMARK labeling, which is absent in tubulin immunostaining. Could this be an artifact of ectopic StableMARK expression? If so, a brief note addressing this potential effect would be beneficial.
As with any overexpression, there is a risk of artifacts. We feel that in the cells presented, the risk of artifacts is limited because we have chosen neurons expressing StableMARK at very low levels. Prior work has demonstrated that in cells where StableMARK has a speckled appearance on microtubules, it has limited undesired effects on stable microtubules or the cargoes moving along them (Jansen et al., JCB 2023). Perhaps some of the apparent differences in the amount of bundling can be explained in that the expansion microscopy images shown may have less apparent bundling because of the improved z-resolution and thus optical sectioning. Any z-slice imaged using expansion microscopy will contain fewer microtubules, so bundling may be less obvious. If we compare the amount of bundling seen in StableMARK expressing cells with the amount of bundling of acetylated microtubules (a marker for stable microtubules) in DMSO/nocodazole treated (non-electroporated) cells imaged by confocal microscopy in Figure S7, we feel that the difference is not so large. Nonetheless, we can briefly address this potential effect in the text.
Planned revision:
- We will improve the transparency of the manuscript by briefly mentioning this in the text.

Reviewer #1 (Significance)
It is an important paper challenging established ideas of microtubule organization in neurons. It is important to the wide audience of cell and neurobiologists.
Reviewer #2 (Evidence, reproducibility and clarity)
*The manuscript uses state-of-the-art microscopy (e.g., expansion microscopy, motor-PAINT) to observe microtubule organization during early events of differentiation of cultured rat hippocampal neurons. The authors confirm previous work showing that microtubules in neurites and dendrites are of mixed polarity whereas they are of uniform plus-end-out polarity in axons. They show that stable microtubules (labeled with antibody against acetylated tubulin) are located in the central region of neurite cross-section across all differentiation stages. They show that acetylated microtubules are associated with centrioles early in differentiation but less so at later stages. And they show that stable microtubules can move from one neurite to another, presumably by microtubule sliding. *
Comments
-
*I found the manuscript difficult to read. There are lots of "segregations" of microtubules occurring over these stages of neuronal differentiation: segregation between the center of a neurite and the outer edge with respect to neurite cross-section, segregation between the region proximal to the cell body and the region distal to the cell body, and segregation over time (stages). The authors don't do a good job of distinguishing these and reporting the major findings in a way that is clear and straightforward. *
We thank the reviewer for their feedback and will go over the text to make it easier to read. Within neurites, we use the word 'segregated' in the manuscript to mean that the microtubules form two spatially separate populations across the width of the neurites (i.e., their cross-section if viewed in 3D). Because of variability seen in the neurites of this stage, this segregation does not always present as a peripheral vs. central enrichment of the different populations of microtubules as we sometimes observed two side-by-side populations instead. We will make sure that we properly define this in the manuscript to avoid any confusion.
When discussing other types of segregation, we tried to use different wording such as when discussing the proximal-distal distribution of microtubules with different orientations in axon-like neurites in this excerpt:
Sometimes these axons and axon-like neurites had a small bundle of minus-end-out microtubules proximal to the soma (Figure S4). This suggests that plus-end-out uniformity emerges distally first in these neurites, perhaps by retrograde sliding of these minus-end-out microtubules (see Discussion).
When discussing changes related to a particular stage, we instead aimed to list which stage we were talking about, such as seen in the discussion:
Emerging neurites of early stage 2 neurons already contain microtubules of both orientations and these are typically segregated. These emerging neurites also contain segregated networks of acetylated (stable) and tyrosinated (labile) microtubules. In later stage 2, stage 3, and stage 4 neurons, stable (nocodazole-resistant) microtubules are oriented more minus-end-out compared to the total (untreated) population of microtubules; however, in early stage 2 neurons, stable microtubules are preferentially oriented plus-end-out, likely because their minus-ends are still anchored at the centrioles at this stage. The fraction of anchored stable microtubules decreases during development, while the appearance of short stumps of microtubules attached to the centrioles suggests that these microtubules may be released by severing.
We appreciate the reviewer's concerns and will review the text carefully for clarity.
Planned revision:
- We will carefully go through the text when revising the manuscript to ensure that these distinctions are clear and consider using synonyms or other descriptors where they would enhance clarity.
*The major focus is on microtubule changes between stages 2a and 2b. This is introduced in the text and in the methods but not reflected in Figure 1A which should serve as an orientation of what is to come. It would be helpful to move the information about stages to the main text and/or Figure 1A. *
We thank the reviewer for pointing this out and will be more explicit about the distinction between stages 2a and 2b in the main text and make the suggested change to Figure 1A.
Planned revision:
- We will incorporate the suggested changes in the revised manuscript.
For Figure 1, the conclusions are generally supported by the data with the exception of the data for stage 2b in 1D and 1H. The images in D and the line scan in H suggest that for stage 2b, minus-end-out are on one edge whereas the plus-end-out are on the other edge of the neurite cross-section. But this is only true for one region along this example neurite. If the white line in D was moved proximal or distal along the neurite, the line scan for stage 2b would look like those of stages 2a and 3.
We thank the reviewer for noting this in the figure. For these earlier stages in neuronal development, the distribution of different types of microtubules within the neurite is more variable and does not always adhere to the central-peripheral distribution described for more mature neurons (Tas et al., Neuron 2017). We did not intend to suggest that neurites of stage 2b neurons consistently have a different radial distribution of microtubules of opposite orientation, but rather that microtubules of the same orientation tend to bundle together. Sometimes this bundling produces a central or peripheral enrichment, as described for mature neurons (Tas et al., Neuron 2017) and as seen in Figure 1D-F at certain points along the length of the neurites, and sometimes the bundling simply produces two side-by-side populations. To reflect this diversity, we chose two different examples in the figure. The line scans presented in Figure 1H were taken approximately at the midpoint of the presented ROIs. In addition, as our imaging in this case is two-dimensional, we do not want to make explicit claims about the radial distribution of the different populations of microtubules.
Planned revision:
- We will adjust our description of this figure in the main text to be more explicit about how we interpret these results. We will ensure that it is apparent that we do not think there is a specific radial distribution of microtubules depending on the developmental stage.
*For Figure 2, I found it difficult to relate panels A-F to panels G-J. I recommend combining 2G-J with 3A-B for a separate figure focused on the orientation of stable microtubules across different stages. *
We thank the reviewer for this suggestion and will take it into consideration when preparing the revised manuscript, making sure that our figure organization is well justified.
For Figure 3, it is difficult to reconcile the traces with the corresponding images - that is, there are many acetylated microtubules in the top view image that appear to contact centrioles but are not in the tracing. Perhaps the tracings would more accurately reflect the localization of the acetylated microtubules in the top view images if a stack of images was shown rather than the max projections. Or if the authors were to stain for CAMSAPs to identify non-centrosomal microtubules. I find the data unconvincing but I do believe their conclusion because it is consistent with published data in the field. The data need to be quantified.
We thank the reviewer for noting this. Importantly, the tracing was done on a three-dimensional stack of images, whereas we present maximum projections of a few slices in Figure 3C for easy visualization. Projection artifacts indeed make it look as though some additional microtubules are attached to the centrioles, whereas in the three-dimensional stacks it is apparent that they are not. We can include the z-stacks as supplementary material so that readers can also verify this themselves. We will additionally clarify that this is the case in the text related to Figure 3C.
Planned revision:
- We will better explain how the tracing was done in the methods section and make a brief note of the projection artifacts in the main text.
- We will also include the z-stacks as supplementary data.
*I have a major concern with the conclusions of Figure 4. Here the authors use StableMARK to argue that microtubules do not depolymerize in one neurite and then repolymerize in another neurite but rather can be moved (presumably by sliding) from one neurite to another. The problem is that StableMARK-decorated microtubules do not depolymerize. So yes, StableMARK-decorated microtubules can move from one neurite to another but that does not say anything about what normally happens to microtubules during neuronal differentiation. In addition, the text says that the focus on Figure 4 is on how microtubules change between stages 2a and 2b but data is only shown for stage 2b. *
As noted by the reviewer, StableMARK can indeed hyperstabilize microtubules when over-expressed; however, it is important to note that this strongly depends on the level of overexpression of the marker. This is discussed in detail in the paper introducing StableMARK, where it is described that at low expression levels, StableMARK does not alter the stability of microtubules (i.e., StableMARK decorated microtubules can still depolymerize/disassemble and they are disassembled in response to serum starvation), alter their post-translational modification status or their distribution in the cell, or impede the transport of cargoes along them (Jansen et al. JCB 2023). Despite this, we agree that it is important to validate these findings in our experimental system (primary rat hippocampal neurons) and so we plan to perform experiments with photoactivatable tubulin to verify the long lifetime of stable microtubules and aim to also observe microtubule sliding (similar to assays performed in the Gelfand lab (Lu et al., Curr Biol 2013)) in the absence of StableMARK.
Planned revision:
- We will confirm our findings using photoactivatable tubulin. We hope to demonstrate the long lifetime of the microtubules in this case and observe the sliding of microtubules by another means.
- We will also revise the text to better explain the potential impacts of StableMARK and that we chose the lowest expressing cells we could find so early after electroporation.
*The data are largely descriptive and it is of course important to first describe things before one can dive into mechanism. But most of the findings confirm previous work and new findings are limited to showing that e.g. microtubule segregation appears earlier than previously observed. *
Our study is the first to use Motor-PAINT to carefully map changes in microtubule orientations during neuronal development. Furthermore, it is the first to use the recently introduced live-cell marker for stable microtubules to directly demonstrate the active polarity reversal of stable microtubules during this process.
Optional: It would be nice if the authors could investigate some potential mechanisms. For example, does knockdown or knockout of severing enzymes prevent the loss of centriolar microtubules shown in Figure 3? Does knockdown or knockout of kinesin-2 or EB1 prevent the reorientation of microtubules (Chen et al 2014)?
We agree with the reviewer that these are exciting experiments to perform, and we hope to unravel the mechanisms underlying microtubule reorganization in future work. However, this will require many more experiments, as well as the recruitment and training of a new PhD student or postdoc, given that the first author has left the lab. Therefore, we feel that this falls outside the scope of the current work, which carefully maps the microtubule organization during neuronal development and demonstrates the active polarity reversal of stable microtubules during this process.
*Overall, the methods are presented in such a way that they can be reproduced. One exception is in the motor paint sample prep section: is it three washes for 1 min each or three washes over 1 min? *
We thank the reviewer for pointing out this mistake and will adjust this step in the methods section accordingly.
Planned revision:
- We will revise the methods section to read 'washed three times for 1 minute each'.
*No statistical analysis is provided. The spread of the data in the violin plots is very large and it is difficult to ascertain how strongly one should make conclusions based on different data spreads between different conditions. *
We thank the reviewer for noting this and will add statistical tests to the graphs showing the fraction of minus-end-out microtubules in different stages/conditions.
Planned revision:
- We will include statistical tests in the specified graphs.
For Figure S5, the excluded data (axons and axon-like neurites) should also be shown.
We thank the reviewer for this suggestion and will include this data.
Planned revision:
- We will adjust this supplemental figure to also include the specified data.
*For the movies, it would be helpful to have the microtubule moving from one neurite to another identified in some way as it is difficult to tell what is going on. *
We thank the reviewer for pointing this out.
Planned revision:
- We will trace the microtubule in this movie to enhance clarity.

Reviewer #2 (Significance)
A strength of the study is the state-of-the-art microscopy (e.g., expansion microscopy, motor-PAINT) and its application to a classic experimental model (rat hippocampal neurons). The information will be useful to those interested in the details of neuronal differentiation. A limitation of the study is that it appears to mostly confirm previous findings in the field (microtubule segregation, loss of centriolar anchoring, microtubule sliding). The advance to the field is that the manuscript shows that these events occur earlier in differentiation than previously known.
Reviewer #3 (Evidence, reproducibility and clarity)
*The study by Iwanski and colleagues explores the establishment of the specific organisation of the neuronal microtubule cytoskeleton during neuronal differentiation. They use cultures of dissociated primary hippocampal rat neurons as a model system, and apply the optimised motor-PAINT technology, expansion microscopy/immunofluorescence and live cell imaging to investigate the polarity establishment and the distribution of differentially modified microtubules during early development. *
They show that in young neurons microtubules are of mixed polarity, but at this stage already the stable (acetylated) microtubules are preferentially oriented plus-end-out, and are connected to the centrioles. In later stages, the stable microtubules are released from the centrioles and reverse their orientation by moving around inside the cell body and the neurites.
*Overall, the conclusions are well supported by the presented data. The experiments are conducted thoroughly, the figures are clearly presented (for minor comments, see below) and the manuscript is well and clearly written. *
Major comments
-
What is the proportion of neurons with different types of neurites (axon-like, non-axon-like) in stage 2b? (middle paragraph page 5 and Fig 1E). Please provide a quantification. * How was the quantification in Fig 2B-D-F done? Why do the curves all start at 0? Please provide a scheme explaining these measurements. Furthermore, the data in Fig 2B do not reflect the statement "the segregation (...) was less evident" than in later stages (top of page 6): while it is less evident than in stage 2b, it is extremely similar to stage 3. Please revise accordingly.*
We thank the reviewer for pointing out these important details. We will make the suggested changes in the text, adding the proportion of neurons with different types of neurites and adjusting the statement mentioned.
The radial intensity distributions were quantified as described in Katrukha et al. (eLife 2021). In the methods section, we describe the process in brief:
To analyze the radial distribution of acetylated and tyrosinated microtubules in expanded neurites, deconvolved image stacks were processed using custom scripts in ImageJ (v1.54f) and MATLAB (R2024b) as described in detail elsewhere (Katrukha et al., 2021). Briefly, on maximum intensity projections (XY plane), we drew polylines of sufficient thickness (300 px) to segment out neurite portions 44 µm (10 µm when corrected for expansion factor) in length proximal to the cell soma. Using Selection > Straighten on the corresponding z-stacks generated straightened B-spline interpolated stacks of the neurite sections. These z-stacks were then resliced perpendicularly to the neurite axis (YZ-plane) to visualize the neurite cross-section. From this, we could semi-automatically find the boundary of the neurite in each slice using first a bounding rectangle that encompasses the neurite (per slice) and then a smooth closed spline (approximately oval). To build a radial intensity distribution from neurite border to center, closed spline contours were then shrunken pixel by pixel in each YZ-slice while measuring ROI area and integrated fluorescence intensity. From this, we could ascertain the average fluorescence intensity per contour iteration, allowing us to calculate a radial intensity distribution by calculating the radius corresponding to each area (assuming the neurite cross-section is circular).
The curves thus all start at 0 because no intensity "fits" into a circle of radius 0 and then gradually increase because very few microtubules "fit" into circles with the smallest radii.
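To make this measurement more concrete, a rough Python sketch of this kind of border-to-center profile is shown below. It is not the published ImageJ/MATLAB pipeline from Katrukha et al. (2021); the closed spline contours are approximated here by iterative erosion of a hypothetical binary neurite mask, and the function names are illustrative only.

```python
# A minimal illustrative sketch (not the authors' actual scripts) of the
# shrinking-contour measurement described above, applied to one YZ cross-section.
import numpy as np
from scipy import ndimage


def radial_intensity_profile(cross_section: np.ndarray, neurite_mask: np.ndarray):
    """Integrated intensity inside progressively smaller contours.

    Returns (radius, integrated_intensity), where radius is that of a circle with
    the same area as the contour. The curve drops to 0 as the contour shrinks to
    radius 0, which is why the plotted distributions all start at 0.
    """
    radii, totals = [], []
    contour = neurite_mask.astype(bool)
    while contour.any():
        area = contour.sum()                              # ROI area at this iteration
        radii.append(np.sqrt(area / np.pi))               # equivalent circular radius
        totals.append(cross_section[contour].sum())       # integrated fluorescence inside
        contour = ndimage.binary_erosion(contour)         # shrink the contour pixel by pixel
    return np.array(radii[::-1]), np.array(totals[::-1])  # ordered from center outwards
```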
Planned revision:
- We will revise the text to include the suggested changes and add a brief statement to the methods section to explain why the curves start at 0.
*It should be stressed in the text, that the modification-specific antibodies only detect modified microtubules. Thus, in figure 3, in the absence of total tubulin staining, it is possible that there are more microtubules than revealed with the anti-acetylated tubulin antibody. A possible explanation should be discussed. *
We thank the reviewer for highlighting this point and will adjust the text accordingly.
Planned revision:
- We will clarify this in the revised text by adding the following sentence: In addition, given that we specifically stained for acetylated tubulin (a marker for stable microtubules), it is possible that other non-acetylated and thus perhaps dynamic microtubules are also associated with the centrioles.
*OPTIONAL: As discussed in the manuscript's discussion, testing some of the proposed mechanisms regulating microtubule cytoskeleton architecture in development (motors, crosslinkers, severing enzymes) would significantly increase the impact of this study. Exploring these phenomena in a more complex system (3D culture, brain explants) closer to the intricate character of the brain than the 2D dissociated neurons would be a real game-changer. *
We agree that sorting out the mechanisms driving microtubule reorganization would be very exciting. However, this will require many more experiments, as well as the recruitment and training of a new PhD student or postdoc, given that the first author has left the lab. Therefore, we feel this falls outside the scope of the current work, which carefully maps the microtubule organization during neuronal development and demonstrates the active polarity reversal of stable microtubules during this process.
Minor comments
-
*It could be useful to write on each panel whether the images were obtained with expansion or motor-PAINT technique: the rendering of the figures is very similar, and despite the different colour scheme can be confusing. *
We thank the reviewer for pointing this out.
Planned revision:
- We will incorporate this suggestion when revising our manuscript.
Reviewer #3 (Significance)
This manuscript provides insights into the establishment of the microtubule cytoskeleton architecture specific to highly polarised neurons. The imaging techniques used, improved from the ones published before (motor-PAINT: Kapitein lab in 2017, U-ExM: Hamel/Guichard lab in 2019), yield beautiful and convincing data, marking an improvement compared to previous studies.
*However, the novelty of some of the findings is relatively limited. Indeed, a mixed microtubule orientation in very young neurites has already been shown (Yau et al, 2016, co-authored by Kapitein), as has the separate distribution of acetylated and tyrosinated / stable and labile / plus-end-out and plus-end-in microtubules in dendrites (Tas, ..., Kapitein, 2017). *
*On the other hand, observation of the live movement of microtubules with the resolution allowing to see single (stable) microtubules is new and important. It provides an exciting setup to explore the mechanisms of polarity reversal of microtubules in neuronal development and it is regrettable that these mechanisms have not been explored further. *
*The association of (stable) microtubules with the centrioles is also a technically challenging analysis. Despite not being able to visualise all microtubules, but only acetylated ones, these data are novel and exciting. *
*This work will be of interest for neuronal cell biologists, developmental neurobiologists. The impact would be larger if the mechanistic questions were addressed using these sophisticated methodologies. *
*This reviewer's expertise is the regulation of the microtubule cytoskeleton and its impact on molecular, cellular and organism levels. *
-
blog.sens-public.org blog.sens-public.org
-
an unambiguous definition of what it is to think
Not necessarily "thinking", but "playing" - Turing abandons the former in his text ("The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion.", p. 442):
“We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’” (p. 434).
What matters in practice (not just for Turing: for us today as well) is this: can the machine do what we want it to do (play, talk, write without mistakes, in short, live up to our expectations of intelligence)?
May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection. (p. 435)
-
- Apr 2025
-
www.reddit.com www.reddit.com
-
embodyingcyberspace.com embodyingcyberspace.com
-
One moment you’re flying high and everything is as clear as it can be. You’re experiencing a surge of unadulterated pleasure, a burst of sheer delight. But then, all of a sudden, the situation flips and you’re down in the dumps feeling the nagging necessity for another game, another sweet, another hit, another shot.
for - adjacency - cyber ghosts - hungry ghosts - the hunger is temporarily satisfied, but the hunger pangs start again - cycling in samsara - consumerism - David Loy - inability for consumerism to fill our sense of lack - https://hyp.is/WuaFQCKZEfCFA-eSTwduzg/www.youtube.com/watch?v=yWRA4cUCid8
-
-
boffosocko.com boffosocko.com
-
Ispinix: Synthesizing Data for Holistic iGaming Understanding
Ultimately, the core value of Ispinix lies in its synthesis of diverse, high-quality iGaming information. It brings together early access demos, exhaustive technical specifications, broad provider coverage, detailed reviews, and insights into various game types (slots, crash, slingo) onto one accessible platform. This holistic approach caters perfectly to the needs of experienced players seeking strategic depth and industry professionals requiring comprehensive market intelligence. By meticulously curating and presenting data from developers like Endorphina, NetEnt, and Relax Gaming, it serves as an indispensable resource for anyone aiming for a deep, nuanced understanding of the online casino gaming landscape.
-
At www.bookmaker-ratings.com, we understand the excitement of sports often lies in the details. Our comprehensive sports recaps detail the most significant events, match outcomes, and turning points in competitions. These recaps not only inform you about what happened but also provide context that enhances your appreciation of the game. With expert commentary, we analyze key moments and player contributions, helping you understand the dynamics at play in each match and tournament. Join us for thoughtful commentary and an enriched viewing experience!
-
-
www.linkedin.com www.linkedin.com
-
navigating the inner terrain of long-game work.
long game work
-
-
dev.to dev.to
-
How I designed an abuse-resistant, fault-tolerant, zero cost, multiplayer online game
for trystero

-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
What was accurate, inaccurate, or surprising about your ad profile?
My ad profile was pretty accurate, except that it thinks I'm in a relationship when I'm currently not. I went to the page about my recent ad topics, and that part is more inaccurate: it has been showing me ads about gaming, yet I don't usually game. I'm just curious how the system would assume that I'm in a relationship and that I'm into gaming. I would hope that Google could provide more information about what data it has collected to reach those conclusions.
-
What was accurate, inaccurate, or surprising about your ad profile? How comfortable are you with Google knowing (whether correctly or not) those things about you?
It is mostly inaccurate, because I'm an international student with fairly random online shopping habits. Sometimes I choose jewelry for a friend's birthday gift, and sometimes I buy computer gear to make my PC games run more smoothly. I feel comfortable with Google knowing this. I'm generally not interested in most of the advertisements I see while studying abroad; I usually shop based on my own needs, so I find most ads quite meaningless to me.
-
-
-
Coming to grips with the nature of asynchronicity can prove very demanding for conference and forum participants. All new online learners and e-moderators have some problems with it during their training (or if you allow them to work untrained directly with participants). There is no quick and easy way around this problem. They really do need to experience it for themselves. For instance, participants 'post' contributions to one conference then immediately read messages from others, or vice versa. A participant might read all his or her unread messages in several conferences and then post several responses and perhaps post some topics to start a new theme. In any conference, this reading and posting of messages by a number of individuals can make the sequencing difficult to follow. All the messages are available for any participant (or researcher) to view online, so the sequencing of messages, when viewed after an e-tivity is completed, looks rather more ordered than during the build-up. Yet trying to understand them afterwards is rather like following the moves of a chess or bridge game after it is over. When participants start using e-tivities, this apparent confusion causes a wide range of responses. The twists of time and complexity can elicit quite uncomfortable, confused reactions from participants and severe anxiety in a few. Although many people are now familiar with email, they are not used to the complexity of online conferences, bulletin boards or forums. I suggest that good structure, pacing and clear expectations of participants should be provided, not only for the scaffolding process as a whole but for each e-tivity. In addition, the e-moderator, or his or her delegate, should summarize after 10 or 20 messages.
This part of the text seems fundamental to me and relates to what was covered earlier, if I am not mistaken, in the first week of this course. It connects directly to the critical need for students to become acclimatised to digital environments, stressing that hands-on experience of asynchronicity is essential for overcoming initial difficulties. The excerpt shows how the complexity inherent in online interactions can cause confusion, discomfort or even anxiety, especially in users who are unfamiliar with synchronous and asynchronous digital dynamics.
In this sense, it reinforces the importance of issuing structuring weekly pedagogical guides (GPS) in advance (as indicated in the article we were given to read earlier), which guide students explicitly and in detail on how to navigate and participate in these teaching-learning contexts. These guides should clearly state the expectations regarding participation, the pace of interaction and the kind of contributions expected, so that students feel secure, oriented and able to manage their learning autonomously and effectively in the digital environment. The recommendation expressed in the text that the e-moderator periodically summarise the messages (every 10 or 20 contributions) seems to me a practical and effective example of structuring guidance that makes the discussed content easier to understand and follow, mitigating the difficulties arising from the complexity and asynchronicity characteristic of these digital environments. However, I agree with what a colleague said in the synchronous session about very large student populations: for the e-moderator, unless they can use artificial intelligence agents to help in this context, it will be difficult to manage all the information generated by the students.
António Lista
-
-
theanarchistlibrary.org theanarchistlibrary.org
-
It is, presumably, to preserve the possibility of winning the game that intellectuals insist, in discussing each other, on continuing to employ just the sort of Great Man theories of history they would scoff at in just about any other context: Foucault’s ideas, like Trotsky’s, are never treated as primarily the products of a certain intellectual milieu, as something that emerged from endless conversations and arguments involving hundreds of people, but always, as if they emerged from the genius of a single man (or, very occasionally, woman).
Marxism and the academy seem to operate by Carlyle's "great man" theory of history.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
In the early Internet message boards that were centered around different subjects, experienced users would “troll for newbies” by posting naive questions that all the experienced users were already familiar with. The “newbies” who didn’t realize this was a troll would try to engage and answer, and experienced users would feel superior and more part of the group knowing they didn’t fall for the troll like the “newbies” did. These message boards are where the word “troll” with this meaning comes from.
I find this really interesting as someone with a lot of gaming background. Nowadays, trolling has more negative implications than it originally did. Most people who are considered to be "trolling" are doing it to spark a reaction out of other players. It usually involves griefing other players and ruining the game experience, rather than something like posting naive questions on forum boards. It makes me wonder when this change happened and why.
-
These were the precursors to more modern Massively multiplayer online role-playing games (MMORPGS [g15]).
Reading about how MUDs evolved into MMORPGs made me think about how much online gaming has changed over time. I remember playing games like World of Warcraft when I was younger, and it’s interesting to realize that those games came from such simple text-based beginnings. It’s kind of mind-blowing how far we've come in terms of game design and online interaction.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back, and in this video, I want to cover the high-level architecture of the Amazon Translate product. This is another machine learning product available within AWS, and if you need any other knowledge beyond architecture, there will be additional videos following this one. If you only see this video, don't worry, it just means that this is the only knowledge that you need. Now, let's just jump in and get started straight away.
Amazon Translate, as the name suggests, is a text translation service based on machine learning. It translates text from a native language to other languages, one word at a time. The translation process is actually two parts. First, we have the encoder, which reads the source text and then outputs a semantic representation, which you can think of as the meaning of that source text. Remember that the way certain points are conveyed between languages differs. It's not always about direct translation of the same words between two different languages. So, the encoder takes the native source text and outputs a semantic representation or meaning, and then the decoder reads in that meaning and writes it to the target language.
Now, there's something called an attention mechanism, and Amazon Translate uses this to understand context. It helps decide which words in the source text are the most relevant for generating the target output, ensuring that the whole process correctly translates any ambiguous words or phrases. The product is capable of auto-detecting the source text language. So, you can explicitly state what language the source text is in, or you can allow that to be auto-detected by the product.
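As a rough illustration of how this is typically called (not something covered in the lesson), a single translation request via boto3 might look like the sketch below. The text and language codes are placeholders; setting SourceLanguageCode to "auto" asks the service to detect the source language for you.

```python
# A minimal sketch (not from the lesson) of calling Amazon Translate with boto3.
import boto3

translate = boto3.client("translate")

response = translate.translate_text(
    Text="Bonjour tout le monde",       # placeholder source text
    SourceLanguageCode="auto",           # let the service auto-detect the source language
    TargetLanguageCode="en",             # translate into English
)

print(response["TranslatedText"])        # the translated text
print(response["SourceLanguageCode"])    # the language the service detected
```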
In terms of some of the use cases for Amazon Translate, it can offer a multilingual user experience. All the documents within businesses are generally going to be stored in the main language of that business. However, this allows you to offer those same documents, such as meeting notes, posts, communications, and articles, in all the languages that staff within your business speak. This can make it much easier for organizations with offices in different countries to operate more efficiently. This also means that you can offer things like emails, in-game chat, or customer live chat in the native language of the person you're communicating with, which can increase the operational efficiency of your business processes. It also allows you to translate incoming data, such as social media, news, and communications, from the language they're written in into the native language of the staff interpreting those incoming communications.
More commonly, Amazon Translate can also offer language independence for other AWS services. For example, you might have services such as Comprehend, Transcribe, and Polly, which operate on information, and Translate offers the ability for these services to operate in a language-independent way. It can also be used to analyze data stored in S3, RDS, DynamoDB, and other AWS data stores.
Generally, with this product, you're going to find that it's used more commonly as an integration product. So, rather than using it directly, it's more common to see it integrated with other AWS services, other applications (including ones that you develop), and other platforms. So, in the exam, if you see any type of scenario that requires text-to-text translation, think Translate. If you see any scenario that might need text-to-text translation as part of a process, Translate can form part of that process. You might want to translate one language into another and then speak that language, or you might want to take audio in one language, output text, and then translate that to a different textual language.
Keep in mind that Translate is often used as a component of a business process. So, really keep that in mind. It's not always used in isolation. Now, with that being said, that is everything I wanted to cover in this video. Go ahead and complete the video, and when you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this video, I want to talk about another really cool AWS product called Amazon Rekognition - that's Recognition with a K. Now let's jump in and get started because I'm actually super excited to step through this product and how it works. Rekognition is a deep learning-based image and video analysis product. Deep learning is a subset of machine learning. So this is to say that Rekognition can look at images or videos and intelligently do or identify things based on those images or videos.
Specifically, it can identify objects in images and videos such as cats, dogs, chickens, or even hot dogs. It can identify specific people, text—for example, license plates—activities, and what people are doing in images and videos. It can help with content moderation, so identifying if something is safe or not. It can detect faces, analyze faces for emotion and other visual indications, and compare faces, checking images and videos for identified individuals. It can do pathing—so identify movements of people in videos—and an example of this might be post-game analysis on sports games, and much, much more. It's actually one of the coolest machine intelligence services that AWS has, and that's saying a lot.
The product is also pay-as-you-use, with per-image pricing or per-minute-of-video pricing. It integrates with applications via APIs and it's event-driven, so it can be invoked, say, when an image is uploaded to an S3 bucket. But one of its coolest features is that it can analyze live video by integrating with Kinesis video streams. This might include doing facial recognition on security camera footage for security-type situations, distinguishing between the owner of a property and somebody who's attempting to commit a crime. All in all, it's a super flexible product.
Now generally, for all of the AWS exams, you will need to have a basic understanding of the architecture. There are some AWS exams—for example, machine learning—where you might need additional understanding. And if you're studying a course where that additional understanding is required, there will be follow-up videos. In general, though, it's only a high-level architecture understanding. And one example architectural flow might look something like this: An image containing Whiskers and Woofy is uploaded to S3. Now we've configured S3 events, and so this invokes a Lambda function. The Lambda function calls Rekognition to analyze the image. It returns the results, and then the Lambda function stores the metadata together with a link to the image into DynamoDB for further business processes.
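As a sketch only (the lesson itself doesn't include code), that event-driven flow might look roughly like the Lambda handler below; the bucket, the "image-metadata" table name, and the item attribute names are hypothetical placeholders.

```python
# A minimal sketch of the flow described above: an S3 upload event invokes Lambda,
# which calls Rekognition and stores the resulting labels in DynamoDB.
import boto3

rekognition = boto3.client("rekognition")
table = boto3.resource("dynamodb").Table("image-metadata")  # hypothetical table name


def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Ask Rekognition to label the uploaded image directly from S3
        result = rekognition.detect_labels(
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            MaxLabels=10,
            MinConfidence=80,
        )
        labels = [label["Name"] for label in result["Labels"]]

        # Store the labels together with a link back to the image
        table.put_item(Item={
            "image": f"s3://{bucket}/{key}",   # hypothetical partition key
            "labels": labels,
        })
```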
To give some context as to the other things that Rekognition can do, let's just take an entirely random selection of images from the internet. Rekognition can identify celebrities such as Iron Man. It can also identify Mike Chambers, although as a machine learning service, I think it might be slightly biased. It can identify text in images or videos, such as license plates on cars or other internet memes. It can even identify objects, animals, or people in those same memes. For faces specifically, it can identify emotions or other attributes. So, for example, identifying that this random doctor is male, currently has his eyes open, and is looking very, very serious rather than being happy in any way.
So that's Rekognition. If you have questions in the exam which need general analysis performing on images or videos for content, emotion, text, activities—anything I've mentioned in this lesson—then you should default to picking Rekognition. It's probably going to be the correct answer. Now with that being said, that is everything I wanted to cover in this video. Go ahead and complete this video, and when you're ready, I look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back, and in this lesson I want to cover the ElastiCache product. This is one which features relatively often in all of the associate AWS exams and fairly often at a professional level. It’s a product that you’ll need to understand if you’re delivering high performance applications. It’s one of a small number of products which allows your applications to scale to truly high-end levels of performance. So let’s jump in and take a look.
So what is ElastiCache? Well, at a high level it’s an in-memory database for applications which have high-end performance requirements. If you think about RDS, that’s a managed product which delivers database servers as a service. Databases generally store data persistently on disk. Because of this, they have a certain level of performance. No matter how fast the disk is, it’s always going to be subject to performance limits. An in-memory cache holds data in memory, which is orders of magnitude faster, both in terms of throughput and latency. But it’s not persistent and so it’s used for temporary data. ElastiCache provides two different engines, Redis and Memcached, and both of them are delivered as a service.
Now, in terms of what you’d use the product for—well, if you need to cache data for workloads which are read heavy (read heavy being the key term that you need to remember at this point), or if you have low latency requirements, then using ElastiCache is a viable option. For read-heavy workloads, ElastiCache can reduce the workloads on a database. And this is important because databases aren’t the best at scaling, especially relational databases. Databases are also expensive relative to the data that they store and the performance that they deliver. So for heavy reads, offloading these to a cache can really help reduce costs—so it’s cost-effective. Remember that for the exam.
ElastiCache can also be used as a place to store session data for users of your application, which can help to make your application servers stateless. This is used in most highly available and elastic environments—those that use load balancers and auto scaling groups. But for any systems which need to be fault tolerant, where users can’t notice if components fail, then generally everything needs to be stateless, and so ElastiCache can help with this type of architecture.
Now, one really important thing to understand for the exam is that using ElastiCache means that you need to make application changes. It’s not something that you can just use. Your application needs to understand a caching architecture. It needs to know to use a cache to check for data, and if data isn’t in the cache, then it needs to check the underlying database. Applications need to be able to write data and understand cache invalidation. This functionality doesn’t come for free, and so if you’re answering any exam questions which state no application changes, then ElastiCache probably won’t be a suitable solution.
So let’s have a look visually at how some of these architectures work. Architecturally, let’s say that you have an application—obviously the Categorum application—and this application is being accessed by a customer, in this case, Bob. The application uses Aurora as its back-end database engine and it’s been adjusted to use ElastiCache. The first time that Bob queries the application, the Categorum application will check the cache for any data. It won’t be there though, because it’s the first time it’s been accessed, and so this will be a cache miss. That means the application will need to go to the database for the data, which is slower and more expensive. When it’s accessed this data for the first time, the application will write the data it just queried the database for into the cache. If Bob queries the same data again, then it will be retrieved directly from ElastiCache and no database reads are required. This is known as a cache hit. It will be faster and cheaper because the database won’t be used for the query.
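As a rough sketch of that cache-aside pattern (not part of the lesson), the application logic might look like the Python snippet below using redis-py; the endpoint, the key format, and the db.query_cat call are all hypothetical placeholders.

```python
# A minimal sketch of the cache-aside flow just described: check the cache first,
# fall back to the database on a miss, then write the result through to the cache.
import json
import redis

cache = redis.Redis(host="my-cache.xxxxxx.cache.amazonaws.com", port=6379)  # placeholder endpoint


def get_cat_profile(cat_id, db):
    key = f"cat:{cat_id}"

    cached = cache.get(key)
    if cached is not None:                        # cache hit: no database read needed
        return json.loads(cached)

    profile = db.query_cat(cat_id)                # cache miss: hypothetical database call
    cache.setex(key, 300, json.dumps(profile))    # populate the cache with a 5-minute TTL
    return profile
```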
Now with this small-scale interaction, it’s hard to see the immediate architectural benefit of using ElastiCache. But what if there are more users? What if instead of one Bob, we have many Bobs? Assuming the patterns of data access are the same or similar, then we’ll have a lot more cache hits and a much smaller increase in the number of database reads. This will allow us to scale our application and accept more customers. If the application data access patterns of our user base is similar at scale, then it will mean that most of the increased load placed on the application will go directly onto ElastiCache. We won’t have a proportional increase in the number of direct accesses against our database. This will allow us to scale the architecture in a much more cost-effective way than if everything used direct database access. We can deliver much higher levels of read workload and offer performance improvements at scale. This is a caching architecture and a very typical architecture that ElastiCache will be used for.
Let’s take a look at another use case, and this is using the product to help us with session state data for our users. Let’s say again that we’re looking at our Categorum application, but now it’s running within an auto scaling group with three EC2 instances and a load balancer. It’s using Aurora for the persistent data layer. Again, we have a user of our application—Bob. The application I’m demoing in this part of the lesson is actually the fault tolerant extreme edition of Categorum. Even when components of the system fail, the application can continue operating without disrupting our user Bob. The way it does this is by using ElastiCache to store session data. This means that when Bob first connects to any of the application instances, his session data is written by that instance into ElastiCache. It’s kept updated if Bob purchases any limited edition cat prints. So the first time Bob connects to any of the instances, that instance writes and maintains the session data for Bob using ElastiCache.
If at any point the application needs to deal with the failure of an instance—where previously the session data would be lost and the application functionality disrupted—the Categorum extreme edition can tolerate this. If this occurs, Bob’s connection is moved to another instance by the load balancer, and his experience continues uninterrupted because the session data is loaded by the new instance from ElastiCache. This is another common use case for ElastiCache: storing user session data externally to application instances, allowing the application to be built in a stateless way. This in turn allows it to go beyond simple high availability and move towards being a fault tolerant application.
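A minimal sketch of that session-state idea (again, not from the lesson) might look like this with redis-py; the endpoint and key naming are placeholders, and the 30-minute expiry is an arbitrary choice.

```python
# Storing session data in Redis so any instance behind the load balancer can
# rebuild a user's session after another instance fails.
import redis

cache = redis.Redis(host="my-cache.xxxxxx.cache.amazonaws.com", port=6379)  # placeholder endpoint


def save_session(session_id, data):
    # `data` is assumed to be a flat dict of string values
    cache.hset(f"session:{session_id}", mapping=data)
    cache.expire(f"session:{session_id}", 1800)   # expire after 30 minutes of inactivity


def load_session(session_id):
    raw = cache.hgetall(f"session:{session_id}")
    return {k.decode(): v.decode() for k, v in raw.items()}
```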
ElastiCache commonly helps with either read-heavy performance improvements, cost reductions, or session state storage for users. What’s also important for the exam is that ElastiCache actually provides access to two different engines: Redis and Memcached. It’s important that you understand the differences between these two engines at a high level. So let’s look at that next.
The differences between Memcached and Redis start with what they have in common: both engines offer sub-millisecond access to data. They both support a range of programming languages, so no matter what your application uses, you can use either engine. But they diverge when it comes to the data structures each supports. Memcached supports simple data structures only, such as strings. Redis, on the other hand, supports much more advanced types of data, including lists, sets, sorted sets, hashes, bit arrays, and many more. For example, an application could use Redis to store data related to a game leaderboard and keep a list sorted by rank. Redis can store both the data and the order of the data, significantly improving application performance.
Another difference is that Redis supports replication of data across multiple availability zones, making it highly available by design. This can also be used to scale reads using those replicas. Memcached doesn't support replication in the same way: while you can create multiple nodes and manually shard your data—such as storing certain usernames in one node and others in another—that isn't true replication. Redis supports genuine replication across instances, for both scalability and resilience. So in the exam, if you see questions about multi-availability zones or high availability and resilience, Redis should be your likely answer.
Additionally, Redis supports backup and restore, meaning that a cache can be restored to a previous state after a failure. Memcached does not support that; it lacks persistence. So if the exam question asks about recovery of a cache without data loss, Redis is your best choice. Memcached does have an advantage in that it's multi-threaded by design and can better utilize multi-core CPUs, offering significantly more performance on that front. A notable Redis-only feature is transactions—this is where multiple operations are treated as a single unit, meaning either all succeed or none do. This is useful when strict consistency is required.
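The leaderboard and transaction features mentioned above can be sketched with redis-py as follows; the key and member names are invented purely for illustration.

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Sorted set: Redis keeps both the scores and their order, ideal for leaderboards.
r.zincrby("leaderboard", 250, "bob")
r.zincrby("leaderboard", 900, "julie")
top_three = r.zrevrange("leaderboard", 0, 2, withscores=True)

# Transaction: the queued commands either all succeed or none are applied.
with r.pipeline(transaction=True) as pipe:
    pipe.zincrby("leaderboard", 100, "bob")
    pipe.hincrby("player:bob", "games_played", 1)
    pipe.execute()
```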
Both of these engine types can use a range of instance types and sizes. I’ve included a link in the lesson description that provides an overview of the different resources that can be allocated to both caching engines. You don’t need to know the specifics for the exam, but architecturally, be aware that instance types with more or faster memory will offer an advantage when running ElastiCache.
For the exam, focus on recognizing the types of architectures that benefit from an in-memory cache. These include systems with read-heavy workloads, needs for cost reduction when accessing databases, sub-millisecond access requirements, or systems that require external session state storage. Just remember—it doesn’t come for free. You need to make application changes. This is not a plug-and-play solution for apps that can’t be modified. With that being said, that’s everything I wanted to cover from a theory perspective in this lesson. Go ahead, complete the lesson, and when you’re ready, I’ll look forward to you joining me in the next.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
What do you think is the best way to deal with trolling?
While I partially agree with Film Crit Hulk that "skilled moderation" should be utilized in the disempowerment of online trolls, I remain of the persuasion that the best way someone can deal with a troll is to report them (or directly contact an administrator), block them, and not respond to them. I've had my fair share of experiences with trolls in the past, and it took me a minute to figure out that it's easy to preserve your own mental well-being when you detach yourself from online interactions and don't engage in the mind games the troll wishes to play. The amusement they derive from the reaction you give them is intoxicating and addictive; you wouldn't give an alcoholic more liquor, so don't give a troll your time.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Trolling is when an Internet user posts inauthentically (often false, upsetting, or strange) with the goal of causing disruption or provoking an emotional reaction. When the goal is provoking an emotional reaction, it is often for a negative emotion
This section reminds me of times when I’ve seen people post obviously fake or mean comments just to make others upset, especially in livestream chats or game forums. I used to think they were just joking, but now I realize it was trolling. It’s interesting to learn that some people do it to feel powerful or smart. I’ve even seen people try to “troll the newbies,” which I didn’t know was an actual term before reading this.
-
-
www.reddit.com www.reddit.com
-
https://www.reddit.com/r/typewriters/comments/1k2rus1/typewriter_in_singularity/
Dirty/rusted typewriter seen in the video game Singularity
-
-
www.reddit.com www.reddit.com
-
https://www.reddit.com/r/typewriters/comments/1k322dc/mafia_2002/
Typewriter in video game Mafia (2002).
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to talk about CloudHSM. Now this is a product which is similar to KMS in terms of the functionality which it provides, in that it's an appliance which creates, manages and secures cryptographic material or keys. Now there are a few key differences and you need to know these differences because it will help you decide on when to use KMS and when to use CloudHSM. And you might face an exam question where you need to select between these two. So let's jump in and get started.
Now I promised you at the start of the course I wouldn't use facts and figures in lessons unless absolutely required. You shouldn't have to remember lots of different facts and figures unless they influence the architecture. Now this unfortunately is going to be one of the lessons where I do have to introduce some keywords that you simply need to remember. Because in this lesson the detail, the difference between CloudHSM and KMS really matters.
Now let's start by quickly talking about KMS. KMS is the key management service within AWS. So it's used essentially for encryption within AWS and it integrates with other AWS products. So it can generate keys, it can manage keys, other AWS services integrate with it for their encryption. But it has one security concern, at least if you operate in a really demanding security environment. And that's that it's a shared service. While your part of KMS is isolated, under the covers you're using a service which other accounts within AWS also use. What's more, while the permissions within AWS are strict, AWS do have a certain level of access to the KMS product. They manage the hardware and the software of the systems which provide the KMS product to you as a customer.
Now behind the scenes KMS uses what's called a HSM which stands for Hardware Security Module. And these are actually industry standard pieces of hardware which are designed to manage keys and perform cryptographic operations. Now you can actually run your own HSM on-premise. Cloud HSM is essentially a true single tenant HSM that's hosted within the AWS cloud. So if you hear the term HSM mentioned, it could refer to both Cloud HSM which is hosted by AWS or an on-premise HSM device.
Now specifically focusing on Cloud HSM, AWS provision it and they're responsible for hardware maintenance. But they have no access to the part of the unit where the keys are stored and managed. It's actually a physically tamper resistant piece of hardware. So it's not something that they can gain access to. Generally if you as the customer lose access to a HSM, that's it, game over. You can reprovision them but there's no easy way to recover data.
Now there's actually a well-known standard for these cryptographic modules. It's called the Federal Information Processing Standard Publication 140-2. You can easily determine the capability of any HSM modules based on their compliance with this standard. And I've included a link in the lesson description with additional information. But Cloud HSM is FIPS 140-2 Level 3 compliant and it's the Level 3 which really matters in the context of this lesson. KMS in comparison is overall 140-2 Level 2 compliant and some of the areas of the KMS product are also compliant with Level 3.
Now this matters. This is really important. If you see an exam question or if you're in a real world production situation which requires 140-2 Level 3 overall, then you have to use Cloud HSM or your own on-premises HSM device. And that's a fact that you really need to remember for the exam.
Another important distinction between KMS and Cloud HSM is how you access the product. With KMS, all operations are performed with AWS standard APIs and all permissions are also controlled with IAM permissions. Now Cloud HSM isn't so integrated with AWS and this is by design. With Cloud HSM, you access it with industry standard APIs. Now examples of this are PKCS 11, the JCE extensions or the CryptoNG extensions. And I've highlighted the keywords that you should try to build up an association with Cloud HSM. So if you see any of these keywords listed in the exam or in production situations, then you know you need a HSM appliance, either on-premise or Cloud HSM hosted by AWS.
Now it used to be that there was no real overlap between Cloud HSM and KMS. They were completely different. But more recently, you can use a feature of KMS called a custom key store. And this custom key store can actually use Cloud HSM to provide this functionality, which means that you get many of the benefits with Cloud HSM together with the integration with AWS. So when you're facing any exam questions, you still should be able to look for these keywords to distinguish between situations when you use KMS versus Cloud HSM.
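As a hedged sketch of the custom key store idea, the snippet below wires KMS to a CloudHSM cluster using boto3; the cluster ID, certificate file, and password are placeholders and not values from this lesson.

```python
import boto3

kms = boto3.client("kms")

# Trust anchor certificate created when the CloudHSM cluster was initialised.
with open("customerCA.crt") as f:
    trust_anchor = f.read()

store = kms.create_custom_key_store(
    CustomKeyStoreName="categorum-hsm-store",
    CloudHsmClusterId="cluster-1234567890ab",   # placeholder cluster ID
    TrustAnchorCertificate=trust_anchor,
    KeyStorePassword="kmsuser-password",        # placeholder credential
)

# Keys created this way are stored in the CloudHSM cluster, but remain usable
# through the normal KMS API and its AWS service integrations.
key = kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId=store["CustomKeyStoreId"],
    Description="KMS key backed by CloudHSM",
)
```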
Now just to summarize before we move on from this screen, I want you to focus on doing your best to remember all of the three key points that are highlighted with the exam power-up icon. If you can remember those, then you should be in a really good position to determine whether to use KMS or Cloud HSM within exam questions. Now I want to look at the architecture of Cloud HSM as a product, and I think it's best that we do that visually.
Now architecturally, Cloud HSMs are not actually deployed inside a VPC that you control. They're deployed into an AWS managed Cloud HSM VPC that you have no visibility of. So architecturally, this is how that looks. So on the left, we've got a customer managed VPC. On the right, we've got the Cloud HSM VPC that's managed by AWS. We're using two availability zones, and inside the customer managed VPC, we've gone ahead and created two private subnets, one in availability zone A and one in availability zone B.
Now inside the Cloud HSM VPC, to achieve high availability, you need to deploy multiple HSMs and configure them as a cluster. So a HSM by default is not a highly available device. It's a physical network device that runs within one availability zone. So in order to provide a fully highly available system, we need to create a cluster and have at least two HSMs in that cluster, one of them in every availability zone that you use within a VPC.
Now once HSM devices are configured to be in a cluster, then they replicate any keys, any policies, or any other important configuration between all of the HSM devices in that cluster. So that's managed by default, by the appliances themselves. That's not something that you need to configure. So the HSMs operate from this AWS managed VPC, but they're injected into your customer managed VPC via elastic network interfaces. So you get one elastic network interface for every HSM that's inside the cluster injected into your VPC. Once these interfaces have been injected into your customer managed VPC, then any services which are also inside that VPC can utilize the HSM cluster by using these interfaces. And if you want to achieve true high availability, then logically instances will need to be configured to load balance across all of the different interfaces.
Now also, in order to utilize the cloud HSM devices, a client needs to be installed on the EC2 instances which are going to be configured to access the cloud HSM. This is a background process known as the cloud HSM client, and it needs to be installed on the EC2 instance in order for it to access the HSM appliances. And then once the cloud HSM client is installed, you can utilize industry standard APIs such as PKCS#11, JCE and CryptoNG to access the HSM cluster.
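Here is a rough illustration of what accessing the cluster through PKCS#11 can look like, using the third-party python-pkcs11 package. The library path, PIN format, and key label are assumptions about a typical CloudHSM client installation rather than details from this lesson.

```python
import pkcs11

# Path installed by the CloudHSM PKCS#11 client (assumed; check your install).
lib = pkcs11.lib("/opt/cloudhsm/lib/libcloudhsm_pkcs11.so")
token = next(lib.get_tokens())  # assume a single token is exposed

# The "crypto user : password" PIN format below is an assumption.
with token.open(rw=True, user_pin="crypto_user:password") as session:
    # Generate a 256-bit AES key inside the HSM; the key material never leaves it.
    key = session.generate_key(pkcs11.KeyType.AES, 256, label="app-data-key")
    iv = session.generate_random(128)
    ciphertext = key.encrypt(b"limited edition cat print order", mechanism_param=iv)
```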
Now a really important thing to understand about cloud HSM, because this is a distinguishing factor between it and KMS, is that while AWS do provision the HSM, they're actually partitioned and they're tamper resistant. So AWS have no access to the area of the HSM appliances which store the keys. Only you can control these. You manage them, you're responsible for them. Now AWS can perform things like software updates and other maintenance tasks, but these don't take place on the area of the HSM which is used to perform cryptographic operations. Only you as an administrator or anyone that you delegate that to has the ability to interact with the secure area of the HSM devices.
Now before we finish this lesson, there are a few more things that I want to cover. So these are points that I think you should be aware of. So some of these are use cases, some of these are limitations that will help you select between using cloud HSM and using something like KMS.
So first, by default there's no native integration between cloud HSM and any AWS products. So one example of this is that you can't use cloud HSM in conjunction with S3 server-side encryption. That's not a capability that it has. Cloud HSM is not accessed using AWS standard APIs at least by default and so you can't integrate it directly with any AWS services. Now you could, for example, use cloud HSM to perform client-side encryption. So if you've got an encryption library on a particular local machine and you want to encrypt objects before you upload them to S3, then you can use it to perform that encryption on the object before you upload it to the S3 service. But this is not integrated with S3. You're just using it to perform encryption on the objects before you provide them to S3.
Now a cloud HSM can also be used to offload SSL or TLS processing from web servers. And if you do that, then the web servers benefit in two ways: first, they don't have to perform those cryptographic operations themselves, and second, the cloud HSM is a custom designed piece of hardware that accelerates those processes. So it's much more economical and efficient to have a cloud HSM device performing those cryptographic operations versus doing it on a general purpose EC2 instance. So that's something that a cloud HSM can do for you, but KMS natively cannot.
Now other products that you might use inside AWS can also benefit from cloud HSM, products which are able to interact using these industry standard APIs. And this includes products like Oracle databases. So they can utilize cloud HSM for performing transparent data encryption or TDE. So this is a method that Oracle has for encrypting data that it manages on your behalf. And it can utilize a cloud HSM device to perform the encryption operations and to manage the keys. Now because a cloud HSM device is something that's entirely managed by you, you're the only entity that initially starts off with access to the encryption materials, so the keys. It means that if you use a cloud HSM and integrate it with an Oracle database, then you're doing so in a way which means that AWS have no ability to decrypt that data. And so if you're operating in a highly restricted regulatory environment where you really need to use strong encryption and verify exactly who has the ability to perform encryption operations, then generally cloud HSM is an ideal product to support that.
And then lastly in a similar way, cloud HSM can also be used to protect the private keys for a certificate authority. So if you're running your own certificate authority, you can utilize cloud HSM to manage the private keys for that certificate authority.
Now just to summarize at this point, the overall theme is that for anything which isn't specific to AWS, for anything which expects to have access to a hardware security module using industry standard APIs, then the ideal product for that is cloud HSM. For anything that uses industry standards, or anything that has to integrate with products which aren't AWS, cloud HSM is ideal. For anything which does require AWS integration, then natively cloud HSM isn't suitable. If FIPS 140-2 Level 3 is mentioned, then it's cloud HSM. If integration with AWS is mentioned, then it's probably going to be KMS. If you need to utilize industry standard encryption APIs, then it's likely to be cloud HSM.
Now that's everything that we need to cover. I just wanted you to be able to handle any curveball HSM style questions that you might encounter in the exam. So thanks for watching, go ahead and complete this video and then when you're ready, I'll look forward to you joining me in the next one.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to cover Amazon Kinesis Data Analytics. This is a real-time data processing product, and it's critical that you understand its features together with when you should and shouldn't use it for the exam. Before I start talking about Kinesis Data Analytics, I want to position the product relative to everything else. Kinesis data streams are used to allow the large-scale ingestion of data into AWS and the consumption of that data by other compute resources known as consumers. Kinesis Data Firehose provides delivery services. It accepts data in and then delivers it to supported destinations in near real-time and it can also use Lambda to perform transformation of that data as it passes through. Kinesis Data Analytics is a service that provides real-time processing of data which flows through it using the structured query language known as SQL. Data inputs at one side, queries run against that data in real-time, and then data is output to destinations at the other.
The product ingests from either Kinesis data streams or Kinesis Firehose and can optionally pull in static reference data from S3, but I'll show you how that works visually in a moment. Now after data is processed, it can be sent on in real-time to destinations, and currently, the supported destinations are Firehose and indirectly any of the destinations which Firehose supports. But keep in mind, if you're using Firehose, then the data becomes near real-time rather than real-time. The product also directly supports AWS Lambda as a destination, as well as Kinesis Data Streams, and in both of those cases, the data delivery is real-time. So you only have near real-time if you choose Firehose or any of those indirect destinations. If you use Lambda or Kinesis Data Streams, then you keep the real-time nature of the data. Conceptually, the product fits between two streams of data: input streams and output streams, and it allows you, in real-time, to use SQL queries to adjust the data from the input to the output.
Now let's look at it visually because it will be easier to see how all of the various components fit together. So on the left, we start with the inputs, the source streams, and this can be Kinesis Streams or Kinesis Firehose. In the middle, we create a Kinesis Analytics application; this is a real-time product, and I'll explain what that means in a second. The Kinesis Analytics application can also take data in from a static reference source, an S3 bucket, and then the Kinesis Analytics application will output to destination streams on the right, so Kinesis Streams or Kinesis Firehose. Remember, all of these are external sources or destinations; they exist outside of Kinesis Data Analytics. Kinesis Data Analytics doesn't actually modify the sources in any way. What actually happens is this: inside the analytics application, you define sources and destinations known as inputs and outputs.
So conceptually, what happens is for the input side, objects called in-application input streams are created based on the inputs. Now you can think of these like normal database tables, but they contain a constantly updated stream of data from the input sources, the actual Kinesis Streams or Firehose. These exist inside the analytics application, but they always match what's happening on the streams which are outside of the application. Now the reference table is a table which matches data contained within an S3 bucket and it can be used to store static data which can enrich the data coming in over the streams. Consider the example of a popular online game where a Kinesis Stream has all of the data about player scores and player activities. In this particular case, the reference table might contain data on player information which can augment the stuff coming in via the stream. So if the stream only contains the raw score and activity data, then the reference data will contain other metadata about those players, so maybe player names, certain items the player has, or awards, and these can all be used to enrich the data that's coming in real-time from Kinesis Streams.
Now the core to the Kinesis Analytics application is the application code, and this is coded using the structured query language, or SQL. It processes inputs and it produces outputs. So in this case, it operates on data in the in-application input stream table and the reference table, and any output from the SQL statement is added to in-application output streams. Again, think of these like tables which exist within the Kinesis Analytics application; only these tables map onto real external streams, so any data that's outputted into those tables by the Kinesis Analytics application is entered onto the Kinesis Stream or Kinesis Firehose, and then these will feed into any consumers of the stream or destinations of the firehose. Additionally, any errors generated by the SQL query can be added to an in-application error stream, and all of this happens in real time. So data is captured from the source streams via the in-application input stream, the virtual tables. It's manipulated by the analytics application using the SQL query, and then stored into the in-application output streams which put that data into either the external Kinesis Stream or external Kinesis Firehose.
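To make the idea of application code concrete, below is an illustrative pump-style SQL query of the kind Kinesis Data Analytics runs, wrapped in a Python string purely to keep the examples in one language. The stream, pump, and column names are assumptions, although SOURCE_SQL_STREAM_001 is the usual default name for the first in-application input stream.

```python
# The analytics application's code is plain SQL; it is embedded in a Python
# string here only for illustration (for example, to pass as ApplicationCode).
APPLICATION_CODE = """
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
    player_name VARCHAR(32),
    score       INTEGER
);

-- A pump continuously inserts the query results into the output stream,
-- which maps onto the external Kinesis stream or Firehose delivery stream.
CREATE OR REPLACE PUMP "OUTPUT_PUMP" AS
    INSERT INTO "DESTINATION_SQL_STREAM"
    SELECT STREAM ref."player_name", src."score"
    FROM "SOURCE_SQL_STREAM_001" AS src
    LEFT JOIN "PLAYER_REFERENCE" AS ref      -- static reference data from S3
        ON src."player_id" = ref."player_id";
"""
```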
All of this just to stress it again happens in real time, and if the output data is delivered into a Kinesis Stream, then it stays real-time. If the output data is delivered into a Kinesis Firehose, then it becomes near real-time, delivering to all of those supported destinations. Now, you only pay for the data processed by the application, but it is not cheap, so you should only use it for scenarios which really fit this type of need. Before we finish this lesson, let's talk about the scenarios where you might choose to use Kinesis Data Analytics. There are some particular use cases or scenarios which fit using Kinesis Data Analytics. At a high level, this is anything which uses streaming data that needs real-time SQL-based processing, so things like time series analytics, maybe election data and e-sports, things like real-time dashboards for games, high score tables or leaderboards, and even things like real-time metrics for security and response teams. Anything which needs real-time stream-based SQL processing is an ideal candidate for Kinesis Data Analytics.
Now, I mentioned in the previous lesson that Data Firehose can also support transformation of data using Lambda, but remember the key differentiator is that Data Firehose is not a real-time product, and using Lambda you're restricted to relatively simple manipulations of data. Using Kinesis Data Analytics, you can create complex SQL queries and use those queries to manipulate input data into whatever format you want for the output data. So it has a lot more in terms of features than Data Firehose, so if you're dealing with any exam questions which need really complex manipulation of data in real-time, then Kinesis Data Analytics is the product to choose. Okay, so with that being said, that's everything that I wanted to cover in this theory lesson. Go ahead and complete the lesson, and then when you're ready, I look forward to you joining me in the next lesson.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back, and in this lesson, I want to cover application and network load balancers in a little bit more detail. It's critical for the exam that you understand when to pick application load balancers and when to pick network load balancers, as they're used for very different situations. Now, we do have a lot to cover, so let's jump in and get started.
I want to start by talking about consolidation of load balancers. Historically, when using classic load balancers, you connected instances directly to the load balancer or you integrated an auto scaling group directly with that load balancer, an architecture which looked something like this: a single domain name, categor.io, using a single classic load balancer with an attached single SSL certificate for that domain, and then an auto scaling group attached to that, with the classic load balancer distributing connections over those instances.
The problem is that this doesn't scale because classic load balancers don't support SNI, and you can't have multiple SSL certificates per load balancer, meaning every single unique HTTPS application that you have requires its own classic load balancer, which is one of the many reasons that classic load balancers should be avoided. In this example, we have Catergram and Dogagram, both of which are HTTPS applications, and the only way to use these is to have two different classic load balancers.
Compare this to the same application architecture, with both applications—Catergram and Dogagram—this time using a single application load balancer. This one handles both applications, and we can use listener-based rules, which I’ll talk about later in the lesson, where each of these listener-based rules can have an SSL certificate handling HTTPS for both domains. Then we can have host-based rules which direct incoming connections at multiple target groups that forward these on to multiple auto scaling groups, which is a two-to-one consolidation—halving the number of load balancers required to deliver these two different applications.
But imagine how this would look if we had a hundred legacy applications and each of these used a classic load balancer; moving from version one to version two offers significant advantages, and one of those is consolidation. So now I just want to focus on some of the key points about application load balancers—things which are specific to the version two or application load balancer.
First, it's a true layer seven load balancer and it's configured to listen on either HTTP or HTTPS protocols, which are layer seven application protocols that an application load balancer understands and can interpret information carried using both. Now, the flip side is that the application load balancer can't understand any other layer seven protocols—so things such as SMTP, SSH, or any custom gaming protocols are not supported by a layer seven load balancer like the application load balancer, and that's important to understand.
Additionally, the application load balancer has to listen using HTTP or HTTPS listeners; it cannot be configured to directly listen using TCP, UDP, or TLS, and that does have some important limitations and considerations that you need to be aware of, which I’ll talk about later on in this lesson.
Because it's a layer seven load balancer, it can understand layer seven content—so things like the type of the content, any cookies used by your application, custom headers, user location, and application behavior—meaning the application load balancer is able to inspect all of the layer seven application protocol information and make decisions based on that, something that the network load balancer cannot do. It has to be a layer seven load balancer, like the application load balancer, to understand all of these individual components.
An important consideration about the application load balancer is that any incoming connections—HTTP or HTTPS (and remember HTTPS is just HTTP transiting using SSL or TLS)—in all of these cases, whichever type of connection is used, are terminated on the application load balancer. This means that you can't have an unbroken SSL connection from your customer through to your application instances—it’s always terminated on the load balancer, and then a new connection is made from the load balancer through to the application.
This matters to security teams, and if your business operates in a strict security environment, this might be very important and, in some cases, can exclude using an application load balancer. It can't do end-to-end unbroken SSL encryption between a client and your application instances, and it also means that all application load balancers which use HTTPS must have SSL certificates installed on the load balancer, because the connection has to be terminated there and then a new connection made to the instances.
Application load balancers are also slower than network load balancers because additional levels of the networking stack need to be processed, and the more levels involved, the more complexity and the slower the processing. So if you're facing any exam questions that are really strict on performance, you might want to look at network load balancers instead.
A benefit of application load balancers is that, because they're layer seven, they can evaluate the application health at layer seven—in addition to just testing for a successful network connection, they can make an application layer request to the application to ensure that it's functioning correctly.
Application load balancers also have the concept of rules, which direct connections arriving at a listener—so if you make a connection to a load balancer, what it does with that connection is determined by rules, which are processed in priority order. You can have many rules affecting a given set of traffic, and they’re processed in priority order, with the last one being the default catch-all rule, though you can add additional rules, each of which can have conditions.
Conditions inside rules include checking host headers, HTTP headers, HTTP request methods, path patterns, query strings, and even source IP, meaning these rules can take different actions depending on the domain name requested (like categor.io or dogogram.io), the path (such as images or API), the query string, or even the source IP address of any customers connecting to that application load balancer.
Rules can also have actions—these are the things the rules do with traffic: they can forward that traffic to a target group, redirect it to something else (maybe another domain name), provide a fixed HTTP response (like an error or success code), or perform authentication using OpenID or Cognito.
Visually, this is how it looks: a simple application load balancer deployment with a single domain, categor.io, using one host-based rule with an attached SSL certificate. The rule uses host header as a condition and forward as an action, forwarding any connections for categor.io to the target group for the categor application.
If you want additional functionality, let’s say that you want to use the same application load balancer for a corporate client trying to access categor.io—maybe users of Bowtie Incorporated using the 1.3.3.7 IP address are attempting to access it, and you want to present them with an alternative version of the application. You can handle that by defining a listener rule where the condition is the source IP address of 1.3.3.7, and the action forwards traffic to a separate target group—an auto scaling group handling a second set of instances dedicated to this corporate client.
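The two rules just described could be created with boto3's elbv2 API roughly as follows; all ARNs, priorities, and the corporate CIDR are placeholders rather than values from the lesson.

```python
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/categorum/..."  # placeholder

# Rule: host-header condition, forwarding categor.io traffic to target group 1.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "host-header", "Values": ["categor.io"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": "arn:aws:...:targetgroup/categorum-tg1/..."}],
)

# Rule: source-ip condition, sending the corporate client to target group 2.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=5,  # lower number means it is evaluated before the rule above
    Conditions=[{"Field": "source-ip", "SourceIpConfig": {"Values": ["1.3.3.7/32"]}}],
    Actions=[{"Type": "forward", "TargetGroupArn": "arn:aws:...:targetgroup/categorum-tg2/..."}],
)
```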
Because the application load balancer is a layer seven device, it can see inside the HTTP protocol and make decisions based on anything in that protocol or up to layer seven. Also, the connection from the load balancer to the instances for target group two will be a separate set of connections—highlighted by a slightly different color—because HTTP connections from enterprise users are terminated on the load balancer, with a separate connection to the application instances. There’s no option to pass through encrypted connections to the instances—it must be terminated—so if you need unbroken encrypted connections, you must use a network load balancer.
Since it’s a layer seven load balancer, you can use rules that work on layer seven protocol elements, like routing based on paths or headers, or redirecting traffic at the HTTP level. For example, if this ALB also handles traffic for dogogram.io, you could define a rule that matches dogogram.io and, as an action, configure a redirect toward categor.io—the obviously superior website. These are just a small subset of features available within the application load balancer, and because it's layer seven, you can perform routing decisions based on anything observable at that level, making it a really flexible product.
Before finishing, let’s take a look at network load balancers. They function at layer four, meaning they can interpret TCP, TLS, and UDP protocols, but have no visibility or understanding of HTTP or HTTPS. They can't interpret headers, see cookies, or handle session stickiness from an HTTP perspective, as that requires cookie awareness—which a layer four device doesn’t have.
Network load balancers are incredibly fast, capable of handling millions of requests per second with about 25% of the latency of application load balancers, since they don't deal with upper layers of the stack. They’re ideal for non-HTTP or HTTPS protocols—such as SMTP (email), SSH, game servers, or financial applications that don’t use web protocols.
If exam questions refer to non-web or non-secure web traffic that doesn’t use HTTP/HTTPS, default to network load balancers. One downside is that health checks only verify ICMP and basic TCP handshaking, not application awareness, so no detailed health checking is possible.
A key benefit is that they can be allocated static IPs, which is useful for white-listing—corporate clients can white-list NLB IPs to let them pass through firewalls, which is great for strict security environments. They can also forward TCP directly to instances, and because network layers build on top of each other, the network load balancer doesn’t interrupt any layers above TCP, allowing unbroken encrypted channels from clients to application instances.
This is essential to remember for the exam—using network load balancers with TCP listeners is how you achieve end-to-end encryption. They're also used for PrivateLink to provide services to other VPCs—another crucial exam point.
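As a sketch of those NLB properties, the snippet below creates a network load balancer with one Elastic IP per availability zone and a plain TCP listener; the subnet, allocation, and target group values are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

nlb = elbv2.create_load_balancer(
    Name="categorum-nlb",
    Type="network",
    Scheme="internet-facing",
    # One Elastic IP allocation per AZ gives the fixed addresses that
    # corporate clients can whitelist in their firewalls.
    SubnetMappings=[
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-aaaa1111"},
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-bbbb2222"},
    ],
)

# A plain TCP listener forwards the encrypted stream untouched, so TLS can
# terminate on the application instances rather than on the load balancer.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="TCP",
    Port=443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": "arn:aws:...:targetgroup/tls-app/..."}],
)
```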
To wrap up, let’s do a quick comparison. I find it easier to remember when to use a network load balancer, and if it’s not one of those cases, default to application load balancers for their added flexibility. Use network load balancers if you need unbroken encryption between client and instance, static IPs for white-listing, the best performance (millions of RPS and low latency), non-HTTP/HTTPS protocols, or PrivateLink.
For everything else, use application load balancers—their functionality is often valuable in most scenarios. That’s everything I wanted to cover about application and network load balancers for the exam. Go ahead and complete this video, and when you're ready, I'll look forward to you joining me in the next.
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to spend a few minutes just covering the evolution of the Elastic Load Balancer product; it's important for the exam and real world usage that you understand its heritage and its current state. Now this is going to be a super quick lesson because I'm going to be covering most of the detail in dedicated lessons which are coming up next in this section of the course, so let's jump in and take a look.
Now there are currently three different types of Elastic Load Balancers available within AWS; if you see the term ELB or Elastic Load Balancers then it refers to the whole family, all three of them. Now the load balancers are split between version 1 and version 2; you should avoid using the version 1 load balancer at this point and aim to migrate off them onto version 2 products which should be preferred for any new deployments, and there are no scenarios at this point where you would choose to use a version 1 load balancer versus one of the version 2 types.
Now the load balancer product started with the classic load balancer known as CLB which is the only version 1 load balancer and this was introduced in 2009, so it's one of the older AWS products. Now classic load balancers can load balance HTTP and HTTPS as well as lower level protocols but they aren't really layer 7 devices, they don't really understand HTTP and they can't make decisions based on HTTP protocol features; they lack much of the advanced functionality of the version 2 load balancers and they can be significantly more expensive to use.
One common limitation is that classic load balancers only support one SSL certificate per load balancer which means for larger deployments you might need hundreds or thousands of classic load balancers and these could be consolidated down to a single version 2 load balancer, so I can't stress this enough for any questions or any real world situations you should default to not using classic load balancers.
Now this brings me on to the new version 2 load balancers; the first is the application load balancer or ALB and these are truly layer 7 devices so application layer devices, they support HTTP, HTTPS and the web socket protocols, and they're generally the type of load balancer that you'd pick for any scenarios which use any of these protocols.
There's also network load balancers or NLBs which are also version 2 devices but these support TCP, TLS which is a secure form of TCP and UDP protocols, so network load balancers are the type of load balancer that you would pick for any applications which don't use HTTP or HTTPS; for example if you wanted to load balance email servers or SSH servers or a game which used a custom protocol so didn't use HTTP or HTTPS then you would use a network load balancer.
In general version 2 load balancers are faster and support target groups and rules which allow you to use a single load balancer for multiple things or handle the load balancing different based on which customers are using it. Now I'm going to be covering the capabilities of each of the version 2 load balancers separately as well as talking about rules but I wanted to introduce them now as a feature.
Now for the exam you really need to be able to pick between network load balancers and application load balancers for a specific situation, so that's what I want to work on over the coming lessons; for now though, this has just been an introduction lesson that talks about the evolution of these products, and that's everything that I wanted to cover in this lesson, so go ahead, complete the lesson, and when you're ready I'll look forward to you joining me in the next.
-
-
iskconeducation.org iskconeducation.org
-
WITH YUDHISHTHIRA FOR A HUSBAND I WILL NEVER BE FREE OF GRIEF. THE INSULT AFTER THE GAME OF DICE STILL RANKLES IN M
Enough is enough. Draupadi had to go through a lot because of the cowardice of her husbands, Yudhishthira in particular. Thanks to Bheema, the problem of Keechaka had been resolved. Had it not been for him, Yudhishthira might have just watched her being disrobed again and again without even trying to help her.
-
O KURU ELDERS, I CANNOT BEAR THIS PERSECUTION ANY LONGER. AM I WON OR NOT? I SHALL ABIDE BY YOUR VERDICT
I would say that she was the bravest among the brave present in the room. Nobody thought it best to interrupt the game when a woman was being objectified, and nobody questioned Duryodhana and Shakuni's play. They were all there for their petty entertainment. And when Dhritarashtra realized that it was wrong, it was too late, and he wanted to cover up the incident by fulfilling three of her wishes.
-
I SHALL, IN THE BATTLEFIELD, TEAR OPEN THE BREAST OF THIS VILLAIN OF THE BHARATA RACE, AND DRINK HIS LIFEBLOOD
What he said eventually came true. However, I believe that he had the power to stop all this from happening by just advising his brother that it was enough when he bet Draupadi in the game. Where did his conscience and bravery go when Draupadi needed it the most?
-
Draupadi was the total woman; complex and yet femi
Draupadi was far more intelligent than her husbands. When Yudhishthira messed up in the dice game, she had to take matters into her own hands. She questioned her husbands, their cousins, uncles, and everybody who witnessed the game about their morality and humanity. She vowed not to tend to her hair so that her husbands would be reminded of the injustice that she had to go through just because of them. In a sense, it was her way of getting for herself the justice that her husbands ignored.
-
DO NOT BE IMPETUOUS. IT WOULD BE AGAINST DHARMA, WHICH IS DIVINE AND SUPERIOR TO LIFE ITSELF. I AGREED TO THE STAKES THOUGH I KNEW SHAKUNI
If he knew what was going to come, then why did he even do it? If playing the game of dice was his karma to gain dharma, then it does not make any sense at all. Personally, I do not wish to have a husband who's going to put me and his brothers through a lot of suffering just because he wanted to take a risk. And him advising Bheema to be patient is very hypocritical at this moment. I would like to comment that he failed as a husband and also as a brother, the moment he agreed to Shakuni's game knowing that he would be dishonest.
-
-
www.biorxiv.org www.biorxiv.org
-
Author response:
The following is the authors’ response to the original reviews
eLife Assessment
The authors present an algorithm and workflow for the inference of developmental trajectories from single-cell data, including a mathematical approach to increase computational efficiency. While such efforts are in principle useful, the absence of benchmarking against synthetic data and a wide range of different single-cell data sets make this study incomplete. Based on what is presented, one can neither ultimately judge if this will be an advance over previous work nor whether the approach will be of general applicability.
We thank the eLife editor for the valuable feedback. Both benchmarking against other methods and validation on a synthetic dataset (“dyntoy”) are indeed presented in the Supplementary Note, although this was not sufficiently highlighted in the main text, which has now been improved.
Our manuscript contains benchmarking against a challenging synthetic dataset in Figure 1; furthermore, both the synthetic dataset and the real-world thymus dataset have been analyzed in parallel using currently available TI tools (as detailed in the Supplementary Note). Additional single-cell datasets (single-cell RNA-seq) were added in response to the reviewers' comments.
One of the reviewers correctly points out that tviblindi goes against the philosophy of automated trajectory inference. This is correct; we believe that a new class of methods, complementary to fully automated approaches, is needed to explore datasets with unknown biology. tviblindi is meant to be a representative of this class of methods—a semi-automated framework that builds on features inferred from the data in an unbiased and mathematically well-founded fashion (pseudotime, homology classes, suitable low-dimensional representation), which can be used in concert with expert knowledge to generate hypotheses about the underlying dynamics at an appropriate level of detail for the particular trajectory or biological process.
We would also like to mention that the algorithm and the workflow are not the sole results of the paper. We have thoroughly characterized human thymocyte development, where, in addition to expected biological endpoints, we found and characterized an unexpected activated thymic T-reg endpoint.
Public Reviews:
Reviewer #1 (Public Review):
Summary:
The authors present tviblindi, a computational workflow for trajectory inference from molecular data at single-cell resolution. The method is based on (i) pseudo-time inference via expected hitting time, (ii) sampling of random walks in a directed acyclic k-NN graph where edges are oriented away from a cell of origin w.r.t. the involved nodes' expected hitting times, and (iii) clustering of the random walks via persistent homology. An extended use case on mass cytometry data shows that tviblindi can be used to elucidate the biology of T cell development.
Strengths:
- Overall, the paper is very well written and most (but not all, see below) steps of the tviblindi algorithm are explained well.
- The T cell biology use case is convincing (at least to me: I'm not an immunologist, only a bioinformatician with a strong interest in immunology).
We thank the reviewer for the feedback and suggestions, which we will accommodate; we respond point-by-point below.
Weaknesses:
- The main weakness of the paper is that a systematic comparison of tviblindi against other tools for trajectory inference (there are many) is entirely missing. Even though I really like the algorithmic approach underlying tviblindi, I would therefore not recommend to our wet-lab collaborators that they should use tviblindi to analyze their data. The only validation in the manuscript is the T cell development use case. Although this use case is convincing, it does not suffice for showing that the algorithms's results are systematically trustworthy and more meaningful (at least in some dimension) than trajectories inferred with one of the many existing methods.
We have compared tviblindi to several trajectory inference methods (Supplementary Note section 8.2: Comparison to state-of-the-art methods), namely Monocle3 (v1.3.1) Cao et al. (2019), Stream (v1.1) Chen et al. (2019), Palantir (v1.0.0) Setty et al. (2019), VIA (v0.1.89) Stassen et al. (2021), StaVia (Via 2.0) Stassen et al. (2024), CellRank 2 (v2.06) Weiler et al. (2024), and PAGA (scanpy==1.9.3) Wolf et al. (2019). We added thorough and systematic comparisons to the other algorithms mentioned by reviewers. We included extended evaluation on publicly available datasets (Supplementary Note section 10).
Also, in the meantime we have successfully used tviblindi to investigate human B-cell development in primary immunodeficiency (Bakardjieva M, et al. Tviblindi algorithm identifies branching developmental trajectories of human B-cell development and describes abnormalities in RAG-1 and WAS patients. Eur J Immunol. 2024 Dec;54(12):e2451004. doi: 10.1002/eji.202451004.).
- The authors' explanation of the random walk clustering via persistent homology in the Results (subsection "Real-time topological interactive clustering") is not detailed enough, essentially only concept dropping. What does "sparse regions" mean here and what does it mean that "persistent homology" is used? The authors should try to better describe this step such that the reader has a chance to get an intuition how the random walk clustering actually works. This is especially important because the selection of sparse regions is done interactively. Therefore, it's crucial that the users understand how this selection affects the results. For this, the authors must manage to provide a better intuition of the maths behind clustering of random walks via persistent homology.
In order to satisfy both reader types: the biologist and the mathematician, we explain the mathematics in detail in the Supplementary Note, section 4. We improved the Results text to better point the reader to the mathematical foundations in the Supplementary Note.
- To motivate their work, the authors write in the introduction that "TI methods often use multiple steps of dimensionality reduction and/or clustering, inadvertently introducing bias. The choice of hyperparameters also fixes the a priori resolution in a way that is difficult to predict." They claim that tviblindi is better than the original methods because "analysis is performed in the original high-dimensional space, avoiding artifacts of dimensionality reduction." However, in the manuscript, tviblindi is tested only on mass cytometry data which has a much lower dimensionality than scRNA-seq data for which most existing trajectory inference methods are designed. Since tviblindi works on a k-NN graph representation of the input data, it is unclear if it could be run on scRNA-seq data without prior dimensionality reduction. For this, cell-cell distances would have to be computed in the original high-dimensional space, which is problematic due to the very high dimensionality of scRNA-seq data. Of course, the authors could explicitly reduce the scope of tviblindi to data of lower dimensionality, but this would have to be stated explicitly.
In the manuscript we tested the framework on the scRNA-seq data from Park et al. 2020 (DOI: 10.1126/science.aay3224). To illustrate that tviblindi can work directly in the high-dimensional space, we applied the framework successfully to imputed 2000-dimensional data. Furthermore, we successfully used tviblindi to investigate the bone marrow atlas scRNA-seq dataset of Zhang et al. (2024) and the atlas of mouse gastrulation of Pijuan-Sala et al. (2019). The idea behind tviblindi is to be able to work without the necessity to use non-linear dimensionality reduction techniques, which reduce the dimensionality to a very low number of dimensions and whose effects on the data distribution are difficult to predict. On the other hand, the use of (linear) dimensionality reduction techniques which effectively suppress noise in the data, such as PCA, is good practice (see also response to reviewer 2). We have emphasized this in the revised version and added the results of the corresponding analysis (see Supplementary Note, section 9).
- Also tviblindi has at least one hyper-parameter, the number k used to construct the k-NN graphs (there are probably more hidden in the algorithm's subroutines). I did not find a systematic evaluation of the effect of this hyper-parameter.
Detailed discussion of the topic is presented in the Supplementary Note, section 8.1, where Spearman correlation coefficient between pseudotime estimated using k=10 and k=50 nearest neighbors was 0.997. The number k however does affect the number of candidate endpoints. But even when larger k causes spurious connection between unrelated cell fates, the topological clustering of random walks allows for the separation of different trajectories. We have expanded the “sensitivity to hyperparameters” section 8.1 also in response to reviewer 2.
Reviewer #2 (Public Review):
Summary:
In Deconstructing Complexity: A Computational Topology Approach to Trajectory Inference in the Human Thymus with tviblindi, Stuchly et al. propose a new trajectory inference algorithm called tviblindi and a visualization algorithm called vaevictis for single-cell data. The paper utilizes novel and exciting ideas from computational topology coupled with random walk simulations to align single cells onto a continuum. The authors validate the utility of their approach largely using simulated data and establish known protein expression dynamics along CD4/CD8 T cell development in thymus using mass cytometry data. The authors also apply their method to track Treg development in single-cell RNA-sequencing data of human thymus.
The technical crux of the method is as follows: The authors provide an interactive tool to align single cells along a continuum axis. The method uses expected hitting time (given a user input start cell) to obtain a pseudotime alignment of cells. The pseudotime gives an orientation/direction for each cell, which is then used to simulate random walks. The random walks are then arranged/clustered based on the sparse region in the data they navigate using persistent homology.
We thank the reviewer for the feedback and suggestions, which we have accommodated; we respond point-by-point below.
Strengths:
The notion of using persistent homology to group random walks to identify trajectories in the data is novel.
The strength of the method lies in the implementation details that make computationally demanding ideas such as persistent homology more tractable for large scale single-cell data. This enables the authors to make the method more user friendly and interactive allowing real-time user query with the data.
Weaknesses:
The interactive nature of the tool is also a weakness, by allowing for user bias leading to possible overfitting for a specific data.
tviblindi is not designed as a fully automated TI tool (although it implements a fully automated module), but as a data driven framework for exploratory analysis of unknown data. There is always a risk of possible bias in this type of analysis - starting with experimental design, choice of hyperparameters in the downstream analysis, and an expert interpretation of the results. The successful analysis of new biological data involves a great deal of expert knowledge which is difficult to a priori include in the computational models.
tviblindi tries to solve this challenge by intentionally overfitting the data and keeping the level of resolution at a single random walk. In this way we aim to capture all putative local relationships in the data. The on-demand aggregation of the walks using the global topology of the data allows researchers to use their expert knowledge to choose the right level of detail (as demonstrated in Figure 4 of the manuscript) while relying on the topological structure of the high-dimensional point cloud. At all times tviblindi allows the user to inspect the composition of the trajectory to assess the variance in the development, possible hubs on the kNN graph, etc.
The main weakness of the method is the lack of benchmarking on real data and comparison to other methods. Trajectory inference is a very crowded field with many highly successful and widely used algorithms; the two most relevant ones (closest to this manuscript) are not only not benchmarked against, but also not cited. These include those that specifically use persistent homology to discover trajectories (Rizvi et al., published in Nat Biotech 2017) and those that specifically implement the idea of simulating random walks to identify stable states in single-cell data (e.g. CellRank, published in Lange et al., Nat Meth 2022), as well as many trajectory algorithms that take alternative approaches. The paper has much less benchmarking, demonstration on real data, and comparison to the very many other trajectory algorithms published before it. Generally speaking, in a crowded field of previously published trajectory methods, I do not think this one approach will compete well against prior work, especially due to its inability to handle the noise typical in real-world data (as was even demonstrated in the little bit of application to real-world data provided).
We provided comparisons of tviblindi and vaevictis in the Supplementary Note, section 8.2, where we compare it to Monocle3 (v1.3.1) Cao et al. (2019), Stream (v1.1) Chen et al. (2019), Palantir (v1.0.0) Setty et al. (2019), VIA (v0.1.89) Stassen et al. (2021), StaVia (Via 2.0) Stassen et al. (2024), CellRank 2 (v2.06) Weiler et al. (2024) and PAGA (scanpy==1.9.3) Wolf et al. (2019). We added thorough and systematic comparisons to the other algorithms mentioned by reviewers. We included extended evaluation on publicly available datasets (Supplementary Note section 10).
Beyond the general lack of benchmarking, there are two issues that give me particular concern. As previously mentioned, the algorithm is highly susceptible to user bias and overfitting. The paper gives the example (Figure 4) of a trajectory which mistakenly shows that cells may pass from an apoptotic phase to a different developmental stage. To circumvent this mistake, the authors propose the interactive version of tviblindi that allows users to zoom in (increase resolution) and identify that there are in fact two trajectories in one. In this case, the authors show how the user can fix a mistake when the answer is known. However, the point of trajectory inference is to discover the unknown. With so many interactive options for the user to guide the result, the method is more user/bias driven than data-driven. So a rigorous and quantitative discussion of robustness of the method, as well as how to ensure data-driven inference and avoid over-fitting, would be useful.
Local directionality in expression data is a challenge which is not, to our knowledge, solved. And we are not sure it can be solved entirely, even theoretically. The random walks passing “through” the apoptotic phase are biologically infeasible, but they are an (unbiased) representation of what the data look like based on the diffusion model. This is a property of the data (or of the panel design) which has to be interpreted properly, rather than a mistake. Of note, except for Monocle3 (which does not provide directionality) the other tested methods did not discover this trajectory at all.
The “zoom in” has in fact nothing to do with “passing through the apoptosis”. We show how the researcher can investigate the suggested trajectory to see if there is an additional structure of interest and/or relevance. This investigation is still data driven (although not fully automated). Anecdotally in this particular case this branching was discovered by a bioinformatician, who knew nothing about the presence of beta-selection in the data.
We show that the trajectory of apoptosis of cortical thymocytes consists of 2 trajectories corresponding to 2 different checkpoints (beta-selection and positive/negative selection). This type of structure, where 2 (or more) trajectories share the same path for most of the time, then diverge only to be connected again at a later moment (immediately, from the point of view of the beta-selection failure trajectory), is a challenge for TI algorithms, and none of the tested methods gave a correct result. More importantly, there seems to be no clear way to focus on these kinds of structures (common origin and common fate) in TI methods.
Of note, the “zoom in” is a recommended and convenient method to look for an inner structure, but it does not necessarily mean addition of further homological classes. Indeed, in this case the reason that the structure is not visible directly is the limitation of the dendrogram complexity (only branches containing at least 10% of simulated random walks are shown by default). In summary, tviblindi effectively handled all noise in the data that obscured biologically valid trajectories for other methods. We have improved the discussion of the robustness in the current version.
Second, the paper discusses the benefit of tviblindi operating in the original high dimensions of the data. This is perhaps adequate for mass cytometry data, where there is less of an issue of dropouts and the proteins may be chosen to be largely independent. But in the context of single-cell RNA-sequencing data, the massive undersampling of mRNA, as well as the high degree of noise (e.g. ambient RNA), introduces so much noise that modeling data in the original high dimensions leads to methods being fit to the noise. Therefore ALL other methods for trajectory inference work in a lower dimension, for very good reason; otherwise one is learning noise rather than signal. It would be great to have a discussion on the feasibility of the method as is for such noisy data and to provide users with guidance. We note that the example scRNA-seq data included in the paper is denoised using imputation, which will likely result in the trajectory inference being oversmoothed as well.
We agree with the reviewer. In our manuscript we wanted to showcase that tviblindi can directly operate in high-dimensional space (thousands of dimensions) and we used MAGIC imputation for this purpose. This was not ideal. A more standard approach, which uses 30-50 PCs as input to the algorithm, resulted in equivalent trajectories. We have added this analysis to the study (Supplementary Note, section 9).
In summary, the fact that tviblindi scales well with dimensionality of the data and is able to work in the original space does not mean that it is always the best option. We have added a corresponding comment into the Supplementary note.
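For readers who want to reproduce this kind of preprocessing, here is a minimal sketch of the standard PCA-based pipeline mentioned above, using scanpy; the file name, gene counts and parameter values are placeholders, not the settings used in the study.

```python
# Illustrative sketch only: a conventional scRNA-seq preprocessing path that feeds
# 50 principal components into a downstream TI tool, as mentioned in the response.
import scanpy as sc

adata = sc.read_h5ad("thymus_example.h5ad")        # hypothetical input file
sc.pp.normalize_total(adata, target_sum=1e4)       # library-size normalization
sc.pp.log1p(adata)                                 # log-transform
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy() # keep highly variable genes
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, n_comps=50)                       # 30-50 PCs is the range cited above

pcs = adata.obsm["X_pca"]                          # matrix passed to the TI algorithm
print(pcs.shape)
```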
Reviewer #3 (Public Review):
Summary:
Stuchly et al. proposed a single-cell trajectory inference tool, tviblindi, which was built on a sequential implementation of the k-nearest neighbor graph, random walk, persistent homology and clustering, and interactive visualization. The paper was organized around the detailed illustration of the usage and interpretation of results through the human thymus system.
Strengths:
Overall, I found the paper and method to be practical and needed in the field. Especially the in-depth, step-by-step demonstration of the application of tviblindi in numerous T cell development trajectories and how to interpret and validate the findings can be a template for many basic science and disease-related studies. The videos are also very helpful in showcasing how the tool works.
Weaknesses:
I only have a few minor suggestions that hopefully can make the paper easier to follow and the advantage of the method to be more convincing.
(1) The "Computational method for the TI and interrogation - tviblindi" subsection under the Results is a little hard to follow without having a thorough understanding of the tviblindi algorithm procedures. I would suggest that the authors discuss the uniqueness and advantages of the tool after the detailed introduction of the method (moving it after the "Connectome - a fully automated pipeline".
We thank the reviewer for the suggestion and we have accommodated it to improve readability of the text.
Also, considering it is a computational tool paper, inevitably, readers are curious about how it functions compared to other popular trajectory inference approaches. I did not find any formal discussion until almost the end of the supplementary note (even that is not cited anywhere in the main text). Authors may consider improving the summary of the advantages of tviblindi by incorporating concrete quantitative comparisons with other trajectory tools.
We provided comparisons of tviblindi and vaevictis in the Supplementary Note, section 8.2, where we compare it to Monocle3 (v1.3.1) Cao et al. (2019), Stream (v1.1) Chen et al. (2019), Palantir (v1.0.0) Setty et al. (2019), VIA (v0.1.89) Stassen et al. (2021), StaVia (Via 2.0) Stassen et al. (2024), CellRank 2 (v2.06) Weiler et al. (2024) and PAGA (scanpy==1.9.3) Wolf et al. (2019). We added thorough and systematic comparisons to the other algorithms mentioned by reviewers. We included extended evaluation on publicly available datasets (Supplementary Note section 10).
(2) Regarding the discussion in Figure 4 the trajectory goes through the apoptotic stage and reconnects back to the canonical trajectory with counterintuitive directionality, it can be a checkpoint as authors interpret using their expert knowledge, or maybe a false discovery of the tool. Maybe authors can consider running other algorithms on those cells and see which tracks they identify and if the directionality matches with the tviblindi.
We have indeed used the thymus dataset for comparison of all TI algorithms listed above. Except for Monocle 3 they failed to discover the negative selection branch (Monocle 3 does not offer directionality information). Therefore, a valid topological trajectory with incorrect (expert-corrected) directionality was partly or entirely missed by other algorithms.
(3) The paper mainly focused on mass cytometry data and had a brief discussion on scRNA-seq. Can the tool be applied to multimodality data such as CITE-seq data that have both protein markers and gene expression? Any suggestions if users want to adapt to scATAC-seq or other epigenomic data?
The analysis of multimodal data is the logical next step and is the topic of our current research. At this moment tviblindi cannot be applied directly to multimodal data. It is possible to use the KNN-graph based on multimodal data (such as weighted nearest neighbor graph implemented in Seurat) for pseudotime calculation and random walk simulation. However, we do not have a fully developed triangulation for the multimodal case yet.
Recommendations for the authors:
Reviewer #1 (Recommendations For The Authors):
Suggestions for improved or additional experiments, data or analyses:
- Benchmark against existing trajectory inference methods.
- Benchmark on scRNA-seq data or an explicit statement that, unlike existing methods, tviblindi is not designed for such data.
We provided comparisons of tviblindi and vaevictis in the Supplementary Note, section 8.2, where we compare it to Monocle3 (v1.3.1) Cao et al. (2019), Stream (v1.1) Chen et al. (2019), Palantir (v1.0.0) Setty et al. (2019), VIA (v0.1.89) Stassen et al. (2021), StaVia (Via 2.0) Stassen et al. (2024), CellRank 2 (v2.06) Weiler et al. (2024) and PAGA (scanpy==1.9.3) Wolf et al. (2019). We added thorough and systematic comparisons to the other algorithms mentioned by reviewers. We included extended evaluation on publicly available datasets (Supplementary Note section 10).
- Systematic evaluation of the effects of hyper-parameters on the performance of tviblindi (as mentioned above, there is at least one hyper-parameter, the number k used to construct the k-NN graphs).
This is described in Supplementary Note, section 8.1.
Recommendations for improving the writing and presentation:
- The GitHub link to the algorithm which is currently hidden in the Methods should be moved to the abstract and/or a dedicated section on code availability.
- The presentation of the persistent homology approach used for random walk clustering should be improved (see public comment above).
This is described extensively in the Supplementary Note.
- A very minor point (can be ignored by the authors): consider renaming the algorithm. At least for me, it's extremely difficult to remember.
We chose to keep the original name.
Minor corrections to the text and figures:
- Labels and legend texts are too small in almost all figures.
Reviewer #2 (Recommendations For The Authors):
(1) On page 3: "(2) Analysis is performed in the original high-dimensional space avoiding artifacts of dimensionality reduction." In mass cytometry data, where there is no issue of dropouts, one may choose proteins such that they are not correlated with each other, making dimensionality reduction techniques less relevant. But in the context of unbiased assays such as single-cell RNA-sequencing (scRNA-seq), one measures all the genes in a cell, so dimensionality reduction can help resolve the redundancy in the feature space due to correlated/co-regulated gene expression patterns. This assumption forms the basis of most methods in scRNA-seq. More importantly, in scRNA-seq data the dropouts and ambient molecules in mRNA counts result in so much noise that modeling cells in the full gene expression space is highly problematic. So the authors are requested to discuss in detail how they would propose to deal with noise in scRNA-seq data.
On this note, the authors mention in Supplementary Note 9 (Analysis of human thymus single-cell RNA-seq data): "Imputed data are used as the input for the trajectory inference, scaled counts (no imputation) are shown in line plots". The line plots indicate the gene expression trends along the obtained pseudotime. The authors use MAGIC to impute the data, and we request the authors to mention this in the Methods section (currently one must look through the code in Supplementary Note 1.3 to find this). Data imputation in single-cell RNA-seq data is intended to enable quantification of individual gene expression distributions or pairwise gene associations. But when all the genes in an imputed dataset are used for visualization, clustering or trajectory inference, the averaging effect will compound and result in severely smoothed data that misses important differences between cell states. Especially in the case of MAGIC, which uses a transition matrix raised to a power, it is over-smoothing to use transition-matrix-smoothed data to obtain another transition matrix to calculate the hitting time (or simulate random walks). Second, the authors' proposal to use scaled counts to study gene trends cannot be generalized to other settings due to the dropout issue. Given the few genes (and only one branch) that are highlighted in Figure 7D-G and Figure 31 in the Supplementary Note, it is hard to say if scaling raw values would pick up meaningful biology robustly here for other branches.
We recommend that this data be reanalyzed with non-imputed data used for trajectory inference and imputed gene expression used for line plots.
As stated above in the public review, we reanalyzed the scRNA-seq data using a more standard approach (first 50 principal components). We have also analyzed two additional scRNA-seq datasets (sections 1 and 10 of the Supplementary Note).
On the same note, the authors use Seurat's CellCycleScoring to obtain the cell cycle phase of each cell and later use ScaleData to regress them out. While we agree that it is valuable to remove cell cycle effect from the data for trajectory inference (and has been used previously in other methods), the regression approach employed in Seurat's ScaleData is not appropriate. It is an aggressive approach that severely changes expression pattern of many genes and can result in new artifacts (false positives) in the data. We recommend the authors to explore this more and consider using a more principled alternatives such as fscLVM (https://genomebiology.biomedcentral.com/articles/10.1186/s13059-017-1334-8).
Cell cycle correction is an open problem (Heumos, Nat Rev Genetics, 2023).
Here we use an (arguably aggressive) approach to make the presentation more straightforward. The cells we are interested in here (end #6) are not dividing, and the regression does not change the conclusions drawn in the paper.
(2) The figures provided are extremely low in resolution that it is practically impossible to correctly interpret a lot of the conclusion and references made in the figure (especially Figure 3 in the main text).
The resolution of the figures was improved.
(3) There are many aspects of the method that enable easy user biases and can lead to substantial overfitting of the data.
a. On page 7: "The topology of the point cloud representing human T-cell development is more complex ... and does not offer a clear cutoff for the choice of significant sparse regions. Interactive selection allows the user to vary the resolution and to investigate specific sparse regions in the data iteratively." This implies that the method enables user biases to be introduced into the data analysis. While perhaps useful for exploration, quantitative trajectory assessment using such approach can be faulty when the user (A) may not know the underlying dynamics (B) forces preconceived notion of trajectory.
The authors should consider making the trajectory inference approach less dependent on interactive user input and show that the trajectory results are robust to any choices the user may make. It may also help if the authors provide an effective guide and mention clearly what issues could result due to the use of such thresholds.
As explained in the response in public reviews, tviblindi is not designed as a fully automated TI tool, but as a data driven framework for exploratory analysis of unknown data.
There is always a risk of bias in this type of analysis - starting with the experimental design, the choice of hyperparameters in the downstream analysis, and the expert interpretation of the results. The successful analysis of new biological data involves a great deal of expert knowledge which is difficult to include a priori in the computational models. To specifically address the points raised by the reviewer:
“(A) may not know the underlying dynamics” - tviblindi is designed to perform exploratory analysis of the unknown underlying dynamics. We showcase in the study how this can be performed and we highlight possible cases which can be resolved expertly (spurious connections (doublets), different scales of resolution (beta selection)). Crucially, compared to other TI methods, tviblindi offers a clear mechanism for how to discover, focus on and resolve these issues, which would (and do) contaminate the trajectories discovered fully automatically by the tested methods (cf. the beta selection, or the development of plasmacytoid dendritic cells (PDCs); Supplementary Note, section 10.1).
“(B) forces preconceived notion of trajectory” - user interaction in tviblindi does not force a preconceived notion of the trajectory. The random walks are simulated before the interactive step in an unbiased manner. During the interactive step the user adjusts trajectory-specific resolution - an incorrect choice of resolution may result in either merging distinct trajectories into one or over-separating the trajectories (which is arguably much less serious). However, the interactive step is designed to deal with exactly this kind of challenge. We showcase (e.g. beta selection, or PDC development) how to address the issue - tviblindi allows us to investigate deeper structure in any considered trajectory.
Thus, tviblindi represents a new class of methods that is complementary to fully automated trajectory inference tools. It offers a semi-automated tool that leverages features derived from data in an unbiased and mathematically rigorous manner, including pseudotime, homology classes, and appropriate low-dimensional representations. These can be integrated with expert knowledge to formulate hypotheses regarding the underlying dynamics, tailored to the specific trajectory or biological process under investigation.
b. In Figure 4, the authors discuss the trajectory of cells emanating from the CD3 negative double positive stage and entering an apoptotic phase and mention that tviblindi may give "the false impression that cells may pass through an apoptotic phase into a later developmental stage", and propose that the interactive version of tviblindi can help the user zoom into (increase resolution on) this phenomenon and identify that there are in fact two trajectories in one. Given this, how do the other trajectories in the data change if a user manually adjusts the resolution? A quantification of the robustness is important. Also, it appears that a more careful data clean-up could avoid such pitfalls where the algorithm infers a trajectory based on mixed phenotypes and the user would not have to manually adjust the resolution to obtain a clear biological conclusion. We note that the original publication of this data did such "data clean-up" using a simple diffusion map based dimensionality reduction, which the authors boast they avoid. There is a reason for this dimensionality reduction (distinguishing signal from noise), even in CyTOF data, let alone its importance in single-cell data.
The reviewer is concerned about two different, but intertwined, issues which we wish to untangle here. First, data clean-up is typically done on the premise that dead cells are irrelevant and a source of false signals. In the case of thymocytes in the human thymus this premise is not true. Apoptotic cells are a legitimate (actually dominant) fate of the development and thus need to be represented in the TI dataset. Their biological behavior is however complex, as they stop expressing proteins and thus lose their surface markers gradually, as dictated by the particular protein degradation kinetics. So can we clean up dead and dying cells better? Yes, but we do not want to, since we would lose cells we want to analyze. Second, do trajectories change when we zoom into the data? No, only the level of detail presented visually changes. Since we calculate 5000 trajectories in the dataset, we need to aggregate them already for the hierarchical clustering visualization. Note that Figure 4, panel A highlights 159 trajectories selected in group V. Zooming in means that the hierarchy of trajectories within group V is revealed (panel D, groups V.a and V.b) and can be interpreted on the vaevictis and line plot graphs (panels E, F).
c. In the discussion, the authors write "[tviblindi] allows the selection and grouping of similar random walks into trajectories based on visual interaction with the data". This counters the idea of automated trajectory inference and can lead to severe overfitting.
As explained in the reply to Q3, our aim was NOT to create a fully automated trajectory inference tool. Moreover, in our experience we realized that all current tools take this fully automated approach, with a search for an “ideal” set of hyperparameters. This, in our experience, leads to a “black-box” tool that is difficult to interpret for the expert in the biological field. To respond to this need we designed a modular approach where the results of the TI are presented and the expert can interact with them to focus the visualization and to derive an interpretation. Our interactive concept is based on 15 years of experience with data analysis in flow cytometry, where neither manual gating nor full automation is the ultimate solution, but a smart integration of both approaches eventually wins the game.
Thus, tviblindi represents a new class of methods that is complementary to fully automated trajectory inference tools. It offers a semi-automated tool that leverages features derived from data in an unbiased and mathematically rigorous manner. These features include pseudotime, homology classes, and appropriate low-dimensional representations. These features can be integrated with expert knowledge to formulate hypotheses regarding the underlying dynamics, tailored to the specific trajectory or biological process under investigation.
d. The authors provide some comment on the robustness to the relaxation parameter for witness complex construction in Supplementary Note Section 8.1.2 but it is limited given the importance of this parameter and a more thorough investigation is recommended. We request the authors to provide concrete examples with figures of how changing alpha2 parameter leads to simplicial complexes of different sizes and an assessment of contexts in which the parameter is robust and when not (in both simulated and publicly available real data). Of note, giving the users a proper guide for parameter choice based on these examples and offering them ways to quantify robustness of their results may also be valuable.
Section 8 in Supplementary Note was extended as requested.
e. The authors are requested for an assessment of possible short-circuits (e.g. cells of two distantly related phenotypes that get connected erroneously in the trajectory) in the data, and how their approach based on persistent homology deals with it.
If a short circuit results in a (spurious) alternative trajectory, the persistent homology approach allows us to distinguish it from genuine trajectories that do not follow the short circuit. This prevents contamination of the inferred evolution by erroneous connections. The ability to distinguish and separate distinct trajectories with the same fate is a major strength of this approach (e.g., the trajectory through doublets or the trajectories around checkpoints in thymocytes’ evolution).
(4) The authors propose vaevictis as a new visualization tool and show its performance compared to the standard UMAP algorithm on a simulated data set (Figure 1 in the Supplementary Notes). We recommend a more comprehensive comparison between the two algorithms on a wide array of publicly available single-cell datasets, as well as a comparison to other popular dimensionality reduction approaches such as force-directed layouts, which are the most widely used tools specifically for visualizing trajectories.
We added Section 10 to Supplementary Note that presents multiple comparisons of this kind. It is important to note that tviblindi works independently of visualization and any preferred visualization can be used in the interactive phase (multiple visualisation methods are implemented).
(5) In Supplementary Note 8.2, the authors compare tviblindi against the other methods. We recommend the authors quantify the comparison or expand on their assessments in real biological data. For example, in the comparison against Palantir and VIA the authors mention "... discovers candidate endpoints in the biological dataset but lacks toolbox to interrogate subtle features such as complex branching" and "fails to discover subtle features (such as Beta selection)" respectively. We recommend the authors make these comparisons more precise or provide quantification. While the added benefit of interactive sessions may make tviblindi more user friendly, the kind of analysis of subtle features that tviblindi enables (e.g. Figure 1H) should be possible in Palantir or VIA as well.
We extended the comparisons and presented them in Section 8 and 10 in Supplementary Note.
(6) The notion of using random walk simulations to identify terminal (and initial states) has been previously used in single-cell data (CellRank algorithm: https://www.nature.com/articles/s41592-021-01346-6). We request the authors to compare their approach to CellRank.
We compared our algorithm to the CellRank successor CellRank 2 (see section 8.2, Supplementary Note)
(7) The notion of using persistent homology to discover trajectories has been previously used in single-cell data (https://pubmed.ncbi.nlm.nih.gov/28459448/). We request a comparison to this approach.
The proposed algorithm was not able to accommodate the large datasets we used.
scTDA (Rizvi, Camara et al. Nat. Biotechnol. 2017) has not been updated for 6 years. It is not suited for complex atlas-sized datasets both in terms of performance and utility, with its limited visualization tools. It also lacks capabilities to analyze individual trajectories.
(8) In Figure 3B, the authors visualize the endpoints and simulated random walks using the connectome. There is no edge from start to the apoptotic cells here. It is not clear why? If they are not relevant based on random walks, can the user remove them from analysis? Same for the small group of pink cells below initial point.
The connectome is a fully automated approach (similar to PAGA) which gives a basic overview of the data. It is not expected to be able to compete with the interactive pipeline of tviblindi for the same reasons as the fully automated methods (difficult to predict the effect of hyperparameters).
(9) In Supplementary Figure 3, in relation to "Variants of trajectories including selection processes" the author mention that there is a spurious connection between CD4 single positive, and the doublet set of cells. The authors mention that the presence of dividing cells makes it difficult to remove the doublets. We request the authors to discuss why. For example, the authors seem to have cell cycle markers (e.g. Ki67, pH3, Cyclin) and one would think that coupled with DNA intercalator 191/193lr one could further clean-up the data. Can the authors employ alternative toolkits such as doublet detection methods?
To address this issue, we do remove doublets with illegitimate cell barcodes (e.g. we remove any two cells from two samples with different barcodes which present with a double barcode). Although there are computational doublet removal approaches for mass cytometry (Bagwell, Cytometry A 2020), mostly applied to peripheral blood samples (where cell division is not present under steady-state immune system conditions), these are not well suited for situations where dividing cells occur (Rybakowska P, Comput Struct Biotechnol J. 2021), which is the case for our thymocyte samples. Furthermore, there are other situations where doublet formation is not an accident, but rather a biological response (Burel JG, Cytometry A 2020). Thus, the doublet cell problem is similar to the apoptotic cell problem discussed earlier.
We could remove cells with the double DNA signal, but this would remove not only accidental doublets but also the legitimate (dividing) cells. So the question is how to remove the illegitimate doublets but not the legitimate?
Of note, the trajectory going through doublets does not affect the interpretation of other trajectories as it is readily discriminated by persistent homology and thus random walks passing through this (spurious) trajectory do not contaminate the markers’ evolution inferred for legitimate trajectories.
We therefore prefer to remove only the barcode-illegitimate doublets and keep all others in the analysis, using the expert analysis step also to identify (using the cell cycle markers plus other features) the artificially formed doublets and thus the spurious connections.
(10) The authors should discuss how the gene expression trend plots are made (e.g. how are the expression averaged? Rolling mean?).
The development of those markers is shown as a line plot connecting the average values of a specific marker within a pseudotime segment. By default, the pseudotime values are divided into uniform segments (each containing the same number of points), whose number can be changed in the GUI. To focus on either early or late stages of the development, the segment division can be adjusted in the GUI. See section 6 of the Supplementary Note.
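A minimal sketch of this segment-averaging scheme is shown below; it is illustrative only, and the function and variable names are ours, not tviblindi's.

```python
# Minimal sketch of the line-plot construction described above: cells are ordered
# by pseudotime, split into segments of equal size (equal number of cells), and
# the mean marker value per segment is what the line plot connects.
import numpy as np

def marker_trend(pseudotime, marker, n_segments=20):
    order = np.argsort(pseudotime)
    pt, expr = pseudotime[order], marker[order]
    pt_chunks = np.array_split(pt, n_segments)       # equal-count pseudotime segments
    expr_chunks = np.array_split(expr, n_segments)
    x = np.array([chunk.mean() for chunk in pt_chunks])    # segment position
    y = np.array([chunk.mean() for chunk in expr_chunks])  # average marker value
    return x, y

# Example with random data standing in for one marker along one trajectory
rng = np.random.default_rng(0)
pt = rng.random(5000)
marker = np.clip(pt + 0.1 * rng.standard_normal(5000), 0, None)
x, y = marker_trend(pt, marker, n_segments=25)
```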
Reviewer #3 (Recommendations For The Authors):
The overall figures quality needs to be improved. For example, I can barely see the text in Figure 3c.
The resolution of the figures was improved.
-
-
getinkspired.com getinkspired.com
-
Hiring dedicated Flutter developers in 2025 offers businesses a competitive edge through faster cross-platform deployment, reduced development costs, and seamless performance across Android and iOS. This blog explores why expert Flutter talent is essential for delivering modern, high-performing mobile apps.
-
-
go-gale-com.ezp.idm.oclc.org go-gale-com.ezp.idm.oclc.org
-
"tie back to some of those common themes that we experience at the beginning of Warcraft, when we're playing those original zones."
Shows a genuine intention to invoke nostalgic game elements
-
-
osf.io osf.io
-
Reviewer #3 (Public review):
In this paper, the authors use a three-phase economic game to examine the tendency to engage in prosocial versus competitive exchanges with three anonymous partners. In particular, they consider individual differences in the tendency to infer about others' tendencies based on one's preferences and to update one's preferences based on observations of others' behavior. The study includes a sample of individuals diagnosed with borderline personality disorder and a matched sample of psychiatrically healthy control participants.
On the whole, the experimental design is well-suited to the questions and the computational model analyses are thorough, including modern model-fitting procedures. I particularly appreciated the clear exposition regarding model parameterization and the descriptive Table 2 for qualitative model comparison. In the revised manuscript, the authors now provide a more thorough treatment of examining group differences in computational parameters given that the best-fitting model differed by group. They also examine the connection of their task and findings to related research focusing on self-other representation and mentalization (e.g., Story et al., 2024).
The authors note that the task does not encourage competition and instead captures individual differences in the motivation to allocate rewards to oneself and others in an interdependent setting. The paper could have been strengthened by clarifying how the Social Value Orientation framework can be used to interpret the motivations and behavior of BPD versus CON participants on the task. Although the authors note that their approach makes "clear and transparent a priori predictions," the paper could be improved by providing a clear and consolidated statement of these predictions so that the results could be interpreted vis-a-vis any a priori hypotheses.
Finally, the authors have amended their individual difference analyses to examine psychometric measures such as the CTQ alongside computational model parameter estimate differences. I appreciate that these analyses are described as exploratory. The approach of using a partial correlation network with bootstrapping (and permutation) was interesting, but the logic of the analysis was not clearly stated. In particular, there are large group (Table 1: CON vs. BPD) differences in the measures introduced into this network. As a result, it is hard to understand whether any partial correlations are driven primarily by mean differences in severity (correlations tend to be inflated in extreme-groups designs due to the absence of observations in the middle of the scales forming each bivariate distribution). I would have found these exploratory analyses more revealing if group membership was controlled for.
-
-
blog.dnevnik.hr blog.dnevnik.hr
-
The mobile app space is changing at a breakneck pace, and companies are in a hurry to provide high-performing, seamless applications to users. As the competition goes through the roof, the right technology and development team must be chosen. That's where Flutter app development comes in as a game-changer, with cross-platform support, quick development cycles, and a native-like experience all using one codebase.
Unlock faster time-to-market, native-like performance, and cross-platform scalability by hiring skilled Flutter developers for your next app idea. This article explores how expert Flutter talent can help businesses build efficient, user-centric apps while saving time and costs.
-
-
templeu.instructure.com templeu.instructure.com
-
Cable has stood ready to supplant broadcasting from the very beginning of both radio and television; its failure so to do is a further vivid example of the operation of the ‘law’ of the suppression of radical potential.
This sentence is saying that cable technology was always good enough to replace traditional radio and TV broadcasts, even from the start. But it didn’t happen—not because the technology wasn’t ready, but because powerful groups didn’t want it to. The author is pointing out how new, game-changing ideas or inventions often get blocked because they could shake up the way things already work or threaten people in charge.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
They also have a history of collectively choosing a target website or community and doing a “raid” where they all try to join and troll and offend the people in that community.
This part was very shocking for me; I found it disquieting that an anonymous group could coordinate attacks to hurt or offend others and treat it like a game. This made me reflect on how online spaces can encourage people to lose empathy when there is no accountability, and on how platforms might prevent this kind of behavior without restricting free speech; similar to our discussions in class about the responsibility of platform design when anonymous platforms allow cruelty rather than actual freedom.
-
-
gamesfromwithin.com gamesfromwithin.com
-
all a game does is transform some data (assets, inputs, state) into some other data (graphics commands, new game states)
very profound statement.
-
minimize the amount of transformations
does "transformations" here mean, for example, loading some files into the game?
-
-
worldtreasures.org worldtreasures.org
-
10. True: Following the death of her husband, Eliza Hamilton took it upon herself to tell the story of her husband, which also benefited her! Following the Reynolds Pamphlet, Eliza’s story was exploited without her permission. By telling Hamilton’s story after his death, Eliza was able to reclaim her own narrative as well. As a reminder, the Reynolds Pamphlet was published by Alexander Hamilton to clear his name of involvement in political corruption after being blackmailed by James Reynolds. In doing so, he admitted to his romantic involvement with James’ wife, Maria, humiliating Eliza in the process. Another truth in the musical is the depiction of Eliza burning love letters between her and Hamilton, though her reasoning cannot be known for certain. Finally, in the last song of the musical, Eliza sings of the philanthropy that she became involved in after Hamilton’s death. This was also true! Eliza founded The Hamilton Free School, which was the first school in Washington Heights, and became heavily involved in helping orphans and widows.
I didn't have much to say about this article, but I like the layout of it and I feel like I could use this as a fun game during the interview; I'm having many ideas from this about things we could do!
-
-
docs.google.com docs.google.com
-
He loved boxing, though. He knew the names of all the Mexican fighters as if they lived here, as if they were Dodgers players, like Steve Sax or Steve Yeager, Dusty Baker, Kenny Landreaux or Mike Marshall, Pedro Guerrero. Roque did know about Fernando Valenzuela, as everyone did, even his mom, which is why she agreed to let Roque take them to a game
Roque, despite not being a huge fan of baseball, takes Erick out, in an attempt to connect with him but also show him that he loves not just Erick's mother, but Erick as well.
-
His mom was saying something, and Roque, too, and then, finally, it was just him and that ball and his stinging hands.
He was so shocked by his catch that everything froze around him. He wasn't just at a game or just watching his favorite players; he was experiencing the feel of holding a ball they had played with, and he was able to experience this because Roque had shown him more actual care and attention than any of the other men had.
-
-
social-media-ethics-automation.github.io social-media-ethics-automation.github.io
-
Affordances [e28] are what a user interface lets you do. In particular, it’s what a user interface makes feel natural to do. So for example, an interface might have something that looks like it should be pressed, or an interface might open by scrolling a little so it is clear that if you touch it you can make it scroll more (see a more nuanced explanation here [e29])
I've played a few games, and I think the design of some of the games' user interfaces could be applied to social platforms as well. Different buttons in a game interface lead to different functions, and the most important functions use enlarged fonts and frames, or prominent colors. To achieve the property of affordance, the design should be clean and clear, so that the user can see at a glance where the buttons lead to, and can quickly find the important buttons at any time.
-
-
nautil.us nautil.us
-
Artur Garcez and Luis Lamb wrote a manifesto for hybrid models in 2009, called Neural-Symbolic Cognitive Reasoning. And some of the best-known recent successes in board-game playing (Go, Chess, and so forth, led primarily by work at Alphabet’s DeepMind) are hybrids. AlphaGo used symbolic-tree search, an idea from the late 195
example symbols
-
A wakeup call came at the end of 2021, at a major competition, launched in part by a team of Facebook (now Meta), called the NetHack Challenge. NetHack, an extension of an earlier game known as Rogue, and forerunner to Zelda, is a single-user dungeon exploration game that was released in 1987.
example symbols
-
-
learn.cantrill.io learn.cantrill.io
-
Welcome back and in this lesson I want to talk about two volume types available within AWS GP2 and GP3. Now GP2 is the default general purpose SSD based storage provided by EBS, and GP3 is a newer storage type which I want to include because I expect it to feature on all of the exams very soon. Now let's just jump in and get started.
General Purpose SSD storage provided by EBS was a game changer when it was first introduced; it's high performance storage for a fairly low price. Now GP2 was the first iteration and it's what I'm going to be covering first because it has a simple but initially difficult to understand architecture, so I want to get this out of the way first because it will help you understand the different storage types.
When you first create a GP2 volume it can be as small as 1 GB or as large as 16 TB, and when you create it the volume is created with an I/O credit allocation. Think of this like a bucket. So an I/O is one input/output operation, and an I/O credit is a 16 KB chunk of data. So an I/O is one chunk of 16 kilobytes in one second; if you're transferring a 160 KB file that represents 10 I/O blocks of data—so 10 blocks of 16 KB—and if you do that all in one second that's 10 credits in one second, so 10 IOPS.
When you aren't using the volume much you aren't using many IOPS and you aren't using many credits, but during periods of high disk load you're going to be pushing a volume hard and because of that it's consuming more credits—for example during system boots or backups or heavy database work. Now if you have no credits in this I/O bucket you can't perform any I/O on the disk.
The I/O bucket has a capacity of 5.4 million I/O credits, and it fills at the baseline performance rate of the volume. So what does this mean? Well, every volume has a baseline performance based on its size with a minimum—so streaming into the bucket at all times is a 100 I/O credits per second refill rate. This means as an absolute minimum, regardless of anything else, you can consume 100 I/O credits per second, which is 100 IOPS.
Now the actual baseline rate which you get with GP2 is based on the volume size—you get 3 I/O credits per second per GB of volume size. This means that a 100 GB volume gets 300 I/O credits per second refilling the bucket. Anything below 33.33 recurring GB gets this 100 IOPS minimum, and anything above 33.33 recurring GB gets 3 times the size of the volume as its baseline performance rate.
Now you aren't limited to only consuming at this baseline rate—by default GP2 can burst up to 3000 IOPS, so you can do up to 3000 input/output operations of 16 KB in one second, and that's referred to as your burst rate. It means that if you have heavy workloads which aren't constant you aren't limited by your baseline performance rate of 3 times the GB size of the volume, so you can have a small volume which has periodic heavy workloads and that's OK.
What's even better is that the credit bucket starts off full—so 5.4 million I/O credits—and this means that you could run at 3000 IOPS, so 3000 I/O per second, for a full 30 minutes, and that assumes that your bucket isn't filling up with new credits, which it always is. So in reality you can run at full burst for much longer, and this is great if your volumes are used initially for any really heavy workloads because this initial allocation is a great buffer.
The key takeaway at this point is that if you're consuming more I/O credits than the rate at which your bucket is refilling then you're depleting the bucket—so if you burst up to 3000 IOPS and your baseline performance is lower then over time you're decreasing your credit bucket. If you're consuming less than your baseline performance then your bucket is replenishing, and one of the key factors of this type of storage is the requirement that you manage all of the credit buckets of all of your volumes, so you need to ensure that they're staying replenished and not depleting down to zero.
Now because every volume is credited with 3 I/O credits per second for every GB in size, volumes up to 1 TB in size will use this I/O credit architecture, but volumes larger than 1 TB have a baseline equal to or exceeding the burst rate of 3000—and so they will always achieve their baseline performance as standard; they don't use the credit system. The maximum IOPS for GP2 is currently 16000, so any volume above 5.33 recurring TB in size achieves this maximum rate constantly.
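To make the arithmetic concrete, here is a small sketch reproducing the numbers quoted in this lesson (3 IOPS per GB baseline, 100 IOPS floor, 16,000 IOPS ceiling, a 5.4 million credit bucket, and a 3,000 IOPS burst). It simply restates the lesson's figures; it is not an official AWS formula.

```python
# Rough illustration of the GP2 numbers quoted above.
BUCKET_CAPACITY = 5_400_000   # I/O credits in a full bucket
BURST_IOPS = 3_000
MAX_IOPS = 16_000

def gp2_baseline_iops(size_gb: int) -> int:
    """Baseline refill rate: 3 IOPS per GB, floor of 100, ceiling of 16,000."""
    return min(MAX_IOPS, max(100, 3 * size_gb))

def burst_minutes_from_full() -> float:
    """Minutes a full bucket sustains the 3,000 IOPS burst, ignoring the refill
    that is always happening (so real-world burst time is longer)."""
    return BUCKET_CAPACITY / BURST_IOPS / 60

print(gp2_baseline_iops(100))       # 300 IOPS baseline for a 100 GB volume
print(burst_minutes_from_full())    # ~30 minutes, matching the lesson
```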
GP2 is a really flexible type of storage which is good for general usage—at the time of creating this lesson it's the default but I expect that to change over time to GP3 which I'm going to be talking about next. GP2 is great for boot volumes, for low latency interactive applications or for dev and test environments—anything where you don't have a reason to pick something else. It can be used for boot volumes and as I've mentioned previously it is currently the default; again over time I expect GP3 to replace this as it's actually cheaper in most cases but more on this in a second.
You can also use the elastic volumes feature to change the storage type between GP2 and all of the others, and I'll be showing you how that works in an upcoming lesson if you're doing the SysOps or Developer Associate courses. If you're doing the architecture stream then this architecture theory is enough.
At this point I want to move on and explain exactly how GP3 is different. GP3 is also SSD based but it removes the credit bucket architecture of GP2 for something much simpler. Every GP3 volume regardless of size starts with a standard 3000 IOPS—so 3000 16 kB operations per second—and it can transfer 125 MB per second. That’s standard regardless of volume size, and just like GP2 volumes can range from 1 GB through to 16 TB.
Now the base price for GP3 at the time of creating this lesson is 20% cheaper than GP2, so if you only intend to use up to 3000 IOPS then it's a no brainer—you should pick GP3 rather than GP2. If you need more performance then you can pay for up to 16000 IOPS and up to 1000 MB per second of throughput, and even with those extras generally it works out to be more economical than GP2.
GP3 offers a higher max throughput as well so you can get up to 1000 MB per second versus the 250 MB per second maximum of GP2—so GP3 is just simpler to understand for most people versus GP2 and I think over time it's going to be the default. For now though at the time of creating this lesson GP2 is still the default.
In summary GP3 is like GP2 and IO1—which I'll cover soon—had a baby; you get some of the benefits of both in a new type of general purpose SSD storage. Now the usage scenarios for GP3 are also much the same as GP2—so virtual desktops, medium sized databases, low latency applications, dev and test environments and boot volumes.
You can safely swap GP2 to GP3 at any point but just be aware that for anything above 3000 IOPS the performance doesn't get added automatically like with GP2 which scales on size. With GP3 you would need to add these extra IOPS which come at an extra cost and that's the same with any additional throughput—beyond the 125 MB per second standard it's an additional extra, but still even including those extras for most things this storage type is more economical than GP2.
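As an illustration of paying for extra GP3 performance beyond the included 3000 IOPS and 125 MB/s, here is a minimal boto3 sketch; the region, availability zone, size and numbers are placeholders, and running it requires valid AWS credentials.

```python
# Minimal sketch: provision a GP3 volume with more than the included baseline.
# All values here are placeholders chosen for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=200,                 # GB
    VolumeType="gp3",
    Iops=6000,                # anything above the included 3000 IOPS is an extra cost
    Throughput=500,           # MB/s, above the included 125 MB/s
)
print(volume["VolumeId"])
```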
At this point that's everything that I wanted to cover about the general purpose SSD volume types in this lesson—go ahead, complete the lesson and then when you're ready, I'll look forward to you joining me in the next.
-
-
inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.netview1
-
children from families in the top 20 percent of the income distribution already outscore children from the bottom 20 percent by 106 points in early literacy.
When inequality appears this early, it invalidates the very idea of a level playing field. It makes the American dream feel like a rigged game pretending to be fair.
-
-
docdrop.org docdrop.org
-
Instead of sprawling console games, the team is “starting to realize it would just be better to create small, more impactful experiences.”
Highlights how smaller studios must limit scale—unlike miHoYo, which can build a massive game like Genshin thanks to its resources.
-
-
Local file Local file
-
We hear it, shrill and silver, an echo from a volleyball game of long ago.
A comparison to sport, this vicious and powerful collective. To Aunt Lydia this is nothing but entertainment, but to the handmaids this is a competitive game of volleyball, or a killing.
-
-
templeu.instructure.com templeu.instructure.com
-
During the 1970s the game was changing
This quote discusses the shift in the cable industry's structure during the 70s. It explains that the early mom-and-pop cable businesses were being replaced by larger entities called Multiple System Operators, or MSOs. This meant that the industry was becoming more centralized and that more powerful and wealthy companies were starting to take control. This change in the landscape had significant consequences for the types of things that were then being broadcast to consumers.
-
-
www.rollingstone.com www.rollingstone.com
-
Franchises will probably also get in on the action, delivering targeted ads. Say, for example, you download your favorite basketball team’s app; the app knows when you’ve gone to a game and shares that data with the Walmart app you happen to have on your phone.
This discusses how sports franchises can leverage mobile apps to deliver targeted advertisements by tracking user behavior. For example, data shared between a team’s app and other apps like Walmart allows brands to target fans with personalized ads based on their attendance at games and other activities. This practice uses data integration and real-time tracking to create highly relevant marketing strategies.
-
This is just the tip of the iceberg regarding sports tech — not to mention sports betting, video games and other modes of tech. There are a host of elements that come into play. Technology will continue to influence sports. Those looking to remain relevant and progressive within the industry should embrace the technology being developed. ADVERTISEMENT These trends offer countless ways for enterprising sports companies to invest, allowing brands to develop innovative ways to stay ahead of the rest. A competitive imbalance is on the horizon, and brands that embrace these powerful new tools will remain in the game.
In this conclusion, Schreiber considers the more expansive implications of emerging technologies on the sports industry, in that their influence extends well beyond the development of player performance to areas like gaming, sports betting, and brand strategy. The article contends that clubs, as well as other bodies, must adopt and invest in emerging technology to keep pace, with a coming gap between leaders and laggards. The perspective is particularly insightful for researchers and practitioners concerned with the strategic and financial elements of technology adoption within sports.
-
Technology is set to affect how athletes train, companies market, fans follow their favorite teams, games are broadcast and even how players interact with their followers. The stakeholders who get on board will likely see rapid success, while those who miss the boat might find it hard to catch up.
This is the main purpose of the article: technology, whether we like it or not, is going to revolutionize sports, and not only the game but also the management side. We are seeing this change in football specifically, where next-gen stats and broadcasting are slowly evolving the game. On the management side we are seeing major companies using technology that has recently been introduced to sports; with new technology that helps athletes perform at a higher level, more businesses are emerging claiming to have the best recovery tech, comfort for athletes, and even next-gen stats.
-
-
lawenforcementtoday.com lawenforcementtoday.com
-
Destroyer represents a game-changing level of firepower being applied to combat drug and human smuggling operations by the Mexican Cartels.
Shows how the destroyer is going to add significant resources to the fight against gangs like the cartels infiltrating the United States at the southern border.
-
-
www.reddit.com www.reddit.com
-
If you're curious about some of the technical details and how they are affected by the distribution of typefaces and sizes, I laid out some of them the other day: https://www.reddit.com/r/BaseballScorecards/comments/1jn2475/comment/mks9rbc/
Lou also has some great examples of scorekeeping across display sizes and level of data in his offerings at https://thirty81press.com/.
The broader issue for most scorers is the limitation to 8.5 x 11" paper, which is the most common page size for the ubiquitous portable and ultraportable typewriters from the mid-century. While there are some portables with carriages and platens that might accommodate up to 12" wide paper, they're not super common.
To get machines with wider platens to get 11x14 or 11x17, you're going to need the significantly larger standard machines and unless you're rich enough to have a suite that you can securely store one in or a journalist with your own booth, not many baseball fans are going to cart a 35-45+ pound typewriter with them to all their games. Though this wouldn't prevent the fan viewing at home from scoring this way easily. My example above was done on a standard width carriage on a standard machine, but I did have several options to do it on a 12", 14", and even two 16" standard typewriters. Interestingly, most of my larger carriage machines are elite 12/6 (12CPI with 6 lines/inch) formats, and I don't think Lou has designed yet for that standard which would allow for an additional 15 characters to be distributed amidst the columns (while still keeping a minimum of 1/2" margins for some balanced white space). I'll be tinkering around with some of this myself in the coming week or so on 11x14" paper using a 15" wide platen on an elite machine to see how things might look.
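A quick check of that "additional 15 characters" figure, assuming 8.5-inch-wide paper and half-inch margins on each side (these assumptions are mine, not the poster's):

```python
# Pitch arithmetic: pica is 10 characters per inch, elite is 12 characters per inch.
paper_width = 8.5
margins = 2 * 0.5
printable = paper_width - margins        # 7.5 inches of printable width

pica_chars = int(printable * 10)         # 75 characters per line at pica pitch
elite_chars = int(printable * 12)        # 90 characters per line at elite pitch

print(elite_chars - pica_chars)          # 15 extra characters per line on an elite machine
```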
Perhaps a modified format at 8.5 x 11 that alternates the teams and splits a 12 inning game format across three sheets, so that the typist can type down a single page without swapping sheets every half inning and realigning their page every time? But this would cause a lot of formatting changes versus traditional layouts.
I've also been tinkering with using small space characters like the - and the _ to indicate data (with or without the use of the variable line spacing mechanism) for things like tracking RBIs. The underline is particularly useful for this in Lou's three space layout.
-
-
meresophistry.substack.com meresophistry.substack.com
-
Suggestive sentences can be interpreted in multiple ways, thereby increasing their opportunities to be cited by scholars. If a sentence is suggestive while remaining ambiguous, then future academics can use it for their own work. It’s the quoting game of modern academic scholarship. A game that is full of citation chains where the more citations in the chain, the better.
See what you want to see
-
-
remiller1450.github.io remiller1450.github.io
-
home_score ~ game_type
home score by game type
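A minimal sketch of fitting this model, assuming a data frame with a numeric `home_score` column and a categorical `game_type` column; the course page appears to use R-style model formulas, and the same formula syntax works in Python's statsmodels.

```python
# Minimal sketch, with a made-up data frame standing in for the course data.
import pandas as pd
import statsmodels.formula.api as smf

games = pd.DataFrame({
    "home_score": [3, 5, 2, 7, 4, 1],
    "game_type":  ["day", "night", "day", "night", "day", "night"],
})

# Linear model of home score by game type (game_type is dummy-coded automatically)
fit = smf.ols("home_score ~ game_type", data=games).fit()
print(fit.summary())
```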
-
-
Local file Local file
-
objects which can be processed by distributing the game engine between multiple servers whilst maintaining 60 frames/s
Why does it jump from 1-2-4-6-9?
-
Typical video game engines must limit the number of objects within their game worlds as beyond a certain number, the hardware the game engine is running on prevents it from processing all updates within the 16.66ms time window required to maintain 60FPS.
clunky sentence
-
st, they are especially importan
Again, don't need to state. Just state
In this chapter, we study the empirical performance of the Distribute Worlds system to demonstrate its viability as a novel game engine architecture.
-
game engine
boundary? margin?
-
is permanently disconnected, and the player object can be removed from the game world
it is safe to remove the player object from this node?
not from the entire game world. I don't think you defined game world in the way you mean here anywhere.
-
game
Perhaps mention that this logic could be abstracted away if it were a sort of game engine, where the engine manages duplicated message sending and the user (game dev) just manages input message sending and position handling.
-
player
Would it not suffice to send input to any nodes which the player is very close to? Rather than EVERY connected node? This distance could be dependent on the maximum velocity in the game
-
4.2.1 Node boundaries and margins
Worth having a para on the fact that in 2D systems, you can use congruent squares as in Figure 4.1 but that your architecture generalises to any configuration e.g. hexagons, cubes in 3D or any shapes.
Also worth mentioning that the entire game world must be covered by nodes?
Such that your architecture allows for any collection of nodes whose union of boundaries covers the entire game world. And that the boundaries must be pairwise disjoint.
For example, a single node architecture fits into this conception.
-
To meet the goal of elastic scaling and handling large amounts of traffic from many players, it must be possible to add more compute power to the system to cope with demand. To be able to do this, the game world first needs to be broken down into scalable units called nodes
This isn't strictly true. You could vertically scale. I think you could substantiate the second sentence more precisely to reflect this.
-
The functionality of scaling the world is then entirely contained within the Distributed Worlds system.
This sentence is a bit meaningless to me. Isn't the scaling of of any game world contained within its system? I think I see what you're trying to say but could reword this. I think my confusion mostly comes from the term Distributed Worlds system. It isn't well defined to me.
-
Although this dissertation aims to prove that the performance of large-scale multiplayer games can be greatly improved, it will not develop a commercial video game
Rephrase:
This work focuses on proving that ... but does not implement a video game.
-
Client downtime to change servers
Cap?
I think this section would be made more concrete with an example of a video game where it is common practice for players to do this
-
cts in a video game world is limited by the performance of the computer processing that worl
So? Sentence feels a little out of place. Make it make sense in this para
-
This means that this game world can handle no more than ≈ 1200 objects
Slightly too repetitive. Maybe 'We interpret this as meaning that the game world can handle...'
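For what it's worth, the ≈ 1200 figure pairs naturally with the 16.66 ms frame budget quoted earlier; assuming the whole frame is spent on object updates, the implied per-object cost is roughly:

```latex
\frac{16.66\ \text{ms}}{1200\ \text{objects}} \approx 13.9\ \mu\text{s per object update}
```

Stating the per-object budget alongside the cap might make the limitation easier to reason about than the raw object count alone.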
-
Figure 3.2: Number of Boids in the game world vs.
What does the figure tell me? A description is fine but I need you to draw a conclusion from it.
-
his
No need for a 'this' here. I would just say: Therefore, the requirement for all testing conducted is that the game must run at at least 60 FPS to be considere ...
-
To demonstrate the limitations of traditional multiplayer server architecture, a simple implementation of a multiplayer game engine is created to benchmark these existing limitations.
This sentence is hard to read
-
This means that the maximum number of people in a single game world is still limited, but there are many separate, disjointed worlds
This whole sentence doesn't quite track to me. The 'but' particularly throws me off
-
Game engines typically address this challenge through various optimisation strategies, such as component update scheduling, spatial partitioning to limit updates to relevant objects, or data-oriented design approaches that organise components for cache-friendly memory access
Got a citation for this?
-
simulation that requires a game loop to run.
Does a boid program need the render part?
-
ng th
comma? Or reword to be less clunky. Maybe just: While the game is running, the game loop ...
-
This dissertation
This shouldn't be how you start the diss. Something more like
The fundamental limitations of traditional multiplayer game architectures ...
-
-
vsblog.netlify.app vsblog.netlify.app
-
challenges in game theory
more a challenge in social philosophy; game theory is just the tool here
-
game
strategic game
-
-
www.youtube.com www.youtube.com
-
Here is a summary of the video, with approximate timestamps based on how the content unfolds:
-
Introduction (start of the video): The introduction is given by Elena, founder of Toadhouse Games. She explains that this tutorial is designed for beginners with no coding knowledge and that the first videos will be free on YouTube. She presents Ren'Py as a visual novel engine used by thousands of creators.
-
What is Ren'Py? (approx. 0:00 - 1:00): Ren'Py is an engine for creating visual novels and interactive fiction. Although it runs on Python code, you do not need to know how to code to use it. The software provides everything you need, including text editors.
-
Downloading Ren'Py (approx. 1:00 - 2:00): Go to ri.org and click the download button. Versions are available for Windows, Mac, Linux, Android and iOS. Once the file is downloaded, run it and extract the files into a folder of your choice.
-
Opening and touring the Ren'Py launcher (approx. 2:00 - 4:00): In the extracted folder, double-click the Ren'Py application (the icon with an anime character) to open the launcher. The launcher shows the open projects (the default tutorial and question projects) and the files associated with each project. On the right, the "script" option gives access to the code files, which can be edited in a text editor such as Atom. Ren'Py can download and install Atom for you.
-
Exploring the project files (approx. 4:00 - 5:00): The "game" folder contains all of the game's files (audio, music, images, etc.). A shortcut to the "images" folder is also available. The "script" file contains the game's code, including dialogue, transitions, music and scenes. The options and screens let you customise the look of the game.
-
Building and distributing the game (approx. 5:00 - 5:30): The "build distributions" option creates a playable version of your game to share with others on platforms such as PC, Linux, Mac, itch.io or Steam.
-
Hands-on exercise with "The Question" project (approx. 5:30 - 8:00): Select the "the question" project and launch it to play the game, then open its script. The exercise is to play the game while following the corresponding code in the text editor, which shows how the code drives the flow of the game (music, scenes, dialogue, choices). You can make small edits to the script and reload the game to see the changes.
-
Introducing Scrivener (approx. 8:00 - 9:00): Scrivener is optional software that can be used to write the dialogue and organise the content of your visual novel. A Ren'Py template for Scrivener created by Toadhouse Games is available. Scrivener offers basic writing guidance and templates for character profiles and Ren'Py code.
-
Conclusion (approx. 9:00 - end of the video): Elena encourages viewers to start experimenting with Ren'Py by modifying "the question" project. More advanced tutorials on flags and choices will follow. Help is available on Twitter, by email (teamtoadhouse@gmail.com), on the Ren'Py subreddits and forums, and on the Toadhouse Games Discord.
-
-
-
www.youtube.com www.youtube.com
-
Here is a summary with approximate timestamps based on how the video unfolds:
- Introduction (start): Elena Linaire, founder and creative director of Team Toad House and Toad House Games, introduces the studio and announces a visual novel game jam on itch.io.
She mentions workshops run by Toad House professionals to help with creating visual novels. The goal is to make game creation accessible to beginners.
- Downloading Ren'Py (approx. 2-3 minutes): Elena explains how to download Ren'Py from the renpy.org site.
She notes that Ren'Py is a free, open-source game engine built specifically for visual novels. She mentions other engines such as Unity, Unreal, Game Maker, Godot and Twine, noting that they suit different kinds of games.
She stresses that knowing Python is not required to use Ren'Py, even though the "py" in Ren'Py refers to Python.
- Documentation and resources (approx. 4-5 minutes):
Elena mentions that the Ren'Py site hosts documentation, which is sometimes considered not very user-friendly.
She also recommends the Ren'Py Discord server and the Lumisoft forum as places to get help.
The documentation covers both the basics and more specialised uses of Ren'Py, including date, money and inventory systems.
- Launching and creating a new project (approx. 6-10 minutes):
Elena shows the Ren'Py launcher interface, which lists existing projects such as those from Toad House Games.
She explains how to create a new project, choose the language (with options such as pig latin) and pick a text editor (recommending Adam, which can be downloaded directly from Ren'Py).
You can choose the project's resolution, with 1280x720 as the default, and a light or dark colour scheme for the interface (GUI).
- File structure of a project (approx. 11-13 minutes):
Elena walks through the folders created for a new Ren'Py project, notably the game folder (containing images, audio, gui, saves), the audio cache folder and others. She explains that the script.rpy file is where the game's code is written. She shows how to replace the application icon and edit the graphical interface elements in the gui folder.
- The default Ren'Py game and basic code (approx. 14-16 minutes):
Elena launches the default Ren'Py project to show the built-in features such as saving, loading, preferences (volume, fullscreen, skip, etc.) and the "About" screen. She runs the short default game to illustrate the basic structure: background (bg), sprites and dialogue. She then opens the script.rpy file in Adam to show the corresponding code, explaining character declarations (define) and the game's entry point (label start).
- Defining characters and writing dialogue (approx. 17-19 minutes): Elena explains how to define characters with a name and a text colour.
She shows how to write dialogue using the defined character's name, and discusses how to handle several characters with similar names.
- The Scrivener writing tool (approx. 19-22 minutes): Elena presents Scrivener as a useful tool for planning and writing a visual novel's story, letting you organise the plot and dialogue and even draft basic code elements before copy-pasting them into Ren'Py.
- Narration and text placement (approx. 22-24 minutes): Elena explains how to handle narration text (with no character name), often used for inner thoughts.
She mentions Ren'Py's two main text modes: at the bottom of the screen and fullscreen (NVL). She advises against placing narrative text anywhere else because of the code complexity involved.
- Choices, jumps, calls and flags (approx. 25-41 minutes):
Elena demonstrates how to create choices (menus) in Ren'Py, using the menu keyword, the options, and the actions to take (text, a jump to another label, or a call to another label). She explains the difference between jump (a one-way jump) and call (which comes back after a return). She introduces the concept of flags, variables used to track the player's decisions and influence how the story unfolds (default flag_name = False, $ flag_name = True, if flag_name:). She shows how flags can be combined with if statements to display conditional content; the sketch below restates the same pattern outside Ren'Py.
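A plain-Python sketch of the flag pattern summarised above, just to show how the default value, the choice that sets the flag, and the later branch fit together; this is not Ren'Py syntax, and the flag name and dialogue lines are invented:

```python
# Plain-Python sketch of the Ren'Py flag pattern described above
# (default flag = False, set it after a choice, branch on it later).
# Names and lines are made up for illustration.

took_the_left_door = False  # Ren'Py: default took_the_left_door = False

def menu(prompt, options):
    # Stand-in for Ren'Py's menu: print options and read the player's pick.
    print(prompt)
    for i, option in enumerate(options, start=1):
        print(f"  {i}. {option}")
    choice = int(input("> "))
    return options[choice - 1]

picked = menu("Which door do you take?", ["The left door", "The right door"])
if picked == "The left door":
    took_the_left_door = True   # Ren'Py: $ took_the_left_door = True

# Later in the story, branch on the flag (Ren'Py: if took_the_left_door:)
if took_the_left_door:
    print("You remember the strange hum behind the left door.")
else:
    print("You never found out what was behind the left door.")
```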
- Analysis of the tutorial game and "The Question" (approx. 41-47 minutes):
Elena reviews the tutorial game bundled with Ren'Py, highlighting its features (saving, loading, preferences, rollback, history) and its educational content on Ren'Py basics.
She then explores the example game "The Question", drawing attention to analysing the game from a developer's point of view (sprite appearance, positioning, expressions, choices).
She shows how the code of "The Question" uses character definitions with custom text colours (hex codes), flags, and the label structure to create choices and narrative branches.
- Code walkthrough of "Good Looking Home Cooking" (approx. 47 minutes - 1 hour 17 minutes):
Elena presents the code of her game "Good Looking Home Cooking" as a more complex example, showing definitions for sounds, cursors and characters (with properties such as colour and alt text for text-to-speech).
She explains the use of variables to track important choices (flags) and how those flags are used in menus and conditional statements to create different branches and endings.
She illustrates transforms and positioning for sprites, transitions (dissolve, fade to black), and building an animated credits sequence from a scrolling PNG image. She discusses the different endings a visual novel can have (good, bad, lukewarm).
- Image sizes and backgrounds (approx. 1 hour 17 minutes - 1 hour 21 minutes):
Elena covers image sizing, explaining that it is worked out by trial and error to match the project's resolution and the desired look. She shows examples of filtered backgrounds in the Ren'Py game "Karashojo".
- Time management advice and conclusion (approx. 1 hour 21 minutes - end):
Elena finishes with advice on managing time and avoiding crunch during the game jam, in particular by setting regular mini-deadlines and being realistic about the project's scope.
She encourages participants to use tools such as Scrivener and to join the Discord community to find help and collaborators.
She reminds viewers to include Ren'Py's legal notice in the game.
-
-
www.youtube.com www.youtube.com
-
Briefing Document: Designing Visual Novel Video Games
Source: Excerpts from the livestream "Basic Visual Novel Art | Characters and Environments | Lecture Stream | with Heather Gartner"
Date: Not specified in the text
Presenter: Heather Gartner (Thunderbird Paints), Lead Artist and Art Director at Toad House Games.
Target audience: Beginner artists and visual novel developers, potentially participants in a game jam (Toad House Jam).
Purpose of the stream: To provide foundations and practical advice for designing characters, environments and potentially user interfaces (UI) for visual novels.
Main themes and key ideas:
I. Introduction and preamble:
- The host, Heather Gartner (Thunderbird Paints), is filling in for Alanna, who usually leads Toad House.
- She is the lead artist and art director of Toad House Games, having worked on "Call Me Sarah", "Role for Confidence" and "Good Looking Home Cooking".
- The session is planned to run about 2 to 2.5 hours and focuses on the basics of creating visual elements for visual novels.
- The main tool used is Adobe Photoshop, although Heather also mentions her interest in Affinity Photo and her use of Krita for animation.
- A made-up visual novel theme is created live to illustrate the concepts. The chat initially suggested a "cowboy western" or a "ren fair", but a "dungeons and dragons" (fantasy RPG) theme is ultimately chosen.
II. Character design:
- Starting point, the story idea: "the first thing that you want to have is an idea of what your plot is going to be". Knowing the plot is essential for informing character design.
- Overarching themes: Defining the overall setting (futuristic, modern, historical, etc.) is an important preliminary step.
- Main character: They can be fixed or adaptable (via a character generator). The speaker prefers a specific main character because they are at the heart of the game and their initial design shapes the design of the other characters and of the special scenes (CGs).
- Main character details: For the example, a non-binary bard with a Texan accent and half-orc origins is chosen. The importance of considering the character's race and type is emphasised.
- Special scenes (CGs): These often unlockable scenes require thinking about how the main character interacts with the others. For characters created through a builder, aesthetic consistency (size, proportions, etc.) has to be maintained.
- Drawing process: Start with an idea of the basic body structure (the "frame").
- Heather Gartner usually does "turnarounds" (front and back views) for her characters.
- Using layers is essential for flexibility and for fixing mistakes.
- Guide lines (for height, shoulders, waist, etc.) are recommended to keep proportions consistent.
- Heather's drawing process often goes straight to colour, without necessarily producing precise line art first.
- Details and recognisability: Add distinctive details (a cowboy hat in the example) so the character can be recognised by their silhouette.
- Considering how "cosplayable" the character is can influence the design.
- Shapes (angular vs. round) can communicate aspects of the personality.
- Colouring: Heather generally uses a single layer for the main illustration, with a sketch underneath. Deep colours (olive green for the orc's skin) are explored.
- Customisation: Communicating with the team (writers, etc.) is essential to understand the character's personality, background and physical preferences. The artist sometimes has to translate "feelings" into a visual aesthetic.
- Facial expressions: Plan several expressions (anger, sadness, joy) for the sprites. A limited set (6-7) is enough for most situations, unless a specific story arc requires changes in appearance.
- Sprites and assets: Facial expressions are usually stored as assets separate from the body. Sometimes the whole body is redrawn to match the intended expression and body language. Sprites from "Call Me Sarah" and "Good Looking Home Cooking" are shown as examples.
- Quick character creation: For projects on tight deadlines (a month), simplify the designs (fewer details, few complex accessories such as hats), reuse assets (body shapes) and limit the number of facial expressions (3 generic expressions can be enough).
III. Environment design:
- Perspective: Although less crucial for organic environments, perspective matters for indoor scenes and cityscapes. Tools such as perspective brushes can help. The horizon line is usually placed at mid-height for a visual novel.
- Reusing environments: Designing environments that can be reused with variations (lighting, seasons, day/night) is an effective strategy.
- Quick sketching: Start with simple shapes (boxes for buildings). Style and atmosphere come in with the details.
- Drawing tips: Use the Shift key to draw perfectly straight lines.
- Character placement: Draw backgrounds with an idea of where the characters will stand, so that they sit well in the scene. Adjust the size of background elements according to the character's perceived distance.
- Using references: Photos of real places can serve as references for composition, scale and shapes.
- Colouring environments: Block in the base colours, then add details. Buildings and natural elements are often kept on separate layers to make changes easier.
- Lighting and mood: Lighting is a powerful tool for creating different moods (day, night, dusk). Colour layers in blend modes (overlay, multiply) can simulate different times of day. Colour manipulation (adding yellow to simulate daylight on a blue sky) is explained.
- Shadows: Adding shadows to buildings and terrain is crucial for volume and realism.
- Simple vs. detailed backgrounds: Less detailed or blurred backgrounds are acceptable, especially when the narration is the focus.
- Environment-tober: Heather mentions an annual event (in October) where she posts environment prompts, to encourage practice.
IV. User interface (UI):
The time available did not allow UI design to be covered in detail. Heather invites viewers to contact her on Discord with any UI questions.
V. Conclusion:
- Heather thanks viewers for taking part and for the gifted subs.
- She encourages creativity and self-kindness, and says she is available to answer questions on Discord.
- Relevant quotes:
- "the first thing that you want to have is an idea of what your plot is going to be" (on character design).
- "visual novels can just be a platform for telling a story hard stop. it is basically a playable story" (underlining the narrative nature of visual novels beyond romance).
In summary:
This livestream offers a practical introduction to visual design for visual novels.
Heather Gartner shares her experience and working methods for creating characters and environments, emphasising the story as the starting point, the details that make characters recognisable, and the use of references and lighting techniques for environments.
She also encourages simplifying and reusing assets for time-constrained projects. Although UI was not covered, the session gives beginner artists and visual novel developers a solid foundation.
-
-
moodle-courses2425.wolfware.ncsu.edu moodle-courses2425.wolfware.ncsu.edu
-
Warmer climate, current competitors
When these alpine plants experience warmer temperatures but keep their usual competitors, the effect of competition increases (a lower log response ratio for the competition response). This is exacerbated when novel competitors are introduced. BUT overall these effects are weak and insignificant, because it's a short-term study and competition is a long-term game.
-
-
www.baseball-almanac.com www.baseball-almanac.com
-
Comprehensive Abbreviation List, in alphabetical order (abbreviation: definition):
* or !: Great Defensive Play; 1B: Single; 2B: Double; 3B: Triple; A: Assist; BB: Base on Balls; BK: Balk; BS: Blown Save; BT: Bunt; CG: Complete Game; CS: Caught Stealing
DP: Double Play; DH: Designated Hitter; E: Error; Et: Error on Throw; F: Foul; FC: Fielder's Choice; FO: Force-Out; FP: Fielding Percentage; G: Game; GA: Games Ahead; GB: Games Behind
GIDP: Grounded into Double Play; GS: Games Started; HB: Hit by Ball; HBP: Hit By Pitch; HR: Home Run; I: Interference; IBB: Intentional Base on Balls; IF: Infield Fly; IP: Innings Pitched; IW: Intentional Walk
K: Strikeout; Kc: Strikeout - Called; Ks: Strikeout - Swinging; L: Left or Losses; LD: Line Drive; LOB: Left on Base; LP: Losing Pitcher; NP: Number of Pitches Thrown; Obs: Obstruction; OF: Outfield; OS: Out Stealing
PB: Passed Ball; PH: Pinch Hit; PO: Putout; PR: Pinch Runner; R: Right; RBI: Runs Batted In; RS: Runner(s) Stranded; S or SH: Sacrifice Hit; SAC: Sacrifice; SB: Stolen Base; SF: Sacrifice Fly
SHO: Shutout; SO: Strikeouts; SV: Save; T: Triple; TB: Total Bases; TP: Triple Play; U: Unassisted Putout; W: Walk; WP: Wild Pitch
-
-
www.numbersgame.co www.numbersgame.co
-
Numbers Game Scorebooks
https://www.numbersgame.co/
-
-
calmatters.org calmatters.org
-
the game of cat-and-mouse gets turned on its head.
I am not sure how I feel about using this metaphor in this article. Aspects of this situation are more than just a "game of cat and mouse."
-
-
education.nationalgeographic.org education.nationalgeographic.org
-
Only recently has a single zoo, Gondwana Game Reserve in South Africa, offered all Big Five animals in one place. Gondwana sits on 10,000 hectares (24,710 acres) near the center of South Africa’s southern coast. Like many large game reserves, Gondwana has diverse ecosystems that occur naturally and has no need for landscape immersion. In Gondwana, grasslands coexist with shrubland called fynbos. Visitors to Gondwana, like many game reserves, can stay in hotels right in the park.
Gondwana Game Reserve offers animals a real and large space without causing them harm, and protects them from poachers.
-
-
www.youtube.com www.youtube.com
-
Here is a summary of the discussion, with markers based on the order in which topics appear in the transcript:
- Introduction and presentation of Albert Moukheiber (PhD in neuroscience and clinical psychologist). He explains that he tries to combine understanding how the brain works with its practical applications.
- Origins of the book "Neuromania". The initial idea was to deconstruct common, simplistic statements about the brain and how we function. Examples include the idea that people don't like change, or that sadness is due to a chemical imbalance. The author cites Stanislas Lem on the danger of oversimplification.
- Why these simplifications have taken up so much space. Several reasons are given: neuroscience is a young field, scientific uncertainty leaves room for interpretation, and scientific discoveries get instrumentalised (like radioactivity or quantum physics used for marketing).
- Challenging the paradigm that "people don't like change". This notion is more about systems than individuals. Resistance to change is often used to justify forcing things through. Change depends on the context and on what it entails.
- The possibility of change and the question of patience. Changing one's temperament is different from resistance to organisational change. Personality traits (stable) are distinct from states (temporary). Change takes time, which can be a problem for someone impatient.
- Differences between personality traits and states. Examples show how someone can be irritable (a trait) without always being angry (a state), and how context shapes how these traits are expressed.
- The stability of traits for predictability and cooperation. Changing too quickly and constantly would make collaboration difficult. The idea of a single, constant "self" is challenged by the notion of "self switching" across contexts.
- How to work on impatience. Identify triggering situations and enabling conditions, and act at the appropriate level. Accept that progress happens in steps and takes time. "Embodied cognition" highlights that our reactions depend on our body and our environment.
- Strategies for managing impatience. "Thinking our way out of it" works in some situations (the example of a delayed metro) but not in others. Distraction, bodily relaxation and remembering past successes ("count the hits not the misses") are possible approaches. Sometimes we also have to accept feeling frustrated.
- The emotion/reason dichotomy. The author has always had a problem with ranking these processes. Often, thoughts justify emotions. Head-on opposition between them is rare.
- Distinguishing emotions (the biological substrate), the phenomenological experience (affects) and the communication of emotion. French often uses the same word for all three, unlike English ("emotions" and "feelings"). In the brain there are neurons, not emotions or thoughts as such.
- The opposition is rather between different emotions and thoughts. The example of jealousy, or of irritation followed by calming down, is given. Thinking in terms of dyads or triads (including bodily sensation) is more relevant. People with anosognosia (who do not recognise their emotions) do not become better rational decision-makers, because emotions are a form of feedback.
- The role of emotions as feedback for adjusting our behaviour. An emotional deficit makes behaviour ineffective. The biological bases of emotions are in the brain and the body (embodied cognition). The emotion/reason opposition goes back to antiquity (Plato).
- Emotions can be replaced. We have automatic emotions and thoughts, but also metacognitive and meta-emotional processes that let us modify them gradually. Rumination (a cognition) feeds the emotion, showing their interdependence.
- The entry point for breaking an emotional/cognitive cycle depends on the individual. Emotional injunctions, reasoning, bodily action, attentional distraction or acceptance can all be used. Negative emotions are important and adaptive in some contexts (grief after a layoff). Wanting to feel good all the time is a form of madness.
- The ideology of maximisation and efficiency. This tendency, visible in minimalism and technology, shows up in the urge to maximise positive experiences and eliminate negative emotions. The author does not try to constantly maximise himself in every area.
- Learning faster is sometimes an illusion. There are not always shortcuts, especially for learning languages. Sellers exploit people's desperation to sell ineffective methods. Immersion is often necessary.
- The quest for improvement and comparing ourselves to others. Admiring fast learners should not make us forget that abilities vary. We have to accept the "rules of the game" of our own mind, just as we accept the physical limits of our body.
- Comparison with physical abilities. We would not have the same discussion about whether we can jump from the third floor or lift a car. Improving the brain happens through using it. The quest for constant improvement can be a source of dissatisfaction.
- Motivation and constant change. What motivates the author is not an intense quest but rather interest. Change is inevitable; the question is how, and how fast.
- Life philosophy and raising his daughter. Not taking yourself seriously is a form of protection. Not telling her she is special, but normal.
- Questioning the school system. School is not designed for children's well-being but to allow parents to work. The wake-up time is absurd. Despite this, his daughter will go to school like everyone else.
- Instilling frustration tolerance. Today's society tries to avoid frustration, which could be a problem. The coercive side of school can, paradoxically, help develop this tolerance.
- The adjustment period at school. The initial goal is survival and adaptation; learning comes later. The author avoids parenting practices he considers harmful. He has not read parenting books, to avoid "messing with his own head".
- The phenomenon of positive parenting. A friend's child, raised without ever being told no, seems to be developing well, but the future remains uncertain. We must avoid the single-cause trap: many factors influence a child's development (friends, teachers, etc.). Positive parenting also reflects adults' own desire to avoid frustration.
- The notion of the "right explanatory level". To understand a phenomenon, it must be observed at the relevant level (examples of the traffic jam, the car, Alzheimer's, depression). To understand a child's development, you have to consider the family, the wider circle and society. We have some clues about what not to do (lack of affection) and a few about what to do, but no precise roadmap. Today's society produces frustration, hence the importance of tolerance.
- The translational approach (moving back and forth between theory and practice). Clinical experience (therapy) showed that the chemical imbalance is not the right explanatory level for depression, since concrete life factors are often involved. Science has long sought to objectify, but human subjectivity is essential.
- The challenge of developing a science of subjectivity. Pain is an example of a subjective phenomenon that is hard to objectify. Inequalities in pain treatment between men and women are mentioned. Brain research mostly happens in MRI scanners, under non-natural conditions. There is an overestimation of what we know about the brain ("Neuromania").
- A call to do science again with wonder and imagination. Current science is perceived as too sanitised (white coats and statistics). Imagination and emotion are needed to move science forward. Collaboration between different kinds of expertise (philosophy, mathematics, biology) is crucial.
- An apparent paradox between criticising the over-responsibilisation of the individual and calling for a science of subjectivity. Back to the notion of explanatory level: depending on the phenomenon studied, the relevant level varies. Taking subjectivity seriously matters for understanding the individual, but some phenomena (like a traffic jam or a societal trend) require a broader level of analysis.
- The example of water's emergent property (it takes six molecules for water to "wet"). Some phenomena only appear at a certain level of organisation. To understand voting, looking at individual brain activity is descriptive, not explanatory.
- The importance of distinguishing the right explanatory level, the correspondence between levels, and the difference between description and explanation (correlation vs. causation). The example of the correlation between the number of McDonald's restaurants and COVID cases is given.
- Underestimated meta (societal) factors. Societal organisation, social rhythm, working hours, financial and material pressure. The loss of the social and community support of earlier times is mentioned. The explosion of burnout and anxiety disorders in recent years is notable.
- The bias of "WEIRD" populations (Western, Educated, Industrialized, Rich and Democratic). Most studies in psychology and neuroscience are carried out on these populations and then generalised to all of humanity, which is problematic because these populations are not representative.
- An example from an economic game (the ultimatum game) showing cultural differences in behaviour. American students react differently from people in other cultures when faced with unfair offers.
- Questioning supposedly universal psychological beliefs based on narrow samples. Culture has a profound influence. The example of South Korea's low birth rate, potentially linked to patriarchy, is given.
- The complexity of problems and the rejection of simplistic solutions. The author feels overwhelmed by this complexity. The example of depression, sometimes linked to the social and cultural environment, is mentioned.
- The need for a "therapy of society" rather than an over-focus on the individual and personal development. Individual will has its limits in the face of environmental and social determinants.
- The author's experience with patients he feels powerless to help. He talks about it openly with the patient and sometimes refers them to other, better-suited therapists. Clinical psychology is still a young discipline with limits. Recognising these limits matters so that patients are not made to feel guilty.
- Recognising the current limits of clinical psychology. The myth of a complete understanding of creativity, emotions, etc. is dangerous. It is crucial to explain to patients that their suffering is not necessarily their fault. Clinical psychology is not as mature as other areas of healthcare.
- The lie of the five senses. We have nine: sight, hearing, smell, taste, touch, thermoception (heat/cold), proprioception (body position), nociception (pain) and interoception (internal organs). Why the other four are left out is a mystery.
- The pseudo "sixth sense" (often associated with intuition). The author thinks those who popularised the idea were unaware of the other four senses. The sixth sense is interpreted differently by everyone.
- The fascination of a reality that is already complex. The author does not understand why people add artifices (like "auras") when the real workings are already extraordinary. Reality is harder to understand than to invent. Ease is not necessarily what people like, but some exploit this apparent bias.
- The reasons simplistic, mistaken solutions succeed. The promise of quick, easy results, the distress of people seeking help, and argumentative asymmetry (it is easier to convince someone not to do something by exaggerating a risk than to convince them to do something by proving it is 100% safe). The law of large numbers explains the success of cold callers.
- The nine senses and misophonia (excessive sensitivity to certain sounds). The question of how to act on these senses is raised. The author shares his experience of misophonia and the difficulty of managing it.
- The right explanatory level for misophonia is the interaction, not just the individual. Before trying to change, assess the impact of the problem. Mechanical solutions (attenuating the sound) are possible but constraining. Working on emotional regulation can help. Sometimes acceptance is the best path.
- "Bottom-up" processing (from the senses to the brain) and the more recent understanding of "top-down" processing (from the brain to the senses). Our brain actively "hallucinates" reality (predictive processing).
- The brain actively shapes our sensory perception. Optical illusions illustrate this. It would be interesting to explore whether top-down processes could help modulate sensitivity to certain sounds, as in misophonia. Reference to Andy Clark and his book "The Experience Machine" on the predictive brain.
- The author's personal experience of change. He has consciously worked on his anxiety, his way of speaking and his gestures. He tries to have more opinions and not always be indifferent to collective choices. Change is constant.
- Recommendation of a recent, memorable resource: the series "Severance" on Apple TV+, a science-fiction series exploring the separation of work and personal life at the level of the brain.
- The biggest obstacle the author has overcome: his anxiety and stress during adolescence. Self-acceptance and taking himself less seriously helped.
- People the author would like to hear on the podcast: Zoé Dubu (historian of psychedelics) or Lucy Berkovitz (psychiatrist working on psilocybin).
- The definition of "not taking charge of your life": enduring it. An important clarification: this is not advocating submission, but pointing out that both wanting to control everything and merely enduring are undesirable. Maximising agency is essential, but it requires a collective effort and does not depend solely on individual will.
- Conclusion and thanks. An invitation to support independent bookshops.
-
-
openurl-ebsco-com.ezproxy.midlandstech.edu openurl-ebsco-com.ezproxy.midlandstech.edu
-
marketing to children is no longer confined to toys or sugared cereals.
Just as music no longer carries parental advisory warning labels, advertising is fair game for children.
-
- Mar 2025
-
boffosocko.com boffosocko.com
-
Rocket Queen is a multiplayer game that offers unique mechanics and a captivating storyline. Your challenge? Place a bet and exit the game just in time before the rocket with the charming heroine launches. If you miss, your bet is lost! Experience non-stop excitement and entertainment at our site. Are you ready for the challenge?
-
Experience the thrill of Bombucks, where classic Minesweeper gameplay meets contemporary entertainment. This engaging game simplifies the rules for beginners while focusing on strategy and luck. Take your chances in each round of Bombucks, and see if your decisions lead you to victory. Test your skills today!
-
-
www.edutopia.org www.edutopia.org
-
(I recommend that the teacher be the first leader; then students can lead after they understand the game.)
I like this as a strategy to start, otherwise it could do the opposite and cause anxiety if students don't understand the concept first
-
-
theinsight17.wordpress.com theinsight17.wordpress.com
-
.
You could have added more details in the game.
-
-
pressbooks.online.ucf.edu pressbooks.online.ucf.edu
-
By means whereof the honest widows may without danger play at the close buttock game with might and main, and as hard as they can, for the space of the first two months after the decease of their husbands. I pray you, my good lusty springal lads, if you find any of these females, that are worth the pains of untying the codpiece-point, get up, ride upon them, and bring them to me; for, if they happen within the third month to conceive, the child should be heir to the deceased, if, before he died, he had no other children, and the mother shall pass for an honest woman.
Rabelais' use of humor in depicting widows engaging in sexual activity to potentially conceive a legitimate heir plays with social conventions and challenges moral boundaries. This reflects the Bhagavad Gita's teaching: "You have the right to work, but never to the fruit of work." Here, the act of sexual engagement is framed as a mere process, detached from the moral implications typically associated with it. Just as the Gita advocates detachment from the outcomes of one's actions, Rabelais uses humor to detach these actions from societal judgment, treating them as absurdly natural and disconnected from the usual expectations of legitimacy and virtue.
"You have the right to work, but never to the fruit of work. Let not the fruit of action be your motive, nor let your attachment be to inaction." — Bhagavad Gita 2.47
-
-
ia801905.us.archive.org ia801905.us.archive.org
-
It is an invariable principle of all play, finite and infinite, that whoever plays, plays freely. Whoever must play, cannot play
I choose to enter into a conflict. I can also choose to walk away. I choose where my attention goes. Perhaps I am not playing in a particular finite game, but is my attention consumed with watching it, being a spectator? What does that accomplish?
-
-
www.espn.com www.espn.com
-
then it became the Cooper Flagg show
If Flagg has a good game, Duke will win the game.
-
-
scalar.case.edu scalar.case.edu
-
The user's outside knowledge is the "spectator" aspect of being a spect-actor, but the user still goes through the actor's experience of discovering his brother's death.
We agree with the idea of the player acting as the "spect-actor" in the game "c ya laterrrrr" and believe, like Boal said, that it fosters the player to think more deeply about the experience presented.
We also acknowledge that the user must use critical thinking skills to decipher the death of the brother.
-
-
scalar.case.edu scalar.case.edu
-
The chaotic nature of the game also serves to represent the horrors and chaos that one experiences during trafficking.
We also agree that the format of the game can illustrate the severity of trafficking and allow users to empathize with the victims. However, as the concept of being a "spectactor" suggests, the player is still unable to fully understand how a victim of human trafficking may feel.
-
The main goal of works like Motion is not to solve the issues surrounding human trafficking, but to inform, share people's experiences, and encourage conversation around the issue.
We agree that the main goal of Motion is to inform the audience on issues surrounding human trafficking. The creator of the game stresses the point that human trafficking is not a simple issue and emphasizes this by putting various scenarios and stories all over the page. The game also allows for more open conversations surrounding the complexities of human trafficking and works to reduce the stigma around these difficult conversations.
-
-
scalar.case.edu scalar.case.edu
-
However, to regulate this immersion, the narrator breaks the fourth wall in the second image. This aligns with Murrays ideas of how good examples of immersion do not truly push the reader into a state of fantasy.
This is a really interesting take on the ending of "c ya laterrr." When the majority of our group members finished the game, we never really viewed this ending slide as a way to break the immersion. That is not to say the point is wrong; it is just a new perspective, and it may have been the author's intention to break the immersion, or it could have been coincidence. Either way, it is an interesting example of not pushing the reader into fantasy.
-
-
human.libretexts.org human.libretexts.org
-
Note. Although this is one of the shortest and apparently most trivial of the Odes in the Book of Poetry, it is credited by the Chinese editors with as much meaning as the largest. It is regarded, like so many more, as illustrating the extent of the reformation brought about by King Wăn. Not only was the kingdom better ruled, society better regulated, and individuals more self-disciplined and improved in manners, but the reformation affected all things: vegetation flourished, game became most abundant, hunting was attended to at the right seasons, and the benign influence of the King was everywhere felt by the people. The poet thinks it is sufficient to dwell upon these last characteristics. Probably the lines were written after some royal hunt.
A better explanation: the poem makes the society seem on the brink of collapse. It reads like the poetry of 1500s Western and Southern Europe, a struggling and emotionally charged group of kingdoms, much like (the story of) Julius Caesar or Romeo and Juliet.
-
-
pressbooks.library.torontomu.ca pressbooks.library.torontomu.ca
-
or at school
This is something that even NSU is aware of; after freshman year they heavily implemented SSO sign-on. This may seem time-consuming to students, having to log in and receive a text twice before they can access their schoolwork, but NSU is on a very big network with over a thousand students. NSU tries to regulate usage and ensure users are on the public WiFi for the right reasons. A lot of public school classrooms also do this with blockers; I remember wanting to play an online game called "Bad Eggs" and it becoming banned on the network and nearly impossible to access. Having the firewall is useful and hopefully prevents a fair amount of misuse. We can even individually download VPNs and blockers to prevent others from affecting or accessing our personal information.
-
-
ipfs.indy0.net ipfs.indy0.net
-
he name of the game
Zombie spotting and holon energising
-
-
mellow-brace-913.notion.site mellow-brace-913.notion.site
-
Disjointed features meant users constantly lost context. One particularly telling quote: "I feel like I'm playing a game of memory, trying to remember what I set up in five different places."
Love all these quotes you're incorporating
-
-
pressbooks.library.torontomu.ca pressbooks.library.torontomu.ca
-
Business was dull all day, because numbers of people had gone to the game. She decided to close early, because it was hardly worth the trouble of keeping open on an afternoon like this. She had set six o’clock as her limit.
There’s no business because of the game.
-
-
stackblitz.com stackblitz.com
-
StackBlitz unlocks a true one-click startup experience with the full stack running in the browser—it's a game-changer.
stackblitz one click start up experience
-
-
Local file Local file
-
You are part of this team. Just because you don't get passed the ball as often during game times, doesn't take away from your contributions in other ways. You're great at
Patrick is showing that he values Colin's feelings and is complimenting him by highlighting strengths. He also expresses that he can relate to him by putting himself in Colin's situation (Difficult Conversations Book Summary).
-
I would like to focus on resolving this situation as well. I now realize that passing to me isn't the only part of the game and doesn't take away from my other contributions to the team and who I am in general. I still would appreciate being passed to more often. Is this something you can do?
Colin and Patrick seem to agree that they both want a resolution. Colin has an aha moment and then makes a request which is an example of non violent communication (How You Can Use the NVC Process).
-
I may have gotten stuck in a bias and gotten used to not passing to you. I am open to passing to you more often. Let's work on passing more often during practice and exhibition games, so we can build more skills and trust with each other. I think in doing that, we'll get a better sense of how we can work together come game time
Patrick has some self realization and compromise for the conversation. He proposes a possible solution with positive words such as build, trust, and work together (Let's Rumble).
-
-
templeu.instructure.com templeu.instructure.com
-
Each of these incidents represented an opportunity for radio to try itself out under differing circumstances, and with the conclusion of each effort the broadcasters knew more about what they were doing than when they had started. This meant that the next time they would be a little bit better, and it also meant that radio was beating the press at its own game: fast reporting of the news.
I find this aspect of broadcasting important to understand. Broadcasting became a quicker and more efficient way to reach audiences, and the press now had to realize it was losing the top spot in its industry. Since broadcasting was so new, there was lots of room for growth and development, which also gave it a leg up over the press and newspapers. Most people want their news faster, so they look to broadcasts over the press and newspapers.
-
-
www.biorxiv.org www.biorxiv.org
-
Reviewer #1 (Public review):
Summary:
There has been intense controversy over the generality of Hamilton's inclusive fitness rule for how evolution works on social behaviors. All generally agree that relatedness can be a game changer, for example allowing for otherwise unselectable altruistic behaviors when c < rb, where c is the fitness cost to the altruism, b is the fitness benefit to another, and r their relatedness. Many complications have been successfully incorporated into the theory, including different reproductive values and viscous population structures.
The controversy has centered on another dimension; Hamilton's original model was for additive fitness, but how does his result hold when fitnesses are non-additive? One approach has been not to worry about a general result but just find results for particular cases. A consistent finding is that the results depend on the frequency of the social allele - non-additivity causes frequency dependence that was absent in Hamilton's approach. Two other approaches derive from Queller via the Price equation. Queller 1 is to find forms like Hamilton's rule, but with additional terms that deal with non-additive interaction, each with an r-like population structure variable multiplied by a b-like fitness effect (Queller 1985). Queller 2 redefines the fitness effects c and b as partial regressions of the actor's and recipient's genes on fitness. This leaves Hamilton's rule intact, just with new definitions of c and b that depend on frequency.
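Schematically (my notation, not the paper's exact form), the additive condition and the kind of Queller-1-style extension described above can be written as:

```latex
% Additive case (Hamilton's rule), using the review's r, b, c:
-c + r b > 0
% Queller-1-style extension as described here: each non-additive fitness
% deviation d_k is paired with its own r-like structure coefficient r_k
-c + r b + \sum_k r_k d_k > 0
```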
Queller 2 is the version that has been most adopted by the inclusive fitness community, along with assertions that Hamilton's rule is completely general. In this paper, van Veelen argues that Queller 1 is the correct approach. He derives a general form that Queller only hinted at. He does so within a more rigorous framework that puts both Price's equation and Hamilton's rule on firmer statistical ground. Within that framework, the Queller 2 approach is seen to be a statistical misspecification - it employs a model without interaction in cases that actually do have interaction. If we accept that this is a fatal flaw, the original version of Hamilton's rule is limited to linear fitness models, which might not be common.
Strengths:
While the approach is not entirely new, this paper provides a more rigorous approach and a more general result. It shows that both Queller 1 and Queller 2 are identities and give accurate results, because both are derived from the Price equation, which is an identity. So why prefer Queller 1? It identifies the misspecification issue with the Queller 2 approach and points out its consequences. For example, it will not give the minimum squared differences between the model and data. It does not separate the behavioral effects of the individuals from the population state (b and c become dependent on r and the population frequency).
The paper also shows how the same problems can apply to non-social traits. Epistasis is the non-additivity of effects of two genes within the individual. (So one wonders why we have not had a similarly fierce controversy over how we should treat epistasis.)
The paper is clearly written. Though somewhat repetitive, particularly in the long supplement, most of that repetition has the purpose of underscoring how the same points apply equally to a variety of different models.
Finally, this may be a big step towards reconciliation in the inclusive fitness wars. Van Veelen has been one of the harshest critics of inclusive fitness, and now he is proposing a version of it.
Weaknesses:
van Veelen argues that the field essentially abandoned the Queller 1 approach after its publication. I think this is putting it too strongly - there have been a number of theoretical studies that incorporate extra terms with higher-order relatednesses. It is probably accurate to say that there has been relative neglect. But perhaps this is partly due to a perception that this approach is difficult to apply.
The model in this paper is quite elegant and helps clarify conceptual issues, but I wonder how practical it will turn out to be. In terms of modeling complicated cases, I suspect most practitioners will continue doing what they have been doing, for example using population genetics or adaptive dynamics, without worrying about neatly separating out a series of terms multiplying fitness coefficients and population structure coefficients.
For empirical studies, it is going to be hard to even try to estimate all those additional parameters. In reality, even the standard Hamilton's rule is rarely tested by trying to estimate all its parameters. Instead, it is commonly tested more indirectly, for example by comparative tests of the importance of relatedness. That of course would not distinguish between additive and non-additive models that both depend on relatedness, but it does test the core idea of kin selection. It will be interesting to see if van Veelen's approach stimulates new ways of exploring the real world.
-
-
-
Florida took 34 foul shots and made 22 of them while UConn got 22 foul shots and made 19.
This is a fact from the game stats that was added to the article to explain the coach's reaction to the officiating.
-
Dan Hurley is not exactly a classy loser.
This is an example of a conclusion the author came to. Calling him a sore loser is a characterization based on the coach's quotes and his reaction to the game.
-
-
templeu.instructure.com templeu.instructure.com
-
This meant that the next time they would be a little bit better, and it also meant that radio was beating the press at its own game: fast reporting of the news
This transformed the idea of fast news. Broadcasting networks were able to get this information to citizens at a much faster rate than the press. This seemingly made broadcasting networks more popular at this point, because citizens were able to receive the same information at a faster and more effective rate.
-
-
templeu.instructure.com templeu.instructure.com
-
radio was beating the press at its own game: fast reporting of the news.
This is a shift in power. Speed becomes more important than depth. It reminds me of Twitter/X and social media breaking stories before major news outlets do today. We’re still living in this tension—immediacy vs. credibility.
-
What I've noticed is that these battles of technology are often about speed. Throughout history, media outlets have constantly fought to be faster and more immediate than their competitors. Radio was a game changer in this instantaneous form of delivering information. The amount of information, and the speed at which we can acquire it, continues to increase.
-
-
www.youtube.com www.youtube.com
-
Serbia is such an important player in this part of the world. And this isn't the first round of student protests. They played a big role in the 1990s as well.
for - question - Serbia - student protests - how to avoid making the same mistake? - People make the same mistake, - big protests give opportunity for the next authoritarian leader to game representative democracy - Something must be done fundamentally differently to prevent this from happening in the future
-
-
socialsci.libretexts.org socialsci.libretexts.org
-
Of course, we don’t just communicate verbally—we have various options, or channels for communication. Encoded messages are sent through a channel, or a sensory route on which a message travels, to the receiver for decoding. While communication can be sent and received using any sensory route (sight, smell, touch, taste, or sound), most communication occurs through visual (sight) and/or auditory (sound) channels. If your roommate has headphones on and is engrossed in a video game, you may need to get his attention by waving your hands before you can ask him about dinner.
This is especially interesting to me now: with the rise of smartphones, people started to talk to each other in person less and less, especially after 2020. It's interesting to see how people's interactions with each other changed after that. There is simply a lot more communication that is only text-based now, arguably more than there has ever been before. And I know from experience how easy it can be to misinterpret a text someone sent, because you can't tell what tone it was said in and you can't see a hand or arm motion showing it's a joke; a million other things could happen and cause someone to misjudge the situation in ways that could never occur in person.
-
-
thehill.com thehill.com
-
Regardless, we can state with certainty that his ability to function at the top of his game will decrease as he approaches his eighties.
This conclusion is based on an observation about human nature (we tend to lose physical and mental performance as we age). The author did not set a specific date for when this will happen, but still presented it as a fact, even though it speculates about something that may or may not happen in the future.
-
-
uwaterloo.ca uwaterloo.ca
-
References
Farzan, R., DiMicco, J. M., Millen, D. R., Dugan, C., Geyer, W., & Brownholtz, E. A. (2008). Results from deploying a participation incentive mechanism within the enterprise. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 563–572). ACM.
Landers, R. N., & Landers, A. K. (2015). An empirical test of the theory of gamified learning: The effect of leaderboards on time-on-task and academic performance. Simulation & Gaming, 45(6), 769–785.
Vandercruysse, S., Vandewaetere, M., & Clarebout, G. (2012). Game-based learning: A review on the effectiveness of educational games. In M. M. Cruz-Cunha (Ed.), Handbook of research on serious games as educational, business, and research tools (pp. 628–647). Hershey, PA: IGI Global.
Reference list
-
Game elements and their pedagogical role Most games feature elements such as rules, goals, interaction, feedback, problem solving, competition, story, and fun (see Vandercruysse, Vandewaetere, & Clarebout, 2012). Though not all of the elements are needed to successfully gamify a learning activity, carefully selecting those elements that help meet the learning objectives of the course can be useful. The pedagogical value of game features often associated with gamification are discussed below.
Game Mechanics
-
GOBLIN (Games Offer Bold Learning Insights Nowadays) Education.
I should do research about this.
-
-
elearningindustry.com elearningindustry.com
-
The importance of gamification in human resources
Top Gamification Statistics of 2020: Next Level Gaming
The Influences of Emotion on Learning and Memory
IS GAMIFICATION EFFECTIVE: THE NEUROSCIENCE OF GAMIFICATION IN ONLINE LEARNING
Game-Based Learning vs Gamification
10 Examples of Gamification for Employee Engagement
25 Insightful Gamification Stats, Facts, & Trends for 2020
HOW DOES GAMIFICATION AFFECT THE LEARNING PROCESS?
Sources I can visit for more information and to get proper references.
-
How do I design my learning game effectively? Generally speaking, a learning game includes three things: a defined goal, a set of rules, and a way to earn points. Ideally, it would also involve some sort of engagement from the user, and be easy to comprehend while staying relevant to your end topic.
How question, why?
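As a rough sketch of the three ingredients named in the quote (a defined goal, a set of rules, and a way to earn points), here is a minimal toy example; the class and field names are my own invention and Python is assumed, so treat it as an illustration rather than a recipe from the article.

```python
# Minimal sketch of a learning game: a defined goal, rules, and a way to earn points.
from dataclasses import dataclass, field

@dataclass
class LearningGame:
    goal: str                                        # the defined learning goal
    questions: dict = field(default_factory=dict)    # the rules: answer correctly to score
    points_per_correct: int = 10
    score: int = 0

    def answer(self, question: str, response: str) -> bool:
        correct = self.questions.get(question, "").lower() == response.lower()
        if correct:
            self.score += self.points_per_correct    # the way to earn points
        return correct

game = LearningGame(
    goal="Name the capital cities of Europe",
    questions={"Capital of France?": "Paris", "Capital of Spain?": "Madrid"},
)
game.answer("Capital of France?", "paris")
print(game.score)  # 10
```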
-
Some examples could be the following:
Precedents of design practice: checking existing games and studying them; what have they done, and what can I improve and also incorporate into my game?
-
Game-based learning makes games a part of the learning process. It is an instructional method where students learn specific skills or knowledge from playing an actual game. This type of learning takes educational content and transforms it into a game that students can play. On the other hand, gamification only makes use of game elements in a non-game context to enhance content comprehension and promote better retention of information
Difference between Game-Based Learning & Gamification
-
elements such as background, characters, plot twists, and more.
Game mechanics, or what to consider when making a game: plot twists, music, characters (villain), obstacles, anti-hero
-
Gamification is the process of using game elements in a non-game context. It has many advantages over traditional learning approaches, including: Increasing learner motivation levels Improving knowledge retention Better learner engagement through social mechanisms like badges, points, or leaderboards
Gamification definition and its advantages over the traditional learning approach. How does it improve knowledge retention and learner engagement through social mechanisms like badges, points, or leaderboards?
-
-
Local file Local file
-
‘exhaustion’ of the development model it had promoted when it was in government and its loss of capacity to represent the interests of the citizenry. These interpretations maintain that it was mainly other political actors on the Right and Left who intensified ‘the game of centrifugal attacks, resistance and opposition
Direct quotes
-
-
Local file Local file
-
Patel
Will your game need line drawing and pathfinding on a hexagonal grid, or, more specifically, how do you find this source relevant?
-
the sustainability of life in an ecosystem
If the second-stage goal is to survive, then how does sustainability come in? Is that a third stage, or is sustainability not a goal but a factor in an egocentric game goal?
-
-
docs.google.com docs.google.com
-
Insight:
"Political leaders, however, must be open about the fact that such a policy involves tradeoffs. Tightening export controls on Russia will reduce Western firms’ sales there—although, with only three percent of the world’s GDP, Russia is a small market for most companies. Cutting off Russia’s commodity exports, which would challenge the Kremlin’s ability to fund its foreign policy, would also raise prices for Western consumers in more serious ways—impacting everything from metals to gasoline."
It is interesting to see how even cutting trade with Russia, which has a relatively small population and less economic power compared to the U.S.'s other geopolitical rival, China, had such a big impact on global prices. I would have assumed that global sanctions on Russia would mostly have affected Russians and barely affected Americans, but I remember how much gas prices skyrocketed after the war started and how there was a global shortage of wheat because of Ukraine.
"Political leaders, however, must be open about the fact that such a policy involves tradeoffs. Tightening export controls on Russia will reduce Western firms’ sales there—although, with only three percent of the world’s GDP, Russia is a small market for most companies."
Question: To what extent are governments willing to bear economic costs in order to uphold their principles? I think the sanctions on Russia surprised many Americans with how big the impact actually was, so I'm curious to see whether America would take similar steps the next time there is a war.
"This type of reasoning can be uncomfortable for many U.S. and European officials, who understandably prefer economic policies that leave everyone better off. The alternative vision, by contrast, is based on a zero-sum logic of power politics and implies ongoing costs for Western economies. Like it or not, however, the United States and Europe’s relationship with Russia is now mostly zero-sum. "
Insight: It is interesting for me to see how trade can continue to be zero-sum. Usually, things like the Heckscher-Ohlin model/PPC show that trade is a positive-sum game that can benefit all countries involved.
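To illustrate the textbook positive-sum intuition behind models like Heckscher-Ohlin, here is a toy comparative-advantage calculation with made-up numbers (Python assumed; not from the article): when two countries have different opportunity costs, specialization and trade yield more of both goods than autarky.

```python
# Toy comparative-advantage arithmetic (made-up numbers): specialization raises
# total output, which is why standard models treat trade as positive-sum.
# Output per unit of labour for two goods in two hypothetical countries.
output = {
    "A": {"wheat": 4, "metal": 2},   # country A is relatively better at wheat
    "B": {"wheat": 1, "metal": 3},   # country B is relatively better at metal
}
labour = 10  # units of labour available in each country

# No trade: each country splits its labour evenly between the two goods.
no_trade = {
    good: sum(output[c][good] * labour / 2 for c in output)
    for good in ("wheat", "metal")
}

# With trade: each country specializes in its comparative advantage.
with_trade = {
    "wheat": output["A"]["wheat"] * labour,
    "metal": output["B"]["metal"] * labour,
}

print(no_trade)    # {'wheat': 25.0, 'metal': 25.0}
print(with_trade)  # {'wheat': 40, 'metal': 30}  -> more of both goods in total
```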
-
-
www.biorxiv.org www.biorxiv.org
-
Author response:
The following is the authors’ response to the original reviews.
Public Reviews:
Reviewer #1 (Public reviews):
Summary:
In this study, Fakhar et al. use a game-theoretical framework to model interregional communication in the brain. They perform virtual lesioning using MSA to obtain a representation of the influence each node exerts on every other node, and then compare the optimal influence profiles of nodes across different communication models. Their results indicate that cortical regions within the brain's "rich club" are most influential.
Strengths:
Overall, the manuscript is well-written. Illustrative examples help to give the reader intuition for the approach and its implementation in this context. The analyses appear to be rigorously performed and appropriate null models are included.
Thank you.
Weaknesses:
The use of game theory to model brain dynamics relies on the assumption that brain regions are similar to agents optimizing their influence, and implies competition between regions. The model can be neatly formalized, but is there biological evidence that the brain optimizes signaling in this way? This could be explored further. Specifically, it would be beneficial if the authors could clarify what the agents (brain regions) are optimizing for at the level of neurobiology - is there evidence for a relationship between regional influence and metabolic demands? Identifying a neurobiological correlate at the same scale at which the authors are modeling neural dynamics would be most compelling.
This is a fundamental point, and we have put together a new project to address it. The current work focuses on, firstly, rigorously formalizing the prevailing assumption that brain regions optimize communication, and then uncovering what the characteristics of communication are if this optimization is indeed taking place. Based on our findings, we suspect the mechanism of optimal communication to be broadcasting (compared to other modes explored in our work, e.g., shortest-path signalling or diffusion). However, we recognize that our game-theoretical framework does not directly address “how” this mechanism is implemented. Thus, in our follow-up work, we are analyzing available datasets of signal propagation in the brain to see if communication dynamics there match the predictions of the game-theoretical setup. Following your question, we also extended our discussion to cover this point, citing five other works on the topic and discussing what, we think, could be the neurobiological mechanism of optimal signalling.
It is not entirely clear what Figure 6 is meant to contribute to the paper's main findings on communication. The transition to describing this Figure in line 317 is rather abrupt. The authors could more explicitly link these results to earlier analyses to make the rationale for this figure clearer. What motivated the authors' investigation into the persistence of the signal influence across steps?
Great question. Figure 6 in part follows Figure 5, which summarizes a key aspect of our work: signals subside at every step but not exponentially (Figure 5), and they nearly fall apart after around 6 steps (Figure 6A and B). Subplots A and B together suggest that although measures like communicability account for all possible pathways, the network uses a handful instead, presumably to balance signalling robustness against the energetic cost of signalling. Subplot C, one of our main findings, then shows how one simple model is all that is needed to predict a large portion of optimal influence compared to other models and variables. In sum, Figure 5 focused on the decay dynamics while Figure 6 focused on the extent, in terms of steps, given that the decay is monotonic. Together, our motivation for this figure was to show how the right assumption about decay rate and dynamics can outperform other measures in predicting optimal communication.
The authors used resting-state fMRI data to generate functional connectivity matrices, which they used to inform their model of neural dynamics. If I understand correctly, their functional connectivity matrices represent correlations in neural activity across an entire fMRI scan computed for each individual and then averaged across individuals. This approach seems limited in its ability to capture neural dynamics across time. Modeling time series data or using a sliding window FC approach to capture changes across time might make more sense as a means of informing neural dynamics.
We agree with you that static fMRI is limited in capturing neural dynamics. However, we opted not to perform dynamic functional connectivity fitting just yet for a practical reason: the other communication models used here are not fitted to any empirical data and provide a static view of the dynamics, comparable to static functional connectivity. Since one of our goals was to compare different communication regimes, and since fitting dynamics does not seem to substantially change the outcome if the end result is static (Figure 7), we decided to go with the poorer representation of neural data for this work. However, part of our follow-up project involves looking into the dynamics of influence over time, and for that we will fit our models to represent more realistic dynamics.
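For concreteness, here is a minimal sketch of the two estimates discussed in this exchange, a static whole-scan correlation matrix versus a sliding-window version, using made-up data, hypothetical dimensions, and numpy; it illustrates the general idea rather than the authors' actual pipeline.

```python
# Minimal sketch: static functional connectivity (correlation over the whole scan)
# versus a sliding-window estimate that tracks changes over time. Made-up data.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_regions = 600, 100
bold = rng.normal(size=(n_timepoints, n_regions))     # stand-in for a BOLD time series

# Static FC: one region-by-region correlation matrix for the entire scan.
static_fc = np.corrcoef(bold, rowvar=False)           # shape (100, 100)

# Sliding-window FC: one correlation matrix per window of, e.g., 60 timepoints.
window, step = 60, 10
dynamic_fc = np.stack([
    np.corrcoef(bold[start:start + window], rowvar=False)
    for start in range(0, n_timepoints - window + 1, step)
])                                                     # shape (n_windows, 100, 100)

print(static_fc.shape, dynamic_fc.shape)
```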
The authors evaluated their model using three different structural connectomes: one inferred from diffusion spectrum imaging in humans, one inferred from anterograde tract tracing in mice, and one inferred from retrograde tract-tracing in macaque. While the human connectome is presumably an undirected network, the mouse and macaque connectomes are directed. What bearing does experimentally inferred knowledge of directionality have on the derivation of optimal influence and its interpretation?
In terms of whether directionality changes the interpretation of optimal influence, we think it sets limits on how far we can compare the communication dynamics of these two types of networks. We think interpreting optimal communication in directed graphs requires disentangling incoming influence from outgoing influence, e.g., analyzing “projector hubs/coordinators” and “receiver hubs/integrators” instead of putting both into a common class of hubs. Also, here we showed the extent to which a signal travels before it significantly degrades, having done so in an undirected graph. One implication for a directed graph is the possibility that some nodes can be unreachable from others, given the more restricted navigation; this is a possibility that we did not observe in the human connectome, as all nodes could reach all others, although with limited influence (see Figure 2C). We did not explore these differences, as we used the mouse and macaque connectomes primarily to control for modality-specific confounds of DSI. However, our relatively poorer fit for directed networks (Supplementary Figure 2) motivated us to analyze how reciprocal connections shape dynamics and what impact they have on network function. Using the same connectomes as the current work, we addressed this question in a separate publication (Hadaeghi et al., 2024) and plan to extend both works by analyzing the signalling properties of directed networks.
It would be useful if the authors could assess the performance of the model for other datasets. Does the model reflect changes during task engagement or in disease states in which relative nodal influence would be expected to change? The model assumes optimality, but this assumption might be violated in disease states.
This is a wonderful idea that we initially had in mind for this work as well, but we decided to dedicate a separate work to deviations in different task states, as well as disease states (mainly neurodegenerative disorders). We realized that the practical challenges of fitting large-scale models to task dynamics, and of harmonizing neuroimaging datasets of neurodegenerative disorders, put this beyond the scope of the current work. Unfortunately, this effort, although exciting and promising, is still pending, as the corresponding author does not yet have the required expertise in neuroimaging processing pipelines.
The MSA approach is highly computationally intensive, which the authors touch on in the Discussion section. Would it be feasible to extend this approach to task or disease conditions, which might necessitate modeling multiple states or time points, or could adaptations be made that would make this possible?
Continuing our response from the previous point: yes, we think that, in theory, the framework is applicable to both settings. Currently, our main concern is not the computational cost of the framework but the harmonization of the data, to ensure that differences in results are not due to differences in preprocessing steps. However, assuming that all of this is taken care of, we believe a reasonable compute cluster should suffice by parallelizing the analytical pipeline over subjects. We acknowledge that the process would still be time-consuming; aside from the fitting process, we expect a modern high-performance CPU with about 32–64 threads to take up to 3 days to analyze one subject, given 100 brain regions or fewer. This performance then scales with the number of cluster nodes that can each work on one subject. We note that analytical estimators such as SAR could be used instead, as they largely predict the results from MSA. The limitations are then the lack of dynamics over time and potential estimation errors.
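To give a sense of why lesion-based analyses of this kind scale so poorly, here is a toy sketch of the general idea: Shapley-style contributions estimated by sampling permutations of node lesions on a made-up linear propagation model. All names, parameters, and the propagation rule are hypothetical, numpy is assumed, and this is not the authors' implementation.

```python
# Toy sketch of MSA-style influence estimation: sample permutations of nodes,
# lesion them cumulatively, and average each node's marginal effect on how much
# signal reaches a chosen target node. Made-up linear propagation model.
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n)) * (rng.random((n, n)) < 0.5)   # toy weighted "connectome"
np.fill_diagonal(W, 0)
target, source = 0, 1

def influence(active):
    """Signal reaching `target` from `source` over a few linear propagation steps,
    with every node outside `active` lesioned (its rows/columns zeroed)."""
    mask = np.zeros(n)
    mask[list(active)] = 1
    Wl = W * np.outer(mask, mask)
    x = np.zeros(n)
    x[source] = 1.0
    total = 0.0
    for _ in range(5):          # propagate for a handful of steps
        x = 0.5 * Wl @ x        # signal decays at every step
        total += x[target]
    return total

# Shapley values by permutation sampling: each node's average marginal contribution.
others = [i for i in range(n) if i not in (source, target)]
contrib = {i: 0.0 for i in others}
n_samples = 500
for _ in range(n_samples):
    order = rng.permutation(others)
    active = {source, target}
    prev = influence(active)
    for node in order:
        active.add(node)
        cur = influence(active)
        contrib[node] += (cur - prev) / n_samples
        prev = cur

print({k: round(v, 3) for k, v in contrib.items()})   # per-node influence-style scores
```

The combinatorics are the bottleneck: each sampled permutation requires re-running the (in practice, much heavier) dynamical model once per node, which is why parallelizing over subjects or falling back on analytical estimators becomes attractive.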
Reviewer #2 (Public review):
Summary:
The authors provide a compelling method for characterizing communication within brain networks. The study engages important, biologically pertinent, concerns related to the balance of dynamics and structure in assessing the focal points of brain communication. The methods are clear and seem broadly applicable, however further clarity on this front is required.
Strengths:
The study is well-developed, providing an overall clear exposition of relevant methods, as well as in-depth validation of the key network structural and dynamical assumptions. The questions and concerns raised in reading the text were always answered in time, with straightforward figures and supplemental materials.
Thank you.
Weaknesses:
The narrative structure of the work at times conflicts with the interpretability. Specifically, in the current draft, the model details are discussed and validated in succession, leading to confusion. Introducing a "base model" and "core datasets" needed for this type of analysis would greatly benefit the interpretability of the manuscript, as well as its impact.
Following your suggestion, we modified the introduction to emphasize the human connectome and the linear model as the main toolkit. We also added a paragraph explaining the datasets that can be used instead.
Recommendations for the authors:
Essential Revisions (for the authors):
(1) The method presents an important and well-validated method for linking structural and functional networks, but it was not clear precisely what the necessary data inputs were and what assumptions about the data mattered. To improve the clarity of the presentation for the reader, it would be beneficial to have an early and explicit description of the flow of the method - what exact kinds of datasets are needed and what decisions need to be made to perform the analysis. In addition, there were questions about how the use or interpretation of the method might change with different methods of measuring structure or function, which could be answered via an explicit discussion of the issue. For example, how do undirected fMRI correlation networks compare to directed tracer injection projection networks? Similarly, could this approach apply in cases like EM connectomics with linked functional imaging that do not have full observability in both modalities?
This is an important point that we missed addressing in detail in the original manuscript. We have now done so, first by adding a paragraph (lines 292-305, page 10) explaining the pipeline and how our framework handles different modeling choices, and then by further discussing it in the Discussion (lines 733-748, page 28). Moreover, we adjusted Figure 1 by delineating the two main steps of the pipeline. Briefly, we clarified that MSA is model-agnostic, meaning that, in principle, any model of neural dynamics can be used with it, from the most abstract to the most biologically detailed. Moreover, the approach extends to networks built on EM connectomics, tract-tracing, DTI, and other measures of anatomical connectivity. However, we realized that a key detail was not explicitly discussed (also pointed out by Reviewer #2): the fact that these models naturally need to be fitted to the empirical dataset, even though this fitting step appears not to be critical, as shown in Figure 7.
Lines 292-305:
“The MSA begins by defining a ‘game.’ To derive OSP, this game is formulated as a model of dynamics, such as a network of interacting nodes. These can range from abstract epidemic and excitable models (Garcia et al., 2012; Messé et al., 2015a) to detailed spiking neural networks (Pronold et al., 2023) and to mean-field models of the whole brain dynamics, as chosen here (see below). The model should ideally be fitted to reflect real data dynamics, after which MSA systematically lesions all nodes to derive the OSP. Put together, the framework is general and model-agnostic in the sense that it accommodates a wide range of network models built on different empirical datasets, from human neuroimaging and electrophysiology to invertebrate calcium imaging, and anything in between. In essence, the framework is not bound to specific modelling paradigms, allowing direct comparison among different models (e.g., see section Global Network Topology is More Influential Than Local Node Dynamics).”
Lines 733-740:
“As noted in the introduction, OI is model-agnostic, here, we leveraged this liberty to compare signaling under different models of local dynamics, primarily built upon undirected human connectome data. We also considered different modalities, e.g., tract tracing in Macaque (see Structural and Functional Connectomes under Materials and Methods) to confirm that the influence of weak connections is not inflated due to imaging limitations (Supplementary Figure 5. A). The game theoretical formulation of signaling allows for systematic comparison among many combinations of modeling choices and data sources.”
We then continued with addressing the issue of full observability. We clarified that in this work, full observability was assumed. However, the mathematical foundations of our method capture unobserved contributors/influencers as an extra term, similar to the additive error term of a linear regression model. To keep the paper as non-technical as possible, we omitted expanding the axioms and the proof of how this is achieved, and instead referred to previous papers introducing the framework.
Lines 740-748:
“Nonetheless, in this work, we assumed full observability, i.e., complete empirical knowledge of brain structure and function that is not necessarily practically given. Although a detailed investigation of this issue is needed, mathematical principles behind the method suggest that the framework can isolate the unobserved influences. In these cases, activity of the target node is decomposed such that the influence from the observed sources is precisely mapped, while the unobserved influences form an extra term, capturing anything that is left unaccounted for, see (Algaba et al., 2019b; Fakhar et al., 2024) for more technical details.”
(2) The value of the normative game theoretic approach was clear, but the neurobiological interpretation was less so. To better interpret the model and understand its range of applicability, it would be useful to have a discussion of the potential neurobiological correlates that were at the same level of resolution as the modeling itself. Would such an optimization still make sense in disease states that might also be of interest?
This is a brilliant question, which we decided to explore further in separate studies. Specifically, the link between optimal communication and brain disorders is a natural next step that we are pursuing. Here, we expanded our discussion with a few lines first explaining the roots of our main assumption, which is that neurons optimize information flow, among other goals. We then hypothesized that the biological mechanisms by which this goal is achieved include (based on our findings) adopting a broadcasting regime of signaling. We suspect that this mode of communication, operationalized on complex network topologies, is a trade-off between robust signaling and energy efficiency. Currently, we are planning practical steps to test this hypothesis.
Lines 943-962:
“Nonetheless, our framework is grounded in game theory where its fundamental assumption is that nodes aim at maximizing their influence over each other, given the existing constraints. This assumption is well explored using various theoretical frameworks (Buehlmann and Deco, 2010; Bullmore and Sporns, 2012; Chklovskii et al., 2002; Laughlin and Sejnowski, 2003; O’Byrne and Jerbi, 2022) and remains open to further empirical investigation. Here, we used game theory to mathematically formalize a theoretical optimum for communication in brain networks. Our findings then provide a possible mechanism for achieving this optimality through broadcasting. Based on our results, we speculate that, there exists an optimal broadcasting strength that balances robustness of the signal with its metabolic cost. This hypothesis is reminiscent of the concept of brain criticality, which suggests the brain to be positioned in a state in which the information propagates maximally and efficiently (O’Byrne and Jerbi, 2022; Safavi et al., 2024). Together, we suggest broadcasting to be the possible mechanism with which communication is optimized in brain networks, however, further research directions include investigating whether signaling within brain networks indeed aligns with a game-theoretic definition of optimality. Additionally, if it does, subsequent studies could then examine how deviations from optimal communication contribute to or result from various brain states or neurological and psychiatric disorders.”
Reviewer #1 (Recommendations for the authors):
I would recommend that the authors consider the following point in a revision, as well as the major weaknesses of the public review. Some aspects of Figure 1 could be clearer. What is being illustrated by the looping arrow to MSA? What is being represented in the matrices (labeling "source" and "target" on the matrix might enhance clarity)? Is R2 the metric used to assess the degree of similarity between communication models? These could be addressed by making small additions to the figure legend or to the figure itself.
Thank you for your constructive comment on Figure 1, which is arguably the most important figure in the manuscript. We adjusted the figure and its caption (see above) based on your suggestions. After doing so, we think the figure is now clearer regarding the pipeline used in this work.
Reviewer #2 (Recommendations for the authors):
Overall, as stated in the public review and the short assessment, the manuscript is in a clearly mature state and brings an important method to link the fields of structural and functional brain networks.
Nevertheless, the paper would benefit from an early, and clear, discussion of the:
(1) components of the model, and assumptions of each, should be stated at the end of the introduction, or early in results. (2) datasets necessary to run the analysis.
The confusion arises from lines 130-131, stating "In the present work (summarized in Figure 1), we used the human connectome, large-scale models of dynamics, and a game-theoretical perspective of signaling." This, to me, indicated that a structural connectivity map may be the only dataset required, as the dynamics model and game theory component are solely simulated. However, later, lines 214-216 state that the empirical functional connectivity is estimated from the structural connectivity, indicating that the method is only applied to cases where we have both.
Finally, Supplemental Figure 5 validates a number of metrics on different solely structural networks (which is a very necessary and well-done control). Similarly, while the dynamical model is discussed in depth, and beautifully shown that the specific choice of dynamical model does not directly impact the results, it would be helpful to clarify the dynamical model utilized in the early figures.
Thank you for pointing out a critical detail that we missed elaborating sufficiently early in the paper: the modelling step. Following your suggestions, we added a paragraph from line 292 to 305 (page 10) expanding on the modelling framework. We also explicitly divided the modelling step in Figure 1 and briefly clarified our modelling choices in the caption. Together, we emphasized the fact that our framework is generally model agnostic, which allows different models of dynamics to be plugged into various anatomical networks. We then clarified that, like in any modelling effort, one needs to first fit/optimize the model parameters to reproduce empirical data. In other words, we emphasized the fact that our framework relies on a computational model as its ‘game’ to infer how regions interact, and we fine-tuned our models to reproduce the empirical FC.
Again, this is not a critique of the methods, which are excellent, but the presentation. It would help readers, and even me, to have a clear indication of the model earlier. Further, it would help to discuss, both in the introduction and discussion, the datasets required for applying these methods more broadly. For instance, 2-photon recordings are discussed - would it be possible to apply this method then to EM connectomes with functional data recorded for them? In theory, it seems like yes, although the current datasets have 100% observability, whereas 2-photon imaging, or other local methods, will not have perfect overlap between structural and functional connectomes. Discussions like this, related to the assumptions of the model, the necessary datasets, and broader application directions beyond DSI, fMRI, and BOLD cases where the method was validated, would increase the impact and interpretability for a broad readership.
This is a valid point that we should have been more explicit about. The revised manuscript now contains a paragraph (lines 740-748) clarifying the fact that, throughout this work, we assumed full observability. We then briefly discuss, based on the mathematical principles of the framework, what we expect to happen in cases with partial observability. We then point at two references in which the details of a framework with partial observability are laid out, one containing mathematical proofs and the other using numerical simulations.
References:
Hadaeghi, F., Fakhar, K., & Hilgetag, C. C. (2024). Controlling Reciprocity in Binary and Weighted Networks: A Novel Density-Conserving Approach (p. 2024.11.24.625064). bioRxiv. https://doi.org/10.1101/2024.11.24.625064
-
-
connary.com connary.com
-
Argent Pixel CF is a playful bitmap version of the original dashing Argent typeface. With a pronounced x-height, this unexpectedly readable serif recreates the original font’s distinctive look in a style evocative of early Macintosh typography.
omg. I've been talking a big game about moving away from pixelly aesthetics but you know I am twitching avoiding buying this.
-
-
archive.nytimes.com archive.nytimes.com
-
I would say that it is unhelpful because its prescriptions presuppose the knowledge most of our students don’t have. What good is it to be told, “Do not join independent clauses with a comma,” if you don’t have the slightest idea of what a clause is (and isn’t), never mind an “independent” one? And even if a beginning student were provided with the definition of a clause, the definition itself would hang in mid-air like a random piece of knowledge. It would be like being given a definition of a drop-kick in the absence of any understanding of the game in which it could be deployed.
I agree. But rules in writing, more than any other, are meant to be broken. To work from a correct definition supposes that that definition is correct, easy to learn, worth learning, or not subject to change. Adaptability is key to good writing, and you learn adaptability best when you're comfortable, not being whipped by rulers. Tests of comprehension and knowledge are easy to pass for those who can memorize and conform.
-
-
www.biorxiv.org www.biorxiv.org
-
Reviewer #3 (Public review):
Summary
This manuscript outlines a series of very exciting and game-changing experiments examining the role of peripheral MORs in OIRD. The authors outline experiments that demonstrate a peripherally restricted MOR antagonist (NLX Methiodide) can rescue fentanyl-induced respiratory depression and this effect coincides with a lack of conditioned place aversion. This approach would be a massive boon to the OUD community, as there are a multitude of clinical reports showing that naloxone rescue post fentanyl over-intoxication is more aversive than the potential loss-of-life to the individuals involved. This important study reframes our understanding of successful overdose rescue with a potential for reduced aversive withdrawal effects.
Strengths:
Strengths include the plethora of approaches arriving at the same general conclusion, the inclusion of both sexes, and the result that a peripheral approach for OIRD rescue may side-step severe negative withdrawal symptoms of traditional NLX rescue.
Weaknesses:
All weaknesses were addressed.
-
-
www.seacoastmazda.com www.seacoastmazda.com
-
Mazda really has to up its game to make you consider upgrading to the Select
Mazda really has to up its game with the Mazda3 trims to make you consider the Select
-
-
Local file Local file
-
pg 14 - Robert J. Koester came up with a questionnaire for dementia wanderers - Koester says "it's the ultimate detective game" - After 2,200 wandering cases, Koester has learned that most dementia-driven wanderers in cities are found within 3.2 km of home or the location where they disappeared
-
-
bookshelf.vitalsource.com bookshelf.vitalsource.com
-
Virtualization is probably the most common method for running applications designed for one operating system on a different operating system, but on the same CPU. This method works relatively efficiently because the applications were compiled for the instruction set that the target system uses. But what if an application or operating system needs to run on a different CPU? Here, it is necessary to translate all of the source CPU's instructions so that they are turned into the equivalent instructions of the target CPU. Such an environment is no longer virtualized but rather is fully emulated. Emulation is useful when the host system has one system architecture and the guest system was compiled for a different architecture. For example, suppose a company has replaced its outdated computer system with a new system but would like to continue to run certain important programs that were compiled for the old system. The programs could be run in an emulator that translates each of the outdated system's instructions into the native instruction set of the new system. Emulation can increase the life of programs and allow us to explore old architectures without having an actual old machine. As may be expected, the major challenge of emulation is performance. Instruction-set emulation may run an order of magnitude slower than native instructions, because it may take ten instructions on the new system to read, parse, and simulate an instruction from the old system. Thus, unless the new machine is ten times faster than the old, the program running on the new machine will run more slowly than it did on its native hardware. Another challenge for emulator writers is that it is difficult to create a correct emulator because, in essence, this task involves writing an entire CPU in software. In spite of these challenges, emulation is very popular, particularly in gaming circles. Many popular video games were written for platforms that are no longer in production. Users who want to run those games frequently can find an emulator of such a platform and then run the game unmodified within the emulator. Modern systems are so much faster than old game consoles that even the Apple iPhone has game emulators and games available to run within them.
Emulation enables software designed for one hardware architecture to run on a different system by translating CPU instructions. Unlike virtualization, which optimizes performance by running code natively, emulation is slower because each instruction must be translated. Emulation is valuable for legacy software preservation, allowing old programs to run on modern systems. It is also widely used in gaming, enabling old console games to run on new hardware. However, performance limitations make it impractical for high-performance computing tasks. Writing a correct emulator is challenging because it requires replicating an entire CPU in software.
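As a rough illustration of the fetch-decode-execute translation the passage describes, here is a toy interpreter for a made-up three-instruction "old" CPU (Python assumed; not from the textbook). Every guest instruction costs several host operations to fetch, decode, and simulate, which is where the order-of-magnitude slowdown comes from.

```python
# Toy emulator sketch: a fetch-decode-execute loop for a made-up "old" instruction
# set. Each guest instruction requires several host operations, illustrating why
# emulation runs much slower than native execution.
def emulate(program):
    regs = {"A": 0, "B": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]          # fetch and decode the guest instruction
        if op == "LOAD":                 # LOAD reg, value
            regs[args[0]] = args[1]
        elif op == "ADD":                # ADD dst, src  ->  dst += src
            regs[args[0]] += regs[args[1]]
        elif op == "PRINT":              # PRINT reg
            print(regs[args[0]])
        else:
            raise ValueError(f"unknown instruction: {op}")
        pc += 1                          # advance the guest program counter
    return regs

# Guest program compiled for the "old" CPU: compute 2 + 3 and print it.
emulate([("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B"), ("PRINT", "A")])
```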
-
-
-
Back in the 90s, we didn’t have the luxury of walkthrough videos or automatic mapping; instead, we crafted our own guides, meticulously jotting down notes and sketching maps by hand. Sometimes we even went to the copy shop to copy notes and maps made by friends. As I recently stumbled upon a treasure trove (well - a ring binder) of my old self-written notes and hand-drawn maps, I was transported back to those fun days of exploration and discovery. I was surprised how much time and effort we were able to put into a single topic - most likely because we had access to only a hand full of games and had (apart from school) no other distractions.
Cf. TUNIC – in that case it's trying to nudge you in the direction of that paper world, but the instruction manual exists premade. The aesthetic study IG girlies know that meticulous note-taking is its own satisfaction; what would a game (experience?) look like if designed around the idea that you would have to produce your own artifact to play it, and that the real satisfaction should come from that artifact's production?
-
-
conceptually.org conceptually.org
-
Another strategy is to try to avoid playing a game of political tug-of-war altogether. As the economist Robin Hanson puts it: pull the rope sideways. Instead of joining a side and pulling on the rope (of the Overton window), pull it sideways in a direction no one will resist.
I thought the concept of pulling the rope sideways was a great idea to always keep in mind. We should all try to conceptualize the different angles we can take when it comes to policy and law. Maybe we should look at different angles to impact social change.
-
-
4thgenerationcivilization.substack.com 4thgenerationcivilization.substack.com
-
It becomes possible
for - universal ledger - necessary but not sufficient
comment - traumatized humans who succeed in becoming the next generation of abusers will always game any system design
-
