  1. Oct 2024
    1. Reckless audacity came to be considered the courage of a loyal ally; prudent hesitation, specious cowardice; moderation was held to be a cloak for unmanliness; ability to see all sides of a question, inaptness to act on any. Frantic violence became the attribute of manliness; cautious plotting, a justifiable means of self-defence. The advocate of extreme measures was always trustworthy; his opponent a man to be suspected.

      extremism became accepted and moderation was rejected; even the definitions of rational conduct changed

    2. but war takes away the easy supply of daily wants, and so proves a rough master, that brings most men's characters to a level with their fortunes.

      war brings out people's gritty natures

    3. and although the crime imputed was that of attempting to put down the democracy, some were slain also for private hatred, others by their debtors because of the moneys owed to them.

      the excuse of killing for the nation allowed people to abuse it; killing for any reason became accepted, and there was utter violence against each other

    4. Peloponnesians, however, although victorious in the sea-fight, did not venture to attack the town, but took the thirteen Corcyraean vessels which they had captured, and with them sailed back to the continent from whence they had put out.

      The Peloponnesians take the captured Corcyraean ships and return home

    1. wool was the dominant textile with linen as the next important manufactured textile produced.

      this was due to undeveloped technology like the loom, and it wasn't until people discovered that plant material could also be made into fabric that other textiles caught up

    2. As with many technologies, the development of the clock was driven by societal needs.

      that is why technologies and strategies are made: to make something easier for people to access or know. For example, early inventions like sundials eventually led to the clocks we have now.

    3. The working day of a farmer was still very difficult, even with the technological improvements of the Medieval Age.

      I think the difficulties they faced were the changing seasons, the weather, and the fact that much of the work still had to be done by hand.

    4. This section focuses on technologies that appear to be natively "European." Other technologies (such as paper, gunpowder, the compass, stirrups, among others) were based on older developments in other regions, particularly China.

      interesting to see how some technologies are based on older developments from other regions.

    1. If to spit in the face, cudgel in the streets, fight devils, quarrel in the law, make laws & violate them, oppress & enslave mankind, take away all the honest labours & genius of one part of the community to riot upon, and to aggrandize the rest, gamble, drink to excess, wallow in debauchery, violate the chastity of women, betray publick trust, waist the funds, deceive the people, bely other nations, enslave their citizens, & your own,

      Observation: How the American system is unfair, displaying its power and dominating minorities through laws, oppression, and enslavement while taking away honest wages.

      Interpretation: Describes the mistreatment in America that Black Americans had to endure: oppression, and laws put in place that did not benefit them but only kept African Americans oppressed.

    2. Our burdens are heavy & call loud for justice! call loud for mercy! I Therefore, take the liberty Sir, to address you myself upon the subject of slaviry, and ask you a few questions respecting Mr. Duane’s politicks

      Observation: The heaviness the enslaved felt, their call for justice and mercy, and their wish to ask Thomas Jefferson questions.

      Interpretation: This sentence in the letter describes the emotions felt by the anonymous writer and other enslaved people: burdened, seeking justice, and hoping to be shown mercy.

    1. Error of Octant 2° 00′ 00″ +. made Several other observations—I made an angle for the Wedth of the two rivers. The Missourie from the Point to the N. Side is 875 yards wide the Osage River from the point to the S. E Side is 397 yards wide, the destance between the two rivers at the pt. of high Land (ioo foot above the bottom) and 80 poles up the Missouries from the point is 40 poles, on the top of this high land under which is a limestone rock two Mouns or graves are raised—from this

      Observation: An octant is used to measure an angle for the width of the two rivers being observed.

      Interpretation: They observe the rivers and work out the distance between the two.

    2. Friday Set out early a fair morning Passed the mouth Bear Creek 25 yds. Wide at 6 Miles, Several Small Islands in the river the wind a head from the West the Current exceedingly rapid Came to on the point of the Osarges River on the Labd Side of Missouries this osages river Verry high, felled all the Trees in the point to Make observations Sit up untill 12 oClock taken oservation this night

      Observation: A journal entry written on a Friday morning discussing the waterways.

      Interpretation: How Lewis or Clark described their journey exploring the rivers, even felling the trees on the point to make observations.

  2. docdrop.org
    1. Which of these factors are most powerful in determining a child's success in school?

      This chapter raises a quite interesting subject. The success of a child in school depends on many variables. The example of Alexander and Anthony provides an excellent illustration of the students' socioeconomic background as well as their parents' educational history, which helps to influence the academic success of the children.

    2. Children are more successful in school when they are able to pay attention, when they get along with peers and teachers, and when they are not preoccupied or depressed because of troubles at home. Using the same SAT-type metric as for reading scores, figure 3.1 shows that, according to teachers, children from more affluent families are more engaged than their low-income peers.

      I think kids do better in school when they can focus, get along with others, and aren’t weighed down by problems at home. For children from low-income families, outside factors can make it really hard to thrive. Things like family stress, lack of access to healthcare, or emotional struggles can slow down their physical and mental development. When kids are distracted by issues like financial strain or tension at home, it affects not only their academic performance but also their overall well-being.

    3. It is easy to imagine how the childhood circumstances of these two young men may have shaped their fates. Alexander lived in the suburbs while Anthony lived in the city center. Most of Alexander's suburban neighbors lived in families with incomes above the $125,000 that now separates the richest 20 percent of children from the rest. Anthony Mears's school served pupils from families whose incomes were near or below the $27,000 threshold separating the bottom 20 percent (see figure 2.4). With an income of more than $300,000, Alexander's family was able to spend far more money on Alexander's education, lessons, and other enrichment activities than Anthony's parents could devote to their son's needs. Both of Alexander's parents had professional degrees, so they knew all about what Alexander needed to do to prepare himself for college. Anthony's mother completed some classes after graduating from high school, but his father, a high school dropout, struggled even to read. And in contrast to Anthony, Alexander lived with both of his parents, which not only added to family income but also increased the amount of time available for a parent to spend with Alexander.

      This shows how greatly family income and socioeconomic background affect school results. From access to better schools to lots of resources for extracurricular activities, Alexander's family wealth and suburban surroundings give him major benefits that help him succeed in his academics and in life. Anthony's family, on the other hand, battles lower educational achievement and financial uncertainty, which reduces chances for academic excellence and parental participation in his schooling. The disparity in family structure is also quite important since Alexander gains from the presence of both parents, which not only raises household income but also improves the availability of parental time and support.

    4. When parent-child relationships are warm, children respond well. When children respond well, harsh parenting practices are less common

      This stands out to me because parenting reflects everything that happens in a parent's life. In a household with constant worries and stress, the parent tends to project those worries onto the kids, whether through anger, always responding as if annoyed, yelling, or general anxiety, and that in turn affects the kids' behavior in school or wherever they may be. It even shapes how differently families look at spending money.

    1. Built in the open: Concourse's RFC process and governance model invite anyone to become a contributor, developing the project roadmap by collaborating in the open.
    1. only legal when it is regulated by the provincial or territorial government
      • Weiwei incident
    2. Theft, Robbery, and Fraud
      • Unlike theft, robbery involves stealing property while the victim is present, but the distinction is often left to police discretion

      • Fraud example -> scam emails that ask for our personal information

      • Another example of fraud is a Ponzi scheme

    3. The Wayfair Conspiracy
      • Wayfair was accused of trafficking children and selling them through its marketplace, but the claim is not grounded in credible evidence

      • 2020

    4. Bill C-36
      • Introduced for the protection of sex workers, but in effect forced them into underground sex work

      • Prevented easier screening of clients on the part of sex workers

    5. Psychological control including:
      • Eg. Chemical dependency, grooming through a false sense of love and obligation, etc.

      • Exploits the human need for connection, even when there's exploitation and abuse involved

      • Stockholm syndrome -> the captive develops a bond with their captor as a defense mechanism under extreme duress

    6. Human trafficking and human smuggling are distinguishable.
      • Trafficking involves transporting people to another country without their consent, for the purpose of forcing them into the sex trade or forced labour

      • What, why, and how are the three elements of human trafficking

      • Smuggling involves transporting people to another country consensually; unlike trafficking, it does not rely on coercion and deception

      • Smuggled people often take part because of unsafe environments in their home country, seeking a better life elsewhere

    7. State Crime
      • Perpetrated by the government, by agencies within it, or by individuals acting on its behalf
      • Eg. war crimes; real-life example: the residential schools of Canada
      • State crimes occur mostly for money


  3. docdrop.org
    1. Perhaps even more puzzling, why has it been so difficult to confront and transform the features embedded in the school structure that are responsible for facilitating success for some and failure for others

      This has always been a known issue. In this diverse society, teachers themselves pick and choose whom they want to help out and see succeed. Big public schools have long been known to segregate people of color because they do not think they have the same capacity as others to succeed educationally. On the other hand, in a very diverse school most of us tend to push very hard, but because of the feeling of being let down and not having support, we tend to slack off and fall short of our full potential.

    1. Over the next 19 years, the Muslims conquered most of Spain and were threatening to conquer France until stopped by Charles Martel

      Why did Charles Martel stop the Muslims from conquering France?

    1. The accompanying argument is that all scientific contributions from non-European civilizations were technology-based, not science-based

      I wonder why they think it's not science-based?

    1. Case: patient is named case #2, male

      Disease Assertion: UCD/OTCD

      Family Info:

      Case Presenting HPOs: Hyperammonemia (HP:0001987), orotic aciduria (HP:0003218), low plasma citrulline (HP:0003572), neonatal onset (HP:0003623), hyperglutaminemia (HP:0003217)

      Case HPO FreeText:

      Case NOT HPOs:

      Case NOT HPO Free Text:

      Case Previous Testing: gDNA was isolated from lymphocytes. To examine small mutations in the coding region of the OTC gene, all 10 exons and their flanking intron regions were amplified using PCR, and the nucleotide sequences of the amplified products were determined. To determine the intron 5 sequence of case 2, PCR was performed using primers OTCex5F and OTCint5R, and primers OTCint5F and OTCex6R (Table 1, Fig. 3). The amplified products were subcloned into the pT7 vector and the inserted DNA was sequenced using an automated DNA sequencer. An allopurinol test was also performed.

      Supplemental Data: TABLE 1, Notes:

      Variant: NM_000531.6: c.540+265G>A

      ClinVarID: NA

      CAID: CA658658977

      gnomAD:

      Gene Name: OTC (ornithine transcarbamylase)
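      For readers unfamiliar with the intronic HGVS notation in the variant field above, the sketch below splits c.540+265G>A into its parts. The regex and field names are my own assumptions for this illustration, not a general HGVS parser.

```python
import re

# Minimal, illustrative pattern for simple intronic substitutions such as
# "c.540+265G>A". This is an assumption-laden sketch, not a full HGVS parser.
HGVS_INTRONIC = re.compile(
    r"c\.(?P<base>\d+)(?P<offset>[+-]\d+)(?P<ref>[ACGT])>(?P<alt>[ACGT])"
)

def parse_intronic(hgvs: str) -> dict:
    """Split a simple intronic substitution into base, offset, ref, alt."""
    m = HGVS_INTRONIC.fullmatch(hgvs)
    if m is None:
        raise ValueError(f"not a simple intronic substitution: {hgvs}")
    fields = m.groupdict()
    fields["base"] = int(fields["base"])      # last exonic position (540)
    fields["offset"] = int(fields["offset"])  # distance into the intron (+265)
    return fields

print(parse_intronic("c.540+265G>A"))
# {'base': 540, 'offset': 265, 'ref': 'G', 'alt': 'A'}
```

      Read this way, the variant lies 265 bases into the intron that follows coding position 540, consistent with the deep-intronic location discussed in the previous-testing notes.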

    1. Reviewer #2 (Public review):

      Goldstein et al. provide a thorough characterization of the interaction of attention and eye movement planning. These processes have been thought to be intertwined since at least the development of the Premotor Theory of Attention in 1987, and their relationship has been a continual source of debate and research for decades. Here, Goldstein et al. capitalize on their novel urgent saccade task to dissociate the effects of endogenous and exogenous attention on saccades towards and away from the cue. They find that attention and eye movements are, to some extent, linked to one another but that this link is transient and depends on the nature of the task. A primary strength of the work is that the researchers are able to carefully measure the time course of the interaction between attention and eye movements in various well-controlled experimental conditions. As a result, the behavioral interplay of two forms of attention (endogenous and exogenous) are illustrated at the level of tens of milliseconds as they interact with the planning and execution of saccades towards and away from the cued location. Overall, the results allow the authors to make meaningful claims about the time course of visual behavior, attention, and the potential neural mechanisms at a timescale relevant to everyday human behavior.

    2. Reviewer #3 (Public review):

      The present study used an experimental procedure involving time-pressure for responding, in order to uncover how the control of saccades by exogenous and endogenous attention unfolds over time. The findings of the study indicate that saccade planning is influenced by the locus of endogenous attention, but that this influence was short-lasting and could be overcome quickly. Taken together, the present findings reveal new dynamics between endogenous attention and eye movement control and lead the way for studying them using experiments under time-pressure.

      The results achieved by the present study advance our understanding of vision, eye movements, and their control by brain mechanisms for attention. In addition, they demonstrate how tasks involving time-pressure can be used to study the dynamics of cognitive processes. Therefore, the present study seems highly important not only for vision science, but also for psychology, (cognitive) neuroscience, and related research fields in general.

      I think the authors addressed all of the reviewers' points successfully and in detail, so I don't have any further suggestions or comments.

    3. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      The main research question could be defined more clearly. In the abstract and at some points throughout the manuscript, the authors indicate that the main purpose of the study was to assess whether the allocation of endogenous attention requires saccade planning [e.g., ll.3-5 or ll.247-248]. While the data show a coupling between endogenous attention and saccades, they do not point to a specific direction of this coupling (i.e., whether endogenous attention is necessary to successfully execute a saccade plan or whether a saccade plan necessarily accompanies endogenous attention).

      Thanks for the suggestion. We have modified the text in the abstract and at various points in the text to make it more clear that the study investigates the relationship between attention and saccades in one particular direction, first attentional deployment and then saccade planning.

      Some of the analyses were performed only on subgroups of the participants. The reporting of these subgroup analyses is transparent and data from all participants are reported in the supplementary figures. Still, these subgroup analyses may make the data appear more consistent, compared to when data is considered across all participants. For instance, the exogenous capture in Experiments 1 and 2 appears much weaker in Figure 2 (subgroup) than Figure S3 (all participants). Moreover, because different subgroups were used for different analyses, it is often difficult to follow and evaluate the results. For instance, the tachometric curves in Figure 2 (see also Figure 3 and 4) show no motor bias towards the cue (i.e., performance was at ~50% for rPTs <75 ms). I assume that the subsequent analyses of the motor bias were based on a very different subgroup. In fact, based on Figure S2, it seems that the motor bias was predominantly seen in the unreliable participants. Therefore, I often found the figures that were based on data across all participants (Figures 7 and S3) more informative to evaluate the overall pattern of results.

      Indeed, our intent was to dissociate the effects on saccade bias and timing as clearly as possible, even if that meant having to parse the data into subgroups of participants for different analyses. We do think conceptually this is the better strategy, because the bias and timing effects were distinct and not strongly correlated with specific participants or task variants. For instance, the unreliable participants were somewhat more consistently biased in the same direction, but the reliable participants also showed substantial biases, so the difference in magnitude was relatively modest. This can be more easily appreciated now that the reliable and unreliable participants are indicated in Figures 3 and 5. The impact of the bias is also discussed further in the last paragraphs of the Results, which note that the bias was not a reliable predictor of overall success during informed choices.

      Reviewer #3 (Public Review):

      (1) In this experimental paradigm, participants must decide where to saccade based on the color of the cue in the visual periphery (they should have made a prosaccade toward a green cue and an antisaccade away from a magenta cue). Thus, irrespective of whether the cue signaled that a prosaccade or an antisaccade was to be made, the identity of the cue was always essential for the task (as the authors explain on p. 5, lines 129-138). Also, the location where the cue appeared was blocked, and thus known to the participants in advance, so that endogenous attention could be directed to the cue at the beginning of a trial (e.g., p. 5, lines 129-132). These aspects of the experimental paradigm differ from the classic prosaccade/antisaccade paradigm (e.g. Antoniades et al., 2013, Vision Research). In the classic paradigm, the identity of the cues does not have to be distinguished to solve the task, since there is only one stimulus that should be looked at (prosaccade) or away from (antisaccade), and whether a prosaccade or antisaccade was required is constant across a block of trials. Thus, in contrast to the present paradigm, in the classic paradigm, the participants do not know where the cue is about to appear, but they know whether to perform a prosaccade or an antisaccade based on the location of the cue.

      The present paradigm keeps the location of the cue constant in a block of trials by intention, because this ensures that endogenous attention is allocated to its location and is not overpowered by the exogenous capture of attention that would happen when a single stimulus appeared abruptly in the visual field. Thus, the reason for keeping the location of the cue constant seems convincing. However, I wondered what consequences the constant location would have for the task representations that persist across the task and govern how attention is allocated. In the classic paradigm, there is always a single stimulus that captures attention exogenously (as it appears abruptly). In a prosaccade block, participants can prioritize the visual transient caused by the stimulus, and follow it with a saccade to its coordinates. In an antisaccade block, following the transient with a saccade would always be wrong, so that participants could try to suppress the attention capture by the transient, and base their saccade on the coordinates of the opposite location. Thus, in prosaccade and antisaccade blocks, the task representations controlling how visual transients are processed to perform the task differ. In the present task, prosaccades and antisaccades cannot be distinguished by the visual transients. Thus, such a situation could favor endogenous attention and increase its influence on saccade planning, even though saccade planning under more naturalistic conditions would be dominated by visual transients. I suggest discussing how this (and vice versa the emphasis on visual transients in the classic paradigm) could affect the generality of the presented findings (e.g., how does this relate to the interpretation that saccade plans are obligatorily coupled to endogenous attention? See, Results, p. 10, lines 306-308, see also Deubel & Schneider, 1996, Vision Research).

      Great discussion point. There are indeed many ways to set up an experiment where one must either look to a relevant cue or look away from it. Furthermore, it is also possible to arrange an experiment where the behavior is essentially identical to that in the classic antisaccade task without ever introducing the idea of looking away from something (Oor et al., 2023). More important than the specific task instructions or the structure of the event sequence, we think the fundamental factors that determine behavior in all of these cases are the magnitudes of the resulting exogenous and endogenous signals, and whether they are aligned or misaligned. Under urgent conditions, consideration of these elements and their relevant time scales explains behavior in a wide variety of tasks (see Salinas and Stanford, 2021). Furthermore, a recent study (Zhu et al., 2024) showed that the activation patterns of neurons in monkey prefrontal cortex during the antisaccade task can be accurately predicted from their stimulus- and saccade-related responses during a simpler task (a memory guided saccade task). This lends credence to the idea that, at the circuit level, the qualities that are critical for target selection and oculomotor performance are the relative strengths of the exogenous and endogenous signals, and their alignment in space and time. If we understand what those signals are, then it no longer matters how they were generated. The Discussion now includes a paragraph on this issue.

      (2) Discussion (p. 16, lines 472-475): The authors suppose that "It is as if the exogenous response was automatically followed by a motor bias in the opposite direction. Perhaps the oculomotor circuitry is such that an exogenous signal can rapidly trigger a saccade, but if it does not, then the corresponding motor plan is rapidly suppressed regardless of anything else.". I think this interesting point should be discussed in more detail. Could it also be that instead of suppression, other currently active motor plans were enhanced? Would this involve attention? Some attention models assume that attention works by distributing available (neuronal) processing resources (e.g., Desimone & Duncan, 1995, Annual Review of Neuroscience; Bundesen, 1990, Psychological Review; Bundesen et al., 2005, Psychological Review) so that the information receiving the largest share of resources results in perception and is used for action, but this happens without the active suppression of information.

      The rebound seen after the exogenously driven changes is certainly interesting, and we agree that it could involve not only the suppression of a specific motor plan but also enhancement of another (opposite) plan. However, we think that, given the lack of prior data with the requisite temporal precision, further elaboration of this point would just be too speculative in the context of the point that we are trying to make, which is simply that the underlying choice dynamics are more rapid and intricate than is generally appreciated.

      (3) Methods, p. 19, lines 593-596: It is reported that saccades were scored based on their direction. I think more information should be provided to understand which eye movements entered the analysis. Was there a criterion for saccade amplitude? I think it would be very helpful to provide data on the distributions of saccade amplitudes or on their accuracy (e.g. average distance from target) or reliability (e.g. standard deviation of landing points). Also, it is reported that some data was excluded from the analysis, and I suggest reporting how much of the data was excluded. Was the exclusion of the data related to whether participants were "reliable" or "unreliable" performers?

      The reported results are based on all saccades (detected according to a velocity threshold) that were produced after the go signal and in a predominantly horizontal direction (within ± 60° of the cue or non-cue), which were the vast majority (> 99%). Indeed, most saccades were directed to the choice targets, with 95% of them within ± 14.2° of the horizontal plane. The excluded (non-scored) trials were primarily fixation breaks plus a small fraction of trials with blinks, which compromised saccade determination. There was no explicit amplitude criterion; applying one (for instance, excluding any saccades with amplitude < 2°) produced minimal changes to the data. Overall, saccade amplitudes were distributed unimodally with a median of 7.7° and a 95% confidence interval of [3.7°, 9.7°], whereas the choice targets were located at ± 8° horizontally. This is now reported in the Methods.

      As far as data exclusion, analyses were based on urgent trials (gap > 0); non-urgent (gap < 0) trials were excluded from calculation of the tachometric curves simply because they might correspond to a slightly different regime (go signal after cue onset) and to long processing times in the asymptotic range (rPT in 200–300 ms) or beyond, which are not as informative. However, including them made no appreciable difference to the results. No data were excluded based on participant performance or identity; all psychometric analyses were carried out after the selection of trials based on the scoring criteria described above. This is now stated in the Methods.

      (4) Results, p. 9, lines 262-266: Some data analyses are performed on a subset of participants that met certain performance criteria. The reasons for this data selection seem convincing (e.g. to ensure empirical curves were not flat, line 264). Nevertheless, I suggest to explain and justify this step in more detail. In addition, if not all participants achieved an acceptable performance and data quality, this could also speak to the experimental task and its difficulty. Thus, I suggest discussing the potential implications of this, in particular, how this could affect the studied mechanisms, and whether it could limit the presented findings to a special group within the studied population.

      The ideal (i.e., best) analysis for determining the cost of an antisaccade for each individual participant (Fig. 4c) was based on curve fitting and required task performance to rise consistently above chance at long rPTs in both pro and anti trials. This is why the mentioned conditions on the fits were imposed. This is now explained in the text. This ideal analysis was not viable for all tachometric curves not necessarily because of task difficulty but also because of high variability or high bias in a particular experiment/condition. It is true that the task was somewhat difficult, but this manifested in various ways across the dataset, so attempting to draw a clean-cut classification of participants based on “difficulty” may not be easy or all that informative (as can be gleaned from Fig. S1). There simply was a range of success levels, as one might expect from any task that requires some nontrivial cognitive processing. Also note that no participants were excluded flat out from analysis. Thus, at the mentioned point in the text, we simply note that a complementary analysis is presented later that includes all participants and all conditions and provides a highly consistent result (namely, Fig. 7e). Then, in the last section of the Results, where Fig. 7 is presented, we point out that there is considerable variance in performance at long rPTs, and that it relates to both the bias and the difficulty of the task across participants.   

      Reviewer #1 (Recommendations For The Authors):

      (1) I have some questions related to the initial motor bias:

      a) Based on Figure S3, which shows the tachometric curves using data from all participants, there only seems to be a systematic motor bias in Experiments 1 and 3 but no bias in Experiments 2 and 4. It is unclear to me why this is different from the data shown in Figure 7.

      For the bars in Fig. 7, accuracy (% correct) was computed for each participant and then averaged across participants, whereas for the data in Fig. S3, trials were first pooled across participants and then accuracy was computed for each rPT bin. The different averaging methods produce slightly different results because some participants had more trials in the guessing range than others, and different biases.  
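      A toy sketch (hypothetical numbers, not the study's data) of how the two averaging orders can diverge when participants contribute different numbers of trials:

```python
# Two hypothetical participants: (correct trials, total trials).
# Participant A contributes few trials, participant B contributes many.
participants = [(9, 10), (30, 100)]

# Per-participant averaging (the Fig. 7 style): compute each participant's
# accuracy first, then average the accuracies.
per_participant = sum(c / n for c, n in participants) / len(participants)

# Trial pooling (the Fig. S3 style): pool all trials across participants,
# then compute a single accuracy.
pooled = sum(c for c, _ in participants) / sum(n for _, n in participants)

print(round(per_participant, 3))  # 0.6
print(round(pooled, 3))           # 0.355
```

      The per-participant average weights both participants equally, while pooling weights each trial equally, so the participant with more trials dominates; with unequal trial counts and unequal biases, the two summaries need not agree.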

      b) Based on Figure 7 (and Figure S3), there was no motor bias in Experiment 4. Based on the correlations between motor bias and time difference between pro and antisaccades, I would expect that the rise points between pro and antisaccades would be more similar in this Experiment. Was this the case?

      No. Figs. 3c and S3d show that the rise times of pro and anti trials for Experiment 4 still differ by about 30 ms (around the 75% correct mark), and the rest of the panels in those figures show that the difference is similar for all experiments. What happens is that Figs. 7 and S3 show that on average the bias is zero for Experiment 4, but that does not mean that the average difference in rise times is zero because there is an offset in the data (correlation is not the same as regression). The most relevant evidence is in Fig. 6c, which shows that, for an overall bias of zero, one would still expect a positive difference in rise times of about 25–30 ms. This figure now includes a regression line, and the corresponding text now explains the relationship between bias and rise times more clearly. Thanks for asking; this is an important point that was not sufficiently elaborated before.

      c) If I understand correctly, the initial motor bias was predominantly observed in participants who were classified as 'unreliable performers' (comparing Figure S2 and Figure 2). Was there a correlation between the motor bias and overall success in the task? In other words: Was a strong motor bias generally disadvantageous?

      Good question. Participants classified as ‘unreliable’ were somewhat more consistently biased in the same direction than those classified as ‘reliable’, but the distinction in magnitude was not large. This can be better appreciated now in Fig. 5 by noting the mix of black (reliable) and gray labels (unreliable) along the x axes. The unreliable participants were also, by definition, less accurate in their asymptotic performance in at least one experiment (Fig. S1). In general, however, this classification was used simply to distinguish more clearly the two main effects in the data (timing cost and bias). In fact, the motor bias was not a reliable predictor of performance during informed choices: across all participants, the mean accuracy in the asymptotic range (rPT > 200 ms) had a weak, non-significant correlation with the bias (ρ = ‒0.07, p = 0.7). So, no, the motor bias did not incur an obvious disadvantage in terms of overall success in the task. Its more relevant effect was the asymmetry in performance that it promoted between pro- and antisaccade trials (Fig. 6c). This is now explained at the end of the Results.

      (2) One of the key analyses of the current study is the comparison of the rPT required to make informed pro and antisaccades (ll.246 ff). I think it would be informative for readers to see the results of this analysis separately for all four experiments. For instance, based on Figure 4a and b, it looks like the rise points were actually very similar between pro and antisaccades in Experiment 1.

      We agree that the ideal analysis would be to compute the performance rise point for pro- and antisaccade curves for each experiment and each participant, but as is now noted in the text, this requires a steady and substantial rise in the tachometric curve, which is not always obtained at such a fine-grained level; the underlying variability can be glimpsed from the individual points in Fig. 7a, b. Indeed, in Fig. 4a, b the mean difference between pro and anti rise points appears small for Experiment 1 — but note that the two panels include data from only partially overlapping sets of participants; the figure legend now makes this more clear. Again, this is because the required fitting procedure was not always reliable in both conditions (pro and anti) for a given subject in a given experiment. Thus, panels a and b cannot be directly compared. The key results are those in Fig. 4c, which compare the rise points in the two conditions for the same participants (11 of them, for which both rise points could be reliably determined). In that case the mean difference is evident, and the individual effect consistent for 9 of the 11 participants (as now noted).

      A similar comparison for Experiments 1 or 2 individually would include fewer data points and lose statistical power. However, on average, the results for Experiments 1 and 2 (separately) were indeed very similar; in both cases, the comparison between pro and anti curves pooled across the same qualifying participants as in Fig. 4c produced results that were nearly identical to those of Fig. 4d (as can be inferred from Fig. 2a, b). Furthermore, results for the four individual experiments pooled across all participants are presented in Figure S3, which shows delayed rises in antisaccade performance consistent with the single participant data (Fig. 4c).

      (3) Figure 3: It would be helpful to indicate the reliable performers that were used for Figure 3a in the bar plots in Figure 3b. Same for Figures 3c and d.

      Done. Thanks for the suggestion.

      (4) Introduction: The literature on the link between covert attention and directional biases in microsaccades seems relevant in the context of the current study (e.g., Hafed et al., 2002, Vision Res; Engbert & Kliegl, 2003, Vision Res; Willett & Mayo, 2023, Proc Natl Acad Sci USA).

      Yes, thanks for the suggestion. The introduction now mentions the link between attentional allocation and microsaccade production.

      (5) ll.395ff & Figure 7f: Please clarify whether data were pooled across all four experiments for this analysis.

      Yes, the data were pooled, but a positive trend was observed for each of the four experiments individually. This is now stated.

      (6) ll.432-433: There is evidence that the attentional locus and the actual saccade endpoint can also be dissociated (e.g., Wollenberg et al., 2018, PLoS Biol; Hanning et al., 2019, Proc Natl Acad Sci USA).

      True. We have rephrased accordingly. Thanks for the correction.

      (7) ll.438-440: This sentence is difficult to parse.

      Fixed.

      Reviewer #2 (Recommendations For The Authors):

      The manuscript is well-written and compelling. The biggest issue for me was keeping track of the specifics of the individual experiments. I think some small efforts to reinforce those details along the way would help the reader. For example, in the Figure 3 figure legend, I found the parenthetical phrase "high luminence cue, low luminence non-cue)" immensely helpful. It would be helpful and trivial to add the corresponding phrase after "Experiment 4" in the same legend.

      Thanks for the suggestion. Legends and/or labels have been expanded accordingly in this and other figures.

      Line 314: "..had any effect on performance,..." Should there be a callout to Figure 2 here?

      Done.

      It wasn't clear to me why the specific high and low luminance values (48 and 0.25) were chosen. I assume there was at least some quick perceptual assessment. If that's the case or if the values were taken from prior work, please include that information.

      Done.

      Reviewer #3 (Recommendations For The Authors):

      Minor points. Please note that the comments made in the public review above are not repeated here.

      (1) Introduction, p. 2, lines 41-45: It is mentioned that the effects of covert attention or a saccade can be quite distinct. I suggest specifying in what way.

      Done.

      (2) Introduction, p. 2, lines 46-47: It is said that the relation between attention and saccade planning was still uncertain and then it is stressed that this was the case for more natural viewing conditions. However, the discussed literature and the experimental approach of the current study still rely on experimental paradigms that are far from natural viewing conditions. Thus, I suggest either discussing the link between these paradigms and natural viewing in more detail or leaving out the reference to natural viewing at this point (I think the latter suggestion would fit the present paper best).

      We followed the latter suggestion.

      (3) Introduction (e.g. p. 3, lines 55-58): The authors discuss the effects that sustaining fixation might have on attention and eye movements. Recently, it has been found that maintaining fixation can ameliorate cognitive conflicts that involve spatial attention (Krause & Poth, 2023, iScience). It seems interesting to include this finding in the discussion, because it supports the authors' view that it is necessary to study fixation and eye movements rather than eye movements alone to uncover their interplay with attention and decision-making.

      Thanks for the reference. The reported finding is certainly interesting, but we find it somewhat tangential to the specific point we make about strong fixation constraints — which is that they suppress internally driven motor activity, including biases, that are highly informative of the relationship between attention and saccade planning (lines 466‒472, 541‒561). Whether fixation state has other subtle consequences for cognitive control is an intriguing, important issue, for sure. But we would rather maintain the readers’ focus on the reasons why less restrictive fixation requirements are relevant for understanding the deployment of attention.

      (4) Results, p. 9, lines 264-266: It is reported that "The rise points were statistically the same across experiments for both prosaccades (p=0.08, n=10, permutation test)...", but the p-value seems quite close to significance. I suggest mentioning this and phrasing the sentence a bit more carefully.

      We now refer to the rise points as “similar”.

      (5) Figure 7 a-d: It might help readers who first skim through the figures before reading the text to use other labels for the bins on the x-axis that spell out the name of the phase in the trial. It might also help to visualize the bins on the plot of a tachometric function (in this case, changing the labels could be unnecessary).

      Thanks for the suggestion. We added an inset to the figure to indicate the correspondence between labels and time bins more intuitively.

      (6) Methods, p. 18, lines 566-567: On some trials, participants received an auditory beep as a feedback stimulus. As this could induce a burst of arousal, I wondered how it affected the subsequent trials.

      This is an interesting issue to ponder. We agree that, in principle, the beep could have an impact on arousal. However, what exactly would be predicted as a consequence? The absence of a beep is meant to increase the urgency of the participant, so some effect of the beep event on RT would be expected anyway as per task instructions. Thus, it is unclear whether an arousal contribution could be isolated from other confounds. That said, three observations suggest that, at most, an independent arousal effect would be very small. First, we have performed multisensory experiments (unpublished) with auditory and visual stimuli, and have found that it is difficult to obtain a measurable effect of sound on an urgent visual choice task unless the experimental conditions are particularly conducive; namely, when the visual stimuli are dim and the sound is loud and lateralized. None of these conditions applies to the standard feedback beep. Second, because most trials are on time, the meaningful feedback signal is conveyed by the absence of the beep. But this signal to alter behavior (i.e., respond sooner) has zero intensity and is therefore unlikely to trigger a strong exogenous, automatic response. Finally, in our data, we can parse the trials that followed a beep (the majority) from those that did not (a minority). In doing so, we found no differences with respect to perceptual performance; only minor differences in RT that were identical for pro- and antisaccade trials. All this suggests to us that it is very unlikely that the feedback alters arousal significantly on specific trials, somehow impacting the tachometric curve (a contribution to general arousal across blocks or sessions is possible, of course, but would be of little consequence to the aims of the study).

      (7) Methods, p. 18, lines 574-577: I suggest referring to the colors or the conditions in the text as it was done in the experiments, just to prevent readers being confused before reading the methods.

      We appreciate the thought, but think that the study is easier to understand by pretending, initially, that the color assignments were fixed. This is a harmless simplification. Mentioning the actual color assignments early on would be potentially more confusing and make the description of the task longer and more contrived.

      (8) Methods, p. 18, Table 1: Given that the authors had a spectrophotometer, I suggest providing (approximate) measurements for the stimulus colors in addition to the luminance (i.e. not just RGB values).

      Unfortunately, we have since switched the monitor in our setup, so we don’t have the exact color measurements for the stimuli used at the time. We will keep the suggestion in mind for future studies though.

      References

      Oor EE, Stanford TR, Salinas E (2023) Stimulus salience conflicts and colludes with endogenous goals during urgent choices. iScience 26:106253.

      Salinas E, Stanford TR (2021) Under time pressure, the exogenous modulation of saccade plans is ubiquitous, intricate, and lawful. Curr Opin Neurobiol 70:154-162.

      Zhu J, Zhou XM, Constantinidis C, Salinas E, Stanford TR (2024) Parallel signatures of cognitive maturation in primate antisaccade performance and prefrontal activity. iScience.  doi: https://doi.org/10.1016/j.isci.2024.110488.

    1. Product rule: ∂/∂x (f(x) g(x)) = (∂f/∂x) g(x) + f(x) (∂g/∂x) (5.46) Sum rule: ∂/∂x (f(x) + g(x)) = ∂f/∂x + ∂g/∂x (5.47)

      Interesting that backpropagation really is just the chain rule over and over again. I wonder why it took us so long to realize the strengths of DL.
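The point about backpropagation being repeated chain-rule application can be made concrete with a tiny sketch. This is an illustrative example (not from the book): a two-step function y = (w·x + b)², differentiated manually by chaining local derivatives, exactly as backprop does layer by layer.

```python
# Minimal sketch of backprop as repeated chain rule, for y = (w*x + b)**2.
def forward_backward(w, x, b):
    z = w * x + b        # forward: linear step
    y = z ** 2           # forward: nonlinearity
    # backward: chain rule, dy/dw = (dy/dz)(dz/dw), dy/db = (dy/dz)(dz/db)
    dy_dz = 2 * z
    dy_dw = dy_dz * x
    dy_db = dy_dz * 1
    return y, dy_dw, dy_db

y, dy_dw, dy_db = forward_backward(w=3.0, x=2.0, b=1.0)
# z = 7, so y = 49, dy/dw = 14 * 2 = 28, dy/db = 14
```

Deep learning frameworks automate exactly this bookkeeping over arbitrarily long chains of operations.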

    2. The Taylor series is a representation of a function f as an infinite sum of terms. These terms are determined using derivatives of f evaluated at x0

      It's interesting that you do not see many Taylor series approaches in the context of deep neural networks. Nested perceptrons can create nonlinear decision boundaries, and Taylor series appear powerful enough to do that too, yet we do not use them. Is it because of computational limits?
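To ground the annotation above, here is a short illustrative sketch (my own, not from the book) of a truncated Taylor series: approximating exp(x) around x₀ = 0 with its first n terms.

```python
import math

# Truncated Taylor series of exp around x0 = 0:
# exp(x) ≈ sum_{k=0}^{n-1} x^k / k!
def taylor_exp(x, n_terms):
    return sum(x**k / math.factorial(k) for k in range(n_terms))

approx = taylor_exp(1.0, 10)
# with 10 terms, approx agrees with math.e to about 6-7 decimal places
```

Even a handful of terms gives a good local approximation, which is partly why Taylor expansions are used to *analyze* networks (e.g. around trained weights) even if they are rarely used as the model itself.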

    1. unexpected events in ways that would not otherwise be possible

      like observing the glacier collapse/melting at certain point that no one would expect to happen

  4. www.bitbybitbook.com
    1. they can enable certain kinds of research including the study of rare events, the estimation of heterogeneity, and the detection of small differences

      key points

    1. it might suggest new data that you should collect

      It's quite like the process scholars follow when doing research, since we are all supposed to conduct our own research on the basis of preexisting findings. That's exactly why we do literature reviews.

    1. Welcome back.

      In this lesson, I'll be talking about Network Address Translation, or NAT, a process of giving a private resource outgoing only access to the internet.

      And a NAT gateway is the AWS implementation that's available within a VPC.

      There's quite a bit of theory to cover, so let's get started.

      So what is NAT?

      Well, it stands for Network Address Translation.

      This is one of those terms which means more than people think that it does.

      In a strict sense, it's a set of different processes which can adjust IP packets by changing their source or destination IP addresses.

      Now, you've seen a form of this already.

      The internet gateway actually performs a type of NAT known as static NAT.

      It's how a resource can be allocated with a public IP version 4 address, and then when the packets of data leave those resources and pass through the internet gateway, it adjusts the source IP address on the packet from the private address to the public, and then sends the packet on, and then when the packet returns, it adjusts the destination address from the public IP address to the original private address.

      That's called static NAT, and that's how the internet gateway implements public IP version 4 addressing.

      Now, what most people think of when they think of NAT is a subset of NAT called IP Masquerading.

      And IP Masquerading hides a whole private side IP block behind a single public IP.

      So rather than the one private IP to one public IP process that the internet gateway does, NAT is many private IPs to one single IP.

      And this technique is popular because IP version 4 addresses are running out.

      The public address space is rapidly becoming exhausted.

      IP Masquerading, or what we'll refer to for the rest of this lesson as NAT, gives a whole private range of IP addresses outgoing only access to the public internet and the AWS public zone.

      I've highlighted outgoing because that's the most important part, because many private IPs use a single public IP.

      Incoming access doesn't work.

      Private devices that use NAT can initiate outgoing connections to internet or AWS public space services, and those connections can receive response data, but you cannot initiate connections from the public internet to these private IP addresses when NAT is used.

      It doesn't work that way.

      Now, AWS has two ways that it can provide NAT services.

      Historically, you could use an EC2 instance configured to provide NAT, but there's also a managed service, the NAT gateway, which you can provision in the VPC to provide the same functionality.

      So let's look at how this works architecturally.

      This is a simplified version of the Animals for Life architecture that we've been using so far.

      On the left is an application tier subnet in blue, and it's using the IP range 10.16.32.0/20.

      So this is a private only subnet.

      Inside it are three instances, I01, which is using the IP 10.16.32.10, I02, which is using 32.20, and I03, which is using 32.30.

      These IP addresses are private, so they're not publicly routable.

      They cannot communicate with the public internet or the AWS public zone services.

      These addresses cannot be routed across a public style network.

      Now, if we wanted this to be allowed, if we wanted these instances to perform certain activities using public networking, for example, software updates, how would we do it?

      Well, we could make the subnets public in the same way that we've done with the public subnets or the web subnets, but we might not want to do that architecturally.

      With this multi-tier architecture that we're implementing together, part of the design logic is to have tiers which aren't public and aren't accessible from the public internet.

      Now, we could also host some kind of software update server inside the VPC, and some businesses choose to do that.

      Some businesses run Windows update services or Linux update services inside their private network, but that comes with an admin overhead.

      NAT offers us a third option, and it works really well in this style of situation.

      We provision a NAT gateway into a public subnet, and remember, the public subnet allows us to use public IP addresses.

      The public subnet has a route table attached to it, which provides default IP version 4 routes pointing at the internet gateway.

      So, because the NAT gateway is located in this public web subnet, it has a public IP which is routable across the public internet, so it's now able to send data out and get data back in return.

      Now, the private subnet where the instances are located can also have its own route table, and this route table can be different than the public subnet route table.

      So, we could configure it so that the route table that's on the application subnet has a default IP version 4 route, but this time, instead of pointing at the internet gateway, like the web subnet users, we configure this private route table so that it points at the NAT gateway.

      This means when those instances are sending any data to any IP addresses that do not belong inside the VPC, by default, this default route will be used, and that traffic will get sent to the NAT gateway.
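The routing logic just described can be sketched in a few lines. This is a hypothetical model, not AWS code: the route targets ("igw-12345", "nat-67890") and the VPC CIDR are made-up illustrative values, and route selection uses longest-prefix match, so the local VPC route wins over the default route.

```python
import ipaddress

# Hypothetical route tables for the scenario above (illustrative IDs).
public_rt  = {"10.16.0.0/16": "local", "0.0.0.0/0": "igw-12345"}
private_rt = {"10.16.0.0/16": "local", "0.0.0.0/0": "nat-67890"}

def route(table, dest_ip):
    # Longest-prefix match: the most specific matching route wins.
    matches = [cidr for cidr in table
               if ipaddress.ip_address(dest_ip) in ipaddress.ip_network(cidr)]
    best = max(matches, key=lambda cidr: ipaddress.ip_network(cidr).prefixlen)
    return table[best]

# Traffic from a private instance to a public host follows the default
# route to the NAT gateway; traffic to another VPC address stays local.
route(private_rt, "1.3.3.7")       # -> "nat-67890"
route(private_rt, "10.16.32.10")   # -> "local"
```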

      So, let's have a look at how this packet flow works.

      Let's simulate the flow packets from one of the private instances and see what the NAT gateway actually does.

      So, first, instance 1 generates some data.

      Let's assume that it's looking for software updates.

      So, this packet has a source IP address of instance 1's private IP and a destination of 1.3.3.7.

      For this example, let's assume that that's a software update server.

      Now, because we have this default route on the route table of the application subnet, that packet is routed through to the NAT gateway.

      The NAT gateway makes a record of the data packet.

      It stores the destination that the packet is for, the source address of the instance sending it, and other details which help it identify the specific communication in future.

      Remember, multiple instances can be communicating at once, and for each instance, it could be having multiple conversations with different public internet hosts.

      So, the NAT gateway needs to be able to uniquely identify those.

      So, it records the IP addresses involved, the source and destination, the port numbers, everything it needs, into a translation table.

      So, the NAT gateway maintains something called a translation table which records all of this information.

      And then, it adjusts the packet to the one that's been sent by the instance, and it changes the source address of this IP packet to be its own source address.
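The translation-table mechanics described above can be sketched as a toy model. This is purely illustrative, assuming a made-up public IP for the NAT device; real NAT implementations also track protocol, timeouts, and both endpoints.

```python
import itertools

NAT_PUBLIC_IP = "52.0.0.1"          # assumed public address of the NAT device
_ports = itertools.count(50000)      # source ports the NAT hands out
table = {}                           # nat_port -> (private_ip, private_port)

def outbound(src_ip, src_port, dst_ip, dst_port):
    # Record the communication, then masquerade: the packet leaves with
    # the NAT device's own source address and a unique source port.
    nat_port = next(_ports)
    table[nat_port] = (src_ip, src_port)
    return (NAT_PUBLIC_IP, nat_port, dst_ip, dst_port)

def inbound(dst_port):
    # A response arriving on a tracked port is mapped back to the
    # original private host; untracked ports have no entry.
    return table[dst_port]

pkt = outbound("10.16.32.10", 40000, "1.3.3.7", 443)
# pkt now carries the NAT's source address; the table lets the
# response find its way back to instance 1.
```

The unique source port is what lets many private hosts share one public address: each conversation gets its own row in the table.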

      Now, if this NAT appliance were anywhere but AWS, what it would do right now is adjust the packet to have a publicly routable source address and send it on directly.

      But remember, nothing inside a VPC really has a public IP version 4 address directly attached to it.

      That's what the internet gateway does.

      So, the NAT gateway, because it's in the web subnet, it has a default route, and this default route points at the internet gateway.

      And so, the packet is moved from the NAT gateway to the internet gateway by the VPC router.

      At this point, the internet gateway knows that this packet is from the NAT gateway.

      It knows that the NAT gateway has a public IP version 4 address associated with it, and so, it modifies the packet to have a source address of the NAT gateway's public address, and it sends it on its way.

      The NAT gateway's job is to allow multiple private IP addresses to masquerade behind the IP address that it has.

      That's where the term IP masquerading comes from.

      That's why it's the more accurate term for what the NAT gateway does.

      So, the NAT gateway takes all of the incoming packets from all of the instances that it's managing, and it records all the information about the communication.

      It takes those packets, it changes the source address from being those instances to its own IP address, its own external-facing IP address.

      If it was outside AWS, this would be a public address directly.

      That's how your internet router works for your home network.

      All of the devices internally on your network talk out using one external IP address, your home router uses NAT.

      But because it's in AWS, it doesn't have directly attached a real public IP.

      The internet gateway translates from its IP address to the associated public one.

      So, that's how the flow works.

      If you need to give an instance its own public IP version 4 address, then only the internet gateway is required.

      If you want to give private instances outgoing access to the internet and the AWS public services such as S3, then you need both the NAT gateway to do this many-to-one translation and the internet gateway to translate from the IP of the NAT gateway to a real public IP version 4 address.

      Now, let's quickly run through some of the key facts for the NAT gateway product that you'll be implementing in the next demo lesson.

      First, and I hope this is logical for you by now, it needs to run from a public subnet because it needs to be able to be assigned a public IP version 4 address for itself.

      So, to deploy a NAT gateway, you already need your VPC in a position where it has public subnets.

      And for that, you need an internet gateway, subnets configured to allocate public IP version 4 addresses, and default routes for those subnets pointing at the internet gateway.

      Now, a NAT gateway actually uses a special type of public IP version 4 address that we haven't covered yet called an elastic IP.

      For now, just know that these are IP version 4 addresses which are static.

      They don't change.

      These IP addresses are allocated to your account in a region and they can be used for whatever you want until you reallocate them.

      And NAT gateways use these elastic IPs; they're one of the services which utilize elastic IPs.

      Now, I'll be talking about elastic IPs later on in the course.

      Now, NAT gateways are an AZ resilient service.

      If you read the AWS documentation, you might get the impression that they're fully resilient in a region like an internet gateway.

      They're not, they're resilient in the AZ that they're in.

      So they can recover from hardware failure inside an AZ.

      But if an AZ entirely fails, then the NAT gateway will also fail.

      For a fully region resilient service, so to mirror the high availability provided by an internet gateway, then you need to deploy one NAT gateway in each AZ that you're using in the VPC and then have a route table for private subnets in that availability zone, pointing at the NAT gateway also in that availability zone.

      So for every availability zone that you use, you need one NAT gateway and one route table pointing at that NAT gateway.

      Now, they aren't super expensive, but it can get costly if you have lots of availability zones, which is why it's important to always think about your VPC design.

      Now, NAT gateways are a managed service.

      You deploy them and AWS handle everything else.

      They can scale to 45 gigabits per second in bandwidth, and you can always deploy multiple NAT gateways and split your subnets across the provisioned gateways.

      If you need more bandwidth, you can just deploy more NAT gateways.

      For example, you could split heavy consumers across two different subnets in the same AZ, have two NAT gateways in the same AZ and just route each of those subnets to a different NAT gateway and that would quickly allow you to double your available bandwidth.

      With NAT gateways, you're billed based on the number that you have.

      So there's a standard hourly charge for running a NAT gateway, and this is obviously subject to change and differs by region, but it's currently about four cents per hour.

      And note, this is actually an hourly charge.

      So partial hours are billed as full hours.

      And there's also a data processing charge.

      So that's about the same as the hourly charge, currently around four cents per gigabyte of processed data.

      So you've got this base charge that a NAT gateway consumes while running plus a charge based on the amount of data that you process.

      So keep both of those things in mind for any NAT gateway related questions in the exam.

      Don't focus on the actual values, just focus on the fact they have two charging elements.
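The two charging elements above can be combined into a quick back-of-envelope calculation. This sketch uses the example rates quoted in the lesson (~$0.04/hour and ~$0.04/GB); actual prices vary by region and over time.

```python
import math

# Rough NAT gateway cost model: hourly charge (partial hours billed as
# full hours) plus a per-gigabyte data processing charge.
def nat_gateway_cost(hours_running, gb_processed,
                     hourly_rate=0.04, per_gb_rate=0.04):
    billed_hours = math.ceil(hours_running)
    return billed_hours * hourly_rate + gb_processed * per_gb_rate

# e.g. one gateway running a ~730-hour month, processing 100 GB:
# 730 * 0.04 + 100 * 0.04 = 29.2 + 4.0 = 33.2
cost = nat_gateway_cost(hours_running=730, gb_processed=100)
```

Remember that for high availability you'd multiply this by the number of AZs, since you need one NAT gateway per availability zone in use.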

      Okay, so this is the end of part one of this lesson.

      It's getting a little bit on the long side, and so I wanted to add a break.

      It's an opportunity just to take a rest or grab a coffee.

      Part two will be continuing immediately from the end of part one.

      So go ahead, complete the video, and when you're ready, join me in part two.

    1. Welcome back.

      In this lesson I want to talk in detail about security groups within AWS.

      These are the second type of security filtering feature commonly used within AWS, the other type being network access control lists which we've previously discussed.

      So security groups and NACLs share many broad concepts but the way they operate is very different and it's essential that you understand those differences and the features offered by security groups for both the exam and real-world usage.

      So let's just jump in and get started.

      In the lesson on network access control lists I explained that they're stateless and by now you know what stateless and stateful mean.

      Security groups are stateful, they detect response traffic automatically for a given request and this means that if you allow an inbound or outbound request then the response is automatically allowed.

      You don't have to worry about configuring ephemeral ports, it's all handled by the product.

      If you have a web server operating on TCP 443 and you want to allow access from the public internet then you'll add an inbound security group rule allowing inbound traffic on TCP 443 and the response which is using ephemeral ports is automatically allowed.
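The stateful behaviour described above can be modelled in a few lines. This is a toy illustration, not the AWS implementation: allowing an inbound request records the connection, and the matching response is then allowed automatically with no ephemeral-port rules.

```python
# Toy model of a stateful filter: one inbound allow rule, with response
# traffic permitted automatically via tracked connection state.
inbound_rules = [("tcp", 443)]   # allow HTTPS in from anywhere
connections = set()              # state tracked per connection

def allow_inbound(proto, port, client, ephemeral_port):
    if (proto, port) in inbound_rules:
        # Request allowed: remember the connection.
        connections.add((client, ephemeral_port))
        return True
    return False                 # implicit deny: no matching allow rule

def allow_outbound_response(client, ephemeral_port):
    # Response allowed purely because the request was tracked;
    # no separate outbound rule for the ephemeral port is needed.
    return (client, ephemeral_port) in connections

allow_inbound("tcp", 443, "bob", 50123)    # request in -> True
allow_outbound_response("bob", 50123)       # response out -> True
```

A stateless filter (like a NACL) has no `connections` set, which is why it needs explicit outbound rules covering the ephemeral port range.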

      Now security groups do have a major limitation and that's that there is no explicit deny.

      You can use them to allow traffic or you can use them to not allow traffic and this is known as an implicit deny.

      So if you don't explicitly allow traffic then you're implicitly denying it, but, and this is important, you're unable to explicitly deny traffic using security groups, and this means that they can't be used to block specific bad actors.

      Imagine you allow all source IP addresses to connect to an instance on port 443 but then you discover a single bad actor is attempting to exploit your web server.

      Well you can't use security groups to block that one specific IP address or that one specific range.

      If you allow an IP or if you allow an IP range or even if you allow all IP addresses then security groups cannot be used to deny a subset of those and that's why typically you'll use network access control lists in conjunction with security groups where the NACLs are used to add explicit denies.

      Now security groups operate above NACLs in the OSI 7-layer stack which means that they have more features.

      They support IP and CIDR-based rules but they also allow referencing AWS logical resources.

      This includes all the security groups and even itself within rules.

      I'll be covering exactly how this works on the next few screens.

      Just know at this stage that it enables some really advanced functionality.

      An important thing to understand is that security groups are not attached to instances nor are they attached to subnets.

      They're actually attached to specific elastic network interfaces known as ENIs.

      Now even if you see the user interface present this as being able to attach a security group to an instance know that this isn't what happens.

      When you attach a security group to an instance what it's actually doing is attaching the security group to the primary network interface of that instance.

      So remember security groups are attached to network interfaces that's an important one to remember for the exam.

      Now at this point let's step through some of the unique features of security groups and it's probably better to do this visually.

      Let's start with a public subnet containing an EC2 instance and this instance has an attached primary elastic network interface.

      On the right side we have a customer Bob and Bob is accessing the instance using HTTPS, so this means TCP port 443.

      Conceptually think of security groups as something which surrounds network interfaces in this case the primary interface of the EC2 instance.

      Now this is how a typical security group might look.

      It has inbound and outbound rules just like a network ACL and this particular example is showing the inbound rules allowing TCP port 443 to connect from any source.

      The security group applies to all traffic which enters or leaves the network interface and because they're stateful in this particular case because we've allowed TCP port 443 as the request portion of the communication the corresponding response part the connection from the instance back to Bob is automatically allowed.

      Now lastly I'm going to repeat this point several times throughout this lesson.

      Security groups cannot explicitly block traffic.

      This means, with this example, if you're allowing 0.0.0.0/0 to access the instance on TCP port 443, and this means the whole IP version 4 internet, then you can't block anything specific.

      Imagine Bob is actually a bad actor.

      Well in this situation security groups cannot be used to add protection.

      You can't add an explicit deny for Bob's IP address.

      That's not something that security groups are capable of.

      Okay so that's the basics.

      Now let's look at some of the advanced bits of security group functionality.

      Security groups are capable of using logical references.

      Let's step through how this works with a similar example to the one you just saw.

      We start with a VPC containing a public web subnet and a private application subnet.

      Inside the web subnet is the Categoram application web instance and inside the app subnet is the back-end application instance.

      Both of these are protected by security groups.

      We have A4L-web and A4L-app.

      Traffic wise, we have Bob accessing the web instance over TCP port 443, and because this is the entry point to the application, which logically has other users than just Bob, we're allowing TCP port 443 from any IPv4 address. This means we have a security group with an inbound rule set which looks like this.

      In addition to this front-end traffic the web instance also needs to connect with the application instance and for this example let's say this is using TCP port 1337.

      Our application is that good.

      So how best to allow this communication?

      Well, we could just add the IP address of the web instance into the security group of the application instance, or if we wanted to allow our application to scale and change IPs, then we could add the CIDR ranges of the subnets instead of IP addresses.

      So that's possible but it's not taking advantage of the extra functionality which security groups provide.

      What we could do is reference the web security group within the application security group.

      So this is an example of the application security group.

      Notice that it allows TCP port 1337 inbound but it references as the source a logical resource the security group.

      Now using a logical resource reference in this way means that the source, the A4L-web security group, actually matches anything which has that security group associated with it.

      So in this example any instances which have the A4L-web security group attached to them can connect to any instances which have the A4L-app security group attached to them using TCP port 1337.

      So in essence this references this.

      So this logical reference within the application security group references the web security group and anything which has the web security group attached to it.
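The logical reference just described can be sketched in the same IpPermissions shape: instead of an IpRanges entry, the source is a UserIdGroupPairs entry naming another security group. The group IDs below are made-up placeholders (real IDs look like sg-0abc...), and the helper is illustrative only; nothing here calls AWS.

```python
# Sketch of the A4L-app inbound rule that references the A4L-web security
# group as its source. Group IDs are placeholders for illustration.
app_inbound_rule = {
    "IpProtocol": "tcp",
    "FromPort": 1337,
    "ToPort": 1337,
    # Instead of an IP range, the source is another security group.
    "UserIdGroupPairs": [{"GroupId": "sg-a4l-web", "Description": "from A4L-web"}],
}

def allowed_source_groups(rule):
    """Return the security group IDs this rule accepts traffic from."""
    return [pair["GroupId"] for pair in rule.get("UserIdGroupPairs", [])]

# Any instance whose network interface has sg-a4l-web attached matches
# this source, regardless of its IP address.
print(allowed_source_groups(app_inbound_rule))  # ['sg-a4l-web']
```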

      Now this means we don't have to worry about IP addresses or CIDR ranges, and it also has another benefit.

      It scales really well.

      So as additional instances are added to the application subnet and web subnet, and as those instances are attached to the relevant security groups, they're impacted by this logical referencing, allowing anything defined within the security group to apply to any new instances automatically.

      Now this is critical to understand so when you reference a security group from another security group what you're actually doing is referencing any resources which have that security group associated with them.

      So this substantially reduces the admin overhead when you have multi-tiered applications, and it also simplifies security management, which means it's less prone to errors.

      Now logical references provide even more functionality.

      They allow self referencing.

      Let's take this as an example a private subnet inside AWS with an ever-changing number of application instances.

      Right now it's three but it might be three, thirty or one.

      What we can do is create a security group like this.

      This one allows incoming communications on port TCP 1337 from the web security group but it also has this rule which is a self-referential rule allowing all traffic.

      What this means is that if it's attached to all of the instances, then anything with this security group attached can receive communication, so all traffic, from anything else that also has this security group attached to it.

      So it allows communications to occur to instances which have it attached from instances which have it attached.

      It handles any IP changes automatically, which is useful when these instances are within an auto scaling group which is provisioning and terminating instances based on load on the system.

      It also allows for simplified management of any intra-app communications.

      An example of this might be Microsoft domain controllers or managing application high availability within clusters.
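A self-referential rule can be sketched the same way. In EC2's API, an IpProtocol of "-1" means all protocols, and here the source group ID is the security group's own ID. The IDs below are placeholders, not real AWS values, and nothing here calls AWS.

```python
# Sketch of the A4L-app security group with a self-referential rule.
# "sg-a4l-app" stands in for this security group's own ID.
APP_SG_ID = "sg-a4l-app"

app_sg_rules = [
    # TCP 1337 inbound from the web tier, as before.
    {"IpProtocol": "tcp", "FromPort": 1337, "ToPort": 1337,
     "UserIdGroupPairs": [{"GroupId": "sg-a4l-web"}]},
    # All traffic ("-1" = all protocols) from anything that ALSO has
    # this same security group attached.
    {"IpProtocol": "-1",
     "UserIdGroupPairs": [{"GroupId": APP_SG_ID}]},
]

def is_self_referential(rule, own_group_id):
    """True if the rule's source is the security group it belongs to."""
    return any(p["GroupId"] == own_group_id
               for p in rule.get("UserIdGroupPairs", []))

print([is_self_referential(r, APP_SG_ID) for r in app_sg_rules])  # [False, True]
```

Because the self-reference matches attachment rather than addresses, instances can be added or terminated freely and intra-group traffic keeps flowing.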

      So this is everything I wanted to cover about security groups within AWS.

      So there's a lot of functionality and intelligence that you gain by using security groups versus network ACLs, but it's important that you understand that while network ACLs do allow you to explicitly deny traffic, security groups don't, and so generally you would use network ACLs to explicitly block any bad actors and use security groups to allow traffic to your VPC based resources.

      You do this because security groups are capable of this logical resource referencing, which means you can reference AWS logical resources in security groups, or even the security group itself, to allow this free flow of communications within a security group.
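To make that contrast concrete: a network ACL entry (the shape EC2's create_network_acl_entry takes) carries an explicit RuleAction which can be "deny", whereas a security group rule has no action field at all, because every security group rule is an allow. The IP below is from the documentation range and purely illustrative; nothing here calls AWS.

```python
# A network ACL entry CAN explicitly deny traffic from a bad actor.
nacl_entry = {
    "RuleNumber": 90,                    # evaluated before higher-numbered allows
    "Protocol": "6",                     # TCP
    "RuleAction": "deny",                # explicit deny is possible here
    "Egress": False,                     # inbound entry
    "CidrBlock": "203.0.113.10/32",      # example bad-actor IP (doc range)
    "PortRange": {"From": 443, "To": 443},
}

# A security group rule has no action field; rules can only ever allow.
sg_rule = {
    "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
}

assert nacl_entry["RuleAction"] == "deny"   # possible with a NACL
assert "RuleAction" not in sg_rule          # no such concept in a security group
```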

      At this point that is everything I wanted to cover in this lesson so go ahead and complete the video and when you're ready I'll look forward to you joining me in the next lesson.

    1. For every Stoic was a Stoic; but in Christendom where is the Christian?

      peak literature

    2. Let a Stoic open the resources of man

      Emerson seems like quite the stoic philosopher.

    3. As our Religion, our Education, our Art look abroad, so does our spirit of society. All men plume themselves on the improvement of society, and no man improves.

      It's almost like he's telling you to spend so much time "looking within" that you forget to look around and you become closed off to the world. He's advocating exceptionalism if you think about it.

    4. And so the reliance on Property, including the reliance on governments which protect it, is the want of self-reliance.

      According to Emerson, the secret to a fulfilled life is to just do everything yourself no matter how daunting.

    1. rustworthy. I

      CONNECTION ANNOTATIONS

      Just as in the Trott and Lee LLM article, word vectors are described as a matchmaking site for words. AI generated photos could be viewed in a similar way. Not every date will result in a soul mate, therefore every match is not ideal. I often find myself frustrated when typing and my cell phone will decide that it is smarter than me and knows exactly what I am going to say next. I become irate when I hit send before I realize those changes have been made and I sound as though I only completed my elementary school education. The same can be said for these photos: an AI model acquires many photos to compile one master image, thought to be exactly what you are looking for, only for you to discover that AI is so generous that it has provided the subject of the photo with an extra arm.

      1. Similar to how Trott and Lee LLM article phrased AI as being similar to "turning the water on, making sure it's coming out of the right faucet, and when it's not, running around to tighten and loosen all the valves until it does", AI generated photos (7:20) can create biases that then we must run around to resolve. The original information inputted to generate a photo is like the water. Once it is turned on and a photo is generated, but it's not the photo we were expecting, we then have to tweak and edit the information until we receive a desired outcome.
    2. rustworthy. I

      VISUAL ANNOTATIONS

      1. (3:55) Code Carbon helps to provide a visual for the emissions produced in a certain time period. Much like the nutrition information on the outside of a cereal box, though it is helpful to know how many calories or grams of sugar are contained in one serving, that information would be useless without knowing what a serving size is. Even more so if we are unaware of how many calories we should consume in a given time frame. This information allows us to make informed decisions about what we should consume.

      2. (5:04) This visual of the photos that are generated when she inputs her own information provides a bit of shock factor to the audience. Not only are most of those images of someone who is not her, but they are quite illicit photos of someone who is not her. Taken out of context and unknown to be AI generated, photos such as these could be detrimental to someone's career or personal life.

    3. rustworthy. I

      REACTION ANNOTATION:

      I never paused to consider that AI models are not contained within the digital world and exist boldly in our physical world. I figured that, since AI is so easily available on a cell phone, it couldn't possibly require so much to sustain it.

    4. rustworthy. I

      RESTATEMENT ANNOTATIONS

      1. (0:37) Though at times AI can be extremely helpful, as we often hear in the media, there is also a dark side to these functions. For instance, one chatbot suggested that the solution to one man's problems was to file for divorce. Another suggested a sure-to-please recipe for a large gathering, that contained chlorine gas among its list of ingredients.

      2. (1:19) AI does not exist in solitude. Its effects have left the digital world and have begun to affect our physical world. AI models have been noted to contribute to climate change, use artists' and authors' work without their consent, and discriminate against entire groups of people.

      3. (1:56) In regard to sustainability, the figurative cloud we imagine AI resides upon is not something only contained within the digital world. We often fail to acknowledge the large quantity of materials needed to build these models as well as the sizeable amount of energy required to sustain them.

    1. large number of studies have focused on the task of identifying these motives, and overall, have found that financial gains are one, but by no means the only or even most important, motive. In fact, a review of existing evidence (Franklin & Baron, 2015) indicates that the desire for autonomy or independence is ranked as the most important by entrepreneurs. Financial gain is second, followed, in descending order, by the desire to grow and develop as a person, to escape from unpleasant work environments, to acquire status and recognition, to contribute to the well-being of their communities and societies, and to contribute to solving important social problems

      does this make sense to you? Why or why not?

    2. motivation without cognition leads to undirected, random actions, while cognition without motivation leads to inaction—no overt actions occur. Only when the two function together does goal-directed, planned behavior occur.

      strategic behaviour...

    3. The process of converting ideas into reality

      one of my favourite definitions of entrepreneurship

    4. Since all entrepreneurs start with dreams of success and are highly motivated to attain them, an important question arises: why do so many experience disappointment instead of realization of these dreams?

      the motivating question -- what brings about success?

    5. entrepreneurs need many skills, relevant knowledge and experience, personal characteristics, motives, and goals. Together, these aspects of human and psychological resources provide them with what might be viewed as the tools they need for success.

      what's in your toolbox?

    6. without entrepreneurs “nothing happens, the entrepreneurial process simply does not occur.”

      d'uh

    7. motiva-tion is intimately linked with cognition

      and intelligence (the successful type!)

    8. It is a basic theme of this chapter, however, that many of the causes of entrepreneurs’ success or failure, involve the entrepreneurs themselves

      !

    9. Whatever the specific goals individuals seek or whatever actions they view as helpful in reaching these goals, their purposeful, planned actions, reflects motivation.

      *

    1. B

      Historical causality - this unique circumstance rose from __
      Social causality - A may lead to B or increases the chances of B

    2. his recognition of the extreme difficulties in making entirely ex-haustive causal imputations.

      probability of human behavior doesn't come from notions of free will as much as it comes from the variety of situations and possibilities that can't always be accounted for in strict categories

    3. Weber argued, for example, that human action was truly unpredictable only in the case of the insane,

      maybe Durkheim would differ

    1. eLife Assessment

      This technical study presents a novel sampling strategy for detecting synaptic coupling between neurons from dual pipette patch-clamp recordings in acute slices of mammalian brain tissue in vitro. The authors present solid evidence that this strategy, which incorporates automated patch clamp electrode positioning and cleaning for reuse with strategic neuron targeting, has the potential to substantially improve the efficiency of neuronal sampling with paired recordings. This technique and the extensions discussed will be useful for neuroscientists wanting to apply or already conducting automated multi-pipette patch clamp recording electrophysiology experiments in vitro for neuron connectivity analyses.

    2. Reviewer #1 (Public review):

      Summary:

      In this technical paper, the authors introduce an important variation on the fully automated multi-electrode patch-clamp recording technique for probing synaptic connections that they term "patch-walking". The patch-walking approach involves coordinated pipette route-planning and automated pipette cleaning procedures for pipette reuse to improve recording throughput efficiency, which the authors argue can theoretically yield almost twice the number of connections to be probed by paired recordings on a multi-patch electrophysiology setup for a given number of cells compared to conventional manual patch-clamping approaches used in brain slices in vitro. The authors show convincing results from recordings in mouse in vitro cortical slices, demonstrating the efficient recording of dozens of paired neurons with a dual patch pipette configuration for paired recordings and detection of synaptic connections. This approach will be of interest and valuable to neuroscientists conducting automated multi-patch in vitro electrophysiology experiments and seeking to increase efficiency of neuron connectivity detection while avoiding the more complex recording configurations (e.g., 8 pipette multi-patch recording configurations) used by several laboratories that are not readily implementable by most of the neuroscience community.

      Strengths:

      (1) The authors introduce the theory and methods and show experimental results for a fully automated electrophysiology dual patch-clamp recording approach with a coordinated patch-clamp pipette route-planning and automated pipette cleaning procedures to "patch-walk" across an in vitro brain slice.

      (2) The patch-walking approach offers throughput efficiency improvements over manual patch clamp recording approaches, especially for investigators looking to utilize paired patch electrode recordings in electrophysiology experiments in vitro.

      (3) Experimental results are presented from in vitro mouse cortical slices demonstrating the efficiency of recording dozens of paired neurons with a two-patch pipette configuration for paired recordings and detection of synaptic connections, demonstrating the feasibility and efficiency of the patch-walking approach.

      (4) The authors suggest extensions of their technique while keeping the number of recording pipettes employed and recording rig complexity low, which are important practical technical considerations for investigators wanting to avoid the more complex recording configurations (e.g., 8-10 pipette multi-patch recording configurations) used by several laboratories that are not readily implementable by most of the neuroscience community.

    3. Reviewer #2 (Public review):

      Summary:

      In this study, the authors aim to combine automated whole-cell patch clamp recording simultaneously from multiple cells. Using a 2-electrode approach, they are able to sample as many cells (and connections) from one slice, as would be achieved with a more technically demanding and materially expensive 8-electrode patch clamp system. They provide data to show that this approach is able to successfully record from 52% of attempted cells, which was able to detect 3 pairs in 71 screened neurons. The authors state that this is a step forward in our ability to record from randomly connected ensembles of neurons.

      Strengths:

      The conceptual approach of recording multiple partner cells from another in a stepwise manner indeed increases the number of tested connections. An approach that is widely applicable to both automated and manual approaches. Such a method could be adopted for many connectivity studies using dual recording electrodes.

      The implementation of automated robotic whole-cell patch-clamp techniques from multiple cells simultaneously is a useful addition to the multiple techniques available to ex vivo slice electrophysiologists.

      The approach using 2 electrodes, which are washed between cells is economically favourable, as this reduces equipment costs for recording multiple cells, and limits the wastage of capillary glass that would otherwise be used once.

      Weaknesses:

      (1) Based on the revised manuscript - a discussion of the implementation of this approach to manual methods is still lacking.

      (2) A comparison of measurements shown in Figure 2 to other methods has not been addressed adequately.

      (3) The morphological identification of neurons is understandably outside the remit of this project - but should be discussed and/or addressed. It was not suggested to perform detailed anatomical analysis - but to highlight the importance of this, and it should still be discussed.

      (4) The revised manuscript does not clearly state which cells were included in the analysis as far as I can see - and indeed cells with Access Resistance >40 MOhm appear to still be included in the data.

    4. Reviewer #3 (Public review):

      Summary:

      In this manuscript, Yip and colleagues incorporated the pipette cleaning technique into their existing dual-patch robotic system, "the PatcherBot", to allow sequential patching of more cells for synaptic connection detection in living brain slices. During dual-patching, instead of retracting all two electrodes after each recording attempt, the system cleaned just one of the electrodes and reused it to obtain another recording while maintaining the other. With one new patch clamp recording attempt, new connections can be probed. By placing one pipette in front of the other in this way, one can "walk" across the tissue, termed "patch-walking." This application could allow for probing additional neurons to test the connectivity using the same pipette in the same preparation.

      Strengths:

      Compared to regular dual-patch recordings, this new approach could allow for probing more possible connections in brain slices with dual-patch recordings, thus having the potential to improve the efficiency of identifying synaptic connections

      Weaknesses:

      While this new approach offers the potential to increase efficiency, it has several limitations that could curtail its widespread use.

      Loss of Morphological Information: Unlike traditional multi-patch recording, this approach likely loses all detailed morphology of each recorded neuron. This loss is significant because morphology can be crucial for cell type verification and understanding connectivity patterns by morphological cell type.

      Spatial Restrictions: The robotic system appears primarily suited to probing connections between neurons with greater spatial separation (~100µm ISD). This means it may not reliably detect connections between neurons in close proximity, a potential drawback given that the connectivity is much higher between spatially close neurons. This limitation could help explain the low connectivity rate (5%) reported in the study.

      Limited Applicability: While the approach might be valuable in specific research contexts, its overall applicability seems limited. It's important to consider scenarios where there is a trade-off between efficiency and the specific questions that are asked.

      Scalability Challenges: Scaling this method beyond a two-pipette setup may be difficult. Additional pipettes would introduce significant technical and logistical complexities.

    5. Author response:

      The following is the authors’ response to the original reviews.

      We thank the reviewers and editors for insightful feedback on how we could improve the manuscript. We have revised the manuscript and addressed the points raised.

      Regarding the technical issues raised about the quality of patch clamp recordings (Reviewer 2), we acknowledge that the upper limit of the access resistance cutoff should be lower and that the accepted change should be 10-20%. To this end, we have revised the manuscript to more accurately detail the quality metrics used. The access resistance for the neurons in paired recordings was below 40 MΩ (similar to the metric used by Kolb et al. 2019), and if the access changed above 50 MΩ, we stopped recording from that neuron. Furthermore, the inclusion of neurons in the histogram with access resistance above 50 MΩ was to highlight the total number of neurons patched but not necessarily used in paired recordings. As this was done with an automated robotic system, the neurons would still undergo an initial voltage clamp and current clamp protocol before the pipette would release the neuron and patch another cell. To the point of Reviewer 2, this patch-walk protocol could also be alternatively implemented using manual recording approaches, and this point has been included in the revised manuscript.

      Regarding the spatial restrictions (Reviewer 3), we agree that the average intersomatic distance is higher than ideal. This was likely due to failed patch attempts; for instance, if one pipette successfully achieved whole cell, and the other pipette had several sequential failed patch attempts, the intersomatic distance (ISD) would increase with each failed attempt due to the user-selected index of cells. Ideally, the pipettes would be walking across a slice with low ISD if the whole-cell success rate was closer to 100%. To overcome this challenge in future work, automated cell identification and tracking could enable the path planning to be continuously updated after each patch attempt. Given the whole-cell success rate efficiency for a given electrophysiologist, we believe that the automated robot could be improved in later versions to include route-planning algorithms to minimize the distance between neurons. Alternatively, this patch-walk system could also be integrated to improve connectivity yields for manual recording approaches as well.

      For the point raised about morphological identification, we believe that while important, morphological identification is out of the scope for this project. Future work will include neuronal reconstruction. Regarding the other points, we will amend the manuscript to highlight other key metrics such as maximum time we could hold a neuron under the whole-cell configuration. Additionally, we agree with Reviewer 3 that some of the current language may cause confusion, and we will amend it accordingly.

      To all the reviewers, thank you for your time, understanding, and the opportunity to improve our manuscript.

    1. ductive and heroic entrepreneurs (Baumol, 1990; Davidsson and Wiklund, 2001) have been characterized as leading enterprises that contribute positively to the economy or society. Caring about impact of the business on the well-being of others is fundamental to formulating the virtuous entrepreneur. Such thinking calls for specification of the relevant others that need to be considered. Employees within the new business venture are an obvious focus, but also important are how entrepreneurs impact their own families and the communities in which they reside.

      broader context (calling for more specific details)

    2. iness will not be achieved if pursued as an end in its own right; rather, happiness is a by-product of other more noble deeds. In the entrepreneurial context, these more noble deeds presumably include caring about more than one's own self-gratification and profit as the business creator.

      another opportunity for me to share a Viktor Frankl quote: "Don't aim at success ... For success, like happiness, cannot be pursued; it must ensue"

    3. Virtue, as elaborated in Aristotle's Ethics, is fundamentally tied to how one functions in the community within which one is embedded, whereas vicious evokes a way of behaving toward others that brings damage and harm. It is

      handy reminder...

    4. central point in elevating these contrasts is not to moralize, but rather to draw attention to the impact of these types of leaders on the eudaimonia of others (see Ryff, 2018). Stated otherwise, in both the traditional business world and in the entrepreneurial field, it is critical to address how the actions, motives, and priorities of those at the top impact the well-being of those sitting below them in pervasive societal hierarchies

      a moral agenda, without moralizing ... hah

    5. Pleasure can also be tied to pathological needs, such as the sadistic gratification gained from inflicting pain on others.

      this idea of "happiness gone awry" is tied more to hedonism, but a psychopath could certainly get a sense of fulfilment from inflicting pain on others.

    6. se with higher levels of purpose in life have been found to engage in more protective health behaviors (cancer screenings, cholesterol tests, flu shots)

      so it's not that a healthy attitude/mindset automatically confers health benefits - it may simply mean you avail yourself more of the services that correlate with health benefits...

    7. epreneurial pursuits truly nurture eudaimonia because self-initiated work is a core forum for realization of personal talents and potential, notable benefits may accrue in the physical health and longevity of the entrepreneu

      interesting...

    8. trepreneurial stress, high workloads and high business risk impact the entrepreneur's health (e.g., anxiety, doctor visits), and how the health of the entrepreneur impacts subsequent entrepreneurial action

      two mutually related concerns!

      (entrepreneurial intention begets entrepreneurial action (or not, if entrepreneurial stress intervenes), but eventual action begets such stress inevitably, which begets other health concerns, which might affect future entrepreneurial intent/action...)

    9. ly experiences of core elements of eudaimonia such as having a sense of purpose and meaning, feelings of mastery, and a perception of continuing self-realization and growth vis-à-vis the stresses of managing a self-initiated business. These daily experiences of eudaimonia may also predict differences in who frames daily stresses as challenges or hindrances.

      eudaemonia in the everyday! Super important. We don't just feel fulfilled when accomplishing significant tasks.

      How does eudaimonic well-being manifest in your mundane experiences?

    10. Once into the entrepreneurial endeavor, when the realities of long working hours, complex demands, and uncertainties come to the fore – the demands and stresses of running one's own business become evident – aspects of eudaimonic well-being may emerge as important moderators of who persists over time versus terminates the new business venture. W

      well-being (Eudaemonia) moderates continued perseverance (not just starting ventures)

      a feeling of fulfilment (sense of accomplishment across the different components of well-being) leads to other positive feelings (positive affect / happiness), which in turn leads to more effort, engagement, success, etc...

    11. or those who are better educated and economically secure, the call of entrepreneurship may emerge from having higher eudaimonia well before the new business venture takes shape. That is, those with a pre-existing sense of autonomy, mastery, and purpose, may be more likely to embark on the entrepreneurial path. Al

      proposed explanation...

    12. roaden-and-build theory

      Broaden and build:

      While negative emotions have been associated with survival and protection in response to a threatening situation, positive emotions have been related to the ability to explore the environment, to be open to new information, to create and build new resources. According to this theory, positive emotions broaden the scope of attention, enabling flexible thinking. This in turn facilitates the development of new skills, networks, and capacities that are essential to adaptively handle a stressful event.

      In the long-term, people who experience more positive emotions are more satisfied with their lives, build more positive relationships with partners, get better jobs, or even have better health.

    13. Controlling for past income and prior health, self-employed individuals, in fact, experienced greater stress than employees. Further findings showing a positive impact of such stress on income of the self-employed, but a negative impact on their health (assessed in terms of health behaviors – alcohol use, smoking, physical activity, weight gain). T

      so ... these sort of findings aren't limited to necessity entrepreneurs ...

    14. ntrepreneurs' negative affect directly predicted entrepreneurial effort toward tasks that were required immediately, whereas positive affect predicted venture effort beyond what is immediately required.

      interesting ... the passion part of grit...

    15. tal health and well-being review elevated the theme of persistence – i.e., who stays with the entrepreneurial enterprise over time. M

      link back to grit!

    16. elf-employment among educationally and economically disadvantaged individuals, possibly accompanied by accumulation of debt, captures a variety of entrepreneurship driven primarily by desperation. Al

      not very encouraging. Desperation and such conditions lead to stress (often of the chronic, not the acute type). And this stress has negative implications on one's health and well-being.

    17. worries behind one's financial situation and job security drive the compromised life satisfaction. A

      seems kind of obvious...

    18. oth relatedness and autonomy are core motives and core components of eudaimonic well-bein

      The same way money doesn't buy happiness, autonomy doesn't mean independence. Interesting...

    19. pportunity entrepreneurs report higher family and health satisfaction than necessity entrepreneurs, but both types report equal dissatisfaction with the lack of leisure time (Bi

      !!

    20. elf-employment that allows one to avoid requirements imposed by a boss or large organizational requirements may enhance the sense that one is living according to personal values and convictions, i.e., marching to one's own drummer (autonomy). Self-expression aspects of autonomy that involves pursuing personal goals that are in accord with one's values, likely contributes to a sense of realizing unique talents and capacities (personal growth). The opportunity to be in charge of, to lead and direct daily activities likely contributes to the sense of effectively managing demands in self-created contexts (environmental mastery). R

      Exactly!

    21. Further partitioned entrepreneurial motivation into three submotives: (a) negative freedom tied to the dislike of having a boss and having to work within stifling organizational rules; (b) self-expression that involves working according to one's values, tastes, goals; and (c) opportunity that allows one to be in charge, to lead and direct.

      useful distinction!

    22. Examined the hedonic well-being of self-employment borne out of necessity, indicated by lower educational status and higher financial strain. They found lower levels of reported life satisfaction compared to traditional wage earners.

      so, less money can = more problems!

    23. There is a fundamental difference between conceptualizing autonomy as a core psychological need versus conceptualizing autonomy as a key feature of well-being. Both are arguably important – one captures what fuels human activity (the motivational part) and the other examines whether such core motives and needs are met (the well-being part).

      ooh - key distinction.

    24. Entrepreneur's eudaimonic well-being (thriving and activated affect) than from their hedonic well-being (life satisfaction and contentment)

      the distinction, crystallized again...

    25. Impact of entrepreneurs on the eudaimonic well-being of others (employees, families, communities).

      5th theme...

    26. (a) the degree to which entrepreneurs feel purposefully engaged in what they do; (b) whether they see themselves as growing and making best use of their talents and potential over time; (c) the quality of their ties to others, including employees and collaborators; (d) the sense that they are effective in managing their surrounding environments; (e) the degree to which they show knowledge and acceptance of their own strengths and weaknesses; and, of course, (f) the degree to which they view themselves as self-determined and independent.

      6 key concerns...

    27. Eudaimonic well-being of different types of entrepreneurs, focused on the distinction between necessity versus opportunity entrepreneurs.

      2nd theme

    28. how and where eudaimonia might matter at different points in the entrepreneurial process, from initial pursuits to longer-term endeavors.

      3rd theme

    29. The first examines the link between entrepreneurship and autonomy.

      Autonomy = BOTH a motive and an aspect of well-being. Interesting!

    30. Full understanding of the well-being of entrepreneurs demands knowledge of their family lives.

      ...

    31. The Big Five model of traits has been linked to the above dimensions with numerous findings (openness is linked with personal growth, agreeableness with positive relations with others, and extraversion, conscientiousness, and neuroticism with environmental mastery, purpose in life, and self-acceptance).

      wow - awesome link to previous weeks' material...

    32. Those who are married have a well-being advantage compared to the divorced, widowed, or never married, but single women score higher on autonomy and personal growth compared to married women. Parenting seems to enhance well-being, particularly when children are flourishing.

      ok...

    33. Self-acceptance brings a potentially neglected aspect of entrepreneurial well-being. It encompasses having positive attitudes toward oneself, but drawing on the Jungian idea of the shadow, also includes the capacity to see one's bad qualities.

      self-acceptance!

    34. Capacity to find meaning in the face of adversity.

      So many good Frankl quotes!

    35. Applied to the entrepreneurial context, self-acceptance may be a critical asset, such that effective problem-solving and negotiating through unfolding challenges would seem to demand honest reckoning with one's self.

      honesty is the best policy (especially with oneself!) As it can prevent you from getting in over your head...

    36. Without goals, purposes, and meaning, including during periods of challenge and difficulty, it is difficult to fathom an entrepreneur who is experiencing genuine well-being.

      (this is related to the question I asked at the end of our last class -- do you envy people who lead an easy life? Where do they find meaning?)

    37. Purpose in life is the existential core of eudaimonic well-being, with its emphasis on viewing one's life as having meaning, direction, and goals. These qualities comprise a kind of intentionality that involves having aims and objectives for living.

      Purpose in life!

    38. Positive relations with others is the most universally endorsed aspect of what it means to be well. This dimension encompasses having warm, trusting ties to others, being concerned about the welfare of others, understanding the give and take of social relationships, and having the capacity for empathy and affection.

      positive relations!

    39. Environmental mastery emphasizes the sense that one can manage the surrounding environment, including making effective use of available opportunities, while also creating contexts suitable to one's personal needs and values.

      Environmental mastery (tied to social settings and social skills in emotional intelligence...)

    40. Reigning views of subjective well-being at the time that revolved around assessments of happiness, life satisfaction, and positive and negative affect

      (mostly) the focus of the other article this week...

    41. The answer for him was eudaimonia, which he described as activity of the soul in accord with virtue. The key task in life is to know and live in truth with one's daimon, a kind of spirit given to all persons at birth.

      getting very philosophical here (but context is good)

    42. The second venue addresses varieties of entrepreneurship, with a focus on the distinction between opportunity and necessity entrepreneurs.

      all righty then...

    43. The first section thus examines the conceptual and philosophical foundations of a widely-used model of eudaimonic well-being built on the integration of perspectives from clinical, developmental, existential and humanistic psychology, along with distant observations from Aristotle. These differing views converged in their emphasis on six distinct aspects of what it means to be fully functioning and well.

      telling us what is to follow...

    44. Hedonic formulations emphasize positive life evaluations, such as life satisfaction, and positive feeling states, such as happiness and positive affect

      hedonism - often associated with the pursuit of pleasure.

      This article doesn't focus on it a lot (since it emphasizes the other main aspect of well-being), but I'd like to just highlight the connection between hedonism and happiness a little bit here by emphasizing the concept of the "hedonic treadmill".

      The hedonic treadmill is a metaphor for the human tendency to pursue one pleasure after another. That's because the surge of happiness that's felt after a positive event is likely to return to a steady personal baseline over time.


      This is why I was so interested in the discussion we had at the end of the last class (which we'll continue this week) about money and happiness. Mo money, mo problems. We desire more riches, but we're never satisfied ...

      Which is one of the reasons we need to look beyond hedonic well-being and explore eudaimonic well-being!

    45. Five key venues for the entrepreneurial field are then considered: (1) entrepreneurship and autonomy, viewed both as a motive (self-determination theory) and as an aspect of well-being (eudaimonic well-being theory); (2) varieties of entrepreneurship (opportunity versus necessity) and eudaimonic well-being; (3) eudaimonia in the entrepreneurial journey (beginning, middle, end); (4) entrepreneurship, well-being and health; and (5) entrepreneurs and the eudaimonia of others – contrasting virtuous and vicious types.

      so, this is going to cover a lot of ground!

    46. Approaches to well-being tend to be partitioned into hedonic and eudaimonic formulations.

      ok - we were already introduced to this distinction in the last article - we're going to get into it more here!

    47. Researchers in entrepreneurial studies are increasingly interested in the psychological well-being of entrepreneurs.

      ok - we are familiar with this rhetorical contextualizing... (it's the last half of our whole course)

    48. The “dark side” of prosocial motivation, which may be good for society, but bad for the well-being of the entrepreneur.

      oof

    49. Working from a holistic conception of human flourishing that includes social as well as economic benefits, they distilled three overarching virtues of the excellent/virtuous entrepreneur. These include creativity, beneficence, and integrity.

      !

    50. A first observation is that well-being in extant studies is studied primarily as an outcome (consequent) of the business venture.

      curious - given the next statement...

    51. Key aspects of eudaimonic well-being (e.g., realization of personal potential, purposeful life engagement, effective management of complex environments) have received little attention even though they may be particularly relevant to entrepreneurial pursuits.

      narrowing the focus...

    52. The third venue focuses on the unfolding of the entrepreneurial process in time – how it progresses from early stages to longer-term enterprises, at least for some.

      ok

    53. The central objective of this essay is to examine the relevance of eudaimonic well-being for understanding entrepreneurial experience.

      let's take the road less travelled!

    1. Stretching entrepreneurs' skills and abilities through flow experiences enhances their resilience and tenacity

      ooh! I really like this link to the previous week on perseverance and grit...

    2. Principal axis factors (PAF) analysis

      unless you're into stats and quantitative measurement, you can skip over these details ... but the social significance rather than the statistical significance of their findings is still trenchant!

    3. Entrepreneurs need internal psychological resources such as intrinsic definition of success and a spiritual practice to weather the myriad of external storms

      like the lingo!

    4. From this eudaimonic perspective, people are looking for what makes life fulfilling and meaningful.

      all right -- leading us into the next article!

    5. The fact that the business provides a source of flow experiences is important for entrepreneurs' well-being. Flow is a central construct within positive psychology that entrepreneurs can borrow and use to their advantage.

      generally accepted as entrepreneurs are assumed to get more meaning and purpose from their work than folk who work for someone/thing else other than themselves...

    6. Subjective well-being was significantly and positively linked to intrinsic definitions of success while the more extrinsic/material-driven definitions of success were significantly but negatively linked to subjective well-being.

      yep

    7. Subjective well-being questions were asked using the Satisfaction with Life Scale

      check it out in the quiz section this week...
    8. “In my business everyone gives his/her best efforts”; “In my business work quality is a high priority for all workers”; “I am able to apply my full capability in my business”, and “My business is very efficient in getting maximum output from the resources we have available (e.g. money, people, equipment, etc.)”

      wow! When listed in this way, it seems as though most workplaces or types of work might not encourage group productivity. How many of these apply to your work?

    9. The ability for finding meaning at work has been found to be a dominant factor in the conceptualization and measure of intrinsic success and spirituality research

      *

    10. Entrepreneurship motivation appears to link to the basis of rewards, which are either driven by extrinsic measures of success (money, social recognition, financial security, power, or buying power) or on intrinsic measures (such as personal fulfillment, doing something I love, finding meaning and purpose in their lives).

      **

    11. Personal causes and callings are examples of intrinsic or meaningful objectives. The respondents' definition of success included making a difference, creating lasting impact, and being engaged in a life of personal fulfillment

      making a difference (for others...) or making a difference for oneself (being personally fulfilled)...

      summing up perceptions of success

    12. The job of leadership today is not just to make money, it’s to make meaning

      not as far as the marketplace is concerned...

    13. An individual's well-being is considered a potential source of productivity. On the other hand, an individual's personal life can be a source of reduced functioning.

      hmmm

    14. Flow is interesting in a work context because findings indicate that the more individuals experience flow, the more they report control and greater sense of enjoyment in the activity

      *

    15. Since uncertainty is central to the entrepreneurial environment, it is unclear if the well-being and flow linkage found in other work situations extends to entrepreneurial environments.

      the testable proposition...

    16. Once flow is achieved, the attributes are (1) intense focus on the activity at hand and (2) time is perceived to move differently, either more slowly or quickly. The person experiences (3) a sense of control and (4) a loss of self-consciousness. There is (5) a merging of action and awareness, that is, the next move is evident. Flow is experienced when (6) challenges and skills are balanced and the perceived challenges of the activity are at or very slightly above the person's skill level. Usually (7) the experience is rewarding

      can you relate this to your own experiences?

      remember what I said in my note about happiness and flow in the seminar prompt in week 7 (creative/cultural entrepreneurs)

    17. Real happiness lay in the prospect of finding work that provided close social relationships, meaning and purpose, pursuit of personal goals, and being involved in flow activities

      sounds sensible enough (even if slightly utopian, more so than ever in situations of precarity and neoliberal capitalist accumulation)

    18. none of theseare assured in an entrepreneurial environment.

      leading to the key insight...

    19. The process of creating art consumed his research subjects and compelled them to lose track of time, forget to eat, and focus completely on the work. Artists described being consumed in the process as in the “flow” of it.

      !!

    20. These same factors are related to the intrinsic motivators oftenemployed in entrepreneurial perspectives

      ooh! a key answer for one of the questions I asked as a seminar prompt in week 7 (about the propensity for people to feel intrinsically motivated to act/achieve goals....)

    21. Since work is a major factor in people's lives – it takes up as much as a half of an adult's waking life – what goes on at work is a major factor in understanding what gives individuals a sense of meaning and fulfillment

      sad but true (and one of the reasons why entrepreneurial ventures are filled with such hope, because of all those times that one's mundane work life isn't...)

    22. happiness is associated with behaviors that create success, and thathappiness also precedes successful outcomes.

      doesn't seem that controversial ... miserable people might be successful, but one doesn't expect it to be sustainable in the face of success (the misery...)

      on the other hand, if you're happy (or satisfied), do you strive to achieve greater success?

    23. Happiness precedes successful outcomes by creating positive affect, the characteristics of which include confidence, optimism, and self-efficacy; likability and positive construals of others; sociability, activity, and energy; prosocial behavior; immunity and physical well-being; effective coping with challenge and stress; and originality and flexibility. Positive affect, in turn, encourages active involvement with the environment, and with the pursuit of goals

      this is quite the list -- kind of like success breeds more success, the happier you are, the more spillover there is likely to be in other related fields of achievement.

    24. When focusing on psychological well-being, there are generally two perspectives: the hedonic/subjective well-being perspective and the eudaimonic/meaning perspective.

      and we're going to focus on the latter... or at least the other reading does!

    25. There are three types of well-being: (1) physical well-being; (2) psychological well-being; and (3) relationship/social well-being

      while entrepreneurial activities certainly relate to the well-being of one's physical body and one's relationships, we're going to focus on the psychological...

      Feel free to ruminate on the others, though...

    26. The term eudaimonia, or a state of meaningfulness and fulfillment,

      eudaimonia is quite distinct!

    27. Aristotle believed that meaning was not about satisfying human appetites, but about doing virtuous things such that we “achieve the best that is within us”

      so, eudaimonia relates to self-improvement and self-actualization...

    28. Extrinsic factors can include financial and social rewards, while intrinsic personal success is often researched using meaning, happiness, and spirituality within the work context.

      this should resonate with one of the questions I asked you to answer before the first class...

    29. What makes life pleasant and unpleasant (Ryan and Deci 2001). Research in this area looks at life satisfaction, positive affect, and negative affect

      hedonism is focused on the pursuit of pleasure. What feels good...

    30. Productivity, the second factor examined, can lead to well-being from an extrinsic perspective, since greater productivity would most likely lead to greater success of the organization

      whereas flow connects with a sense of intrinsic well-being, productivity relates to extrinsic forces too

    31. Although many factors could lead to increased entrepreneurial well-being, the present study examines how flow, productivity, and entrepreneurs' own definition of personal success (intrinsic vs. extrinsic) impacts the entrepreneur's subjective well-being

      focal points...

      because there's a difference between being entrepreneurial and happy and being entrepreneurial to be happy (down the road...)

    32. Continued engagement is a predictor of entrepreneurial business

      we're more likely to continue acting entrepreneurially if it satisfies us (if it gives us a sense of pleasure, or well-being) as a result of pursuing the dream, seeking and creating new opportunities... being bold and innovative and creative... (even though there are risks and great costs associated with these too)

    33. The third factor examined in this study is how an entrepreneur's definition of personal success impacts their subjective well-being.

      *

    34. The first factor considered in this study was flow, which describes a state of complete attentional energy focused on the task at hand

      already introduced 4 weeks ago...

    35. Yet the directionality is not clear. Are happy workers more productive, or are more productive workers happier?

      where do you stand?

    36. Of interest in this paper are the psychological factors that contribute to entrepreneurs' well-being

      focusing on one's sense of subjective well-being (otherwise known as happiness).

    1. Born of a hanged woman, the orphan Guts is taken in and raised by a mercenary. Despite a hard childhood, the child grows into a strong young man and a highly capable fighter. His strength lets him wield a massive sword longer than a grown man is tall. During one of many military campaigns he meets Casca and the charismatic Griffith, leader of the mercenary Band of the Hawk and holder of a mysterious pendant, the Behelit. Guts clearly catches his eye and becomes one of them. Among the members of the band he finds true friends, and over time he earns the position of Griffith's right hand. This is only one chapter of the story from the sprawling manga saga by the author Kentaro Miura. The very dark fantasy tale keeps drawing in new fans, because even more than twenty years after its debut in 1990 the plot still is not finished. In 1997 Berserk also received a 25-episode anime adaptation, which was likewise a great success. Roughly fifteen years later we come back to anime again, though unlike its series predecessor it will be split into several longer films. The first of the series of films meant to cover the plot of the entire manga was released in 2012. It bears the title Berserk Golden Age Arc I: Egg of the Supreme Ruler and, as one of three films, captures part of the "Golden Age" story.
    1. The Animatrix

      Welcome to the world of the Animatrix, a visionary fusion of computer animation and the Japanese school of animation from the world's most acclaimed creators of animated films. Enjoy a film made before The Matrix and learn about the last cities of the human race, the war between machines and humans, and the final decline of humanity. Witness the last flight of the Osiris, which set the stage for the film The Matrix Reloaded and the video game Enter the Matrix. Round out your knowledge of the Matrix and get information you won't find anywhere else. It's time to plug in... (official distributor's text)

    1. Successful entrepreneurship is not really just a story about intelligence in the traditional sense but more fully a story about successful intelligence—the strategic merger of analytical, creative, and practical intelligence. All three kinds of intelligence can be developed and are developed through good use of experience.

      **

    2. When entrepreneurs and others shape the environment, they are basically applying successful intelligence to idea generating

      !!

    3. They may end up on used-car lots

      that's quite the burn, for an academic article!

    4. People who are high in creative intelligence but not in the other kinds may be good at coming up with ideas but often are not good either at knowing whether their ideas are good ones (analytical intelligence) or at selling their ideas to others (practical intelligence)

      *

    5. the tests of practical intelligence I mentioned above measure adaptive skills

      ok

    6. Successful intelligence is applied in order to balance adapting to, selecting, and shaping environments. When one adapts to the environment, one changes oneself in order to fit into the environment.

      how do you demonstrate this?

    7. People who are high in analytical intelligence but not the other kinds often are good memorizers and analyzers, but they need other people's ideas to remember and analyze. They make poor entrepreneurs, because entrepreneurs simply must be idea generators to succeed

      interesting!

    8. This is the option in which entrepreneurs mustspecialize.

      this gets into persuasion & rhetoric. In essence, also "selling" but not selling things, but ideas, selling people on your version of reality...

    9. The third option is shaping

      !

    10. The most important kind of intelligence for an entrepreneur, or really anyone else, is successful intelligence, which involves a balance of analytical (IQ-based), creative, and practical intelligence

      connecting the 3 types of intelligence...

    11. One needs the creative intelligence to come up with new ideas, the analytical intelligence to evaluate whether the ideas are good ones, and the practical intelligence to figure out a way to sell these ideas to people who may not want to hear about them

      synergy! For success ...

    12. What matters is not the amount of experience one has but how much one has learned from that experience

      you probably have experience with this ...

    13. In sum, tests of practical intelligence are useful predictors of job-related skills, independently of IQ-based tests.

      underwhelming. This really drives home the fact that this article is good mostly as a shell -- kind-of boring and potentially difficult to see (at least in an immediate fashion) how it can relate in a real sense to students' lives (beyond the idea of multiple intelligences). Still, the lens is really useful for helping us focus on social competence and tacit knowledge (practical intelligence) for moving us forward... and considering what might be necessary in order to achieve success in a given field...

    14. One starts by interviewing successful people in a given job and asking them how they performed in the critical incidents that distinguish those who are highly successful in a field from those who are not. We then try to extract the tacit knowledge underlying the successful actions.

      the method...

    15. Tacit knowledge for the workplace. Test-takers are presented with situations typical of those encountered in low-level jobs in the workplace

      classic job interview question...

    16. Some entrepreneurs leave conventional business settings because they are not interested in playing the game of figuring out what the tacit knowledge of the organizational environment is. They would rather play by their own rules than learn other people's.

      link to independent spirit and counterfactual knowledge...

    17. Tacit knowledge is the knowledge that often is most important for success in the workplace, but it is the knowledge that people must pick up on their own

      yowza! School can't teach tacit knowledge!

    18. Of course, people who are high in practical intelligence may also be high in academic intelligence. The two are not necessarily negatively related

      hah

    19. What matters for growth of practical intelligence is not experience but ratherlearning from experience.

      (link to the growth mindset)

    20. Modifiability of practical intelligence. Practical intelligence can be developed. Indeed, it must be developed. People are not born with the kinds of common sense they show in their everyday lives

      crucial insight

    21. The entrepreneurial children, like the entrepreneur in the forest, seemed to have a kind of intelligence not well measured by conventional tests

      Their "street math" was not replicated in the classroom... They exhibited "street smarts" (not book smarts)

      Have you ever been in a situation where your book smarts failed you in the "real" world? Contrarily, how often has "street" knowledge helped you in school?

    22. Intelligence is not a single entity.

      crucial insight - we have access to multiple intelligences!

      Please note - this is definitely related to the "theory of multiple intelligences" (which was the basis for the reality TV series "Canada's Smartest Person"):

      https://en.wikipedia.org/wiki/Theory_of_multiple_intelligences

    23. Clearly, more is involved in entrepreneurial success than just the academic side of intelligence

      Think about how often your ability to act in a "socially competent" manner came off as intelligent (a way of adapting to, shaping, and sometimes selecting your environment).

      Knowing the right thing to say or how to act (cultural capital) comes off as "smart"

    24. There is obviously more to intelligence than what conventional tests of intelligence

      the moral of the story

    25. Successful entrepreneurs appear to be higher in social competence than are unsuccessful ones. In particular, four factors seem to underlie this social competence: (a) social perception (which involves accuracy in perceiving others), (b) impression management (which involves techniques for inducing positive reactions in others), (c) persuasiveness (which involves the ability to change others' views or behavior in desired directions), and (d) social adaptability (which involves feeling comfortable in a wide range of situations). These variables, especially social perception, seem to be key to entrepreneurial success.

      ** remember these 4 factors of social competence! (they will be very helpful next week!)

    26. Successful entrepreneurship requires a blend of analytical, creative, and practical aspects of intelligence, which, in combination, constitute successful intelligence.

      ok - getting right to the heart of it. Introduces the key concepts and emphasizes its 3 parts.

    27. Perhaps you know the story of the college professor and the entrepreneur who arewalking in a forest.

      funny story - the punchline is at the end of the next paragraph!

    28. Sometimes they change themselves to fit the environment; other times, they change the environment to fit them; still other times, they find a different environment

      what I was talking about at the end of last Thursday's class...

    29. Successful intelligence is the ability to succeed in life, according to one's own conception of success, within one's environmental context.

      key concept, defined

    30. Creative intelligence is needed in order to think flexibly and to be ahead of the pack rather than merely with it.

      link to divergent thinking...

    31. Creative thinkers defy the crowd, seeing alternative ways of defining and solving problems that others often do not see

      link to opportunity discovery/creation

    32. Creative intelligence is used to generate ideas that are novel, high in quality, and appropriate to the task one faces.

      ok - definition by contextualization (and very similar to last week's material...)

    33. To be effective on the job, however, requires a third kind of skill beyond the analytical (IQ-based) and the practical.

      completing the picture...

    34. Practical skills are necessary to apply these analytical tools correctly to the problems really facing one on the job

      but practical intelligence guides the application of intellectual intelligence...

    35. There is some kind of construct of practical intelligence that is distinct from the kind of academic intelligence

      yep

    1. This is a story we need to know. Industrial transformation turned out to be a bubble of promise followed by lost livelihoods and damaged landscapes. And yet: such documents are not enough. If we end the story with decay, we abandon all hope—or turn our attention to other sites of promise and ruin, promise and ruin.

      Important concept: details the cycle of extracting natural (and finite) resources.

      Style elements: speaks directly to the reader, repeats important phrases.


    1. The great heterogeneity of the empirical evidence is apparent when looking at more recent firm-level analyses, which do not confirm that automation has a negative effect on overall employment and wages once an adaptation period is taken into account (Lane and Saint-Martin, 2021). However, much of the empirical research remains focused on robotization and automation, so a more specific analysis of the impact of generative AI on employment outcomes is still needed.

      This concerns the biases tied to the data collected, the biases in the statistics, and the lack of neutrality of certain studies

    2. The initial reduction in labor demand in highly automated fields could potentially lead to a resurgence of traditional gender norms,

      A reference to our reflection above concerning the biases that pre-existed AI and new technologies

    3. AI also has the potential to increase barriers to entry for new generations of workers due to unequal access to education

      A primary concern to spell out when talking about AI and the labour market is digital accessibility, an accessibility that has not yet spread across the whole world. For now, the question of AI in the world of work only concerns countries and societies described as "developed" (mostly in the West), so making a major shift toward technologies that are inaccessible to many will only widen the gap between them.

    1. Welcome back and by now you should understand the difference between stateless and stateful security protection.

      In this lesson I want to talk about one security feature of AWS VPCs in a little bit more depth, and that's network access control lists, known as NACLs.

      Now we do have a lot to cover so let's jump in and get started.

      A network access control list can be thought of as a traditional firewall available within AWS VPCs so let's look at a visual example.

      Picture a subnet within an AWS VPC which has two EC2 instances, A and B.

      The first thing to understand, and this is core to how NACLs work within AWS, is that they are associated with subnets.

      Every subnet has an associated network ACL and this filters data as it crosses the boundary of that subnet.

      In practice this means any data coming into the subnet is affected and data leaving the subnet is affected.

      But, and this is super important to remember, connections between things within that subnet, such as between instance A and instance B in this example, are not affected by network ACLs.

      Each network ACL contains a number of rules, two sets of rules to be precise.

      We have inbound rules and outbound rules.

      Now inbound rules only affect data entering the subnet and outbound rules affect data leaving the subnet.

      Remember from the previous lesson this isn't always matching directly to request and response.

      A request can be either inbound or outbound as can a response.

      These inbound and outbound rules are focused only on the direction of traffic not whether it's request or response.

      In fact, and I'll cover this very soon, NACLs are stateless, which means they don't know if traffic is request or response.

      It's all about direction.

      Now rules match the destination IP or IP range, destination port or port range together with the protocol and they can explicitly allow or explicitly deny traffic.

      Remember this one network ACLs offer both explicit allows and explicit denies.

      Now rules are processed in order.

      First a network ACL determines if the inbound or outbound rules apply.

      Then it starts from the lowest rule number.

      It evaluates traffic against each individual rule until it finds a match.

      Then that traffic is either allowed or denied based on that rule and then processing stops.

      Now this is critical to understand because it means that if you have a deny rule and an allow rule which match the same traffic, and the deny rule comes first, then the allow rule will never be processed.

      Lastly there's a catch-all shown by the asterisk in the rule number and this is an implicit deny.

      If nothing else matches then traffic will be denied.

      So those are the basics.
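      To make the rule-ordering behavior concrete, here is a minimal Python sketch (not the AWS implementation) of how a stateless NACL evaluates one direction's rule set: lowest rule number first, first match wins, and anything unmatched falls through to the implicit deny. The rule numbers, CIDRs, and ports are made up for illustration.

```python
import ipaddress

def evaluate(rules, src_ip, port, protocol):
    """Walk rules lowest-number-first; the first matching rule decides.
    Anything unmatched hits the implicit deny (the '*' rule)."""
    for rule in sorted(rules, key=lambda r: r["number"]):
        if (protocol == rule["protocol"]
                and rule["port_from"] <= port <= rule["port_to"]
                and ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["cidr"])):
            return rule["action"]
    return "deny"  # implicit deny, shown as '*' in the console

inbound = [
    # A deny with a lower number is processed before a broader allow.
    {"number": 90,  "protocol": "tcp", "cidr": "203.0.113.0/24",
     "port_from": 443, "port_to": 443, "action": "deny"},
    {"number": 110, "protocol": "tcp", "cidr": "0.0.0.0/0",
     "port_from": 443, "port_to": 443, "action": "allow"},
]

print(evaluate(inbound, "203.0.113.10", 443, "tcp"))  # deny rule 90 matches first
print(evaluate(inbound, "198.51.100.7", 443, "tcp"))  # allowed by rule 110
print(evaluate(inbound, "198.51.100.7", 22, "tcp"))   # no match: implicit deny
```

      Notice that swapping the two rule numbers would let the 203.0.113.0/24 traffic through, which is exactly the ordering pitfall described above.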

      Next let's move on to some more complex elements of network ACLs.

      Now I just mentioned that network ACLs are stateless and this means that rules are required for both the request and the response part of every communication.

      You need individual rules for those so one inbound and one outbound.

      Take this example a multi-tiered application running in a VPC.

      We've got a web server in the middle and an application server on the left.

      On the right we have a user Bob using a laptop and he's accessing the website.

      So he makes a connection using HTTPS which is TCP port 443 and this is the request as you know by now and this is also going to mean that a response is required using the ephemeral port range.

      This ephemeral port is chosen at random from the available range decided by the operating system on Bob's laptop.

      Now to allow for this initial communication if we're using network ACLs then we'll need to have one associated with the web subnet and it will need rules in the inbound and outbound sections of that network ACL.

      Notice how on the inbound rule set we have rule number 110 which allows connections from anywhere, and this is signified by 0.0.0.0/0, through this network ACL as long as it's using TCP port 443.

      So this is what allows the request from Bob into the web server.

      We also have on the outbound rule set rule number 120 and this allows outbound traffic to anywhere, again 0.0.0.0/0, as long as the protocol is TCP using the port range of 1024 to 65535, and this is the ephemeral port range which I mentioned in the previous lesson.

      Now this is not amazingly secure but with stateless firewalls this is the only way.

      Now we also have the implicit denies and this is denoted by the rules with the star in the rule number and this means that anything which doesn't match rule 110 or 120 is denied.

      Now it's also worth mentioning that while I do have rules 110 and 120 numbered differently, rule numbers only need to be unique within each rule set, so we could have the single rule number 110 on both rule sets and that would be okay.

      It's just easier to illustrate this if I use unique rule numbers for each of the different rule sets.
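      The stateless rule pair from the lesson can be sketched in a few lines of Python: the inbound rule 110 admits the HTTPS request, and the outbound rule 120 admits the response on an ephemeral port. This is an illustrative model, not the AWS API.

```python
# The web subnet NACL from the lesson, modeled as plain data.
EPHEMERAL = (1024, 65535)  # response port range chosen by Bob's OS

inbound_rules = [
    {"number": 110, "protocol": "tcp", "ports": (443, 443), "action": "allow"},
]
outbound_rules = [
    {"number": 120, "protocol": "tcp", "ports": EPHEMERAL, "action": "allow"},
]

def allowed(rules, port, protocol="tcp"):
    """Lowest rule number first; unmatched traffic is implicitly denied."""
    for rule in sorted(rules, key=lambda r: r["number"]):
        lo, hi = rule["ports"]
        if protocol == rule["protocol"] and lo <= port <= hi:
            return rule["action"] == "allow"
    return False  # implicit deny

print(allowed(inbound_rules, 443))     # Bob's HTTPS request gets in
print(allowed(outbound_rules, 53213))  # response on a random ephemeral port gets out
print(allowed(outbound_rules, 443))    # no outbound 443 rule, so implicit deny
```

      Because the firewall is stateless, deleting either rule breaks the whole conversation: the request and the response are judged independently.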

      Now let's move on and increase the complexity a little.

      So we have the same architecture we have Bob on the right, the web subnet in the middle and the application subnet on the left.

      You know now that because network ACLs are stateless each communication requires one request rule and one response rule.

      This becomes more complex when you have a multi-tiered architecture which operates across multiple subnets and let's step through this to illustrate why.

      Let's say that Bob initiates a connection to the web server. We know about this already because I just covered it.

      If we have a network ACL around the web subnet we'll need an inbound rule on the web network ACL.

      There's also going to be response traffic so this is going to use the ephemeral port range and this is going to need an outbound rule on that same web network ACL so this should make sense so far.

      But also the web server might need to communicate with the app server using some application TCP port.

      Now this is actually crossing two subnet boundaries, the web subnet boundary and the application subnet boundary, so it's going to need an outbound rule on the web subnet NACL and also an inbound rule on the application subnet NACL.

      Then we have the response for that as well, from the app server through to the web server, and this is going to be using ephemeral ports. But this also crosses two subnet boundaries: it leaves the application subnet, which will need an outbound rule on that NACL, and it enters the web subnet, which will also need an inbound rule on that network ACL. And if each of those servers needs software updates, it will get even more complex really quickly.

      You always have to be aware of these rule pairs, the application port request and the ephemeral response, for every single communication. In some cases you're going to have a multi-tier architecture, and this might mean the communications go through different subnets.

      If you need software updates this will need more rules; if you use network address translation or NAT you might need more rules still.
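      Counting the rules makes the multi-tier cost obvious. A sketch of the single web-to-app connection described above, with a made-up application port of 8080:

```python
# One web -> app connection crossing two subnet boundaries.
# Each NACL needs a rule for the request direction and one for the response.
APP_PORT = 8080            # hypothetical application port
EPHEMERAL = "1024-65535"   # ephemeral response range

required_rules = {
    "web-subnet-nacl": [
        ("outbound", f"tcp/{APP_PORT}"),   # request leaving the web subnet
        ("inbound",  f"tcp/{EPHEMERAL}"),  # response coming back in
    ],
    "app-subnet-nacl": [
        ("inbound",  f"tcp/{APP_PORT}"),   # request entering the app subnet
        ("outbound", f"tcp/{EPHEMERAL}"),  # response leaving again
    ],
}

total = sum(len(rules) for rules in required_rules.values())
print(f"one connection needs {total} rules across 2 NACLs")  # 4 rules
```

      Add Bob's original HTTPS pair on the web NACL, plus outbound update traffic for each server, and the rule count climbs quickly, which is exactly why stateless NACLs get hard to manage in multi-tier designs.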

      You'll need to worry about this if you use network ACLs within a VPC, for traffic to a VPC, traffic from a VPC, or traffic between subnets inside that VPC.

      When a VPC is created it's created with a default network ACL and this contains inbound and outbound rule sets which have the default implicit deny but also a catch-all allow, and this means that the net effect is that all traffic is allowed. So the default within a VPC is that NACLs have no effect, they aren't used. This is designed to be beginner friendly and reduce admin overhead.

      AWS prefer using security groups which I'll be covering soon.

      If you create your own custom network ACLs though that's a different story.

      Custom NACLs are created for a specific VPC and initially they're associated with no subnets.

      They only have one rule on both the inbound and outbound rule sets, which is the default deny, and the result is that if you associate this custom network ACL with any subnets, all traffic will be denied. So be careful with this, it's radically different behavior than the default network ACL created with a VPC.

      Now at this point I just want to cover some finishing key points which you need to be aware of for any real-world usage and when you're answering exam questions.

      So network access control lists, remember they're known as NACLs, are stateless, so they view request and response as different things, and you need to add rules both for the request and for the response.

      A NACL only affects data which is crossing the subnet boundary, so communications between instances in the same subnet are not affected by a network ACL on that subnet.

      Now this can mean that if you do have data crossing between subnets then you need to make sure that each NACL on both of those subnets has the appropriate inbound and outbound rules, so you end up with a situation where one connection can in theory need two rules on each NACL if that connection is crossing two different subnet boundaries.

      Now NACLs are able to explicitly allow traffic and explicitly deny, and the deny is important because, as you'll see when I talk about security groups, this is a capability that is unique to network ACLs.

      So network ACLs allow you to block specific IPs or specific IP ranges which are associated with bad actors so they're a really good security feature when you need to block any traffic attempting to exploit your systems.

      Now network ACLs are not aware of any logical resources; they only allow you to use IPs and CIDR ranges, ports and protocols. You cannot reference logical resources within AWS, and NACLs also cannot be assigned to logical resources; they're only assigned to subnets within VPCs within AWS.

      Now NACLs are very often used together with security groups, as I mentioned, to add the capability to explicitly deny bad IPs or bad networks. So generally you would use security groups to allow traffic and NACLs to deny traffic, and I'll talk about exactly how this works in the next lesson.

      Now each subnet within a VPC has one NACL associated with it; it's either going to be the default network ACL for that VPC or a custom one which you create and associate.

      A single NACL though can be associated with many different subnets, so while a subnet can only have one network ACL, one network ACL can be associated with many different subnets.

      Now at this point that is everything that I wanted to cover about network ACLs for this lesson, so go ahead, complete the video, and when you're ready I'll look forward to you joining me in the next lesson.

    1. nd optimization step extracts the point's feature embedding. Memory networks also share some connections with our work, in particular, if we interpret the neighborhood of a node as the memory, which is

      A new method is proposed here.

    1. eLife Assessment

      This important study provides proof of principle that C. elegans models can be used to accelerate the discovery of candidate treatments for human Mendelian diseases by detailed high-throughput phenotyping of strains harboring mutations in orthologs of human disease genes. The data are compelling and support an approach that enables the potential rapid repurposing of FDA-approved drugs to treat rare diseases for which there are currently no effective treatments. The authors should provide a clearer explanation of how the statistical analyses were performed, as well as a link to a GitHub repository to clarify how figures and tables in the manuscript were generated from the phenotypic data.

    2. Reviewer #1 (Public review):

      Summary:

      As the scientific community identifies increasing numbers of genes and genetic variants that cause rare human diseases, a challenge in the field is to quickly identify pharmacological interventions to address known deficits. The authors point out that defining phenotypic outcomes required for drug screen assays is often a bottleneck, and emphasize how invertebrate models can be used for quick ID of compounds that may address genetic deficits. A major contribution of this work is to establish a framework for potential intervention drug screening based on quantitative imaging of morphology and mobility behavior, using methods that the authors show can define subtle phenotypes in a high proportion of disease gene knockout mutants. Overall, the work constitutes an elegant combination of previously developed high-volume imaging with highly detailed quantitative phenotyping (and some paring down to specific phenotypes) to establish proof of principle on how the combined applications can contribute to screens for compounds that may address specific genetic deficits, which can, in turn, suggest both mechanism and therapy.

      In brief, the authors selected 25 genes for which loss of function is implicated in human neuro-muscular disease and engineered deletions in the corresponding C. elegans homologs. The authors then imaged morphological features and behaviors prior to, during, and after blue light stimuli, quantitating features, and clustering outcomes as they elegantly developed previously (PMID 35322206; 30171234; 30201839). In doing so, phenotypes in 23/25 tested mutants could be separated enough to distinguish WT from mutant and half of those with adequate robustness to permit high-throughput screens, an outcome that supports the utility of related general efforts to ID phenotypes in C. elegans disease orthologs. A detailed discussion of 4 ciliopathy gene defects, and NALCN-related channelopathy mutants reveals both expected and novel phenotypes, validating the basic approach to modeling vetted targets and underscoring that quantitative imaging approaches reiterate known biology.

      The authors then screened a library of nearly 750 FDA-approved drugs for the capacity to shift the unc-80 NALCN channel-disrupted phenotype closer to the wild type. Top "mover" compounds shift outcomes in the experimental outcome space; and also reveal how "side effects" can be evaluated to prioritize compounds that confer the fewest changes of other parameters away from the center.

      Strengths:

      Although the imaging and data analysis approaches have been reported and the screen is restricted in scope and intervention exposure, it is impressive, encouraging and important that the authors strongly combine tools to demonstrate how quantitative imaging phenotypes can be integrated with C. elegans genetics to accelerate the identification of potential modulators of disease (easily extendable to other goals). Generation of deletion alleles and documentation of their associated phenotypes (available in supplemental data) provide potentially useful reagents/data to the field. The capacity to identify "over-shooting" of compound applications with suggestions for scale back and to sort efficacious interventions to minimize other changes to behavioral and physical profiles is a strong contribution.

      Weaknesses:

      The work does not have major weaknesses, and in revision, the authors have expanded the discussion to potential utility and application in the field.

      The authors have also taken into account minor modifications in writing.

    3. Reviewer #2 (Public review):

      Summary and strengths:

      O'Brien et al. present a compelling strategy to both understand rare disease that could have a neuronal focus and discover drugs for repurposing that can affect rare disease phenotypes. Using C. elegans, they optimize the Brown lab worm tracker and Tierpsy analysis platform to look at movement behaviors of 25 knockout strains. These gene knockouts were chosen based on a process to identify human orthologs that could underlie rare diseases. I found the manuscript interesting and a powerful approach to make genotype-phenotype connections using C. elegans. Given the rate that rare Mendelian diseases are found and candidate genes suggested, human geneticists need to consider orthologous approaches to understand the disease and seek treatments on a rapid time scale. This approach is one such way. Overall, I have a few minor suggestions and some specific edits.

      Weaknesses:

      (1) Throughout the text on figures, labels are nearly impossible to read. I had to zoom into the PDF to determine what the figure was showing. Please make text in all figures a minimum of 10 point font. Similarly, Figure 2D point type is impossible to read. Points should be larger in all figures. Gene names should be in italics in all figures, following C. elegans convention.

      (2) I have a strong bias against the second point in Figure 1A. Sequencing of trios, cohorts, or individuals NEVER identifies causal genes in the disease. This technique proposes a candidate gene. Future experiments (oftentimes in model organisms) are required to make those connections to causality. Please edit this figure and parts of the text.

      (3) How were the high-confidence orthologs filtered from 767 to 543 (lines 128-131)? Also, the choice of the final list of 25 genes is not well justified. Please expand more about how these choices were made.

      (4) Figures 3 and 4, why show all 8289 features? It might be easier to understand and read if only the 256 Tierpsy features were plotted in the heat maps.

      (5) The unc-80 mutant screen is clever. In the feature space, it is likely better to focus on the 256 less-redundant Tierpsy features instead of just a number of features. It is unclear to me how many of these features are correlated and not providing more information. In other words, the "worsening" of less-redundant features is far more of a concern than "worsening" of 1000 correlated features.

    4. Reviewer #3 (Public review):

      In this study, O'Brien et al. address the need for scalable and cost-effective approaches to finding lead compounds for the treatment of the growing number of Mendelian diseases. They used state-of-the-art phenotypic screening based on an established high-dimensional phenotypic analysis pipeline in the nematode C. elegans.

      First, a panel of 25 C. elegans models was created by generating CRISPR/Cas9 knock-out lines for conserved human disease genes. These mutant strains underwent behavioral analysis using the group's published methodology. Clustering analysis revealed common features for genes likely operating in similar genetic pathways or biological functions. The study also presents results from a more focused examination of ciliopathy disease models.

      Subsequently, the study focuses on the NALCN channel gene family, comparing the phenotypes of mutants of nca-1, unc-77, and unc-80. This initial characterization identifies three behavioral parameters that exhibit significant differences from the wild type and could serve as indicators for pharmacological modulation.

      As a proof-of-concept, O'Brien et al. present a drug repurposing screen using an FDA-approved compound library, identifying two compounds capable of rescuing the behavioral phenotype in a model with UNC80 deficiency. The relatively short time and low cost associated with creating and phenotyping these strains suggest that high-throughput worm tracking could serve as a scalable approach for drug repurposing, addressing the multitude of Mendelian diseases. Interestingly, by measuring a wide range of behavioural parameters, this strategy also simultaneously reveals deleterious side effects of tested drugs that may confound the analysis.

      Considering the wealth of data generated in this study regarding important human disease genes, it is regrettable that the data is not made accessible to researchers less versed in data analysis methods. This diminishes the study's utility. It would have a far greater impact if an accessible and user-friendly online interface were established to facilitate data querying and feature extraction for specific mutants. This would empower researchers to compare their findings with the extensive dataset created here.

      Another technical limitation of the study is the use of single alleles. Large deletion alleles were generated by CRISPR/Cas9 gene editing. At first glance, this seems like a good idea because it limits the risk that background mutations, present in chemically-generated alleles, will affect behavioral parameters. However, these large deletions can also remove non-coding RNAs or other regulatory genetic elements, as found, for example, in introns. Therefore, it would be prudent to validate the behavioral effects by testing additional loss-of-function alleles produced through early stop codons or targeted deletion of key functional domains.

    5. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review): 

      Summary: 

      As the scientific community identifies increasing numbers of genetic variants that cause rare human diseases, a challenge is how the field can most quickly identify pharmacological interventions to address known deficits. The authors point out that defining phenotypic outcomes required for drug screen assays is often challenging, and emphasize how invertebrate models can be used for quick ID of compounds that may address genetic deficits. A major contribution of this work is to establish a framework for potential intervention drug screening based on quantitative imaging of morphology and mobility behavior, using methods that the authors show can define subtle phenotypes in a high proportion of disease gene knockout mutants. 

      Overall, the work constitutes an elegant combination of previously developed high-volume imaging with highly detailed quantitative phenotyping (and some paring down to specific phenotypes) to establish proof of principle on how the combined applications can contribute to screens for compounds that may address specific genetic deficits, which can suggest both mechanism and therapy. 

      In brief, the authors selected 25 genes for which loss of function is implicated in human neuro-muscular disease and engineered deletions in the corresponding C. elegans homologs. The authors then imaged morphological features and behaviors prior to, during, and after blue light stimuli, quantitating features, and clustering outcomes as they elegantly developed previously (PMID 35322206; 30171234; 30201839). In doing so, phenotypes in 23/25 tested mutants could be separated enough to distinguish WT from mutant and half of those with adequate robustness to permit high-throughput screens, an outcome that supports the utility of general efforts to ID phenotypes in C. elegans disease orthologs using this approach. A detailed discussion of 4 ciliopathy gene defects, and NALCN-related channelopathy mutants reveals both expected and novel phenotypes, validating the basic approach to modeling vetted targets and underscoring that quantitative imaging approaches reiterate known biology. The authors then screened a library of nearly 750 FDA-approved drugs for the capacity to shift the unc-80 NALCN channel-disrupted phenotype closer to the wild type. Top "mover" compounds move outcomes in the experimental outcome space; and also reveal how "side effects" can be evaluated to prioritize compounds that confer the fewest changes of other parameters away from the center. 

      Strengths: 

      Although the imaging and data analysis approaches have been reported and the screen is limited in scope and intervention exposure, it is important that the authors strongly combine individual approach elements to demonstrate how quantitative imaging phenotypes can be integrated with C. elegans genetics to accelerate the identification of potential modulators of disease (easily extendable to other goals). Generation of deletion alleles and documentation of their associated phenotypes (available in supplemental data) provide potentially useful reagents/data to the field. The capacity to identify "over-shooting" of compound applications with suggestions for scale back and to sort efficacious interventions to minimize other changes to behavioral and physical profiles is a strong contribution. 

      Weaknesses: 

      The work does not have major weaknesses, although it may be possible to expand the discussion to increase utility in the field: 

      (1) Increased discussion of the challenges and limitations of the approach may enhance successful adaptation and application in the field. 

      It is quite possible that morphological and behavioral phenotypes have nothing to do with disease mechanisms and rather reflect secondary outcomes, such that positive hits will address "off-target" consequences. 

      This is possible and can only be determined with human data. We now discuss the possibility in the discussion.

      The deletion approach is adequately justified in the text, but the authors may make the point somewhere that screening target outcomes might be enhanced by the inclusion of engineered alleles that match the human disease condition. Their work on sod-1 alleles (PMID 35322206) might be noted in this discussion. 

      We agree and now mention this work in the discussion. We are currently working on a collection of strains with patient-specific mutations.

      Drug testing here involved a strikingly brief exposure to a compound, which holds implications for how a given drug might engage in adult animals. The authors might comment more extensively on extended treatments that include earlier life or more extended targeting. The assumption is that administering different exposure periods and durations, but if the authors are aware as to whether there are challenges associated with more prolonged applications, larger scale etc. it would be useful to note them. 

      More prolonged applications are definitely possible. We chose short treatments for this screen to model the potential for changing neural phenotypes once developmental effects of the mutation have already occurred. We now briefly discuss this choice and the potential of longer treatments in the discussion.

      (2) More justification of the shift to only a few target parameters for judging compound effectiveness. 

      - In the screen in Figure 4D and text around 313, 3 selected core features of the unc-80 mutant (fraction that blue-light pause, speed, and curvature) were used to avoid the high replicate requirements to identify subtle phenotypes. Although this strategy was successful as reported in Figure 5, the pared-down approach seems a bit at odds with the emphasis on the range of features that can be compared mutant/wt with the author's powerful image analysis. Adding details about the reduced statistical power upon multiple comparisons, with a concrete example calculated, might help interested scientists better assess how to apply this tool in experimental design. 

      To empirically test the effect of including more features on the subsequent screen, we have repeated the analysis using increasing numbers of features. In a new supplementary figure we find increasing the number of features reduces our power to detect rescue. At 256 features, we would not be able to detect any compounds that rescued the disease model phenotype.

      (3) More development of the side-effect concept. The side effects analysis is interesting and potentially powerful. Prioritization of an intervention because of minimal perturbation of other phenotypes might be better documented and discussed a bit further; how reliably does the metric of low side effects correlate with drug effectiveness? 

      Ultimately this can only be determined with clinical trial data on multiple drugs, but there are currently no therapeutic options for UNC80 deficiency in humans. We have included some extra discussion of the side effect concept.

      Reviewer #2 (Public Review): 

      Summary and strengths: 

      O'Brien et al. present a compelling strategy to both understand rare disease that could have a neuronal focus and discover drugs for repurposing that can affect rare disease phenotypes. Using C. elegans, they optimize the Brown lab worm tracker and Tierpsy analysis platform to look at the movement behaviors of 25 knockout strains. These gene knockouts were chosen based on a process to identify human orthologs that could underlie rare diseases. I found the manuscript interesting and a powerful approach to making genotype-phenotype connections using C. elegans. Given the rate at which rare Mendelian diseases are found and candidate genes suggested, human geneticists need to consider orthologous approaches to understand the disease and seek treatments on a rapid time scale. This approach is one such way. Overall, I have a few minor suggestions and some specific edits. 

      Weaknesses: 

      (1) Throughout the text on figures, labels are nearly impossible to read. I had to zoom into the PDF to determine what the figure was showing. Please make text in all figures a minimum of 10-point font. Similarly, the Figure 2D point type is impossible to read. Points should be larger in all figures. Gene names should be in italics in all figures, following C. elegans convention. 

      We have updated all figures with larger labels and, where necessary, split figures to allow for better readability. We’ve also corrected italicisation.

      (2) I have a strong bias against the second point in Figure 1A. Sequencing of trios, cohorts, or individuals NEVER identifies causal genes in the disease. This technique proposes a candidate gene. Future experiments (oftentimes in model organisms) are required to make those connections to causality. Please edit this figure and parts of the text. 

      We have removed references to causation. We were thinking of cases where a known variant is found in a patient where causality has already been established rather than cases of new variant discovery.

      (3) How were the high-confidence orthologs filtered from 767 to 543 (lines 128-131)? Also, the choice of the final list of 25 genes is not well justified. Please expand more about how these choices were made. 

      We now explain the extra keyword filtering step. For the final filtering step, we simply examined the list and chose 25. There is therefore little justification to provide and we acknowledge these cannot be seen as representative of the larger set according to well-defined rules. The choice was based on which genes we thought would be interesting using their descriptions or our prior knowledge (“subjective interestingness” in the main text).

      (4) Figures 3 and 4, why show all 8289 features? It might be easier to understand and read if only the 256 Tierpsy features were plotted in the heat maps. 

      In this case, we included all features because they were all tested for differences between mutants and controls. By consistently using all features for each fingerprint, we can be sure that the differing features we highlight in box plots can also be located in the fingerprint.

      (5) The unc-80 mutant screen is clever. In the feature space, it is likely better to focus on the 256 less-redundant Tierpsy features instead of just a number of features. It is unclear to me how many of these features are correlated and not providing more information. In other words, the "worsening" of less-redundant features is far more of a concern than the "worsening" of 1000 correlated features. 

      This is a good point. We’ve redone the analysis using the Tierpsy 256 feature set and included this as a supplementary figure. We find that the same trend exists when looking at this reduced feature set.
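For readers interested in how a less-redundant feature subset can be derived in general, the sketch below greedily keeps a feature only if it is weakly correlated with all features already kept. This is a generic illustration with toy data and made-up feature names; the Tierpsy 256 set was defined previously and was not produced by this exact procedure.

```python
import numpy as np

def correlation_filter(X, names, threshold=0.9):
    """Greedily select a less-redundant feature subset.

    A feature is kept only if its absolute Pearson correlation with
    every already-kept feature is below `threshold`.
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return [names[j] for j in kept]

# Toy data: the second feature is nearly a copy of the first,
# the third is independent of both
rng = np.random.default_rng(0)
f0 = rng.normal(size=200)
f2 = rng.normal(size=200)
X = np.column_stack([f0, f0 + 0.01 * rng.normal(size=200), f2])
print(correlation_filter(X, ["speed", "speed_norm", "curvature"]))
# The near-duplicate "speed_norm" is dropped
```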

      Reviewer #3 (Public Review): 

      In this study, O'Brien et al. address the need for scalable and cost-effective approaches to finding lead compounds for the treatment of the growing number of Mendelian diseases. They used state-of-the-art phenotypic screening based on an established high-dimensional phenotypic analysis pipeline in the nematode C. elegans. 

      First, a panel of 25 C. elegans models was created by generating CRISPR/Cas9 knock-out lines for conserved human disease genes. These mutant strains underwent behavioral analysis using the group's published methodology. Clustering analysis revealed common features for genes likely operating in similar genetic pathways or biological functions. The study also presents results from a more focused examination of ciliopathy disease models. 

      Subsequently, the study focuses on the NALCN channel gene family, comparing the phenotypes of mutants of nca-1, unc-77, and unc-80. This initial characterization identifies three behavioral parameters that exhibit significant differences from the wild type and could serve as indicators for pharmacological modulation. 

      As a proof-of-concept, O'Brien et al. present a drug repurposing screen using an FDA-approved compound library, identifying two compounds capable of rescuing the behavioral phenotype in a model with UNC80 deficiency. The relatively short time and low cost associated with creating and phenotyping these strains suggest that high-throughput worm tracking could serve as a scalable approach for drug repurposing, addressing the multitude of Mendelian diseases. Interestingly, by measuring a wide range of behavioural parameters, this strategy also simultaneously reveals deleterious side effects of tested drugs that may confound the analysis. 

      Considering the wealth of data generated in this study regarding important human disease genes, it is regrettable that the data is not actually made accessible. This diminishes the study's utility. It would have a far greater impact if an accessible and user-friendly online interface were established to facilitate data querying and feature extraction for specific mutants. This would empower researchers to compare their findings with the extensive dataset created here. Otherwise, one is left with a very limited set of exploitable data. 

      We have now made the feature data available on Zenodo (https://doi.org/10.5281/zenodo.12684118) as a matrix of feature summaries and individual skeleton timeseries data (the feature matrix makes it more straightforward to extract the data from particular mutants for reanalysis). We have also created a static html version of the heatmap in Figure 2 containing the entire behavioural feature set extracted by Tierpsy. This can be opened in a browser and zoomed for detailed inspection. Mousing over the heatmap shows the names of features at each position, making it easier to arrive at intuitive conclusions like ‘strain A is slow’ or ‘strain B is more curved’.

      Another technical limitation of the study is the use of single alleles. Large deletion alleles were generated by CRISPR/Cas9 gene editing. At first glance, this seems like a good idea because it limits the risk that background mutations, present in chemically-generated alleles, will affect behavioral parameters. However, these large deletions can also remove non-coding RNAs or other regulatory genetic elements, as found, for example, in introns. Therefore, it would be prudent to validate the behavioral effects by testing additional loss-of-function alleles produced through early stop codons or targeted deletion of key functional domains. 

      We have added a note in the main text on limitations of deletion alleles. We like the idea of making multiple alleles in future studies, especially in cases where a project is focussed on just one or a few genes.

      Recommendations for the authors

      Reviewer #1 (Recommendations For The Authors): 

      Note that none of the above suggestions or the one immediately below are considered mandatory. 

      One additional minor point: The dual implication of mevalonate perturbations for NALCN deficiencies is striking. At the same time, the mevalonate pathway is critical for embryo viability among other things, which prompts questions about how reproductive physiology is integrated in this screen approach. It appears that sterilization protocols are not used to prepare screen target animals, but it would be useful to know if there were a signature associated with drug-induced sterility that might help identify one potential common non-interesting outcome of compound treatments in general. In this work, the screen treatment is only 4 hours, which is probably too short to compromise reproduction, but as noted above, it is likely users would intend to expose test subjects for much longer than 4-hour periods. 

      This is an interesting point. In its current form our screen doesn’t assess reproductive physiology. This is something that we will consider in ongoing projects.

      Figures 

      Figure 1D might be omitted or moved to supplement. 

      We have removed 1D and moved Figure 1E to a standalone table (Table 1) to improve readability.

      In the Figure 2D key, it is hard to make out the size differences for prestim, bluelight, and poststim; more distinctive symbols should be used. 

      We have increased the size of the symbols so that the key is easier to read.

      Line 412 unc-25 should be in italics 

      Corrected.

      Reviewer #2 (Recommendations For The Authors): 

      Specific edits: 

      All of the errors below have been corrected.

      Line 47, "loss of function" should be hyphenated because it is a compound adjective that modifies mutations. 

      Line 50, "genetically-tractable" should not be hyphenated because it is not a compound adjective. It is an adverb-adjective pair. Line 102 has the same grammatical issue. 

      Line 85, "rare genetic diseases" do not "affect nervous system function". The disease might have deficits in this function, but the disease does not do anything to function. 

      Line 86, it should be mutations not mutants. Mutations are changes to DNA. Mutants are individuals with mutations. 

      Throughout, wild-type should be hyphenated when it is used as a compound adjective. 

      Figure 4, asterisks is spelled incorrectly. 

      Reviewer #3 (Recommendations For The Authors): 

      - As stated in the public review, the utility of the study is limited by the lack of access to the complete dataset. The wealth of data produced by the study is one of its major outputs. 

      We have made the data publicly available on Zenodo. We appreciate the request.

      - Describe the exact breakpoints of the different alleles, because it was not readily feasible to derive them from the gene fact sheets provided in the supplementary materials. 

      We have now provided the start position and total length of deletion for each gene in the gene fact sheets.

      - Figure 1C: what does "Genetic homology"/"sequence identity" refer to? How were these values calculated? 

      UNC-49 is clearly not 95% identical to vertebrate GABAR subunits at the protein level. 

      We have changed the axis label to “BLAST % Sequence Identity” to clarify that these values are calculated from BLAST sequence alignments on the WormBase and Alliance of Genome Resources webpages.

      - Figure 1E: The data presented in Figure 1E appears somewhat unreliable. For example, a cursory check showed: 

      (1) Wrong human ortholog: unc-49 is a GABA receptor, not a glycine receptor as indicated in the second column. 

      (2) Wrong disease association: dys-1 is not associated with Bardet-Biedl syndrome; overall the data indicated in the table does not seem to fully match the HPO database. 

      (3) Inconsistent disease association: why don't the avr-14 and glc-2 (and even unc-49) profiles overlap/coincide given that they present overlapping sets of human orthologs? 

      Thank you for catching this! We have corrected the gene names, which were mistakenly pasted. We have also made this a standalone table (Table 1) for improved readability.

      - Error in legend to Figure 4I: "with ciliopathies and N2" > "ciliopathies" should be "NALCN disease". 

      - Error at line 301: "Figures 2E-H" should be "Figures 4E-H". 

      Corrected.