1. Jun 2024
    1. operations manager.

      oversees the transformation of resources into goods and services; manages the supply chain; controls the delivery of raw materials

    2. tangible products

      can be seen and touched

    1. Author response:

      The following is the authors’ response to the original reviews.

      eLife assessment

      The authors report that optogenetic inhibition of hippocampal axon terminals in retrosplenial cortex impairs the performance of a delayed non-match to place task. The significance of findings elucidating the role of hippocampal projections to the retrosplenial cortex in memory and decision-making behaviors is important. However, the strength of evidence for the paper's claims is currently incomplete.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      This is a study on the role of the retrosplenial cortex (RSC) and the hippocampus in working memory. Working memory is a critical cognitive function that allows temporary retention of information for task execution. The RSC, which is functionally and anatomically connected to both primary sensory (especially visual) and higher cognitive areas, plays a key role in integrating spatial-temporal context and in goal-directed behaviors. However, the specific contributions of the RSC and the hippocampus in working memory-guided behaviors are not fully understood due to a lack of studies that experimentally disrupt the connection between these two regions during such behaviors.

      In this study, researchers employed eArch3.0 to silence hippocampal axon terminals in the RSC, aiming to explore the roles of these brain regions in working memory. Experiments were conducted where animals with silenced hippocampal axon terminals in the RSC performed a delayed non-match to place (DNMP) task. The results indicated that this manipulation impaired memory retrieval, leading to decreased performance and quicker decision-making in the animals. Notably, the authors observed that the effects of this impairment persisted beyond the light-activation period of the opsin, affecting up to three subsequent trials. They suggest that disrupting the hippocampal-RSC connection has a significant and lasting impact on working memory performance.

      Strengths:

      They conducted a study exploring the impact of direct hippocampal inputs into the RSC, a region involved in encoding spatial-temporal context and transferring contextual information, on spatial working memory tasks. Utilizing eArch3.0 expressed in hippocampal neurons via the viral vector AAV5-hSyn1-eArch3.0, they aimed to bilaterally silence hippocampal terminals located at the RSC in rats pre-trained in a DNMP task. They discovered that silencing hippocampal terminals in the RSC significantly decreased working memory performance in eArch+ animals, especially during task interleaving sessions (TI) that alternated between trials with and without light delivery. This effect persisted even in non-illuminated trials, indicating a lasting impact beyond the periods of direct manipulation. Additionally, they observed a decreased likelihood of correct responses following TI trials and an increased error rate in eArch+ animals, even after incorrect responses, suggesting an impairment in error-corrective behavior. This contrasted with baseline sessions where no light was delivered, and both eArch+ and control animals showed low error rates.

      Weaknesses:

      While I agree with the authors that the role of hippocampal inputs to the RSC in spatial working memory is understudied and merits further investigation, I find that the optogenetic experiment, a core part of this manuscript that includes viral injections, could be improved. The effects were rather subtle, rendering some of the results barely significant and possibly too weak to support major conclusions.

      We thank Reviewer#1 for carefully and critically reading our manuscript, and for the valuable comments provided. The judged “subtlety” of the effects stems from a perspective according to which a quantitatively lower effect bears less biological significance for cognition. We disagree with this perspective and find it rather reductive for several reasons.

      Once seen in the context of the animal’s ecology, subtle impairments can be life-threatening precisely because of their subtlety, leading the animal to confidently rely on a defective capacity, for such events as remembering the habitual location of a predator, or food source.

      Also, studies in animal cognition often undertake complete, rather than graded, suppression of a given mechanism (in the same sense as that of “knocking out” a gene that is relevant for behaviour), leading to a gravely, rather than gradually, impaired model system, to the point of not allowing a hypothetical causal link to be mechanistically revealed beyond its mere presence. This often hinders a thorough interpretation of the perturbed factor’s role. If a caricatural analogy is allowed, it would be as if we were to study the role of an animal’s legs by chopping them both off and observing the resulting behaviour.

      In our study we conclude that silencing HIPP inputs in RSC perturbs cognition enough to impair behaviour while not disabling the animal entirely, thus allowing behaviour to proceed and permitting our observation of graded, decreased (not absent) proficiency under optogenetic silencing. So rather than weak, we would say the results are statistically significant and biologically realistic.

      Additionally, no mechanistic investigation was conducted beyond referencing previous reports to interpret the core behavioral phenotypes.

      We fully agree with this being a weakness, as we wish we could have done more mechanistic studies to find out exactly what Arch activation is doing to HIPP-RSC transmission, which neurons are being affected, and perhaps in the future dissect its circuit determinants. We keep all these goals very much in mind and hope we can address them soon.

      Reviewer #2 (Public Review):

      The authors examine the impact of optogenetic inhibition of hippocampal axon terminals in the retrosplenial cortex (RSP) during the performance of a working memory T-maze task. Performance on a delayed non-match-to-place task was impaired by such inhibition. The authors also report that inhibition is associated with faster decision-making and that the effects of inhibition can be observed over several subsequent trials. The work seems reasonably well done and the role of hippocampal projections to retrosplenial cortex in memory and decision-making is very relevant to multiple fields. However, the work should be expanded in several ways before one can make firm conclusions on the role of this projection in memory and behavior.

      We thank Reviewer#2 for carefully and critically reading our manuscript, and for the valuable comments provided.

      (1) The work is very singular in its message and the experimentation. Further, the impact of the inhibition on behaviour is very moderate. In this sense, the results do not support the conclusion that the hippocampal projection to retrosplenial cortex is key to working memory in a navigational setting.

      As we have mentioned in response to Reviewer#1, the judged “very moderate” effect stems from a perspective according to which a quantitatively lower effect bears less biological significance for cognition, precluding its consideration as “key” for behaviour. We disagree with this perspective and find it rather reductive for several reasons. Once seen in the context of the animal’s ecology, quantitatively lower impairments in working memory are no less key for this cognitive capacity, and can be life-threatening precisely because of their subtlety, leading the animal to confidently rely on a defective capacity, for such events as remembering the habitual location of a predator, or food source. Furthermore, studies in animal cognition often undertake complete, rather than graded, suppression of a given mechanism (in the same sense as “knocking out” a gene that is relevant for behaviour), leading to a gravely, rather than gradually, impaired model system, to the point of not allowing a hypothetical causal link to be mechanistically revealed beyond its mere presence. This often hinders a thorough interpretation of its role.

      In our study we conclude that silencing HIPP inputs in RSC perturbs cognition enough to impair behaviour while not disabling the animal entirely, thus allowing behaviour to proceed and permitting our observation of graded, decreased (not absent) proficiency under optogenetic silencing. So rather than weak, we would say the results are statistically significant and biologically realistic.

      (2) There are no experiments examining other types of behavior or working memory. Given that the animals used in the studies could be put through a large number of different tasks, this is surprising. There is no control navigational task. There is no working memory test that is non-spatial. Such results should be presented in order to put the main finding in context.

      It is hard to gainsay this point. The more thorough and complete a behavioural characterization is, the more informative the study, from every angle you look at it. While we agree that other forms of WM would be quite interesting in this context, we also cannot ignore the fact that DNMP is widely tested as a WM task, one that is biologically plausible, sensitive to perturbations of the neural circuitry known to be at play therein, and fully accepted in the field. Faced with the impossibility of running further studies, for lack of additional funding and human resources, we chose to run this task.

      A control navigational task would, in our understanding, be used to assess whether silencing HIPP projections to RSC would affect (spatial?) navigation, rather than WM, thus explaining the observed impairment. To this we have the following to say: spatial navigation is a very basic cognitive function, one that relies on body orientation relative to spatial context, on keeping an updated representation of such spatial context (that is, as memory), and on guiding behaviour according to acquired knowledge about spatial context. Some of these functions are integral to spatial working memory; as such, they might indeed be affected.

      Dissecting the determinants of spatial WM is indeed an ongoing effort, one that was not the intention of the current study, but also one that we keep very much in mind and hope to address in the future.

      A non-spatial WM task would indeed vastly solidify our claims beyond spatial WM, onto WM. We have, for this reason, changed the title of the manuscript which now reads “spatial working memory”.

      (3) The actual impact of the inhibition on activity in RSP is not provided. While this may not be strictly necessary, it is relevant that the hippocampal projection to RSP includes, and is perhaps dominated by, inhibitory inputs. I wonder why the authors chose to manipulate hippocampal inputs to RSP when the subiculum stands as a much stronger source of afferents to RSP and has been shown to exhibit spatial and directional tuning of activity. The points here are that we cannot be sure what the manipulation is really accomplishing in terms of inhibiting RSP activity (perhaps this explains the moderate impact on behavior) and that the effect of inhibiting hippocampal inputs is not an effective means by which to study how RSP is responsive to inputs that reflect environmental locations.

      We fully agree that neural recordings addressing the effect of silencing on RSC neural activity are relevant. We do wish we could have provided more mechanistic studies, to find out exactly what Arch activation is doing to HIPP-RSC transmission, which neurons are being affected, and thus to dissect its circuit determinants. We keep all these goals very much in mind and hope we can address them soon. The subiculum, which we mention in the Introduction, is indeed a key player in this complex circuitry, one whose hypothetical influence is the subject of experimental studies which will certainly reveal many other key elements.

      (4) The impact of inhibition on trials subsequent to the trial during which optical stimulation was actually supplied seems trivial. The authors themselves point to evidence that activation of the hyperpolarizing proton pump is rather long-lasting in its action. Further, each sample-test trial pairing is independent of the prior or subsequent trials. This finding is presented as a major finding of the work, but would normally be relegated to supplemental data as an expected outcome given the dynamics of the pump when activated.

      We disagree that this finding is “trivial”, and object to the considerations of “normalcy”, which we are left wondering about.

      In the absence of neurophysiological experiments (for the reasons stated above) to address this interesting finding, we chose to interpret it in light of (the few) published observations, such being the logical course of action in scientific reporting, given the present circumstances.

      Evidence for such a prolonged effect in the context of behaviour is scarce (to our knowledge, only the study we cite in the manuscript). As such, it is highly relevant to report it and give it the prominence we do in our manuscript, rather than “relegating it to supplementary data”, as the reviewer considers “normal”.

      In the DNMP task the consecutive sample-test pairs are explicitly not independent, as they are part of the same behavioural session. This is illustrated by the simple phenomenon of learning, namely the intra-session learning curves, and the well-known behavioral trial-history effects. The brain does not simply erase such information during the ITI.

      (5) In the middle of the first paragraph of the discussion, the authors make reference to work showing RSP responses to "contextual information in egocentric and allocentric reference frames". The citations here are clearly deficient. How is the Nitzan 2020 paper at all relevant here?

      Nitzan 2020 reports the propagation of information from HIPP to CTX via SUB and RSC, thus providing a conduit for mnemonic information between the two structures, alternative to the one we target, and offering thorough information concerning the HIPP-RSC circuitry at play during behaviour.

      Alexander and Nitz 2015 precisely report the encoding, and conjunction, of two types of contextual information: internal (egocentric) and external (allocentric).

      The subsequent reference is indeed superfluous here.

      We thank the Reviewer#2 for calling our attention to the fact that references for this information are inadequate and lacking. We have now cited (Gill et al., 2011; Miller et al., 2019; Vedder et al., 2017) and refer readers to the review (Alexander et al., 2023)  for the purpose of illustrating the encoding of information in the two reference frames. In addition, we have substantially edited the Introduction and Discussion sections, and suppressed unnecessary passages.

      (6) The manuscript is deficient in referencing and discussing data from the Smith laboratory that is similar. The discussion reads mainly like a repeat of the results section.

      Please see above. We thank Reviewer#2 for this comment, we have now re-written the Discussion such that it is less of a summary of the Results and more focused on their implications and future directions.

      Response to recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      Major

      Line 101: Even with the tapered lambda fibre optic stub, if the fibre optics were longitudinally staggered by 2 millimetres, they would deliver light to diagonal regions in the horizontal plane rather than covering the full length of the RSC. Is this staggering pattern randomized or fixed? Additionally, Figure 1C is a bit misleading, as the light distribution pattern from the tapered fibre optic is likely to be more concentrated near the surface of the fibre, rather than spreading widely in a large spherical pattern.

      The staggering is fixed. The elliptical (not spherical) contour in Fig 1C is not meant to convey any quantitative information, but rather to visually orient the reader towards the directions into which light will likely propagate, the effects of which we do not attempt to estimate here. We have made this contour smaller.

      Line 119: The authors demonstrate the viral expression pattern of a representative animal and the overall expression patterns of all other animals in Figure 1 and the Supplementary Figures. However, numerous cases in the Supplementary Figures exhibit viral leakage and strong expression in adjacent cortical and thalamic areas. Although there is a magnified view of the RSC's expression pattern in Figure 1, the authors should show the same in the supplementary data as well. Additionally, the degree of viral expression in the hippocampal subregions varies substantially across animals. This variation is concerning and impacts the interpretation of the results.

      The viral construct was injected in the HIPP at coordinates based on our previous work (Ferreira-Fernandes et al., 2019), wherein injections of a similar vector in mid-dorsal HIPP resulted in widespread expression throughout the medial mesocortex AP extent, RSC through CG, as well as other areas in which HIPP establishes synapses. These were studied in detail then, by estimating the density of axon terminals. In the present work we did not acquire high-mag images of all slices, since acquiring them was too expensive, and we had this information from the study above. Still, we have now added further examples of high-mag images taken from eArch and CTRL animals.

      We believe it is important here to mention the fact that the virus we use, AAV5, only travels anterograde and is static (i.e. it does not travel trans-synaptically).

      Variations in viral expression are to be expected even if injections happen in the exact same way. It is crucial, then, that fibre positioning is constant across animals, to guarantee that its relationship with viral expression is consistent, and to render any off-target expression of the viral construct irrelevant. We have ascertained this condition post-mortem in all our animals.

      Line 124: Another point regarding the viral expressions and optical fibre implants used to inhibit the HIPP-RSC pathway is that the RSC and HIPP extend substantially along the anterior-posterior axis. The authors should demonstrate how the viral expression is distributed along this axis and indicate where the tip of the tapered optical fibre ended by marking it in the histological images. This information is crucial to confirm the authors' claim that the hippocampal projection terminals were indeed modulated by optical light. Also, the manuscript would benefit from details about the power/duration and/or modulation of the light used.

      In the panels of both Figures 1 and S1 we can clearly see the tracks formed by the fibres. This provides examples of such dual-angle placement vis-à-vis the expression of the construct, demonstrating that the former is fully targeted towards the latter. We have added markers to highlight these tracks and an example of a “full” track in figure S1. We did not have animals deviating from this relative positioning to any significant extent. The methods section mentions illumination power as 240mA, and we have now added the estimated illumination time as well.

      Line 141: The authors should include data on task performance during learning and baseline sessions for each animal, to demonstrate that they fully grasped the task rules and that achieving a 75% performance ratio was sufficient.

      DNMP is a standard WM task used for many decades, in which animals reach performances above 75% in 4-8 sessions. We have used it extensively, and never saw any deviations from this learning rate and curve. We ran daily sessions until animals reached 75%, and thereafter until they maintained this performance, or above, for three consecutive sessions (the data points we show). We saw no deviations from what is published, nor from our own extensive experience, and hence are fully confident that all animals included in this manuscript grasped the task rules.

      Line 146: While the study focused on inhibiting inputs during the test run (retrieval phase), it would be beneficial to also inhibit inputs during the sample run (encoding phase) and the delay period. This would help confirm whether the silencing affects only working memory retrieval, or if it also impacts encoding and maintenance.

      We agree, it would be very interesting to determine if there are any effects of silencing HIPP RSC terminals during Sample. However, since there is a limit to the number of trials per session, and to the total number of sessions, we could not run the three manipulations within each session of our experimental design, as that would lower the number of trials per condition to an extent that would affect statistical power. Silencing HIPP RSC terminals during Sample would best be a separate experiment, asking a different question, and perhaps within an experimental design distinct from the one envisioned.

      A very important point here relates to the fact that the effects of optogenetic manipulation are not limited to the illumination epoch; in fact they extend far beyond it, onto the 3rd trial post-illumination. The insertion of Sample-illuminated trials interleaved in the same session would fundamentally affect the interpretation of experimental results, as we could not attribute lower performances to the effects in either or both manipulated epochs.

      Line 225: Figure 5 illustrates that silencing the inputs results in an extended impairment of working memory performance. However, it's unclear if there are any behavioural changes during the sample run. The inhibition could potentially affect encoding in the subsequent sample run, considering the inter-trial interval (ITI) is only 20 seconds.

      From the observation of behaviour and the analysis of our data, we saw no overt “behavioural changes during the sample run”, as latencies and speeds were essentially unchanged.

      If what is meant by your comment is the effect of optogenetic manipulation being protracted from the Test towards the Sample epoch, we find this unlikely. Conservatively, we estimate the peak of our optogenetic manipulation to occur around the time light is delivered, the Test phase, rather than 20-30 secs later.

      In theory, any effect of optogenetic silencing of HIPP terminals in RSC can cause disturbances in encoding (the Sample epoch), in the ITI itself, and in the epoch in which mnemonic information retrieved from the Sample epoch is confronted with the contextual information present during Test, leading to a decision. This is regardless of the illumination epoch, and even if the effect of optogenetic manipulation is not prolonged in time.

      In our experiments we specifically target the Test epoch, and the magnitude of the neurophysiological effects most likely decays over time, as manifested in the reported decaying nature of the manipulation mechanism and in our observed decrease of behavioural proficiency across subsequent trials 1-4. We are therefore convinced that a conservative interpretation is that our major effect is concentrated in the epoch in which we deliver light, the Test epoch, with its consequences (possibly related to short-term plasticity events taking place within the HIPP-RSC neural circuit) extending further in time.

      Line 410: The methods section on the surgical procedure could be clearer, particularly regarding the coordinates for microinjection and fibre implantation. A more precise description would aid reader comprehension.

      The now-reported injection and implantation coordinates include the numbers corresponding to the distances, in mm, from Bregma to the targets, in the three stereotaxic dimensions considered: antero-posterior, medial-lateral left and right, and dorso-ventral, as well as the angle at which the fibres were positioned. We have added labels to the figures to highlight the fibreoptic track locations. We will be happy to provide further details as deemed necessary.

      Line 461: It would be helpful to know if each animal displayed a preference for the left or right side. Including a description or figure showing that the performance ratio exceeded 75% in both left and right trials would provide a more comprehensive understanding of the animals' behaviour.

      In the DNMP, an extensively used and documented WM task, it is an absolute pre-condition that no animals are biased to either side. We did not observe such a bias in any of our candidate animals, nor would we have used any animal exhibiting such a preference.

      Minor

      Line 25: In the INTRODUCTION section, the authors introduce ego-centric and allocentric variables in the RSC. However, if they intend to discuss this feature, there is no supporting data for ego-centric or allocentric variables in the Results section.

      We agree. The extent of the discussion of ego- vs allocentric variables in our manuscript might venture a bit outside the main subject. It was included to provide wider context to our reporting of the data, considering that spatial working memory is indeed one instance in which egocentric- and allocentric-referenced cognitive mechanisms confront each other, and one in which silencing the HIPP input to a cortical region involved therein would likely disturb ensuing computations. We have now substantially edited the manuscript’s Introduction and Discussion sections, namely toning down this aspect.

      Line 125: In the section title, DNMT -> DNMP obviously.

      We have corrected this passage.

      Figures: The quality of the figure panels does not meet the expected standards. For example, scale bars are missing in many panels (e.g., Figure 1A bottom, 1B, 1C, S1), figure labels are misaligned (as seen in Figure 3A-B compared to 3C, same with Figure 5), and there is inconsistency in color schemes (e.g., Figure 3C versus Figure 6, where 'Error' versus 'Correct' is depicted using green versus blue, respectively).

      We have now corrected these inconsistencies and mistakes.

    2. eLife assessment

      The authors report that optogenetic inhibition of hippocampal axon terminals in retrosplenial cortex impairs the performance of a delayed non-match to place task. Elucidating the role of hippocampal projections to the retrosplenial cortex in memory and decision-making behaviors is important. However, the strength of evidence for the paper's claims is incomplete.

    3. Reviewer #2 (Public Review):

      The authors examine the impact of optogenetic inhibition of hippocampal axon terminals in the retrosplenial cortex (RSP) during the performance of a working memory T-maze task. Performance on a delayed non-match-to-place task was impaired by such inhibition. The authors also report that inhibition is associated with faster decision-making and that the effects of inhibition can be observed over several subsequent trials. The work seems reasonably well done and the role of hippocampal projections to retrosplenial cortex in memory and decision-making is very relevant to multiple fields. However, the work should be expanded in several ways before one can make firm conclusions on the role of this projection in memory and behavior.

      Comments on revised version:

      The authors have provided their comments on the concerns voiced in my first review. I remain of the opinion that the experiments do not extend beyond determining whether disruption of hippocampal to retrosplenial cortex connections impacts spatial working memory. Given the restricted level of inquiry and the very moderate effect of the manipulation on memory, the work, in my opinion, does not provide significant insight into the processes of spatial working memory nor the function of the hippocampal to retrosplenial cortex connection.

    1. However, if an atom is added to B that does not exist in A, then it will be in atomspace B only. If later it is added to A, then two independent copies shall exist: one in B and one in A. It's not quite clear if this is the 'right' thing to do; but de facto, this is what happens.

      Re-write refs to Atom X in B to point to the newly added same Atom X in A?
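
      A minimal Python sketch of the copy-on-add semantics described above (a toy model for illustration only, not the real OpenCog AtomSpace API; the class names and methods here are hypothetical):

      ```python
      # Toy model: each atomspace keeps its own copy of an atom added under the same name.
      class Atom:
          def __init__(self, name):
              self.name = name

      class AtomSpace:
          def __init__(self):
              self._atoms = {}              # name -> Atom, local to this atomspace

          def add(self, name):
              # Adding an atom that already exists here is a no-op;
              # adding the "same" atom to another atomspace creates a separate copy.
              if name not in self._atoms:
                  self._atoms[name] = Atom(name)
              return self._atoms[name]

      a, b = AtomSpace(), AtomSpace()
      x_in_b = b.add("X")                   # X exists in atomspace B only
      x_in_a = a.add("X")                   # later added to A: a second, independent copy
      assert x_in_a is not x_in_b           # two copies, one per atomspace, not one shared atom
      ```

      Under this model, the open question above amounts to deciding whether B should rewrite its references to point at A's copy once it appears, or keep its own.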

    1. eLife assessment

      The paper reports the important discovery that the mouse dorsal inferior colliculus, an auditory midbrain area, encodes sound location. The evidence supporting the claims is solid, although how the encoding of sound source position in this area relates to localization behaviors in engaged mice remains unclear. The observations described should be of interest to auditory researchers studying the neural mechanisms of sound localization.

    2. Reviewer #1 (Public Review):

      Summary: In this study, the authors address whether the dorsal nucleus of the inferior colliculus (DCIC) in mice encodes sound source location within the front horizontal plane (i.e., azimuth). They do this using volumetric two-photon Ca2+ imaging and high-density silicon probes (Neuropixels) to collect single-unit data. Such recordings are beneficial because they allow large populations of simultaneous neural data to be collected. Their main results and the claims about those results are the following:

      1) DCIC single-unit responses have high trial-to-trial variability (i.e., neural noise);

      2) approximately 32% to 40% of DCIC single units have responses that are sensitive to sound source azimuth;

      3) single-trial population responses (i.e., the joint response across all sampled single units in an animal) encode sound source azimuth "effectively" (as stated in title) in that localization decoding error matches average mouse discrimination thresholds;

      4) DCIC can encode sound source azimuth in a similar format to that in the central nucleus of the inferior colliculus (as stated in Abstract);

      5) evidence of noise correlation between pairs of neurons exists;

      and 6) noise correlations between responses of neurons help reduce population decoding error.

      While simultaneous recordings are not necessary to demonstrate results #1, #2, and #4, they are necessary to demonstrate results #3, #5, and #6.

      Strengths:

      - Important research question to all researchers interested in sensory coding in the nervous system.
      - State-of-the-art data collection: volumetric two-photon Ca2+ imaging and extracellular recording using high-density probes. Large neuronal data sets.
      - Confirmation of imaging results (lower temporal resolution) with more traditional microelectrode results (higher temporal resolution).
      - Clear and appropriate explanation of surgical and electrophysiological methods. I cannot comment on the appropriateness of the imaging methods.

      Strength of evidence for claims of the study:

      1) DCIC single-unit responses have high trial-to-trial variability -

      The authors' data clearly shows this.

      2) Approximately 32% to 40% of DCIC single units have responses that are sensitive to sound source azimuth -

      The sensitivity of each neuron's response to sound source azimuth was tested with a Kruskal-Wallis test, which is appropriate since response distributions were not normal. Using this statistical test, only 8% of neurons (median for imaging data) were found to be sensitive to azimuth, and the authors noted this was not significantly different than the false positive rate. The Kruskal-Wallis test was not performed on electrophysiological data. The authors suggested that low numbers of azimuth-sensitive units resulting from the statistical analysis may be due to the combination of high neural noise and relatively low number of trials, which would reduce statistical power of the test. This may be true, but if single-unit responses were moderately or strongly sensitive to azimuth, one would expect them to pass the test even with relatively low statistical power. At best, if their statistical test missed some azimuth-sensitive units, they were likely only weakly sensitive to azimuth. The authors went on to perform a second test of azimuth sensitivity-a chi-squared test-and found 32% (imaging) and 40% (e-phys) of single units to have statistically significant sensitivity. This feels a bit like fishing for a lower p-value. The Kruskal-Wallis test should have been left as the only analysis. Moreover, the use of a chi-squared test is questionable because it is meant to be used between two categorical variables, and neural response had to be binned before applying the test.
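
      For concreteness, the per-unit sensitivity screen discussed above could look like the following minimal sketch (not the authors' analysis code; the data layout, `responses[unit][azimuth] -> list of single-trial responses`, is assumed):

      ```python
      # Fraction of units whose responses differ across azimuths (Kruskal-Wallis screen).
      from scipy.stats import kruskal

      def fraction_azimuth_sensitive(responses, alpha=0.05):
          """responses: {unit_id: {azimuth: [single-trial responses]}} (assumed layout)."""
          n_significant = 0
          for by_azimuth in responses.values():
              groups = list(by_azimuth.values())   # one group of trials per azimuth
              _, p = kruskal(*groups)              # nonparametric one-way test across azimuths
              n_significant += (p < alpha)
          # Compare the returned fraction against the expected false-positive rate (alpha),
          # as in the comparison described above.
          return n_significant / len(responses)
      ```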

      3) Single-trial population responses encode sound source azimuth "effectively" in that localization decoding error matches average mouse discrimination thresholds -

      If only one neuron in a population had responses that were sensitive to azimuth, we would expect that decoding azimuth from observation of that one neuron's response would perform better than chance. By observing the responses of more than one neuron (if more than one were sensitive to azimuth), we would expect performance to increase. The authors found that decoding from the whole population response was no better than chance. They argue (reasonably) that this is because of overfitting of the decoder model-too few trials used to fit too many parameters-and provide evidence from decoding combined with principal components analysis which suggests that overfitting is occurring. What is troubling is the performance of the decoder when using only a handful of "top-ranked" neurons (in terms of azimuth sensitivity) (Fig. 4F and G). Decoder performance seems to increase when going from one to two neurons, then decreases when going from two to three neurons, and doesn't get much better for more neurons than for one neuron alone. It seems likely there is more information about azimuth in the population response, but decoder performance is not able to capture it because spike count distributions in the decoder model are not being accurately estimated due to too few stimulus trials (14, on average). In other words, it seems likely that decoder performance is underestimating the ability of the DCIC population to encode sound source azimuth.

      To get a sense of how effective a neural population is at coding a particular stimulus parameter, it is useful to compare population decoder performance to psychophysical performance. Unfortunately, mouse behavioral localization data do not exist. Therefore, the authors compare decoder error to mouse left-right discrimination thresholds published previously by a different lab. However, this comparison is inappropriate because the decoder and the mice were performing different perceptual tasks. The decoder is classifying sound sources to 1 of 13 locations from left to right, whereas the mice were discriminating between left or right sources centered around zero degrees. The errors in these two tasks represent different things. The two data sets may potentially be more accurately compared by extracting information from the confusion matrices of population decoder performance. For example, when the stimulus was at -30 deg, how often did the decoder classify the stimulus to a lefthand azimuth? Likewise, when the stimulus was +30 deg, how often did the decoder classify the stimulus to a righthand azimuth?
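
      The confusion-matrix comparison suggested above can be sketched as follows (hypothetical layout: `confusion[i, j]` counts trials with true azimuth `azimuths[i]` decoded as `azimuths[j]`, azimuths given as signed degrees; not the authors' code):

      ```python
      # How often was a stimulus at `true_az` decoded to any azimuth on its own side?
      import numpy as np

      def same_side_rate(confusion, azimuths, true_az):
          confusion = np.asarray(confusion, dtype=float)
          row = confusion[azimuths.index(true_az)]
          same_side = [j for j, az in enumerate(azimuths)
                       if np.sign(az) == np.sign(true_az) and az != 0]
          return row[same_side].sum() / row.sum()

      # e.g. same_side_rate(confusion, azimuths, -30) gives the rate at which a -30 deg
      # stimulus was decoded to any left-hand azimuth, which could then be compared
      # with the mice's left-right discrimination performance.
      ```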

      4) DCIC can encode sound source azimuth in a similar format to that in the central nucleus of the inferior colliculus -

      It is unclear what exactly the authors mean by this statement in the Abstract. There are major differences in the encoding of azimuth between the two neighboring brain areas: a large majority of neurons in the CNIC are sensitive to azimuth (and strongly so), whereas the present study shows a minority of azimuth-sensitive neurons in the DCIC. Furthermore, CNIC neurons fire reliably to sound stimuli (low neural noise), whereas the present study shows that DCIC neurons fire more erratically (high neural noise).

      5) Evidence of noise correlation between pairs of neurons exists -

      The authors' data and analyses seem appropriate and sufficient to justify this claim.

      6) Noise correlations between responses of neurons help reduce population decoding error -

      The authors show convincing analysis that performance of their decoder was higher when simultaneously measured responses were tested (which include noise correlation) than when scrambled-trial responses were tested (eliminating noise correlation). This makes it seem likely that noise correlation in the responses improved decoder performance. The authors mention that the naïve Bayesian classifier was used as their decoder for computational efficiency, presumably because it assumes no noise correlation and, therefore, assumes responses of individual neurons are independent of each other across trials to the same stimulus. The use of a decoder that assumes independence seems key here in testing the hypothesis that noise correlation contains information about sound source azimuth. The logic of using this decoder could be more clearly spelled out to the reader. For example, if the null hypothesis is that noise correlations do not carry azimuth information, then a decoder that assumes independence should perform the same whether population responses are simultaneous or scrambled. The authors' analysis showing a difference in performance between these two cases provides evidence against this null hypothesis.
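
      The null-hypothesis logic described above can be made concrete with a small sketch: an independence-assuming decoder (here a Gaussian naive Bayes classifier) is scored on simultaneous versus within-class trial-shuffled responses. The data below are synthetic and the layout (`X[trial, unit]`, `y[trial]`) is assumed; this is not the authors' decoder.

      ```python
      # Shuffle test: does removing trial-by-trial (noise) correlations change decoding?
      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      # Synthetic stand-in data: 13 azimuths x 14 trials, 30 units (hypothetical sizes).
      y = np.repeat(np.arange(13), 14)
      X = rng.poisson(lam=2.0 + 0.2 * y[:, None], size=(y.size, 30)).astype(float)

      def shuffle_within_class(X, y, rng):
          # Permute trials independently per unit within each azimuth class, destroying
          # noise correlations while preserving each unit's per-azimuth distribution.
          Xs = X.copy()
          for label in np.unique(y):
              idx = np.where(y == label)[0]
              for unit in range(X.shape[1]):
                  Xs[idx, unit] = X[rng.permutation(idx), unit]
          return Xs

      score_simultaneous = cross_val_score(GaussianNB(), X, y, cv=5).mean()
      score_shuffled = cross_val_score(GaussianNB(), shuffle_within_class(X, y, rng), y, cv=5).mean()
      # Under the null hypothesis the two scores should not differ systematically; a reliable
      # drop after shuffling is evidence that noise correlations carry azimuth information.
      print(score_simultaneous, score_shuffled)
      ```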

      Minor weakness:

      - Most studies of neural encoding of sound source azimuth are done in a noise-free environment, but the experimental setup in the present study had substantial background noise. This complicates comparison of the azimuth tuning results in this study to those of other studies. One is left wondering if azimuth sensitivity would have been greater in the absence of background noise, particularly for the imaging data where the signal was only about 12 dB above the noise. The description of the noise level and signal + noise level in the Methods should be made clearer. Mice hear from about 2.5 - 80 kHz, so it is important to know the noise level within this band as well as specifically within the band overlapping with the signal.

    3. Reviewer #2 (Public Review):

      In the present study, Boffi et al. investigate the manner in which the dorsal cortex of the inferior colliculus (DCIC), an auditory midbrain area, encodes sound location azimuth in awake, passively listening mice. By employing volumetric calcium imaging (scanned temporal focusing or s-TeFo), complemented with high-density electrode electrophysiological recordings (neuropixels probes), they show that sound-evoked responses are exquisitely noisy, with only a small portion of neurons (units) exhibiting spatial sensitivity. Nevertheless, a naïve Bayesian classifier was able to predict the presented azimuth based on the responses from small populations of these spatially sensitive units. A portion of the spatial information was provided by correlated trial-to-trial response variability between individual units (noise correlations). The study presents a novel characterization of spatial auditory coding in a non-canonical structure, representing a noteworthy contribution specifically to the auditory field and generally to systems neuroscience, due to its implementation of state-of-the-art techniques in an experimentally challenging brain region. However, nuances in the calcium imaging dataset and the naïve Bayesian classifier warrant caution when interpreting some of the results.

      Strengths:

      The primary strength of the study lies in its methodological achievements, which allowed the authors to collect a comprehensive and novel dataset. While the DCIC is a dorsal structure, it extends up to a millimetre in depth, making it optically challenging to access in its entirety. It is also more highly myelinated and vascularised compared to e.g., the cerebral cortex, compounding the problem. The authors successfully overcame these challenges and present an impressive volumetric calcium imaging dataset. Furthermore, they corroborated this dataset with electrophysiological recordings, which produced overlapping results. This methodological combination ameliorates the natural concerns that arise from inferring neuronal activity from calcium signals alone, which are in essence an indirect measurement thereof.

      Another strength of the study is its interdisciplinary relevance. For the auditory field, it represents a significant contribution to the question of how auditory space is represented in the mammalian brain. "Space" per se is not mapped onto the basilar membrane of the cochlea and must be computed entirely within the brain. For azimuth, this requires the comparison of minuscule differences in the timing and intensity of sounds arriving at each ear. It is now generally thought that azimuth is initially encoded in two, opposing hemispheric channels, but the extent to which this initial arrangement is maintained throughout the auditory system remains an open question. The authors observe only a slight contralateral bias in their data, suggesting that sound source azimuth in the DCIC is encoded in a more nuanced manner compared to earlier processing stages of the auditory hindbrain. This is interesting, because the DCIC is also known to be an auditory structure that receives more descending inputs from the cortex.

      Systems neuroscience continues to strive for the perfection of imaging novel, less accessible brain regions. Volumetric calcium imaging is a promising emerging technique, allowing the simultaneous measurement of large populations of neurons in three dimensions. But this necessitates corroboration with other methods, such as electrophysiological recordings, which the authors achieve. The dataset moreover highlights the distinctive characteristics of neuronal auditory representations in the brain. Its signals can be exceptionally sparse and noisy, which provide an additional layer of complexity in the processing and analysis of such datasets. This will be undoubtedly useful for future studies of other less accessible structures with sparse responsiveness.

      Weaknesses:

      Although the primary finding that small populations of neurons carry enough spatial information for a naïve Bayesian classifier to reasonably decode the presented stimulus is not called into question, certain idiosyncrasies, in particular the calcium imaging dataset and model, complicate specific interpretations of the model output, and the readership is urged to interpret these aspects of the study's conclusions with caution.

      I remain in favour of volumetric calcium imaging as a suitable technique for the study, but the presently constrained spatial resolution is insufficient to unequivocally identify regions of interest as cell bodies (these are instead referred to as "units", akin to those of electrophysiological recordings). It remains possible that the imaging set is inadvertently influenced by non-somatic structures (including neuropil), which could report neuronal activity differently than cell bodies. Due to the lack of a comprehensive ground-truth comparison in this regard (which to my knowledge is impossible to achieve with current technology), it is difficult to imagine how many informative such units might have been missed because their signals were influenced by spurious, non-somatic signals, which could have subsequently misled the models. The authors reference the original Nature Methods article (Prevedel et al., 2016) throughout the manuscript, presumably in order to avoid having to repeat previously published experimental metrics. But the DCIC is neither the cortex nor hippocampus (for which the method was originally developed) and may not have the same light scattering properties (not to mention neuronal noise levels). Although the corroborative electrophysiology data largely alleviates these concerns for this particular study, the readership should be cognisant of such caveats, in particular those who are interested in implementing the technique for their own research.

      A related technical limitation of the calcium imaging dataset is the relatively low number of trials (14) given the inherently high level of noise (both neuronal and imaging). Volumetric calcium imaging, while offering a uniquely expansive field of view, requires relatively high average excitation laser power (in this case nearly 200 mW), a level of exposure the authors may have wanted to minimise by maintaining a low number of repetitions, but I yield to them to explain. Calcium imaging is also inherently slow, requiring relatively long inter-stimulus intervals (in this case 5 s). This unfortunately renders any model designed to predict a stimulus (in this case sound azimuth) from particularly noisy population neuronal data like these as highly prone to overfitting, to which the authors correctly admit after a model trained on the entire raw dataset failed to perform significantly above chance level. This prompted them to feed the model only with data from neurons with the highest spatial sensitivity. This ultimately produced reasonable performance (and was implemented throughout the rest of the study), but it remains possible that if the model was fed with more repetitions of imaging data, its performance would have been more stable across the number of units used to train it. (All models trained with imaging data eventually failed to converge.) However, I also see these limitations as an opportunity to improve the technology further, which I reiterate will be generally important for volume imaging of other sparse or noisy calcium signals in the brain.

      Transitioning to the naïve Bayesian classifier itself, I first openly ask the authors to justify their choice of this specific model. There are countless types of classifiers for these data, each with their own pros and cons. Did they actually try other models (such as support vector machines), which ultimately failed? If so, these negative results (even if mentioned en passant) would be extremely valuable to the community, in my view. I ask this specifically because different methods assume correspondingly different statistical properties of the input data, and to my knowledge naïve Bayesian classifiers assume that predictors (neuronal responses) are independent within a class (azimuth). As the authors show that noise correlations are informative in predicting azimuth, I wonder why they chose a model that doesn't take advantage of these statistical regularities. It could be because of technical considerations (they mention computing efficiency), but I am left generally uncertain about the specific logic that was used to guide the authors through their analytical journey.

      That aside, there remain other peculiarities in model performance that warrant further investigation. For example, what spurious features (or lack of informative features) in these additional units prevented the models of imaging data from converging? In an orthogonal question, did the most spatially sensitive units share any detectable tuning features? A different model trained with electrophysiology data in contrast did not collapse in the range of top-ranked units plotted. Did this model collapse at some point after adding enough units, and how well did that correlate with the model for the imaging data? How well did the form (and diversity) of the spatial tuning functions as recorded with electrophysiology resemble their calcium imaging counterparts? These fundamental questions could be addressed with more basic, but transparent analyses of the data (e.g., the diversity of spatial tuning functions of their recorded units across the population). Even if the model extracts features that are not obvious to the human eye in traditional visualisations, I would still find this interesting.

      Finally, the readership is encouraged to interpret certain statements by the authors in the current version conservatively. How the brain ultimately extracts spatial neuronal data for perception is anyone's guess, but it is important to remember that this study only shows that a naïve Bayesian classifier could decode this information, and it remains entirely unclear whether the brain does this as well. For example, the model is able to achieve a prediction error that corresponds to the psychophysical threshold in mice performing a discrimination task (~30°). Although this is an interesting coincidental observation, it does not mean that the two metrics are necessarily related. The authors correctly do not explicitly claim this, but the manner in which the prose flows may lead a non-expert into drawing that conclusion. Moreover, the concept of redundancy (of spatial information carried by units throughout the DCIC) is difficult for me to disentangle. One interpretation of this formulation could be that there are non-overlapping populations of neurons distributed across the DCIC that each could predict azimuth independently of each other, which is unlikely what the authors meant. If the authors meant generally that multiple neurons in the DCIC carry sufficient spatial information, then a single neuron would have been able to predict sound source azimuth, which was not the case. I have the feeling that they actually mean "complementary", but I leave it to the authors to clarify my confusion, should they wish.

      In summary, the present study represents a significant body of work that contributes substantially to the field of spatial auditory coding and systems neuroscience. However, limitations of the imaging dataset and model as applied in the study muddles concrete conclusions about how the DCIC precisely encodes sound source azimuth and even more so to sound localisation in a behaving animal. Nevertheless, it presents a novel and unique dataset, which, regardless of secondary interpretation, corroborates the general notion that auditory space is encoded in an extraordinarily complex manner in the mammalian brain.

    4. Reviewer #3 (Public Review):

      Summary: Boffi and colleagues sought to quantify the single-trial, azimuthal information in the dorsal cortex of the inferior colliculus (DCIC), a relatively understudied subnucleus of the auditory midbrain. They used two complementary recording methods while mice passively listened to sounds at different locations: a large volume but slow sampling calcium-imaging method, and a smaller volume but temporally precise electrophysiology method. They found that neurons in the DCIC were variable in their activity, unreliably responding to sound presentation and responding during inter-sound intervals. Boffi and colleagues used a naïve Bayesian decoder to determine if the DCIC population encoded sound location on a single trial. The decoder failed to classify sound location better than chance when using the raw single-trial population response but performed significantly better than chance when using intermediate principal components of the population response. In line with this, when the most azimuth dependent neurons were used to decode azimuthal position, the decoder performed equivalently to the azimuthal localization abilities of mice. The top azimuthal units were not clustered in the DCIC, possessed a contralateral bias in response, and were correlated in their variability (e.g., positive noise correlations). Interestingly, when these noise correlations were perturbed by inter-trial shuffling, decoding performance decreased. Although Boffi and colleagues display that azimuthal information can be extracted from DCIC responses, it remains unclear to what degree this information is used and what role noise correlations play in azimuthal encoding.

      Strengths: The authors should be commended for collection of this dataset. When done in isolation (which is typical), calcium imaging and linear array recordings have intrinsic weaknesses. However, those weaknesses are alleviated when done in conjunction with one another - especially when the data largely recapitulates the findings of the other recording methodology. In addition to the video of the head during the calcium imaging, this data set is extremely rich and will be of use to those interested in the information available in the DCIC, an understudied but likely important subnucleus in the auditory midbrain.

      The DCIC neural responses are complex; the units unreliably respond to sound onset, and at the very least respond to some unknown input or internal state (e.g., large inter-sound interval responses). The authors do a decent job in wrangling these complex responses: using interpretable decoders to extract information available from population responses.

      Weaknesses:

      The authors observe that neurons with the most azimuthal sensitivity within the DCIC are positively correlated, but they use a Naïve Bayesian decoder which assumes independence between units. Although this is a bit strange given their observation that some of the recorded units are correlated, it is unlikely to be a critical flaw. At one point the authors reduce the dimensionality of their data through PCA and use the loadings onto these components in their decoder. PCA incorporates the correlational structure when finding the principal components and constrains these components to be orthogonal and uncorrelated. This should alleviate some of the concern regarding the use of the naïve Bayesian decoder because the projections onto the different components are independent. Nevertheless, the decoding results are a bit strange, likely because there is not much linearly decodable azimuth information in the DCIC responses. Raw population responses failed to provide sufficient information concerning azimuth for the decoder to perform better than chance. Additionally, it only performed better than chance when certain principal components or top ranked units contributed to the decoder but not as more components or units were added. So, although there does appear to be some azimuthal information in the recorded DCIC populations - it is somewhat difficult to extract and likely not an 'effective' encoding of sound localization as their title suggests.
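
      The PCA-then-decode step described above can be sketched as a simple pipeline (synthetic data with hypothetical sizes; not the authors' code). The PCA projections are uncorrelated across components, which sits more comfortably with the naive Bayes independence assumption.

      ```python
      # Decode azimuth from the leading principal components of the population response.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.naive_bayes import GaussianNB
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      y = np.repeat(np.arange(13), 14)                      # 13 azimuths x 14 trials (assumed)
      X = rng.poisson(lam=2.0 + 0.2 * y[:, None],
                      size=(y.size, 100)).astype(float)     # 100 units (assumed)

      decoder = make_pipeline(PCA(n_components=10), GaussianNB())
      print(cross_val_score(decoder, X, y, cv=5).mean())    # cross-validated accuracy
      ```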

      Although this is quite a worthwhile dataset, the authors present relatively little about the characteristics of the units they've recorded. This may be due to the high variance in responses seen in their population. Nevertheless, the authors note that units do not respond on every trial but do not report what percentage of trials fail to evoke a response. Is it that neurons are noisy because they do not respond on every trial or is it also that when they do respond they have variable response distributions? It would be nice to gain some insight into the heterogeneity of the responses. Additionally, is there any clustering at all in response profiles or is each neuron they recorded in the DCIC unique? They also only report the noise correlations for their top ranked units, but it is possible that the noise correlations in the rest of the population are different. It would also be worth digging into the noise correlations more - are units positively correlated because they respond together (e.g., if unit x responds on trial 1 so does unit y) or are they also modulated around their mean rates on similar trials (e.g., unit x and y respond and both are responding more than their mean response rate). A large portion of trials with no response can occlude noise correlations. More transparency around the response properties of these populations would be welcome.

      It is largely unclear what the DCIC is encoding. Although the authors are interested in azimuth, sound location seems to be only a small part of DCIC responses. The authors report responses during inter-sound intervals and unreliable sound-evoked responses. Although they have video of the head during recording, we only see a correlation to snout and ear movements (which are peculiar since in the example shown it seems the head movements predict the sound presentation). Additional correlates could be eye movements or pupil size. Eye movements are of particular interest due to their known interaction with IC responses - especially if the DCIC encodes sound location in relation to eye position instead of head position (though much of eye-position-IC work was done in primates and not rodents). Alternatively, much of the population may only encode sound location if an animal is engaged in a localization task. Ideally, the authors could perform more substantive analyses to determine if this population is truly noisy or if the DCIC is integrating un-analyzed signals.

      Although this critique is ubiquitous among decoding papers in the absence of behavioral or causal perturbations, it is unclear what role, if any, the decoded information plays in neuronal computations. That the decoder performs above chance means that there is some extractable information concerning sound azimuth, but not whether it is functional. This information may just be epiphenomenal, leaking in from inputs, and not used in computation or relayed to downstream structures. This should be kept in mind when the authors suggest their findings implicate the DCIC functionally in sound localization.

      It is unclear why positive noise correlations amongst similarly tuned neurons would improve decoding. A toy model exploring how positive noise correlations interact with unreliable units that respond inconsistently could anchor these findings in an interpretable way. It seems plausible that inconsistent responses would benefit from strong noise correlations, simply because units respond together. This would predict that shuffling impairs performance, because one would then be sampling both trials in which some units respond and trials in which some units do not, and it may predict a bimodal performance distribution in which some trials decode well (when the units respond) and others decode poorly (when they do not).
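
      A minimal sketch of such a toy model under purely illustrative assumptions (a handful of identically tuned units whose response failures are shared across the population, i.e., positive noise correlations): decoding of the intact data is compared with data in which each unit's trials are shuffled within each stimulus class, which preserves single-unit tuning but destroys the co-response structure. Whether the shuffle helps or hurts depends on the assumed response statistics; the point is only to make the hypothesis testable.

      ```python
      # Simulate a small population with shared response failures and compare
      # decoding accuracy before and after a within-class trial shuffle.
      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n_trials, n_units = 400, 6
      azimuth = rng.integers(0, 2, n_trials)              # two azimuth classes
      tuning = np.where(azimuth[:, None] == 1, 6.0, 2.0)  # all units prefer class 1
      responds = rng.random(n_trials) < 0.5               # trials on which the population responds at all
      counts = rng.poisson(tuning * responds[:, None], (n_trials, n_units)).astype(float)

      # Shuffle each unit's trials independently within each azimuth class:
      # tuning is preserved, but units no longer respond (or fail) together.
      shuffled = counts.copy()
      for u in range(n_units):
          for c in (0, 1):
              idx = np.flatnonzero(azimuth == c)
              shuffled[idx, u] = shuffled[rng.permutation(idx), u]

      intact_acc = cross_val_score(GaussianNB(), counts, azimuth, cv=5).mean()
      shuffled_acc = cross_val_score(GaussianNB(), shuffled, azimuth, cv=5).mean()
      # Per-trial decoder confidence (e.g., GaussianNB().fit(...).predict_proba)
      # could additionally be inspected for the predicted bimodality across trials.
      print(f"intact correlations: {intact_acc:.2f}   shuffled: {shuffled_acc:.2f}")
      ```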

      Significance: Boffi and colleagues set out to parse the azimuthal information available in the DCIC on a single trial. They largely accomplish this goal and are able to extract this information when the units that carry more information about sound location are allowed to contribute to the decoding (e.g., through PCA or by decoding from top-unit activity specifically). The dataset will be of value to those interested in the DCIC and also to anyone interested in the role of noise correlations in population coding. Although this work is a first step toward parsing the information available in the DCIC, it remains difficult to interpret whether and how this azimuthal information is used in the localization behaviors of engaged mice.

    1. eLife assessment

      This valuable study provides convincing evidence that mutant hair cells with abnormal, reversed polarity of their hair bundles in mouse otolith organs retain wild-type localization, mechanoelectrical transduction and receptor field of their afferent innervation, leading to mild behavioral dysfunction. It thus demonstrates that the bimodal pattern of afferent nerve projections in this organ is not causally related to the bimodal distribution of hair-bundle orientations, as also confirmed in the zebrafish lateral line. The work will be of interest to scientists interested in the development and function of the vestibular system as well as in planar-cell polarity.

    2. Reviewer #1 (Public Review):

      Summary:

      The authors aim at dissecting the relationship between hair-cell directional mechanosensation and orientation-linked synaptic selectivity, using mice and the zebrafish. They find that Gpr156 mutant animals homogenize the orientation of hair cells without affecting the selectivity of afferent neurons, suggesting that hair-cell orientation is not the feature that determines synaptic selectivity. Therefore, the process of Emx2-dependent synaptic selectivity bifurcates downstream of Gpr156.

      Strengths:

      This is an interesting and solid paper. It solves an interesting problem and establishes a framework for follow-up studies, namely asking what the putative targets of Emx2 are that affect synaptic selectivity.

      The quality of the data is generally excellent.

      Weaknesses:

      The feeling is that the advance derived from the results is very limited.

    3. Reviewer #2 (Public Review):

      Summary:

      The authors inquire in particular whether the receptor Gpr156, which is necessary for hair cells to reverse their polarities in the zebrafish lateral line and mammalian otolith organs downstream of the differential expression of the transcription factor Emx2, also controls the mechanosensitive properties of hair cells and ultimately an animal's behavior. This study thoroughly addresses the issue by analyzing the morphology, electrophysiological responses, and afferent connections of hair cells found in different regions of the mammalian utricle and the Ca2+ responses of lateral line neuromasts in both wild-type animals and gpr156 mutants. Although many features of hair cell function are preserved in the mutants-such as development of the mechanosensory organs and the Emx2-dependent, polarity-specific afferent wiring and synaptic pairing-there are a few key changes. In the zebrafish neuromast, the magnitude of responses of all hair cells to water flow resembles that of the wild-type hair cells that respond to flow arriving from the tail. These responses are larger than those observed in hair cells that are sensitive to flow arriving from the head and resemble effects previously observed in Emx2 mutants. The authors note that this behavior suggests that the Emx2-GPR156 signaling axis also impinges on hair cell mechanotransduction. Although mutant mice exhibit normal posture and balance, they display defects in swimming behavior. Moreover, their vestibulo-ocular reflexes are perturbed. The authors note that the gpr156 mutant is a good model to study the role of opposing hair cell polarity in the vestibular system, for the wiring patterns follow the expression patterns of Emx2, even though hair cells are all of the same polarity. This paper excels at describing the effects of gpr156 perturbation in mouse and zebrafish models and will be of interest to those studying the vestibular system, hair cell polarity, and the role of inner-ear organs in animal behavior.

      Strengths:

      The study is exceptional in including, not only morphological and immunohistochemical indices of cellular identity but also electrophysiological properties. The mutant hair cells of murine maculæ display essentially normal mechanoelectrical transduction and adaptation-with two or even three kinetic components-as well as normal voltage-activated ionic currents.

    1. eLife assessment:

      This important study investigates the contribution of cytosolic S100A8/A9 to neutrophil migration to inflamed tissues. The authors provide convincing evidence for how the loss of cytosolic S100A8/A9 specifically affects the ability of neutrophils to crawl and subsequently adhere under shear stress. This study will be of interest in fields where inflammation is implicated, such as autoimmunity or sepsis.

    2. Reviewer #1 (Public Review):

      Summary:

      In this manuscript by Napoli et al., the authors study the intracellular function of cytosolic S100A8/A9, a soluble myeloid-cell protein that operates extracellularly as an alarmin and whose intracellular function is not well characterized. Here, the authors utilize state-of-the-art intravital microscopy to demonstrate that the adhesion defects observed in cells lacking S100A8/A9 (Mrp14-/-) are not rescued by exogenous S100A8/A9, thus highlighting an intrinsic defect. Based on this result, subsequent efforts were employed to characterize the nature of those adhesion defects.

      Strengths:

      The authors convincingly show that Mrp14-/- neutrophils have normal rolling but defective adhesion caused by impaired CD11b activation (deficient ICAM1 binding). Analysis of cellular spreading (defective in Mrp14-/- cells) is also sound. The manuscript then focuses on selective signaling pathways and calcium measurements. Overall, this is a straightforward study of biologically important proteins and mechanisms.

      Weaknesses:

      Some suggestions are included below to improve this manuscript.

    3. Reviewer #2 (Public Review):

      Summary:

      Napoli et al. provide a compelling study showing the importance of cytosolic S100A8/9 in maintaining calcium levels at LFA-1 nanoclusters at the cell membrane, thus allowing the successful crawling and adherence of neutrophils under shear stress. The authors show that cytosolic S100A8/9 is responsible for retaining stable and high concentrations of calcium specifically at LFA-1 nanoclusters upon binding to ICAM-1, and imply that this process aids in facilitating actin polymerisation involved in cell shape and adherence. The authors show early on that S100A8/9 deficient neutrophils fail to extravasate successfully into the tissue, thus suggesting that targeting cytosolic S100A8/9 could be useful in settings of autoimmunity/acute inflammation where neutrophil-induced collateral damage is unwanted.

      Strengths:

      Using multiple complementary methods, from imaging to western blotting and flow cytometry, including extracellular supplementation of S100A8/9 in vivo, the authors conclusively show that a defect in intracellular S100A8/9, rather than extracellular S100A8/9, was responsible for the loss of neutrophil adherence, and they pinpointed that S100A8/9 aided calcium stabilisation and retention at the plasma membrane.

      Weaknesses:

      (1) Extravasation is shown to be a major defect of Mrp14-/- neutrophils, but the Giemsa staining in Figure 1H seems quite unspecific to me, as neutrophils were identified by nuclear shape and granularity. It would perhaps have been clearer to use immunofluorescence staining for neutrophils instead, as in Supplementary Figure 1A (staining for Ly6G or other markers instead of S100A9).

      (2) The representative image for Mrp14-/- neutrophils used in Figure 4K to demonstrate Ripley's K function seems to be very different from that shown above in Figures 4C and 4F.

      (3) Although the authors have done well to draw a path linking cytosolic S100A8/9 to actin polymerisation and subsequently the arrest and adherence of neutrophils in vitro, the authors can be more explicit with the analysis - for example, is the F-actin co-localized with the LFA-1 nanoclusters? Does S100A8/9 localise to the membrane with LFA-1 upon stimulation? Lastly, I think it would have been very useful to close the loop on the extravasation observation with some in vitro evidence to show that neutrophils fail to extravasate under shear stress.

    1. However, measured maximum electric fields and calculated specific absorption rate (SAR) values are well below the maximum permissible levels published by International Commission on Non-Ionizing Radiation Protection (ICNIRP).

      This should be the first sentence.

    1. eLife assessment

      This valuable study reports that actin-related proteins may be involved in transcriptional regulation during spermatogenesis. The supporting data remain incomplete, and more extensive disentanglement from the canonical role of these actin-related proteins and the experimental validation of in silico predictions are required. This work will be of interest to reproductive biologists and other researchers working on non-canonical roles of actin and actin-related proteins.

    2. Reviewer #2 (Public Review):

      Summary:

      How the dynamics of gene expression accompany cell fate and cellular morphological changes is important for our understanding of the molecular mechanisms that govern development and disease. The phenomenon is particularly prominent during spermatogenesis, the process by which spermatogonial stem cells develop into sperm through a series of steps of cell division, differentiation, meiosis, and cellular morphogenesis. The intricacy of the various cellular processes and of gene expression during spermatogenesis remains to be fully understood. In this study, the authors found that the testis-specific actin-related proteins (which usually participate in modifying cells' cytoskeletal systems) ACTL7A and ACTL7B were expressed and localized in the nuclei of mouse spermatocytes and spermatids. Based on this observation, the authors analyzed protein sequence conservation of ACTL7B across dozens of species and identified a putative nuclear localization sequence (NLS) of the kind often responsible for the nuclear import of proteins that carry it. Using molecular biology experiments in a heterologous cell system, the authors verified the potential role of this internal NLS and found it indeed could facilitate the nuclear localization of marker proteins when expressed in cells. Using gene-deleted mouse models they generated previously, the authors showed that deletion of Actl7b caused changes in gene expression and mis-localization of nucleosomal histone H3 and the chromatin regulators histone deacetylases HDAC1 and HDAC2, supporting the proposed roles of ACTL7B in regulating gene expression. The authors further used AlphaFold2 to model the potential protein complexes that could be formed between the ARPs (ACTL7A and ACTL7B) and known chromatin modifiers, such as the INO80 and SWI/SNF complexes, and found that, consistent with previous findings, ACTL7A and ACTL7B likely interact with the chromatin-modifying complexes by binding cooperatively to their alpha-helical HSA domain. These results suggest that ACTL7B possesses novel functions in regulating chromatin structure, and thus gene expression, beyond its conventional roles in cytoskeleton regulation, providing alternative pathways for understanding how gene expression is regulated during spermatogenesis and the etiology of relevant infertility diseases.

      Strengths:

      The authors provided sufficient background to the study and discussion of the results. Building on their previous research, this study utilized numerous methods, including the protein-complex structural modeling method AlphaFold2 Multimer, to further investigate the functional roles of ACTL7B. The results presented here are in general of good quality. The identification of a potential internal NLS in ACTL7B is mostly convincing and in line with the phenotypes presented in the gene deletion model.

      Weaknesses:

      While the study offers an interesting new look at the functions of ARP proteins during spermatogenesis, some of it remains mainly theoretical speculation, including the proposed protein complex formation. Some of the results may need further experimental verification, for example the differentially expressed genes found in spermatogenic cells at different developmental stages, in order to support the conclusions and avoid undermining the significance of the study.

    3. Reviewer #3 (Public Review):

      In this manuscript, Pierre Ferrer and colleagues explore the exciting possibility that, in the male germ line, the composition and function of deeply conserved chromatin remodeling complexes are fine-tuned by the addition of testis-specific actin-related proteins (ARPs). In this regard, the authors aim to extend previously reported non-canonical (transcriptional) roles of ARPs in somatic cells to the unique developmental context of the germ line. The manuscript is focused on the potential regulatory role in post-meiotic transcription of two ARPs: ACTL7A and ACTL7B (particularly the latter). The canonical function of both testis-specific ARPs in spermatogenesis is well established, as they have been previously shown to be required for the extensive cellular morphogenesis program driving post-meiotic development (spermiogenesis). Disentangling the actual functions of ACTL7A and ACTL7B as transcriptional regulators from their canonical role in the profound morphological reshaping of post-meiotic cells (a process that also deeply impacts nuclear architecture and regulation) represents a key challenge in interpreting the reported findings (see below).

      The authors begin by documenting, via fluorescence microscopy, the intranuclear localization of ACTL7B. This ARP is convincingly shown to accumulate in the nucleus of spermatocytes and spermatids. Using a series of elegant reporter-based experiments in a somatic cell line, the authors map the driver of this nuclear accumulation to a potential NLS sequence in the ACTL7B actin-like body domain. Ferrer and colleagues then performed a testicular RNA-seq analysis in ACTL7B KO mice to define the putative role of ACTL7B in male germ cell transcription. They report substantial changes to the testicular transcriptome - particularly the upregulation of several classes of genes - in ACTL7B KO mice. However, wild-type testes were used as controls for this experiment, thus introducing a clear confounding effect to the analysis (ACTL7B KO testes have extensive post-meiotic defects due to the canonical role of ACTL7B in spermatid development). Then, the authors employ cutting-edge AI-driven approaches to predict that both ACTL7A and ACTL7B are likely to bind to four key chromatin remodeling complexes. Although these predictions are based on a robust methodology, they would certainly benefit from experimental validation. Finally, the authors associate the loss of ACTL7B with decreased lysine acetylation and lower levels of the HDAC1 and HDAC3 chromatin remodelers in the nucleus of developing spermatids.

      Globally, these data may provide important insight into the unique processes male germ cells employ to sustain their extraordinarily complex transcriptional program. Furthermore, the concept that (comparably younger) testis-specific proteins can be incorporated into ancient chromatin remodeling complexes to modulate their function in the germ line is timely and exciting.

      It is my opinion that the manuscript would benefit from additional experimental validation to better support the authors' conclusions. In particular, I believe that addressing two critical points would substantially strengthen the message of the manuscript:

      (1) The proposed role of ACTL7B in post-meiotic transcriptional regulation temporally overlaps with the protein's previously reported canonical functions in spermiogenesis (PMID: 36617158 and 37800308). Indeed, the canonical functions of ACTL7B have been shown to have a profound effect at the level of spermatid morphology and to impact nuclear organization. This potentially renders the observed transcriptional deregulation in ACTL7B KO testes an indirect consequence of spermatid morphology defects. I acknowledge that it is experimentally difficult to disentangle the proposed intranuclear roles of ACTL7B from the protein's well-documented cytoplasmic function. Perhaps the generation of an NLS-scrambled ACTL7B variant could offer some insight. In light of the substantial investment this approach would represent, I would suggest, as an alternative, that instead of using wild-type testes as controls for the transcriptome and chromatin localization assays, the authors consider the possibility of using testicular tissue from a mutant with similarly abnormal spermiogenesis but due to transcription-independent defects. This would, in my opinion, offer a more suitable baseline to compare ACTL7B KO testes with.

      (2) The manuscript would greatly benefit if experimental validation of the AI-driven predictions were provided (in terms of the binding capacity of ACTL7A and ACTL7B to key chromatin remodeling complexes), all the more so since the authors seem to have the technical expertise and mass spectrometry data required for this purpose (lines 664-665). Still on this topic, given the predicted interactions of ACTL7A and ACTL7B with the SRCAP, EP400, SMARCA2 and SMARCA4 complexes (Figure 7), it is rather counter-intuitive that the authors chose, for their immunofluorescence assays in ACTL7B KO testes, to determine the chromatin localization of HDAC1 and HDAC3 rather than that of any of the above four complexes.

    1. When an AtomSpace inherits from multiple contributors, the contents of that AtomSpace is the set-union of the contributing spaces.
    2. The unique ID of a Node is it's string name

      I thought things were content-addressed. Having a string name is strange.

    3. One of the keys might hold the truthiness of this statement. Another key might hold its probability.

      Aren't these two about the same thing, where true is 1 and probability = 0.6 is just a more fine-grained expression of truthiness?

      This unimportant nitpick aside, can you annotate, say, truthiness = true with probability = 0.6?

      (probability (truthiness (owns Jack computer) true) 0.6)
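
      A toy sketch of what this could look like, in plain Python rather than the actual AtomSpace API (the type names Evaluation, Predicate, Concept, and Valuation are only illustrative): atoms are content-addressed by their type plus name or outgoing tuple, and any number of keyed values can hang off an atom, including a probability attached to a truthiness valuation.

      ```python
      # Content-addressed atoms with arbitrary keyed values attached to them.
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Atom:
          type: str
          name: str = ""
          out: tuple = ()      # outgoing tuple for links; empty for nodes

      class ToySpace:
          def __init__(self):
              self._values = {}          # the Atom itself (its content) is the key

          def set_value(self, atom, key, value):
              self._values.setdefault(atom, {})[key] = value

          def get_value(self, atom, key):
              return self._values.get(atom, {}).get(key)

      space = ToySpace()
      owns = Atom("Evaluation", out=(Atom("Predicate", "owns"),
                                     Atom("Concept", "Jack"),
                                     Atom("Concept", "computer")))

      # Two separate keys on the same atom: one for truthiness, one for probability.
      space.set_value(owns, "truthiness", True)
      space.set_value(owns, "probability", 0.6)

      # Or, closer to the nesting above: attach the probability to the valuation
      # (owns, truthiness = true) rather than to the atom itself.
      space.set_value(Atom("Valuation", out=(owns, Atom("Bool", "true"))), "probability", 0.6)

      print(space.get_value(owns, "truthiness"), space.get_value(owns, "probability"))
      ```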

    1. The value of information is manifested in various contexts

      turner edwards: can there ever be an objective value of information, or does it all just rely on context?

    2. Two added elements illustrate important learning goals related to those concepts: knowledge practices,5 which are demonstrations of ways in which learners can increase their understanding of these information literacy concepts, and dispositions,6 which describe ways in which to address the affective, attitudinal, or valuing dimension of learning.

      Samer Shamieh: Illustrating your learning goals will ultimately help you learn better and have the knowledge to understand what you read about.

    1. Video summary [00:00:00] - [02:13:00]:

      This video is a conference organized by France Stratégie on the theme of public policy evaluations and their impact. It brings together speakers from different backgrounds, such as ministers, parliamentarians, senior civil servants, researchers, experts, and representatives of civil society. They share their experiences, practices, and recommendations for improving the quality, relevance, and use of evaluations in the public decision-making process.

      Highlights:

      • [00:00:00] Introduction by Gilles de Margerie, Commissioner-General of France Stratégie
        • Presents the context and stakes of the conference
        • Underlines the role of France Stratégie as a producer and disseminator of evaluations
        • Announces the program and the speakers
      • [00:01:11] Speech by Bruno Le Maire, Minister of the Economy, Finance and Recovery
        • Asserts that public policy evaluations have an impact on decisions
        • Cites examples of evaluated policies, such as the recovery plan and the pension reform
        • Argues for a more developed evaluation culture in France
      • [00:07:29] Presentation of the report "Quelles évaluations de politiques publiques pour quelles utilisations ?" by Adam Baïz, coordinator of public policy evaluation at the Cour des comptes
        • Lays out the methodology and results of the study conducted by France Stratégie
        • Analyzes the evolution of the mobilization, production, and use of evaluations in parliamentary debate
        • Formulates proposals to strengthen the quality and impact of evaluations
      • [00:28:31] Round table moderated by Emmanuel Cugny, journalist at France Info, with Pierre Moscovici, First President of the Cour des comptes, Isabelle Dechef de Ville, president of the Société française de l'évaluation, Amélie Verdier, director general of the Agence régionale de santé d'Île-de-France, and Gilles de Margerie, Commissioner-General of France Stratégie
        • Exchange on the practices, stakes, and prospects of public policy evaluation
        • Addresses questions such as the place of evaluation in public decision-making, the quality and credibility criteria for evaluations, the modes of cooperation between evaluation actors, and the challenges posed by the health crisis
        • Answers questions from the audience
      • [02:07:11] Conclusion by Gilles de Margerie, Commissioner-General of France Stratégie
        • Takes stock of the conference and thanks the participants
        • Underlines the importance of evaluation for democracy, transparency, and the effectiveness of public action
        • Calls for continued dialogue and reflection on public policy evaluation
    1. Can we anchor the epistemology of literary studies to the theoretical frameworks of the digital humanities?

      not clear as an explanation of the question

    2. …computational literary studies today and explore what dominates the conversation in this field

      cite the ideas of computer-assisted reading and text analysis (Meunier)

    3. a coherent and unified disciplinary…

      that is, moreover, not their intention, I believe

    4. ideological

      or "political"?

    5. …what the author sought to express in a specific historical context, and more globally what the world was like (and how the world was perceived) in that context; understanding the reactions of an individual or a society to a text, and what those reactions reveal about the text, as well as about the individual or society reacting; understanding what the text means for the reader outside the historical context in which the text was produced, either by analyzing its formal and internal features or by exploring the universal themes and subjects that transcend the text.

      references?

    6. to represent the unrepresentable and to translate the untranslatable

      Not sure. This is a completely ideological claim, romantic and unargued. I would qualify it.

    7. …computational literary studies are groping toward a clearer form, an epistemological framework, and institutional recognition

      Is this your research question, the institutionalization of computational literary studies? I don't think so. In an introduction I would want a clearly formulated research question.

    8. égularisé (

      es

    9. …it is nevertheless under this umbrella

      of DH? Only?

    1. Writing was very important for maintaining the cohesion of the Egyptian state. Literacy was concentrated in an educated elite of scribes. Becoming a scribe was the aspiration of any Egyptian of humble descent. The hieroglyphic system was always difficult to learn

      It is impressive how writing has evolved over the years, and how important it has been in Egypt and elsewhere, shaping how future generations perceive writing in different ways.

    2. This is about the history of writing. It discusses the different writing systems that have developed around the world. The first writing systems were pictographic and ideographic. These were gradually replaced by more complex systems that could represent sounds. Writing has had a profound impact on human history. It has made it possible to record information and to communicate over great distances. It has also played an important role in the development of civilization.

    3. Writing first arose in a number of different cultures in the Bronze Age; the earliest known scripts seem to have been logographic or pictographic. Scripts of this kind appear to have emerged independently in the Near East, Egypt, the Indus Valley, and ancient China. Later, the Near Eastern scripts evolved into cuneiform systems based on the representation of sounds, so they ceased to be purely logographic. The Egyptian and Chinese scripts also developed phonetic complements to make the logographic signs easier to read.

      What struck me most was where writing first arose; this was something I did not know, as was how logography came about. The reading tells us that it was born out of commercial necessity. I think this is a very important document to read.

    4. Proto-writing is a very interesting topic, since it refers to the first writing systems that emerged at the end of the 4th millennium BC in Eurasia. These systems used ideographic and mnemonic symbols to convey information but lacked direct linguistic content. Proto-writing is considered to have evolved gradually into more complex forms, such as the hieroglyphic scripts of the ancient Middle East: Egyptian, Sumerian proto-cuneiform, and Cretan. In addition, evidence of possible early forms of writing has been found in China and in the Indus region in the 2nd millennium BC. Proto-writing is a fascinating topic that lets us understand the origin and development of writing throughout history.

    5. The Greek alphabet, adapted from the Phoenician, gave rise in the Mediterranean to the Etruscan alphabet and several other proto-Italic alphabets (including the Latin alphabet) in the 8th century BC. The Greek alphabet was the first to introduce vowel signs. (The Greeks took the final step, separating vowels from consonants and writing them separately.) Early versions of the Greek alphabet also gave rise to several Anatolian alphabets and to scripts used for Paleo-Balkan languages.

      Here we are told that the Greek alphabet, born from the adaptation of the Phoenician, has had a profound impact on the history of writing, laying the foundations for numerous writing systems still used around the world today. Its main contribution was the introduction of independent vowels, a fundamental advance that allowed a more precise representation of spoken language.

    6. Proto-writing is the first stage of writing; writing of that kind was invented in Eurasia in the 4th millennium BC.

      Proto-writing marks a significant advance for humanity, since it transmits information and gives writing in the modern sense a basis on which to develop more complex and sustained forms. Written communication as we know it today not only facilitates the transmission of information but also contributes to the development of thought. As it continued to evolve, proto-writing reflected human creativity in the search for effective forms of communication; it also allows us to understand the origin of written language and the impact of the development of civilizations. In short, it represents a fascinating chapter in the history of human communication, one that extends to the present day.

    7. I found this reading very interesting because it is impressive to recognize how meaningful and powerful writing can be, and to see how much it has evolved; it has taken on many forms of interpretation among the people of those times. Writing did not start out in the specific form we know today (letters or an alphabet). It originated with symbols whose meaning was understandable only to those living at the time, what we now know as symbolic writing, which used ideographic symbols.

    8. Contact with European peoples stimulated the creation of numerous original writing systems in North America and Africa. Many of these systems were independent of the Latin alphabet, but their creators were inspired by the idea they grasped upon seeing Western colonizers use the Latin alphabet

      I find this very interesting, as it shows how contact with European peoples had a significant impact on the creation of original writing systems in North America and Africa. It is fascinating to see how, even though many of these systems were independent of the Latin alphabet, their creators were inspired by the idea they took from observing Western colonizers' use of the Latin alphabet. This highlights the cultural influence and creative adaptation that can arise from interaction between different ethnic groups.

    9. This topic is very interesting and informative, as it covers the history of writing, which is fundamental to understanding the evolution of civilizations. This history began with proto-writing systems in the 7th millennium BC, using ideographic symbols. Cuneiform writing then arose in the 4th millennium BC in Sumer, derived from clay tokens used for accounting records, and hieroglyphs were developed in Egypt. Writing also appeared independently in China and Mesoamerica. These systems made it possible to record information, manage resources, and transmit knowledge, marking the transition from prehistory to history and laying the foundations for written communication around the world. Thanks to the evolution of writing, we are now more knowledgeable beings.

    10. The practice of writing played a crucial role in preserving the unity of the Egyptian state. The ability to read and write was limited to an educated minority of scribes. Becoming a scribe was the aspiration of every Egyptian of modest origin. Over time, the hieroglyphic writing system became ever more complex with the addition of more and more signs, eventually reaching several hundred.

    11. Throughout history, evolution has been one of the key terms for existence, and even writing has evolved radically since the 4th millennium BC. It is very important and interesting to learn about the evolution of writing, since it helps us understand its history and that of humans; although the origin of humans remains a mystery, that of writing no longer is. Research has concluded that there are various scripts around the world, each representing a nation or people, and they have changed alongside one another.

    1. Arts integration and UDL are natural partners. The arts offer teachers multiple means for providing information to a wide range of learners, multiple means for all students to make sense of and express their understandings, and multiple means for engaging all students in participatory, collaborative, authentic, and energizing learning experiences.

      I think this sums up UDL and arts integration perfectly. Arts integration in a classroom is a great way to promote inclusion, participation, and effort for all learners.

    2. Arts integration also involves students in ongoing reflection and self-assessment. Because the products students create are concrete and visible—a dance sequence, a musical composition, a poem, a collage, a dramatic improvisation—it is possible for students (and teachers, too) to examine their progress and reflect on what is working well and what needs improvement.

      I love how arts integration helps students be a part of their learning and reflection. When they create products based on goals they have set, they are more likely to put in effort and succeed, especially when it is something fun like art!

    1. where the deconstruction of speculative claims possesses or at least seems to possess clear speculative effects, the deconstruction of scientific claims does not, as a rule, possess any scientific effects. BBT, recall, is an empirical theory

      Attempting to deconstruct scientific claims gave us the Sokal affair, which has been greatly embarrassing to postmodernism.

    2. Informatic insufficiency is parasitic on sufficiency, as it has to be, given the mechanistic nature of neural processing. For any circuit involving inputs and outputs, differences must be made. Sufficient or not, the system, if it is to function at all, must take it as such.

      A brain trying to catch itself in the act of the sufficiency illusion is only stuck in another sufficiency illusion, because the infinite regress must end somewhere. Metacognition is very slow (about 10 Hz) and small (about 10 items), so the infinite regress typically stops after 1 or 2 acts of "catching itself in the act of sufficiency illusion".

      "Ah ha, I am subject to the sufficiency illusion!"

      "Ah ha, I am subject to the sufficiency illusion-sufficiency illusion!"

      "I give up, this is too meta. And pointless, since I'm just always in a sufficiency illusion anyway."

    3. On BBT, all traditional and metacognitive accounts of the human are the product of extreme informatic poverty. Ironically enough, many have sought intentional asylum within that poverty in the form of apriori or pragmatic formalisms, confusing the lack of information for the lack of substantial commitment, and thus for immunity against whatever the sciences of the brain may have to say. But this just amounts to a different way of taking refuge in obscurity. What are ‘rules’? What are ‘inferences’? Unable to imagine how science could answer these questions, they presume either that science will never be able to answer them, or that it will answer them in a manner friendly to their metacognitive intuitions. Taking the history of science as its cue, BBT entertains no such hopes. It sees these arguments for what they happen to be: attempts to secure the sufficiency of low-dimensional, metacognitive information, to find gospel in a peephole glimpse.

      This describes the approach of Sellars, Brandom, and Brassier, all of whom Bakker has criticized on the blog.

      They admit that science has priority in the scientific realm, but hold that what we think we are is not something that can be true or false; rather, it is games, rules, things we play, a game of "let's pretend we are persons".

      This is a much better position. It does not attempt to tell science that science is a building founded upon the ground of philosophy (unlike Kant or Heidegger), and it does not try to make scientifically testable predictions and get embarrassed in the process (unlike those who sought to study the "quantum of consciousness" because they thought free will was real and thus something quantum-mechanical must be true of the brain, or the philosopher who argued that Anton syndrome is impossible because it is philosophically impossible, or the psychoanalysts who tried to interpret Cotard's syndrome as some manifestation of childhood trauma).

      The problem with this position is as follows:

      1. Is science really based on a game of giving and taking reasons? If not, then there's no guarantee that science would protect the game of "let's pretend we are persons who make decisions, have plans, hope for love, etc.". The juggernaut of science may eventually crush the "manifest image of man" under its wheels, migrate to a society of unconscious biorobots, and run even faster as a result!

      2. Philosophers are unable to figure out what rules, games, normativity, etc., are! They can't agree, after centuries of disputation. Any working consensus will have to come from science, and what if science finally shows that rules and games are nothing like what Sellars, Brandom, etc., thought they are? If so, then not only is the manifest image not the scientific image, and not only is it unnecessary for working scientists, it is not even what the philosophers say it is. It is as if the philosophers have been stuck in Plato's Cave, mistaking the shadow-play for optical science.

    4. Because it is blind to itself, it cannot, temporally speaking, differentiate itself from itself. As a result, such acts seem to arise from some reflexive source. The absence of information, once again, means the absence of distinction, which means identity.

      The brain is too slow to catch itself in the act of changing, and too slow to catch itself in the act of being too slow, so the brain just defaults to treating itself as unchanging, identical through time.

    5. the ‘fundamental synthesis’ described by Hagglund is literally a kind of ‘flicker fusion,’ a metacognitive presumption of identity where there is none. It is a kind of mandatory illusion: illusory because it egregiously mistakes what is the case, and mandatory because, like the illusion of continuous motion in film, it involves basic structural capacities that cannot be circumvented and so ‘seen through.’

      I am a brain that moves through time, synapses and biochemicals flickering furiously. It is too fast for the brain to catch itself in the change. The brain flickers 100 times a second (gamma brainwave), but conscious thought comes at most 10 times a second. For the brain to "catch itself in the change" the brain would have to somehow self-represent 10 times faster.

      So, the brain can't catch itself in the change. Imagine a single second and 100 brain-states during that single second. Each brain-state differs from the previous one at trillions of synapses, but the brain is stuck not representing the difference between brain-0, brain-1, ..., brain-10, because otherwise, it would have to squeeze in 10 units of change into 1 unit of reflective thought. There is simply not enough time.

      So, the brain sees itself as mostly unchanging despite there being plenty of change, because not only is it too slow to see its own change, it is also too slow to represent its own slowness. If it spent all its time representing its own slowness, it would promptly die of starvation, because a brain is built not for contemplation but for surviving.

    6. the spectre of noocentrism, the possibility that our conception of ourselves as intentional is a kind of perspectival illusion pertaining to metacognition not unlike geocentrism in the case of environmental cognition.

      Geocentrism: not only can't we see the Earth as moving, we can't even see that we are missing the information. We are not left with a nagging sense of "I don't know enough either way", but rather an obvious "Earth abides".

      Noocentrism: not only can't we see how we are made of parts (neurons firing, biochemicals flickering) that are too fast and too numerous for us to see, we can't even see that we are unable to see. We are not left with a nagging sense of "I don't know how much I know about myself", but rather an obvious "I see all there is to see".

    7. trace and differance do not possess the resources to even begin explaining synthesis in any meaningful sense of the term ‘explanation.’ To think that it does, I have argued, is to misconceive both the import and the project of deconstruction. But this does not mean that presence/synthesis is in fact insoluble.

      Derrida used trace and differance to do deconstruction, showing how weird it is that language can mean things, but unstably. Trying to build a science out of it is doomed. Deconstruction cannot build a science.

      But the opposite way can. Science can explain why the meaning of language is weird, and why deconstruction works.

    8. In fact, if anything is missing in an exegetical sense from Hagglund’s consideration of Derrida it has to be Heidegger, who edited The Phenomenology of Internal Time-consciousness and, like Derrida, arguably devised his own philosophical implicature via a critical reading of Husserl’s account of temporality. In this sense, you could say that trace and differance are not the result of a radicalization of Husserl’s account of time, but rather a radicalization of a radicalization of that account.

      Husserl tried to be objective and scientific about time. He tried to study how time is perceived by introspection, but at least he tried to be scientific about it. Basically, though he couldn't use a microscope or a clock, he tried to form the equivalent of microscopes and clocks using nothing but his mind, to probe his mind.

      Heidegger radicalized this by saying that first-person time and third-person time are so different because of something philosophical: the "Ontological difference". He then proceeded to build a whole philosophy of the first person using only what's available first-person.

      Derrida radicalized Heidegger by saying that the Ontological difference exists, but simply cannot be escaped. Heidegger's project is doomed from the start.

    9. even though Hagglund utterly fails to achieve his thetic goals, there is a sense in which he unconsciously (and inevitably) provides a wonderful example of the very figure Derrida is continually calling to our attention. The problem of synthesis is the problem of presence, and it is insoluble, insofar as any theoretical solution, for whatever reason, is doomed to merely reenact it.

      The problem of synthesis: If things keep changing, how come some things are the same things?

      The problem of presence: If things don't exist (because everything keeps changing), why do we see things as if they exist stably over time?

      Derrida: The problem can't be solved. Any attempt to solve the problem of presence by making a theory will just end up creating it again, but in more arcane language, because even if a theory solves the problem right now, it will lose its power over time, because meaning is unstable. In fact, I'll keep trying this again and again in my writings to show you how I can't jump out of this magic circle no matter how much I try, and neither can you.

    10. The synthesis of the trace follows from the constitution of time we have considered. Given that the now can appear only by disappearing–that it passes away as soon as it comes to be–it must be inscribed as a trace in order to be at all. This is the becoming-space of time. The trace is necessarily spatial, since spatiality is characterized by the ability to remain in spite of temporal succession. Spatiality is thus the condition for synthesis, since it enables the tracing of relations between past and future. Radical Atheism, 18 But as far as ‘explanations’ are concerned it remains unclear as to how this can be anything other than a speculative posit. The synthesis of now moments occurs somehow. Since the past now must be recuperated within future nows, it makes sense to speak of some kind of residuum or ‘trace.’ If this synthesis isn’t the product of subjectivity, as Kant and Husserl would have it, then it has to be the product of something. The question is why this ‘something’ need have anything to do with space. Why does the fact that the trace (like the Dude) ‘abides’ have anything to do with space? The fact that both are characterized by immunity to succession implies, well… nothing. The trace, you could say, is ‘spatial’ insofar as it possesses location. But it remains entirely unclear how spatiality ‘enables the tracing of relations between past and future,’ and so becomes the ‘condition for synthesis.’

      Hagglund, having broken time into pieces, wanted to piece time back together again. If time is in pieces, and remains in pieces, where does he find some glue? He can't glue time together with anything time-like, because that would break apart just like time does.

      So he went for space... somehow? It is entirely unclear how that's supposed to work. Hagglund apparently thought that because space does not come one after another, it is not like time. But space does come one to the left of another. It seems his entire argument is that one moment of time "replaces" another, while one point in space does not "replace" another.

      And there's the problem with relativity, where space and time are related by coordinate transforms. Space would be broken into pieces too if time is.

    11. No matter how fierce the will to hygiene and piety, reason is always besmirched and betrayed by its occluded origins. Thus the aporetic loop of theory and practice, representation and performance, reflexivity and irreflexivity–and, lest we forget, interiority and exteriority…

      No matter how hard the philosophers try, they always seem to end up deconstructing themselves and running in circles of interpretation. One generation proposes "theory", another "practice", etc. It is quite tiring.

      At least Derrida was doing it on purpose, trying to show that philosophers are spinning endlessly in place as fast as possible. Instead of it taking centuries, he wanted to show the loop in the space of a single 10-page paper (hopefully).

    12. But this requires that he retreat from his earlier claims regarding the ultratranscendental status of trace and differance, that he rescind the claim that they constitute an ‘all the way down’ condition. He could claim they are merely transcendental in the Kantian, or ‘conditions of experience,’ sense, but then that would require abandoning his claim to materialism, and so strand him with the ‘old Derrida.’ So instead he opts for ‘compatibility,’ and leaves the question of theoretical utility, the question of why we should bother with arcane speculative tropes like trace and differance given the boggling successes of the mechanistic paradigm, unasked.

      The trilemma, all bad:

      1. Bite the bullet, insist that our world really is made of "arche-material", and try to use deconstruction to tell physicists that they need to start doing deconstruction if they want to know what the material world is. This is not going to be taken seriously.
      2. Retreat back to saying that deconstruction is what the mind is like. That's just plain old Derrida, nothing new. He wants to be new.
      3. Say that deconstructive physics is merely compatible with scientific physics... in which case, why bother? Deconstructivists have won exactly zero Nobel Prizes in Physics and written zero physics textbooks. Do they really have any advantage here? If not, why not just stick with standard scientific physics?
    13. Hagglund, in effect, has argued himself into the very bind which I fear is about to seize Continental philosophy as a whole. He recognizes the preposterous theoretical hubris involved in arguing that the mechanistic paradigm depends on arche-materiality, so he hedges, settles for ‘compatibility’ over anteriority. In a sense, he has no choice. Time is itself the object of scientific study, and a divisive one at that. Asserting that trace and differance are constitutive of the mechanistic paradigm places his philosophical speculation on firmly empirical ground (physics and cosmology, to be precise)–a place he would rather not be (and for good reason!).

      Hagglund takes literary criticism and then tries to submit the entire physical universe to it. But that would be rather incredible. Even philosophers would balk at trying to apply trace and differance to... the big bang theory, or Darwinian evolution, or the formation of the solar system.

      So Hagglund retreats a bit. Instead of saying that deconstruction is a foundation for physical cosmology, he says it is compatible with it.

      One is reminded of the Bergson vs Einstein debate on time. While Bergsonian time is still endlessly analyzed for its literary value, Einsteinian time is simply the working hypothesis for engineers and physicists and, in its approximate form of Newtonian time, the working hypothesis of everyone, even philosophers.

    14. This notion of the arche-materiality can accommodate the asymmetry between the living and the nonliving that is integral to Darwinian materialism (the animate depends upon the inanimate but not the other way around). Indeed, the notion of arche-materiality allows one to account for the minimal synthesis of time–namely, the minimal recording of temporal passage–without presupposing the advent or existence of life. The notion of arche-materiality is thus metatheoretically compatible with the most significant philosophical implications of Darwinism: that the living is essentially dependant on the nonliving, that animated intention is impossible without mindless, inanimate repetition, and that life is an utterly contingent and destructible phenomenon. Unlike current versions of neo-realism or neo-materialism, however, the notion of arche-materiality does not authorize its relation to Darwinism by constructing an ontology or appealing to scientific realism but rather articulating a logical infrastructure that is compatible with its findings. Journal of Philosophy

      While "arche-writing" highlights the inherent instability of meaning inherent in any signifying system, "arche-materiality" broadens this notion to encompass the material world itself. The instability and deferral of meaning Derrida identified in language are ultimately grounded in the fundamental instability and flux of the material world itself.

      He rejects any materialism that is based on strings, or fields, or atoms, etc. He rejects any materialism that is based on fixed kinds of things, because physics deconstructs itself as much as text deconstructs words.

      As an application, "arche-materiality" provides a "logical infrastructure" compatible with scientific findings like those of Darwinism.

    15. “The succession of time,” Hagglund states in his Journal of Philosophy interview, “entails that every moment negates itself–that it ceases to be as soon as it comes to be–and therefore must be inscribed as trace in order to be at all.” Trace and differance, he claims, are logical as opposed to ontological implications of succession, and succession seems to be fundamental to everything.

      Time destroys itself. Self-destruction is what time is. Everything is in time. Everything destroys itself. Everything exists only as trace.

      Hagglund takes deconstruction out of literature and puts it into physical cosmology.

    16. Derrida himself did, evince their ‘quasi-transcendentality’ through actual interpretative performances. One can, in other words, either refer or revere. Since second-order philosophical accounts are condemned to the former, it has become customary in the philosophical literature to assign content to the impossibility of stable content assignation, to represent the way performance, or the telling, cuts against representation, or the told. (Deconstructive readings, you could say, amount to ‘toldings,’ readings that stubbornly refuse to allow the antinomy of performance and representation to fade into occlusion). This, of course, is one of the reasons late 20th century Continental philosophy came to epitomize irrationalism for so many in the Anglo-American philosophical community. It’s worth noting, however, that in an important sense, Derrida agreed with these worries: this is why he prioritized demonstrations of his position over schematic statements, drawing cautionary morals as opposed to traditional theoretical conclusions. As a way of reading, deconstruction demonstrates the congenital inability of reason and representation to avoid implicitly closing the loop of contradiction. As a speculative account of why reason and representation possess this congenital inability, deconstruction explicitly closes that loop itself.

      Deconstruction shows that meaning is unstable, so why write so many difficult books? By repeatedly showing how words have unstable meaning, they are trying to show something they can't say in words.

      By calling "trace" and "différance" "quasi-transcendental," Derrida admits that these terms themselves are subject to the same instability they describe. This, then, is the heart of Derrida's philosophy: a self-contradiction built into its core,. He is like a blacksmith holding up a sword, "It is so sharp that it cut itself in half!".

      Perhaps Wittgenstein has said it simply, without fuss:

      My propositions serve as elucidations in the following way: anyone who understands me eventually recognizes them as nonsensical, when he has used them—as steps—to climb beyond them. (He must, so to speak, throw away the ladder after he has climbed up it.) He must transcend these propositions, and then he will see the world aright.

    17. Where Hegel temporalized the krinein of Critical Philosophy across the back of the eternal, conceiving the recuperative role of the transcendental as a historical convergence upon his very own philosophy, Derrida temporalizes the krinein within the aporetic viscera of this very moment now, overturning the recuperative role of the transcendental, reinterpreting it as interminable deflection, deferral, divergence–and so denying his thought any self-consistent recourse to the transcendental.

      Hegel saw the "krinein" as unfolding over time, ultimately leading to a complete and totalizing understanding. In contrast, Derrida views the "krinein" as an ongoing process, always happening within the present moment, never reaching a final resolution.

      Derrida opposed "deconstruction" to "criticism" which, like its Greek root krinein, refers to the separating and distinguishing of meanings, while deconstruction is always open to the possibility that a text is ironic or has no intention-to-signify whatsoever.

    18. Derrida actually develops what you could call a ‘logic of context’ using trace and differance as primary operators.

      It seems he was trying to do something like mathematical logic, where the basic operators are "trace" and "differance" and the basic objects are "signs". Meaning tumbles out as a statistical property emerging from repeated applications of trace and differance on billions of signs.

    19. What is the methodological justification for speaking of the trace as a condition for not only language and experience but also processes that extend beyond the human and even the living?”

      So, Derrida probably treated "trace" as a literary critique thing. I write a symbol, and then you write a symbol, and so on, leaving behind traces of unstable meanings, tumbling like amoebas through time. So far, a bit crazy, but not too crazy.

      But then Hagglund went in and generalized it to the entire universe. A fossil is a trace. The sun is a trace. This plastic bag is a trace. The microwave background radiation is a trace... Just what is a trace after all? And why are we projecting our little literary criticism to the entire world, the entire physical cosmos? Did Hagglund take "There's nothing outside the text" so seriously as to turn physics into a branch of literary criticism?

    20. Identity has to come from somewhere. And this is where Derrida, according to Hagglund, becomes a revolutionary part of the philosophical solution. “For philosophical reason to advocate endless divisibility,” he writes, “is tantamount to an irresponsible empiricism that cannot account for how identity is possible” (25). This, Hagglund contends, is Derrida’s rationale for positing the trace. The nowhere of the trace becomes the ‘from somewhere’ of identity, the source of ‘originary synthesis.’

      Well, if Derrida has rejected all forms of identity, and time just keeps happening, then why do we feel the same?

      Solution: smuggle the identity back in by the magic of "trace".

      A "trace" is like pawprints in the sand. They create the illusion (or real, but who knows what the philosophers mean, really?) of identity through time. The pawprints here point to some foxes a while ago, creating an identity of fox through time.

      Similarly, the memories in my head create an identity of myself through time.

    21. The pivotal question is what conclusion to draw from the antinomy between divisible time and indivisible presence. Faced with the relentless division of temporality, one must subsume time under a nontemporal presence in order to secure the philosophical logic of identity. The challenge of Derrida’s thinking stems from his refusal of this move. Deconstruction insists on a primordial division and thereby enables us to think the radical irreducibility of time as constitutive of any identity. Radical Atheism, 16-17

      The most important question about time is this: Eadem mutata resurgo ("Though I changed, I arise the same.").

      We can't say that time is both a Timeline of different points, and that time is a single timeless "Now". We have to choose.

      Christians, Heidegger, Kant, and some other philosophers have gone with the timeless "Now" as the underlying reality and then tried to explain the illusion of the Timeline. Derrida went with the Timeline as the underlying reality and then tried to explain the illusion of the timeless Now.

    22. The primary problem, as Aristotle sees it, is the difficulty of determining whether the now, which divides the past from the future, is always one and the same or distinct, for the now always seems to somehow be the same now, even as it is unquestionably a different now.

      How many "Now"s are there? One? Then how could it be that this "now" and the "now" two days ago look so different? Two? Then how is it that I'm always in Now(1), but not Now(2)?

    23. it is significant, I think, that he begins with a reading of “Ousia and Gramme,” which is to say, a reading of Derrida’s reading of Heidegger’s reading of Hegel!

      You are reading me reading Bakker reading Hagglund reading Derrida reading Heidegger reading Hegel.

      Are we done with this game of reading yet?

    24. The desire for survival cannot aim at transcending time, since the given time is the only chance for survival. There is thus an internal contradiction in the so-called desire for immortality.

      "You want to live, right? Of course you want. You want to live as yourself, because what else can you be if not yourself? Thus, you want to live as yourself. What are you? You are a human. You are a human in time. You are not a creature of eternity. Therefore, you cannot want to live forever. QED".

    1. rmsb: Bayesian Regression Modeling Strategies Package, Focusing on Semiparametric Univariate and Longitudinal Models

      Maybe subdivide 6.3 into 6.3.1 rms, 6.3.2 rmsb, and perhaps 6.3.3 for the global options, etc., or maybe fold the global options into 6.3.1?

      Main point: the TOC for section 6 should show a subsection for rmsb explicitly!

    1. Before classifying Pod Systems, you first need to determine what this product line is and whether it is suitable for you. The Pod System is essentially a modern line of e-cigarettes, especially polished in design and styling. Pod Systems suit vape e-liquids with a high nicotine concentration (20 mg and up). Pod Systems fall into 2 main types: Pod Kit. Pod kits are designed to be particularly compact yet sit firmly in the hand. This product is popular with many vaping enthusiasts, not only young people but also the middle-aged.

      The Pod System is a compactly designed line of e-cigarettes with the e-liquid pre-integrated into the pod head. Website: https://vapepod365.net/pod-kit Phone: 0704810810 Address: 468/13 Đường Trần Hưng Đạo, P2, Quận 5, Tp Hồ Chí Minh

      podsystem #podkit #podsystemgiare #maypod #podchamtinhdau

    1. It is very good that the Ministerio de Cultura y Patrimonio conducted this survey in 2021 to better understand reading habits and cultural consumption in Ecuador. It was something necessary, since Ecuador was one of the few countries without consolidated data in this area. This information is crucial for creating effective public policies that promote and protect the population's cultural rights. Despite the pandemic, they managed to carry out the study with the help of several institutions, which is admirable. Now culture, academia, and the citizenry all have access to these valuable data.

    2. This link tells us about a survey on reading habits and cultural practices and consumption. It says Ecuador was one of the few countries without a household survey of this kind; without it, the population's habits, consumption, and statistics could not be determined. Then INEC and other institutions, despite some difficult circumstances, did manage to collect results from Ecuadorian households across different topics, which were a great help both for understanding the society's culture and for the Ministerio de Cultura y Patrimonio.

    3. The survey provides important data for understanding Ecuadorians' cultural interests and behaviors, which helps in formulating policies on culture, reading, and other cultural activities more effectively. It is a key tool for national cultural development.

    4. I find the survey very interesting because conducting it not only provides knowledge about reading habits but also about cultural consumption, and thanks to the results obtained we can learn how to enrich not only the reading habit but also cultural identity.

    1. Grounded

      If you are in one of my courses, please toggle from Public to your course group before making annotations. Do not make annotations on this page for one of our courses!

    1. ISTE

      If you are in one of my courses, please toggle from Public to your course group before making annotations. Do not make annotations on this page for one of our courses!

    1. That Socrates is a doer of evil, and corrupter of the youth, and he does not believe in the gods of the state, and has other new divinities of his own. That is the sort of charge; and now let us examine the particular counts. He says that I am a doer of evil, who corrupt the youth; but I say, O men of Athens, that Meletus is a doer of evil,

      This seems like, once again, he is stating the injustice that he feels. He feels as if he has already been judged, almost as if there is a sense of a predetermined fate for him.

    2. and I further observed that upon the strength of their poetry they believed themselves to be the wisest of men in other things in which they were not wise.

      What does he mean when he says this? Is this another tribute to going against the grain and path of society? Showing defiance towards their presumed roles?

    3. for there must have been something strange which you have been doing? All this great fame and talk about you would never have arisen if you had been like other men: tell us, then, why this is, as we should be sorry to judge hastily of you."

      This reminds me of many history or philosophy lessons that go over how people are deemed weird or outcasts, time and time again, for going against the grain of society or against its norms.

    4. Well, then, I will make my defence, and I will endeavor in the short time which is allowed to do away with this evil opinion of me which you have held for such a long time; and I hope I may succeed, if this be well for you and me, and that my words may find favor with you. But I know that to accomplish this is not easy - I quite see the nature of the task. Let the event be as God wills: in obedience to the law I make my defence.

      I feel as if he is saying there was prejudice or bias against him, because he states they have held an ongoing evil opinion of him. He understands that if there is a predisposition of bias or negativity, it is hard to overcome.

    1. is relatively a new area of investigation when we consider it within the historical context of literary works produced for the oral, written, and print mediums. But if we think of it within the framework of literature expressed in yet another medium, the digital, then it can easily be regarded as the continuation of a very long tradition, one that is exploring the affordances and constraints of this new medium much like we saw the written visual/concrete poetry in the 2nd and 3rd centuries in Alexandria and printed novels like Laurence Stern’s Tristram Shandy in 18th century did in theirs.

      I like their point of view of electronic literature as a historical evolution, and the examples they give, moving from handwritten, visual, and printed forms to a new, computer-based way of creating literature.

    1. In addition to description, your deliberate choices in narration can create impactful, beautiful, and entertaining stories.

      I like the idea of accidentally writing something.

    1. Creating building systems in the present sense is not enough. We need a new, more subtle kind of building system
    2. Most designers today think of themselves as the designers of objects. If we follow the argument presented here, we reach a very different conclusion. To make objects with complex holistic properties, it is necessary to invent generating systems which will generate objects with the required holistic properties.
    3. processes which then maintain the system’s equilibrium
    4. Alexander doesn’t rule out spontaneous order, but sees that as a rare event.  For a system as a whole to have the properties desired, the builders will most probably have to have a generating system to create the system as a whole.
    5. A generating system, in this sense, may have a very simple kit of parts, and very  simple rules.
    6. The formal systems of mathematics are systems in this sense. The parts are numbers, variables, and signs like + and =. The rules specify ways of combining three parts to form expressions, and ways of forming expressions from other expressions, and ways of forming true sentences from expressions, and ways of forming true sentences from other true sentences. The combinations of parts, generated by such a system, are the true sentences, hence theorems, of mathematics. Any combination of parts which is not formed according to the rules is either meaningless or false
    7. We must not use the word system, then, to refer to an object. A system is an abstraction. It is not a special kind of thing, but a special way of looking at a thing.
    8. In order to speak of something as a system, we must be able to state clearly: (1) the holistic behaviour which we are focusing on; (2) the parts within the thing, and the interactions among these parts, which cause the holistic behaviour we have defined; (3) the way in which this interaction, among these parts, causes the holistic behaviour defined. If we can do these three, it means we have an abstract working model of the holistic behaviour in the thing. In this case, we may properly call the thing a system, If we cannot do these three, we have no model, and it is meaningless to call the thing a system.
    9. Stability, no matter in which of its many forms, is a holistic property. It can only be understood as a product of interaction among parts.
    10. The most important properties which anything can have are those properties that deal with its stability.
    11. holistic behaviour is that instability which occurs in objects that are very vulnerable to a change in one part: when one part changes,
    12. The pattern form excels at engaging the reader in generative solutions: to understand the principles and values of lasting solutions and long-term emergent behavior. Good patterns go beyond the quick fix.
    13. we need to address most interesting problems with emergent behavior.
    14. … a fundamental characteristic of complex human systems … [is that] cause and effect are not close in time and space. By effects, I mean the obvious symptoms that indicate that there are problems: drug abuse, unemployment, starving children, falling orders, and sagging profits. By cause I mean the interaction of the underlying system that is most responsible for generating the symptoms, and which, if recognized, could lead to changes producing lasting improvement. Why is this a problem? Because most of us assume they are; most of us assume, most of the time, that cause and effect are close in time and space.
    15. What, exactly, does it mean to say that structures generate particular patterns of behavior?
    16. Lao Tsu principles of nonaction
    17. Thus, as in the case of natural languages, the pattern language is generative. It not only tells us the rules of arrangement, but shows us how to construct arrangements as many as we want which satisfy the rules.
    18. a means of letting the problem resolve itself over time, just as a flower unfolds from its seed
    19. The structures of a pattern are not themselves solutions, but they generate solutions.

      "Factory pattern"

    20. we often attack only symptoms, leaving the underlying problem unresolved.
    21. Generative patterns work indirectly; they work on the underlying structure of a problem (which may not be manifest in the problem) rather than attacking the problem directly. Good design patterns are like that: they encode the deep structure (in the Senge sense) of a solution and its associated forces, rather than cataloging a solution.
    22. emergence is a property of a whole that is not a property in its parts.
    23. how such a system is born, itself, of a generative system, establishing the duality between the object as a computing agent and the method as a computational process.
    24. design forms through the iterative readings and responses to interrelational conditions, with the intention of producing environments synchronous with their cultural settings.
    25. computation of such interrelational, complex behaviour-based systems
    26. The system behaviour emerges only in the dynamics of the interactions of the parts. This is not a cumulative linear effect but rather a cyclical causal effect
    27. states
    28. an overall design problem cannot be divided into sub-problems, and consequently, that it is impossible to arrive at a novel design solution as a summary process of solving individual problems one after the other
    29. complex systems of interactions and reciprocities
      • complex systems
      • interactions and reciprocities
    30. critique on classic physics and its deductive methods and focus on isolated phenomena. Bertalanffy considered such methods as unsuitable for biology
    31. Ultrastability places two sets of environmental and reactive variables in a primary feedback loop. A slower, second feedback affects the reactive variables by acting on the step-mechanisms and setting parameters for the environmental variables.

      Also see "Above the line, Below the line" above.

    32. In order to develop a model for stability in design problems, Alexander looked to cybernetics for models of homeostasis and ultrastability. Such systems could stabilize themselves regardless of what disturbed them, including variables that weren’t considered when the system was designed.
    33. Alexander would also step away from the notion of a semantic network and more toward the pursuit of the geometrics of order.
    34. the language provides the framework for using the patterns as a program to create form.  But he aims for semantics, allegory, and poetics, as well as the aspects of language that generate feelings, emotions, a sense of order — all of which extend beyond the structural, topological and syntactic aspects of his program.
    35. meanings and their evocations

      observer-participant actants

    36. networks

      graphs

    37. the sophistication of a semantic network

      lit

    38. “Next, several acts of building, each one done to repair and magnify the product of the previous acts, will slowly generate a larger and more complex whole than any single act can generate,”
    39. accretion
    40. unfolding
    41. systems may come to necessitate their own propagation, he suggests, when we use them.
    42. In an interview with his biographer, Alexander noted, “We give names to things but we don’t give many names to relationships.”
    43. a rule set

      Everything: law, natural laws, logical calculi, cellular automata, and beyond.

    44. pattern languages contain an inherent rule set that determines their logic

      As do logical calculi.

    45. context

      situatedness

    46. genetics

      Everywhere this term appears below, please also consider the lack of language to describe genesis as a whole:

    47. Almost every ‘system as a whole’ is generated by a ‘generating system’. If we wish to make things which function as ‘wholes’ we shall have to invent generating systems to create them.

      Alexander 2011, p. 59; Alexander 1968, p. 605
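
      As a hedged aside on this last point (my own sketch, not Alexander's): the claim that a simple kit of parts plus simple rules can generate a whole with properties none of the parts possess is concrete enough to show in a few lines of toy code. The parts, rules, and names below are invented for illustration.

```python
# A minimal sketch of a "generating system": a tiny kit of parts and a few
# rewrite rules which, applied repeatedly, unfold a seed into a larger whole.
# Everything here is invented for illustration.

RULES = {
    "house": ["room", "house"],  # a house unfolds into a room plus more house
    "room": ["room"],            # a room stays a room
}

def generate(seed: str, depth: int) -> list[str]:
    """Unfold the seed by applying the rewrite rules `depth` times."""
    parts = [seed]
    for _ in range(depth):
        parts = [p for part in parts for p in RULES.get(part, [part])]
    return parts

if __name__ == "__main__":
    # The "whole" (a house of several rooms) is not present in any single
    # part or rule; it emerges from their repeated application.
    print(generate("house", depth=4))
    # -> ['room', 'room', 'room', 'room', 'house']
```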

    1. As from within Pandora's box

      *9.) Lines 85-95: Greek mythology. Epimetheus, acting against advice, opened the box Jove had given his wife Pandora. Context: Strephon has discovered something in Celia's chest that the author describes as opening Pandora's box.

    2. Strephon

      1.) Strephon and Celia are "names associated with pastoral poetry," a genre that explores the connections between human life and nature. Context: The author depicts Celia as a feminine woman taking "5 hours" to get ready, as a way of describing the human nature of a feminine heterosexual woman.

    1. Algorithms force us to be explicit about what we want to achieve with decision-making. And it’s far more difficult to paper over our poorly specified or true intentions when we have to state these objectives formally.

      This is so often overlooked. Compared to the past, when society-impacting decision-making discussions were often held behind closed doors and far less explicitly, a society leveraging algorithms allows for clear-cut and open discussions.
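
      A small, hypothetical illustration of "stating objectives formally" (my own sketch, not from the quoted text): once a decision rule is written down as code, its objective and its trade-off weights are explicit and open to inspection and debate. The features, weights, and threshold below are invented.

```python
# Hypothetical screening rule: writing the objective down as code forces the
# trade-offs (income vs. debt vs. repayment history) into the open, where
# they can be inspected, questioned, and revised. All numbers are made up.

WEIGHTS = {"income": 0.5, "debt": -0.3, "on_time_repayments": 0.2}
THRESHOLD = 0.5

def decision_score(applicant: dict[str, float]) -> float:
    """Explicit, inspectable objective: a weighted sum of named features."""
    return sum(WEIGHTS[name] * applicant.get(name, 0.0) for name in WEIGHTS)

def approve(applicant: dict[str, float]) -> bool:
    return decision_score(applicant) >= THRESHOLD

if __name__ == "__main__":
    applicant = {"income": 0.9, "debt": 0.2, "on_time_repayments": 0.8}
    print(approve(applicant))  # True
```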

    2. evidence-based decision-making is only as reliable as the evidence on which it is based, and high quality examples are critically important to machine learning

      We see this in the battle between LLMs today.

    1. (a) Each person receiving services in a facility providing mental health services under this part has the right to communicate freely and privately with persons outside the facility unless a qualified professional determines that such communication is likely to be harmful to the person or others in a manner directly related to the person’s clinical well-being, the clinical well-being of other patients, or the general safety of staff. Each facility shall make available as soon as reasonably possible to persons receiving services a telephone that allows for free local calls and access to a long-distance service. A facility is not required to pay the costs of a patient’s long-distance calls. The telephone shall be readily accessible to the patient and shall be placed so that the patient may use it to communicate privately and confidentially. The facility may establish reasonable rules for the use of this telephone, provided that the rules do not interfere with a patient’s access to a telephone to report abuse pursuant to paragraph (f).

      CFR #OOPZ

      In any case, it seems like the beginning of a class-action lawsuit against kipu.com clients that really aren't listening to the laws of the state ... and even have the nerve to post this exact patients' rights list without the highlighted details.

      Systematically constricting "communication with outside persons" en masse, as a blanket policy across all patients and at all times.

  2. drive.google.com
    1. Ensino Híbrido (Hybrid Teaching)

      The hybrid model, also known as "blended learning", combines in-person and online activities, drawing on the benefits of both formats to create a richer, more flexible learning experience. This model has become more popular because of its ability to adapt to students' individual needs and to the demands of an increasingly digital world.

      What are the characteristics of the hybrid model? 1) Flexibility, since it allows students to choose when and where to learn, combining in-person sessions with online activities. This is particularly useful for students who need flexible schedules or who have other responsibilities, such as work or the attention and time devoted to family.

      2) Personalization, since hybrid learning makes it possible to tailor content to each student's pace and learning preferences. E-learning tools can adapt the material to individual needs, offering a more student-centered approach.

      3) Interaction and collaboration, since even in the online format the hybrid model promotes interaction and collaboration among students. Tools such as discussion forums, chats, and video conferences allow students to work together on projects and exchange ideas.

      4) Access to diverse resources, as students have access to a wide range of online educational resources, from videos and podcasts to scientific articles and e-books. This enriches the learning experience and provides multiple sources of information.

      How to implement the hybrid model?

      Implementing the hybrid model effectively requires careful planning and the choice of appropriate tools: for example, planning which parts or sections of the course content should be worked on online and which should be addressed in person; selecting the platforms and tools that will be useful for online learning; training teachers to use hybrid teaching technologies and methodologies effectively; and implementing assessment methods that integrate both online and in-person activities. Note that continuous feedback is essential to adjust the teaching-learning process and to ensure that the educational objectives are met.

      References used: https://blogs.worldbank.org/es/education/que-es-el-aprendizaje-hibrido-como-pueden-los-paises-implementarlo-de-manera-efectiva https://ciberespiral.org/es/modelos-hibridos-para-promover-el-aprendizaje/ https://ec.europa.eu/education/education-in-the-eu/digital-education-action-plan_en https://eadbox.com/o-que-e-ensino-hibrido/ https://tutormundi.com/blog/ensino-hibrido/ https://observatoriodeeducacao.institutounibanco.org.br/em-debate/ensino-hibrido https://www.arvore.com.br/blog/aprendizagem-hibrida https://revistaft.com.br/o-ensino-hibrido-sob-a-otica-do-professor-desafios-estrategias-e-reflexoes/ https://blog.eadplataforma.com/educacao/modelos-de-ensino-hibrido/ https://www.christenseninstitute.org/publication/ensino-hibrido/ https://iave.pt/wp-content/uploads/2022/08/1_ensaio.pdf https://desafiosdaeducacao.com.br/ensino-hibrido-guia/

    1. Video summary [00:00:13] - [00:14:39]:

      John Hattie presents his research on visible learning, analyzing thousands of studies to identify the methods that improve or harm student achievement. He uses a common metric to compare the effectiveness of various educational strategies, stressing the importance of focusing on approaches with a significant impact rather than on minor structural changes.

      Highlights: + [00:00:13] Introduction to visible learning * Analysis of the impact of educational methods * Use of a common metric for evaluation * Identification of the most effective methods + [00:01:01] Effect of reducing class size * Comparison with other educational strategies * Relatively small impact on achievement * Importance of teaching quality + [00:04:01] Negative influences on achievement * Presence of disruptive students * Negative effects of grade retention * Debate over topics with little influence in education + [00:10:11] Factors with a positive impact * Importance of clear learning intentions * Obvious success criteria * Peer interaction and appropriate challenge
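
      Hattie's "common metric" is the standardized effect size. As a hedged aside (my own illustration, not from the video), Cohen's d is one standard way to put very different interventions on a single scale; the scores below are made up.

```python
import statistics

# Illustration of a "common metric": Cohen's d expresses any intervention as
# the difference in group means divided by the pooled standard deviation.
# The test scores below are invented for the example.

def cohens_d(treated: list[float], control: list[float]) -> float:
    n1, n2 = len(treated), len(control)
    s1, s2 = statistics.stdev(treated), statistics.stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

if __name__ == "__main__":
    smaller_classes = [72.0, 75.0, 71.0, 74.0]
    regular_classes = [70.0, 73.0, 69.0, 72.0]
    print(round(cohens_d(smaller_classes, regular_classes), 2))  # ~1.1
```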

    1. Video summary [00:00:10] - [00:23:39]:

      This video presents a discussion with Professor John Hattie on the power of feedback in education. He explores the variable nature of feedback, its importance for improving student and teacher performance, and how feedback can be optimized in classrooms and beyond.

      Highlights: + [00:01:47] Defining feedback * Information for improving the work * Must lead to concrete improvement * Can come from various sources + [00:03:40] Balancing positive feedback * Avoid letting praise dilute the feedback * Focus on improvement rather than praise * Importance of how the student receives the feedback + [00:06:13] Feedback and how students receive it * Students often hear what they want to hear * Importance of understanding and acting on feedback * Feedback must be clear and action-oriented + [00:07:02] Feedback for teachers * Feedback from students to teachers is crucial * Teachers must be open to feedback * Feedback helps teachers improve + [00:10:26] Curiosity and learning * Encourage students' curiosity and questions * Importance of failure and struggle in learning * Feedback thrives on errors and challenges + [00:16:02] Online feedback and use of social media * Social media can encourage student questions * Students are more willing to discuss their gaps online * Use technology to improve feedback and engagement

    1. Video summary [00:00:13] - [00:14:39]:

      This video presents the first part of John Hattie's visible learning, focusing on the disasters and below-average methods in education. Hattie discusses the importance of comparing the effects of different teaching methods on student achievement, using a common metric to identify the most effective strategies.

      Highlights: + [00:00:13] The importance of comparative evaluation * Highlighting the methods that improve or harm achievement * Using a common metric to evaluate diverse studies * Example of the effect of reducing class size + [00:03:49] Below-average methods * Grade retention has a negative effect on achievement * The teacher's subject knowledge has little impact * Debates about education often focus on less important aspects + [00:09:09] Homework and its variable effectiveness * Homework in primary school has little effect on achievement * Homework in secondary school, and short assignments, are more effective * Study and learning skills must be taught at school + [00:10:38] Factors with a positive impact * Peer influences and challenge are beneficial * The importance of clear learning intentions and success criteria * Comparison with Outward Bound programs as a model of effective teaching

    1. Video summary [00:00:13] - [00:25:59]:

      This video presents the sequel to visible learning by Professor John Hattie. He discusses updates to the meta-analysis since 2009, criticisms of his work, and the implications for teaching and learning. Hattie emphasizes the importance of teacher expertise and of evaluating teachers' impact on student learning.

      Highlights: + [00:01:10] The sequel to visible learning * Hattie resists writing a second edition but is working on a sequel * He wants to focus on what works best, not just on what works + [00:02:01] Updating the meta-analysis * From 800 meta-analyses in 2009 to 1,700 now, covering roughly a quarter of a billion students * The fundamental distribution has not changed; most interventions improve student outcomes + [00:07:01] Criticisms and misunderstandings * Hattie addresses the many criticisms of his work, including misreadings of the league table * He encourages reinterpretation of the data and stresses the importance of interpretation in research + [00:16:01] The importance of teaching impact * Hattie focuses on the impact of teaching on learning rather than on the teaching method * He discusses the importance of student confidence and how teacher expectations influence student outcomes + [00:19:07] The seven big messages * Teachers work together to evaluate their impact and hold high expectations for all students * Academic success is strongly influenced by teachers' expertise and their ability to implement effective practices + [00:22:18] Teacher expertise during the pandemic * Hattie highlights how teachers led the teaching revolution during the COVID-19 pandemic * He emphasizes the need to recognize and value teacher expertise in improving education

      Video summary [00:25:59] - [00:41:02]:

      This part of the video presents Professor John Hattie's reflections on visible learning and the importance of evaluative thinking among teachers. He discusses the challenges and opportunities that emerged during the COVID-19 pandemic, highlighting teachers' resilience and expertise in minimizing disruption. Hattie also explores the "mind frames" that underlie teachers' impact and the importance of adaptation and constructive criticism in schools.

      Highlights: + [00:25:59] Impact of the pandemic on education * New Zealand study on reading, mathematics, and writing performance * Discussion of "learning loss" and equity issues * Importance of teacher expertise in the face of disruption + [00:27:18] Teachers' mind frames * Exploring the thinking that underlies teachers' impact * The importance of teachers evaluating their own impact * Evaluative thinking as a key skill + [00:29:01] Adaptations and biases in teaching * Teachers' adaptations of programs and their effects * Recognizing and managing personal biases * Understanding the true impact of teaching + [00:31:04] Research on classroom learning * Development of tools and methods for observing and analyzing classrooms * Use of technology to understand student behavior * Importance of learning strategies and their application in class + [00:35:42] Learning through students' eyes * The importance of teacher clarity for student understanding * The impact of understanding the lesson's goals on student confusion * Teacher development focused on the student's perspective + [00:37:11] Analysis of learning strategies * Study of different learning strategies and their effectiveness * Modeling learning according to students' skills and motivations * Identifying the best strategies depending on the stage of learning

    1. Spinoza accuses naive Christians of making in his letters: we conceive of the condition in terms belonging to the conditioned

      Spinoza's God is an abstract God, like Euclidean geometry. He was against naive Christians who thought of God in human terms. No, God has no face, no brain, no arms, no eyes, no thought, no intention, nothing whatsoever.

      God created humans. Humans have those. God is our condition. We are not God's condition. Without us, God still is.

    2. tropical luxuriance

      one must also above all give the finishing stroke to that other and more portentous atomism which Christianity has taught best and longest, the SOUL-ATOMISM. Let it be permitted to designate by this expression the belief which regards the soul as something indestructible, eternal, indivisible, as a monad, as an atomon: this belief ought to be expelled from science! Between ourselves, it is not at all necessary to get rid of "the soul" thereby, and thus renounce one of the oldest and most venerated hypotheses—as happens frequently to the clumsiness of naturalists, who can hardly touch on the soul without immediately losing it. But the way is open for new acceptations and refinements of the soul-hypothesis; and such conceptions as "mortal soul," and "soul of subjective multiplicity," and "soul as social structure of the instincts and passions," want henceforth to have legitimate rights in science. In that the NEW psychologist is about to put an end to the superstitions which have hitherto flourished with almost tropical luxuriance around the idea of the soul, he is really, as it were, thrusting himself into a new desert and a new distrust—it is possible that the older psychologists had a merrier and more comfortable time of it; eventually, however, he finds that precisely thereby he is also condemned to INVENT—and, who knows? perhaps to DISCOVER the new.

      Beyond Good and Evil, I.12

    1. More than 95% of people could be using a computer from 2008 or before without any problems. Needing a recent machine is limited to people who: Do extreme, professional, processor-intensive video-rendering. Compile massive programs and operating systems with severe time constraints. Play recent triple AAA video-games on high settings. Use many massive Electron apps and other inexcusably bad software written by soydevs and other people who shouldn't be writing software.

      Next, I need to find out how to fit this sentiment on a bumper sticker.

    1. "Situated Learning: Legitimate Peripheral Participation" by Jean Lave and Etienne Wenger: This book introduces the concept of situated learning, which emphasizes learning as social participation. The ideas presented have fueled the development of collaborative and immersive learning environments, including simulations and virtual reality, for kinesthetic and procedural learning.

      Speaks to seminal work in the area

    1. Video summary [00:00:00] - [00:24:19]:

      This video explores the importance of psychosocial skills among teachers, often neglected despite their crucial role in students' development. Véronique Gaspard, co-founder of Déclic CNV, discusses the need to integrate these skills into educator training in order to improve well-being and communication within the school.

      Highlights: + [00:00:15] The importance of psychosocial skills * Teachers recognize their value * Lack of structured interventions * Need to embody these skills in order to transmit them + [00:01:00] Long-term vision * Integrating psychosocial skills into the common core of knowledge * Balance between foundational knowledge and interpersonal skills * Dream of an education centered on living together + [00:02:00] Personal and professional development * Importance of empathy and communication * Distinction between personal development and self-care * Impact of psychosocial skills on relationships + [00:03:00] Urgency of caring for educators * Recognition of teachers' suffering * Need for support and appropriate training * Link between teacher well-being and teaching quality + [00:10:00] Nonviolent communication (CNV) * Clarifying the concept and its application * Importance of empathetic listening and emotional responsibility * Continuing education as key to spreading psychosocial skills + [00:20:00] Educational policy and societal choices * Reflection on current educational structures * The potential of CNV to transform education * Vision of a society where empathy is naturally preserved

      Video summary [00:24:21] - [00:45:00]: The video addresses the importance of psychosocial skills among teachers and how they are often underestimated. It highlights the crisis of authority in education and proposes nonviolent communication (CNV) as a tool to improve the relationship between teachers and students. CNV helps people recognize and express emotions and needs, thereby fostering cooperation and collaboration instead of competition.

      Highlights: + [00:24:21] Crisis of authority in education * Authority is earned, not imposed * Importance of the adult example for drawing students in + [00:25:01] Nonviolent communication (CNV) * Educating children to recognize right and wrong * Using CNV to establish non-pyramidal structures + [00:26:01] Education with internal reference points * Understanding and naming emotions * Linking emotions to needs for better cooperation + [00:27:19] The challenge of relational ecology * Teaching children to take care of themselves and others * Importance of solidarity among teachers + [00:31:07] Training teachers in CNV * Integrating emotions into teaching * Differentiating weakness from vulnerability + [00:37:01] Sharing experiences and vulnerability * Inspiring others by sharing emotions and experiences * Encouraging the expression of vulnerability as a strength

    1. Video summary [00:00:00] - [00:23:14]:

      This video explores strategies for regulating disorder in the classroom and which approaches to choose in order to do so. It discusses students' inappropriate behaviors and how teachers can make different devices their own to create an orderly learning environment.

      Highlights: + [00:00:15] Introduction to classroom disorder * Definition of disorder as inappropriate behavior * Focus on ordinary disorder such as mockery and chatter * Importance of teachers' perspective on disorder + [00:03:30] Typology of disorder * Distinction between school-specific and societal disorder * Link with the school form and expectations of civility * Variety of disorder situations teachers encounter + [00:10:04] Characteristics of deferred regulation devices * Based on problems experienced by students and teachers * Non-immediate regulation to encourage student reflexivity * Involving students in the search for solutions + [00:19:22] Teachers' appropriation of the devices * Common difficulties in using the devices * Influence of contemporary educational norms on practice * Importance of hybridizing practices for effective classroom management

      Video summary [00:23:17] - [00:46:05]:

      The video addresses the regulation of classroom disorder and the choice of pedagogical devices. It discusses the importance of anticipation, the teacher's stance, and the need for a didactics of psychosocial skills.

      Highlights: + [00:23:17] Values and identity in education * The importance of values and identity in pedagogical choices * Identifying with a school of thought can be a resource or a constraint + [00:25:01] Educational norms and their impact * Educational norms influence both school and family * Guilt linked to intervening, or not, in situations of disorder + [00:27:16] Symmetry in teaching academic knowledge and behavior * The difference in treatment between academic subjects and behavioral knowledge * The difficulty of establishing asymmetry in the teaching of behavior + [00:31:07] The need for a didactics of psychosocial skills * The lack of training and tools for teachers in this area * The importance of developing pedagogical resources to anticipate disorder + [00:36:02] The inquiry approach for making devices appropriable * The inquiry approach as a way to understand and resolve disorder * The importance of collaboration between teachers and students in this approach + [00:43:55] Conceptualizing situations of disorder * The need to conceptualize disorder situations for better management * The importance of the teacher's role in transmitting content and references

    1. Video summary [00:00:00] - [00:27:34]: The video addresses the question of the effectiveness of educational programs in France and their impact on teaching quality. The guest, Frédéric Marie, discusses the need for adequate training so that teachers can navigate between the prescriptive demands of the programs and pedagogical freedom. He stresses the importance of rethinking teacher training and proposes a cooperative model that integrates research and practice.

      Highlights: + [00:00:00] The function of programs * Different functions depending on the country * Importance of moving from the problems of programs to a program of problems + [00:01:06] Teacher training and support * Need for training to understand and use the programs * The importance of teachers making the programs their own + [00:04:18] Definition and goals of programs * Develop every child's potential * Contribute to a democratic society + [00:10:00] Historical evolution of programs * Influence of philosophy and revolutions on education * Gradual changes and major reforms over time + [00:18:31] International comparison of education systems * Differences between the education systems of France, Germany, and Finland * Finland as an example of an inclusive, fulfilling system + [00:26:01] Challenges and conditions of teaching * Teachers facing dense programs and concrete material conditions * The impact of teaching conditions on student success

      Video summary 00:27:36 - 00:52:32: This part of the video discusses the effectiveness of teaching programs and how they can be adapted to improve learning. The emphasis is on organizing time and space in the classroom to encourage a more inquiry-based, less linear approach to teaching. The idea is to spark students' curiosity and engage them in concrete problems, using unexpected events as starting points for learning. The video also explores the challenges teachers face when trying to follow the programs while meeting students' individual needs.

      Key points: + [00:27:36] The importance of organizing study time * School as a place that articulates personal and institutional time * Students follow a method structured by the program and the textbooks * A succession of objects of knowledge organized by the program and the timetable + [00:29:15] Transition toward inquiry time * Moving from a linear approach to an inquiry-based one * Tackling concrete problems for deeper learning * Sparking students' curiosity and interest to foster good habits of thinking + [00:37:02] The role of cooperation in teaching * Importance of cooperative work among teachers, researchers, and trainers * Moving beyond the linear approach of the programs to foster interdisciplinarity * A profound change in initial and continuing teacher training + [00:44:03] Rethinking the school form * Rebuilding study time and rethinking the place of actors in the community * Reflection on the philosophy of education and the valuing of pedagogy * Importance of rigor and of putting knowledge in context to meet the needs of today's world

  3. www.deutschdidaktik.phil.fau.de
    1. So the task, over the course of the school years, is to preserve the childlike intensity of forming mental images and to lead it toward increasing differentiation, flexibility, and text-oriented precision.

      This is supposed to happen, as stated at the beginning of the column, through "creative-productive ways of working with texts." Is that to be equated with the procedures of the action- and production-oriented method (which are mentioned toward the end of the text on this topic)?

    2. Students are supposed to learn to deal with this openness of literary texts. That is not always easy for them, because they want fixed results, largely as a consequence of their socialization at school.

      In my experience it is rather the teachers who want a fixed result. Students would be glad if other proposed solutions were sometimes put forward and accepted.

    3. verifiable

      Why not verifiable? One could assess the students' mental images by having them draw pictures of them.

    1. https://www.youtube.com/watch?v=oGE6bJVMYew

      Video summary [00:00:04] - [00:22:43]: The video presents My Little Paris's MegaLab, a creative event where the team and guests share passions, discoveries, and stories. The video begins with a detailed exploration of a painting from the Louvre, inviting viewers to travel through the image and discover a hidden world behind the portrait of Chancellor Rolin. It continues with a speech by Anne-Fleur, CEO and co-founder of My Little Paris, who explains the importance of creativity and trust in the company's success. The event celebrates My Little Paris's 15th anniversary and underlines the company's commitment to regenerative ecology.

      Highlights: + [00:00:17] Exploring a painting * A journey into the image of a Louvre painting * Discovery of a hidden world behind the portrait * Importance of observation and imagination + [00:06:59] Anne-Fleur's speech * Presentation of the MegaLab event * Importance of creativity at My Little Paris * Celebration of the company's 15th anniversary + [00:13:00] Ecological commitment * Participation in the Convention des Entreprises pour le Climat * Goal of a regenerative economy * Sharing personal and professional experiences + [00:19:01] Program of the evening * Presentation of the speakers and their topics * Preview of the stories and themes to be covered * Expectations and goals of the event

      Video summary [00:22:44] - [00:44:38]: The video presents a series of discussions on climate change, communication, and societal responsibilities. It highlights the possible scenarios tied to rising temperatures, the impact on the planet's habitability, and the importance of collective action to ensure human survival. Experts share their perspectives on the communication of polluting industries, the role of the media in spreading climate information, and the influence of cultural imaginaries on our perception of reality.

      Key points: + [00:22:44] Climate scenarios * Discussion of the rise in average temperature * Five possible scenarios depending on the policies put in place * Consequences for the planet's habitability + [00:26:28] Communication and polluting industries * History of polluting industries' communication * Manufacturing doubt and shifting responsibility * Influence on public perception of climate change + [00:30:25] Responsibility of the media * Measuring how the media cover climate issues * Problems of quantity, quality, and cross-cutting coverage of climate content * Importance of educating and raising public awareness + [00:32:58] Cultural imaginaries and reality * Impact of dominant imaginaries on our goals and perceptions * Interactive game for exploring different cultural imaginaries * Reflection on fiction and reality in modern society

      Video summary [00:44:40] - [01:06:58]:

      The video presents a talk on the lessons learned from launching a business in Tokyo, focusing on the importance of customer experience, a strategy of scarcity, and emotional intelligence. The speaker shares her personal and professional experiences, highlighting the cultural difference in how customers are treated, the unique approach to advertising, and the power of customer reviews in Japan.

      Highlights: + [00:44:40] The importance of expertise and authenticity * Japanese expertise in coffee and national pride * Passing on French expertise and Parisian authenticity + [00:47:50] Practical life and attention to detail * The superior customer experience in restaurants and at hairdressers * Innovation and convenience in transport, such as the Shinkansen + [00:51:02] Emotional intelligence through horseback riding * Lessons learned from a horse as an emotional-intelligence coach * The importance of nonverbal communication and understanding emotions + [01:02:59] Creativity in the professional and artistic worlds * The distinction between creatives and non-creatives in the company * Reflection on personal creativity and the experience of dance

      Video summary [01:07:01] - [01:29:31]:

      The video presents a talk on creativity and innovation at My Little Paris. It addresses the importance of creativity across all departments of the company, the three pillars of creativity (head, body, heart), and how the work environment and daily rituals can stimulate the creative mind. The speaker also shares her personal experience of moving from dance to business, stressing the importance of staying true to oneself while exploring new avenues.

      Highlights: + [01:07:01] Creativity at My Little Paris * Creativity is encouraged at every level * Importance of rituals and the work environment * The three pillars: the head (curiosity and knowledge), the body (movement and rest), the heart (passion and emotion) + [01:10:06] The importance of a creative environment * A rich environment stimulates creativity * The blank page is a myth; creativity comes from what already exists * Ideas evolve out of connections and general culture + [01:13:00] Transition from dance to business * The importance of taking a step sideways while staying true to oneself * Using one's unique skills and sensibilities in new contexts * Creativity as a way out of one's comfort zone + [01:16:01] The role of the facilitator and training a collective * Advice for training and uniting a group * The importance of a positive attitude and leading by example * The benefits of collective emulation and trying out different roles

      Video summary [01:29:33] - [01:41:24]:

      This part of the video explores the lives of twins Louise and Jeanne, their unique relationship, and the challenges of building an individual identity as a twin. Louise shares personal anecdotes and reflections on how they constructed themselves through resemblance and difference, emphasizing the importance of balancing these two factors in personal development.

      Highlights: + [01:29:33] Humorous introduction * Comic confusion over who is presenting * Introduction of the twins and their topic + [01:31:11] The construction of the self in twins * Difficulties and advantages of being twins * Influence of resemblance and difference on identity + [01:33:03] Influence of a documentary * Discovery of the documentary "Three Identical Strangers" * Questioning nature versus nurture in twins + [01:37:00] Louise's personal theory * Explanation of the factors of resemblance and difference * Applying the theory to various kinds of duos

    1. Video summary [00:00:00] - [00:26:37]:

      The video entitled "Enfants, objets ou sujets de droits ?" presents a plenary session of the Conseil économique, social et environnemental (CESE) in France, focused on children's rights. The discussion addresses how children are perceived in society, the importance of giving them a voice, and of treating them as full subjects of rights. The speakers stress the need for education in emotional, relational, and sexual life (EVARS) adapted to all ages, and the active participation of young people in debates on these topics.

      Highlights:

      • [00:00:00] Introduction to the session

        • Presentation of the CESE and its role in shaping public policy
        • Importance of civil society and citizen participation
      • [00:07:01] Debate on children's rights

        • Discussion of the child's place in society and power relations
        • Exchanges on education in emotional and sexual life
      • [00:17:44] Young people's intervention

        • Presentation of the Conseil français des associations pour les droits de l'enfant (COFRADE)
        • Testimonies from young people on emotional, relational, and sexual education

      Video summary [00:26:39] - [00:51:04]:

      The video deals with the importance of education in emotional, relational, and sexual life (EVARS) for children and young people. It highlights the challenges encountered in implementing this education and the importance of involving parents and professionals. The discussions stress the need to address sensitive subjects such as sexuality and sexual violence from a very young age, in order to equip children with the knowledge they need to protect themselves.

      Highlights:

      • [00:26:39] EVARS education

        • Importance of EVARS education for building healthy relationships
        • The need to move past taboos and include parents in the education
        • No specific age at which sexuality education should begin
      • [00:27:43] Young people's intervention

        • The young people raise the question of the parent-child relationship
        • They insist that children's representatives be familiar with EVARS
        • Parents' busy schedules must not be an obstacle to this education
      • [00:28:50] Putting EVARS into practice

        • Debate on the composition of EVARS groups, mixed or not
        • Discussion of taboos and the discomfort linked to certain subjects
        • Importance of mixed groups for breaking taboos and sharing knowledge
      • [00:31:12] Who should deliver EVARS

        • Debate on the ideal facilitator for EVARS: external or internal
        • Advantages of an outside facilitator for freedom of expression
        • Need for adequate training for facilitators
      • [00:34:08] Where EVARS should be taught

        • EVARS should be taught at school but also in other settings
        • Importance of leisure activities and digital spaces in children's education
        • EVARS must not be limited to school; it must be accessible everywhere
      • [00:37:21] Sexist and sexual violence

        • The violent environment in which children grow up
        • The need to fight systems of domination and appropriation of the body
        • Importance of education for preventing violence and promoting equality

      Video summary [00:51:08] - [01:18:58]:

      The third part of the video deals with training health and justice professionals on violence and on education in relational, sexual and emotional life (EVARS). It underlines the importance of instruction and education in children's development as citizens, and the need for a public policy that builds common ground while respecting privacy.

      Highlights:

      • [00:51:08] Training on violence and EVARS

        • Need for in-depth training for professionals
        • Link between instruction and education in growing up
      • [00:57:00] The body and education

        • The body as a solid starting point for public policy
        • Importance of speaking openly about girls' and boys' bodies
      • [01:04:02] Children's rights and their effectiveness

        • Gap between the rights proclaimed and their actual application
        • Importance of listening to children and taking their needs into account
      • [01:10:03] Child protection and recognition of violence

        • Alarming statistics on sexual violence against children
        • High societal cost of impunity and of the lack of specialised care

      Video summary [01:19:00] - [01:43:54]:

      The fourth part of the video focuses on child protection and on children's right to be heard in judicial and social proceedings. The speakers discuss the importance of training professionals to take children's testimony and the need for a public policy that supports children as subjects of rights. They also highlight the difficulties in enforcing court decisions and the territorial disparities in child protection.

      Highlights:

      • [01:19:00] Children's right to be heard

        • Training professionals to listen to children
        • Difficulties children face in police stations
      • [01:20:10] The universal experience of childhood

        • Reflection on how adults forget what being a child is like
        • Importance of trust and hope in the way children see the world
      • [01:25:00] Training and public policy

        • Need for a public policy of training professionals to believe children
        • Presentation of a training programme for professionals
      • [01:27:01] Children as subjects of rights

        • Debate on how far there is still to go before children are full subjects of rights
        • Importance of the best interests of the child in public policy
      • [01:31:01] Violence and relational education

        • Discussion of educational violence and child protection
        • Mention of the complaints received concerning child protection
      • [01:37:00] Parental authority and violence

        • Reflection on parental authority as excluding violence
        • Evolution of legislation and of how parental authority is perceived

      Video summary [01:43:56] - [02:09:12]:

      The fifth part of the video addresses the challenges of child protection in France, in particular the difficulty of enforcing court decisions concerning children and the disparities between départements. The speakers discuss the importance of supporting social-sector professionals and the need for better training in listening to and understanding children. They also stress the crucial role of education in emotional, relational and sexual life (EVARS) from the earliest age in preventing violence and promoting equality.

      Highlights:

      • [01:43:56] Problems in child protection

        • Difficulty in applying court decisions
        • Inequalities between départements
        • Importance of training professionals
      • [01:52:06] The role of EVARS education

        • Need for early EVARS education
        • Impact of pornography on young people
        • Debate on the right moment to address pornography within EVARS
      • [02:00:03] Violence between minors

        • Prevalence of sexual violence between minors
        • Lack of support for professionals working in the social field
        • Importance of listening to and understanding children

      Video summary [02:09:14] - [02:31:40]:

      The sixth part of the video focuses on the challenges of child protection and children's rights in France. The speakers discuss the importance of education in emotional, relational and sexual life (EVARS), the need to listen to and protect children, and the impact of structures of power and domination within families and society. They also highlight the difficulties that children and professionals encounter in the judicial and social systems.

      Highlights:

      • [02:09:14] Testimonies and debates

        • Exchanges on young people's personal experiences
        • Discussion of violence between children and adolescents
        • Importance of EVARS in preventing abusive behaviour
      • [02:11:02] The role of professionals and of EVARS

        • Debate on the importance of EVARS and the training of professionals
        • Need for content adapted to children's developmental stage
        • Impact of pornography and misconceptions on young people
      • [02:15:56] Opposition to EVARS and child protection

        • Discussion of opposition to EVARS for the wrong reasons
        • Importance of protecting children against violence and abuse
        • Role of parents and society in protecting and educating children
      • [02:20:56] Parental authority and children's rights

        • Reflection on parental authority and power relations
        • Need to fight patriarchal domination and the taboo around incest
        • Importance of unconditional listening and of protecting children
    1. Since its original proposal, the view has evolved and attracted new followers in the physics community, but has been less warmly received by philosophers. Many, it seems, share the view of Hagar (2003) that “Fuchs’ ‘thin’ realism, and the entire ‘fog from the north’ which inspires it, are nothing but instrumentalism in disguise” (p.772).Footnote 1
      • Physicists venture into philosophers' territory
      • But philosophers push back, dismissing the ideas as "imprecise"
  4. openbooks.lib.msu.edu
    1. consciousness

      perspective instead of consciousness?

    2. also

      do we need this?

    3. The Creative Evolution

      italics

    4. s/of crest and claw/from/

      see previous comments

    5. /o

      consistency: spacing and /

    6. /o

      see previous comment

    7. rie/ of

      consistency: spacing and /

    8. d /

      consistency issue with / and spacing throughout the notes: I suggest a space before and after /

    9. s-

      the dash should be longer and separated from magazines here

    10. who

      whom?

    11. on

      in?

    12. parole in libertà

      italics?

    13. parole in libertà

      see previous comment