1. Dec 2025
    1. eLife Assessment

      This valuable study successfully decoded visual representations of facial expressions and stereoscopic depth information from electroencephalogram (EEG) signals recorded in an immersive virtual reality (VR) environment. The evidence is solid in demonstrating the technical feasibility of integrating state-of-the-art EEG decoding and VR with eye tracking. This work will interest neuroscience researchers, as well as engineers developing brain-machine interfaces and/or virtual reality displays.

    2. Reviewer #1 (Public review):

      Summary:

      The study by Klotzsche et al. examines whether emotional facial expressions can be decoded from EEG while participants view 3D faces in immersive VR and whether stereoscopic depth cues affect these neural representations. Participants viewed computer-generated faces (three identities, four emotions) rendered either stereoscopically or monoscopically, while performing an emotion recognition task. Time-resolved multivariate decoding revealed above-chance decodability of facial expressions from EEG. Importantly, decoding accuracy did not differ between monoscopic and stereoscopic viewing. This indicates that the neural representation of expressions is robust against stereoscopic disparity for the relevant features. However, a separate classifier could distinguish the depth condition (mono vs. stereo) from EEG, i.e., the pattern of neuronal activity differs between conditions, but not in ways relevant for the decoding of emotions. It had an early peak and a temporal profile similar to identity decoding, suggesting that early, task-irrelevant visual differences are captured neurally. Cross-decoding further demonstrated that expression decoders trained in one depth condition could generalize to the other, supporting the idea of representational invariance. Eye-tracking analyses showed that expressions and identities could be decoded from gaze patterns, but not the depth condition, and EEG- and gaze-based decoding performances were not correlated across participants. Overall, this work shows that EEG decoding in VR is feasible and sensitive, and suggests that stereoscopic cues are represented in the brain but do not influence the neural processing of facial expressions. This study addresses a relevant question with state-of-the-art experimental and data analysis techniques.

      Strengths:

      (1) It combines EEG, stereoscopic and monoscopic presentation of visual stimuli in virtual reality, and advanced data analysis methods to address a timely question.

      (2) The figures are of very high quality.

      (3) The reference list is appropriate and up to date.

      Weaknesses:

      (1) The introduction-results-discussion-methods order makes it hard to follow the Results without repeatedly consulting the Methods. Please introduce minimal, critical methodological context at the start of each Results subsection; reserve technical details for Methods/Supplement.

      (2) Many Results subsections begin with a crisp question and present rich analyses, but end without a short synthesis. Please add 1-2 sentences that explicitly answer the opening question and state what the analyses demonstrate.

      (3) The Results compellingly show that (a) expressions are decodable from EEG and (b) mono vs stereo trials are decodable from EEG; yet expression decoding is comparable across mono and stereo. It would help if you articulated why depth is neurally distinguishable while leaving expression representations unchanged. Consider expanding the discussion of the source-localization results and connecting them more explicitly to what is already known about disparity processing.

    3. Reviewer #2 (Public review):

      Summary:

      The authors' main aim was to determine the extent to which the emotional expression of face images could be inferred from electrophysiological data under the viewing conditions imposed by immersive virtual reality displays. Further, given that stereoscopic depth cues can be easily manipulated in such displays, the authors wished to investigate whether successful emotion decoding was affected by the presence or absence of these depth cues, and also if the presence/absence of depth cues was itself a property of the viewing experience that could be decoded from neural data.

      Overall, the authors use fairly standard approaches to decoding neural data to demonstrate that above-chance results (slightly above the 0.5 chance threshold for their measure of choice) are in general achievable for emotion decoding, decoding the identity of faces from neural data, and decoding the presence/absence of depth cues in an immersive virtual reality display. They further examine the contribution of specific components of the response to visual stimuli with similar outcomes.

      Strengths:

      The main contribution of the manuscript is methodological. Rather than shedding particular light on the neural mechanisms supporting depth processing or face perception, what is on offer is primarily a straightforward examination of an applied question. With regard to the goal of answering that applied question, I think the paper succeeds. The overall experimental design is not novel, but in this case, that is a good thing. The authors have used relatively unadorned tasks and previous approaches to applying decoding tools to EEG data to see what they can get out of the neural data collected under these viewing conditions. While I would say that there is not a great deal that is especially surprising about these results, the authors do meet the goal they set for themselves.

      Weaknesses:

      Some of the key weaknesses I see are points that the authors raise themselves in their discussion, particularly with regard to the generalizability of their results. In particular, the 3D faces they have employed here perhaps exhibit a somewhat limited repertoire of emotional expression and do not necessarily cover a representative gamut of emotional face appearances, such as one would encounter in naturalistic settings. Then again, part of the goal of the paper was to examine the decodability of emotional expression in a specific, non-natural viewing environment - a viewing environment in which one could reasonably expect to encounter artificial faces like these. Still, the limitations of the stimuli potentially limit the scope of the conclusions one should draw from the data. I also think that there is a great deal of room for low-level image properties to drive the decoding results for faces, which could have been addressed in a number of ways (matching power spectra, for example, or using an inverted-image control condition). The absence of such control comparisons means that it is difficult to know if this is really a result that reflects face processing or much lower-level image differences that are diagnostic of emotion or identity in this subset of images. Again, to some extent, this is potentially acceptable - if one is mostly interested in whether this result is achievable at all (by hook or by crook), then it is not so important how the goal is met. Then again, one would perhaps like to know if what has been measured here is more a reflection of spatial vision vs. face processing mechanisms.

    4. Reviewer #3 (Public review):

      Summary:

      This study investigates two main questions:

      (1) whether brain activity recorded during immersive virtual reality can differentiate facial expressions and stereoscopic depth, and

      (2) whether depth cues modulate facial information processing.

      The results show that both expression and depth information can be decoded from multivariate EEG recorded in a head-mounted VR setup. However, the results show that the decoding performance of facial expressions does not benefit from depth information.

      Strengths:

      The study is technically strong and well executed. EEG data are of high quality despite the challenges of recording inside a head-mounted VR system. The work effectively combines stereoscopic stimulus presentation, eye-tracking to monitor gaze behavior, and time-resolved multivariate decoding techniques. Together, these elements provide an exemplary demonstration of how to collect and analyze high-quality EEG data in immersive VR environments.

      Weaknesses:

      The major limitation concerns the theoretical question about how stereoscopic depth modulates facial expression processing. While previous work has suggested that stereoscopic depth cues can shape natural face perception and emphasize the importance of binocular information in recognizing facial expressions (lines 95-97), the present study reports a null effect of depth. However, the stimulus configuration they used likely constrained the ability to detect any depth-related effects. All facial stimuli were static, frontal, and presented at a fixed distance. This design leads to near-ceiling behavioral performance and no behavioral effect of depth on expression recognition. It makes the null modulation of depth on expression processing unsurprising and limits the theoretical reach of the study. If the authors aim to advance a strong theoretical claim about the role of binocular disparity, I would suggest adding more subtle or naturalistic features (such as varied viewing angles and dynamic expressions) to the stimulus set; alternatively, the work could be reframed as a technical validation of EEG decoding in this context.

      Another issue relates to the claim that eye movements cannot explain the EEG decoding results. It is a real challenge to remove eye-movement-related artifacts and confounds, as the VR setup tends to encourage viewers to explore the environment freely. However, nearly half of the eye-tracking datasets were lost (usable in only 17 of 33 participants), which substantially weakens the evidence for EEG-gaze dissociation. Moreover, it would be almost impossible to decode facial information from only two-dimensional gaze direction, given that with 60 EEG channels, the decoding accuracy was modest (AUC ≈ 0.60). These two factors together limited the strength of the reported null correlation between neural and eye-data decoding.

      The decoding analysis appears to use all 60 EEG channels as input features. I wonder why the authors did not examine more spatially specific channel subsets. Facial expression and depth cues are known to preferentially engage occipito-temporal regions (e.g., N170-related sites), yet the current approach treats all sensors equally. Including all channels may add noise and irrelevant signals to facial information decoding. Moreover, using a subset of spatially specific channels would align more directly with the subsequent source reconstruction.

    5. Author response:

      We thank the reviewers for their thoughtful and constructive comments. We are pleased that they found the study technically strong and the integration of EEG decoding, immersive VR, and eye tracking valuable.

      Across all three reviews, several points of clarification emerged. In our revision, we will focus on:

      (1) Improving clarity and structure of the manuscript (Reviewer #1).

      We will strengthen the flow between the Methods and Results subsections and include explicit concluding statements for the individual Results subsections.

      (2) Emphasizing the methodological scope and limitations in terms of the stimulus set and generalizability (Reviewers #2 and #3).

      We will further emphasize that a key objective was to establish, for the first time, the methodological feasibility of decoding facial features (especially emotional expressions) under VR conditions, and that our stimulus set (consisting of facial expressions that were easy to distinguish) limits (a) the task-relevance (and thus possibly the neural integration) of depth information and (b) the generalizability to less easily distinguishable settings. We appreciate the suggestion of an inverted-face control to further investigate the extent to which the decoding results were based on low-level features; however, we do not plan a follow-up experiment at this stage; instead, we will discuss this limitation more explicitly.

      We believe these revisions will substantially strengthen the manuscript and further highlight its methodological focus.

    1. uv init

      Before this point, I'd like to get an overview of the whole tool, so a list of the commands uv provides would be helpful.

      If you aren't going to introduce all of them, then please say something like "of these, this book will cover the following."

    2. If installing a specific version of uv

      Is there actually a case where this is needed? I thought it could be left out.

    3. Python's standard package manager is pip, but

      My view is that pip is a package installer rather than a package manager.

      https://pip.pypa.io/en/stable/

      Ah, but the pip documentation itself says it "manages packages", so maybe this is fine as is.

    1. eLife Assessment

      This important study reveals that mitotic release of an ER-microtubule tether is critical for normal mitotic progression. Manipulating CLIMP63 phosphorylation, the authors provide convincing evidence that persistent microtubule-ER contacts activate the spindle assembly checkpoint and, if mitosis is forced to proceed, drive severe micronucleation. While the study provides new mechanistic insights, some evidence is indirect, and additional experiments would further refine the model.

    2. Reviewer #1 (Public review):

      Summary:

      In the present manuscript, de Bos and Kutay investigate the functional implications of persistent microtubule-ER contacts as cells go through mitosis. To do so, they resorted to investigating phosphorylation mutants of the ER-Microtubule crosslinker Climp63. They found that phosphodeficient Climp63 mutants induce a severe SAC-dependent mitotic delay after normal chromosome alignment, with an impressive mitotic index of approximately 75%. Strikingly, this was often associated with massive nuclear fragmentation into up to 30 micronuclei that are able to recruit both core and non-core nuclear envelope components. One particular residue (S17) that is phosphorylated by Cdk1 seems to account for most, if not all, of these phenotypes. Furthermore, the authors use the impact on mitosis as an indirect way to map the microtubule binding domain of Climp63, which has remained controversial, and found that it is mostly restricted to the N-terminal 28 residues of Climp63. Of note, despite the strong impact on mitosis, persistent microtubule-ER contacts did not affect the distribution of other organelles during mitosis, such as mitochondria or lysosomes.

      Strengths:

      Overall, this work provides important mechanistic insight into the functional implications of ER-microtubule network remodelling during mitosis and should be of great interest to a vast readership of cell biologists.

      Weaknesses:

      Some of the key findings appear somewhat preliminary and would be worth exploring further to substantiate some of the claims and to clarify the respective impacts on mitosis and on nuclear envelope reassembly in the resulting micronuclei.

      The following suggestions would significantly clarify some key points:

      (1) The striking increase in mitotic index in cells expressing the Climp63 phosphodefective mutant, together with their live cell imaging data indicating extensive mitotic delays that can be relieved by SAC inhibition, suggests that SAC silencing is significantly delayed or even impossible to achieve. The fact that most chromosomes align in 12 min, irrespective of the expression of the Climp63 phosphodefective mutant, suggests that initial microtubule-kinetochore interactions are not compromised, but maybe cannot be stably maintained. Alternatively, the stripping of SAC proteins from kinetochores by dynein along attached microtubules might be compromised, despite normal microtubule-kinetochore attachments. The authors allude to both these possibilities, but unfortunately, they never really test them. This could easily be done by immunofluorescence with a Mad1 or c-Mad2 antibody to inspect which fraction of kinetochores (co-stained with a constitutive kinetochore marker, such as CENP-A or CENP-C) are positive for these SAC proteins. If just a small fraction, then the stability of some attachments is likely the cause. If most/all kinetochores retain Mad1/c-Mad2, then it is probably an issue of silencing the SAC.

      (2) The authors use the increase in mitotic index (H3 S10 phosphorylation levels) as a readout for the MT binding efficiency of Climp63 and respective mutants. Although suggestive, this is fairly indirect and requires additional confirmation. For example, the authors could perform basic immunofluorescence in fixed cells to inspect co-localization of Climp63 (and its mutants) with microtubules.

      (3) The authors note in the discussion that the striking nuclear fragmentation seen upon mitotic exit of cells expressing the Climp63 phosphodefective mutant has not been reported before, and yet it is strikingly similar to what has been previously observed in cells treated with taxol (they cite Samwer et al. 2017, but they might elect to cite also Mitchison et al., Open Biol, 2017 and most relevantly Jordan et al., Cancer Res, 1996). Given this striking similarity and the extensive mitotic delay observed in the Climp63 phosphodefective mutant, it is tempting to speculate that these cells are undergoing mitotic slippage (i.e., cells exit mitosis without ever satisfying the SAC) because they are unable to silence/satisfy the SAC. Indeed, the scattered micronuclei morphology has also been observed in cells undergoing mitotic slippage (e.g., Brito and Rieder, Curr Biol., 2006). The experiment suggested in point #1 should also shed light on this problem. The authors might want to consider discussing this possible explanation to interpret the observed phenotypes.

      (4) One of the most significant implications of the findings reported in this paper is that microtubule proximity does not seem to impact the assembly of either core or non-core nuclear envelope proteins on micronuclei (that possibly form due to mitotic slippage, rather than normal anaphase). These results challenge some models explaining nuclear envelope defects in micronuclei derived from lagging chromosomes due to the proximity of microtubules, and, as the authors point out at the very end, other reasons might underlie these defects. Along this line, the authors might elect to cite Afonso et al. Science, 2014, and Orr et al., Cell Reports, 2022, who provide evidence that a spindle midzone-based Aurora B gradient, rather than microtubules per se, underlie the nuclear envelope defects commonly seen in micronuclei derived from lagging chromosomes during anaphase.

    3. Reviewer #2 (Public review):

      Mitotic phosphorylation of the ER-microtubule linker CLIMP63 was discovered decades ago and was shown to release CLIMP63 from microtubules. Here, the authors describe for the first time the significance of CLIMP63 phosphorylation for mitotic division in cells. Expression of non-phosphorylatable CLIMP63 led to a massive re-localization of ER into the area of the mitotic spindle. This was not unexpected, as another ER-microtubule linker, STIM1, is phosphorylated during mitosis to release it from microtubules, and unphosphorylatable STIM1 also leads to an invasion of the ER into the spindle. The authors map CLIMP63's microtubule-binding domain and define S17 as the critical residue that needs to be phosphorylated for release from microtubules and as a target of Cdk1, albeit with an indirect assay that is based on the ability of overexpressed mutants to disrupt mitosis. The authors further demonstrate that aberrant, microtubule-tethered membranes in the spindle disrupt spindle function. This is in line with the group's prior findings that chromosome-tethered membranes lead to severe chromosome segregation defects. Cells overexpressing phospho-deficient CLIMP63 arrested in prometaphase with an active checkpoint. When these cells were forced to exit mitosis, a large number of micronuclei formed. Interestingly, these micronuclei had different compositions and properties from previously described ones, suggesting that there are diverse paths for a cell to become multinucleated. Lastly, the authors asked whether mitochondria and lysosomes depend on ER for their distribution in mitotic cells. However, the position of these other organelles was unchanged in cells in which ER was re-localized due to the overexpression of phospho-deficient CLIMP63. This is an interesting observation in the context of how the interior organisation of mitotic cells is achieved.

      Suggestions:

      (1) The authors should confirm the mapping of the microtubule-binding domain by more direct assays, such as microtubule co-pelleting or proximity ligation assays.

      (2) The authors should clarify why they performed phenotypic studies and live microscopy experiments (Figures 4 and 5) using the CLIMP63(3A) mutant, despite knowing that the relevant phosphorylation site was S17. Were the phenotypes different for S17A versus the triple mutant?

  2. test2025.mitkoforevents.cz
    1. Check all the available sizes of pop-up tents and learn more about their parameters and advantages.

      Choose the size that best suits your purpose.

    2. the tent is resistant to strong and sudden gusts of wind (applies to selected sizes when special anchoring kits are used).

      wind resistance (when the safety anchoring kit is used)

    3. Octa Optima pop-up tents are popular with our customers as sales and service stands, information and sales points, spaces for business meetings, and mobile advertising carriers at events, trade fairs, and exhibitions. Professionally designed graphics make the tent an effective form of advertising. It builds brand recognition, attracts attention, and inspires trust. This is a great way to present a company in the best possible light.

      delete

    4. The most popular line of commercial tents. Octa Optima pop-up tents are synonymous with comfort and reliability. They are extremely easy to set up, ready to use in just 60 seconds. Thanks to their compact folded dimensions, selected models easily fit into the trunk of a standard car. The tents are also easy to carry, even when set up, and exceptionally stable, resistant to wind and adverse weather conditions. You don't have to worry about them tipping over or blowing away. They ensure full safety during every event.

      The mid-range line of pop-up tents. A reinforced tent-leg profile with a diameter of 48 mm, an extended warranty, and still a setup time of under 60 seconds! Octa Optima tents can be described as the golden middle way. (next paragraph) They are suited to more demanding conditions or wherever worse weather can be expected. Tents in this line cover sizes from 3x3 m up to the largest size of 6x6 m. + change the text in the video for CZ

    5. Post-warranty service: 10-year post-warranty service and access to spare parts

      4-year warranty on the frame and spare parts kept in stock

    1. Because even if a discrepancy occurs, it does not result in an error

      Is this a common error?

      If it is a common mistake, the TOMLDecodeError mentioned above will be raised, so I think the appropriate framing here would be "look at that error and handle it properly."

      If anything, I felt this is more of a piece of peripheral knowledge.
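
      (For reference, a minimal sketch of the TOMLDecodeError case mentioned above; the snippet is made up, not taken from the book.)

      ```python
      import tomllib

      bad_toml = 'name = "unterminated'  # deliberately malformed TOML

      try:
          tomllib.loads(bad_toml)
      except tomllib.TOMLDecodeError as e:
          print(f"TOMLDecodeError: {e}")
      ```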

    2. PEP 680 also takes the position that write support is not needed, since the purpose is for packaging tools and similar software to read TOML. The reasons include the following: TOML style (indentation, quoting, and so on) varies and is hard to standardize; designing an API that outputs style-related metadata such as comments and formatting is complex; and it is safer to exclude write support for now and revisit it later.

      I found this a little hard to read.

      Suggested rewording: "PEP 680 also states that, for the following reasons, it supports only reading TOML and not writing:"

      • ... difficult to standardize
      • ... design is complex
      • ... in the future ... safer
    3. 挙げれています。 ("are listed"; typo in the draft)

      This should be 挙げられています。

      Reading through to the end, I realized this was about the PEP, so at the beginning of this sentence, something like

      "In PEP 680, ~~~ are given as the reasons for making tomllib a standard library module."

      would be a better way to phrase it, I think.

    4. has been chosen for packaging

      Something like "pyproject.toml, which describes information about the package,"

      perhaps? (I felt the current wording is a bit vague and overly abbreviated.)

    5. Support

      The word "support" appears back to back, so I'd like to rephrase one of them.

      "There are libraries for programming languages that can handle TOML files."

      or something like that.

    6. data["value"]

      data is not defined at this point, so running this as-is raises an error: NameError: name 'data' is not defined
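
      A minimal self-contained version that runs (the TOML content and key name are made up for illustration):

      ```python
      import tomllib

      # Define `data` first, e.g. by loading the TOML shown earlier in the chapter.
      data = tomllib.loads('value = 42')

      print(data["value"])  # 42
      ```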

    7. Example of type checking using TypedDict

      (Question) I assume that in practice the TOML is often kept in a separate file (or is that not the case? I haven't worked with TOML much). In that case, am I right in understanding that type checking with TypedDict is not possible?
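
      For what it's worth, one common pattern when the TOML lives in a separate file looks roughly like the sketch below (file name, table, and keys are hypothetical): the TypedDict documents the expected shape for the static checker, but nothing is validated at runtime, so the checker cannot guarantee that an external file actually matches it.

      ```python
      import tomllib
      from typing import TypedDict, cast

      class ServerConfig(TypedDict):
          host: str
          port: int

      # In practice this text would come from a separate file, e.g.
      #   with open("config.toml", "rb") as f:
      #       raw = tomllib.load(f)
      raw = tomllib.loads("""
      [server]
      host = "localhost"
      port = 8080
      """)

      # cast() only informs the type checker of the expected shape; it performs no
      # runtime validation, so mismatches surface only when the values are used.
      config = cast(ServerConfig, raw["server"])
      print(config["host"], config["port"])
      ```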

    8. Array of tables

      This "table" felt like the same thing as the [table] above, which confused me for a moment, so it might be less confusing to rename [table] to something else. (They are different things, right?)
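
      To illustrate the distinction being asked about (the names are made up, not the book's listing): a [table] parses to a single dict, while an array of tables ([[...]]) parses to a list of dicts.

      ```python
      import tomllib

      doc = tomllib.loads("""
      [table]        # a single table -> one dict
      key = "a"

      [[items]]      # an array of tables -> a list of dicts
      name = "first"

      [[items]]
      name = "second"
      """)

      print(type(doc["table"]))       # <class 'dict'>
      print(type(doc["items"]))       # <class 'list'>
      print(doc["items"][1]["name"])  # second
      ```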

    9. sample.toml

      The caption of Listing 13.20 above uses sample1.toml, so it would probably be best to standardize on one or the other.

  3. test2025.mitkoforevents.cz
    1. eLife Assessment

      This study provides useful insights into the question of whether the higher prevalence of autoimmune disease in females could be driven by sex differences in the T cell receptor (TCR) repertoire. The authors compare male and female TCR repertoires using bulk RNA sequencing of sorted thymocyte subpopulations from pediatric and adult human thymuses; however, the analyses lack sufficient rigor and only incompletely support the central claims.

    2. Reviewer #1 (Public review):

      Summary:

      The goal of this paper was to determine whether the T cell receptor (TCR) repertoire differs between a male and a female human. To address this, this group sequenced TCRs from double-positive and single-positive thymocytes in male and female humans of various ages. Such an analysis on sorted thymocyte subsets has not been performed in the past. The only comparable dataset is a pediatric thymocyte dataset where total thymocytes were sorted.

      They report participant ages and sexes, but not ethnicity or race, nor do they provide information about HLA typing of individuals. Though the experiments themselves are heroic, they represent a relatively small sampling of diverse humans. They observed no differences between males and females in TCRbeta or TCRalpha usage, combinatorial diversity, CDR3 region length, or amino acid usage in the CDR3 region. Though they observed some TCRbeta CDR3aa sequence motifs that differed between males and females, these findings could not be replicated using an external dataset and therefore were not generalizable to the human population.

      They also compared TCRbeta sequences against sequences previously identified by computational approaches as recognizing cancer, bacterial, viral, or autoimmune antigens. They found very little overlap between their sequences and these annotated sequences (ranging from 0.82-3.58% of sequences, depending on the individual). Among the overlapping sequences, they found that certain sequences against autoimmune or bacterial antigens were significantly over-represented in female versus male CD8 SP cells. Since no other comparable dataset is available, they could not conclude whether this finding is generalizable to the human population.

      Strengths:

      This is a novel dataset. Overall, the methodologies appear to be sound. There was an attempt to replicate their findings in cases where an appropriate dataset was available. I agree that there are no gross differences in TCR diversity between males and females.

      Weaknesses:

      Overall, the sample size is small given that it is an outbred population. The cleaner experiment would have been to study the impact of sex in a number of inbred MHC I/II identical mouse strains or in humans with HLA-identical backgrounds.

      It is unclear whether there was consensus between the three databases they used regarding the antigens recognized by the TCR sequences. Given the very low overlap between the TCR sequences identified in these databases and their dataset, and the lack of replication, they should tone down their excitement about the CD8 T cell sequences recognizing autoimmune and bacterial antigens being over-represented in females.

      The dataset could be valuable to the community.

    3. Reviewer #2 (Public review):

      Summary:

      This study addresses the hypothesis that the strikingly higher prevalence of autoimmune diseases in women could be the result of biased thymic generation or selection of TCR repertoires. The biological question is important, and the hypothesis is valuable. Although the topic is conceptually interesting and the dataset is rich, the study has a number of major issues that require substantial improvement. In several instances, the authors conclude that there are no sex-associated differences for specific parameters, yet inspection of the data suggests visible trends that are not properly quantified. The authors should either apply more appropriate statistical approaches to test these trends or provide stronger evidence that the observed differences are not significant. In other analyses, the authors report differences between sexes based on a pooled analysis of TCR sequences from all the donors, which could result in differences driven by one or two individual donors (e.g., having particular HLA variants) rather than reflecting sex-related differences.

      Strengths:

      The key strength of this work is the newly generated dataset of TCR repertoires from sorted thymocyte subsets (DP and SP populations). This approach enables the authors to distinguish between biases in TCR generation (DP) and thymic selection (SP). Bulk TCR sequencing allows deeper repertoire coverage than single-cell approaches, which is valuable here, although the absence of TRA-TRB pairing and HLA context limits the interpretability of antigen specificity analyses. Importantly, this dataset represents a valuable community resource and should be openly deposited rather than being "available upon request."

      Weaknesses:

      Major:

      (1) The authors state that there is "no clear separation in PCA for both TRA and TRB across all subsets." However, Figure 2 shows a visible separation for DP thymocytes (especially TRA, and to a lesser degree TRB) and also for TRA of Tregs. This apparent structure should be acknowledged and discussed rather than dismissed.

      (2) Supplementary Figures 2-5 involve many comparisons, yet no correction for multiple testing appears to be applied. After appropriate correction, all the reported differences would likely lose significance. These analyses must be re-evaluated with proper multiple-testing correction, and apparent differences should be tested for reproducibility in an external dataset (for example, the pediatric thymus and peripheral blood repertoires later used for motif validation).

      (3) Supplementary Figure 6 suggests that women consistently show higher Rényi entropies across all subsets. Although individual p-values are borderline, the consistent direction of change is notable. The authors should apply an integrated statistical test across subsets (for example, a mixed-effects model) to determine whether there is an overall significant trend toward higher diversity in females.
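
      For illustration only, a minimal sketch of the kind of pooled test meant here, using synthetic data (donor IDs, subset names, and effect sizes are all hypothetical, and this is not the authors' pipeline): a mixed-effects model with a random intercept per donor lets a single sex coefficient pool evidence across all subsets.

      ```python
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)

      # Synthetic stand-in: one Renyi entropy value per donor x thymocyte subset.
      rows = []
      for i in range(20):
          donor, sex = f"d{i:02d}", "F" if i % 2 else "M"
          donor_offset = rng.normal(0, 0.1)
          for subset in ["DP", "CD4SP", "CD8SP", "Treg"]:
              entropy = 10 + (0.15 if sex == "F" else 0.0) + donor_offset + rng.normal(0, 0.2)
              rows.append({"donor": donor, "sex": sex, "subset": subset, "entropy": entropy})
      df = pd.DataFrame(rows)

      # Fixed effects for sex and subset, random intercept per donor:
      # the single sex coefficient tests the overall trend across all subsets.
      result = smf.mixedlm("entropy ~ sex + subset", data=df, groups=df["donor"]).fit()
      print(result.summary())
      ```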

      (4) Figures 4B and S8 clearly indicate enrichment of hydrophobic residues in female CDR3s for both TRA and TRB (excluding alanine, which is not strongly hydrophobic). Because CDR3 hydrophobicity has been linked to increased cross-reactivity and self-reactivity (see, e.g., Stadinski et al., Nat Immunol 2016), this observation is biologically meaningful and consistent with higher autoimmune susceptibility in females.

      (5) The majority of "hundreds of sex-specific motifs" are probably donor-specific motifs confounded by HLA restriction. This interpretation is supported by the failure to validate motifs in external datasets (pediatric thymus, peripheral blood). The authors should restrict analysis to public motifs (shared across multiple donors) and report the number of donors contributing to each motif.

      (6) When comparing TCRs to VDJdb or other databases, it is critical to consider HLA restriction. Only database matches corresponding to epitopes that can be presented by the donor's HLA should be counted. The authors must either perform HLA typing or explicitly discuss this limitation and how it affects their conclusions.

      (7) Although the age distributions of male and female donors are similar, the key question is whether HLA alleles are similarly distributed. If women in the cohort happen to carry autoimmune-associated alleles more often, this alone could explain observed repertoire differences. HLA typing and HLA comparison between sexes are therefore essential.

      (8) In some analyses (e.g., Figures 8C-D) data are shown per donor, while others (e.g., Fig. 8A-B) pool all sequences. This inconsistency is concerning. The apparent enrichment of autoimmune or bacterial specificities in females could be driven by one or two donors with particular HLAs. All analyses should display donor-level values, not pooled data.

      (9) The reported enrichment of matches to certain specificities relative to the database composition is conceptually problematic. Because the reference database has an arbitrary distribution of epitopes, enrichment relative to it lacks biological meaning. HLA distribution in the studied patients and HLA restrictions of antigens in the database could be completely different, which could alone explain enrichment and depletions for particular specificities. Moreover, differences in Pgen distributions across epitopes can produce apparent enrichment artifacts. Exact matches typically correspond to high-Pgen "public" sequences; thus, the enrichment analysis may simply reflect variation in Pgen of specific TCRs (i.e., fraction of high-Pgen TCRs) across epitopes rather than true selection. Consequently, statements such as "We observed a significant enrichment of unique TRB CDR3aa sequences specific to self-antigens" should be removed.

      (10) The overrepresentation of self-specific TCRs in females is the manuscript's most interesting finding, yet it is not described in detail. The authors should list the corresponding self-antigens, indicate which autoimmune diseases they relate to, and show per-donor distributions of these matches.

      (11) The concept of polyspecificity is controversial. The authors should clearly explain how polyspecific TCRs were defined in this study and highlight that the experimental evidence supporting true polyspecificity is very limited (e.g., just a single TCR from Figure 5 from Quiniou et al.).

      Minor:

      (1) Clarify why the Pgen model was used only for DP and CD8 subsets and not for others.

      (2) The Methods section should define what a "high sequence reliability score" is and describe precisely how the "harmonized" database was constructed.

      (3) The statement "we generated 20,000 permuted mixed-sex groups" is unclear. It is not evident how this permutation corrects for individual variation or sex bias. A more appropriate approach would be to train the Pgen model separately for each individual's nonproductive sequences (if the number of sequences is large enough).

    1. eLife Assessment

      The authors ask whether a simple whole-head spectral power analysis of human magnetoencephalography data recorded at rest in a large cohort of adults shows robust effects of age, and their results provide compelling evidence that it does. The relative simplicity of the analysis is a major strength of the paper, and the authors are careful to control for many different confounds - although perhaps highly correlated factors like brain anatomy still pose a slight issue. The paper provides a valuable power analysis framework that should inform researchers across the broader neuroimaging community.

    2. Reviewer #1 (Public review):

      Summary:

      This is a careful, well-powered treatment of age effects in resting-state MEG. Rather than extracting (say) complex connectivity measures, the authors look at the 'simplest possible thing': changes in the overall power spectrum across age.

      Strengths:

      They find significant age-related changes at different frequency bands: broadly, attenuation at low-frequency (alpha) and increased beta. These patterns are identified in a large dataset (CamCAN) and then verified in other public data.

      Weaknesses:

      Some secondary interpretations (what is "unique" to age vs global anatomy) may go beyond what the statistics strictly warrant in the current form, but these can be tightened with (I think, fairly quick) additions already foreshadowed by the authors' own analyses.

      Aims:

      The authors set out to replace piecemeal, band-by-band ageing claims with t-maps and Cohen's f2 over sensors×frequency ("GLM-Spectrum").

      On CamCAN, six spatio-spectral peaks survive relatively strict statistical controls. The larger effects are in low-frequency and upper-alpha/beta ranges (f2 approx 0.2-0.3), while lower-alpha and gamma reach significance but with small practical impact (f2 < 0.075). A nice finding is that the same qualitative profile appears in three additional independent datasets.
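
      For orientation (a standard definition, not something taken from the paper under review): Cohen's f2 for the contribution of one predictor over a reduced model is f^2 = (R^2_full - R^2_reduced) / (1 - R^2_full), and values of roughly 0.02, 0.15, and 0.35 are conventionally read as small, medium, and large, so the reported 0.2-0.3 peaks sit in the medium-to-large range while f2 < 0.075 is small.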

      Two analyses are especially interesting. First, the authors show a difference between absolute and relative spectral magnitude (basically, within-subject normalization). Relative scaling sharpens the spectral specificity of the spatial maps, while absolute magnitude is dominated by a broad spatial mode that correlates positively across frequencies, likely reflecting head-position/field-spread factors. The replication of the main age profile is robust to preprocessing decisions (e.g., SSS movement compensation choices) - the bigger determinant of the effect is whether they apply sensor normalization (relative vs absolute).

      Second, lots of brain-related things might be related to age, and the authors spend some time trying to back out confounds/covariates. This section is handled transparently (in general, I found the writing style very clear throughout) - they examine single covariates (sex, BP, GGMV, etc.) and compare simple vs partial age effects. For example, aging is correlated with reductions in global grey-matter volume (GGMV), but it would be nice to find a measure that is independent of this: controlling for GGMV (via a linear model) reduces age-related effect sizes heterogeneously across space/frequency but does not eliminate them, a nuance the authors treat carefully.

      This is a nice paper, and I have only a few concrete suggestions:

      (1) High-gamma:

      There can be a lot of EMG / eye movement contamination (I know these were RS eyes closed data, but still..) above 30-40 Hz, and these effects are the weakest anyway. Could you add an analysis (e.g., ICA/label-based muscle component removal) and show the gamma band's sensitivity to that step? Or just note this point more clearly?

      (2) GGMV confound control:

      Controlling for GGMV reduces, but does not eliminate, age effects. I have a few questions about this: a) Could we see the residuals as a function of age? I wonder if there are non-linear effects or something else that the regression is not accounting for. Also, b) GGMV and age are highly collinear - is this an issue? Can regression really split them apart robustly? I think by some cunning orthogonalisation, you can compute the effect of age independent of GGMV. I don't think this is the same as the effect 'adjusted' for GGMV (which is what is shown here if I'm reading it correctly). Finally, of course, GGMV might actually be the thing you want to look at (because it might more accurately reflect clinical issues) - so strong correlations are not really a problem: I think really the focus might even be on using MEG to predict GGMV and controlling for age.
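
      To make the orthogonalisation suggestion concrete, a minimal sketch with synthetic data (variable names and numbers are hypothetical; this is not the authors' pipeline): regress age on GGMV, keep the residual as the part of age unique of GGMV, and enter that residual alongside GGMV in the GLM.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200

      # Synthetic covariates: GGMV declines with age (hypothetical units).
      age = rng.uniform(18, 88, n)
      ggmv = 800 - 2.0 * age + rng.normal(0, 20, n)
      power = 5.0 - 0.02 * age - 0.01 * ggmv + rng.normal(0, 1, n)  # toy spectral metric

      # Orthogonalise age with respect to GGMV: keep only the part of age
      # that a linear fit on GGMV cannot explain.
      X_ggmv = np.column_stack([np.ones(n), ggmv])
      beta = np.linalg.lstsq(X_ggmv, age, rcond=None)[0]
      age_orth = age - X_ggmv @ beta

      # GLM with GGMV plus the orthogonalised age term: the age_orth coefficient
      # reflects age-related variance that is not shared with GGMV.
      X = np.column_stack([np.ones(n), ggmv, age_orth])
      coefs = np.linalg.lstsq(X, power, rcond=None)[0]
      print(coefs)
      ```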

    3. Reviewer #2 (Public review):

      This paper describes the application of the "GLM-Spectrum" mass univariate approach to examine the effects of age on M/EEG power spectra. Its strengths include promotion of the unbiased approach, suitable for future meta/mega-analyses, and the provision of effect sizes for powering future studies. These are useful contributions to the literature. What is perhaps lacking is a discussion of the limitations of this approach, in comparison to other methods.

      An analogy is the mass univariate approach to spatial localisation of effects in fMRI/PET images. This approach is unbiased by prior assumptions about the organisation of the brain, but potentially also less sensitive, by ignoring that prior knowledge. For example, a voxelwise univariate approach is less sensitive to detecting effects in functionally homogeneous brain regions, where SNR can be increased by averaging over voxels. In the context of power spectra, the authors' approach deliberately ignores knowledge about the dominant frequency bands/oscillations in human power spectra. This is in contrast to approaches like FOOOF and IRASA, which explicitly parametrise frequency components. I am not saying these methods are better; I just think that the authors should acknowledge that these approaches have advantages over their mass univariate approach (in sensitivity and interpretation; see below). I guess it is a type of bias-sensitivity trade-off: the authors want to avoid bias, but they should acknowledge the corresponding loss of sensitivity, as well as loss of interpretation compared to model-based approaches (i.e., models that parameterise frequency; I don't mean the statistical models for each frequency separately).

      An example of the interpretational loss can be seen in the authors' observation of opposite-signed effects of age around the alpha peak. While the authors acknowledge that this pattern can arise from a reduction in alpha frequency with age, this is an indirect inference, and a direct (and likely much more sensitive) approach would be to parametrise and estimate the peak alpha frequency directly for each participant, as done with FOOOF for example (possibly with group priors, as in Medrano et al, 2025, EJN). The authors emphasise the nonlinear effects of age in Figure 2A, but their approach cannot test this directly (e.g., in terms of plotting effects of age on frequency, magnitude, and width for each participant), so for me, this figure illustrates a weakness of their approach, not a strength.

      Then I think the section "Two dissociable and opposite effects in the alpha range" in the Discussion section is confusing, because if there is a single reduction in alpha peak frequency and magnitude with age, then there is only one "effect", not "two dissociable" ones. If the authors do want to claim that there are two dissociable age effects within the alpha range, then they need to do a statistical test, e.g., that the topographies of low and high alpha are significantly different. This then reveals another limitation of the mass univariate approach - that space (channel) is not parametrised either - so one cannot test for significant channel x effect interactions within this framework, as necessary to really claim a dissociation (e.g., in underlying neural generators).

      While the authors show that normalisation of each person's power spectra by the sum across frequencies helps improve some statistics, they might want to say more about the disadvantages of this approach, e.g., loss of sensitivity to any effects (e.g., of age) that are broadly distributed across the majority of frequencies, loss of real SI units (absolute effect sizes) (as well as problems if normalisation were used for techniques like FOOOF, where the 1/f exponent would be affected).
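
      For concreteness, the normalisation being discussed amounts to something like the sketch below (toy spectrum with made-up values; not the authors' code):

      ```python
      import numpy as np

      # Toy power spectrum: 1/f background plus an alpha peak.
      freqs = np.linspace(1.0, 45.0, 90)
      psd = 1.0 / freqs + 0.5 * np.exp(-0.5 * ((freqs - 10.0) / 1.5) ** 2)

      # "Relative" scaling: divide by the summed power so each subject's spectrum
      # sums to 1. Any broadband change in absolute power is removed by this step,
      # which is the loss of sensitivity (and of SI units) noted above.
      psd_rel = psd / psd.sum()
      print(psd_rel.sum())  # 1.0
      ```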

      The authors should give more information on how artifactual ICs were defined. This may be important for cardiac artefacts, since Schmidt et al (2004, eLife) have pointed out how "standard" ICA thresholds can fail to remove all cardiac effects. This is very important for the effects of age, given that age affects cardiac dynamics (even though the focus of Schmidt et al is the 1/f exponent, could residual cardiac effects cause artifactual age effects in current results, even above ~1Hz?).

      The authors should clarify the precise maxfilter arguments, and explain what "reference" was used for the "trans" option - e.g., did the authors consider transforming the data to match a sphere at the centre of the helmet, which might not only remove some of the global power differences due to different head positions, but also be best for generalisation of the effect sizes they report to future studies (assuming the centre of the helmet is the most likely location on average)? And on that matter, did head positions actually differ by age at all?

    1. eLife Assessment

      This study explores how exogenous attention operates at the finest spatial scale of vision, within the foveola - a topic that has not been previously explored. The question is important for understanding how attention shapes perception, and how it differs between the periphery and the central regions of highest visual acuity. The evidence is compelling, as shown by carefully designed experiments with state-of-the-art eye tracking to monitor attended locations just a few tens of minutes of arc away from the fixation target, but additional clarification regarding analyses and implications for vision and oculomotor control would broaden the impact of the study.

    2. Reviewer #1 (Public review):

      Summary:

      The manuscript investigates how exogenous attention modulates spatial frequency sensitivity within the foveola. Using high-precision eye-tracking and gaze-contingent stimulus control, the authors show that exogenous attention selectively improves contrast sensitivity for low- to mid-range spatial frequencies (4-8 cycles/degree), but not for higher frequencies (12-20 CPD). In contrast, improvements in asymptotic performance at the highest contrast levels occur across all spatial frequencies. These results suggest that, even within the foveola, exogenous attention operates through a mechanism similar to that observed in peripheral vision, preferentially enhancing lower spatial frequencies.

      Strengths:

      The study shows strong methodological rigor. Eye position was carefully controlled, and the stimulus generation and calibration were highly precise. The authors also situate their work well within the existing literature, providing a clear rationale for examining the fine-grained effects of exogenous attention within the foveola. The combination of high spatial precision, gaze-contingent presentation, and detailed modeling makes this a valuable technical contribution.

      Weaknesses:

      The manipulation of attention raises some interpretive concerns. Clarifying this issue, together with additional detail about statistics, participant profiles, other methodological elements, and further discussion in relation to oculomotor control in general, could broaden the impact of the findings.

    3. Reviewer #2 (Public review):

      Summary:

      This study aims to test whether foveal and non-foveal vision share the same mechanisms for exogenous attention. Specifically, they aim to test whether they can replicate at the foveola previous results regarding the effects of exogenous attention for different spatial frequencies.

      Strengths:

      Monitoring the exact place where the gaze is located at this scale requires very precise eye-tracking methods and accurate and stable calibration. This study uses state-of-the-art methods to achieve this goal. The study builds on many other studies that show similarities between foveal vision and non-foveal vision, adding more data supporting this parallel.

      Weaknesses:

      The study lacks a discussion of the strength of the effect and how it relates to previous studies done away from the fovea. It would be valuable to know if not just the range of frequencies, but the size of the effect is also comparable.

    4. Reviewer #3 (Public review):

      Summary:

      This paper explores how spatial attention affects foveal information processing across different spatial frequencies. The results indicate that exogenously directed attention enhances contrast sensitivity for low- to mid-range spatial frequencies (4-8 CPD), with no significant benefits for higher spatial frequencies (12-20 CPD). However, asymptotic performance increased as a result of spatial attention independently of spatial frequency.

      Strengths:

      The strengths of this article lie in its methodological approach, which combines a psychophysical experiment with precise control over the information presented in the foveola.

      Weaknesses:

      The authors acknowledge that they used the standard approach of analyzing observer-averaged data, but recognize that this method has limitations: it ignores the uncertainty associated with parameter estimates and the relationships between different parameters of the psychometric model. This may affect the interpretation of attentional effects. In the future, mixed-effects models at the trial level could overcome these limitations.

    1. eLife Assessment

      This valuable study provides solid evidence for deficits in aversive taste learning and taste coding in a mouse model of autism spectrum disorders. Specifically, the authors found that Shank3 knockout mice exhibit behavioral deficits in learning and extinction of conditioned taste aversion, and calcium imaging of the gustatory cortex identified impaired neuronal responses to taste stimuli. This paper will likely be of interest to researchers studying how learning and sensory processes are affected by genetic causes of autism spectrum disorders.

    2. Reviewer #1 (Public review):

      Summary:

      The study from Wu and Turrigiano investigates how disruption of taste coding in a mouse model of autism spectrum disorders (ASDs) affects aversive learning in the context of a conditioned taste aversion (CTA) paradigm. The experiments combine 2-photon calcium imaging of neurons in the gustatory portion of the anterior insular cortex (i.e., gustatory cortex) with behavioral training and testing. The authors rely on Shank3 knockout mice as a model for ASDs. The authors found that Shank3 mice learn CTA more slowly and extinguish the memory more rapidly than control subjects. Calcium imaging identified impairments in taste-evoked activity associated with memory encoding and extinction. During memory encoding, the authors found less suppressed neuronal activity and increased correlated variability in Shank3 mice compared to controls. During extinction, they observed a faster loss of taste selectivity and degradation of taste discriminability in mutants compared to controls.

      Strengths:

      This is a well-written manuscript that presents interesting findings. The results on the learning and extinction deficits in Shank3 mice are of particular interest. Analyses of neural activity are well conducted and provide important information on the type of impaired cortical activity that may correlate with behavioral deficits.

      Weaknesses:

      (1) The experiments rely on three groups: CS-only WT, CTA WT, and CTA KO. Can the authors provide a rationale for not having a CS-only KO group?

      (2) The authors design an effective behavioral paradigm comparing consumption of water and saccharin and tracking extinction (Figure 3). This paradigm shows differences in licking across distinct behavioral conditions. For instance, during T1, licking to water strongly differs from licking to saccharin for both WT and KO. During T2, licking to water strongly differs from licking to saccharin only for WT (much less for KO), and licking to saccharin in WT differs from that in KO. These differences in taste sampling across conditions could contribute to some of the effects on neural activity and discriminability reported in Figures 5 and 6. That is, saccharin and water trials may be highly discriminable because in one case the mouse licks and in the other it does not (or licks much less). The authors may want to address this issue.

      (3) Are there any omission trials following CTA? If so, they should be quantified and reported. How are the omission trials treated with regard to the analyses?

      (4) The authors describe the extinction paradigm as "alternative choice". In decision-making, alternative choice paradigms typically require 2 lateral spouts to report decisions following the sampling from a central spout. To avoid confusion, the authors may want to define their paradigm as alternative sampling.

      (5) Figure 4 reports that CTA increases the proportion of neurons that consistently respond to saccharin and water across days. While the saccharin result could be an effect of aversive learning, it is less clear why the phenomenon would generalize to water as well. Can the authors provide an explanation?

      (6) The recordings are performed in the part of the anterior insular cortex that is typically defined as "gustatory cortex" (GC). Given the functional heterogeneity of the anterior insular cortex (AIC) and given that the authors do not sample all of the anteroposterior extent of AIC, I would suggest being more explicit about their positioning in GC. Also, some citations (e.g., Gogolla et al, 2014) refer to the posterior insular cortex, which is considered more inherently multimodal than GC. GC multimodality is typically associative in nature, as only a few neurons respond to sound and light in naïve animals.

      (7) It would be useful to add summary figures showing the extent of viral spread as well as GRIN lens placement.

      (8) I encourage the authors to add Ns every time percentages are reported. How many neurons have been recorded in each condition? Can the authors provide the average number of neurons recorded per session and per animal?

      (9) It looks like some animals learned more than others (Figure 1E or Figure 3C). Is it possible to compare neural activity across animals that showed different degrees of learning?

    3. Reviewer #2 (Public review):

      Wu and Turrigiano investigated how cortical taste coding during conditioned taste aversion (CTA) learning is affected in Shank3 knockout (KO) mice, a model of monogenic ASD. Using longitudinal two-photon calcium imaging of AIC neurons, the authors show that Shank3 KO mice exhibit reduced suppression of activity in a subset of neurons and a higher correlated variability in neural activity. This is accompanied by slower learning and faster extinction of aversive taste memories. These results suggest that Shank3 loss compromises the flexibility and stability of cortical representations underlying adaptive behaviour.

      Major strengths:

      (1) Conceptual significance: The study connects a molecular ASD risk gene (Shank3) to flexible sensory encoding, bridging genetics, systems neuroscience, and behaviour.

      (2) Technical rigour: Longitudinal calcium imaging with cell-registration across learning and extinction sessions is technically demanding and well-executed.

      (3) Behavioural paradigm: The use of both acquisition and extinction paradigms provides a more nuanced picture of learning dynamics.

      (4) Analyses: Correlated variability, discriminability indices, and population decoding analyses are robust and appropriate for addressing behavioural and network-level coding changes.

      Major weaknesses:

      (1) Causality: The paper infers that increased correlated variability causes learning deficits, but no causal tests (e.g., optogenetic modulation of inhibition or interneuron rescue) are presented to confirm this.

      (2) Behavioural scope: The study focuses exclusively on taste aversion; generalisation to other flexible learning paradigms (e.g., reversal or probabilistic tasks) is not addressed.

      (3) Mechanistic insights: While the study provides interesting findings of altered sensory perception and extinction of learning-related signals in AIC, it offers nearly no mechanistic insight. This makes interpretation difficult, especially regarding how generalisable these findings are. Also, the different reported findings are "potentially" connected, but the exact relation between increased correlated variability and faster loss of taste selectivity cannot be assessed.

    4. Reviewer #3 (Public review):

      In this study, Wu & Turrigiano investigate an ethologically relevant form of associative learning (conditioned taste aversion - CTA) and its extinction in the Shank3 KO mouse model of ASD. They also examine the underlying circuits in the anterior insular cortex (AIC) simultaneously, using two-photon calcium imaging through a GRIN lens. They report that Shank3 KO mice learn CTA slower and suggest that this is mediated by a reduction in tastant-stimulus activity suppression of AIC neurons and a reduced signal-to-noise ratio due to increased noise correlations in AIC neurons. Interestingly, once Shank3 KO mice acquire CTA, they extinguish the aversive memory more rapidly than wild-type mice. This accelerated extinction is accompanied by a faster loss of neuronal and population-level taste selectivity and coding in the AIC compared to WT mice.

      This is an important study that uses in vivo methods to assess circuit dysfunction in a mouse model of ASD, related to sensory perception valence (in this case, taste). The study is well executed, the data are of high quality, and the analytical procedures are detailed. Furthermore, the behavioural paradigm is well thought out, particularly the approach for assessing extinction through repeated retrieval sessions (T1-T5), which effectively tests discrimination between saccharin and water rather than relying solely on lick counts or total consumption as a measure of extinction. Finally, the statistical tests used are appropriate and justified.

      There is, however, a missing link between the behavioural findings and the underlying mechanisms. More specifically:

      (1) The authors don't make a causal link between the behaviour and AIC neurophysiology, both the percentage of suppressed cells and the coactivity measurements. For the % of suppressed cells, it seems that both WT and KO cells are suppressed in the transition between CST1 and CST2 (Figure 1L), yet only the WT mice exhibit CTA (at least by CST2). For the taste-elicited coactivity measure, it seems that there is an increase in coactivity from CST1 to CST2 in WT (Figure 2C - blue, although not statistically tested?), but persistently higher coactivity in KO. Is this change of coactivity in WT important for the expression of CTA? Plotting behavioral performance (from Figure 1G) against coactivity (from Figure 2C) for each animal would be informative.

      (2) Shank3 KO cells already show an increase in baseline coactivity (Figure 2 - figure supplement 1), and the authors never examine CS-only responses in the KO group, making it difficult to determine whether elevated coactivity and noise correlations reflect a generalized AIC abnormality in Shank3 KOs (perhaps through impaired PV-mediated inhibition in insular cortex - Gogolla et al., 2014) that is not directly responsible for, or related to, CTA.

      (3) How do the authors interpret the large range of lick ratios (Figure 1G) for WT (almost bi-modal distribution)? Is there a within-subject correlation with any of the neurophysiological measurements to suggest a relationship between AIC neurophysiology and behavioural expression of CTA?

      (4) Indeed, CTA appears to be successfully achieved for Shank3 KO mice, delayed by 1 day, as the level of saccharin aversion during the first retrieval session (T1) is comparable between Shank3 KO and WTs. In this context, not extending the first part of the paradigm to include CST3 seems to be a missed opportunity. Doing so would have allowed for within-cell and within-subject comparison of taste-elicited pairwise correlations across learning, and a more effective investigation of the neural mechanism of delayed extinction in KOs.

      (5) How to interpret Figure 5F: Absolute discriminability is lower for T5 for CTA WT and CTA KO compared to CS-only? Why would AIC neurons have less information on taste identity by the end of extinction than during the unconditioned (CS-only) condition? And if that is the case, how is decoding accuracy in Figure 6C higher in T5 for CTA WT vs CS-only?

    1. The legacy provider problem

      The legacy system processes CIDs one at a time, requiring a separate DHT lookup (10-20 seconds each) to find the 20 closest peers for each CID. This sequential approach typically handles fewer than 10,000 CIDs over the 22 h Provide.DHT.Interval. If your node has more CIDs than can be reprovided within Provide.DHT.Interval, provider records start expiring after amino.DefaultProvideValidity, making content undiscoverable.
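
      To make the arithmetic concrete, here is a minimal back-of-envelope sketch (plain Python, not kubo code; the 50,000-CID node is a made-up example) of how many CIDs a sequential reprovider can announce within one Provide.DHT.Interval:

      ```python
      # Back-of-envelope: can a sequential reprovider keep up?
      # Assumptions: 10-20 s per DHT lookup (from the text above), a 22 h
      # Provide.DHT.Interval, and a hypothetical node holding 50,000 CIDs.
      REPROVIDE_INTERVAL_H = 22
      LOOKUP_SECONDS = (10, 20)
      NUM_CIDS = 50_000  # illustrative only

      for lookup_s in LOOKUP_SECONDS:
          capacity = REPROVIDE_INTERVAL_H * 3600 // lookup_s
          print(f"{lookup_s:>2} s/lookup -> ~{capacity:,} CIDs per interval")
          if NUM_CIDS > capacity:
              print(f"   {NUM_CIDS:,} CIDs exceed that capacity; the surplus records "
                    f"eventually expire and the content becomes undiscoverable.")
      ```

      At 10-20 s per lookup this lands at roughly 4,000-8,000 CIDs per interval, which matches the "fewer than 10,000" figure above.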

    1. I was referring back to the original definition by Walsh, which was not the Anthropocene at all, it was the anthrop era, and maybe what we actually need to be thinking about is: is this an era, is this the Anthropocene era rather than the Anthropocene?

      for - question - anthropocene - era instead of epoch? - professor Alasdair Skelton, Stockholm University - great presentation comparing anthropocene vs other eras in the past 66 million years

    2. We're looking to the Miocene and potentially even the Eocene for modern analogues, and there were no humans living in those intervals. So we don't know the human impact of the kinds of conditions that are being forecast, that are being modeled, for a hundred years from now.

      for - comparison - anthropocene - past similar epochs - miocene and possibly eocene - no humans alive at that time - unknown impacts of living in such an environment

    3. we used a number of different proxies at 12 different sites, and they all recorded very clearly the effects of the great acceleration. And with that midpoint of about 1952.9 years, it all makes perfect sense. So it's not just the site at Crawford Lake, but all of the sites that we looked at showed a very very similar signal.

      for - definition - anthropocene - synchronized signals of great acceleration at all 12 sites, not just Crawford Lake - Francine McCarthy, Brock University

    4. one remaining project of course is still the formalization, because while that, as Johan said, in many respects doesn't matter, in some it does, partly because the Anthropocene's meaning has been stretched so widely in so many areas that it makes sense to try and at least define it clearly and precisely in one sense, so it can be used quantitatively as well as qualitatively.

      for - definition - anthropocene - post rejection definition - future work - even though it's been rejected as a geological epoch, due to so many uses of it, it still needs a proper definition

    1. connect together the 1000 pairs of junction boxes which are closest together

      The input file contains 1000 boxes. If I connect together 1000 (or as few as 999) pairs following the procedure described above, I end up with one circuit connecting all boxes.

      I should actually count the connections within components towards the total of 1000.
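
      A minimal sketch of how I'd check this with a disjoint-set (union-find) structure; the box coordinates here are random placeholders, not the real puzzle input:

      ```python
      import random
      from itertools import combinations

      # Placeholder input: 1000 junction boxes at random 3D coordinates.
      random.seed(0)
      boxes = [(random.random(), random.random(), random.random()) for _ in range(1000)]

      parent = list(range(len(boxes)))

      def find(i):
          while parent[i] != i:
              parent[i] = parent[parent[i]]  # path halving
              i = parent[i]
          return i

      def union(i, j):
          ri, rj = find(i), find(j)
          if ri == rj:
              return False  # this connection lands inside an existing component
          parent[ri] = rj
          return True

      def dist2(a, b):
          return sum((x - y) ** 2 for x, y in zip(a, b))

      # The 1000 closest pairs of boxes by (squared) distance.
      closest = sorted(combinations(range(len(boxes)), 2),
                       key=lambda p: dist2(boxes[p[0]], boxes[p[1]]))[:1000]

      merges = 0
      for i, j in closest:  # every connection counts toward the 1000, merge or not
          if union(i, j):
              merges += 1

      circuits = len({find(i) for i in range(len(boxes))})
      print(f"{merges} merges out of 1000 connections; {circuits} circuit(s) remain")
      ```

      With the real input, the note above says this collapses everything into one circuit; with random placeholder coordinates it usually will not, which is exactly what the final count checks.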

    1. I feel guilty for not being closer, for not seeing them more frequently, and oddly enough, I feel really guilty for enjoying my time here. This guilt is heavy and burdensome and last month it consumed a lot of my thoughts.

      Something I will never understand

    1. Take profit of sites like this one, as long as they exist, as long as the ideas, emotions and creation they propose are still visible, as long as those who offer them to share are still alive.

      Wull what are you waiting for?

    1. It seems that the total potential risk created by the negative impacts of people’s belief systems is larger than any outside existential risk in this world.

      Or the filters that people's surroundings are uniformly presented through. Map always being north. Mercator projection.

    2. in the instances when we feel fear, there are benefits in reframing it in our minds as an absence of knowledge.

      "I need more data..."--Dune Messiah

    3. But admitting that we don’t understand how the world works, and then trying to understand some slice of it, can only be terrifying. It’s far easier to inconclusively accept the world model of others. It’s even more comforting to then conclusively justify the truth-validity by the volume of people who share that world model. To attempt to stand outside of viral ideas – mimetic beliefs – and to take an assumption-free approach at understanding the world is one of the hardest challenges faced by individuals today.

      NB

    1. At the same time, the vitality of democracy depends on harnessing new technologies to improve democratic institutions, not just responding to risks. A truly mature and successful implementation of AI has the potential to reduce bias and be fairer for everyone.

      I don't see why this wouldn't be a capability, based on my current understanding of AI. If the formation of the AI system and the data it's built on lack bias, there is seemingly no reason for the system to later develop new biases; either way, that's something the system can be monitored for as a precaution.

    2. Elimination of most cancer

      I've briefly read that they may have developed an AI system that can detect breast cancer years before it actually becomes a tumor or a problem.

    3. This might suggest a pessimistic perspective on what AI can accomplish. But biomedicine is unique in that although the process of developing drugs is overly cumbersome, once developed they generally are successfully deployed and used.

      In most cases in this area, there are no outside factors that are evolving rapidly. One of the big risks of AI is bias, but since medicine is based almost solely on biology, bias shouldn't have much impact on deployment.

    4. experiments and hardware design have a certain “latency” and need to be iterated upon a certain “irreducible” number of times in order to learn things that can’t be deduced logically. But massive parallelism may be possible on top of that

      If it ends up developing this far, I think the scope of what could be discovered is endless, but the driving factor will be the parallelism. As mentioned, experiments take so long that while one is running there are many other things that could be run and tested as well; we just don't have the people or resources to do that right now.

    5. Biology and physical health

      From what I've seen, there have already been quite a few really positive uses of AI in this area that could be big breakthroughs in the medical field.

    6. Many of the implications of powerful AI are adversarial or dangerous, but at the end of it all, there has to be something we’re fighting for, some positive-sum outcome where everyone is better off, something to rally people to rise above their squabbles and confront the challenges ahead. Fear is one kind of motivator, but it’s not enough: we need hope as well.

      There's obviously a use for AI and a positive purpose it can serve, or there wouldn't be a need or desire to develop it in the first place. As mentioned, though, that doesn't mean the risks or fears should be ignored.

    7. We can expect that AI will lead to improvements in technologies that slow or prevent climate change, from atmospheric carbon-removal and clean energy technology to lab-grown meat that reduces our reliance on carbon-intensive factory farming.

      It is ironic that AI is being used to mitigate climate change while we are poisoning the air and water and creating such a devastating environmental impact by using AI for very mundane tasks. I feel like this argument shows the true reason the author is writing the article.

    8. I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do.

      If he is going to insist that the AI is useful for more than analysis, I don't think he makes a very good case for why an AI would be better at trivial lab tasks than a human.

      Yes, the AI would be better at tasks which require analysis. I don't see how the AI is going to be an improvement over human intelligence when it comes to starting and stopping centrifuges.

    9. I think of the issue as having two parts: international conflict, and the internal structure of nations. On the international side, it seems very important that democracies have the upper hand on the world stage when powerful AI is created. AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries.

      Using AI to govern will be extremely difficult and will require a lot of thought to make sure everything goes correctly.

    10. Nevertheless, it is a thing of transcendent beauty. We have the opportunity to play some small role in making it real.

      I found that this was an interesting take on what AI can do, but at the end of the day, it was still a sales pitch to investors on what AI can do for the future and why they should invest in it.

    11. Both AI companies and developed world policymakers will need to do their part to ensure that the developing world is not left out; the moral imperative is too great.

      Developing countries can be left out of the benefits of AI because they simply don't have the resources to utilize AI.

    12. Advanced computational neuroscience. As noted above, both the specific insights and the gestalt of modern AI can probably be applied fruitfully to questions in systems neuroscience, including perhaps uncovering the real causes and dynamics of complex diseases like psychosis or mood disorders

      AI can give a unique perspective to questions we have been dealing with for a while. This could also be used elsewhere and benefit different industries.

    13. and resisting the temptation to rely on natural resource wealth); it’s plausible that “AI finance ministers and central bankers” could replicate or exceed this 10% accomplishmen

      He claims here that AI could replicate this, but I don't see how AI could do the same, or whether people would trust AI to make such massive decisions. It seems almost like he's promoting AI as the solution to everything.

    14. Given all this, many biologists have long been skeptical of the value of AI and “big data” more generally in biology

      AI may not be so useful in the fields of biology yet, but perhaps its use simply hasn't been found, or the technology isn't good enough to be used just yet.

    15. data is often lacking—not so much in quantity, but quality: there is always a dearth of clear, unambiguous data that isolates a biological effect of interest from the other 10,000 confounding things that are going on, or that intervenes causally in a given process, or that directly measures some effect (as opposed to inferring its consequences in some indirect or noisy way)

      Modern biology has massive datasets, but AI progress is limited by noisy, ambiguous, or confounded data. This shows that adding more data is not the solution; you need data that is well understood and actually informative when ingested by AI.

    16. Thus, we should imagine a picture where intelligence is initially heavily bottlenecked by the other factors of production, but over time intelligence itself increasingly routes around the other factors, even if they never fully dissolve (and some things like physical laws are absolute)

      Over time, these systems may innovate new methods that reduce current bottlenecks. This may be done through new experiments, new jurisdictions, new data-gathering paradigms.

    17. Physical laws. This is a starker version of the first point. There are certain physical laws that appear to be unbreakable

      AI can't solve things that are unsolvable (obviously); this means we can't create something out of nothing. It also means that AI is still restricted and held to the same rules we are held to.

    18. Need for data. Sometimes raw data is lacking and in its absence more intelligence does not help.

      Intelligence cannot substitute for missing empirical evidence—a crucial limitation for subjects like particle physics, biology, etc.

    19. Speed of the outside world. Intelligent agents need to operate interactively in the world in order to accomplish things and also to learn

      Physical processes impose unavoidable latency; no amount of intelligence can culture cells or grow animals faster than biology allows.

    20. I believe that in the AI age, we should be talking about the marginal returns to intelligence, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. [Footnote 77: The closest economics work that I'm aware of to tackling this question is work on "general purpose technologies" and "intangible investments" that serve as complements to general purpose technologies.]

      The author introduces a powerful economic lens: intelligence as a production factor whose returns can be quantified, helping predict where AI will accelerate progress and where it won’t.

    21. Second, and conversely, you might believe that technological progress is saturated or rate-limited by

      This frames a core debate: whether intelligence alone can accelerate progress or whether external constraints—data, society, physical time—will always bottleneck innovation.

    22. In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on

      We are quite close to this capability, if we haven't already achieved it.

    23. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.

      This is what I feel current AI cannot do and won't be able to do.

    24. One thing writing this essay has made me realize is that it would be valuable to bring together a group of domain experts (in biology, economics, international relations, and other areas) to write a much better and more informed version of what I’ve produced here. It’s probably best to view my efforts here as a starting prompt for that gro

      It seems almost like we should create a committee or standard to guide the development of AI.

    25. as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation.

      I feel like this is something that is frequently overlooked when we hear from technology leaders like Bill Gates, Zuckerberg, Altman, etc.

    26. I also think that as a matter of principle it’s bad for your soul to spend too much of your time “talking your book”.

      Take everything with a grain of salt and research who is supporting these articles and papers.

    27. The basic development of AI technology and many (not all) of its benefits seems inevitable (unless the risks derail everything) and is fundamentally driven by powerful market forces.

      Benefits come naturally and are not controlled by the creators. There may also be benefits that couldn't have been predicted.

    28. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.

      People need to understand that all new technology has inherent risks, and if we want to benefit from it we need to adequately prepare ourselves.

    29. Nevertheless, it is a thing of transcendent beauty. We have the opportunity to play some small role in making it real.

      In the end, the author recognizes the absolute awesomeness of AI, whether it is used for good or bad.

    30. And as with some of the other challenges, we will likely have to fight to get a good outcome here: exploitative or dystopian directions are clearly also possible and have to be prevented.

      Through all of the advantages of AI, the most important thing for us to do will be to make sure AI is used in the right capacity for the right purposes.

    31. Poorly implemented services are currently a major driver of cynicism about government

      This could be a major positive use for AI. Accelerating workflow in government organizations can bring positive connotations to places like the DMV. This will also apply to increasing the efficiency of all businesses worldwide.

    32. “100 years of progress in 5-10 years”

      100 years of progress is a relative term. In 10 years from now, the same amount of progress could be considered 50 years of progress. The speed of technological growth has grown and continues to grow exponentially every day.

    33. will everyone have access to these technologies?

      In order to draw the necessary popularity to survive, most AI companies release their products for free use. Further down the line I suspect these products will cost money to use.

    34. but it is worth reflecting on how much the world will change even if biology is the only area to be successfully accelerated by AI.

      All of science will experience faster growth with the rise of AI

    35. Computation requires a certain minimum energy per bit erased, limiting the density of computation in the world.

      AI itself requires a high level of energy consumption

    36. and dangerous to view practical technological goals in essentially religious terms.

      It's important to treat AI in the right way, and not in a way where you start to praise and worship it too strongly.

    1. The company is already developing an AI-to-FPGA platform that lets any AI model run on cheap, EU-produced reconfigurable chips. If they succeed, this could completely remove Europe's dependence on foreign GPU factories, a recurring theme in Vydar's strategy.

      A potential path away from NVIDIA it seems, but not at the moment, the text suggests.

    2. "We don't make our own AI chips," Crijnen noted, "but because the hardware is purpose-built, we can easily integrate future European-made chips. That flexibility is crucial."

      This suggests they do use NVIDIA Jetson now, but don't need to if alternatives are available?

    3. Their system is now produced 100% in Europe and weighs 30 grams, compared to 176 grams for Jetson-based competitors. It has a power consumption of 3 watts, an efficiency improvement of 88%.

      Roughly a sixth of the weight; power reduced from the ~15 W range to 3 W (quick check below).
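
      A quick check of the quoted numbers; reading the 88% figure as a straight power reduction is my assumption, since the article may mean efficiency per unit of work instead:

      ```python
      # Quoted figures: 30 g vs 176 g for Jetson-based competitors; 3 W draw;
      # an "88% efficiency improvement".
      print(f"weight ratio: {30 / 176:.2f}")               # ~0.17, roughly a sixth

      # If 88% refers to power draw (an assumption), the implied baseline would be:
      print(f"implied baseline: {3 / (1 - 0.88):.0f} W")   # ~25 W, not ~15 W
      ```

      So either the baseline is closer to 25 W than the ~15 W figure in my note above, or the 88% refers to something other than raw wattage.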

    1. I saw the face of a young boy who had been hurt by the world. He felt the world hadn't been kind to him and that he had lost the agency to manipulate reality to his dreams. That's what depression had always been to me, the unbinding of intention and personal action.

      Nota bene bb

    1. Other reflection assignments might ask you to think about your writing stages including the inventing, outlining, drafting, and revision process.

      Introduces reflection on specific writing steps.

    2. The goal with reflective writing is to help students become more self-aware of their strengths and weaknesses as a writer.

      States the purpose: developing self-awareness to improve writing.

    1. Begin with a topic sentence. Using one of the five Ws or H questions here will remind you and your readers what you will focus on in this paragraph. Introduce your sources in a sentence or two to summarize what the information revealed about your topic. Include a direct quote using P.I.E. and reflect on what the source illuminated about your question.

      Each paragraph should feel like a lesson, teaching the reader one piece of the bigger answer.

    1. e speaker."31 [If we] focus too much attention on writing for an audience - whether conceived [as a] "target receiver," a "needy reader," or a "constructive participant" - we narrow our view of composing, forgetting that writing is also an explor[ation] of ideas, a quest for purpose, and a projection

      Audience is important but not ALL important in writing. Audience affects writing but shouldn't control it.

    2. Advocates of the social perspective on audience argue that novice writers need to experience the satisfactions and the conflicts of reader response - both the satisfaction that comes from having successfully shaped the reader's understanding and experience […]

      I agree that writers should experience the disconnect with readers. When I think something is nice and clear, it can make no sense to someone else.

    3. ntent from a text. In Dillonmeaning of the text is not on the page to be extractewhat results when they engage ... texts for whatevhave and with whatever knowledge, values, and preoit" (p

      Sometimes audience creates meaning when they read something with their own perspectives blended in.

    4. at writers should observe those principles which help readers achieve semantic closure, thereby facilitating an effective flow of information through the short-term memory "bottleneck." Two of the most important principles are to omit needless words and to keep related words together

      Keeping writing concise and coherent is important.

    5. a message toterms, the writer aims to get information into the rlem, of course, is that filling a reader's head with informsimple as filling a glass with water. Readers "process"linguistic input into a conceptual code that must be intion already stored

      Kind of like a computer. The informational perspective relies on being clear and coherent for ease of audience absorption.

    6. In written communication, however, the writer often has only a vague and quite general conception of who the readers of a piece of writing will be. T

      goes with previous note

    7. A second limitation is the assumption - implicit in the rhetorical perspective - that in fact the writer either knows, or can find out, a good deal about the audience. T

      This is an important note about the fact that writers don't always know their audience. It makes it harder to write with an aim.

    8. The arguments against trapping wild animals for their fur, here in Iowa, are all based on emotions. Anti-trappers are convinced animals suffer intolerable and unjustified pain while in traps. I believe that, with their unsound reasoning, they cause all humans intolerable and unjustified pains. Let me tell you why their arguments are unsound

      A good example of bad audience analysis; the writer clearly doesn't consider his possible audience's perspective.

    9. According to thianalyze the audience's beliefs,be adapted to the par

      Writers must analyze their audience to tailor their message effectively. This annotation appears weird. sry

    10. ." However, it is becoming clear that the term "au-dience" has multiple meanings in contemporary work on composition: theterm no longer means the same thing to all theorists who talk about theprocess of writing for readers, and various pedagogical techniques-all pur-portedly aimed at teaching students about audience-are based on quite dif-ferent theoretical perspectiv

      Audience is not just one simple thing. It is a multifaceted concept with many different perspectives.

    1. For SETI to be conducted and eventually succeed, humans must at least consider the possibility that life exists beyond Earth. Starting and maintaining the search, they must act as if the conditions of possibility for life and the emergence of technosignatures are actually given. SETI cannot be conducted from a pure agnostic and passive position. It requires active scientific exploration and empirical observation and, as such, must presuppose the possible existence of external events that can effect the observational setup and their reliable attribution to causing conditions (Radder, 2021). One might be consciously aware that this is a purely logical requirement and that our beliefs can change. Yet, presupposing that there is no life beyond Earth renders conducting SETI senseless. Deliberately assigning a random probability to the possibility of extraterrestrials may express uncertainty, but effectively conducting SETI requires accepting that among the myriads of signals we are able to detect, some may and can indeed be traced back to the activity of extraterrestrials. This, of course, does neither tell us where they are, how many there are, nor what their activity will exactly look like.
    1. Through all the thankless years, Cold-edged with dear-bought wisdom, The judgment of your peers!

      This particular section is striking to me because it is an acknowledgement by Kipling that what he describes as the "white man's burden", to "civilize" non-white societies through cultural, political and economic imperialism, is not considered valid by all white people. Yet, he uses this very fact to further bolster his own narrative, portraying imperialism as a moral necessity precisely due to its "thankless" nature.

    1. he issues that receive the most attention from media become the issues that the public discusses,

      The media is powerful in shaping what we think about. Even if they don't tell us what to believe, they definitely still influence what's on our radar. Many of the public start speaking out and forming opinions on topics that are trending or being covered the most. People start reacting when something is everywhere.

    2. assumed that audiences passively accepted media messages and would exhibit predictable reactions in response to those messages.

      This makes it clear how much early researchers underestimated people. They essentially assumed that everyone would simply accept whatever the media told them. This is something we still hear today, as evidenced by people blaming social media for influencing teens to think or act in a certain way. However, most people aren't like that. They question and interpret things when forming an understanding of them. Most times, we actually push back more than we absorb.

    1. Mindless "test prep" by English teachers isthus an ironic error. If we really understood test-ing- its Purpose and Audience- we would notmake this mistake and kill off good writing in theprocess.

      Writing should not be "mindless"; it should be focused, persuasive, and unique.

    2. Then, you realize - humbly, as I have - that you cannot possibly reach everyone in your world (in my case, the world of education). You usually have to find your most simpatico audience, to find your niche as a

      you need to figure out what YOUR audience is, then play to it.

    3. Too often we teach Writing Skills and the Writing Process rather than helping students find something worth communicating.

      Teachers should encourage students to have more power in their writing instead of tearing them down for it.

    4. he consequences of your writing matter for a specific audience in a specific situation

      Your writing can be powerful so it is important that you focus it in the right direction.

    5. But English teachers often have too narrow a sense of what constitutes a realistic challenge for causing a genuine effect in developing writing prompts and scoring rubrics

      Writing for education and writing for the real world are different.

    6. The task demands in the newspaper ad make a further point about authentic writing: say it concisely, have great empathy for your client/audience

      Don't get so stuck in a rubric that you forget the focus and target of your writing.

    1. Abstract: This essay examines the ubiquitous presence of Venus in the archive of Atlantic slavery and wrestles with the impossibility of discovering anything about her that hasn't already been stated.

      Hartman's main thesis is that the enslaved girl "Venus" appears in the archives, but only in fragments that deprive her of humanity. She argues that because the archive only records enslaved women from a position of sexual exploitation, ownership, and assault, it is intrinsically violent. Her goal is to tell a story that challenges the limitations of the archive without perpetuating that violence.

    2. The barracoon, the hollow of the slave ship, the pest-house, the brothel, the cage, the surgeon's laboratory, the prison, the cane-field, the kitchen, the master's bedroom—turn out to be exactly the same place and in all of them she is called Venus

      All the places where Venus appears: the barracoon, slave ship, pest-house, brothel, prison, field, kitchen, and master’s bedroom function as interconnected sites where her body is controlled and exploited. This is reflective of some of the key concepts we learned about in class such as biopower and structural power. Biopower, the control of bodies is evident in the ways that the bodies of Venuses are sexually exploited and controlled, and this is done under the justification of structural/colonial power. The master and the existing power structures (white supremacy, patriarchy) serve to justify the marginalization and mistreatment of women, especially Black women like Venus.

    1. view. Rather, it would seem that egocentric incapacity to take account of the reader and cope wiof writing at th

      Novice writers may need to take a step back and reassess their writing to maintain an ego free and reader friendly piece.

    2. he composition. To the writand state probably seem obvious-they "go without sacisely this kind of unwarranted assumption about whaous that points up the egocentri

      You need to be clear in your writing; do not omit any possibly important details.

    3. nce. The cure for such problems, from an informational perspective, lies in instruction which focuses both on common sources of difficulty for readers and on general writing techniques - use of dovetailing, proleptic devices, thematic tags, parallel forms, and so forth - which can reduce a reader's uncertainty and thus aid comprehension

      To properly relate new and old information a writer must be clear.

    4. eas. In sum, the writer's job is to facilitate the intake of information, designing a text so that its readers will encounter [few] obstacles to their understanding and will thus comprehend the text wit[h a] minimal amount of effort.

      For readers to get a full understanding and memory of the text writers should constantly relate "old" and "new" information together.

    5. Two of the most important principles are to omit needless words and to keep related words together

      Structure your writing in a way that keeps attention and focuses in on the important facts.

    6. ass with water. Readers "process"linguistic input into a conceptual code that must be intion already stored in memory. Since the goal of writinginto that memory store, a writer needs to understancess works, paying particular attention to the kindsencounter in their efforts to extract information from texts

      Text has to be given to readers in a way that makes it accessible to all. Especially when the goal of your work is to inform.

    7. tten communication, however, the writer often has only a vague and quite general conception of who the readers of a piece of writing will be. To-

      You don't always get to see your audience. If your work is available to the public, everyone can access it, making your audience larger than your intended audience was.

    8. "4 Moreover, when we teach studentsthat all writing involves argument-that the audience is typically anadversary-we often mislead them into taking a more assertive stance than iswarranted for many of the writing situations they will encounter either incollege or in the world of wor

      When thinking of audience you need to think about the like minded individuals who might read your piece. Not every work has to have a goal to persuade.

    9. those in the pradapt their discourses to thesaudiences and adapting messarent com

      adapting to your audience shows your maturity and forethought as a writer and speaker.

    10. ersuade. According to thianalyze the audience's beliefs,be adapted to the par

      Know your audience; if you want to persuade them, think about the why, what questions they might have, or what reservations they might have.

    1. order, she showed me (as did my other subjects) that the internal representation or mental sketch a writer makes of the audience is an essential part of the writing process

      Good writers can imagine their audience and their needs in order to shape and revise their work. This works well with my subject of highlighting the role of audience in writing.

    2. So what do they want to hear about? These are seniors in high school who I think want to become English majors ... probably not . . but they want to hear about what English teach

      Even when writing a narrative, Toby F. starts by considering his audience and what they would want to hear.

    3. y the end of the protocol, this representation has become specific: future missionaries, engineers, and business people traveling to other countries. As

      This shows how writers learn more about their audience as they develop their ideas. Understanding audience on a deeper level can influence tone, examples used and goals for writing.

    4. the contrary, subjects frequently reconceived their task as they thought about their relationship to their audience. And as they did, they developed new goals which changed the nature of their discourse

      A piece can completely change when audience is considered.

    5. hese are 1) how the writer perceived the composing task (what Flower and Hayes call a writer's "problem representation"), which determined the kind of discourse he or she produced, and 2) whether the audience was explicitly stated or was implied by the kind of discourse the subject chose.

      Audience is part of the thought process from the very beginning.

    6. analyzing and/or constructing a hypothetical audience; setting goals and naming plans aimed at a specific audience; evaluating content and style (persona) with regard to anticipated audience response; reviewing, editing, and revising for a specific audience

      There are many ways to consider audience when writing. It isn't that simple.

    7. ns. A protocol is, therefore, a rich source for information about some of what the writer is thinking as she is writing. At this time, it is the best research tool for teasing out the cognitive processes that reveal themselves in what we call audience awareness

      This study uses protocols to illustrate thought processes of writers. This is how Berkenkotter shows WHEN writers consider audience.

    8. one characteristic of cognitive maturation is the ability of the writer to "decenter" from his or her own perception

      Writers become stronger the more they think outside of themselves and about the reader.

    9. The purpose of my study was to investigate whether experienced writers who have formal training in rhetorical theory think about their audience more actively than writers who

      audience awareness can vary by skill level.

      Main question: How and when should skilled writers consider their audience when writing?

    1. I Tried Coding on Every OS // Here’s What I Learned (ForrestKnight, YouTube)
      • Introduction: ForrestKnight shares personal experiences coding on various OSes over a decade, including Windows, macOS, Ubuntu, Arch Linux, WSL2, NixOS, and others like Omakub/Omari; emphasizes these are subjective thoughts, not benchmarks.
      • Early Days: Started with Windows 8 for Java (NetBeans/Eclipse), used mid-2012 MacBook Pro for iOS dev noticing Unix advantages (forward slashes, SSH ease); shifted to Ubuntu VM on Windows for first job (Java Spring, Angular, VS Code).
      • Arch Linux (2020): Installed custom rice on PC (dual-boot with Windows 10), faced audio/Wi-Fi issues but praised Arch Wiki; used tools like Awesome WM, ZSH, Kitty; eventually abandoned for time sink, reformatted to Windows.
      • WSL2 (2022 onward): Adopted on Windows 10 for Linux env without VM overhead; integrates seamlessly with VS Code, IntelliJ; quirks include networking, Expo tunneling, Chrome ext copying, higher memory use; primary setup for web/Java/ML dev.
      • Recent Experiments: Tried NixOS on mini PC (disliked declarative config, poor IDE/GPU support, skill barrier); switched to Ubuntu + Omakub (DHH's setup, plug-and-play like Omari for Arch); plans VM test of Omari.
      • Windows 11 & Future: End of Windows 10 support forces upgrade; dislikes AI/ads; seeks Adobe alternatives (DaVinci Resolve, Figma) for potential Linux switch; dual-boots historically for Adobe.
      • Conclusions: Windows headaches mitigated by WSL2; macOS liked for Unix/hardware but closed ecosystem; Linux varies—Ubuntu/Omakub plug-and-play vs. Arch/NixOS customization/time sinks; choose based on workflow needs.
    1. Article 29: The dowry is fixed at 3 cows: one for the girl, two for the father and mother.

      One thing I noticed about medieval West Africa, and even pre-medieval times, is that cattle are a major currency. The cow is a big currency because of its milk and the offspring it bears. Even today, some places and tribes in West Africa herd their cattle and trade them, a sign of an abundance of wealth. - Abdoulaye Gueye

    2. Article 4: The society is divided into age groups. Those born during a period of three years in succession belong to the same age-group. The members of the intermediary class between young and old people, should be invited to take part in the making of important decisions concerning the society

      I find this interesting because today, in many West African societies, you are bundled within a friend group as your age group. An age group usually spans about three years, and middle-aged people are usually in charge of holding ceremonies or functions. - Abdoulaye Gueye