10,000 Matching Annotations
  1. Oct 2025
    1. You will consider criteria such as the type of source, its intended purpose and audience, the author’s (or authors’) qualifications, the publication’s reputation, any indications of bias or hidden agendas, how current the source is, and the overall quality of the writing, thinking, and design.

      What you should consider when evaluating your resources

    1. In the early ages of the world, according to the scripture chronology there were no kings

      I have a question about this passage: what does he mean when he says there were no kings in the beginning? If the British were all Christians, as I thought, wouldn't he believe that Jesus was the king of men?

    2. Thomas Paine Calls for American independence, 1776

      The author of this document is Thomas Paine, and it was written in 1776. The document is important because it played a major part in the colonies actually splitting from Britain.

    1. Introduction of new social techniques is an attempt by the article to analytically approach the social dynamics of Turkey’s integration with the outside world

      What would Atatürk do?

    2. social dynamics of Turkey-EU relations by investigating the various ways in which Turkey’s integration with the outside world has reshaped civic activism in Turkey.

      Before the Gezi protests?


    1. Syllogism

      Valid inferences:
      - Affirming the antecedent: C is an A --> C is a B.
      - Denying the consequent: C is not a B --> C is not an A.

      Invalid inferences:
      - Denying the antecedent: C is not an A --> C is not a B.
      - Affirming the consequent: C is a B --> C is an A.

      A syllogism is valid if the conclusion necessarily follows from the premises.

      A syllogism is sound if it is valid and all of its premises are true.
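      To make the four patterns concrete, here is a minimal truth-table check in Python (an illustrative sketch, not from the source; the propositions a and b encode "C is an A" and "C is a B", and the shared premise "every A is a B" is encoded as a conditional):

      ```python
      from itertools import product

      # An argument is valid iff no truth assignment makes all premises
      # true while the conclusion is false.
      def valid(premises, conclusion):
          for a, b in product([True, False], repeat=2):
              env = {"a": a, "b": b}
              if all(p(env) for p in premises) and not conclusion(env):
                  return False
          return True

      every_a_is_b = lambda e: (not e["a"]) or e["b"]  # "every A is a B"

      print(valid([every_a_is_b, lambda e: e["a"]], lambda e: e["b"]))          # affirming the antecedent: True
      print(valid([every_a_is_b, lambda e: not e["b"]], lambda e: not e["a"]))  # denying the consequent: True
      print(valid([every_a_is_b, lambda e: not e["a"]], lambda e: not e["b"]))  # denying the antecedent: False
      print(valid([every_a_is_b, lambda e: e["b"]], lambda e: e["a"]))          # affirming the consequent: False
      ```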

    1. eLife Assessment

      This paper presents a valuable theory and analysis of the role of neurogenesis and inhibitory plasticity in the drift of neural representations in the olfactory system. For one of the findings, regarding the impact of neurogenesis on the drift, the evidence remains incomplete, because the variability/drift of the mitral/tufted cell responses in the model differs from experimental observations, in which these responses remain stable over extended time scales.

    2. Reviewer #1 (Public review):

      Summary:

      The authors build a network model of the olfactory bulb and the piriform cortex and use it to run simulations and test their hypotheses. Given the model's settings, the authors observe drift across days in the responses to the same odors of both the mitral/tufted cells, as well as of piriform cortex neurons. When representing the M/T and PCx responses within a lower-dimensional space, the apparent drift is more prominent in the PCx, while the M/T responses appear in comparison more stable. The authors further note that introducing spike-time dependent plasticity (STDP) at bulb synapses involving abGCs slows down the drift in the PCx representations, and further link this to the observation that repeated exposure to the same odorant slows down drift in the piriform cortex.

      The model is clearly explained and relies on several assumptions and observations:

      (1) Random projections of MTC from the olfactory bulb to the piriform cortex, random intra-piriform connectivity, and random piriform to bulb connectivity.

      (2) Higher dimensionality of piriform cortex representations compared to M/T responses, which enables superior decoding of odor identity in the piriform cortex.

      (3) Spike time-dependent plasticity (STDP) at synapses involving the abGCs.

      The authors address an open topical problem, and the model is elegant in its simplicity. I have, however, several major concerns with the hypotheses underlying the model and with its biological plausibility.

      Concerns:

      (1) In their model, the authors propose that MTC remain stable at the population level, despite changes in individual MTC responses.

      The authors cite several experimental studies to support their claims that individual MTC responses to the same odors change (some increase, some decrease) across days. Interpreting the results of these studies must, however, take into account the variability of M/T responses across odor presentation repeats within the same session vs. across sessions. In the referenced Shani-Narkiss et al., Frontiers in Neural Circuits, 2023 study, a large fraction of the variability across days in M/T responses is also observed across repeats of the same odorant within the same session (Shani-Narkiss et al., Figure 4), whereas in the authors' model, M/T responses within the same session are highly reproducible. This is an important point to consider and address, since it constrains how much of the variability in M/T responses can be attributed to adult neurogenesis in the olfactory bulb versus other inhibitory network mechanisms that do not rely on neurogenesis. In the authors' model, the variability in M/T responses observed across days emerges from adult-born neurogenesis, but neurogenesis need not be the main source of the variability observed in imaging experiments (Shani-Narkiss et al., Figure 4).

      Another study (Kato et al., Neuron, 2012, Figure 4) reported that mitral cell responses to odors experienced repeatedly across 7 days tend to sparsen and decrease in amplitude systematically, while mitral cell responses to the same odor on day 1 vs. day 7 when the odor is not presented repeatedly in between seem less affected (although the authors also reported a decrease in the CI for this condition). As such, Kato et al. mostly report decreases in mitral cell odor responses with repeated odor exposure at both the individual and population level, and not so much increases and decreases in the individual mitral cell responses, and stability at the population level.

      (2) In Figure 1, a set of GCs is killed off, and new GCs are integrated in the network as abGC. Following the elimination of 10% of GCs in the network, new cells are added and randomly assigned synaptic weights between these abGCs and MTC, GCs, SACs, and top-down projections from PCx. This is done for 11 days, during which time all GCs have gone through adult neurogenesis.

      Is the authors' assumption here that across the 11 days, all GCs are being replaced? This seems to depart from the known biology of the olfactory bulb granule cells, i.e., GCs survive for a large fraction of the animal's life.

      (3) The authors' model relies on several key assumptions: random projections of MTC from the olfactory bulb to the piriform cortex, random intra-piriform connectivity, and random piriform to bulb connectivity. These assumptions are not necessarily accurate, as recent work revealed structure in the projections from the olfactory bulb to the piriform cortex and structure within the piriform cortex connectivity itself (Fink et al., bioRxiv, 2025; Chae et al., Cell, 2022; Zeppilli et al., eLife, 2021).

      How do the results of the model relating adult neurogenesis in the bulb to drift in the piriform cortex representations change when considering an alternative scenario in which the olfactory bulb to piriform and intra-piriform connectivity is not fully distributed and indistinguishable from random, but rather is structured?

      (4) I didn't understand the logic of the low-dimensional space analysis for M/T cells and piriform cortex neurons (Figures 2 & 3). In the authors' model, the full-ensemble M/T responses are reorganized over time, presumably due to the adult-born neurogenesis. Analyzing a lower-dimensional projection of the ensemble trajectories reveals a lower degree of re-organization. This is the same for the piriform cortex, but relatively, the piriform ensembles displayed in a low-dimensional embedding appear to drift more compared to the M/T ensembles.

      This analysis triggers a few questions: which representation is relevant for the brain function - the high or the low-dimensional projection? What fraction of response variance is included in the low-dimensional space analysis? How did the authors decide the low-dimensional cut-off? Why does STDP cause more drift in piriform cortex ensembles vs. M/T ensembles? Is this because of the assumed higher dimensionality of the piriform cortex representations compared to the mitral cells?
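      To illustrate how these metrics can dissociate, here is a small synthetic sketch (random stand-in data, not the authors' model): responses confined to a low-dimensional subspace plus full-rank day-to-day noise produce low full-ensemble correlations while correlations in the projected space stay high.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n_days, n_odors, n_cells, k = 12, 8, 200, 3

      # Stand-in data (all names hypothetical): each odor evokes a fixed
      # pattern confined to a k-dimensional subspace; independent day-to-day
      # variability stands in for the reorganization produced by neurogenesis.
      axes = rng.normal(size=(k, n_cells))            # shared low-d axes
      coords = rng.normal(size=(n_odors, k))          # stable per-odor coordinates
      signal = coords @ axes                          # odors x cells
      data = signal[None] + 4.0 * rng.normal(size=(n_days, n_odors, n_cells))

      # Estimate the low-d axes by PCA (SVD) on day-averaged responses.
      mean_resp = data.mean(axis=0)
      mean_resp -= mean_resp.mean(axis=0)
      _, _, Vt = np.linalg.svd(mean_resp, full_matrices=False)
      proj = data @ Vt[:k].T                          # days x odors x k

      def day1_similarity(M):
          """Correlation of each day's concatenated odor responses with day 1."""
          V = M.reshape(M.shape[0], -1)
          return [round(np.corrcoef(V[0], V[d])[0, 1], 2) for d in range(1, len(V))]

      print("full ensemble:", day1_similarity(data))  # low correlations
      print("low-d space  :", day1_similarity(proj))  # stays high
      ```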

      (5) Could the authors comment on whether STDP at abGC synapses and its impact on decreasing drift represent a new insight, and also put it into context? Several studies (e.g., from the Lledo, Murthy, and Komiyama groups) reported that abGCs integrate into the network in an activity-dependent manner, and not randomly, and as such stabilize the active neuronal responses, which is consistent with the authors' report.

      Relatedly, I couldn't find anywhere in the manuscript which synapses involving abGCs they focus on, or what the relative contribution is of the various plastic synapses shown in the cartoon in Figure 4A1 (circles and triangles).

      (6) The study would be strengthened, in my opinion, by including specific testable predictions that the authors' models make, which can be further food for thought for experimentalists. How does suppression of adult-born neurogenesis in the OB impact the stability of mitral cell odor responses? How about piriform cortex ensembles?

    3. Reviewer #2 (Public review):

      Summary:

      The authors address a critical problem in olfactory coding. It has long been known that adult neurogenesis, specifically in the form of adult-born granule cells that embed into the existing inhibitory networks of the olfactory bulb, can potentially alter the responses of Mitral/Tufted neurons that project activity to the Piriform Cortex and to other areas of the brain. Fundamentally, it would seem that these granule cells could alter the stability of neural codes in the OB over time. The authors develop a spiking network model to explore how stability can be achieved both in the OB over time and in the PC, which receives inputs. The model recapitulates published activity recordings of M/T cells and shows how activity in different M/T cells from the same glomerulus shifts over time in ways that, in spite of the shift, preserve population/glomerular level codes. However, these different M/T cells fan out onto different pyramidal cells of the PC, which gives rise to instability at that level. STDP, then, is necessary to maintain stability at the PC level as long as odor environments remain constant. These results may also apply to a similar neurogenesis-based change in the Dentate Gyrus, which generates instability in CA1/3 regions of the hippocampus.

      Strengths:

      A robust network model that untangles important, seemingly contradictory mechanisms that underlie olfactory coding.

      Weaknesses:

      The work is a significant contribution to understanding olfactory coding. But the manuscript would benefit from a brief discussion of why neurogenesis occurs in the first place - e.g., injury, ongoing needs for plasticity, and adapting to turnover of ORNs. There is literature on this topic. It seems counterintuitive to have a process in the MOB (and for that matter in the DG) that potentially disrupts the ability to generate stable codes both in the MOB and PC, and in particular a disruption that requires two different mechanisms - multiple M/T cells per glomerulus in the MOB and STDP in the PC - to counteract.

      Given that neurogenesis has an important function, and a mechanism is in place to compensate for it in the MOB, why would it then be disrupted in fan-out projections to the PC? The answer may lie in the need for fan-out projections so that pyramidal neurons in the PC can combinatorially represent many different inputs from the MOB. So something like STDP would be needed to maintain stability in the face of the need for this coding strategy.

      This kind of discussion, or something like it, would help readers understand why these mechanisms occur in the first place. It is interesting that PC stability requires that odor environments be stable, and that this stability drives PC representational stability. This result suggests experimental work to test this hypothesis. As such, it is a novel outcome of the research.

    4. Reviewer #3 (Public review):

      Summary

      The authors set out to explore the potential relationship between adult neurogenesis of inhibitory granule cells in the olfactory bulb and cumulative changes over days in odor-evoked spiking activity (representational drift) in the olfactory stream. They developed a richly detailed spiking neuronal network model based on Izhikevich (2003), allowing them to capture the diversity of spiking behaviors of multiple neuron types within the olfactory system. This model recapitulates the circuit organization of both the main olfactory bulb (MOB) and the piriform cortex (PCx), including connections between the two (both feedforward and corticofugal). Adult neurogenesis was captured by shuffling the weights of the model's granule cells, preserving the distribution of synaptic weights. Shuffling of granule cell connectivity resulted in cumulative changes in stimulus-evoked spiking of the model's M/T cells. Individual M/T cell tuning changed with time, and ensemble correlations dropped sharply over the temporal interval examined (long enough that almost all granule cells in the model had shuffled their weights). Interestingly, these changes in responsiveness did not disrupt low-dimensional stability of olfactory representations: when projected into a low-dimensional subspace, population vector correlations in this subspace remained elevated across the temporal interval examined. Importantly, in the model's downstream piriform layer, this was not the case. There, shuffled GC connectivity in the bulb resulted in a complete shift in piriform odor coding, including for low-dimensional projections. This is in contrast to what the model exhibited in the M/T input layer. Interestingly, these changes in PCx extended to the geometrical structure of the odor representations themselves. Finally, the authors examined the effect of experience on representational drift. Using an STDP rule, they allowed the inputs to and outputs from adult-born granule cells to change during repeated presentations of the same odor. This stabilized stimulus-evoked activity in the model's piriform layer.

      Strengths

      This paper suggests a link between adult neurogenesis in the olfactory bulb and representational drift in the piriform cortex. Using an elegant spiking network that faithfully recapitulates the basic physiological properties of the olfactory stream, the authors tackle a question of longstanding interest in a creative and interesting manner. As a purely theoretical study of drift, this paper presents important insights: synaptic turnover of recurrent inhibitory input can destabilize stimulus-evoked activity, but only to a degree, as representations in the bulb (the model's recurrent input layer) retain their basic geometrical form. However, this destabilized input results in profound drift in the model's second (piriform) layer, where both the tuning of individual neurons and the layer's overall functional geometry are restructured. This is a useful and important idea in the drift field, and to my knowledge, it is novel. The bulb is not the only setting where inhibitory synapses exhibit turnover (whether through neurogenesis or synaptic dynamics), and so this exploration of the consequences of such plasticity on drift is valuable. The authors also elegantly explore a potential mechanism to stabilize representations through experience, using an STDP rule specific to the inhibitory neurons in the input layer. This has an interesting parallel with other recent theoretical work on drift in the piriform (Morales et al., 2025 PNAS), in which STDP in the piriform layer was also shown to stabilize stimulus representations there. It is fascinating to see that this same rule also stabilizes piriform representations when implemented in the bulb's granule cells.

      The authors also provide a thoughtful discussion regarding the differential roles of mitral and tufted cells in drift in piriform and AON and the potential roles of neurogenesis in archicortex.

      In general, this paper puts an important and much-needed spotlight on the role of neurogenesis and inhibitory plasticity in drift. In this light, it is a valuable and exciting contribution to the drift conversation.

      Weaknesses

      I have one major, general concern that I think must be addressed to permit proper interpretation of the results.

      I worry that the authors' model may confuse thinking on drift in the olfactory system, because of differences in the behavior of their model from known features of the olfactory bulb. In their model, the tuning of individual bulbar neurons drifts over time. This is inconsistent with the experimental literature on the stability of odor-evoked activity in the olfactory bulb.

      In a foundational paper, Bhalla & Bower (1997) recorded from mitral and tufted cells in the olfactory bulb of freely moving rats and measured the odor tuning of well-isolated single units across a five-day interval. They found that the tuning of a single cell was quite variable within a day, across trials, but that this variability did not increase with time. Indeed, their measure of response similarity was equivalent within and across days. In what now reads as a prescient anticipation of the drift phenomenon, Bhalla and Bower concluded: "it is clear, at least over five days, that the cell is bounded in how it can respond. If this were not the case, we would expect a continual increase in relative response variability over multiple days (the equivalent of response drift). Instead, the degree of variability in the responses of single cells is stable over the length of time we have recorded." Thus, even at the level of single cells, this early paper argues that the bulb is stable.

      This basic result has since been replicated by several groups. Kato et al. (2012) used chronic two-photon calcium imaging of mitral cells in awake, head-fixed mice and likewise found that, while odor responses could be modulated by recent experience (odor exposure leading to transient adaptation), the underlying tuning of individual cells remained stable. While experience altered mitral cell odor responses, those responses recovered to their original form at the level of the single neuron, maintaining tuning over extended periods (two months). More recently, the Mizrahi lab (Shani-Narkiss et al., 2023) extended chronic imaging to six months, reporting that single-cell odor tuning curves remained highly similar over this period. These studies reinforce Bhalla and Bower's original conclusion: despite trial-to-trial variability, olfactory bulb neurons maintain stable odor tuning across extended timescales, with plasticity emerging primarily in response to experience. (The Yamada et al., 2017 paper, which the authors here cite, is not an appropriate comparison. In Yamada, mice were exposed daily to odor. Therefore, the changes observed in Yamada are a function of odor experience, not of time alone. Yamada does not include data in which the tuning of bulb neurons is measured in the absence of intervening experience.)

      Therefore, a model that relies on instability in the tuning of bulbar neurons risks giving the incorrect impression that the bulb drifts over time. This difference should be explicitly addressed by the authors to avoid any potential confusion. Perhaps the best course of action would be to fit their model to Mizrahi's data, should this data be available, and see if, when constrained by empirical observation, the model still produces drift in piriform. If so, this would dramatically strengthen the paper. If this is not feasible, then I suggest being very explicit about this difference between the behavior of the model and what has been shown empirically. I appreciate that in the data there is modest drift (e.g., Shani-Narkiss' Figure 8C), but the changes reported there really are modest compared to what is exhibited by the model. A compromise would be to simply apply these metrics to the model and match the model's similarity to the Shani-Narkiss data. Then the authors could ask what effect this has on drift in piriform.

      The risk here is that people will conclude from this paper that drift in piriform may simply be inherited from instability in the bulb. This view is inconsistent with what has been documented empirically, and so great care is warranted to avoid conveying that impression to the community.

      Major comments (all related to the above point)

      (1) Lines 146-168: The authors find in their model that "individual M/T cells changed their responses to the same odor across days due to adult-neurogenesis, with some cells decreasing the firing rate responses (Fig.2A1 top) while other cells increased the magnitude of their responses (Fig. 2A2 bottom, Fig. S2)" they also report a significant decrease in the "full ensemble correlation" in their model over time. They claim that these changes in individual cell tuning are "similar to what has been observed by others using calcium imaging of M/T cell activity (Kato et al., 2012 and Yamada et al., 2017)" and that the decrease in full ensemble correlation is "consistent with experimental observations (Yamada et al., 2017)." However, the conditions of the Kato and Yamada experiments that demonstrate response change are not comparable here, as odors were presented daily to the animals in these experiments. Therefore, the changes in odor tuning found in the Kato and Yamada papers (Kato Figure 4D; Yamada Figure 3E) are a function of accumulated experience with odor. This distinction is crucial because experience-induced changes reflect an underlying learning process, whereas changes that simply accumulate over time are more consistent with drift. The conditions of their model are more similar to those employed in other experiments described in Kato et al. 2012 (Figure 6C) as well as Shani-Narkiss et al. (2023), in which bulb tuning is measured not as a function of intervening experience, but rather as a function of time (Kato's "recovery" experiment). What is found in Kato is that even across two months, the tuning of individual mitral cells is stable. What alters tuning is experience with odor, the core finding of both the Kato et al., 2012 paper and also Yamada et al., 2017. It is crucial that this is clarified in the text.

      (2) The authors show that in a reduced-space correlation metric, the correlation of low-dimensional trajectories "remained high across all days"..."consistent with a recent experimental study" (Shani-Narkiss et al., 2023). It is true that in the Shani-Narkiss paper, a consistent low-dimensional response is found across days (t-SNE analysis in Shani-Narkiss Figure 7B). However, the key difference between the Shani-Narkiss data and the results reported here is that Shani-Narkiss also observed relative stability in the native space (Shani-Narkiss Figure 8). They conclude that they "find a relatively stable response of single neurons to odors in either awake or anesthetized states and a relatively stable representation of odors by the MC population as a whole (Figures 6-8; Bhalla and Bower, 1997)." This should be better clarified in the text.

      (3) In the discussion, the authors state that "In the MOB, individual M/T cells exhibited variable odor responses akin to gain control, altering their firing rate magnitudes over time. This is consistent with earlier experimental studies using calcium-imaging." (L314-6). Again, I disagree that these data are consistent with what has been published thus far. Changes in gain would have resulted in increased variability across days in the Bhalla data. Moreover, changes in gain would be captured by Kato's change index ("To quantify the changes in mitral cell responses, we calculated the change index (CI) for each responsive mitral cell-odor pair on each trial (trial X) of a given day as (response on trial X - the initial response on day 1)/(response on trial X + the initial response on day 1). Thus, CI ranges from −1 to 1, where a value of −1 represents a complete loss of response, 1 represents the emergence of a new response, and 0 represents no change." Kato et al.). This index will capture changes in gain. However, as shown in Figure 4D (red traces), Figure 6C (Recovery and Odor set B during odor set A experience and vice versa), the change index is either zero or near zero. If the authors wish to claim that their model is consistent with these data, they should also compute Kato's change index for M/T odor-cell pairs in their model and show that it also remains at 0 over time, absent experience.
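      For reference, the quoted definition in symbols, with R_X the response on trial X and R_1 the initial response on day 1 (notation mine):

      ```latex
      \mathrm{CI}_X = \frac{R_X - R_1}{R_X + R_1}, \qquad \mathrm{CI}_X \in [-1, 1]
      ```

      Note that a pure gain change R_X = g R_1 yields CI = (g - 1)/(g + 1), which is nonzero whenever g is not 1; a change index that stays at zero therefore does rule out systematic gain changes.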

    1. eLife Assessment

      This valuable study compares auditory cortex responses to sounds and cochlear implant stimulation measured with surface electrode grids in rats. Beyond the reduced frequency resolution of cochlear implants observed previously, this study suggests key discrepancies between neuronal representations of cochlear stimulations and natural sounds. However, the evidence for this potentially interesting result is incomplete because there is a lack of evidence for the effectiveness of the comparison method. This study is of interest to researchers in the auditory neuroscience field and clinicians implementing treatments with cochlear implants.

    2. Reviewer #1 (Public review):

      Summary:

      This manuscript addresses an important question: whether cortical population codes for cochlear-implant (CI) stimulation resemble those for natural acoustic input or constitute a qualitatively different representation. The authors record intracranial EEG (µECoG) responses to pure tones in normal-hearing rats and to single-channel CI pulses in bilaterally deafened, acutely implanted rats, analysing the data with ERP/high-gamma measures, tensor component analysis (TCA), and information-theoretic decoding. Across several readouts, the acoustic condition supports better single-trial stimulus classification than the CI condition. However, stronger decoding does not, on its own, establish that the acoustic responses instantiate a "richer" cortical code, and the evidence for orderly spatial organisation is not compelling for CI and is less evident than expected for normal hearing, given prior knowledge. The overall narrative is interesting, but at present, the conclusions outpace the data because of statistical, methodological, and presentation issues.

      Strengths:

      The study poses a timely, clinically relevant question with clear implications for CI strategy. The analytical toolkit is appropriate: µECoG captures mesoscale patterns; TCA offers a transparent separation of spatial and temporal structure; and mutual-information decoding provides an interpretable measure of single-trial discriminability. Within-subject recordings in a subset of animals, in principle, help isolate modality effects from inter-animal variability. Where analyses are most direct, the acoustic condition yields higher single-trial decoding accuracy, which is a meaningful and clearly presented result.

      Weaknesses:

      Several limitations constrain how far the conclusions can be taken. Parts of the statistical treatment do not match the data structure: some comparisons mix paired and unpaired animals but are analysed as fully paired, raising concerns about misestimated uncertainty. Methodological reporting is incomplete in places; essential parameters for both acoustic and electrical stimulation, as well as objective verification of implantation and deafening, are not described with sufficient detail to support confident interpretation or replication. Figure-level clarity also undermines the message. In Figure 2, non-significant slopes for CI, repeated identification of a single "best channel," mismatched axes, and unclear distinctions between example and averaged panels make the assertion of spatial organisation unconvincing; importantly, the normal-hearing panels also do not display tonotopy as clearly as expected, which weakens the key contrast the paper seeks to establish. Finally, the decoding claims would be strengthened by simple internal controls, such as within-modality train/test splits and decoding on raw ERP/high-gamma features to demonstrate that poor cross-modal transfer reflects genuine differences in the underlying responses rather than limitations of the modelling pipeline.
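      Concretely, such internal controls could look like the following sketch (random stand-in features and labels; the decoder choice and data shapes are hypothetical, not the authors' pipeline):

      ```python
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      # Stand-in single-trial features (shapes hypothetical): trials x features,
      # e.g., flattened ERP or high-gamma responses across grid channels.
      X_acoustic = rng.normal(size=(400, 60))
      y_acoustic = rng.integers(0, 8, size=400)        # e.g., 8 tone frequencies
      X_ci = rng.normal(size=(400, 60))
      y_ci = rng.integers(0, 8, size=400)              # e.g., 8 CI channels

      # Control 1: within-modality cross-validated decoding. High accuracy for
      # both modalities would show the pipeline itself is not the bottleneck.
      # (On this random stand-in data, accuracy sits at chance, ~0.125.)
      for name, X, y in [("acoustic", X_acoustic, y_acoustic), ("CI", X_ci, y_ci)]:
          acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
          print(f"within-{name} accuracy: {acc:.2f}")

      # Control 2: cross-modal transfer - train on one modality, test on the
      # other (assumes a common stimulus labelling across modalities).
      clf = LinearDiscriminantAnalysis().fit(X_acoustic, y_acoustic)
      print(f"acoustic->CI transfer accuracy: {clf.score(X_ci, y_ci):.2f}")
      ```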

    3. Reviewer #2 (Public review):

      Summary:

      This article reports measurements of iEEG signals over the rat auditory cortex during cochlear implant or sound stimulation in separate groups of rats. The observations indicate some spatial organization of responses to cochlear implant stimulation, but one that is very different from the organization of responses to sounds.

      Strengths:

      The study includes interesting analyses of the sound and cochlear implant representation structure based on decoders.

      Weaknesses:

      The observation that responses to cochlear implant stimulation are spatially organized is not new (e.g., Adenis et al. 2024).

      The claim that spatial and temporal dimensions contribute information about the sound is also not new; there is a large literature on this topic. Moreover, the results shown here are extremely weak. They show similar levels of information in the spatial and temporal dimensions, and no synergy between the two dimensions. This is, however, likely the consequence of high measurement noise leading to poor accuracy in the information estimates, as the authors state.
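      For what it is worth, one crude way to operationalize this comparison is via decoder-based information estimates; a sketch on stand-in labels (all quantities hypothetical, not the authors' analysis):

      ```python
      import numpy as np
      from sklearn.metrics import mutual_info_score

      rng = np.random.default_rng(1)
      stim = rng.integers(0, 8, 500)                   # true stimulus per trial

      def noisy_decode(p_correct):
          """Stand-in decoder output: correct with probability p_correct."""
          guess = rng.integers(0, 8, 500)
          return np.where(rng.random(500) < p_correct, stim, guess)

      preds = {"spatial": noisy_decode(0.5),
               "temporal": noisy_decode(0.5),
               "joint": noisy_decode(0.6)}

      # Decoder-based information in bits (mutual_info_score returns nats).
      # Note: with few trials these estimates are biased upward.
      I = {k: mutual_info_score(stim, p) / np.log(2) for k, p in preds.items()}

      # Positive values suggest synergy; negative values suggest redundancy.
      synergy = I["joint"] - (I["spatial"] + I["temporal"])
      print({k: round(v, 2) for k, v in I.items()}, "synergy:", round(synergy, 2))
      ```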

      The main claim of the study - the mismatch between cochlear implant and sound representation - is not supported. The responses to each modality are measured in different animals. The authors do not show that they actually can compare representations across animals (e.g., for the same sounds). Without this positive control, there is no reason to think that it is possible to decode from one animal with a decoder trained on another, and the negative result shown by the authors is therefore not surprising.

    4. Reviewer #3 (Public review):

      Summary:

      Through micro-electroencephalography, Hight and colleagues studied how the auditory cortex in its ensemble responds to cochlear implant stimulation compared to classic pure tones. Taking advantage of a double-implanted rat model (micro-ECoG and cochlear implant), they tracked and analyzed changes in the temporal and spatial aspects of the cortical evoked responses in both normal-hearing and cochlear-implanted animals. After establishing that single-trial responses were sufficient to encode the stimuli's properties, the authors then explored several decoder architectures to study the cortex's ability to encode each stimulus modality in a similar or different manner. They conclude that a) intracranial EEG evoked responses can be accurately recorded and did not differ between normal-hearing and cochlear-implanted rats; b) although coarsely spatially organized, CI-evoked responses had higher trial-by-trial variability than pure tones; c) stimulus identity is independently represented by the temporal and spatial aspects of cortical representations and can be accurately decoded by various means from single trials; and d) a pure-tone-trained decoder cannot accurately decode CI-stimulus identity.

      Strengths:

      The model combining micro-ECoG and cochlear implantation, and the methodology used to extract both the Event Related Potentials (ERPs) and High-Gammas (HGs), are very well designed and appropriately analyzed. Likewise, the PCA-LDA and TCA-LDA are powerful tools that take full advantage of the information provided by the cortical ensembles.

      The overall structure of the paper, with a paced and exhaustive progression through each step and evolution of the decoder, is very appreciable and easy to follow. The exploration of single-trial encoding and stimulus identity through the temporal and spatial domains provides new avenues for characterizing the cortical responses to CI stimulation and their central representation. The fact that single trials suffice to decode stimulus identity regardless of modality is of great interest and noteworthy. Although the authors confirm that iEEG remains difficult to transpose to the clinic, the insights provided by the study confirm the potential benefit of using central decoders to help in clinical settings.

      Weaknesses:

      The conclusion of the paper, especially the concept of distinct cortical encoding for each modality, is unfortunately only partially supported by the results, as the authors did not adequately consider fundamental limitations of CI-related stimulation.

      First, the reviewer assumes that the authors stimulated in monopolar mode, which, albeit clinically relevant, notoriously generates high current spread in rodent models. Second, in the averaged BF maps for iEEG (Figure 2A, C), BFs range from 4 to 16 kHz with a predominance of 4 kHz BFs. The lack of BFs at higher frequencies hints at a potential location mismatch between the frequency range sampled at the level of the cortex (low to medium frequencies) and the frequency range covered by the CI, inserted mostly in the first turn-and-a-half of the cochlea (high to medium frequencies). Looking at Figure 2F (and to some extent 2A), most of the CI electrodes elicited responses around the 4 kHz regions, and the averaged maps show a predominance of CI-3-4 across the cortex (Figure 2C, H), from areas with 4 kHz BFs to areas with 16 kHz BFs. It is doubtful that CI-3-4 are located near the 4 kHz region based on Müller's work (1991) on the frequency representation in the rat cochlea.

      Taken together with the flat Pearson correlations, the decoder examples showing a strong ability to identify CI-4 and CI-3, and Figure 8D, E presenting a strong prediction of 4 kHz and 8 kHz for all CI electrodes when using a pure-tone-trained decoder, it is possible that current spread ended up stimulating higher turns of the cochlea, or even the modiolus, in a non-specific manner, greatly reducing (or smearing) the place-coding/frequency resolution of each electrode. This in turn could explain the coarse topographic (or coarsely tonotopic, according to the manuscript) organization of the cortical responses. Thus, the conclusion that there are distinct encodings for each modality is biased, as it might not account for monopolar smearing. To that end, and since this is the study's main message and title, the study would have benefited from a subgroup of animals receiving bipolar stimulation (or any focused strategy, since these provide reduced current spread) to compare the spatial organization of iEEG responses and the performance of the different decoders, so as to rule out current spread and strengthen the conclusion.

      Nevertheless, the reviewer wants to reiterate that the study proposed by Hight et al. is well constructed and relevant to the field, and that the overall proposal of improving patient performance and helping their adaptation in the first months of CI use by studying central responses should be pursued, as it might help establish new guidelines or create new clinical tools.

  2. docdrop.org
    1. The discussion of this issue is complex, but in brief, many of the difficulties teachers encounter with children who are different in background from themselves are related to this underlying attitudinal difference in the appropriate display of explicitness and personal power in the classroom.

      This statement accurately reveals that the core dilemma of cross-cultural education lies in the conflicting interactions between teachers and students caused by cultural differences. This conflict stems from differing understandings of how authority is expressed. Different cultures define respect differently. In African American culture, direct instruction is viewed as a sign of responsibility and care; whereas, middle-class white culture emphasizes equality through negotiation. Educational equity cannot be achieved solely through resource investment; it also requires the deconstruction of cultural power relations. If teachers fail to reflect on their own cultural assumptions about authority, any reform of teaching methods will likely be ineffective.

    2. I would like to suggest that some of the problems may certainly be as this young man relates. Yet, from my work with teachers in many settings, I have come to believe that a major portion of the problem may also rest with how these three groups of teachers interact and use language with their students.

      This analysis reveals a deeper challenge to educational equity: even well-intentioned teachers can inadvertently marginalize minority students if they lack the skills to engage in culturally responsive teaching. Educational reform must therefore prioritize cultural inclusion, not simply resource allocation.

    3. The clash between school culture and home culture is actualized in at least two ways. When a significant difference exists between the students' culture and the school's culture, teachers can easily misread students' aptitudes, intent, or abilities as a result of the difference in styles of language use and interactional patterns.

      This statement profoundly reveals the core mechanism of cultural conflict in education, pointing out that when there are significant differences between school culture and family culture, teachers may misjudge students' abilities, intentions, or talents due to differences in language use and interaction patterns. It shows that the core challenge of educational equity lies in whether schools can truly "see" and respect students' cultural backgrounds. If educators measure students solely against the prevailing culture, differences become "deficiencies." However, if differences are viewed as resources, cultural conflict can be transformed into opportunities for educational innovation. Ultimately, the mission of education is not to eliminate differences, but to build a shared space for growth within them.

    1. eLife Assessment

      This important study uses simultaneous EEG and fMRI recordings to shed light on the relationship between alpha and gamma oscillations and specific cortical layers. The sophisticated methodology provides solid evidence for correlations between oscillatory power and the strength and contents of fMRI signals in different cortical layers, though some caveats remain. This paper will be of interest to neuroscientists studying the role and mechanisms of alpha and gamma oscillations.

    2. Reviewer #1 (Public review):

      In this manuscript, Clausner and colleagues use simultaneous EEG and fMRI recordings to clarify how visual brain rhythms emerge across layers of early visual cortex. They report that gamma activity correlates positively with feature-specific fMRI signals in superficial and deep layers. By contrast, alpha activity generally correlated negatively with fMRI signals, with two higher frequencies within the alpha reflecting feature-specific fMRI signals. This feature-specific alpha code indicates an active role of alpha oscillations in visual feature coding, providing compelling evidence that the functions of alpha oscillations go beyond cortical idling or feature-unspecific suppression.

      The study is very interesting and timely. Methodologically, it is state-of-the-art. The findings on a more active role of alpha activity that goes beyond the classical idling or suppression accounts are in line with recent findings and theories. In sum, this paper makes a very nice contribution. I still have a few comments that I outline below, regarding the data visualization, some methodological aspects, and a couple of theoretical points.

      (1) The authors put a lot of effort into the figure design. For instance, I really like Figure 1, which conveys a lot of information in a nice way. Figures 3 and 4, however, seem overengineered, and it takes a lot of time to distill the contents from them. The fact that they have a supplementary figure explaining the composition of these figures already indicates that the authors realized this is not particularly intuitive. First of all, the ordering of the conditions is not really intuitive. Second, the indication of significance through saturation does not really work; I have a hard time discerning the more and less saturated colors. And finally, the white dots do not really help either. I don't fully understand why they are placed where they are placed (e.g., in Figure 3). My suggestion would be to get rid of one of the factors (I think the voxel selection threshold could go: the authors could run with one of the stricter ones, and the rest could go into the supplement?) and then turn this into a few line plots. That would be so much easier to digest.

      (2) The division between high- and low-frequency alpha in the feature-specific signal correspondence is very interesting. I am wondering whether there is an opposite effect in the feature-unspecific signal correspondence. Would the high-frequency alpha show less of a feature-unspecific correlation with the BOLD?

      (3) In the discussion (line 330 onwards), the authors mention that low-frequency alpha is predominantly related to superficial layers, referencing Figure 4A. I have a hard time appreciating this pattern there. Can the authors provide some more information on where to look?

      (4) How did the authors deal with the signal-to-noise ratio (SNR) across layers, where the presence of larger drain veins typically increases BOLD (and thereby SNR) in superficial layers? This may explain the pattern of feature-unspecific effects in the alpha (Figure 3). Can the authors perform some type of SNR estimate (e.g., split-half reliability of voxel activations or similar) across layers to check whether SNR plays a role in this general pattern?
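      One such SNR estimate could be the split-half reliability of voxel activation maps computed per layer; a generic sketch under assumed data shapes (trials x voxels; all numbers are stand-ins, not the authors' data):

      ```python
      import numpy as np

      rng = np.random.default_rng(2)

      def split_half_reliability(trials_by_voxels, n_splits=100):
          """Correlate mean voxel-activation maps from random half-splits of
          trials, then Spearman-Brown correct to full-data reliability."""
          n = trials_by_voxels.shape[0]
          rs = []
          for _ in range(n_splits):
              perm = rng.permutation(n)
              a = trials_by_voxels[perm[: n // 2]].mean(axis=0)
              b = trials_by_voxels[perm[n // 2:]].mean(axis=0)
              rs.append(np.corrcoef(a, b)[0, 1])
          r = float(np.mean(rs))
          return 2 * r / (1 + r)

      # Stand-in per-layer data: a fixed activation pattern plus trial noise.
      # Comparing layers would reveal whether superficial layers look more
      # reliable simply because draining veins boost their SNR.
      for layer in ("deep", "middle", "superficial"):
          data = rng.normal(size=(40, 300)) + rng.normal(size=300)
          print(layer, round(split_half_reliability(data), 2))
      ```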

      (5) The GLM used for modelling the fMRI data included lots of regressors, and the scanning was intermittent. How much data was available in the end for sensibly estimating the baseline? This was not really clear to me from the methods (or I might have missed it). This seems relevant here, as the sign of the beta estimates plays a major role in interpreting the results here.

      (6) Some recent research suggests that gamma activity, much in contrast to the prevailing view of the mechanism for feedforward information propagation, relates to the feedback process (e.g., Vinck et al., 2025, TiCS). This view kind of fits with the localization of gamma to the deep layer here?

      (7) Another recent review (Stecher et al., 2025, TiNS) discusses feature-specific codes in visual alpha rhythms quite a bit, and it might be worth discussing how your results align with the results reported there.

    3. Reviewer #2 (Public review):

      The authors address a long-standing controversy regarding the functional role of neural oscillations in cortical computations and layer-specific signalling. Several studies have implicated gamma oscillations in bottom-up processing, while lower-frequency oscillations have been associated with top-down signalling. Therefore, the question the authors investigate is both timely and theoretically relevant, contributing to our understanding of feedforward and feedback communication in the brain. This paper presents a novel and complicated data acquisition technique, the application of simultaneous EEG and fMRI, to benefit from both temporal and spatial resolution. A sophisticated data analysis method was executed in order to understand the underlying neural activity during a visual oddball task. Figures are well-designed and appropriately represent the results, which seem to support the overall conclusions. However, some of the claims (particularly those regarding the contribution of gamma oscillations) feel somewhat overstated: the results do offer some significant evidence, but most effects look more like suggestive trends. Nonetheless, the paper is well-written, addresses a relevant and timely research question, introduces a novel and elegant analysis approach, and presents interesting findings. Further investigation will be important to strengthen and expand upon these insights.

      One of the main strengths of the paper lies in the use of a well-established and straightforward experimental paradigm (the visual oddball task). As a result, the behavioural effects reported were largely expected and reassuring to see replicated. The acquisition technique used is very novel, and while this may introduce challenges for data analysis, the authors appear to have addressed these appropriately.

      Later findings are very interesting, and mainly in line with our current understanding of feedback and feedforward signalling. However, the layer weight calculation is not sufficiently explained in the manuscript. While it is discussed in the methods, it would help to briefly explain in the results how these weights are calculated, so that the reader can better follow what is being interpreted.

      Line 104 states there is one virtual channel per hemisphere for low and high frequencies. It may be helpful to include the number of channels (n=4) in the results section, as specified in the methods. Also, this raises the question of whether a single virtual channel (i.e., voxel) provides sufficient information for reproducibility.

      One area that would benefit from further clarification is the interpretation of gamma oscillations. The evidence for gamma involvement in the observed effects appears somewhat limited. For example, no significant gamma-related clusters were found for the feature-unspecific BOLD signal (Figure 2). Significant effects emerged only when the analysis was restricted to positively responding voxels, and even then, only for the contrast between EEG-coherent and EEG-incoherent conditions in the feature-specific BOLD response. It remains unclear how to interpret this selective emergence of gamma-related effects. Given previous literature linking gamma to feedforward processing, one might expect more robust involvement in broader, feature-unspecific contrasts. The current discussion presents the gamma-related findings with some confidence, and the manuscript would benefit from a more nuanced reflection on why these effects may not have appeared more broadly. The explanation provided in line 230, that restricting the analysis to positively responding voxels may have increased the SNR, is reasonable, but it may not fully account for the absence of gamma effects in V1's feature-unspecific response. Including the actual beta values from Figure 4 in the legend or main text would also help readers better assess the strength and specificity of the reported effects.

      Relating the behavioural findings to the underlying neural activity: could the authors test, on a trial-by-trial basis, how behavioural performance relates to the BOLD signal / oscillatory activity change? Line 305 states that "Since behavioural performance in the present study was consistently high at 94% on average and participants were instructed to respond quickly to potential oddball stimuli, a higher alpha frequency might reflect a more successful stimulus encoding and hence faster and more accurate behavioural performance." This analysis might also help to relate the findings to the lower vs upper alpha functionality difference.

      In Figure 4, the EEG alpha specificity plot shows relatively large error bars, and there is visible overlap between the lower and upper alpha in both congruent and incongruent conditions. While upper alpha shows a positive slope across conditions and lower alpha remains flat, the interaction appears to be driven by the change from congruent to incongruent in upper alpha. It is worth clarifying whether the simple effects (e.g., lower vs upper within each condition) were tested, given the visual similarity at the incongruent condition. Overall, the significant interaction (p < 0.001, FDR-corrected) is consistent with diverging trends, but a breakdown of simple effects would help interpret the result more clearly. Was there a significant difference between lower and upper alpha in congruent or incongruent conditions?

      Overall, this study provides a valuable contribution to the literature on oscillatory dynamics and laminar fMRI, though some interpretations would benefit from further clarification or qualification.

    4. Reviewer #3 (Public review):

      Summary:

      Clausner et al. investigate the relationship between cortical oscillations in the alpha and gamma bands and the feature-specific and feature-unspecific BOLD signals across cortical layers. Using a well-designed stimulus and GLM, they show a method by which different BOLD signals can be differentiated and investigated alongside multiple cortical oscillatory frequencies. In addition to the previously reported positive relationship between gamma and BOLD signals in superficial layers, they show a relationship between gamma and feature-specific BOLD in the deeper layers. Alpha-band power is shown to have a negative relationship with the negative BOLD response for both feature-specific and feature-unspecific contrasts. When separated into lower (8-10Hz) and upper (11-13Hz) alpha oscillations, they show that higher frequency alpha showed a significantly stronger negative relationship with congruency, and can therefore be interpreted as more feature-specific than lower frequency alpha.

      Strengths:

      The use of interleaved EEG-fMRI has provided a rich dataset that can be used to evaluate the relationship of cortical layer BOLD signals with multiple EEG frequencies. The EEG data were of sufficient quality to see the modulation of both alpha-band and gamma-band oscillations in the group mean VE-channel TFS. The good EEG data quality is backed up with a highly technical analysis pipeline that ultimately enables the interpretation of the cortical layer relationship of the BOLD signal with a range of frequencies in the alpha and gamma bands. The stimulus design allowed for the generation of multiple contrasts for the BOLD signal and the alpha/gamma oscillations in the GLM analysis. Feature-specific and unspecific BOLD contrasts are used with congruently or incongruently selected EEG power regressors to delineate between local and global alpha modulations. A transparent approach is used for the selection of voxels contributing to the final layer profiles, for which statistical analysis is comprehensive but uses an alternative statistical test, which I have not seen in previous layer-fMRI literature.

      A significant negative relationship between alpha-band power and the BOLD signal was seen in congruently (EEGco) selected voxels (predominantly in superficial layers) and in feature-contrast (EEGco-inco) selected (superficial and deep layers). When separated into lower (8-10Hz) and upper (11-13Hz) alpha oscillations, they show that higher frequency alpha showed a significantly stronger negative relationship with congruency than lower frequency alpha. This is interpreted as a frequency dissociation in the alpha-BOLD relationship, with upper frequency alpha being feature-specific and lower frequency alpha corresponding to general modulation. These results are a valuable addition to the current literature and improve our current understanding of the role of cortical alpha oscillations.

      There is not much work in the literature on the relationship between alpha power and the negative BOLD response (NBR), so the data provided here are particularly valuable. The negative relationship between the NBR and alpha power shown here suggests that there is a reduction in alpha power, linked to locally reduced BOLD activity, which is in line with the previously hypothesized inhibitory nature of alpha.

      Weaknesses:

      It is not entirely clear how the draining vein effect seen in GE-BOLD layer-fMRI data has been accounted for in the analysis. For the contrast of congruent-incongruent, it is assumed that the underlying draining effect will be the same for both conditions, and so should be cancelled out. However, for the other contrasts, it is unclear how the final layer profiles aren't confounded by the bias in BOLD signal towards the superficial layers. Many of the profiles in Figure 3 and Figure 4A show an increased negative correlation between alpha power and the BOLD signal towards the superficial layers.

      When investigating whether lower alpha (8-10 Hz) and upper alpha (11-13 Hz) are two different sources of alpha, it would be beneficial to show whether this effect is only seen at the group level or can also be seen in single subjects. Inter-subject variability in peak alpha frequency could result in some subjects having a single low-alpha peak and some a single high-alpha peak, rather than two peaks from different sources.
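      At the single-subject level, this could be as simple as counting peaks in each subject's 8-13 Hz power spectrum; a sketch on a made-up spectrum (frequencies and values are stand-ins):

      ```python
      import numpy as np
      from scipy.signal import find_peaks

      # Made-up 8-13 Hz power spectrum for one subject: two Gaussian bumps.
      freqs = np.arange(8.0, 13.1, 0.25)
      spectrum = (np.exp(-(freqs - 9.0) ** 2 / 0.5)
                  + 0.8 * np.exp(-(freqs - 12.0) ** 2 / 0.5))

      # Two detected peaks would support two alpha sources in this subject;
      # a single peak whose frequency varies across subjects would instead
      # support the inter-subject-variability account.
      peaks, _ = find_peaks(spectrum, prominence=0.1)
      print("alpha peaks at:", freqs[peaks], "Hz")
      ```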

      The figure layout used to present the main findings throughout is an innovative way to present so much information, but it is difficult to decipher the main findings described in the text. The readability would be improved if the example (Appendix 0 - Figure 1) in the supplementary material is included as a second panel inside Figure 3, or, if this is not possible, the example (Appendix 0 - Figure 1) should be clearly referred to in the figure caption.

    1. recontextualization where every Art and Science of Knowing is remade by the deep logic of dynamic proximity

      recontextualization

      meaning/intent-fully named associative reticulate complexes spanning their own meaningful contexts or rather connected conPlexes

      the Nameless Book of Heaven - myth metaphor and mystery

    1. eLife Assessment

      This important study shows that visual search for upright and rotated objects is affected by rotating participants relative to the VR scene and the gravitational reference frame. However, the evidence supporting this conclusion is incomplete, given the authors' use of normalized response times and the assumption that object recognition across rotations requires mental rotation.

    2. Reviewer #1 (Public review):

      Summary:

      The current study sought to understand which reference frames humans use when doing visual search in naturalistic conditions. To this end, they had participants do a visual search task in a VR environment while manipulating factors such as object orientation, body orientation, gravitational cues, and visual context (where the ground is). They generally found that all cues contributed to participants' performance, but visual context and gravitational cues impacted performance the most, suggesting that participants represent space in an allocentric reference frame during visual search.

      Strengths:

      The study is valuable in that it sheds light on which cues participants use during visual search. Moreover, I appreciate the use of VR and precise psychophysical predictions (e.g., slope vs. intercept) to dissociate between possible reference frames.

      Weaknesses:

      It's not clear what the implications of the study are beyond visual search. Moreover, I have some concerns about the interpretation of Experiment 1, which relies on an incorrect interpretation of mental rotation. Thus, most of the conclusions rely on Experiment 2, which has a small sample size (n = 10). Finally, the statistical analyses could be strengthened with measures of effect size and non-parametric statistics.

    3. Reviewer #2 (Public review):

      Summary:

      This paper addresses an interesting issue: how is the search for a visual target affected by its orientation (and the viewer's) relative to other items in the scene and gravity? The paper describes a series of visual search tasks, using recognizable targets (e.g., a cat) positioned within a natural scene. Reaction times and accuracy at determining whether the target was present or absent, trial-to-trial, were measured as the target's orientation, that of the context, and of the viewer themselves (via rotation in a flight simulator) were manipulated. The paper concludes that search is substantially affected by these manipulations, primarily by the reference frame of gravity, then visual context, followed by the egocentric reference frame.

      Strengths:

      This work is on an interesting topic, and benefits from using natural stimuli in VR / flight simulator to change participants' POV and body position.

      Weaknesses:

      There are several areas of weakness that I feel should be addressed.

      (1) The literature review/introduction seems to be lacking in some areas. The authors, when contemplating the behavioral consequences of searching for a 'rotated' target, immediately frame the problem as one of rotation, per se (i.e., contrasting only rotation-based explanations; "what rotates and in which 'reference frame[s]' in order to allow for successful search?"). For a reader not already committed to this framing, many natural questions arise that are worth addressing.

      1a) Why do we need to appeal to rotation at all as opposed to, say, familiarity? A rotated cat is less familiar than a typically oriented one. This is a long-standing literature (e.g., Wang, Cavanagh, and Green (1994)), of course, with a lot to unpack.

      1b) What are the triggers for the 'corrective' rotation that presumably brings reference frames back into alignment? What if the rotation had not been so obvious (i.e., for a target that may not have a typical orientation, like a hand, or a ball, or a learned, nonsense object?) or the background had not had such a clear orientation (like a cluttered non-naturalistic background, or a naturalistic backdrop viewed from an unfamiliar POV (e.g., from above), or a naturalistic background in which not all of the elements were rotated)? What, ultimately, is rotated? The entire visual field? Does that mean that searching for multiple targets at different angles of rotation would interfere with one another?

      1c) Relatedly, what is the process by which the visual system comes to know the 'correct' rotation? (Or, alternatively, is 'triggered to realize' that there is a rotation in play?) Is this something that needs to be learned? Is it only learned developmentally, through exposure to gravity? Could it be learned in the context of an experiment that starts with unfamiliar stimuli?

      1d) Why the appeal to natural images? I appreciate any time a study can be moved from potentially too stripped-down laboratory conditions to more naturalistic ones, but is this necessary in the present case? Would the pattern of results have been different if these were typical laboratory 'visual search' displays of disconnected object arrays?

      1e) How should we reconcile rotation-based theories of 'rotated-object' search with visual search results from zero gravity environments (e.g., for a review, see Leone (1998))?

      1f) How should we reconcile the current manipulations with other viewpoint-perspective manipulations (e.g., Zhang & Pan (2022))?

      (2) The presentation/interpretation of results would benefit from more elaboration and justification.

      2a) All of the current interpretations rely on just the RT data. First, the RT results should also be presented in natural units (i.e., seconds/ms), not normalized. As well, results should be shown as violin plots or something similar that captures the distribution - a lot of important information is lost when presenting just one 'average' dot across participants. More fundamentally, I think we need a better accounting of performance (percent correct or d') to help contextualize the RT results. We should at least be offered some visualization (Heitz, 2014) of the speed-accuracy trade-off for each of the conditions; a minimal plotting sketch appears after this list. Following this, the authors should more critically evaluate how any substantial SAT trends could affect the interpretation of results.

      2b) Unless I am missing something, the interpretation of the pattern of results (both qualitatively and quantitatively in their 'relative weight' analysis) relies on how they draw their contrasts. For instance, the authors contrast the two 'gravitational' conditions (target 0 deg versus target 90 deg) as if this were a change in a single variable/factor. But there are other ways to understand these manipulations that would affect the contrasts. For instance, if one considers whether the target was 'consistent' (i.e., typically oriented) with respect to the context, egocentric, and gravitational frames, then the 'gravitational 0 deg' condition is consistent with context and egocentric view, but inconsistent with gravity. And the 'gravitational 90 deg' condition, then, is inconsistent with context and egocentric view, but consistent with gravity. Seen this way, this is not a change in one variable, but three. The same is true of the baseline 0 deg versus baseline 90 deg condition, where again we have a change in all three target-consistency variables. The 'one variable' manipulations then would be: 1) baseline 0 versus visual context 0 (i.e., a change only in the context variable); 2) baseline 0 versus egocentric 0 (a change only in the egocentric variable); and 3) baseline 0 versus gravitational 0 (a change only in the gravitational variable). Other contrasts (e.g., gravitational 90 versus context 90) would showcase a change in two variables (in this case, a change in both context and gravity). My larger point is, again, unless I am really missing something, that the choice of how to contrast the manipulations will affect the 'pattern' of results and thereby the interpretation; a short enumeration of these condition codings is sketched after this list. If the authors agree, this needs to be acknowledged, plausible alternative schemes discussed, and the ultimate choice of scheme defended as the most valid.

      2c) Even with this 'relative weight' interpretation, there are still some patterns of results that seem hard to account for. Primarily, the egocentric condition seems hard to account for under any scheme, and the authors need to spend more time discussing/reconciling those results.

      2d) Some results are just deeply counterintuitive, and so the reader will crave further discussion. Most saliently for me, based on the results of Experiment 2 (specifically, the fact that gravitational 90 had better performance than gravitational 0), designers of cockpits should have all gauges/displays rotate counter to the airplane so that they are always consistent with gravity, not the pilot. Is this indeed a fair implication of the results?

      2e) I really craved some 'control conditions' here to help frame the current results. In keeping with the rhetorical questions posed above in 1a/b/c/d, if/when the authors engage with revisions to this paper, I would encourage the inclusion of at least some new empirical results. For me, the most critical would be to repeat some core conditions with a symmetric target (e.g., a ball), since that would seem to be the only way (given the current design) to tease out confounding nuisance factors such as, say, the general effect of performing search while sideways. (Put another way, the authors would currently have to assume that search performance and non-normalized RTs for a ball target in the baseline condition would be identical to those in the gravitational condition.)
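
      As one concrete way to implement point 2a, here is a minimal plotting sketch; the dictionaries `rts` and `acc` (condition name -> per-participant values) are hypothetical stand-ins, not the study's data, and the exact plot choices are illustrative.

      ```python
      # Minimal sketch: RT distributions per condition plus a speed-accuracy view.
      import numpy as np
      import matplotlib.pyplot as plt

      def plot_rt_and_sat(rts, acc):
          fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
          ax1.violinplot(list(rts.values()), showmeans=True)  # distributions, not one dot
          ax1.set_xticks(range(1, len(rts) + 1), labels=list(rts.keys()))
          ax1.set_ylabel("RT (s)")
          for cond in rts:  # one point per condition: mean RT vs. accuracy
              ax2.scatter(np.mean(rts[cond]), np.mean(acc[cond]), label=cond)
          ax2.set_xlabel("mean RT (s)")
          ax2.set_ylabel("proportion correct")
          ax2.legend()
          plt.tight_layout()
          plt.show()
      ```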
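
      And for point 2b, the alternative consistency coding can be made explicit with a small enumeration; the True/False values below follow the verbal description in that point and are illustrative only.

      ```python
      # Each condition scored for whether the target is typically oriented with
      # respect to the (context, egocentric, gravity) frames; partial, illustrative set.
      conditions = {
          "baseline 0":       (True,  True,  True),
          "baseline 90":      (False, False, False),
          "gravitational 0":  (True,  True,  False),
          "gravitational 90": (False, False, True),
      }

      def n_changed(a, b):
          """How many consistency variables differ between two conditions."""
          return sum(x != y for x, y in zip(conditions[a], conditions[b]))

      print(n_changed("gravitational 0", "gravitational 90"))  # 3, not 1
      print(n_changed("baseline 0", "gravitational 0"))        # 1: gravity only
      ```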

    4. Reviewer #3 (Public review):

      The study tested how people search for objects in natural scenes using virtual reality. Participants had to find targets among other objects, shown upright or tilted. The main results showed that upright objects were found faster and more accurately. When the scene or body was rotated, performance changed, showing that people use cues from the environment and gravity to guide search.

      The manuscript is clearly written and well designed, but there are some aspects related to methods and analyses that would benefit from stronger support.

      First, the sample size is not justified with a power analysis, nor is it explained how it was determined. This is an important point to ensure robustness and replicability.

      Second, the reaction time data were processed using different procedures, such as the use of the median to exclude outliers and an ad hoc cut-off of 50 ms. These choices are not sufficiently supported by a theoretical rationale, and could appear as post-hoc decisions.
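
      For instance, here is a minimal sketch of one commonly used, more principled exclusion rule (a median absolute deviation criterion); the 3-MAD threshold and 50 ms floor are illustrative choices, not recommendations from the manuscript.

      ```python
      # Minimal sketch: exclude RTs more than n_mads MADs from a participant's median.
      import numpy as np

      def mad_filter(rts, n_mads=3.0, floor_ms=50.0):
          rts = np.asarray(rts, dtype=float)
          med = np.median(rts)
          mad = np.median(np.abs(rts - med)) * 1.4826  # scale MAD to match SD under normality
          keep = (np.abs(rts - med) <= n_mads * mad) & (rts >= floor_ms)
          return rts[keep]
      ```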

      Third, the mixed-model analyses are overall well-conducted; however, the specification of the random structure deserves further consideration. The authors included random intercepts for participants and object categories, which is appropriate. However, they did not include random slopes (e.g., for orientation or set size), meaning that variability in these effects across participants was not modelled. This simplification can make the models more stable, but it departs from the maximal random structure recommended by Barr et al. (2013). The authors do not explicitly justify this choice, and a reviewer may question why participant-specific variability in orientation effects, for example, was not allowed. Given the modest sample sizes (20 in Experiment 1 and 10 in Experiment 2), convergence problems with more complex models are likely. Nonetheless, ignoring random slopes can, in principle, inflate Type I error rates, so this issue should at least be acknowledged and discussed.
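
      For concreteness, a by-participant random slope is a one-line change in most mixed-model packages; here is a minimal sketch with synthetic data (all column names and numbers are hypothetical, and it uses statsmodels rather than the lme4 syntax of Barr et al.).

      ```python
      # Minimal sketch: random intercepts AND a by-participant random slope.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n_sub, n_trial = 20, 40
      df = pd.DataFrame({
          "participant": np.repeat(np.arange(n_sub), n_trial),
          "orientation": np.tile(np.repeat([0, 90], n_trial // 2), n_sub),
          "set_size": rng.choice([4, 8, 16], n_sub * n_trial),
      })
      df["rt"] = (600 + 2 * df["set_size"] + 0.8 * df["orientation"]
                  + rng.normal(0, 80, len(df)))

      model = smf.mixedlm(
          "rt ~ orientation * set_size", df,
          groups="participant",          # random intercept per participant
          re_formula="~orientation",     # by-participant random slope for orientation
      ).fit()
      print(model.summary())
      ```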

  3. docdrop.org
    1. Though Black girls living in the context of a larger Black community may have more social choices, they too have to contend with devaluing messages about who they are and who they will become, especially if they are poor or working-class.

      This sentence reveals the deep dilemma that Black women face in developing their identity: even if they live in Black communities and have more social support, they still cannot escape systematic devaluation and social prejudice, and economic disadvantage in particular exacerbates this oppression.

    2. One thing that happens is puberty. As children enter adolescence , they begin to explore the question of identity, asking "Who am I? Who can I be?" in ways they have not done before. For Black youth, asking "Who am I?" usually includes thinking about "Who am I ethnically and/or racially? What does it mean to be Black?"

      This sentence not only describes a pattern of psychological development, but also points to the core of social fairness. Only when every teenager is no longer forced to defend or doubt themselves because of their racial identity can true freedom of identity be achieved.

    3. If you walk into racially mixed elementary schools, you will often see young children of diverse racial backgrounds playing with one another, sitting at the snack table together, crossing racial boundaries with an ease uncommon in adoles-cence.

      This statement highlights the dynamic nature of racial identity development. Natural integration in childhood and group differentiation in adolescence are not contradictory; rather, they are strategies individuals use at different stages to cope with social pressure. The core task of the education system is to recognize the importance of racial identity while avoiding essentializing it into rigid labels, instead supporting adolescents in their exploration and self-definition.

    1. eLife Assessment

      This study provides important results with regard to the ongoing debate on the relationship between internalizing psychopathology and learning under uncertainty. The methods and analyses are solid, and the results are backed by a large sample size, yet the study could still benefit from a more detailed discussion of the differences in experimental design and analysis compared to previous studies. If these concerns are addressed, this study will be of interest to researchers in clinical and computational psychiatry seeking behavioral markers of psychopathological symptoms.

    2. Reviewer #1 (Public review):

      The authors conducted a series of experiments using two established decision-making tasks to clarify the relationship between internalizing psychopathology (anxiety and depression) and adaptive learning in uncertain and volatile environments. While prior literature has reported links between internalizing symptoms - particularly trait anxiety - and maladaptive increases in learning rates or impaired adjustment of learning rates, findings have been inconsistent. To address this, the authors designed a comprehensive set of eight experiments that systematically varied task conditions. They also employed a bifactor analysis approach to more precisely capture the variance associated with internalizing symptoms across anxiety and depression. Across these experiments, they found no consistent relationship between internalizing symptoms and learning rates or task performance, concluding that this purported hallmark feature may be more subtle than previously assumed.

      Strengths:

      (1) A major strength of the paper lies in its impressive collection of eight experiments, which systematically manipulated task conditions such as outcome type, variability, volatility, and training. These were conducted both online and in laboratory settings. Given that trial conditions can drive or obscure observed effects, this careful, systematic approach enables a robust assessment of behavior. The consistency of findings across online and lab samples further strengthens the conclusions.

      (2) The analyses are impressively thorough, combining model-agnostic measures, extensive computational modeling (e.g., Bayesian, Rescorla-Wagner, Volatile Kalman Filter), and assessments of reliability. This rigor contributes meaningfully to broader methodological discussions in computational psychiatry, particularly concerning measurement reliability.

      (3) The study also employed two well-established, validated computational tasks: a game-based predictive inference task and a binary probabilistic reversal learning task. This choice ensures comparability with prior work and provides a valuable cross-paradigm perspective for examining learning processes.

      (4) I also appreciate the open availability of the analysis code, which will contribute substantially to the field of research using similar tasks.

      Weaknesses:

      (1) While the overall sample size (N = 820 across eight experiments) is commendable, the number of participants per experiment is relatively modest, especially in light of the inherent variability in online testing and the typically small effect sizes in correlations with mental health traits (e.g., r = 0.1-0.2). The authors briefly acknowledge that any true effects are likely small; however, the rationale behind the sample sizes selected for each experiment is unclear. This is especially important given that previous studies using the predictive inference task (e.g., Seow & Gillan, 2020, N > 400; Loosen et al., 2024, N > 200) have reported non-significant associations between trait anxiety symptoms and learning rates.

      (2) The motivation for focusing on the predictive inference task is also somewhat puzzling, given that no cited study has reported associations between trait anxiety and parameters of this task. While this is mitigated by the inclusion of a probabilistic reversal learning task, which has a stronger track record in detecting such effects, the study misses an opportunity to examine whether individual differences in learning-related measures correlate across the two tasks, which could clarify whether they tap into shared constructs.

      (3) The parameterization of the tasks, particularly the use of high standard deviations (SDs) of 20 and 30 for outcome distributions and hazard rates of 0.1 and 0.16, warrants further justification. Are these hazard rates sufficiently distinct? Might the wide SDs reduce sensitivity to volatility changes? Prior studies of the circle version of this predictive inference task (e.g., Vaghi et al., 2019; Seow & Gillan, 2020; Marzuki et al., 2022; Loosen et al., 2024; Hoven et al., 2024) typically used SDs around 12. Indeed, the Supplementary Materials suggest that the variability manipulations did not substantially affect learning rates (Figure S5), calling into question whether the task manipulations achieved their intended cognitive effects. A small simulation of this generative process is sketched after this list.

      (4) Relatedly, while the predictive inference task showed good reliability, the reversal learning task exhibited only "poor-to-moderate" reliability in its learning-rate estimates. Given that previous findings linking anxiety to learning rates have often relied on this task, these reliability issues raise concerns about the robustness and generalizability of conclusions drawn from it.

      (5) As the authors note, the study relies on a subclinical sample. This limits the generalizability of the findings to individuals with diagnosed disorders. A growing body of research suggests that relationships between cognition and symptomatology can differ meaningfully between general population samples and clinical groups. For example, Hoven et al. (2024) found differing results in the predictive inference task when comparing OCD patients, healthy controls, and high- vs. low-symptom subgroups.

      (6) Finally, the operationalization of internalizing symptoms in this study appears to focus on anxiety and depression. However, obsessive-compulsive disorder is also generally considered an internalizing disorder, which presents a gap in the current cited literature of the paper, particularly when there have been numerous studies with the predictive inference task and OCD/compulsivity (e.g., Vaghi et al., 2019; Seow & Gillan, 2020; Marzuki et al., 2022; Loosen et al., 2024; Hoven et al., 2024), rather than trait anxiety per se.
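
      Returning to point (3), a small simulation can make the parameter question concrete. This sketch implements a generic change-point process with the stated hazard rates and SDs; the 0-300 outcome range and trial count are arbitrary choices, not the authors' task code.

      ```python
      # Minimal sketch: change-point outcome sequences under each parameter pair.
      import numpy as np

      def simulate(hazard, sd, n_trials=200, lo=0.0, hi=300.0, seed=0):
          rng = np.random.default_rng(seed)
          mean, outcomes = rng.uniform(lo, hi), []
          for _ in range(n_trials):
              if rng.random() < hazard:      # change-point: draw a new hidden mean
                  mean = rng.uniform(lo, hi)
              outcomes.append(rng.normal(mean, sd))
          return np.array(outcomes)

      for h in (0.1, 0.16):
          for sd in (20, 30):
              print(f"hazard={h}, sd={sd}: outcome SD={simulate(h, sd).std():.1f}")
      ```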

      Overall:

      Despite the named limitations, the authors have done very impressive work in rigorously examining the relationship between anxiety/internalizing symptoms and learning rates in commonly used decision-making tasks under uncertainty. Their conclusion is well supported by the consistency of their null findings across diverse task conditions, though its generalizability may be limited by some features of the task design and its sample. This study provides strong evidence that will guide future research, whether by shifting the focus of examining dysfunctions of larger effect sizes or by extending investigations to clinical populations.

    3. Reviewer #2 (Public review):

      Summary:

      In this work, the authors recruited a large sample of participants to complete two well-established paradigms: the predictive inference task and the volatile reversal learning task. With this dataset, they not only replicated several classical findings on uncertainty-based learning from previous research but also demonstrated that individual differences in learning behavior are not systematically associated with internalizing psychopathology. These results provide valuable large-scale evidence for this line of research.

      Strengths:

      (1) Use of two different tasks.

      (2) Recruitment of a large sample of participants.

      (3) Inclusion of multiple experiments with different conditions, demonstrating strong scientific rigor.

      Weaknesses:

      Below are questions rather than 'weaknesses':

      (1) This study uses a large human sample, which is a clear strength. However, was the study preregistered? It would also be useful to report a power analysis to justify the sample size.

      (2) Previous studies have tested two core hypotheses: (a) that internalizing psychopathology is associated with overall higher learning rates, and (b) that it is associated with learning rate adaptation. In the first experiment, the findings seem to disconfirm only the first hypothesis. I found it unclear how, in the predator task, participants were expected to adjust their learning rate to adapt to volatility; could the authors clarify this point? (A generic formulation of learning-rate adaptation is sketched after this list.)

      (3) According to the Supplementary Information, Model 13 showed the best fit, yet the authors selected Model 12 due to the larger parameter variance in Model 13. What would the results of Model 13 look like? Furthermore, do Models 12 and 13 correspond to the optimal models identified by Gagne et al. (2020)? Please clarify.

      (4) In the Discussion, the authors addressed both task reliability and parameter reliability. However, the term reliability seems to be used differently in these two contexts. For example, good parameter recovery indicates strong reliability in one sense, but can we then directly equate this with parameter reliability? It would be helpful to define more precisely what is meant by reliability in each case.

      (5) The Discussion also raises the possibility that limited reliability may represent a broader challenge facing the interdisciplinary field of computational psychiatry. What, in the authors' view, are the key future directions for the field to mitigate this issue?
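
      On point (2), learning-rate adaptation is often formalized with a delta rule whose gain rises with estimated volatility; the following is a generic sketch of that formulation, not necessarily the model used by the authors.

      ```latex
      V_{t+1} = V_t + \alpha_t \,(o_t - V_t),
      \qquad
      \alpha_t = \frac{\sigma_t^2 + v}{\sigma_t^2 + v + \sigma_o^2}
      ```

      Here \(o_t\) is the outcome, \(\sigma_t^2\) the current estimation uncertainty, \(v\) the volatility (process noise), and \(\sigma_o^2\) the outcome noise; larger \(v\) pushes the gain \(\alpha_t\) up, which is what "adjusting the learning rate to volatility" means in this family of models.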

    1. +

      Should this be + or *? From the description of the indicator \(\mathbf{1}\{s_t=s\}\) and the proof of Proposition 2.1, it seems to be the latter.

    1. Article 2: Bouskila-Yam & Kluger (2011)

      From my experience, they don't ask more than this: strength-based performance appraisal (SBPA) -> focuses on identifying, appreciating, and developing employees' qualities in line with company goals (future use and development of employee strengths). SBPA is aimed at overcoming the pitfalls of the traditional performance appraisal exercise. Negative and positive feedback.

      There is almost always a question about the 'pitfall', but knowing more is always better.

    1. eLife Assessment

      This valuable study describes MerQuaCo, a computational and automatic quality control tool for spatial transcriptomics datasets. The authors have collected a remarkable number of tissues to construct the main algorithm. The compelling strength of the evidence is demonstrated through a combination of empirical observations, automated computational approaches, and validation against existing software packages. MerQuaCo will interest researchers who routinely perform spatial transcriptomic imaging (especially MERSCOPE), as it provides an imperfection detector and quality control measures for reliable and reproducible downstream analysis.

    2. Reviewer #1 (Public review):

      Summary:

      The authors present MerQuaCo, a computational tool that fills a critical gap in the field of spatial transcriptomics: the absence of standardized quality control (QC) tools for image-based datasets. Spatial transcriptomics is an emerging field where datasets are often imperfect, and current practices lack systematic methods to quantify and address these imperfections. MerQuaCo offers an objective and reproducible framework to evaluate issues like data loss, transcript detection variability, and efficiency differences across imaging planes.

      Strengths:

      (1) The study draws on an impressive dataset comprising 641 mouse brain sections collected on the Vizgen MERSCOPE platform over two years. This scale ensures that the documented imperfections are not isolated or anecdotal but represent systemic challenges in spatial transcriptomics. The variability observed across this large dataset underscores the importance of using sufficiently large sample sizes when benchmarking different image-based spatial technologies. Smaller datasets risk producing misleading results by over-representing unusually successful or unsuccessful experiments. This comprehensive dataset not only highlights systemic challenges in spatial transcriptomics but also provides a robust foundation for evaluating MerQuaCo's metrics. The study sets a valuable precedent for future quality assessment and benchmarking efforts as the field continues to evolve.

      (2) MerQuaCo introduces thoughtful metrics and filters that address a wide range of quality control needs. These include pixel classification, transcript density, and detection efficiency across both x-y axes (periodicity) and z-planes (p6/p0 ratio). The tool also effectively quantifies data loss due to dropped images, providing tangible metrics for researchers to evaluate and standardize their data. Additionally, the authors' decision to include examples of imperfections detectable by visual inspection but not flagged by MerQuaCo reflects a transparent and balanced assessment of the tool's current capabilities.
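
      To make the z-plane metric concrete, here is a minimal sketch in the spirit of the p6/p0 ratio described above; the per-transcript z-plane index array `z` is a hypothetical input, and MerQuaCo's actual implementation may differ.

      ```python
      # Minimal sketch: a z-plane detection-efficiency ratio in the spirit of p6/p0.
      # `z` is an integer array giving each detected transcript's z-plane index
      # (0 = bottom plane, 6 = top plane).
      import numpy as np

      def p6_p0_ratio(z):
          counts = np.bincount(np.asarray(z), minlength=7)  # transcripts per plane 0..6
          return counts[6] / counts[0]  # ~1.0 suggests uniform detection across depth
      ```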

      Weaknesses:

      (1) The study focuses on cell-type label changes as the main downstream impact of imperfections. Broadening the scope to explore changes in expression readouts in downstream analyses would offer a more complete picture of the biological consequences of these imperfections and enhance the utility of the tool.

      (2) While the manuscript identifies and quantifies imperfections effectively, it does not propose post-imaging data processing solutions to correct these issues, aside from the exclusion of problematic sections or transcript species. While this is understandable given the study is aimed at the highest quality atlas effort, many researchers don't need that level of quality to compare groups. It would be important to include discussion points as to how those cut-offs should be decided for a specific study.

      (3) Although the authors demonstrate the applicability of MerQuaCo on a large MERFISH dataset, and on a limited number of sections from other platforms, it would be helpful to describe the limits of its generalizability.

    3. Reviewer #2 (Public review):

      Summary:

      The authors present MerQuaCo, a computational tool for quality control in image-based spatial transcriptomics, especially MERSCOPE. They assessed MerQuaCo on 641 slides produced at their institute, in terms of the rate of imperfections, transcript density, and variation in quality across imaging planes (x-axis).

      Strengths:

      This looks to be valuable work that can serve as a good guideline for quality control in future spatial transcriptomics. A well-controlled spatial transcriptomics dataset is also important for downstream analysis.

      Weaknesses:

      The results section needs to be more structured.

    4. Reviewer #3 (Public review):

      Summary:

      MerQuaCo is an open-source computational tool developed for quality control in image-based spatial transcriptomics data, with a primary focus on data generated by the Vizgen MERSCOPE platform. The authors analyzed a substantial dataset of 641 fresh-frozen adult mouse brain sections to identify and quantify common imperfections, aiming to replace manual quality assessment with an automated, objective approach, providing standardized data integrity measures for spatial transcriptomics experiments.

      Strengths:

      The manuscript's strengths lie in its timely utility, rigorous empirical validation, and practical contributions to methodology and biological discovery in spatial transcriptomics.

      Weaknesses:

      While MerQuaCo demonstrates utility in large datasets and cross-platform potential, its generalizability and validation require expansion, particularly for non-MERSCOPE platforms and real-world biological impact.

    1. eLife Assessment

      This study provides a valuable contribution to spatial transcriptomics by introducing MerQuaCo, a computational tool for standardizing quality control in image-based spatial transcriptomics datasets. The tool addresses the lack of consensus in the field and provides robust metrics to identify and quantify common imperfections in datasets. The work is supported by an impressive dataset and compelling analyses, and will be of significant interest to researchers focused on data reproducibility and downstream analysis reliability in spatial transcriptomics.

    2. Reviewer #1 (Public review):

      The authors present MerQuaCo, a computational tool that fills a critical gap in the field of spatial transcriptomics: the absence of standardized quality control (QC) tools for image-based datasets. Spatial transcriptomics is an emerging field where datasets are often imperfect, and current practices lack systematic methods to quantify and address these imperfections. MerQuaCo offers an objective and reproducible framework to evaluate issues like data loss, transcript detection variability, and efficiency differences across imaging planes.

      Strengths

      (1) The study draws on an impressive dataset comprising 641 mouse brain sections collected on the Vizgen MERSCOPE platform over two years. This scale ensures that the documented imperfections are not isolated or anecdotal but represent systemic challenges in spatial transcriptomics. The variability observed across this large dataset underscores the importance of using sufficiently large sample sizes when benchmarking different image-based spatial technologies. Smaller datasets risk producing misleading results by over-representing unusually successful or unsuccessful experiments. This comprehensive dataset not only highlights systemic challenges in spatial transcriptomics but also provides a robust foundation for evaluating MerQuaCo's metrics. The study sets a valuable precedent for future quality assessment and benchmarking efforts as the field continues to evolve.

      (2) MerQuaCo introduces thoughtful metrics and filters that address a wide range of quality control needs. These include pixel classification, transcript density, and detection efficiency across both x-y axes (periodicity) and z-planes (p6/p0 ratio). The tool also effectively quantifies data loss due to dropped images, providing tangible metrics for researchers to evaluate and standardize their data. Additionally, the authors' decision to include examples of imperfections detectable by visual inspection but not flagged by MerQuaCo reflects a transparent and balanced assessment of the tool's current capabilities.

      Comments on revisions:

      All previous concerns have been fully addressed. The revised manuscript presents a robust, well-documented, and user-friendly tool for quality control in image-based spatial transcriptomics, a rapidly advancing area where objective assessment tools are urgently needed.

    3. Reviewer #3 (Public review):

      Summary:

      MerQuaCo is an open-source computational tool developed for quality control in image-based spatial transcriptomics data, with a primary focus on data generated by the Vizgen MERSCOPE platform. The authors analyzed a substantial dataset of 641 fresh-frozen adult mouse brain sections to identify and quantify common imperfections, aiming to replace manual quality assessment with an automated, objective approach, providing standardized data integrity measures for spatial transcriptomics experiments.

      Strengths:

      The manuscript's strengths lie in its timely utility, rigorous empirical validation, and practical contributions to methodology and biological discovery in spatial transcriptomics.

      Weaknesses:

      While MerQuaCo demonstrates utility in large datasets and cross-platform potential, its generalizability and validation are currently limited by the availability of sufficient datasets from non-MERSCOPE platforms and non-brain tissues. The evaluation of data imperfections' impact on downstream analyses beyond cell typing (e.g., differential expression, spatial statistics, and cell-cell interactions) is also constrained by space and scope. However, these represent valuable directions for future work as more datasets become available.

    4. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public review):

      The authors present MerQuaCo, a computational tool that fills a critical gap in the field of spatial transcriptomics: the absence of standardized quality control (QC) tools for image-based datasets. Spatial transcriptomics is an emerging field where datasets are often imperfect, and current practices lack systematic methods to quantify and address these imperfections. MerQuaCo offers an objective and reproducible framework to evaluate issues like data loss, transcript detection variability, and efficiency differences across imaging planes.

      Strengths:

      (1) The study draws on an impressive dataset comprising 641 mouse brain sections collected on the Vizgen MERSCOPE platform over two years. This scale ensures that the documented imperfections are not isolated or anecdotal but represent systemic challenges in spatial transcriptomics. The variability observed across this large dataset underscores the importance of using sufficiently large sample sizes when benchmarking different image-based spatial technologies. Smaller datasets risk producing misleading results by over-representing unusually successful or unsuccessful experiments. This comprehensive dataset not only highlights systemic challenges in spatial transcriptomics but also provides a robust foundation for evaluating MerQuaCo's metrics. The study sets a valuable precedent for future quality assessment and benchmarking efforts as the field continues to evolve.

      (2) MerQuaCo introduces thoughtful metrics and filters that address a wide range of quality control needs. These include pixel classification, transcript density, and detection efficiency across both x-y axes (periodicity) and z-planes (p6/p0 ratio). The tool also effectively quantifies data loss due to dropped images, providing tangible metrics for researchers to evaluate and standardize their data. Additionally, the authors' decision to include examples of imperfections detectable by visual inspection but not flagged by MerQuaCo reflects a transparent and balanced assessment of the tool's current capabilities.

      Weaknesses:

      (1) The study focuses on cell-type label changes as the main downstream impact of imperfections. Broadening the scope to explore changes in expression readouts in downstream analyses would offer a more complete picture of the biological consequences of these imperfections and enhance the utility of the tool.

      Here, we focused on the consequences of imperfections on cell-type labels, one common use for spatial transcriptomics datasets. Spatial datasets are used for so many other purposes that there are almost endless ways in which imperfections could impact downstream analyses. It is difficult to see how we might broaden the scope to include more downstream effects, while providing enough analysis to derive meaningful conclusions, all within the scope of a single paper. Existing studies bring some insight into the impact of imperfections and we expect future studies will extend our understanding of consequences in other biological contexts.

      (2) While the manuscript identifies and quantifies imperfections effectively, it does not propose post-imaging data processing solutions to correct these issues, aside from the exclusion of problematic sections or transcript species. While this is understandable given the study is aimed at the highest quality atlas effort, many researchers don't need that level of quality to compare groups. It would be important to include discussion points as to how those cut-offs should be decided for a specific study.

      Studies differ greatly in their aims and, as a result, the impact of imperfections in the underlying data will differ also, preventing us from offering meaningful guidance on how cut-offs might best be identified. Rather, our aim with MerQuaCo was to provide researchers with tools to generate information on their spatial datasets, to facilitate downstream decisions on data inclusion and cut-offs.

      (3) Although the authors demonstrate the applicability of MerQuaCo on a large MERFISH dataset, and on a limited number of sections from other platforms, it would be helpful to describe the limits of its generalizability.

      In Figure 9, we addressed the limitations and generalizability of MerQuaCo as best we could with the available datasets. Gaining deep insight into the limitations and generalizability of MerQuaCo would require application to multiple large datasets and, to the best of our knowledge, these datasets are not available.

      Reviewer #2 (Public review):

      The authors present MerQuaCo, a computational tool for quality control in image-based spatial transcriptomics, especially MERSCOPE. They assessed MerQuaCo on 641 slides produced at their institute, in terms of the rate of imperfections, transcript density, and variation in quality across imaging planes (x-axis).

      Strengths:

      This looks to be valuable work that can serve as a good guideline for quality control in future spatial transcriptomics. A well-controlled spatial transcriptomics dataset is also important for downstream analysis.

      Weaknesses:

      The results section needs to be more structured.

      We have split the ‘Transcript density’ subsection of the results into 3 new subsections.

      Reviewer #3 (Public review):

      MerQuaCo is an open-source computational tool developed for quality control in image-based spatial transcriptomics data, with a primary focus on data generated by the Vizgen MERSCOPE platform. The authors analyzed a substantial dataset of 641 fresh-frozen adult mouse brain sections to identify and quantify common imperfections, aiming to replace manual quality assessment with an automated, objective approach, providing standardized data integrity measures for spatial transcriptomics experiments.

      Strengths:

      The manuscript's strengths lie in its timely utility, rigorous empirical validation, and practical contributions to methodology and biological discovery in spatial transcriptomics.

      Weaknesses:

      While MerQuaCo demonstrates utility in large datasets and cross-platform potential, its generalizability and validation require expansion, particularly for non-MERSCOPE platforms and real-world biological impact.

      We agree that there is value in expanding our analyses to non-Merscope platforms, to tissues other than brain, and to analyses other than cell typing. The limiting factor in all these directions is the availability of large enough datasets to probe the limits of MerQuaCo. We look forward to a future in which more datasets are available and it’s possible to extend our analyses.

      Reviewer #1(Recommendation for the Author):

      (1) To better capture the downstream impacts of imperfections, consider extending the analysis to additional metrics, such as specificity variation across cell types, gene coexpression, or spatial gene patterning. This would deepen insights into how these imperfections shape biological interpretations and further demonstrate the versatility of MerQuaCo.

      These are compelling ideas, but we are unable to study so many possible downstream impacts in sufficient depth in a single study. Insights into these topics will likely come from future studies.

      (2) In the Figure 7 legend, panel label (D) is repeated; thus panels E-F are mislabelled.

      We have corrected this error.

      (3) Ensure that the image quality is high for the figures. 

      We will upload Illustrator files, ensuring that images are at full resolution.

      Reviewer #2 (Recommendation for the Author):

      (1) The results subsection "Transcript density" looks too long. Please provide a subsection heading for each figure.

      We have split this section into 3 with new subheadings.

      (2) The results subsection title "Transcript density" sounds ambiguous. Please provide a more detailed title describing what information this subsection contains.

      We have renamed this section ‘Differences in transcript density between MERSCOPE experiments’.

      Minor: 

      (1) There is no explanation of the black and grey bars in Figure 2A.

      We have added information to the figure legend, identifying the datasets underlying the grey and black bars.

      (2) In the abstract, the phrase "High-dimension" should be "High-dimensional". 

      We have changed ‘high-dimension’ to ‘high-dimensional’.

      (3) In the abstract, "Spatial results" is an unclear expression. What does it stand for? 

      We have replaced the term ‘spatial results’ with ‘the outputs of spatial transcriptomics platforms’.

      Reviewer #3 (Recommendation for the Author):

      (1) While the tool claims broad applicability, validation is heavily centered on MERSCOPE data, with limited testing on other platforms. The authors should expand validation to include more diverse platforms and add a small analysis of non-brain tissue. If broader validation isn't feasible, modify the title and abstract to reflect the focus on the mouse brain explicitly.

      We agree that expansion to other platforms is desirable, but to the best of our knowledge sufficient datasets from other platforms are not available. In the abstract, we state that ‘… we describe imperfections in a dataset of 641 fresh-frozen adult mouse brain sections collected using the Vizgen MERSCOPE.’

      (2) The impact of data imperfections on downstream analysis needs a more comprehensive evaluation. The authors should expand beyond cluster label changes to include a) differential expression analysis with simulated imperfections, b) impact on spatial statistics and pattern detection, and c) effects on cell-cell interactions. 

      Each of these ideas could support a substantial study. We are unable to do them justice in the limited space available as an addition to the current study.

      (3) The pixel classification workflow and validation process need more detailed documentation. 

      The methods and results together describe the workflow and validation in depth. We are unclear what details are missing.

      (4) The manuscript lacks comparison to existing QC pipelines such as Squidpy and Giotto. The authors should benchmark MerQuaCo against them and provide integration options with popular spatial analysis tools, with clear documentation.

      To the best of our knowledge, Squidpy and Giotto lack QC benchmarks, certainly for the parameters characterized by MerQuaCo. Direct comparison isn't possible.

    1. When you stitch the cut, don’t forget to put all these back in my ear. Put them back in order as you would do with books on your shelf.

      This excerpt seems both literal and symbolic. The writer, in referring to "stitch the cut," seems genuine and literal, and at the same time, while the wound is being stitched, they want all their memories and life experiences to be put back in order, just as they were.

    1. it mattered not in the least whether it turned out corn-flakes or Cadillacs

      The impact of machines/tech on human interaction and inter-/intra-personal relationships is of far greater importance than what they produce.

  4. social-media-ethics-automation.github.io
    1. lonelygirl15. November 2023. Page Version ID: 1186146298. URL: https://en.wikipedia.org/w/index.php?title=Lonelygirl15&oldid=1186146298 (visited on 2023-11-24).

      I believe this series masterfully captures the central theme of this chapter: signal deception. It begins with what appears to be an ordinary teenage vlog and gradually evolves into a chilling tale of trust and control. The narrative's shift from mundane daily life to cult stalking effectively demonstrates to audiences the blurred boundary between reality and fiction in the internet age.

    2. Text analysis of Trump's tweets confirms he writes only the (angrier) Android half. August 2016. URL: http://varianceexplained.org/r/trump-tweets/ (visited on 2023-11-24).

      This article tells us that Donald Trump's Twitter posts are not all written by himself: the posts come from both an iPhone and an Android phone. The data show that the posts published from the Android phone are written in an angrier, more negative tone and are written by Trump himself, while the other half are published by staff trying to imitate his tone and post positive views. The article speaks to authenticity and inauthenticity on social media, so we should be able to distinguish what we can trust from what we cannot.

    3. Text analysis of Trump's tweets confirms he writes only the (angrier) Android half. August 2016. URL: http://varianceexplained.org/r/trump-tweets/ (visited on 2023-11-24).

      This article showcases data on Trump's different tweets: those his team posted from an iPhone versus those he posted from an Android. The iPhone tweets generally meant well or wished good luck, while the Android tweets were progressively more aggressive; he used the Android mostly to insult his rivals or opponents. The writer of the article used R to make his case about which tweets came from which device.
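
      As a rough illustration of the article's approach (the original analysis was done in R), here is a minimal Python sketch that splits tweets by source device and contrasts word usage; `tweets.csv` and its `source` and `text` columns are hypothetical inputs, not the article's dataset.

      ```python
      # Minimal sketch: split tweets by source device and compare vocabularies.
      from collections import Counter
      import csv
      import re

      by_device = {"Android": Counter(), "iPhone": Counter()}
      with open("tweets.csv", newline="", encoding="utf-8") as f:
          for row in csv.DictReader(f):
              device = "Android" if "android" in row["source"].lower() else "iPhone"
              by_device[device].update(re.findall(r"[a-z']+", row["text"].lower()))

      for device, counts in by_device.items():
          print(device, counts.most_common(10))  # contrast the two vocabularies
      ```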

    4. I've always loved the thriller genre, so this piece of science fiction definitely caught my eye. Lonelygirl15 went viral on YouTube, and millions of people subscribed because they related to the homeschooled girl who was just sharing her daily life, until something went wrong and the story turned dark. People questioned whether the story was true until the truth was revealed: everyone who appeared in the videos was part of a cast.

    5. Jonah E. Bromwich and Ezra Marcus. The Anonymous Professor Who Wasn’t. The New York Times,

      One of the quotes that stood out to me was something said by Jacqueline Keeler of Pollen Nation about white and non-native individuals centering themselves around Native issues. She points out that it makes it even harder for Native American individuals to speak out when "frauds" are speaking for them. McLaughlin's choice to make her fake online persona a Native woman as a white woman is a startling and upsetting factor of this story.

    6. Alannah Oleson. Beyond “Average” Users: Building Inclusive Design Skills with the CIDER Technique. Bits and Behavior, October 2022. URL: https://medium.com/bits-and-behavior/beyond-average-users-building-inclusive-design-skills-with-the-cider-technique-413969544e6d (visited on 2023-11-24).

      This article talks about the design technique called CIDER. There are five stages in CIDER, and the technique tries to help designers think beyond the "average" user. The five letters stand for Critique, Imagine, Design, Expand, Repeat. After reading this article, I learned that CIDER plays an important role for learners and creators.

    7. Text analysis of Trump's tweets confirms he writes only the (angrier) Android half. August 2016. URL: http://varianceexplained.org/r/trump-tweets/ (visited on 2023-11-24).

      The post reinforces the belief that Trump's more combative tone comes through when he tweets from an Android vs. when he tweets from an iPhone, in which he sounds more polished and official, and this distinction suggests different authorship or roles involved in shaping Trump's public voice.

    8. lonelygirl15. November 2023. Page Version ID: 1186146298. URL: https://en.wikipedia.org/w/index.php?title=Lonelygirl15&oldid=1186146298 (visited on 2023-11-24).

      Lonelygirl15 is a great example of authenticity when it comes to online personas. What began as an online web series following 16-year-old Bree's mundane life soon spiraled into a mystery revolving around her parents' religion; viewers were drawn to her simple, home-recording-style videos with a dark twist. However, after people became skeptical about Bree and whether her story was real, they were quick to point out video inconsistencies and create theories about the whole ordeal. While the bad press garnered some temporary publicity, the show eventually died off after it was revealed to be scripted with hired actors. Overall, it was the belief that the events and characters were real that drew people to the series in the first place, so audiences finding out they were hoaxed inevitably led to the series' infamous reputation within online history.

    9. Text analysis of Trump's tweets confirms he writes only the (angrier) Android half. August 2016. URL: http://varianceexplained.org/r/trump-tweets/ (visited on 2023-11-24).

      I think it’s interesting how the article showed that Trump’s tweets were not random but followed clear patterns. The idea that the tone and the device he used could show which tweets were his own made me realize how carefully public figures can use social media to shape their image. It also makes me think about the power a post has: one tweet can change people’s opinions or even affect politics.

    10. Key & Peele. Key & Peele - Obama Meet & Greet. September 2014. URL: https://www.youtube.com/watch?v=nopWOC4SRm4 (visited on 2023-12-07).

      I love this skit; it reminds me of how I greet some of my friends compared to others. The same goes for my transition to college: I am very different with all the people I meet here than I am with my friends back home.

    11. Peter Aldhous. At First It Looked Like A Scientist Died From COVID. Then People Started Taking Her Story Apart. BuzzFeed News, August 2020. URL: https://www.buzzfeednews.com/article/peteraldhous/bethann-mclaughlin-twitter-suspension-fake-covid-death (visited on 2023-12-07).

      This piece unpacks the @sciencing_bi hoax—where a well-known activist fabricated a “Native American professor” who supposedly died of COVID—and shows how the lie unraveled through open-source verification (timestamps, language patterns, overlapping social graphs). What struck me is how a sympathetic identity can still be weaponized to mobilize outrage and donations, which blurs “authentic vs. inauthentic” far beyond simple anonymity. The article also documents the platform response (account suspension after community sleuthing), underscoring a reactive moderation gap: detection lag lets harmful narratives peak before correction. For me, it strengthens the case for reputation signals on pseudonyms and lightweight provenance checks on high-impact claims, so empathy isn’t exploited by manufactured personas.

    12. Steak-umm [@steak_umm]. Brands that use social causes for marketing do so to meat a bottom line. they calculate decisions based on the risk/reward ratio of advertisers, current audiences, and potential audiences. workers internally may truly care, but the decisions are ultimately based in self-interest. October 2020. URL: https://twitter.com/steak_umm/status/1321517041967370245 (visited on 2023-11-24).

      I think Trey's tweet is quite correct; however, brand tweets became such a phenomenon that they were likely well known to the CEOs as well. Duolingo especially capitalized on it; I'd be shocked if their CEO was oblivious to it.

    13. Alex Norcia. Brand Twitter Is Absurd, and It Will Only Get Worse. Vice, February 2019. URL: https://www.vice.com/en/article/pangw8/brand-twitter-is-absurd-and-it-will-only-get-worse (visited on 2023-11-24).

      Brand Twitter is a quite ridiculous part of marketing in which accounts meant to advertise a certain company or product try to interact with others on the platform. There are many examples of this on the internet where brands either poke fun at one another or post a strangely specific, realistic scenario that the brand itself could not have experienced. It was popularized around 2018 because it makes a brand seem more human, but it has made some people upset: some believe it is a cheap cash grab and that brands are trying too hard to be relatable, while others think it's just a fun thing to come across online.

  5. social-media-ethics-automation.github.io
    1. As a rule, humans do not like to be duped. We like to know which kinds of signals to trust, and which to distrust. Being lulled into trusting a signal only to then have it revealed that the signal was untrustworthy is a shock to the system, unnerving and upsetting. People get angry when they find they have been duped. These reactions are even more heightened when we find we have been duped simply for someone else’s amusement at having done so.

      Regarding deception, I believe that whether online or in real life, the vast majority of people dislike being deceived, especially after choosing to trust something or someone only to discover they've been deceived. The same applies online: whatever one posts, false statements or genuine thoughts, one should never deceive others.

    2. Authentic connections frequently place high value on a sense of proximity and intimacy. Someone who pretends to be your friend, but does not spend time with you (proximity) or does not open themselves up to trusting mutual interdependence (intimacy) is offering one kind of connection (being an acquaintance) under the guise of a different kind of connection (friendship).

      I personally agree with this idea, and I also think that people who like to invest their time and money in relationships actually care about each other more than people who do not want to invest anything. In addition, I think proximity and intimacy should be based on trust: if people do not trust each other, they will not have authentic relationships.

    3. This is not to say that there is no room for appreciating connections that are not fully honest, transparent, and earnest all the time. Social media spaces have allowed humor and playfulness to flourish, and sometimes humor and play are not, strictly speaking, honest. Often, this does not bother us, because the kind of connection offered by joke accounts matches the jokey way they interact on social media. We get to know a lot about public figures and celebrities, but it is not usually considered problematic for celebrity social media accounts to be run by publicist teams. As long as we know where we stand, and the kind of connection being offered roughly matches the sort of connection we’re getting, things go okay.

      I appreciate that this accepts that not all online interactions need to be entirely authentic. Even humor and play can lead to real connection, assuming people know what sort of interaction they are having.

    4. When someone presents themselves as open and as sharing their vulnerabilities with us, it makes the connection feel authentic. We feel like they have entangled their wellbeing with ours by sharing their vulnerabilities with us. Think about how this works with celebrity personalities. Jennifer Lawrence became a favorite of many when she tripped at the Oscars [f2], and turned the moment into her persona as someone with a cool-girl, unpolished, unfiltered way about her. She came across as relatable and as sharing her vulnerabilities with us, which let many people feel that they had a closer, more authentic connection with her. Over time, that persona has come to be read differently, with some suggesting that this open-styled persona is in itself also a performance. Does this mean that her performance of vulnerability was inauthentic?

      This chapter about authenticity really made me reflect on the current "performative male" trend. As you may know, the stereotype for performative males goes along the lines of drinking matcha, wearing tote bags, listening to indie music like Clairo, etc. On the surface, you could chalk this up to just one's interests, regardless of gender. But the reason it's such a big trend is that people can sense when a guy is doing it purely for validation, more specifically female validation, since these interests are stereotypically women's interests. So, as the text reads, "humans do not like to be duped," and when people can tell something is inauthentic, they're not going to take it seriously.

    5. Many users were upset that what they had been watching wasn’t authentic. That is, users believed the channel was presenting itself as true events about a real girl, and it wasn’t that at all. Though, even after users discovered it was fictional, the channel continued to grow in popularity.

      This made me think about how people’s reactions to “fake” content depend on their expectations. Some fans felt betrayed, but others didn’t really care once they knew it was scripted. I feel like this shows that people don’t always need something to be 100% real to enjoy it, they just want to know what kind of relationship they’re in. It reminds me of how influencers act online now. Even if their posts are planned, as long as we know it’s part of their brand and not pretending to be completely natural, it still feels authentic in its own way.

    1. Where do you see parasocial relationships on social media? In what ways are you in parasocial relationships? What are the ways in which a parasocial relationship can be authentic or inauthentic?

      Just as this chapter says, parasocial relationships are very common on social media, such as between influencers and their followers. On Chinese TikTok, the most common way influencers address their followers is as 'family' and 'brothers', no matter whether the followers are male or female. This kind of address builds a parasocial relationship by giving followers the illusion of a very close relationship with the influencer, so they will be willing to buy the merchandise the influencer is selling. I also followed some influencers, such as fitness bloggers, and they replied to my messages sometimes, but when I asked them to work out together face to face, they rejected me. So a parasocial relationship is authentic in giving followers emotional value and support, but it is also inauthentic because influencers are not friends with most of their followers in real life, yet they still act that way.

    2. In what ways are you in parasocial relationships?

      There are a few celebrities and internet personalities I feel a parasocial relationship with. I love podcasts, especially the type in which people simply talk about their lives and opinions, or tell jokes and comment on current affairs. Creating this type of content involves displaying your personality, or at least a part of it, on the internet. When consuming this content, it is easy to feel as though you know the person you are watching and to forget that the persona they present is not their true offline personality.

    3. As an example of the ethically complicated nature of parasocial relationships, let’s consider the case of Fred Rogers [f36], who hosted a children’s television program from 1968 to 2001. In his television program, Mr. Rogers wanted all children to feel cared for and loved. To do this, he intentionally fostered a parasocial relationship with the children in his audience (he called them his “television friends”):

      I find this example a bit odd, yet somehow understandable. Sometimes I also find many people on TV rather unbelievable, but I wouldn't go so far as to write them a letter.

    1. eLife Assessment

      This important study shows how the relative importance of inter-species interactions in microbiomes can be inferred from empirical species abundance data. The methods based on statistical physics of disordered systems are compelling and rigorous, and allow for distinguishing healthy and non-healthy human gut microbiomes via differences in their inter-species interaction patterns. This work should be of broad interest to researchers in microbial ecology and theoretical biophysics.

    2. Reviewer #1 (Public review):

      Summary:

      In this manuscript, the authors develop a novel method to infer ecologically informative parameters across healthy and diseased states of the gut microbiota, although the method is generalizable to other species-abundance datasets. The authors leverage techniques from the theoretical physics of disordered systems to infer parameters (the mean and standard deviation of bacterial interspecies interaction strengths, a bacterial immigration rate, and the strength of demographic noise) that describe the statistics of microbiota samples from two groups: one of healthy subjects and one of subjects with chronic inflammation syndromes. To do this, the authors simulate communities with a modified version of the Generalized Lotka-Volterra model and randomly generated interactions, and then use a moment-matching algorithm to find sets of parameters that best reproduce the species-abundance data. They find that these parameters differ between the healthy and diseased microbiota groups. The results suggest, for example, that bacterial interaction strengths, relative to noise and immigration, are more dominant in microbiota dynamics in diseased states than in healthy states.
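
      As a rough illustration of the kind of model described here (a sketch, not the authors' actual code; all parameter values are hypothetical), a disordered gLV community with immigration and demographic noise can be simulated in a few lines of Python:

        import numpy as np

        rng = np.random.default_rng(1)

        S = 50                    # number of species (hypothetical)
        mu, sigma = -0.5, 0.4     # mean / spread of interactions (hypothetical)
        lam = 1e-4                # immigration rate (hypothetical)
        D = 0.05                  # demographic noise strength (hypothetical)
        dt, steps = 0.01, 50_000

        # Disordered interactions: i.i.d. Gaussian couplings, no self-term
        A = rng.normal(mu / S, sigma / np.sqrt(S), size=(S, S))
        np.fill_diagonal(A, 0.0)

        x = rng.uniform(0.01, 0.1, size=S)
        for _ in range(steps):
            drift = x * (1.0 - x + A @ x) + lam               # gLV growth + immigration
            noise = np.sqrt(2.0 * D * x * dt) * rng.normal(size=S)  # demographic noise
            x = np.clip(x + drift * dt + noise, 0.0, None)    # abundances stay non-negative

        rel = x / x.sum()   # compositional, like relative-abundance microbiome data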

      We think that this manuscript brings an important contribution that will be of interest in the areas of statistical physics, (microbiota) ecology and (biological) data science. The evidence of their results is solid and the work improves the state-of-the-art in terms of methods.

      Strengths:

      • Using a fairly generic ecological model, the method can identify the change in the relative importance of different ecological forces (the distribution of interspecies interactions, demographic noise, and immigration) in different sample groups. The authors focus on the case of the human gut microbiota, showing that the data are consistent with a stronger influence of species interactions (relative to demographic noise and immigration) in a diseased microbiota state than in healthy ones.

      • The method is novel, original and it improves the state-of-the-art methodology for the inference of ecologically-relevant parameters. The analysis provides solid evidence on the conclusions.

      Weaknesses:

      • As a proof of concept for a new inference method, this text maintains a technical focus, which may require some familiarity with statistical physics. Nevertheless, the authors' clear introduction of key mathematical terms and their interpretations, along with a clear discussion of the ecological implications, make the results accessible and easy to follow.

    3. Reviewer #2 (Public review):

      Summary:

      This valuable work aims to infer, from microbiome data, microbial species interaction patterns associated with healthy and unhealthy human gut microbiomes. Using solid techniques from statistical physics, the authors propose that healthy and unhealthy microbiome interaction patterns substantially differ. Unhealthy microbiomes are closer to instability and single-strain dominance; whereas healthy microbiomes showcase near-neutral dynamics, mostly driven by demographic noise and immigration.

      Strengths:

      This is a well-written article, relatively easy to follow and transparent despite the high degree of technicality of the underlying theory. The authors provide a powerful inferring procedure, which bypasses the issue of having only compositional data. This work shows that embracing the complexity of microbial systems can be used to our advantage, instead of being an insurmountable obstacle. This is a powerful counterpoint to the classic reductionist view that pushes researchers to study much simpler systems, and only hope to one day scale up their findings.

      Weaknesses:

      As acknowledged by the authors themselves, this is only a proof of concept. Further research is needed to better understand the dynamical nature of gut microbiomes. The authors do, however, point at ways in which species abundance distributions could be better reproduced by dynamical models. They also suggest that their work could explain prior empirical findings invoking the "Anna Karenina principle", where healthy microbiomes resemble one another, but disease states tend to all differ.

    4. Reviewer #3 (Public review):

      Summary:

      I found the manuscript to be well-written. I have a few questions regarding the model, though the bulk of my comments are requests to provide definitions and additional clarity. There are concepts and approaches used in this manuscript that are clear boons for understanding the ecology of microbiomes but are rarely considered by researchers approaching the manuscript from a traditional biology background. The authors have clearly considered this in their writing of S1 and S2, so addressing these comments should be straightforward. The methods section is particularly informative and well-written, with sufficient explanations of each step of the derivation that should be informative to researchers in the microbial life sciences that are not well-versed with physics-inspired approaches to ecology dynamics.

      Strengths:

      The modeling efforts of this study primarily rely on a disordered form of the generalized Lotka-Volterra (gLV) model. This model can be appropriate for investigating certain systems, and the authors are clear about when and how more mechanistic models (i.e., consumer-resource) can lead to gLV. Phenomenological models such as this have been found to be highly useful for investigating the ecology of microbiomes, so this modeling choice seems justified, and the limitations are laid out.

      Weaknesses:

      The authors use metagenomic data of diseased and healthy patients that was first processed in Pasqualini et al. (2024). The use of metagenomic data leads me into a question regarding the role of sampling effort (i.e., read counts) in shaping model parameters such as $h$. This parameter is equal to the average of 1/# species across samples because the data are compositional in nature. My understanding is that $h$ was calculated using total abundances (i.e., read counts). The number of observed species is strongly influenced by sampling effort and the authors addressed this point in their revised manuscript.

      However, the role of sampling effort can depend on the type of data, and my instinct about the role that sampling effort plays in species detection is primarily based on 16S data. The dependency between these two variables may be less severe for the authors' metagenomic pipeline. This potential discrepancy raises a broader issue regarding the investigation of microbial macroecological patterns and the inference of ecological parameters. Often microbial macroecology researchers rely on 16S rRNA amplicon data because that type of data is abundant and comparatively low-cost. Some in microbiology and bioinformatics are increasingly pushing researchers to choose metagenomics over 16S. Sometimes this choice is valid (discovery of new MAGs, investigating allele frequency changes within species, etc.); sometimes it is driven by the false equivalence "more data = better". The outcome, though, is that we have a body of more-or-less established microbial macroecological patterns which rest on 16S data and are now slowly incorporating results from metagenomics. To my knowledge there has not been a systematic evaluation of the macroecological patterns that do and do not vary with one's choice of 16S vs. metagenomics. Several of the authors of this manuscript previously compared the MAD shape for 16S and metagenomic datasets in Pasqualini et al. (2024), but moving forward a more comprehensive study seems necessary. These points were addressed by the authors in their revised manuscript.

      Final review: The authors addressed all comments and I have no additional comments.

      References

      Pasqualini, Jacopo, et al. "Emergent ecological patterns and modelling of gut microbiomes in health and in disease." PLOS Computational Biology 20.9 (2024): e1012482.

      I've noticed that whenever I post new pictures of myself, I aim to present a more positive image and show how enjoyable and delightful my life is, even though the reality may be the opposite. For example, the style of my Instagram posts tends to be more delicate and gorgeous because that is my social card, but when I talk with my friends in the group chat, I become more down-to-earth and relatively more real. I don't really like this because it compromises my authenticity, as I feel it is dangerous to show your true side in a completely public area where all your followers can see. It's more like a social mask to protect yourself.

    2. How do you notice yourself changing how you express yourself in different situations, particularly on social media? Do you feel like those changes or expressions are authentic to who you are, do they compromise your authenticity in some way?

      In some ways, I believe it is human nature to naturally change yourself and your actions based on the environment or the people you are around. Personally, I do act differently among my family, friends, peers, coworkers, teachers, etc. On social media especially, I express myself in two different ways on two different accounts. On my main I don't post that often, but on the spam account with my close friends I tend to post non-stop. I wouldn't say this changes how authentic I am; I just present myself to others in the appropriate time and place. I'm still myself at the end of the day, but I can see how it could look contradictory from an outside perspective.

    3. Do you feel like those changes or expressions are authentic to who you are, do they compromise your authenticity in some way?

      I think that even if the changes do not come from another person, those changes can actually affect your authentic self. For example, if someone is raised in a religious household and is consequently religious themselves, then being religious is authentically them. I think this is analogous to the smaller changes that occur online. Perhaps these changes are just means by which we represent our authentic self.

    1. Consider placing the thesis toward the bottom of your introduction. This allows you a few sentences to introduce the concept and prepare the reader for your purpose.

      Using the first part of your introduction to hook your reader

    2. A thesis statement must concentrate on a specific area of a general topic. As you may recall, the creation of a thesis statement begins when you choose a broad subject and then narrow down its parts until you pinpoint a specific aspect of that topic.

      A thesis is a main idea or central statement that is defended or demonstrated in a text, essay, or research.

    1. A parity bit is flipped when an error occurs during transmission, causing the total number of 1s in the data to change, and the receiver detects an error because the received parity doesn't match the original parity rules
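
      A minimal sketch (in Python, with illustrative values) of how this works under even parity: the sender appends a bit that makes the total count of 1s even, and the receiver recomputes that bit and compares. Note this catches any single-bit error but not a two-bit flip.

        # Even parity: the parity bit makes the total number of 1s even.
        def parity_bit(data: int) -> int:
            return bin(data).count("1") % 2

        def parity_ok(data: int, parity: int) -> bool:
            return parity_bit(data) == parity

        word = 0b1011001                    # 7 data bits
        p = parity_bit(word)                # sent alongside the data
        corrupted = word ^ 0b0000100        # one bit flips in transit
        assert parity_ok(word, p)           # intact: parity matches
        assert not parity_ok(corrupted, p)  # single-bit error detected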

    1. Part of becoming a skilled researcher is learning the epistemology of one’s discipline

      To be a good researcher, you must understand how knowledge is created and justified within your specific field. Epistemology refers to the theory of knowledge (how we know what we know)

    2. “The spectrum of inquiry ranges from asking simple questions that depend upon basic recapitulation of knowledge to increasingly sophisticated abilities to refine research questions, use more advanced research methods, and explore more diverse disciplinary perspectives”

      Suggests that research isn’t static; it grows and deepens. I should expect my topic to evolve as I dig deeper.

    3. When you do research with inquiry, you need to know what you want to learn about. You should think about what kinds of questions you want to ask and how you are going to find the answers. It's important to use reliable resources, like books, trusted websites, or articles from experts. Sometimes, people start with one research topic or question but later change it. This can happen if they find out the question has already been asked too many times, or that it's too time-consuming or difficult to answer. It's OK to change your question. Research is about learning and finding the best focus for your topic.

    4. Thus, as they engage in inquiry, researchers will choose methods based on the values of the Communities of Practitioners in their disciplines. They may sift through research publications across disciplines with hopes of synthesizing published information in a new way; test past research claims in a lab; or interview people.

      They will sometimes use methods like experiments or interviews to get more data; they use a lot of techniques.

    1. When you find a book that is written about your topic, check the bibliography for references that you can try to find yourself.

      Question: If you are reading a text from a credible source should you still check that the sources in their bibliography are credible?

    2. Imagine what would happen if a detective collected enough evidence to solve a criminal case, but she never shared her solution with the authorities.

      This is an interesting take and it makes me appreciate anyone who shares valuable research for others. I can imagine people have worked together on research all around the world.

    3. Writing a research paper is an ideal way to organize thoughts, craft narratives, or make arguments based on research, and share your newfound knowledge with the world.

      An effective method of delivering your ideas to others is writing a research paper.

    1. For years, many academics have questioned the importance — even the justice — of requiring college students to master standard English.

      It talks about how academics have questioned the importance of students mastering standard English.

    1. Ethical guidelines govern AI integration, emphasizing data privacy and ethical implications. Transparent communication is crucial for addressing ethical concerns in AI’s contribution to game world creation.

      Although the author mentions and repeats the importance of transparency and keeping players safe, the author doesn't provide any examples.

    2. The integration of artificial intelligence (AI) in game development opens up new possibilities for the industry. The role of AI is crucial in enhancing personalized experiences for younger players.

      This section shows that the article is written mainly from an industry perspective on AI and ethics. It mainly mentions how AI is improving the industry rather than the ethical issues.

    3. AI systems and algorithms drive game content creation, optimizing difficulty levels to match player skills and improving game progression. These systems analyze player data, preferences, and behavior, allowing game designers to create personalized experiences. Furthermore, AI algorithms can be utilized to design realistic game characters, each with their unique behavior patterns, adding depth to gameplay. However, ethical considerations arise when using AI to influence game design, including the potential perpetuation of harmful stereotypes or privacy concerns related to player data.

      This section shows awareness of the ethical challenges AI poses, such as player privacy. It also mentions that transparent communication is essential when using AI in video games. The article was published in 2023, which means this issue was recognized quite a while ago.

    1. No one, we presume, supposes that any change in public opinion or feeling, in relation to this unfortunate race, in the civilized nations of Europe or in this country, should induce the court to give to the words of the Constitution a more liberal construction in their favor than they were intended to bear when the instrument was framed and adopted.

      Taney is saying the Constitution should always be read the same way it was when it was first written, no matter how society changes; this is pure originalism. I disagree because, if we followed this logic blindly, equality and progress would never have happened.

    2. the act of Congress which prohibited a citizen from holding and owning property of this kind in the territory of the United States north of the line therein mentioned, is not warranted by the Constitution, and is therefore void;

      Here he's attacking the Missouri Compromise through originalist property arguments. He's prioritizing what the framers thought about property over evolving ideas about justice. I disagree because that law was meant to limit slavery's spread, not harm anyone's real rights.

    3. at the time of the Declaration of Independence, and when the Constitution of the United States was framed and adopted. But the public history of every European nation displays it in a manner too plain to be mistaken.

      Taney uses the historical context of the 1770s to argue that Black people were never part of the political community. It's another way he locks the meaning of the Constitution in the past. I disagree because laws should reflect growth, not just history.

    4. an act of Congress which deprives a citizen of the United States of his liberty or property, merely because he came himself or brought his property into a particular Territory of the United States, and who had committed no offence against the laws, could hardly be dignified with the name of due process of law.

      He's framing slavery as a property issue, using the original understanding of the Fifth Amendment to defend it. That's how he tries to strike down anti-slavery laws. I disagree because this treats human beings as objects instead of recognizing their rights.

    5. The words 'people of the United States' and 'citizens' are synonymous terms, and mean the same thing. They both describe the political body who, according to our republican institutions, form the sovereignty, and who hold the power and conduct the Government through their representatives.

      Taney argues that citizenship only applied to White people because that's how he claims the framers saw it. It's a textbook originalist move, taking their intent as absolute. I disagree because it purposely shuts out anyone who didn't fit their narrow view.

    6. it is too clear for dispute, that the enslaved African race were not intended to be included, and formed no part of the people who framed and adopted this declaration;

      Taney is basically saying that when the Constitution was written, Black people weren't meant to be part of the people. He's leaning on what he thinks the framers originally meant. I find this reasoning outdated because it freezes the law in a time when equality wasn't valued.

    7. they had no rights which the white man was bound to respect;

      This line shows Taney using originalist reasoning to deny any legal rights to Black people. It reflects how he interpreted the framers' views to support racism. I strongly disagree because it upholds injustice instead of evolving with society.

    8. 'We hold these truths to be self-evident: that all men are created equal;

      If the Declaration of Independence says this, then all races must be placed on an equal footing; but the United States at that time clearly didn't do that, which shows that even its citizens weren't following the foundation on which the United States stands.

    9. the plaintiff in error could not be a citizen of the State of Missouri, within the meaning of the Constitution of the United States, and, consequently, was not entitled to sue in its courts.

      I disagree with this because, in my opinion, if a person is living in the United States, obeying the law and the Constitution, and, like Dred Scott, was in a free state at some point, they should be seen as equal.

    10. It is very clear, therefore, that no State can, by any act or law of its own, passed since the adoption of the Constitution

      I agree with this because if a state could make laws and do as it pleased, it would have too much power, and then the point of the United States would be thrown out the window.

    11. The question is simply this: Can a negro, whose ancestors were imported into this country, and sold as slaves, become a member of the political community formed and brought into existence by the Constitution of the United States, and as such become entitled to all the rights, and privileges, and immunities, guarantied by that instrument to the citizen?

      I believe that a person brought to this country as a slave should have a right to citizenship no matter what, because they were brought to this country as a worker. But speaking of that time: if such a person was brought to a state where slaves were free, as Dred Scott was, he should then have been seen as a free man and a citizen.

    12. That plea denies the right of the plaintiff to sue in a court of the United States

      They couldn't sue because of their status in the United States. I believe this is wrong because they had once been in a slave-free state, so they should have been seen as equal and allowed to sue.

    13. The defendant pleaded in abatement to the jurisdiction of the court, that the plaintiff was not a citizen of the State of Missouri, as alleged in his declaration, being a negro of African descent, whose ancestors were of pure African blood, and who were brought into this country and sold as slaves

      They weren't citizens, so they couldn't sue in court. I disagree with this because they were still people living in the United States, so they should have had the rights and protection of the Constitution and the laws.

    14. the said Harriet, wife of said Dred Scott, and Eliza and Lizzie, the daughters of the said Dred Scott, were negro slaves, the lawful property of the defendant.'

      I disagree with this because they were in a slave-free state, so being in that state should have made them free.

    15. Upon these considerations, it is the opinion of the court that the act of Congress which prohibited a citizen from holding and owning property of this kind in the territory of the United States north of the line therein mentioned, is not warranted by the Constitution, and is therefore void

      Taney invalidates the Missouri Compromise based on what the framers intended about property rights and slavery. I disagree, because this blocks progress toward equality and was a step backward for civil rights.

    16. And if the Constitution recognises the right of property of the master in a slave, and makes no distinction between that description of property and other property owned by a citizen,

      Taney uses the original language and intent of the framers to argue that the Constitution protected slavery as property. I disagree, because treating people as property is incompatible with modern human rights.

    17. In discussing this question, we must not confound the rights of citizenship which a State may confer within its own limits, and the rights of citizenship as a member of the Union.

      Taney draws on the original distinction between state and federal citizenship to argue against Scott’s rights. I disagree, because citizenship should be equal and not subject to outdated interpretations.

    18. It is very clear, therefore, that no State can, by any act or law of its own, passed since the adoption of the Constitution, introduce a new member into the political community created by the Constitution of the United States. It cannot make him a member of this community by making him a member of its own. And for the same reason it cannot introduce any person, or description of persons, who were not intended to be embraced in this new political family, which the Constitution brought into existence, but were intended to be excluded from it.

      Taney argues that since no citizenship law for Black people existed at the time, they can’t be citizens now. I disagree, because laws should be able to evolve as society changes.

    19. The only two provisions which point to them and include them, treat them as property, and make it the duty of the Government to protect it; no other power, in relation to this race, is to be found in the Constitution; and as it is a Government of special, delegated, powers, no authority beyond these two provisions can be constitutionally exercised.

      Taney claims that the Constitution’s original language only treated Black people as property, not as citizens. I disagree, because denying personhood is morally and legally indefensible.

    20. They show that a perpetual and impassable barrier was intended to be erected between the white race and the one which they had reduced to slavery,

      Taney uses colonial laws and practices to argue that the framers always intended a strict separation and exclusion of Black people. I disagree, because using old discriminatory laws to justify present injustice is wrong.

    21. No one, we presume, supposes that any change in public opinion or feeling, in relation to this unfortunate race, in the civilized nations of Europe or in this country, should induce the court to give to the words of the Constitution a more liberal construction in their favor than they were intended to bear when the instrument was framed and adopted

      Taney rejects the idea that changing public opinion should affect constitutional interpretation, preferring original intent. I disagree, because public opinion reflects current values and justice.

    22. They had for more than a century before been regarded as beings of an inferior order, and altogether unfit to associate with the white race, either in social or political relations; and so far inferior, that they had no rights which the white man was bound to respect

      Taney is referencing the historical view held by the framers to justify denying rights to Black people. I strongly disagree, because this argument relies on racist beliefs and ignores progress in social justice.

    23. The words 'people of the United States' and 'citizens' are synonymous terms, and mean the same thing. They both describe the political body who, according to our republican institutions, form the sovereignty, and who hold the power and conduct the Government through their representatives. They are what we familiarly call the 'sovereign people,' and every citizen is one of this people, and a constituent member of this sovereignty. The question before us is, whether the class of persons described in the plea in abatement compose a portion of this people, and are constituent members of this sovereignty? We think they are not, and that they are not included, and were not intended to be included, under the word 'citizens' in the Constitution, and can therefore claim none of the rights and privileges which that instrument provides for and secures to citizens of the United States. On the contrary, they were at that time considered as a subordinate and inferior class of beings, who had been subjugated by the dominant race, and, whether emancipated or not, yet remained subject to their authority, and had no rights or privileges but such as those who held the power and the Government might choose to grant them.

      Taney is arguing that when the Constitution was written, the framers only considered certain groups as “the people” or “citizens,” and excluded Black Americans. I disagree, because definitions of citizenship have changed over time and should not remain stuck in the past.

    24. In the year 1834, the plaintiff was a negro slave belonging to Dr. Emerson, who was a surgeon in the army of the United States. In that year, 1834, said Dr. Emerson took the plaintiff from the State of Missouri to the military post at Rock Island, in the State of Illinois, and held him there as a slave until the month of April or May, 1836.

      Dr. Emerson took Dred Scott to a free state, which means that in that state Dred Scott wasn't a slave. I disagree that Dred Scott was kept a slave during that time, because it was against state law to have slaves.

    1. I left engineering and went on to study law and eventually became a lawyer. More important, that class and paper helped me understand education differently. Instead of seeing college as a direct stepping stone to a career, I learned to see college as a place to first learn and then seek a career or enhance an existing career. By giving me the space to express my own interpretation and to argue for my own values, my philosophy class taught me the importance of education for education’s sake. That realization continues to pay dividends every day.

      Denouement/Resolution

    2. What I learned through this process extended well beyond how to write a college paper. I learned to be open to new challenges. I never expected to enjoy a philosophy class and always expected to be a math and science person. This class and assignment, however, gave me the self-confidence, critical-thinking skills, and courage to try a new career path.

      Falling Action

    3. The first class I went to in college was philosophy, and it changed my life forever. Our first assignment was to write a short response paper to the Albert Camus essay “The Myth of Sisyphus.” I was extremely nervous about the assignment as well as college. However, through all the confusion in philosophy class, many of my questions about life were answered.

      intro - the Exposition

    4. literacy can be linked to the idea of being empowered; for example, Malcolm X describes the freeing aspects of literacy in his essay, “Literacy Behind Bars.”

      interesting connection to the text

    1. We live in a current moment where, to get things done, we have to deploy terms in ways that capture the imagination of decision makers and the public in ways that affect change. In a sense, it is a kind of marketing. But it is worth thinking about the ways digital archaeology fits into the frameworks of public archaeology as discussed in Moshenska (2017). In particular, we are thinking of the ways in which the public form their views of archaeology. The work of academic archaeologists is not the primary vector through which the public learns about archaeology.

      This really connects to my project because I am using digital tools to make history engaging for people; like digital archaeology, I'm "marketing" the past using visuals from Voyant.

    1. Bishop moved from simply reporting her personal reactions to the things she read to attempting to uncover how the author led her

      Thinking deeply about what the writer intends in their writing can open the door to viewing a source from a new angle, and therefore open the door to more information and ideas.

    2. Here are some additional examples of the kinds of questions you might ask yourself as you read

      The use of personal questions can elevate one's writing by expanding one's thinking and broadening one's point of view.

    1. But we can examine the training data, for it is in the selection of training data that we introduce biases or agendas into the computation. By thinking of the machine in this case as something non-human, our hope is that we remind you to not accept the results or methods of AI in archaeology blindly, as if the machine was not capable of racist or colonialist results.

      I have to also remember that Voyant is not neutral, the results depend on how the text is prepared. Even simple tools can reflect bias in what we give them.

    1. Public archaeology seeks to promote awareness of what archaeology is, how it is done, and why it matters amongst members of the general audience

      This also connects to my project because it's like public archaeology, showing people how trade and culture shaped what was eaten.

    2. Digital archaeology is often about deformance rather than justification

      Deformance allows for the discovery of many possibilities; when reconstructing ancient civilizations, it allows for an exercise in which one can consider multiple possibilities for why each settlement was built where it was, and possibly catch relevant information that would be lost to someone simply trying to prove a point.

    3. Rather we might concentrate more on discovery and generation, of ‘interesting way[s] of thinking about this’.

      This is a more progressive approach to modern learning, with the focus less on proving why things are the way you believe they are and more on discovering the many ways things could have been. The previous method was flawed: it forced one to think with a closed mind, searching only for things that could justify one's own argument and staying blind to the other arguments that could also be justified.

    4. By the 1980s desktop computing was becoming sufficiently widespread that the use of Geographic Information Systems (GIS) was feasible for greater numbers of archaeologists. The other ‘killer app’ of the time was computer-aided design, which allowed metric 3-dimensional reconstructions from the plans drawn on site by excavators.

      The information these reconstructions provide enables questions that could previously be answered only with conjecture to be realistically resolved. It allows for more accurate research, especially into past events or those to come, which helps build a true understanding of history.

    5. Geospatial, digital and Web-based tools are now central to carrying out archaeological research and to communicating archaeological information in a globalized world.

      These tools, now vital to current archaeological research, support the approach of encouraging the use of modern technology. The gap in capability between a person using these tools and a person who doesn't is so vast that it makes no sense to prohibit their use. People should instead be taught to understand how these tools function and to use them to better understand the material they are learning. The idea that people are becoming too reliant on new technology only applies to those who use technology for things they should do themselves, not for things they can't do on their own.

    6. This puts our volume in dialogue with the work of archaeologists such as Ben Marwick, who makes available with his research, the code, the dependencies, and sometimes, an entire virtual machine, to enable other scholars to replicate, reuse, or dispute his conclusions. We want you to reuse our code, to study it, and to improve upon it. We want you to annotate our pages, point out our errors and make digital practice better. For us, digital archaeology is not the mere use of computational tools to answer archaeological questions more quickly. Rather, we want to enable the audience for archaeological thinking to enter into conversation with us, and to do archaeology for themselves. This is one way to practice inclusivity in archaeology.

      This is a modern approach to learning that involves adapting to the current technological climate and encourages the use of new technology, rather than prohibiting it. It allows students to understand how the technology works, as well as learn the material.

  6. drive.google.com
    1. d. Everyone is looking at something a child can't see. For a minute they've forgotten the children. Maybe a kid is lying on the rug, half asleep.

      Why would the parents forget their child? Is the author telling us something?

    2. stony, lifeless elegance of hotels and apartment buildings, toward the vivid, killing streets of our childhood. These streets hadn't changed, though housing projects jutted up out of them now like rocks in the middle of a boiling sea. Most

      I think this sentence is quite touching, because I have had this feeling before.

    1. Code-switching and code-mixing are two concepts that overlap and should be seen as forming a continuum rather than two absolutely separate phenomena. Generally speaking, code-switching is often taken as involving clearer points of break between two discernible linguistic systems, while in prototypical code-mixing there is constant switching to and fro within the sentence boundary, but such a distinction is not maintained in all code-switching related literature

      they blend and are not totally different ideas.

    1. Clinical psychologists can test a new pharmaceutical treatment for depression by giving some patients the new pill and others an already-tested one to see which is the more effective treatment.

      This method can actually show cause and effect because researchers control what happens

    2. two different variables are measured to determine whether there is a relationship between them.

      This type looks at relationships between two things but doesn't prove that one causes the other

    3. researchers gather participants from different groups (commonly different ages) and look for differences between the groups.

      looking at different groups at the same time

    4. researchers examine data that has already been collected for other purposes.

      It's handy because the data is already there, but you're limited to what's been collected before.

    5. researcher unobtrusively collects information without the participant’s awareness.

      Watching people or kids in their normal setting without them knowing they're being watched

    6. a detailed analysis of a particular person, group, business, event, etc. This approach is commonly used to learn more about rare examples with the goal of describing that particular thing.

      This is when researchers focus on one person or a small group to learn a lot about them.

    1. Workers' relationships with employers. Stakeholders noted that digital surveillance by employers may create a sense of distrust among workers, making them feel like they are constantly being watched, and leading to a decline in worker productivity and morale.

      Constant monitoring, even in the workplace, can translate to normal life activities; citizens may feel uneasy due to the surveillance that is being done on them without them knowing.

    2. Stakeholders most frequently mentioned that the digital surveillance tools employers use include cameras and microphones, computer monitoring software, geolocation, tracking applications, and devices worn by workers.

      These same surveillance tools are used in a government context as well, for example, facial recognition, and also geolocation.

    3. In 2023, the White House Office of Science and Technology Policy asked for public comments on employers' use of digital surveillance to monitor workers' activities.

      Shows how the U.S. government is examining surveillance activities and data collection from many different viewpoints.

    4. House Office of Science and Technology Policy (OSTP) regarding the use of automated digital surveillance tools to monitor workers and the effects of that surveillance on workers.

      This relates to my topic in that it reports on how surveillance technologies are being utilized and how they are employed in workplaces, specifically.

    1. Anonymity can encourage inauthentic behavior because, with no way of tracing anything back to you[1], you can get away with pretending you are someone you are not, or behaving in ways that would get your true self in trouble.

      I feel this too. When people think there’s no consequence, some go extreme—hate raids, doxxing help-threads, or “sock-puppet” pile-ons feel way too easy under full anonymity. But I’ve also seen anonymity protect the right people: a student reporting harassment, a queer kid seeking help, a worker blowing the whistle. So I don’t want a blanket ban; I want guardrails: stable pseudonyms with reputation, stronger friction for brand-new throwaways, and a clear, due-process path to unmask only in severe cases (credible threats, coordinated harm). That balance keeps space for the vulnerable while making it harder to weaponize the mask.

    1. understand their own learner variability

      Maybe this means using technology and the web to understand more about where they are from, and then sharing this type of information with the class.

    2. Culturally responsive practice considers how to engage students in the learning process,

      I have a connection when I read this because it reminded me of Flag Day in my 5th-grade class. My elementary school had a student population that was 80% non-American. It was a wonderful way to understand and see other flags.

    1. Inside the nose, specialized cells and structures like cilia and mucous membranes trap dust, pollen, and other foreign particles, preventing them from reaching the lower respiratory tract.

      The lower respiratory tract goes deep into our lungs where gas exchange happens

    2. The upper airway is the initial pathway for air to enter the respiratory system, a complex and vital process for life. It begins with the nose, the primary entry point for air.

      2 central pathways in our body to receive oxygen

    3. abnormal breath sounds usually found using a stethoscope) sounds like gurgling (often due to fluid in the airway), stridor (a high-pitched sound indicating upper airway obstruction), or wheezing (a whistling sound associated with lower airway constriction) are present

      Breath sounds are important for detecting what a patient is experiencing, such as a blockage or fluid in the airway.

    4. Its effectiveness depends on obtaining a proper mask seal and squeezing the bag at the correct rate: once every five to six seconds for adults and once every three seconds for infants or children.

      You want the ventilation rate to match the rate at which the lungs would naturally move air, which is why the timing is important.

    5. Commonly used PPVDs in EMS include pocket masks and bag-valve-masks (BVMs)

      PPVDs help force oxygen into the lungs of patients who aren't breathing adequately.

    6. The modified jaw thrust is the preferred technique when cervical spine injury is suspected

      This is to avoid paralyzing the patient and to give them a chance at a better recovery.

    7. as seen with drug overdoses, head injuries, or neurological diseases.

      Chest trauma and pulmonary complications are not the only causes of inadequate breathing.

    1. How do you notice yourself changing how you express yourself in different situations, particularly on social media? Do you feel like those changes or expressions are authentic to who you are, do they compromise your authenticity in some way?

      On social media, for example, Instagram, I only post about my running. I share a very small portion of my life with the internet; therefore, some people only view me as an athlete. Meeting me in person or discovering other aspects of my life might initially seem inauthentic, as I am only seen as a runner. When they realize I am more than that, they can think I was inauthentic or hiding a part of myself. However, I feel like it does not compromise my authenticity. I am just portraying one aspect of my life, and it's up to the consumer whether they choose to see my full personality or not.

    1. Her brain allows one half-formed thought to pass: 'Well now that’s done: and I’m glad it’s over.'

      I found this line interesting as it adds to the discourse on female voice, and it connects to the curious section of dialogue in “A Game of Chess.”

      In the woman’s encounter with the “young man carbuncular,” she is given no agency—or, rather, no action at all on her part is marked. Not a single action verb follows the pronoun “she.” “Is” is not an action verb (it is a linking verb), and it is used to describe her state as “bored and tired”—thus a mere projection by Tiresias (a man). In fact, lack of action is what is marked. The man’s “caresses” are “unreproved,” his “exploring hands” encounter “no defense,” and his “vanity…makes a welcome of indifference.” The woman’s entire functioning appears to be gone. The connection to John Donne’s Elegy XIX. To His Mistress Going to Bed furthers this.

      The one thing left is her voice—not outer (clearly) but inner. But this is operating on the lowest level. It is a singular (“one”), “half-formed” thought that comes. What is interesting is that this is all “her brain allows”—she is stopping herself, or rather a part of her or something inside of her is, as evidently there are two sides. But these two sides—her brain and from wherever thoughts issue (also the brain?)—are natural, integral parts of the self. What is going on here?

      But there is definitely a control running through the encounter—and perhaps extending beyond?—as if the whole thing had been laid out. The woman, afterwards, “smoothes her hair with automatic hand” and “puts a record on the gramophone”—set to go round and round and round. The reference to The Vicar of Wakefield appears to add to this, and so does the odd use of the colon, somehow urging an inevitability. I think it has all been laid out—with Tiresias, who can see the future, looming over the scene.

      In terms of connection to the dialogue in “A Game of Chess,” there seems to be some sort of comment on women’s agency and what is asked of them: “‘Speak to me. Why do you never speak. Speak. / ‘What are you thinking of? What thinking? What? / ‘I never know what you are thinking. Think.’” I would like to explore this further.

    1. Everything we do with our digital devices is underpinned by software driven by innumerable algorithms, which are frequently characterised as invisible, black boxes. Striphas (2015), for example, has argued that our reliance on algorithms constitutes what he calls an 'algorithmic culture', while Bogost (2015) goes further and suggests that we live not so much in an algorithmic culture as a 'computational theocracy' with the invisibility of algorithms giving them a transcendental, almost divine character. In the process, algorithms can become mythologised:

      This is a genuine societal issue: putting too much trust in the information that computers give you, or relying on them for tasks you could easily complete without them. It is almost as if anything not developed with the assistance of technology is deemed unworthy, as if it needs a computer's approval to be validated. This raises the importance of cross-checking information from the internet and making sure not to blindly follow the word of the computer.

    1. Medicine security became a pressing issue during the COVID pandemic when countries restricted exports to prioritize domestic needs. Despite this wake-up call, significant strides to promote local pharmaceutical manufacturing had been limited until these recent commitments.

      The news emphasizes a unified national effort—by the government, regulators, and the private sector—to achieve medicine security and transform the Philippine pharmaceutical industry into a stronger, more self-reliant, and globally competitive sector.

    1. -Channels: Bay Today acts as a promotional/media channel (earned media).

      -Customer Relationships: The Centre can leverage local media to reach its base and potential new audiences.

    1. Information flow is reciprocal – we have to set the instrument up correctly over a fixed point and provide it with locational information and the height of the target in order to make the instrument operational. The instrument records horizontal distance and angle, but it is dependent on us to select the location of interest, aim appropriately at the target and trigger the reading. It is also dependent on the staff holder positioning the target correctly over the object of interest, and on both human team members correctly recording any changes in target height. The instrument reports the three-dimensional coordinates back to the user and the process repeats iteratively in the conduct of the survey.

      This dependent relationship between machine and people is what makes it acceptable for intellect to be given to machines. They are capable of doing calculations at speeds impossible for any human, but they remain machines with a need for directives. This is what keeps them as tools, allowing them to act as an extension of a person and enabling that person to develop their ideas into reality.
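
      To make that reciprocal flow concrete, here is a minimal sketch (in Python; the function name and all station values are hypothetical) of the reduction the instrument performs once its human partners have supplied the setup coordinates, instrument height, and target height:

        import math

        # One total-station shot reduced to coordinates.
        # station = (easting, northing, elevation) of the setup point.
        def shot_to_xyz(station, inst_h, azimuth_deg, h_dist, v_angle_deg, target_h):
            e0, n0, z0 = station
            az = math.radians(azimuth_deg)
            e = e0 + h_dist * math.sin(az)                     # easting
            n = n0 + h_dist * math.cos(az)                     # northing
            dz = h_dist * math.tan(math.radians(v_angle_deg))  # rise over the shot
            z = z0 + inst_h + dz - target_h                    # elevation of the point
            return (e, n, z)

        # e.g. a 30 m shot at 45 degrees from north, sighted slightly upward
        print(shot_to_xyz((1000.0, 2000.0, 50.0), 1.55, 45.0, 30.0, 2.0, 1.30))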

    1. Watch Recent videos Upcoming livestreams On-demand performances Popular series Interviews Workshops Listen

      -Invest in digital/streaming/recorded content, to reach beyond the local catchment.

    1. This introduces an essentially asymmetric relationship between human agent and thing rather than the broadly symmetric interaction implicit in the parity principle. In some respects, this might appear to be akin to the distinction between 'primary agency' and 'secondary agency' (for example, Gell 1998, 21) in which, unlike humans, things do not have agency in themselves but have agency given or ascribed to them. However, the increasing assignment of intelligence in digital devices that enables them to act independent of human agents could suggest that some digital cognitive artefacts possess primary agency as they autonomously act on others – both human and non-human/inanimate things. Arguably this agency is still in some senses secondary in that it is ultimately provided via the human programmer even if this is subsequently subsumed within a neural network generated by the thing itself, for example. This is not the place to develop the discussion of thing agency further (for example, see the debate between Lindstrøm (2015), Olsen and Witmore (2015), and Sørensen (2016)); however, the least controversial position to adopt here is to propose that for the most part the agency of digital cognitive artefacts employed by archaeologists complements rather than duplicates through extending and supporting archaeological cognition. They do this, for example, through providing the capability of seeing beneath the ground or characterising the chemical constituents of objects, neither of which are specifically human abilities. So there is considerable scope for considering the nature of the relationship between ourselves as archaeologists and our cognitive artefacts – how do we interact and in what ways is archaeological cognition extended or complemented by these artefacts?

      While controversial, the addition of intelligence to digital cognitive artifacts, making them operate independently of humans, remains a completely necessary step in advancement. There are some tasks that would simply require too much time, or are actually impossible, for humans to complete if the process relied on their intelligence alone. The ability of an artifact to work on its own allows for an incredible increase in efficiency, turning things that were deemed impossible 20 years ago into reality.

    1. We believe the performing arts are vital to the human experience.

      This is a national-level performing arts institution, presenting theatre, music, dance, and Indigenous programming on a national stage.

    1. These cognitive artefacts support us in performing tasks that otherwise at best we would have to conduct using more laborious and time-consuming methods (film photography or measured survey using tapes, for instance) or that we would not be able to undertake (we cannot physically see beneath the ground, or determine the chemical constituents of an object, for example). Furthermore, a characteristic of archaeology is the way that we adopt and apply tools and techniques developed in other domains (Schollar 1999, 8; Lull 1999, 381). Consequently, most if not all of the cognitive artefacts used in archaeology are designed outside their discipline of application, meaning we have little or no control over their development and manufacture, and hence their internal modes of operation have to be taken at face value.

      This is the line of thinking that has fast-tracked technological evolution, for better or for worse: the idea that our intellect is humanity's greatest advantage, and that if we desire something we are incapable of accomplishing ourselves, we create a tool that makes it possible. The idea of recreating a simulation of an ancient civilization would have been seen as impossible magic even a couple of hundred years ago.

    1. Strategic leveraging: use the gallery as a "gateway" to theatre audiences; people who come for gallery openings may stay or return for performances in the Capitol Centre.

    2. -Customer relationships: They actively engage volunteers and local artists, which builds ownership and advocacy.

      -Value Proposition: Like the Capitol Centre, they focus on uniting the community; here they reflect diversity by promoting art and helping small artists grow, and this is a place to exhibit, network, and be part of the gallery community.

    1. Critiques are two-way. It is not just one person providing critical feedback, but rather the designer articulating the rationale for their decisions (why they made the choices that they did) and the critic responding to those judgements. The critic might also provide their own counter-judgements to understand the designer’s rationale further.The critic in a critique must engage deeply in the substance of the problem a designer is solving, meaning the more expertise they have on a problem, the better. After all, the goal of a critique is to help someone else understand what you were trying to do and why, so they can provide their own perspective on what they would have done and why. This means that critique is “garbage in, garbage out”: if the person offering critique does not have expertise, their critiques may not be very meaningful.

      I totally agree that critiques should be a two-way conversation rather than just one person pointing out flaws. It makes a lot more sense when both the designer and the critic are actively explaining their reasoning, because it feels more collaborative that way. I also like the idea that critiques are only as good as the person giving them, as it reminds me how important it is to get feedback from people who actually understand the problem you're solving.

    1. Ideas for Capitol Centre -Host "Open studio" days, live art-making events, and community workshops to heighten foot traffic and public awareness.

      -Adopt a stronger "Get involved" call to action: more curated artist calls, volunteer roles, and behind-the-scenes tours.

    1. We have seen the cost of terrestrial laser scanners come down in recent years, and, perhaps more significantly, the development of the SIFT (Scale Invariant Feature Transforms) algorithm has seen an explosion in the use of structure from motion photogrammetry as a means of three-dimensional survey using consumer-grade cameras and drones. In the process, we have witnessed changes to the way in which we see the world and capture what we see.

      This explosion in the use of photogrammetry as a means of three-dimensional survey is exactly what allowed my research proposal to become a possibility. With this technology it will be possible to reconstruct the settlement locations of ancient Greece and to pursue further research toward proving my hypothesis. This is an example of the web of options opened up by a significant technological breakthrough.
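
      As a concrete illustration of why SIFT mattered here, a minimal sketch (Python with OpenCV, assuming a build that includes SIFT; the file names are placeholders) of the feature-matching step that structure-from-motion pipelines are built on:

        import cv2

        # Detect and match SIFT features between two overlapping photographs --
        # the raw material from which camera positions and 3D points are solved.
        img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
        print(len(good), "candidate correspondences")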

    1. White Water Gallery is a not-for-profit Artist-Run Centre committed to supporting artistic practices that prioritize risk and innovation. Understanding the need to advance the public’s threshold for viewing contemporary art, the gallery encourages outreach programming that promotes accessibility and shared knowledge.

      -White Water Gallery is a visual arts gallery in North Bay that invites the community to the Capitol Centre as well.

      -They emphasize "Get Involved" opportunities for community artists, volunteers, and donor support.

    1. The Take. A Tale of Two Jennifer Lawrences. April 2022. URL: https://www.youtube.com/watch?v=Q7aq1bHXuY8&t=641s (visited on 2023-11-24).

      When Jennifer Lawrence fell at the Oscars, she was seen as a more relatable figure, showing her faults and vulnerability. The video discusses how, at the time, she was celebrated for her openness; however, as time went on, she was criticized for her persona. It also notes how Anne Hathaway projects a contrasting personality, one of being extremely genuine and put-together. Both personalities are criticized, highlighting how female celebrities are seen as more inauthentic than men, and how they have to be very careful and calculated about how they are portrayed in order not to be judged.

    1. Enjoy world-class cinema in your own backyard! The Film House offers unique films in a modern, cinematic environment with Niagara wine and craft beer. We screen the latest documentaries, new features and classic films.

      The multi-venue model allows simultaneous programming: theatre, recital, and film. The Capitol Centre might consider diversifying (a black-box studio, a film screening room) to generate more programming flexibility.

    1. The FirstOntario Performing Arts Centre aims to engage people in exceptional live arts experiences, and to enrich the lives of citizens of St. Catharines and the Niagara region, while providing a world class venue for local, national and international artists and community arts organizations to flourish.

      Focus on urban activation: they oriented the architecture to face the street, making the building part of the public realm.

    2. The FirstOntario PAC is a cultural hub for many local arts organizations and festivals who use the facilities for their annual programming, including Niagara Symphony Orchestra, Chorus Niagara, Carousel Players, Gallery Players of Niagara, Bravo! Niagara Festival of the Arts, Garden City Comedy Festival, Essential Collective Theatre, Suitcase In Point, In The Soil Festival, Yellow Door Theatre Project, TD Niagara Jazz Festival, and Brock University's Departments of Music and Dramatic Arts, Encore Professional Music Series and Tuesday's free Music@Noon concerts.

      Value Propositions: a multi-venue cultural hub, a mix of local and national acts, and integration with university education.

    1. Building on the PAC’s decade-long success as a home for both local artists and visiting artists from across the country and around the world, the new season features innovative and creative work that inspires, unites, and drives urgent cultural conversation.

      They emphasize being a catalyst for downtown cultural/economic renewal.

    1. VENUE INFORMATION Opening its doors in fall 2015, the FirstOntario Performing Arts Centre (PAC) is a 95,000 square foot academic and cultural complex comprised of four extraordinary performance venues: Partridge Hall, Robertson Theatre, The Recital Hall and The Film House. The state-of-the-art Diamond Schmitt Architects designed facility is located in the heart of downtown St. Catharines, Ontario on the corner of St. Paul Street and Carlise Street, adjacent to Brock University’s Marilyn I. Walker School of Fine and Performing Arts.

      The seating capacity is larger than the Capitol Centre's, and each venue has a different number of seats: Partridge Hall (770), Robertson Theatre (210), the Recital Hall (300), and The Film House (199).

    1. Email: boxoffice@capitolcentre.org Phone: (705) 474-4747 or Toll Free: 1-888-834-4747 - Please leave a message and we will call you back! Tickets: Please note tickets can also be purchased ONLINE anytime! Social Media: Facebook  /CapitolCentre and/or Instagram  @capitol_centre

      -Balancing local community interest with attracting touring content.

      -Advertising and promotion can also be a challenge, since newcomers may not yet know the venue.