10,000 Matching Annotations
  1. Feb 2025
    1. it was Busa who suggested the idea of the hyperlink to IBM24Jones y Busa, Roberto Busa, S. J., and the Emergence of Humanities Computing.

      Why did people at IBM not know about the work of Bush and Nelson (concurrent or earlier), or did they simply not care?

    2. In turn, algorithms and computational processes are a natural continuation of reading and hermeneutics: using a search bar in a browser complements walking the aisles of a library, establishing virtual categories and card files makes it possible to establish hyper/intertextual relations between works, and so on.

      I find interesting the counter-position of Tudor Girba and the developers of the Glamorous Toolkit regarding making sense of code without reading it, at least not as prose, but instead using visualizations and queries, given its highly structured character.

      The digital in connection and extension with the analog reminds me of projects such as Hypercard in the world by Bret Victor and his team, where the digital and the analog are literally in that dialogue, and they use the example of the catalog and the library as prominent, exemplary sites for such connections.

      Unfortunately, with today's complicated and "separatist"/cloistered computing, the analog and digital traditions are seen in opposition rather than in complement (some inhabiting the screen and others outside it, without evident links).

    3. As the iconographic similarities between Figures 12 and 13 allude, within this grand narrative the copyist's desk is now a computing desk, the glosses with comments annotated in the margins of books are now annotations written in interoperable interchange formats, and libraries and their forms of organization are now data structures.

      And also the digital marginalia provided by Hypothesis, used for annotated reading during the evaluation of this thesis, or during the production of mine, or in several institutionalized educational exercises and in the Hackerspace, although detached from monastic interests.

    4. Interactive 7. A vector map representing the distributional semantics of the articles published up to Vol. 8 of the Revista de Humanidades Digitales

      And there is also an interest in reflections on access, democratization, and the accessibility of archives, and in general a stance in favor of open software and open science.

      Although at this zoom level, or some nearby level further out (zoom out), that stance cannot be seen in the words the map shows.

      The interactives throughout the text are very interesting. Their dependence on JavaScript, also for the accompanying texts, makes it hard to annotate them with Hypothesis. I wonder whether hypermedia tools like HTMX, Data Star, or even AlpineJS would allow part of the text to be static and embedded in the HTML output, even if it corresponds to dynamic visualizations that are presented progressively.

    5. In Reviews in Digital Humanities it becomes more evident that, in addition to an interest in digital collection projects and computational analysis, it is a journal with an explicitly postcolonial focus, interested in social justice, and for that reason it strives to review projects from diverse parts of the world and from marginalized or underrepresented groups; if we analyzed the map of the digital humanities from another publication more conservative in its topics and political stances (say, the journal Digital Scholarship in the Humanities), the map would present a different terrain and its own particular omissions.
    6. this type of computational process offers a paratext, a text that accompanies the text8Stephen Ramsay, Reading Machines: Toward an Algorithmic Criticism (Urbana: University of Illinois Press, 2011). and that enriches interpretation by providing new points of view and possible readings.

      Hence how unfortunate the term "artificial intelligence" is, when other earlier but less catchy terms, such as Engelbart's Augmented Intelligence, better captured the notion of a computing created by humans to augment them (like the paratext) rather than to replace them.

    7. the principles of distributional semantics6Magnus Sahlgren, «The Distributional Hypothesis», Rivista di Linguistica 20, n.º 1 (2008): 33-53. on which this type of processing is based are controversial, as we will discuss later, because they assume that meaning can be extracted purely from the structure and frequencies of the text.
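      A minimal sketch of the distributional hypothesis at work: meaning is approximated purely from co-occurrence counts, which is precisely the assumption under dispute here. The toy corpus and window size are illustrative choices of mine, not anything from the thesis:

```python
from collections import Counter

# Toy corpus (hypothetical): meaning is inferred only from co-occurrence.
corpus = "the cat drinks milk . the dog drinks water . the cat chases the dog".split()

vocab = sorted(set(corpus))
window = 2  # context words considered on each side

# Co-occurrence counts: vectors[w][c] = times c appears near w.
vectors = {w: Counter() for w in vocab}
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            vectors[w][corpus[j]] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb)

# "cat" and "dog" share contexts (the, drinks, chases), so their vectors
# end up closer to each other than to an unrelated word like "milk".
print(cosine(vectors["cat"], vectors["dog"]))
print(cosine(vectors["cat"], vectors["milk"]))
```

      The point of the sketch is that "cat" and "dog" come out as similar only because they share contexts; the vectors know nothing about what the words mean, which is exactly the controversial assumption.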
    8. a confirmation that it was not necessary to define them, but rather, perhaps, to exercise them, however they may be understood, and, through the development of communities, projects, and infrastructures, to see their reach and their cracks.

      An enactive or performative definition: it is defined through practice.

    1. Interactive 5, below, presents a text generator that intermixes the words of different authors who, each in their own way, have reflected on humanism in Latin America: Manuel Quintín Lame, Domingo Sarmiento, Leopoldo Zea, Oswald de Andrade, and José Vasconcelos. Using a Markov chain system, commonly applied in works of electronic literature, this generator remixes different texts and creates an amalgam that jumps between the terms used in them:

      Again, a gender bias in the published/selected material.

      An interesting small-scale digital humanities experiment.
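      A word-level Markov chain remixer of the kind this interactive describes can be sketched in a few lines; the sample fragments below are placeholders of my own, not the authors' actual texts:

```python
import random
from collections import defaultdict

# Placeholder fragments standing in for the different authors' texts.
texts = [
    "el humanismo en america es una pregunta abierta",
    "america pregunta por su propio humanismo y su voz",
    "la voz de america es una mezcla de tradiciones",
]

# Transition table: word -> list of words observed right after it.
chain = defaultdict(list)
for text in texts:
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

def generate(start, length=12, seed=None):
    """Random walk over the chain, jumping between the source texts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # dead end: restart from a random known word
            followers = list(chain)
        out.append(rng.choice(followers))
    return " ".join(out)

print(generate("america", seed=1))
```

      Because shared words such as "america" or "humanismo" occur in several fragments, the random walk hops between authors precisely at those junctions, which is what produces the amalgam effect.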

    2. Figures 9, 10, and 11 are an example of this in the context of the Colombian social uprising of 2020. The first two figures show the dispute over the memory of a public place in Popayán, Colombia, which is at once a pre-Columbian archaeological site and the location of the statue of a Spanish conquistador that was toppled by local Indigenous people in protest against the erasure of their own history. The third figure shows a statement, perhaps a slip, from a news program that draws a distinction between citizens and Indigenous people, effectively removing the latter from their status as humans within humanism.

      It reminds me of Isin and Ruppert's idea of a citizenship that goes beyond the nation-state or the city (cf. another comment on this).

    3. Table 2. Different stances regarding the authenticity of Latin American humanism and humanities

      It is impossible not to perceive a gender bias, not necessarily the thesis author's, but rather that of the literature published in Latin America on the subject.

    4. "establish a conversation" with developers, know "how hard or how easy it is to do something", and create adequate purposes and expectations about what one wants to achieve. That is, they have had to become steeped in digital thinking and culture as a necessity for their practices.

      This reminds me of the notion of "conversational knowledge about programming" mentioned in a podcast (if I am not mistaken, Future of Coding): people who, without being professional programmers, know enough to be able to talk with them around projects that require such knowledge.

    5. if we deepen this idea, we can state that in order to achieve a critical stance toward and an understanding of these contemporary societies, it is also necessary to know the inner workings of digital technologies in order to make a cultural critique of them. As Berry states, this does not imply "advocating that the existing methods and practices of computer science become hegemonic, but rather that a humanistic understanding of technology can be developed, which also implies an urgent inquiry into what is human within the computational humanities or social sciences"86David M. Berry, «Introduction: Understanding the Digital Humanities», Understanding Digital Humanities, ed. David M. Berry (Houndmills New York: Palgrave Macmillan, 2012), 9.
    6. necessarily pass through a hybridization of analog and digital media for recording information and for communal rituality. These forms of memory have particular materialities, modalities, modes of transformation, and forms of circulation that must be studied in their specificity83Richard L. MacDonald, Nick Couldry, y Luke Dickens, «Digitization and Materiality: Researching Community Memory Practice Today», The Sociological Review 63, n.º 1 (febrero de 2015): 102-20, https://doi.org/10.1111/1467-954X.12215. It would likewise be impossible to study present-day democracies without thinking about the forms of circulation of information on the internet, state and corporate surveillance, or the forms of self-recording of personality and ideology on social networks84Nick Couldry, «Surveillance-Democracy», Journal of Information Technology & Politics 14, n.º 2 (3 de abril de 2017): 182-88, https://doi.org/10.1080/19331681.2017.1309310.

      This reminds me of projects like the ones we do on living memory for linguistic revitalization in the Amazon, or for auditing the Twitter discourse of presidential candidates in Colombia, which would fall within those other forms of citizenship and of mixing analog and digital memory in our times, and which were conceived in dialogue with academia but with technologies and practices developed outside it.

    7. Quantification, seen as a complement rather than a replacement, can then prove beneficial, since it triggers a broadening of the ways of seeing culture and, therefore, an enrichment of interpretation. The union of aesthesis and mathesis implies adopting new methods and perspectives without following them blindly as a promise of resolving the crisis or of better adapting the humanities to a scientistic academia.

      Besides situating itself outside the ode to data gigantism criticized earlier. The quantitative and the algorithmic, joined with the interpretive, is happening at different scales and contexts, in different places and practices (such as hacker/maker spaces), in ways more aligned with this stance.

    8. 2) the consolidation, preponderance, visibility, and reproduction of certain methods that come from the centers of power-knowledge;

      Precisely in that sense, it would be worth making visible digital methods that come from peripheries and non-academic places, such as hacker/maker spaces.

    9. situating and understanding texts in the human world built in these basic processes returns us to the humanistic discipline of hermeneutics, of which the digital humanities are a technological incarnation"78

      And yet this fine purpose is orthogonal to the size of the databases. Other computational hermeneutics could happen, and in fact do happen, at a small scale.

    10. the algorithmic processing of large databases

      Again that ode to gigantism, so present in the digital humanities and many of its exponents.

    11. Letters are written by humans, while numbers are an extraction of a pre-given datum from the world, a fatality already written.

      Although Lisa Gitelman in "Raw Data" Is an Oxymoron criticizes this idea of the datum as objective and given, showing the relations between data and culture.

      Of course, there are objective data (the speed of light), but it is subjects, immersed in cultures, who formulate the questions that lead to them and create the measuring instruments. Data and letters are written by humans, even if we find some objective data along the way.

    12. the human at large scale, and could thus put them on a par with the standards of other forms of research in academia. Under this idea, the digital updates and revalidates the humanities.

      Some of us view with concern this ode to scale that occurs in the digital humanities and its most visible exponents (e.g., Manovich). It seems that, unless one has a supercomputer with teraflops of processing and umpteen terabytes of information, one cannot enter into dialogue with the digital humanities.

      Perhaps that is why alternatives such as permacomputing, pocket infrastructures, and convivial computing do not place themselves within DH.

    13. This defensive position, if exacerbated, can prove problematic, since it implies seeking the cause of the decay of the humanities in something outside themselves, without sufficient self-examination of their own practices and limitations, their vices and virtues.

      That is precisely what I meant in previous comments about how the defense addresses the problems raised, for although it ratifies 5 points in favor of the humanities, they do not amount to a response to many of the critiques raised here.

    14. In summary, the first is an epistemological defense, the second economic, the third aesthetic, the fourth political, and the fifth tautological (it is worth doing humanities simply because they can be done). All are adequate, though the question remains whether they will be sufficient. Different authors who take up the task of defending the humanities adopt stances that fit one of these categories: the Nussbaumian need for democracy and happiness, the Gombrichian understanding of practices of meaning, the Sloterdijkian civilizing drive, or the Heideggerian value in itself.

      However, except for 2 and 5, the rest of the arguments seem not to dialogue with the critiques made of democracy and its human-centered character, or with a genuine concern for other forms of happiness.

      The same could be said of the following paragraph and its institutionalized relation to the human, re-emphasizing places where "culture" and the "cultured" have been centered, in their most exclusionary rather than inclusive senses. Efforts to redefine these institutions, or to connect them with other extitutions (maker/hacker spaces, Malokas, etc.), precisely try to see the positive values attributed to the humanities beyond them.

    15. Nevertheless, it is precisely Human exceptionalism and the gatekeeping attitude that posthuman theories have questioned as the very cause of the humanistic decline in this sense. These theories propose, on the contrary, de-centering the Human in order to see the complex planetary relations that exist between different agents and entities64Cary Wolfe, What Is Posthumanism? (Minneapolis: University of Minnesota Press, 2010). and to make room for those who have been excluded from the concept of humanity. Along a similar line, decolonial theories have questioned this Human, with a capital H, and its discursive potential to oppress other groups by taking away their voice65Arturo Escobar, «Cultura y Diferencia: La Ontología Política Del Campo de Cultura y Desarrollo.», 2012, http://hdl.handle.net/10256/7724. If we take Aníbal Quijano's terminology, the humanities exercise a form of coloniality of knowledge insofar as they define what is worth studying, that is, what is worthy of appreciation, interpretation, and discussion within what we assume as human culture, and humanism exercises a form of coloniality of being insofar as it defines who is and who is not a Human Being66Aníbal Quijano, «Colonialidad Del Poder, Cultura y Conocimiento En América Latina», Debate 44 (1998): 227-38; Aníbal Quijano, «Colonialidad del poder, eurocentrismo y América Latina», La colonialidad del saber: eurocentrismo y ciencias sociales ; perspectivas latinoamericanas, ed. Edgardo Lander y Santiago Castro-Gómez (Buenos Aires: Consejo Latinoamericano de Ciencias Sociales, CLACSO [u.a.], 2005), 201-46.

      This resonates with several of my comments throughout the chapter. Perhaps it would be worth placing this critique earlier, even at the beginning of the chapter, so that future readers dialogue with it sooner and also ask themselves how the whole thesis carries out the dialogue with this very contemporary and pertinent critique.

    16. However, this role of moral beacon has been replaced by other forms of participation, such as public opinion through networks on the internet and technocracy in public administration. That is, the role of the humanities as the site of the domestication of man, as the rearing of power, is in doubt. Digital technologies play an essential role in this transformation, since it is networked communication, the many-to-many of the internet, that to a large extent blurs humanistic moral authority and its place as gatekeeper of culture; users themselves, prosumers or producer-consumers, can make and disseminate interpretations of the world in a decentralized way63Henry Jenkins, Confronting the Challenges of Participatory Culture: Media Education for the 21st Century (Cambridge, MA: The MIT Press, 2009)., and authoritative humanist figures, strongly tied to old media or monolithic institutions, lose salience amid media saturation.

      There are also the post-humanist discourses that do not resort to the human, nor to "man" as the center of the discursive, and the so-called "new materialities" that question the textual, or reading and writing, as the default discursive element, with its strong academicist tendencies.

    17. the Idolo Academica, or the disciplinary subdivision into domains of knowledge; the Idolo Quantitatis, or the inductivist creed that all knowledge is acquired through experimental data; the Idolo Novitatis, or the idea that research must always produce novelty; and the Idolo Temporis, or the belief that any academic inquiry follows fixed, standardized methodological rules.
    18. what C. P. Snow53C. P. Snow, The Two Cultures (London ; New York: Cambridge University Press, 1993). would call the "two cultures": the scientific and the humanistic, or, as Adela Cortina would understand it, two subcultures that "share the whole of the human baggage, that is, the sphere of curiosity about the natural world and the appreciation of symbolic systems of thought"54Cortina, «EL FUTURO DE LAS HUMANIDADES», 207.
    19. invisible universities, that is, semi-public communications in which "a sense of community arises from the shared reading of the same text"42John Seely Brown y Paul Duguid, The Social Life of Information (Boston, Massachusetts: Harvard Business Review Press, 2017), 173.

      It reminds me of an approach of the Ronin Institute and a form of association perhaps closer to that origin and not mediated by an institutional affiliation.

    20. The museum, then, similarly to the library, becomes an institution of memory conservation that makes it possible both to see the magnitude of human culture at a comprehensible scale and to produce intertextual interpretations between dissimilar objects through transversal meanings.

      It would be worth putting this idea in conversation with that of the museum as an exoticizing site of expropriation, as well as with the discussions about returning expropriated museum objects to the cultures and nations from which they were taken.

    21. Dictionaries and linguistics take advantage of the distinction between cultured and vulgar languages to enter a nationalist battle that Calvet has called glottopolitics35Louis-Jean Calvet, Lingüística y colonialismo: breve tratado de glotofagia (Buenos Aires: Fondo de Cultura Económica, 2005). In other terms, the distinction between civilization and barbarism takes on a scientific discourse when linguistic discussions rely on formal methods to define which language has the most original vestiges or is the most perfect36Ibid.; Eco, The Search for the Perfect Language.
    22. What is produced is not a study of culture understood, in general terms, as a production of any human group (in lowercase), but of Greco-Latin high culture, understood as a Western heritage that gives identity to the nascent European consciousness29Edward Bleiberg, Arts and Humanities Through the Eras. (Farmington Hills; Ipswich: Cengage Gale Ebsco Publishing, 2004). Although there is some interest in exotic cultures as encrypted containers of a premature Christianity, that is, as the primitive part of a master narrative30Thijs Weststeijn, «'Signs That Signify by Themselves'. Writing with Images in the Seventeenth Century», The Making of the Humanities. Vol 1. Early Modern Europe, ed. Rens Bod, Jaap Maat, y Thijs Weststeijn (Amsterdam: Amsterdam University Press, 2010). (as seen in Figure 7, a study by the Jesuit monk Athanasius Kircher about China). The Renaissance leaves us the program of studies based on admiration for classical antiquity, the aforementioned trivium and quadrivium, which grounds the foundations of contemporary humanist education and configures the humanities properly speaking.

      This disregard of some traditions, in favor of classical Europe and its "cradle", is what some criticize when they refer to humanism as an atavistic and colonial form of Eurocentrism.

    23. will allow us to steer our way through this multicultural world, [...] and this is precisely because the humanities are about reading and interpretation"21J. M. Coetzee, Elizabeth Costello (New York: Penguin Books, 2004), 129. This will resonate in chapter 6 with the mention of Father Roberto Busa, one of the precursors of the digital humanities, and the construction of a narrative that imagines a lineage between medieval copyists and computationally annotated texts.

      Precisely, the idea of reading beyond textuality could be critiqued through how the computational lets us think with the whole body, in space, and in community, instead of as a process of logos and the textual. Projects such as Dynamicland, Folk computing, and our own developments in Convivial Computing criticize that emphasis on the textual, which can be located both in the medieval copyists and in the computationally annotated texts.

    24. As a critique, the postcolonial philosopher Gayatri Spivak states that, under this discourse, not knowing how to read is equivalent to not being able to speak, in the sense of not being heard and not being recognized as a subject with agency of one's own15Gayatri Chakravorty Spivak, «¿Puede Hablar El Subalterno?», Revista Colombiana de Antropología 39 (1 de enero de 2003): 297-364, https://doi.org/10.22380/2539472X.1244. Here, knowing how to read is not only a literal proposition; it is also a metaphor used to refer to being able to consider what is correct, the proper path for the project of the Human.

      Hence the danger of the "Human" project in capital letters, defined from a single center and gaze, and of a trivium inherited via colonial dynamics. In that sense, Spivak's critique can resonate with a gaze better suited to stances of our own that question that tradition (see previous comment).

    25. In other terms, it insists that only those who have language are agents of their own will and can therefore truly act in the world, can be worldly. Humanism, precisely, defends literacy as a specialized skill or as a pastime of those who have leisure and a public voice, and knowledge as a way of securing the civic participation of the elites.

      This in itself places the human in that privileged place that has recently been criticized, along with the idea of everything else (the animal, the vegetal) as lacking a will of its own and, therefore, subject to human will.

      Other traditions native to the continent, before the conquest, do not stand in that place.

    26. a lettered system of differentiation that separates those who know how to read, write, and argue from those who do not; the valid democratic interlocutors from those who are not.

      Besides not taking into account oralitures, the widely extended non-textual ritual inscriptions, and forms of participation in the collective that do not hold the idea of a "Public Square".

    27. humanistic purposes: mainly, the library, the archive, the museum, the university, and, if we broaden the perspective, also the democracies. Additionally, contemporary academia has configured disciplines, that is, delimited domains of study, which develop within those institutions: history, philology, philosophy, aesthetics, linguistics, literary studies, among many others.

      What happens to the view of the 3 purposes of humanism mentioned earlier outside the institutional gaze, and even outside the nation-state? In particular, it reminds me of Isin and Ruppert's notion of citizenship, which acknowledges the critique of the notion of the nation-state and its colonizing gaze, and thinks the exercise of duties and rights beyond the constitution, in terms of 3 forces: performative, legal, and imaginative.

      How could the potential of looking at the 3 purposes of the humanities outside institutional, disciplinary, and nation-state confines dialogue with this thesis?

    28. a series of particular purposes for the humanities and humanism has been configured; the ideas I will put forward in this dissertation are based mainly on three of them: the conservation of memory, the interpretation and appreciation of culture, and participation in public life.
    29. The relation is recursive, since humanism defines the confines of the humanities, and the humanities define what can be considered good for the Human.

      Interesting. I thought of self-reference and autopoiesis.

    30. In the context that interests us, that is, Latin America, the question of the authenticity of our humanities and our humanism has been raised on multiple occasions, or, put another way, the question has been raised as to whether we are the Humans of humanism. Although this issue was not very present in the interviews I conducted, the question is critical for the project of the digital humanities, since it forces us to rethink both the kind of topics that should be addressed in projects of this field in our context and the need to build our own visions of the human.

      I find myself within this question. I even wonder about posthuman visions, or ones that question the centrality of the human in discourses that should be more holistic, hybridized, and cyborg, for example.

      There is also the concern about where what is our own lies, starting even with the name of the "discovered" continent and other designations such as Abya Yala.

    31. According to Latour1Bruno Latour, Nunca fuimos modernos: Ensayos de antropología simétrica (Madrid: Clave Intelectual, 2022)., modernity follows a contradictory double process: translation, that is, intercultural integration and hybridization, and, at the same time, purification, that is, the separation of the Human and the non-human. Indeed, the modern humanities recognize themselves as the guardians of memory, the enablers of the interpretation and appreciation of culture, the promoters of education and intellectual cultivation, the defenders of argumentation and democratic participation in the public sphere, and, even so, all those purposes are at times misdirected into fencing off what it means to be human, thereby oppressing others and diminishing their dignity.

      And even so, in what sense are the humanities the only guardians of memory? What about Indigenous, hacktivist, or other traditions that take their place in that safekeeping but do not necessarily subscribe to a humanist project, or do not put the human at the center?

    32. the paradoxical relation of the tradition of the humanities with Latin/America.

      An interesting form of writing. I wonder what kinds of tensions it makes explicit and which it overcomes, and whether it does so through dualisms.

    1. The tension between using external technologies, and importing their idiosyncrasies, or producing our own technologies, situated in the local context, with the costs and forms of work they entail.

      We inhabit this tension in the Grafoscopio community. The approach taken there has been to reconfigure technological assemblages or "stacks" so they account for local contexts and needs, and to incorporate digital metatools and interstitial programming to extend them from the boundaries/connections between the stack's components, rather than from inside the components (except in the case of the metatools, since that is precisely their function).

    2. The lines of work proposed in this dimension focus mainly on the sustainability of digital humanities communities in Latin America as art worlds and self-organized systems.

      It is precisely in the key of that sustainability, resistance, and non-uniformity that I pose the previous question.

    3. The symbiotic but not always acknowledged relation between formal communities, that is, humanistic institutions, and informal communities, such as associations and interest groups.

      Interesting. Particularly from the peripheries that may be in dialogue but do not want to be subsumed within the so-called Digital Humanities, such as hacktivisms, civic technology groups, and critical data and code studies, framed within the so-called critical studies of science, technology, and society.

    4. decolonial approaches to the humanities and the proposal of a critical framework for the digital humanities offered by Nuria Rodríguez Ortega.18

      It is in light of these decolonial readings that I am interested in knowing how the barbarian/civilized paradox is maintained.

    5. The barbarian/civilized paradox, and the opportunity this opens for building digital humanities of our own, centered on multiculturality.

      Curious to read in more detail. Particularly regarding the question of whether those inherited, colonizing binarisms can be overcome.

    6. what Bruno Latour would call compositionist13Bruno Latour, «An Attempt at a "Compositionist Manifesto"», New Literary History 41, n.º 3 (2010): 471-90, http://www.jstor.org/stable/40983881., which, according to Alan Liu, is precisely an anti-foundational strategy proper to the digital humanities: it "does not fixate on absolute foundations of knowledge or absolute refutations of such foundations, but instead creates mixed, impure compositions on the fly from multiple positions"14Alan Liu, «Toward Critical Infrastructure Studies», 2018, 8. The compositionist strategy, indeed, seeks to build common ground from the negotiation of diverse parts, and seeks to mediate and to make explicit the directions in which the field moves

      It reminds me of Wolfgang Jonas's perspective on design, in which design has no founding knowledge "beneath" that "sustains" it, but is instead sustained by the networked knowledges found beside it.

    7. Borrowing the metaphor that the biologist Enrico Coen7Enrico Coen, De las células a las civilizaciones: los principios de cambio que conforman la vida (Barcelona: Crítica, 2013). uses to talk about genetic diversity in the evolutionary journey, we can think of a sky in which clouds move in diverse directions according to the wind. If species of living beings move like that, like clouds, over millions of years, stances regarding the digital humanities do so as well, changing shape, merging, colliding, and separating according to the dynamics of their envelopes. Interactive 2 is the metaphorical representation of this movement of the field on a multidimensional plane.

      How could the interactives be made reproducible without being strongly tied to the specific publishing platform? That is, given that they are chunks of JavaScript embedded inside the Markdown code, could they be passed through converters like Pandoc and still yield relatively autonomous, portable pages?

    8. el trazado que dejan las marcas de las asociaciones entre sujetos. En este sentido, y como lo entendería el autor, el trazado es una estrategia de "escritura a mano" convencional —longhand—, en vez de taquigrafía —shorthand—, en el sentido en el que busca establecer matices que muestren que los sujetos no se constituyen por simples sistemas de oposición, sino que, por el contrario, para entenderlos hay que narrarlos, hay que contar cómo se transforman, cómo cambian de opinión y cómo son inconsistentes: hay que construir una especie de libro de viajes del discurso y la práctica:
    9. Normalmente, un envolvente tiene un ataque, es decir, un tiempo de crecimiento en el que alcanza su pico; un decaimiento en el que disminuye su intensidad, un sostenimiento, en el que se mantiene estable; y una disolución, en la que la energía que mantiene la modulación va desapareciendo, a menos que un nuevo envolvente reactive la modulación. Las asociaciones sociales, podríamos decir, también tienen envolventes, y así como en una composición musical, o mejor, en una grabación de un paisaje sonoro, las dinámicas aumentan, se sostienen, decaen o se modulan unas con otras.
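
      La estructura del envolvente que describe este pasaje (ataque, decaimiento, sostenimiento y disolución, el clásico ADSR de la síntesis sonora) puede bosquejarse en código. Este es un esbozo mínimo e hipotético en Python, con tramos lineales y parámetros inventados solo para ilustrar la metáfora:

```python
def envolvente_adsr(t, ataque=0.1, decaimiento=0.2, sostenimiento=0.6,
                    duracion=1.0, disolucion=0.3):
    """Amplitud de un envolvente ADSR en el instante t (parámetros hipotéticos)."""
    if t < 0:
        return 0.0
    if t < ataque:  # crecimiento hasta el pico
        return t / ataque
    if t < ataque + decaimiento:  # caída del pico al nivel de sostenimiento
        avance = (t - ataque) / decaimiento
        return 1.0 - avance * (1.0 - sostenimiento)
    if t < duracion:  # meseta estable
        return sostenimiento
    if t < duracion + disolucion:  # disolución: la energía va desapareciendo
        avance = (t - duracion) / disolucion
        return sostenimiento * (1.0 - avance)
    return 0.0  # silencio, a menos que un nuevo envolvente reactive la modulación
```

      Así como en la metáfora, las dinámicas aumentan, se sostienen y decaen; bastaría sumar o modular varias de estas funciones para evocar el paisaje sonoro de asociaciones que propone el texto.
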
    10. la regla es performance y lo que debe ser explicado, las problemáticas excepciones, son cualquier tipo de estabilidad en el largo plazo y en amplia escala"4Latour, Reassembling the Social, 35..

      HackBo se ha sostenido durante 14 años. ¿Qué explicaciones hay detrás?

    1. la producción de ficciones y poéticas para la indagación, las aproximaciones perfomáticas a los objetos de estudio, o las exploraciones conceptuales desde la visualidad.69Gioia Chilton y Patricia Leavy, «Arts-Based Research Practice: Merging Social Research and the Creative Arts», The Oxford Handbook of Qualitative Research, ed. Patricia Leavy (Oxford: Oxford University Press, 2014), 403-22, https://doi.org/10.1093/oxfordhb/9780199811755.001.0001. En el mismo sentido, las propuestas de la investigación basada en artes de Barone y Eisner, como la producción de conocimiento a través de medios no lingüísticos y el alumbramiento o la creación de nuevos problemas de investigación a partir de la especulación expresiva70Tom Barone y Elliot W. Eisner, Arts Based Research (Los Angeles London New Delhi Singapore Washington DC: Sage, 2012)..
    2. la investigación artística propuestas por Henk Borgdorff, especialmente los paralelismos entre los contextos del descubrimiento y la justificación en la ciencia y el arte —el laboratorio y el taller, la publicación y la obra, respectivamente—, y el lugar que el autor otorga a la especulación y el saber hacer como forma de conocimiento68Henk Borgdorff, The Conflict of the Faculties: Perspectives on Artistic Research and Academia (Amsterdam: Leiden University Press, 2012)..
    3. la teoría de los comunes o los recursos de reserva común de Ostrom y colegas67Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action, 1.ª ed. (Cambridge University Press, 1990), https://doi.org/10.1017/CBO9780511807763; Elinor Ostrom, «Beyond Markets and States: Polycentric Governance of Complex Economic Systems», American Economic Review 100, n.º 3 (2010): 641-72, https://doi.org/10.1257/aer.100.3.641..

      ¿Cómo esta teoría se refleja en la práctica, particularmente en la elección de una licencia para el repositorio de la tesis? Si bien encontré un archivo README en la raíz del repositorio con una alusión breve:

      Este repositorio es libre y abierto y se puede adaptar para crear proyectos similares.

      no dice cómo ocurre en particular esa apertura y libertad entre el centenar de opciones disponibles. Tampoco encontré un archivo LICENSE en la raíz del repositorio que explicitara la manera concreta en que estas libertades toman cuerpo.

    4. Para la dimensión de las infraestructuras, consulté textos del campo de Ciencia, Tecnología y Sociedad acerca del concepto general de infraestructura59Brian Larkin, «The Politics and Poetics of Infrastructure», Annual Review of Anthropology 42, n.º 1 (21 de octubre de 2013): 327-43, https://doi.org/10.1146/annurev-anthro-092412-155522; Susan Leigh Star, «The Ethnography of Infrastructure», American Behavioral Scientist 43, n.º 3 (noviembre de 1999): 377-91, https://doi.org/10.1177/00027649921955326; Marshall McLuhan, Understanding Media: The Extensions of Man (Cambridge, Mass: MIT Press, 1994). y en específico acerca de la historia del desarrollo de infraestructuras tecnológicas en América Latina60Rosalba Casas y Tania Pérez-Bustos, eds., Ciencia, Tecnología y Sociedad En América Latina: La Mirada de Nuevas Generaciones (Buenos Aires: Clacso, 2019); Renato Dagnino, «El Pensamiento En Ciencia, Tecnología y Sociedad En Latinoamérica : Una Interpretación Política de Su Trayectoria», Redes 7, n.º 3 (1996), http://ridaa.unq.edu.ar/handle/20.500.11807/504. y la formación de brechas digitales61Andrés Lombana Bermudez, «La Evolución de Las Brechas Digitales y El Auge de La Inteligencia Artificial (IA)», Revista Mexicana de Bachillerato a Distancia 10, n.º 20 (15 de agosto de 2018): 17, https://doi.org/10.22201/cuaed.20074751e.2018.20.65884.
    5. como Latino/Americanos, civilizados/bárbaros, podríamos decir— como por nuestra propia relación con las tecnologías digitales -las infraestucturas, formas de organización y conocimientos con los que contamos—

      Valdría la pena problematizar estos binarios. ¿Pueden ser superados?

    6. El referente externo es necesario para establecer un diálogo global, inevitable en las sociedades contemporáneas, conectadas por internet y relativamente abiertas al acceso de la riqueza cultural de la humanidad, justamente gracias a las tecnologías digitales. El desarrollo local es necesario para construir especifidades en la interpretación de nuestras formas de cultura y para dar lugar a formas situadas de conocimiento o formas ampliadas y multiculturales de entender lo humano.

      Algo similar pasa en el caso de Grafoscopio, en el que estamos en diálogo con los referentes internacionales, pero sin estar subsumidos en ellos. Mostrar las resonancias y las distancias ayuda a ubicar esa tensión discursiva constructiva.

    7. se han indagado y puesto en disputa las visiones canónicas de las humanidades digitales y sus formatos estándares en otras publicaciones —los debates— con respecto a sus alcances, contradicciones y exclusiones25Domenico Fiormonte, Sukanta Chaudhuri, y Paola Ricaurte, eds., Global Debates in the Digital Humanities (Minneapolis: University of Minnesota Press, 2022); Matthew K. Gold, ed., Debates in the Digital Humanities (Minneapolis: Univ Of Minnesota Press, 2012); Matthew K. Gold y Lauren F. Klein, eds., Debates in the Digital Humanities 2019 (Minneapolis London: University of Minnesota Press, 2019).. También se han producido historiografías alternativas que cuestionan la gran narrativa de las humanidades digitales construida desde centros hegemónicos como Estados Unidos y Europa26Dorothy Kim y Adeline Koh, eds., Alternative Historiographies of the Digital Humanities (Santa Barbara: Punctum Books, 2021)., o que dan cuenta de nuevos horizontes para las humanidades digitales en clave de lecturas humanísticas de los nuevos medios27Fiormonte, Numerico, y Tomasi, The Digital Humanist., o que ponen en cuestión las relaciones de poder globales en el campo28Fiormonte, Chaudhuri, y Ricaurte, Global Debates in the Digital Humanities..

      Debido a la perspectiva latinoamericana a la que apuesta esta investigación, sería bueno contar con un desarrollo acerca de cómo se ubica esta tesis respecto a estas historias difractivas y no anglo-europeas de las Humanidades Digitales y los campos relacionados.

    8. El mapa a continuación (Interactivo 1) muestra los lugares que presentan mayor interés de búsqueda de los términos humanidades digitales, humanidades digitais, y digital humanities en Google Trends. Es decir, las traducciones del término en español, portugués e inglés, respectivamente. Como se observa allí, el término en inglés tiene un amplio alcance en el globo.

      Sería bueno que el mapa incluyera algún tipo de convención de calor que indicara no solo el idioma de la búsqueda por color, sino también la cantidad de ítems buscados. De igual manera, el enlace a Google Trends aparece roto y tiene fechas relativas al día de la búsqueda (today -5) en lugar de absolutas, con las fechas exactas.

    1. El capítulo 10 ofrece una serie de reflexiones sobre la producción de este libro digital, el sistema de diseño que lo soporta, los múltiples componentes interactivos contenidos en él y una elaboración acerca de su potencial argumentativo como investigación creación.

      Interesante cómo se explicitan las materialidades digitales detrás de la publicación multiformato como parte del argumento mismo del libro.

    2. El capítulo 6 habla de distintos modos de relacionamiento con lo digital en las humanidades: instrumentales, de lo digital como cultura, activistas y de lo digital como medio de expresión. El capítulo 7 habla de la formación de comunidad en las humanidades digitales, sus dificultades y potencialidades. El capítulo 8 habla de las infraestructuras, sus problemas y líneas de trabajo para robustecerlas

      Es raro que, en ese relacionamiento activista y esa preocupación por las infraestructuras y las comunidades, las comunidades hacktivistas no parezcan ser mencionadas, ni siquiera por contraste con la red de humanidades digitales.

    3. mi recomendación es leer el libro digital en su condición multimedial, en el sitio web, y, si se echa en falta, revisar la versión imprimible en pdf, que tiene otras prestaciones, limitaciones y posibilidades.

      Dado que ya existe la versión web/HTML, la impresa/PDF podría alimentarse, en ediciones posteriores, de un maquetado (tipo Tufte, por ejemplo) que haga explícitos los vínculos entre los dos formatos.

    4. como estrategia formal para recorrer las dimensiones y sus tensiones, este trabajo está escrito y programado como un libro digital; está compuesto por una serie de ensayos interactivos, en el sentido en el que, además del texto, el proyecto usa visualizaciones de datos, simulaciones, videos, imágenes, piezas artísticas, y otros elementos propiamente digitales que contribuyen a los argumentos que se presentan. Es decir, no son decoraciones ni elementos cosméticos, sino que son parte del andamiaje conceptual general. Como desarrollé en esta introducción, mi incursión en las humanidades digitales viene de la creación artística computacional, y por eso tiene sentido que la reflexión y los argumentos también se nutran de esa prácica, y que las posibilidades de lo digital como y para la cultura se vean y no solo se enuncien. En esa línea, esta disertación se enmarca como una investi
    5. "los investigadores que llevan a cabo investigación exploratoria deben ser creativos, de mente abierta y flexibles: adoptar posturas investigativas y explorar todas las fuentes de información [...] Ellos hacen preguntas creativas y toman ventaja de la serendipia (es decir, factores inesperados y fortuitos que tienen implicaciones amplias)"2
    6. la tradición humanística, o la inserción de las tecnologías y culturas digitales en las humanidades y la autenticidad del humanismo latinoamericano; los modos de relacionamiento con lo digital, o las formas en las que se entiende, usa y estudia lo digital desde las humanidades; la construcción de comunidad, o las formas de organización formales e informales que configuran a las humanidades digitales; y las infraestructuras, o las formas de facilitación del trabajo humano, natural y computacional, sus limitaciones y posibilidades
    7. En efecto, hablar de la Red Colombiana de Humanidades Digitales sería incompleto sin mencionar también a la Asociación Argentina de Humanidades Digitales y a la RedHD Mexicana, que tienen búsquedas y problemas similares, han servido como referentes organizativos y epistemológicos, y han sido aliadas y soportes importantes en muchos proyectos. O a proyectos transnacionales que conforman comunidades de prácticas y aprendizaje, o grandes instituciones que soportan las infraestructuras tecnológicas y organizativas de las humanidades digitales.

      ¿Qué pasa con las comunidades de práctica pequeñas o las redes conformadas informalmente, como los espacios hacker/maker en distintas latitudes de Latinoamérica? ¿Por qué ellas no están listadas en esas maneras propias de hacer y entender que enlazan a este párrafo con el siguiente? ¿Se refiere a la denominación dentro de esta red o a la temática de humanidades digitales?

    8. Así, la programación creativa me llevó, a su vez, a descubrir que también existían procesos computacionales que podían usarse no solo para crear obras de arte sino también como ayuda para la interpretación de objetos culturales: algoritmos de procesamiento de textos e imágenes, colecciones interactivas con miles de piezas digitalizadas, manipulación y visualización de datos. Así como es posible analizar una obra a partir de lecturas recurrentes e infinitas minucias, es posible ver esa obra en un contexto amplio, como un punto datificado en una nube de produccionies humanas. Por mi propia formación en semiótica, que se centra en estudiar los problemas acerca de cómo damos sentido a los diversos signos del mundo, he tenido una aproximación ambivalente a estos procesos computacionales, que se hace evidente a lo largo de la disertación: a veces es optimista y a veces desconfiada
    9. Siguiendo el espíritu de Shiffman, entendí que es posible una aproximación amable y tranquila al código y a la lógica digital, y que, bien aprovechada, esa lógica puede ser útil para crear o para entender las complejidades de la cultura humana. La continuación de mi inquietud consistió en combinar mi vida en la academia con un aprendizaje desde el hacer, el error y la especulación a través del código. Y mi propio navegar en las culturas digitales me mostró que sus complejidades son dignas de un análisis sofisticado y profundo.
    10. Las formas culturales que se desprenden de estas posibilidades de lo digital son también infinitamente ricas: por ejemplo, auténticos sistemas de valor para la crítica de arte del pixel art, grupos de fans que crean colecciones e investigaciones sobre sus fandoms de formas tan sofisticadas como las de un humanista medievalista, piezas de código que borran los límites entre la colaboración humana y las máquinas, etc. Lo digital, la computación y la internet son una continuación de la compleja cultura humana.
    11. De hecho, Shiffman en sus videos suele equivocarse de formas que van desde errores de digitación hasta romper el código por completo, pero asume esas equivocaciones con una candidez que le quita la mistificación y el misterio a la programación; de cierta forma, la humaniza. Es decir, escribir código es difícil, pero superable.
    12. Sin embargo, en realidad, la aproximación pedagógica del libro resultó desastrosa, y no solo impidió que aprendiera a programar en ese momento, porque me produjo una desmotivación que me costó superar, sino que creó para mí una mistificación alrededor del código, una especie de sensación de memorizar sin aprender y de que los procesos computacionales ocurrían como por arte de magia. La manera difícil era difícil arbitrariamente, como una especie de disciplina o rigor militar que había que seguir ciegamente, o como una dieta milagrosa con la que, torturándose, eventualmente se alcanza la figura que se quiere a costa de perder la energía y la salud. Haber escogido ese libro como puerta de entrada al mundo de la programación me hizo abandonar mi proyecto en ese momento, y me hizo pensar que ese nivel de detalle no era para mí y que debía quedarme con las interfaces del software prefabricado y resignarme a sus limitaciones.
    13. Desde que recuerdo, me ha interesado saber de su funcionamiento físico y su software, especular sobre los algoritmos que hacen que las cosas anden, abrir las carcasas, intentar reordenar los cables, cambiar datos en ejecutables, dañar aparatos; principalmente ha sido una aproximación ingenua. Como mi interés es más que todo creativo, he buscado muchas veces hacer algo diferente a lo que el software permite o salirme del modelo de uso original de los diseñadores para adaptarlo y obtener algún efecto expresivo que está en mi cabeza. Usando programas de todo tipo —desde Flash hasta Pure Data—, he tratado de hacer imágenes, música, textos, animaciones, y ver qué cosas permite un computador que en otros medios sería difícil o imposible.
    1. provinenen de colecciones digitales y están en el dominio público o cuentan con licencias Creative Commons que permiten su uso libre bajo condiciones que encajan con las de este proyecto.

      provienen

      Sin embargo, al no estar explícito el licenciamiento de esta tesis y de su código fuente, no es posible saber si esas condiciones efectivamente encajan o no con las de este proyecto.

    2. Este formato adicional puede permitir formas de lectura que, paradójicamente, no son sencillas en el metamedio digital, como la portabilidad, la posibilidad de hacer anotaciones en las márgenes o de subrayar. De este modo, esta disertación puede existir simultáneamente como texto convencional y como pieza digital interactiva.

      De hecho, el formato digital web (en contraposición al digital PDF) permite anotaciones que no sólo pueden ser hechas en los márgenes, como esta, sino que habilitan una conversación en línea abierta, como indicaba en otro comentario. Esto implica repensar el formato y la infraestructura de publicación para habilitar dichas ramas de lectura por omisión (como indiqué allí).

    3. Los principios que sigo para la creación de visualizaciones de datos parten de un método que yo mismo elaboré, pensado específicamente para las humanidades digitales, y cuyos principios están publicados en artículo titulado Signos visuales a escala humana23Sergio Rodríguez Gómez, «Signos Visuales a Escala Humana: Una Clasificación de Métodos de Visualización de Datos y Una Reflexión Sobre Sus Alcances Para La Investigación Humanística», Revista de Humanidades Digitales 6 (26 de noviembre de 2021): 64-84, https://doi.org/10.5944/rhd.vol.6.2021.30734.. El método está basado en la triple clasificación de los signos propuesta por el semiólogo y filósofo Charles Sanders Peirce: signos icónicos, indéxicos y simbólicos
    4. La visualización de datos es un método que consiste en representar datos, que suelen ser difíciles de comprender por su volumen o su nivel de abstracción, de forma gráfica de una manera en la que alcancen escala humana y se facilite su entendimiento. Para lograrlo se suelen usar una serie de principios computacionales que transforman los datos crudos en diversos elementos visuales, con marcas y canales modificados, que se disponen en coordenadas espaciales22Jacques Bertin, Semiology of Graphics: Diagrams, Networks, Maps (ESRI Press, 2011), https://books.google.com?id=X5caQwAACAAJ; Leland Wilkinson, The Grammar of Graphics (Springer Science & Business Media, 2013), https://books.google.com?id=ZiwLCAAAQBAJ..
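
      La transformación de datos crudos en marcas y canales visuales se puede ilustrar con un juguete mínimo e hipotético en Python (no es el método de la tesis): cada fila de datos se convierte en una marca (una barra de caracteres) y su magnitud se codifica en el canal de longitud, llevando así los números a escala humana.

```python
def grafica_de_barras_ascii(datos, ancho=20):
    """Marca: la barra de caracteres; canal: su longitud codifica la magnitud."""
    maximo = max(datos.values())
    lineas = []
    for etiqueta, valor in datos.items():
        largo = round(valor / maximo * ancho)  # transformación dato -> canal visual
        lineas.append(f"{etiqueta:>12} | {'#' * largo} {valor}")
    return "\n".join(lineas)

# Valores inventados, solo para la ilustración
print(grafica_de_barras_ascii({"español": 50, "inglés": 100, "portugués": 25}))
```
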
    5. Es decir, es un ejercicio de programación exploratoria, como lo propone Nick Montfort21Nick Montfort, Exploratory Programming for the Arts and Humanities (Cambridge, Massachusetts: The MIT Press, 2016)., en el sentido en el que se usa la programación como medio expresivo y se aprovecha para producir especulaciones más que verdades objetivistas o se usa como un método experimental para recorrer conjuntos de datos de formas inesperadas. Así, en los capítulos se encuentran sketches interactivos cuya función es ilustrar o evocar ideas que se desarrollan en el texto, o que añaden una búsqueda estética y formal a conceptos o narrativas en juego
    6. Me aproximo a la escritura como investigación como lo elabora Laurel Richardson, es decir, desarrollando procesos analíticos en la medida en la que se escribe y se encuentran posibles caminos de indagación20Laurel Richardson y Elizabeth Adams St. Pierre, «Writing: A Method of Inquiry», The SAGE Handbook of Qualitative Research, ed. Norman K. Denzin y Yvonna S. Lincoln (Los Angeles: SAGE, 2018).. Estos caminos pueden abrir encuentros fortuitos, coincidencias, ideas inesperadas que, junto a los datos, se integran a la narrativa general y la argumentación. El método cualitativo de la Zettelkasten, comentado antes, es en definitiva el insumo del que finalmente se nutre la escritura. Como afirma Richardson, la escritura como investigación toma de muchos géneros textuales y se nutre de múltiples voces para configurar un proceso de cristalización más que de triangulación.
    7. el documento completo se enmarca principalmente en las formas de producción del netart, o arte pensado específicamente para presentarse a través de exploradores de internet. Su existencia en internet también permite que la disertación sea leída y circule fácilmente, pues puede compartirse en múltiples dispositivos. Es decir, lo producido aquí se enmarca dentro de lo que Borgdorff llamaría un contexto de justificación18Borgdorff, The Conflict of the Faculties. tanto académico como artístico, pues encaja dualmente en un formato apto para el estándar universitario y también para un público general interesado en profundizar en el mapa de las humanidades digitales.

      Mucho de este carácter dual se puede apreciar en el cuidado estilo escritural, que es accesible a un público general sin dejar de ser riguroso para uno especializado.

      Por ello recomendaría que se habiliten, por omisión en la versión web, sistemas de lectura hipertextual, como Hypothesis y ramas de lectura (versiones que puedan ser bifurcadas y activadas para el comentario público, pero que habiten enlaces distintos, como es posible usando Fossil SCM). No sé si la infraestructura lo permita, a pesar del formato Markdown detrás, debido a llamados a JavaScript que no son muy portables, al menos en mis experimentos preliminares.

    8. En otros términos, el sustrato y medio expresivo es, justamente, el medio digital, y se usa como metareflexión de las tensiones de las humanidades digitales. Como afirma Johanna Drucker, "necesitamos tomar del reto de desarrollar expresiones gráficas enraizadas en y apropiadas para la actividad interpretativa"15Johanna Drucker, «Humanities Approaches to Graphical Display», Digital Humanities Quarterly 5, n.º 1 (2011).. Este libro digital apunta en esa dirección y, en la medida en la que hace uso de elementos interactivos y multimodales, busca hacer un aporte novedoso a las humanidades y los estudios de la comunicación y las posibilidades creativas como medio para el desarrollo investigativo.
    9. A esto, Manovich14Lev Manovich, Software Takes Command: Extending the Language of New Media (New York; London: Bloomsbury, 2013). añade que el metamedio digital imita pero también hibridiza y extiende los medios tradicionales con prestaciones que en su forma original eran imposibiles.

      Precisamente, esta posibilidad no comienza con Manovich: varios de los autores detrás del Dynabook (Kay, Goldberg, Ingalls) ya hablaban de estas características propias, e incluso precursoras como Ada Lovelace hablaban ya de un medio cuya música son las ideas.

    10. Como lo afirman Kay y Goldberg, el medio digital es, de hecho, un metamedio. Es decir, un medio que imita a los demás medios13Alan Kay y Adele Goldberg, «Personal Dynamic Media», The New Media Reader, ed. Noah Wardrip-Fruin y Nick Montfort (Cambridge, Mass: MIT Press, 2003), 391-404..

      Diría que la acepción de Goldberg y Kay va más allá de la imitación (vía la simulación computacional) e incluye atributos propios (por ejemplo, la programación).

    11. la acción de la persona que lee modifica la pieza, y piezas multimodales que involucran imágenes, video y sonido. La inclusión de estas piezas parte de la idea de la especificidad del medio, introducida por Clement Greenberg como reflexión con respecto al arte neoxpresionista de mitades del siglo pasado12Clement Greenberg, «Hacia un nuevo Laocoonte Towards a Newer Laocoon», Revista Co-herencia 17, n.º 33 (2020): 19+, https://link.gale.com/apps/doc/A645242161/IFME?u=googlescholar&sid=googleScholar&xid=150312cc..
    12. Es, en términos de Borgdorff11Henk Borgdorff, The Conflict of the Faculties: Perspectives on Artistic Research and Academia (Amsterdam: Leiden University Press, 2012)., una investigación a través de las artes. La razón principal de esto es doble: primero, los elementos creativos son parte de los argumentos que se ponen en juego y, segundo, la propia escritura de la disertación es en sí creativa, sin que esto choque con la expectativa y los estándares de una investigación académica.
    13. Las incrustaciones de palabras, o word embeddings, son un método que consiste en derivar, por medio de algoritmos diversos, representaciones numéricas vectoriales de los términos presentes en un corpus, de acuerdo con sus relaciones contextuales7Michael Gavin et al., «Spaces of Meaning: Conceptual HIstory, Vector Semantics, and Close Reading», Debates in the Digital Humanities 2019, ed. Matthew K. Gold y Lauren F. Klein (Minneapolis London: University of Minnesota Press, 2019), 243-67.. Este método sigue los principios de la semántica distribucional8
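
      Un bosquejo mínimo e hipotético del principio distribucional que está detrás de estos métodos: representar cada término por los conteos de sus vecinos de contexto y comparar los vectores resultantes. Los algoritmos reales (word2vec, GloVe, etc.) son mucho más sofisticados, pero la intuición de fondo es la misma:

```python
from collections import Counter
from math import sqrt

def incrustaciones_por_coocurrencia(corpus, ventana=2):
    """Vector disperso por término: conteos de las palabras en su ventana de contexto."""
    vectores = {}
    for oracion in corpus:
        palabras = oracion.lower().split()
        for i, palabra in enumerate(palabras):
            contexto = palabras[max(0, i - ventana):i] + palabras[i + 1:i + 1 + ventana]
            vectores.setdefault(palabra, Counter()).update(contexto)
    return vectores

def similitud_coseno(u, v):
    """Similitud coseno entre vectores dispersos (Counter devuelve 0 si falta la clave)."""
    punto = sum(u[k] * v[k] for k in set(u) | set(v))
    norma = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return punto / norma if norma else 0.0
```

      En un corpus de juguete donde «perro» y «gato» aparecen en contextos idénticos, su similitud coseno resulta mayor que la de términos que no comparten contexto.
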
    14. Los métodos digitales tienen como objetivo usar las marcas o rastros dejados por sujetos en medios digitales —como redes sociales, blogs, revistas, etc.— para interpretar los fenómenos sociales subyacentes a ellos y explorar cómo las propias plataformas los definen y determinan. Las incrustaciones de palabras y los análisis de redes pueden verse como estrategias para la interpretación de estos rastros en las prácticas de las humanidades digitales que dan cuenta de distintas formas de interacción, organización y conceptualización del campo. Las visualizaciones, como método complementario, permiten analizar los rastros con mayor facilidad, y así detectar patrones y categorías generales.
    15. todos los métodos cualitativos son complementarios, y se soportan entre sí a través de puntos de conexión que combinan la teoría propuesta por otros autores con mi propia observación y análisis.
    16. Específicamente, métodos de análisis textual basados en incrustaciones de palabras y análisis de redes. Estas estrategias se enmarcan dentro de una más amplia, llamada métodos digitales, o digital methods, desarrollada principalmente en la Universidad de Ámsterdam6Richard Rogers, Digital Methods (Cambridge, Massachusetts: The MIT Press, 2013); Richard Rogers, Doing Digital Methods (Thousand Oaks, CA: SAGE Publications, 2019); Tommaso Venturini et al., «A Reality Check(List) for Digital Methods», New Media & Society 20, n.º 11 (noviembre de 2018): 4195-4217, https://doi.org/10.1177/1461444818769236..
    17. observar de primera mano problemas acerca de la sostenibilidad de las infraestructuras en humanidades digitales, como se explora en detalle en el capítulo 8, y, por otra, me ha permitido desarrollar elementos de diseño y creación que he podido aplicar a los elementos interactivos de este libro digital.

      Interesante. Algo similar ocurrió en mi caso con el desarrollo de Grafoscopio.

    18. las dificultades del voluntarismo, la producción de infraestructuras propias, las oportunidades de colaboración entre comunidades informales y formales, y las brechas idiomáticas en las humanidades digitales.

      Todas esas dificultades resuenan mucho con lo que hemos experimentado en HackBo y Grafoscopio.

    19. blog, y como coordinador del Club de Programación de la Red.

      Una tesis muy rica en referencias. Creo que los hiperenlaces en la versión PDF, e incluso en la versión web, no le hacen justicia. Quizás otro tema impreso y web, como Tufte CSS, pueda resaltar toda esa riqueza en la marginalia en futuras ediciones de la tesis (luego del grado, por supuesto), mostrando algunas capturas de pantalla, a modo de provocación, de los enlaces a los que se hace referencia acá.

    20. El protocolo completo se puede consultar aquí.

      Por ejemplo, frente a la pregunta:

      Dimensión de la epistemología digital. ¿Si tuviera que posicionarse en este eje de la epistemología de las HD, en qué lugar se ubicaría?

      Yo no me ubicaría en ninguno de los dos extremos, ni en la mitad. Precisamente siento que ese enunciado no recoge la postura de académicos activistas, como yo, y me pregunto si no habría muchos en el mismo lugar.

    21. realizan activismo o prácticas comunitarias que involucran tecnologías digitales, o crean usando e indagando conceptos digitales o herramientas computacionales.

      ¿Cómo se entiende entonces el activismo realizado por estas personas? Las humanidades digitales son un término abarcante, que se puede usar para quienes no se reconocen como tales, y se discute ampliamente cómo da cuenta de la dualidad mencionada. Sin embargo, el activismo y las prácticas comunitarias parecieran quedar subsumidos dentro de ese término sin que se discuta por qué: el caso de quienes no se reconocen como humanistas digitales (por crítica o apatía hacia el humanismo como tradición), pero sí como activistas, no se alcanza a recuperar hasta el momento y se pasa de largo.

    22. activistas, miembros de organizaciones que trabajan en la intersección entre el pensamiento humanístico, las humanidades y las tecnologías digitales.

      ¿Cuáles activistas? Es una perspectiva que no he podido detectar claramente hasta el momento.

      La mención a las personas entrevistadas muestra una filiación principalmente al mundo académico (salvo la persona asociada a las escuelas digitales, que para mí sería más bien miembro de una organización de base) y no está clara una doble condición de académicos y activistas, por ejemplo. Quizás muchos tengan esa doble condición, pero precisamente el no enunciarla hace parte de ese carácter más bien tácito del activismo, que no se percibe con claridad en las genealogías de las humanidades digitales ni en las posturas y materialidades críticas sobre las mismas (que son más académicas que activistas).

    23. El método de la teoría fundamentada consiste en el proceso de construir teorías sobre fenómenos sociales basadas en las propias percepciones y entendimientos del mundo de los participantes del estudio a través del registro y análisis iterado de sus comunicaciones y acciones1Virginia Monge Acuña, «La Codificación En El Método de Investigación de La Grounded Theory o Teoría Fundamentada», Innovaciones Educativas 17, n.º 22 (1 de julio de 2015): 77-84, https://doi.org/10.22458/ie.v17i22.1100..
    24. Esta investigación se fundamenta en una metodología de investigación mixta y multimétodo, es decir, es una mezcla de métodos cualitativos, métodos digitales y de investigación-creación. Los métodos cualitativos comprenden teoría fundamentada, revisión bibliográfica y de proyectos, y observación participante; los métodos digitales comprenden análisis de textos, análisis de redes y visualización de datos; y la investigación-creación comprende escritura como investigación y piezas de programación creativa.

      Esta investigación se fundamenta en una metodología de investigación mixta y multimétodo, es decir, es una mezcla de:

      • métodos cualitativos:
        • teoría fundamentada,
        • revisión bibliográfica y de proyectos,
        • y observación participante;
      • métodos digitales:
        • análisis de textos,
        • análisis de redes,
        • y visualización de datos;
      • e investigación-creación:
        • escritura como investigación
        • y piezas de programación creativa.
    1. Here is a time-stamped summary of the key ideas about the microbiota, based on the transcript of the France Culture video:

      • 0:00-1:10: Introduction to the gut microbiota, made up of billions of micro-organisms (bacteria, viruses, yeasts) housed in our intestines and interacting with our brain. The microbiota fascinates researchers because its imbalances could explain certain digestive, inflammatory, or neurological diseases. The idea is to modify it, or even transplant it, to treat various pathologies.

      • 1:10-2:20: Definition of the gut microbiota as the set of micro-organisms colonizing our digestive tract from birth. These micro-organisms receive room and board in exchange for services rendered to our health. The composition of the microbiota varies with the intestinal environment, for instance between the upper intestine and the colon.

      • 2:20-3:15: Each individual has a specific microbiota, a bit like fingerprints. Although there are differences across regions of the world, notably lower diversity in developed countries compared with traditional populations, there are also common features. The microbiota forms after birth, during the first interactions with the microbial world.

      • 3:15-4:20: Vaginal or cesarean birth influences the baby's initial microbiota. The microbiota evolves and matures until the age of 3 to 5, in parallel with the development of the immune system. With age, disturbances can occur.

      • 4:20-5:00: The microbiota plays an important role in immunity, stimulating and educating our defense system. An early imbalance of the microbiota can increase the risk of later developing immunity-related diseases.

      • 5:00-5:49: The discovery of the microbiota's role is recent, because gut bacteria are hard to culture. The advent of molecular biology and DNA sequencing made it possible to analyze the gut microbiota from the 2000s onward.

      • 5:49-7:14: Many factors affect the gut microbiota, notably diet, antibiotic exposure, and place of residence. Diet is the most important environmental factor.

      • 7:14-8:07: Disturbances of the microbiota can play a role in chronic inflammatory bowel diseases. An unbalanced microbiota sends altered signals to the immune system, leading to inappropriate activation. The microbiota of patients with these diseases is altered in both composition and function. The microbiota's role in irritable bowel syndrome is less clear.

      • 8:07-9:02: The gut communicates with the brain bidirectionally. Bacteria produce metabolites that can reach the brain via the general circulation, thereby influencing its functioning. At least 30% of the molecules present in the blood are produced by bacteria or derived from their transformations.

      • 9:02-10:00: The microbiota is involved in various neurological diseases, diabetes, obesity, cancers, and rheumatic diseases. However, its role varies from one disease to another. A good diet, rich in plant fiber (fruits and vegetables), is essential for a healthy microbiota. Ultra-processed foods, red meat, and cured meats should be avoided. Fermented foods can be beneficial.

      • 10:00-10:53: Probiotics as prevention are not necessarily needed; it is preferable to prioritize a good diet. The impact of organic food on the microbiota is poorly documented. Tobacco can positively influence the microbiota upon quitting, while alcohol has more indirect effects. The tests currently available for analyzing the microbiota have no clinical value.

      • 10:53-12:00: Fecal transplantation consists of replacing an altered microbiota with that of a healthy subject. This practice is ancient, used notably in Chinese medicine. Veterinarians also use it. Donors must pass numerous tests to avoid transmitting diseases.

      • 12:00-13:03: Fecal transplantation is performed through natural routes, after an intestinal cleanse. It can be done by mouth (capsules) or from below (colonoscopy, enema). There is no rejection because no immunosuppressive treatment is given. The transplant's effectiveness depends on the donor and the recipient.

      • 13:03-14:38: Fecal transplantation is 90% effective for recurrent Clostridium difficile infections. In other situations, research is ongoing. The microbiota is only one health factor among others. At-home fecal transplantation is strongly discouraged because of the risks of disease transmission and of worsening the patient's condition. Fecal-graft tourism is likewise discouraged.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      Summary:

      This study shows a new mechanism of GS regulation in the archaeon Methanosarcina mazei and clarifies the direct activation of GS activity by 2-oxoglutarate, thus featuring another way in which 2-oxoglutarate acts as a central status reporter of C/N sensing.

      Mass photometry and single particle cryoEM structure analysis convincingly show the direct regulation of GS activity by 2-OG promoted formation of the dodecameric structure of GS. The previously recognized small proteins GlnK1 and Sp26 seem to play a subordinate role in GS regulation, which is in good agreement with previous data. Although these data are quite clear now, there remains one major open question: how does 2-OG further increase GS activity once the full dodecameric state is achieved (at 5 mM)? This point needs to be reconsidered.

      Weaknesses:

      It is not entirely clear, how very high 2-OG concentrations activate GS beyond dodecamer formation.

      The data presented in this work are in stark contrast to the previously reported structure of M. mazei GS by the Schumacher lab. This is very confusing for the scientific community and requires clarification. The discussion should consider possible reasons for the contradictory results.

      Importantly, it is puzzling how Schumacher could obtain an apo-structure of dodecameric GS. If 2-OG is necessary for dodecamer formation, this should be discussed. If GlnK1 doesn't form a complex with the dodecameric GS, how could such a complex be resolved there?

      In addition, the text is in principle clear but could be improved by professional editing. Most obviously there is insufficient comma placement.

      We thank Reviewer #1 for the professional evaluation and for raising important points. We will address those comments in the updated manuscript and especially improve the discussion with respect to the two points of concern.

      (1) How can GlnA1 activity be further stimulated by increasing 2-OG after the dodecamer is already fully assembled at 5 mM 2-OG?

      We assume a two-step requirement for 2-OG, the dodecameric assembly and the priming of the active sites. The assembly step is based on cooperative effects of 2-OG and does not require the presence of 2-OG in all 2-OG-binding pockets: 2-OG-binding to one binding pocket also causes a domino effect of conformational changes in the adjacent 2-OG-unbound subunit, as also described for Methanothermococcus thermolithotrophicus GS in Müller et al. 2023. Due to the introduction of these conformational changes, the dodecameric form becomes more favourable even without all 2-OG binding sites being occupied. With higher 2-OG concentrations present (> 5mM), the activity increased further until finally all 2-OG-binding pockets were occupied, resulting in the priming of all active sites (all subunits) and thereby reaching the maximal activity.

      (2) The contradictory results with previously published data on the structure of M. mazei by Schumacher et al. 2023.

      We certainly agree that it is confusing that Schumacher et al. 2023 obtained a dodecameric structure without the addition of 2-OG, which we claim to be essential for the dodecameric form. 2-OG is a cellular metabolite that is naturally present in E. coli, the heterologous expression host both groups used. Since our main question focused on analysing the 2-OG effect on GS, we have performed thorough dialysis of the purified protein to remove all 2-OG before performing MP experiments. In the absence of 2-OG we never observed significant enzyme activity and always detected a fast disassembly after incubation on ice. We thus assume that a dodecamer without 2-OG in Schumacher et al. 2023 is an inactive oligomer of a once 2-OG-bound form, stabilized e.g. by the presence of 5 mM MgCl2.

      The GlnA1-GlnK1-structure (crystallography) by Schumacher et al. 2023 is in stark contrast to our findings that GlnK1 and GlnA1 do not interact as shown by mass photometry with purified proteins. A possible reason for this discrepancy might be that at the high protein concentrations used in the crystallization assay, complexes are formed based on hydrophobic or ionic protein interactions, which would not form under physiological concentrations.

      Reviewer #2 (Public Review):

      Summary:

      Herdering et al. introduced research on an archaeal glutamine synthetase (GS) from Methanosarcina mazei, which exhibits sensitivity to the environmental presence of 2-oxoglutarate (2-OG). While previous studies have indicated 2-OG's ability to enhance GS activity, the precise underlying mechanism remains unclear. Initially, the authors utilized biophysical characterization, primarily employing a nanomolar-scale detection method called mass photometry, to explore the molecular assembly of Methanosarcina mazei GS (M. mazei GS) in the absence or presence of 2-OG. Similar to other GS enzymes, the target M. mazei GS forms a stable dodecamer, with two hexameric rings stacked in tail-to-tail interactions. Despite approximately 40% of M. mazei GS existing as monomeric or dimeric entities in the detectable solution, the majority spontaneously assemble into a dodecameric state. Upon mixing 2-OG with M. mazei GS, the population of the dodecameric form increases proportionally with the concentration of 2-OG, indicating that 2-OG either promotes or stabilizes the assembly process. The cryo-electron microscopy (cryo-EM) structure reveals that 2-OG is positioned near the interface of two hexameric rings. At a resolution of 2.39 Å, the cryo-EM map vividly illustrates 2-OG forming hydrogen bonds with two individual GS subunits as well as with solvent water molecules. Moreover, local side-chain reorientation and conformational changes of loops in response to 2-OG further delineate the 2-OG-stabilized assembly of M. mazei GS.

      Strengths & Weaknesses:

      The investigation studies the impact of 2-oxoglutarate (2-OG) on the assembly of Methanosarcina mazei glutamine synthetase (M. mazei GS). Utilizing cutting-edge mass photometry, the authors scrutinized the population dynamics of GS assembly in response to varying concentrations of 2-OG. Notably, the findings demonstrate a promising and straightforward correlation, revealing that dodecamer formation can be stimulated by 2-OG concentrations of up to 10 mM, although GS assembly never reaches 100% dodecamerization in this study. Furthermore, catalytic activities showed a remarkable enhancement, escalating from 0.0 U/mg to 7.8 U/mg with increasing concentrations of 2-OG, peaking at 12.5 mM. However, an intriguing gap arises between the incomplete dodecameric formation observed at 10 mM 2-OG, as revealed by mass photometry, and the continued increase in activity from 5 mM to 10 mM 2-OG for M. mazei GS. This prompts questions regarding the inability of M. mazei GS to achieve complete dodecamer formation and the underlying factors that further enhance GS activity within this concentration range of 2-OG.

      Moreover, the cryo-electron microscopy (cryo-EM) analysis provides additional support for the biophysical and biochemical characterization, elucidating the precise localization of 2-OG at the interface of two GS subunits within two hexameric rings. The observed correlation between GS assembly facilitated by 2-OG and its catalytic activity is substantiated by structural reorientations at the GS-GS interface, confirming the previously reported phenomenon of "funnel activation" in GS. However, the authors did not present the cryo-EM structure of M. mazei GS in complex with ATP and glutamate in the presence of 2-OG, which could have shed light on the differences in glutamine biosynthesis between previously reported GS enzymes and the 2-OG-bound M. mazei GS.

      Furthermore, besides revealing the cryo-EM structure of 2-OG-bound GS, the study also observed the filamentous form of GS, suggesting that filament formation may be a universal stacking mechanism across archaeal and bacterial species. However, efforts to enhance resolution to investigate whether the stacked polymer is induced by 2-OG or other factors such as ions or metabolites were not undertaken by the authors, leaving room for further exploration into the mechanisms underlying filament formation in GS.

      We thank Reviewer #2 for the detailed assessment and valuable input. We will address those comments in the updated manuscript and clarify the message.

      (1) The discrepancy between dodecamer formation (max. at 5 mM 2-OG) and enzyme activity (max. at 12.5 mM 2-OG). We assume that there are two effects caused by 2-OG: 1. cooperativity of binding (less 2-OG needed to facilitate dodecamer formation) and 2. priming of each active site (see also Reviewer #1, point (1)). We assume this is the reason why the activity of dodecameric GlnA1 can be further enhanced by increased 2-OG concentrations until all catalytic sites are primed.

      (2) The lack of the structure of a 2-OG and ATP-bound GlnA1. Although we strongly agree that this would be a highly interesting structure, it seems out of the scope of a typical revision to request new cryo-EM structures. We evaluate the findings of our present study concerning the 2-OG effects as important insights into the strongly discussed field of glutamine synthetase regulation, even without the requested additional structures.

      (3) The observed GlnA1-filaments are an interesting finding. We certainly agree with the referee on that point, that the stacked polymers are potentially induced by 2-OG or ions. However, it is out of the main focus of this manuscript to further explore those filaments. Nevertheless, this observation could serve as an interesting starting point for future experiments.

      Reviewer #3 (Public Review):

      Summary:

      The current manuscript investigates the effect of 2-oxoglutarate and the Glk1 protein as modulators of the enzymatic reactivity of glutamine synthetase. To do this, the authors rely on mass photometry, specific activity measurements, and single-particle cryo-EM data.

      From the results obtained, the authors convey that glutamine synthetase from Methanosarcina mazei exists in a non-active monomeric/dimeric form under low concentrations of 2-oxoglutarate, and its oligomerization into a dodecameric complex is triggered by higher concentration of 2-oxoglutarate, also resulting in the enhancement of the enzyme activity.

      Strengths:

      Glutamine synthetase is a crucial enzyme in all domains of life. The dodecameric fold of GS is recurrent amongst prokaryotic and archaea organisms, while the enzyme activity can be regulated in distinct ways. This is a very interesting work combining protein biochemistry with structural biology.

      The role of 2-OG is here highlighted as a crucial effector for enzyme oligomerization and full reactivity.

      Weaknesses:

      Various opportunities to enhance the current state-of-the-art were missed. In particular, the omission of the ligand-bound state of GlnK1 leaves unexplained the lack of its interaction with GS (in contradiction with previous results from the authors). A finer dissection of the effect and role of 2-oxoglutarate is missing, and important questions remain unanswered (e.g. are dimers relevant during early stages of the interaction, or why do previous GS dodecameric structures not show 2-oxoglutarate).

      We thank Reviewer #3 for the expert evaluation and inspiring criticism.

      (1) Encouragement to examine ligand-bound states of GlnK1. We agree and plan to perform the suggested experiments exploring the conditions under which GlnA1 and GlnK1 might interact. We will perform the MP experiments in the presence of ATP. However, in the GlnA1 activity assays evaluating the effects of GlnK1 on GlnA1 activity, ATP was always present in high concentrations, and still we did not observe a significant effect of GlnK1 on GlnA1 activity.

      (2) The exact role of 2-OG could have been dissected much better. We agree on that point and will improve the clarity of the manuscript. See also Reviewer #1 R.1.

      (3) The lack of studies on dimers. This is actually an interesting point, which we did not consider while writing the manuscript. Re-analysing all our MP data in this respect, the smallest GlnA1 species is likely a dimer. Consequently, we will add more supplementary data supporting this observation and change the text accordingly.

      (4) Previous studies and structures did not show the 2-OG. We assume that for other structures no additional 2-OG was added, and the groups did not specifically analyse for this metabolite either. All methanoarchaea perform methanogenesis and contain the oxidative part of the TCA cycle exclusively for the generation of glutamate (anabolism), but not a closed TCA cycle; this enables them to use the internal 2-OG concentration as an internal signal for nitrogen availability. In the case of bacterial GS from organisms with a closed TCA cycle used for energy metabolism (oxidation of acetyl-CoA), such as E. coli, the formation of an active dodecameric GS follows another mechanism, independent of 2-OG. In the case of the recent M. mazei GS structures published by Schumacher et al. 2023, the dodecameric structure is probably a result of the heterologous expression and purification from E. coli (see also Reviewer #1, point (2)). One example of a methanoarchaeal glutamine synthetase that does in fact contain 2-OG in the structure is that of Müller et al. 2023.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      Specific issues:

      L 141: 2-OG levels increase due to slowing GOGAT reaction (due to Gln limitation as a consequence of N-starvation).... (2-OG also increases in bacteria that lack GDH...)

      As the GS-GOGAT cycle is the major route of ammonium assimilation, consumption of 2-OG by GDH is probably only relevant under high ammonium concentrations.

      In methanoarchaea, GS is strictly regulated and its expression strongly repressed under nitrogen sufficiency; thus, under N sufficiency glutamate for anabolism is mainly generated by GDH, consuming 2-OG delivered by the oxidative part of the TCA cycle (methanogenesis is the energy metabolism in methanoarchaea; a closed TCA cycle is not present). 2-OG therefore increases under nitrogen limitation, when no NH3 is available for GDH.

      L148: it is not clear what is meant by: "and due to the indirect GS activity assay"

      We apologize for not being clear here. The GS activity assay used is the classical assay by Shapiro & Stadtman 1970, a coupled optical test assay (coupling the ATP consumption of the GS reaction to the oxidation of NADH by lactate dehydrogenase). Because of this coupling, measurements of low activities show a high deviation. We have now added this information to the revised MS.

      L: 177: arguing about 2-OG affinities: more precisely, the 0.75 mM 2-OG is the EC50 concentration of 2-OG for triggering dodecameric formation; it might not directly reflect the total 2-OG affinity, since the affinity may be modulated by (anti)cooperative effects, or by additional sites... as there may be different 2-OG binding sites involved... (same in line 201)

      Thank you for the valuable input. We changed KD to EC50 within the entire manuscript. Concerning possible additional 2-OG binding sites: we did not see any other 2-OG in the cryo-EM structure aside from the described one and we therefore assume that the one described in the manuscript is the main and only one. Considering the high amounts of 2-OG (12.5 mM) used in the structure, it is quite unlikely that additional 2-OG sites exist since they would have unphysiologically low affinities.
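      As an illustration of the distinction being made here (this is our sketch, not part of the manuscript), an EC50 for dodecamer formation can be estimated by fitting a Hill-type saturation model to the dodecamer percentage as a function of 2-OG concentration. The Python example below uses invented concentrations and percentages chosen only to resemble the shape of such a titration:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ec50, n):
    """Dodecamer percentage at effector concentration c (Hill model)."""
    return top * c**n / (ec50**n + c**n)

# Hypothetical dodecamer-percentage readout vs. 2-OG (mM); illustrative
# values only, not data from this study.
conc = np.array([0.01, 0.1, 0.5, 1.0, 2.5, 5.0, 10.0])
pct = np.array([2.0, 8.0, 25.0, 35.0, 50.0, 60.0, 62.0])

# Bounded fit keeps all parameters positive and well behaved.
popt, _ = curve_fit(hill, conc, pct, p0=[60.0, 0.75, 1.0],
                    bounds=(0.0, [100.0, 20.0, 5.0]))
top, ec50, n = popt
print(f"EC50 ~ {ec50:.2f} mM, plateau ~ {top:.0f} %, Hill coefficient ~ {n:.1f}")
```

The EC50 obtained this way is, as the reviewer notes, an effective half-maximal concentration for the assembly readout, not a direct binding affinity.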

      In this respect, instead of the rather poor assay shown in Figure 1D, a more detailed determination of catalytic activation by different 2-OG concentrations should be done (similar to 1A)... This would allow a direct comparison between dodecamerization and enzymatic activation.

      We agree and performed the respective experiments, which are now presented in revised Fig. 1D

      Discussion: the role of 2-OG as a direct activator, comparison with other prokaryotic GS: in other cases, 2-OG affects GS indirectly by being sensed by PII proteins or other 2-OG sensing mechanisms (like 2OG-NtcA-mediated repression of IF factors in cyanobacteria)

      We agree and have added that information in the discussion as suggested.

      290. Unclear: As a second step of activation, the allosteric binding of 2-OG causes a series of conformational.... where is this site located? According to the catalytic effects (compare 1A and 1D) this site should have a lower affinity …

      Thank you very much for pointing this out. 2-OG binds at only one specific allosteric binding site. Binding, however, has two effects on GlnA1: dodecamer assembly and priming of the active site (with two specific EC50 values, which are now shown in Fig. 1A and D).

      See also public comment #1 (1).

      Reviewer #2 (Recommendations For The Authors):

      The primary concern for me is that mass photometry might lead to incorrect conclusions. The differences in the forms of GS seen in SEC and MP suggest that GS can indeed form a stable dodecamer when the concentration of GS is high enough, as shown in Figure S1B. I strongly suggest using an additional biophysical method to explore the connection between GS and 2-OG in terms of both assembly and activity, to truly understand 2-OG's role in the process of assembly and catalysis.

      We apologize if we did not present this clearly enough; however, the MP analysis of GlnA1 in the absence of 2-OG always showed (monomers/)dimers, and dodecamers were only present in the presence of 2-OG. The SEC analysis in Fig. S1B was performed in the presence of 12.5 mM 2-OG; we realized this information was missing from the figure legend and have now added it in the revised version. The 2-OG is in addition visible in the cryo-EM structure. Thus, we do not agree that additional biophysical methods need to be performed.

      As for the other experimental findings, they appear satisfactory to me, and I have no reservations regarding the cryoEM data.

      (1) Mass photometry is a fancy technique that uses only a tiny amount of protein to study how proteins come together. However, the concentration of the protein used in the experiment might be lower than what's needed for them to stick together properly. So, the authors saw a lot of single proteins or pairs instead of bigger groups. They showed in Figure S1B that the M. mazei GS came out earlier than a 440-kDa reference protein, indicating it's actually a dodecamer. But when they looked at the dodecamer fraction using mass photometry, they found smaller bits, suggesting the GS was breaking apart because the concentration used was too low. To fix this, they could try using a technique called analytical ultracentrifugation (AUC) with different amounts of 2-OG to see if they can spot single proteins or pairs when they use a bit more GS. They could also try another technique called SEC-MALS to do similar tests. If they do this, they could replace Figure 1A with new data showing fully formed GS dodecamers when they use the right amount of 2-OG.

      Thank you for this input. In MP we looked at dodecamer formation after removing the 2-OG entirely and re-adding it in the respective concentration. We think that GlnA1 is much more unstable in its monomeric/dimeric fraction and that the complete and harsh removal of 2-OG results in some dysfunctional protein which does not recover the dodecameric conformation after dialysis and re-addition of 2-OG. Looking at the dodecamer-peak right after SEC however, we exclusively see dodecamers, which is now included as an additional supplementary figure (suppl. Fig. 1C). Consequently, we did not perform additional experiments.

      (2) Building on the last point, the estimated binding strength (Kd) between 2-OG and GS might be lower than it really is, because the GS often breaks apart from its dodecameric form in this experiment, even though 2-OG helps keep the pairs together, as seen with cryoEM. What if they used 5-10 times more GS in the mass photometry experiment? Would the estimated bond strength stay the same? Could they use AUC or other techniques like ITC to find out the real, not just estimated, strength of the bond?

      We agree that the term KD is not suitable. We have changed the term KD to EC50 as suggested by reviewer #1, which describes the effective concentration required for 50 % dodecamer assembly. Furthermore, we disagree that the dodecamer breaks apart when the concentrations are as low as in the MP experiments. The actual reason for the disassembly is rather the harsh dialysis to remove all 2-OG before the MP experiments. Right after SEC, we exclusively see dodecamers in MP (suppl. Fig. S1C). See also #2 (1).

      (3) The fact that the GS hardly works without 2-OG is interesting. I tried to understand the experiment setup, but it wasn't clear as the protocol mentioned in the author's 2021 FEBS paper referred to an old paper from 1970. The "coupled optical test assay" they talked about wasn't explained well. I found other papers that used phosphometry assays to see how much ATP was used up. I suggest the authors give a better, more detailed explanation of their experiments in the methods section. Also, it's unclear why the GS activity keeps going up from 5 to 12.5 mM 2-OG, even though they said it's saturated. They suggested there might be another change happening from 5 to 12.5 mM 2-OG. If that's the case, they should try to get a cryo-EM picture of the GS with lots of 2-OG, both with and without ATP/glutamate (or the Met-Sox-P-ADP inhibitor), to see what's happening at a structural level during this change caused by 2-OG.

      We agree with the reviewer that the GS assay was not explained in detail (since it has been published and known for several years). We have now added a more detailed description of the assay to the revised MS. The assay also measures the ATP used up by GS, but couples the generation of ADP to an optical test assay: the generated ADP is used by the pyruvate kinase present in the assay to produce pyruvate from PEP, and this pyruvate is finally reduced to lactate by the lactate dehydrogenase present, consuming NADH, whose oxidation is monitored at 340 nm.
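      For readers unfamiliar with coupled optical assays, the A340 slope is converted to specific activity via the Beer-Lambert law, using the NADH extinction coefficient at 340 nm (about 6220 M⁻¹ cm⁻¹) and the 1:1 ADP:NADH coupling stoichiometry of the PK/LDH system. A minimal Python sketch; the path length, assay volume, enzyme amount, and slope below are illustrative assumptions, not numbers from the manuscript:

```python
# Convert an A340 slope from a PK/LDH-coupled GS assay into specific activity.
# Assumptions (illustrative, not from the manuscript): 1 cm path length and a
# 1:1 ADP:NADH stoichiometry, so 1 U = 1 umol ATP (= NADH) turned over per min.

EPSILON_NADH = 6220.0  # M^-1 cm^-1 for NADH at 340 nm
PATH_CM = 1.0          # cuvette path length in cm

def specific_activity(dA340_per_min, assay_volume_ml, enzyme_mg):
    """Specific activity in U/mg from the absorbance decrease per minute."""
    rate_M_per_min = dA340_per_min / (EPSILON_NADH * PATH_CM)  # mol/L/min
    umol_per_min = rate_M_per_min * 1e6 * (assay_volume_ml / 1000.0)
    return umol_per_min / enzyme_mg

# Hypothetical example: a 0.243 A/min decrease in a 1 mL assay with 0.005 mg GS
print(round(specific_activity(0.243, 1.0, 0.005), 2))  # prints 7.81
```

With these invented inputs the result lands on the same order of magnitude as the U/mg values discussed in the reviews, which is why they were chosen.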

      Regarding the still-increasing activity of GS after dodecamer formation is complete (max. at 5 mM 2-OG), with enzyme activity continuing to increase up to 12.5 mM 2-OG: see also the public reviews; we assume that there are two effects caused by 2-OG: 1. cooperativity of binding (less 2-OG needed to facilitate dodecamer formation) and 2. priming of each active site.

      The suggested additional experiments with and without ATP/Glutamate: Although we strongly agree that this would be a highly interesting structure, it seems out of the scope of a typical revision to request new cryo-EM structures. We evaluate the findings of our present study concerning the 2-OG effects as important insights into the strongly discussed field of glutamine synthetase regulation, even without the requested additional structures.

      (4) Please remake Figure S2, the panels are too small to read the words. At least I have difficulty doing so.

      We assume the reviewer is pointing to Suppl. Fig S3, we now changed this figure accordingly.

      Line 153, the reference Schumacher et al. 23, should be 2023?

      Yes, thank you. We corrected that.

      Line 497. I believe it's UCSF ChimeraX, not Chimera.

      We apologize and corrected accordingly.

      Reviewer #3 (Recommendations For The Authors):

      Recent studies on the Methanothermococcus thermolithotrophicus glutamine synthetase, published by Müller et al., 2024, have identified the binding site for 2-oxoglutarate as well as the conformational changes that were induced in the protein by its presence. In the present study, the authors confirm these observations and additionally establish a link between the presence of 2-oxoglutarate and the dodecameric fold and full activation of GS.

      Curiously, here, the authors could not confirm their own findings that the dodecameric GS can directly interact with the PII-like GlnK1 protein and the small peptide sP26. However, the lack of mention of the GlnK-bound state in these studies is very alarming since it certainly is highly relevant here.

      We agree with the reviewer that we have not observed the interaction with GlnK1 and sP26 in the present study. Consequently, we speculate that yet unknown cellular factor(s) might be required for an interaction of GlnA1 with GlnK1 and sP26; these were not present in the in vitro experiments using purified proteins, but they were present in the previous pull-down approaches (Ehlers et al. 2005, Gutt et al. 2021). Another reason might be post-translational modifications occurring in M. mazei, which might be important for the interaction and are likewise absent from purified proteins expressed in E. coli.

      The interest of the manuscript could have been substantially increased if the authors had performed finer biochemical and enzymatic analyses of the oligomerization process of GS, used GlnK1 bound to known effectors in their assays, and made more effort to extrapolate their findings (even if a small niche) to related glutamine synthetases.

      We thank the reviewer for their valuable encouragement to explore ligand-bound-states of GlnK1. However, in this manuscript we mainly focused on 2-OG as activator of GlnA1 and decided to dedicate future experiments to the exploration of conditions that possibly favor GlnK1-binding.

      In principle, we have explored the ATP bound GlnK1 effects on GlnA1 activity in the activity assays (Fig. 2E) since ATP (3.6 mM) is present. GlnK1 however showed no effects on GlnA1 activity.

      In general, the manuscript is poorly written, with grammatically incorrect sentences that at times stand in the way of conveying the message of the manuscript.

      Particular points:

      (1) It is mentioned that 2-OG induces the active oligomeric (dodecamer, 12-mer) state of GlnA1 without detectable intermediates. However, only 62 % of the starting inactive enzyme yields active 12-mers. Note that this is contradicted in line 212.

      Thanks for pointing out this discrepancy. After removing all 2-OG as we did before MP-experiments, GlnA1 doesn’t reach full dodecamers anymore when 2-OG is re-added. This is not because the 2-OG amount is not enough to trigger full assembly, but because the protein is much more unstable in the absence of 2-OG, so we predict that some GlnA1 breaks during dialysis. See also answer reviewer #2 (1) and supplementary figure S1C.

      Is there any protein precipitation upon the addition of 2-OG? Is all protein being detected in the assay, meaning, is monomer/dimer + dodecamer yields close to 100% of the total enzyme in the assay?

      There is no protein precipitation upon the addition of 2-OG; indeed, GlnA1 is much more stable in the presence of 2-OG. In the mass photometry experiments, all particles are measured, and precipitated protein would be visible as large entities in the MP.

      Please add to Figure 1 the amount of monomer/dimer during titration. Some debate why there is no full conversion should be tentatively provided.

      We agree with the reviewer and have included the amount of monomer/dimer in the figure, as well as some discussion of why conversion is not complete. GlnA1 is unstable without 2-OG, and it was dialysed against buffer without 2-OG before the MP measurements. This sample treatment resulted in incomplete re-assembly after re-adding 2-OG, although full dodecamers were present before dialysis (suppl. Fig. S1C).

      (2) Figure 1B reflects an exemplary result. Here, the addition of 0.1 mM 2-OG seems to promote monomer to dimer transition. Why was this not studied in further detail? It seems highly relevant to know from which species the dodecamer is assembled.

      We thank the reviewer for their comment. However, we would like to point out that, although not shown in the figure, the smallest entity of GlnA1 is always mainly the dimer. As suggested earlier, we have added the amount of monomers/dimers to Figure 1A, which shows low monomer counts at all 2-OG concentrations (Fig. 1A). Although the graph only starts at 0.01 mM 2-OG, we also see mainly dimers at 0 mM 2-OG.

      How does the y-axis compare to the number and percentage of counts assigned to the peaks? In line 713, it is written that the percentage of dodecamer considers the total number of counts, and this was plotted against the 2-OG concentration.

      We thank the reviewer for pointing out this ambiguity. Line 713 corresponds to Figure 1A, where we indeed plotted the percentage of dodecamer against the 2-OG concentration. Here, the percentage of dodecamer corresponds to the percentage calculated from the Gaussian fit of the MP dodecamer peak. In Figure 1B, however, the y-axis displays the relative number of counts per mass; multiple similar masses then add up to the percentage of the respective peak (Gaussian fit over similar masses).

      (3) Lines 714 and 721 (and elsewhere): Why is only partial data used for statistical purposes?

      In general, we only show one exemplary biological replicate, since the quality of the respective GlnA1 purifications varied (maximum activity ranging from 5 - 10 U/mg). Therefore, we only compared activities within the same protein purification. For the EC50 calculations of all measurements, we refer to the supplement.
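      As background on how an EC50 can be extracted from a titration like the one described, the sketch below fits a Hill curve to synthetic dodecamer-percentage data. This is a minimal illustration only: the concentrations, Hill coefficient, and brute-force fitting procedure are assumptions, not the authors' actual analysis.

```python
# Minimal sketch: estimating an EC50 from a 2-OG titration curve.
# All numbers here are synthetic; the authors' actual fitting method
# is not described in this response.

def hill(c, top, ec50, n):
    """Hill equation: response (e.g. % dodecamer) at ligand concentration c."""
    return top * c ** n / (ec50 ** n + c ** n)

def fit_ec50(concs, responses, top=100.0):
    """Brute-force least-squares search for EC50 (mM) and Hill coefficient n."""
    best = None
    for i in range(1, 501):            # EC50 grid: 0.01 .. 5.00 mM
        ec50 = i * 0.01
        for j in range(5, 41):         # n grid: 0.5 .. 4.0
            n = j * 0.1
            sse = sum((hill(c, top, ec50, n) - r) ** 2
                      for c, r in zip(concs, responses))
            if best is None or sse < best[0]:
                best = (sse, ec50, n)
    return best[1], best[2]

# Synthetic titration generated with EC50 = 0.5 mM and n = 2:
concs = [0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 4.0]
responses = [hill(c, 100.0, 0.5, 2.0) for c in concs]
ec50, n = fit_ec50(concs, responses)   # recovers roughly 0.5 mM and n ≈ 2
```

      In practice a dedicated nonlinear least-squares routine would replace the grid search; the sketch only illustrates how a reported EC50 relates to the titration data.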

      (4) Lines 192-193: It is claimed that GlnK1 was previously shown to both regulate the activity of GlnA1 and form a complex with GlnA1. Please mention the ratio between GlnK1 and GlnA1 in this complex.

      We now included the requested information (GlnA1:GlnK1 = 1:1, Ehlers et al. 2005; His6-GlnA1 (0.95 μM) : His6-GlnK1 (0.65 μM) = 2:1.4, Gutt et al. 2021).

      It is also known that PII proteins such as GlnK1 can bind ADP, ATP, and 2-OG. Interestingly, however, for various described PII proteins, 2-OG can only bind after the binding of ATP.

      So, the crucial question here is what is the binding state of GlnK1? 

      Were these assays performed in the absence of ATP? This is key to fully understand and connect the results to the previous observations. For example, if the GlnK1 used was bound to ADP but not to ATP, then the added 2-OG might indeed only be able to affect GlnA1 (leading to its activation/oligomerization). If this were true and according to the data reported, ADP would prevent GlnK1 from interacting with any oligomeric form of GlnA1. However, if GlnK1 bound to ATP is the form that interacts with GlnA1 (potentially validating previous results?) then, 2-OG would first bind to GlnK1 (assuming a higher affinity of 2-OG to GlnK1), eventually causing its release from GlnA1 followed by binding and activation of GlnA1.

      These experiments need to be done as they are essential to further understand the process. Given the ability of the authors to produce the protein and run such assays, it is unclear why they were not done here. As written in line 203, in this case, "under the conditions tested" is not a good enough statement, considering what is known in the field and how many more conclusions could easily be taken from such a setup.

      Thanks for the encouragement to investigate the ligand-bound states of GlnK1. We agree and plan to perform the suggested mass photometry experiments exploring the conditions under which GlnA1 and GlnK1 might interact in future work. In the GlnA1 activity assays evaluating the effects of GlnK1, however, ATP was always present in high concentrations, and still we did not observe a significant effect of GlnK1 on GlnA1 activity.

      (5) The Figure 2D legend claims that the graphic shows the percentage of dodecameric GlnA1 as a function of the concentration of 2-OG. This is not what the figure shows; Figure 2D shows the dodecamer/dimer ratio (although the legend claims monomer was used, in line 732) as a function of 2-OG (stated in line 736!). If this is true, a ratio of 1 means 50 % of dodecamers and dimers co-exist. This appears to be the case when GlnK1 was added, while in the absence of GlnK1 higher ratios are shown for higher 2-OG concentrations, implying that about three times more dodecamers were formed than dimers. However, wouldn't a 50 % ratio be physiologically significant?

      We apologize for the partially incorrect and misleading figure legend and have corrected it. Indeed, the ratio of dodecamers to dimers is shown. Furthermore, we did not use monomeric GlnA1 (the smallest entity is mainly a dimer, see Fig. 1A); however, the molarity was calculated based on the monomer mass. Concerning the significance of the difference between the maximum ratios with and without GlnK1: the ratio does appear higher, but this is mostly because adding large quantities of GlnK1 broadens all peaks at low molecular weight. This happens because the GlnK1 signal starts overlapping with the signal from GlnA1, leading to inflated GlnA1 dimer counts. We therefore do not think that this difference is biologically significant, especially as the activities do not differ under these conditions.
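      For readers comparing the reviewer's percentages with the plotted ratios, the conversion between a dodecamer/dimer ratio and species percentages can be sketched as follows (with the simplifying assumption that only these two species are counted):

```python
# Sketch: converting a dodecamer/dimer count ratio into percentages.
# Simplifying assumption: only dimers and dodecamers are counted.

def ratio_to_percent(ratio):
    """Return (% dodecamer, % dimer) for a given dodecamer/dimer ratio."""
    pct_dodecamer = 100.0 * ratio / (1.0 + ratio)
    return pct_dodecamer, 100.0 - pct_dodecamer

# A ratio of 1 corresponds to a 50/50 mixture; a ratio of 3 to 75 % dodecamer.
print(ratio_to_percent(1.0))  # (50.0, 50.0)
print(ratio_to_percent(3.0))  # (75.0, 25.0)
```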

      (6) Is it possible that the uncleaved GlnA1 tag is preventing interaction with GlnK1? This should be discussed.

      This is of course a very important point. However, we realized that Schumacher et al. also used an N-terminal His-tag, so we assume that the N-terminal tag does not hamper the interaction.

      (7) Line 228: Please detail the reported discrepancies in rmsd between the current protein and the gram-negative enzymes.

      The differences in rmsd between our M. mazei GlnA1 structure and the structures of gram-negative enzymes are caused by a) sequence similarity: e.g., M. mazei GlnA1 and B. subtilis GlnA share a sequence identity of 58.47 %; b) ligands in the structure: the B. subtilis structure contains L-methionine-S-sulfoximine phosphate, a transition-state inhibitor, while the M. mazei structure contains 2-OG; c) methodology: the structure-determination methods also contribute to these differences. B. subtilis GlnA was determined using X-ray crystallography, while the M. mazei GlnA1 structure was resolved using cryo-EM, where the protein behaves differently in ice compared to a crystal.

      (8) Line 747: The figure title claims "dimeric interface" although the manuscript body only refers to "hexameric interface" or "inter-hexamer interface" (line 224). Moreover, the figure 4 legend uses terms such as vertical and horizontal dimers and this too should be uniformized within the manuscript.

      Thank you for your valuable feedback. We have updated the figure title and the figure legend, as well as the main text, to ensure consistency in the description.

      (9) Line 752: The description of the color scheme used here is somehow unclear.

      Thanks for pointing this out. We changed the description to make it more comprehensive.

      (10) Please label H14/15 and H14´/H15´in Fig 4C zoom.

      We agree that this has not been very clear. We added helix labels.

      (11) In Figure 4D legend, make sure to note that the binding sites for the substrate are based on homologies with another enzyme poised with these molecules.

      The same should be clear in the text: sites are not known, they are assumed to be, based on homologies (paragraph starting at line 239).

      Concerning this comment, we want to point out that we studied the exact same enzyme as the Schumacher group, except that we used 2-OG in our experiments, which they did not.

      (12) Figure 3 appears redundant in light of Figure 4. 

      (13) Line 235: When mentioning F24, please refer to Figure 5.

      Thank you, we changed that accordingly.

      (14) Please provide the distances for the bonds depicted in Figure 4B.

      Thanks for pointing this out; we added distance labels to Figure 4B. For reasons of clarity, we labeled only three H-bonds.

      (15) Line 241: D57 is likely serving to abstract a proton from ammonium, what is residue Glu307 potentially doing? The information seems missing in light of how the sentence is built.

      Thanks for pointing this out. According to previous studies, both residues are likely involved in proton abstraction - first from ammonium, and then from the formed gamma-ammonium group. Additionally, they contribute to shielding the active site from bulk solvent to prevent hydrolysis of the formed phospho-glutamate.

      (16) Why do the authors assume that increased concentrations of 2-OG are a signal for N starvation only in M. mazei and not in all prokaryotic equivalent systems (line 288)?

      In line 288, we did not claim that this is a signal unique to M. mazei. It is also the central N-starvation signal in Cyanobacteria, but there it is not perceived through direct binding to GS.

      The authors should look into the residues that bind 2-OG and check if they are conserved in other GS. The results of this sequence analysis should be discussed in line with the variable prokaryotic glutamine synthetase types of activity modulation that were exposed in the introduction and Figure 7.

      Please refer to supplementary figure S5, where we have already aligned the mentioned glutamine synthetase sequences. Since this was already discussed in Müller et al. 2024, we did not want to repeat their observations and therefore refer to our supplementary figure only briefly.

      (17) Figure 5 title: Replace TS by transition state structures of homology enzymes, or alike.

      Thank you for this suggestion. However, we did not change the title, since it is not a homologue but the exact same glutamine synthetase from Methanosarcina mazei.

      (18) Line 249: D170 is not shown in Figure 5A or elsewhere in Figure 5.

      Thank you for pointing this out. We added D170 to figure 5A.

      (19) Representative density for the residues binding 2-OG should be provided, maybe in a supplemental figure.

      Thank you for the suggestion. We added the densities of the 2-OG-binding residues to Figure 4B.

      (20) Line 260: Please add a reference when describing the phosphoryl transfer.

      We thank the reviewer for this important point and added that accordingly.

      (21) Line 296: The binding of 2-OG indeed appears to be cooperative, such that at concentrations above its binding affinity to the protein, only dodecamers are seen (under experimental conditions). However, claiming that the oligomerization is fast is not correct when the experimental setup includes 10 minutes of incubation before measurements are done. Please correct this within the entire manuscript.

      A (fast) continuous kinetic assay could have confirmed this point and revealed the oligomerization steps and the intermediaries in the process (maybe monomer/dimers, then dimers/hexamers, and then hexamers/dodecamers). Such assays would have been highly valuable to this study.

      We thank the reviewer for this suggestion, but disagree. It is indeed a rather fast regulation, as activity assays without pre-incubation take only 1 min longer to reach full activity (see the newly included suppl. Fig. S6). Compared with other regulatory mechanisms, e.g. transcriptional or translational regulation, an activation that takes only 60 s is actually quite quick.

      (22) Line 305 (and elsewhere in the manuscript): the authors state that 2-OG primes the active site for a transition state. This appears incorrect. The transition state is the highest energy state in an enzymatic reaction progressing from substrate to product. Meaning, the transition state is a state that has a more or less modified form of the original substrate bound to the active site. This is not the case.

      In line 366 an "active open state" appears much more adequate to use. 

      We agree and changed accordingly throughout the manuscript.

      (23) Line 330: Please delete "found". Eventually replace it with "confirmed": As the authors write, others have described this residue as a ligand to glutamine.

      Thanks, we changed that accordingly, although previous descriptions were based only on homologies, without experimental validation.

      (24) The discussion at various points summarizes the results again. It should be trimmed and improved.

      (25) Line 381: replace "two fast" with "fast"?

      We thank the reviewer for this suggestion, but disagree on this point. We specifically wanted to highlight that there are two central nitrogen metabolites involved in the direct regulation of GlnA1, that is, TWO fast direct processes mediated by 2-OG and glutamine.

    1. Here is a timestamped summary of the key ideas from the interview with Éric Debarbieux:

      • 0:00-1:13 Introduction: Éric Debarbieux, a specialist in school climate, has published "Zéro pointé ? Une histoire politique de la violence à l'école". The book draws a mixed assessment of the policies pursued to prevent school violence and voices concern about schools' growing difficulty in managing behavioral disorders.
      • 1:14-2:41 Interest in the question of school violence: Debarbieux explains his interest in school violence by his experience as a practitioner, working as a specialized educator and specialized schoolteacher with young people in difficulty. He wanted to understand violence rather than be overwhelmed by it.
      • 2:42-3:41 This book is not about "how to do it" but "how it is done politically": Debarbieux clarifies that his book does not focus on pedagogical solutions but rather offers a political analysis of school violence. He still works in the field, but wishes to make room for young researchers.
      • 3:42-6:25 Evolution of the view of school violence: Historically, school violence was not a political issue and nobody wanted to see it. Debarbieux traces the evolution of how school violence has been viewed, starting with the phenomenon of classroom uproar ("chahut") in the 1960s, an accepted, ritualized form of violence against teachers.
      • 6:26-7:42 Democratization of the lycée and new student populations: The arrival of new populations in the lycées, following a political drive for democratization, led to a contestation of order and a loss of meaning for some students. This violence became anti-school violence.
      • 7:43-9:17 Massification, social exclusion, and ghettoization: The massification of schooling, social exclusion, and problems of urban exclusion also had an impact on school violence. Initially, this violence was perceived as coming solely from outside, which was a mistake.
      • 9:18-10:21 Violence also comes from within: A clear break occurred with the high-school student demonstrations of the early 1990s against violence in schools and for more security. This marked the beginning of public policies and anti-violence plans.
      • 10:22-11:01 Public authorities take an interest: Public authorities began to address school violence because of news stories and media pressure.
      • 11:02-12:55 Measuring violence: One of the great scientific battles of the 1990s was measuring school violence, which required defining it. A first call for proposals was launched to better understand the phenomenon. Debarbieux and his team surveyed 14,000 students without initial funding.
      • 12:56-13:41 Field experience: Debarbieux's experience came from the field, and he has remained in constant contact with it, which allowed him to ask new questions and to link research with practice.
      • 13:42-16:21 Being a mediator within teams: Faced with teachers who did not want to hear about cooperative pedagogy or difficult students, one had to be able to act as a mediator within teams. The question of violence is not only about student violence but also about team conflicts.
      • 16:22-17:07 Victimization and school-climate surveys: Debarbieux and his colleague Yves Montoya created a victimization and school-climate survey method to collect the views of all students. The goal was to report the surveys back to the field and to reflect with staff on what could be done.
      • 17:08-18:39 Violence in schools: School violence is often presented as a problem of student behavior or of family problems, but rarely as a problem of relations between adults. Yet the primary risk factor for school violence is the instability of educational teams and the quality of those teams, which is linked to internal conflict.
      • 18:40-19:52 Instability of educational teams: Denise Gottfredson showed that the primary risk factor for school violence is the instability of educational teams. Teams that tear themselves apart cannot deal with violence problems, which leads to a retreat into the classroom and to incivility.
      • 19:53-21:06 Violence between adults: Research shows the importance of violence between adults. A survey conducted by Debarbieux in Seine-Saint-Denis quantified the link between team conflicts and the aggressions suffered by students.
      • 21:07-22:20 Worsening conflicts: Conflicts between school leadership and teachers are worsening, with an increasing number of staff reporting harassment. Teachers complain of being harassed by the hierarchy, and school leaders, by teachers.
      • 22:21-23:05 Distrust of the hierarchy: This situation also reveals a distrust of the upper hierarchy and of the political leadership at the head of the Ministry of National Education. A large majority of staff do not feel supported, or even feel despised, by the upper hierarchy.
      • 23:06-24:28 School climate and internal conflicts: School climate, which includes how well teams get along, is a protective factor against violence. Conflicts within the administration and ministerial cabinets have a direct impact on public policy. Behind this conflict lies a broader societal conflict.
      • 24:29-26:13 Bullying at school: School bullying is a group phenomenon in which individuals gang up on someone else, often motivated by racism, xenophobia, or transphobia. Hate speech in society has repercussions in schoolyards.
      • 26:14-27:25 Difficulty managing children in difficulty: The institution finds it increasingly hard to manage children with severe behavioral difficulties.
      • 27:26-28:22 Stability of victimization surveys: Victimization surveys show stability, or even a slight recent worsening, of school violence. One worrying phenomenon is primary schools' difficulty with children with behavioral disorders, in connection with inclusive schooling.
      • 28:23-29:22 Increase in problems with children with disorders: The number of teachers reporting frequent problems with children with behavioral disorders has risen from 40% to more than 70%.
      • 29:23-30:47 Teachers' fears: Teachers are voicing a cry of despair and asking for help. In 2023, some of them wished for these children to be placed in specialized centers. Debarbieux stresses that this will not happen, for economic reasons, and that inclusive schooling alone is not enough.
      • 30:48-32:36 Teachers' despair: The despair of an incredible number of primary-school teachers raises fears of a real danger to maintaining educational provision at the primary level. There is a disaffection with the teaching profession, notably because of the difficulty of managing difficult children.
      • 32:37-33:12 Feeling of powerlessness: For Debarbieux, this disaffection is linked to teachers' discouragement and feeling of powerlessness. A majority of them feel they have not been sufficiently trained.
      • 33:13-34:09 Continuing education: It is important to offer high-quality continuing education, delivered by people who know the field and can move beyond theoretical discourse.
      • 34:10-35:27 Disinterest in scientific questions: Debarbieux observes a disinterest in scientific questions in political circles and a preference for short-termism. He qualifies this by noting that he has often been called to the rescue, but that interest in science often comes late.
      • 35:28-36:24 Claude Allègre: Claude Allègre was the first to take a genuine interest in the scientific point of view, but his clumsy communication undermined his efforts.
      • 36:25-37:22 Communication wins out: Communication often wins out over science, especially since the advent of Web 2.0 and the expectation of an immediate response. Luc Chatel, for example, favored short-termism and a hard-line response.
      • 37:23-38:02 Attempting to inform public policy through science: Luc Chatel then tried to inform public policy through science, with the États généraux on school security and the Assises against bullying. This policy was continued by the left once in power.
      • 38:03-39:02 Blanquer's arrival: Blanquer's arrival ended this continuity and imposed a different agenda, notably dismantling the ministerial delegation headed by Debarbieux.
      • 39:03-40:00 Ministerial instability: Ministerial instability and each minister's desire to leave their mark wear out people in the field and harm public action. Even those who try to do something become prisoners of this climate of rejection and authoritarianism. Gabriel Attal, for example, started with plans on empathy and ended with a law against youth.
      • 40:01-41:13 The "bullying moment" in politics: The "bullying moment" in politics marked a turning point, when it was understood that violence is not necessarily external to schools and that it must be prevented. There is now an interest in victims that was barely felt before.
      • 41:14-42:27 Vision of school violence: Until 2010-2011, school violence was seen as coming from outside, calling for protection by reinforcing security and partnerships with the police and the justice system.
      • 42:28-43:12 The core of school violence: The core of school violence is not intrusions, but banal, ordinary violence which, when it accumulates, has deleterious effects on victims, witnesses, and aggressors.
      • 43:13-44:40 Survey for UNICEF: A survey conducted by Debarbieux for UNICEF revealed that about 10% of students are repeat victims of bullying. These figures were widely reported in the media and led to the organization of the national Assises against bullying.
      • 44:41-46:17 Turning point: This moment was also the revelation of a phenomenon where everyone said "but of course", in connection with workplace harassment and the #MeToo movement. Micro-violence that used to be considered banal is no longer tolerated. This is an interesting societal evolution, but a dramatic backlash can be observed.
      • 46:18-47:06 Ineffective measures: Some political measures that have been taken are ineffective, such as the "sanctuarization" of schools or grouping difficult children together.
      • 47:07-48:01 Military discipline: Military discipline, proposed by various politicians, has been tried and evaluated, and has proved ineffective and costly. The military themselves acknowledge that they do not know how to do it.
      • 48:02-50:04 Boot camps: "Boot camps" in the United States have likewise proved ineffective. Grouping difficult individuals together, through internal or external exclusion, increases their capacity to form gangs. This is a basic principle of criminology.
      • 50:05-51:13 Boarding schools: Debarbieux is not against the idea of boarding school, but it must be chosen and must not become a punishment. Likewise, vocational education must not become a punishment for weak students.
      • 51:14-53:07 Imported programs: Directly transposing anti-bullying or anti-violence programs from other countries, notably from Northern Europe, is also ineffective. There is no miracle program; contexts and the way teams take ownership of programs must be taken into account.
      • 53:08-54:27 Putting the responsibility on the bullied: Placing all the responsibility for bullying on the bullied person is a perverse effect. Adults must be present and help students help themselves.
      • 54:28-55:11 Indirect prevention: Indirect prevention, based on trivial things that nonetheless show attention to students and their bodily well-being, can be more effective. The example of school toilets is often cited.
      • 55:12-56:17 A political question: In any case, the political question must come first. Debarbieux disagrees with the idea that violence stems from the growing "savagery" of youth. It is not a matter of being lax, but of not viewing the situation solely through the prism of police repression.
      • 56:18-57:12 Police unionism: Debarbieux is concerned about the evolution of police unionism, which tends to reduce the police officer's role to "small head, big stick".
      • 57:13-58:30 What works: Action must be taken on the ground through a global school-climate approach that attends to team well-being, communication, leadership quality, and a coherent disciplinary system applied by everyone.
      • 58:31-59:06 School climate: School climate must not be confined within the school; it must take into account the outside environment, parents, and the neighborhood. One must ask whether we want a school "of" the neighborhood or a school "in" the neighborhood.
      • 59:07-1:00:05 Possible actions at the ministry: At the ministry level, action is possible provided circularity and theoretical discourse are avoided, and maximum support is given to long-term training.
      • 1:00:06-1:00:13 Staying combative: Despite the difficulties, one must remain combative and keep fighting.
    2. Éric Debarbieux, a researcher in education sciences, addresses the question of school violence and its evolution. Here are the key points of his analysis:

      • Evolution of the perception of school violence: Historically, school violence was not considered an important political or social issue. It was sometimes accepted or ritualized, like classroom uproar ("chahut"). The evolution was marked by the arrival of new populations in the lycées, school massification, mass unemployment, and social exclusion. Violence was initially perceived as coming from outside the school, but a break occurred in the 1990s with high-school student demonstrations against violence, leading to public policies and prevention plans.

      • Violence between adults: Debarbieux emphasizes that school violence is not only linked to student behavior but also to relations between adults. The instability of educational teams and conflict between adults are major risk factors. Studies show a link between conflicts between adults and the aggressions suffered by students. Conflicts between school leadership and teachers are worsening, with feelings of harassment on both sides. Distrust of the hierarchy and of the political leadership at the Ministry of National Education complicates the situation.

      • Difficulties with behavioral disorders: Primary schools face growing difficulties with children with behavioral disorders, creating a feeling of despair among teachers. Inclusive schooling is one factor, but disability must not be conflated with violence. The lack of adequate training and support for teachers contributes to the problem.

      • Ineffectiveness of certain political measures: Debarbieux criticizes political measures such as military discipline, "boot camps", and grouping difficult students together. He questions the direct transposition of anti-bullying programs from other countries, stressing the importance of adapting them to local contexts. He insists on the need not to relieve adults of responsibility and on taking the political and societal dimension of violence into account.

      • Avenues for solutions: Debarbieux highlights the importance of the school-climate approach, which takes into account the well-being of the educational team, communication, leadership quality, and a coherent disciplinary system. He also stresses the need to involve parents and the community.

    1. Reviewer #1 (Public review):

      Wang et al. recorded concurrent EEG-fMRI in 107 participants during nocturnal NREM sleep to investigate brain activity and connectivity related to slow oscillations (SOs), sleep spindles, and in particular their co-occurrence. The authors found SO-spindle coupling to be correlated with increased thalamic and hippocampal activity, and with increased functional connectivity from the hippocampus to the thalamus and from the thalamus to the neocortex, especially the medial prefrontal cortex (mPFC). They concluded that the brain-wide activation pattern resembles episodic memory processing but is dissociated from task-related processing, and they suggest that the thalamus plays a crucial role in coordinating the hippocampal-cortical dialogue during sleep.

      The paper offers an impressively large and highly valuable dataset that provides the opportunity to gain important new insights into the network substrate involved in SOs, spindles, and their coupling. However, the paper unfortunately does not exploit the full potential of this dataset with the analyses currently provided, and the interpretation of the results is often not backed up by the results presented.

      I have the following specific comments.

      (1) The introduction lacks a sufficient review of the existing literature on EEG-fMRI during sleep and the BOLD correlates of slow oscillations and spindles in particular (Laufs et al., 2007; Schabus et al., 2007; Horovitz et al., 2008; Laufs, 2008; Czisch et al., 2009; Picchioni et al., 2010; Spoormaker et al., 2010; Caporro et al., 2011; Bergmann et al., 2012; Hale et al., 2016; Fogel et al., 2017; Moehlman et al., 2018; Ilhan-Bayrakci et al., 2022). The few studies that are mentioned are not discussed in terms of the methods used or insights gained.

      (2) The paper falls short in discussing the specific insights gained into the neurobiological substrate of the investigated slow oscillations, spindles, and their interactions. The validity of the inverse-inference approach ("Open ended cognitive state decoding"), which assumes certain cognitive functions to be related to these oscillations because of the brain regions/networks activated in temporal association with these events, is debatable at best. It is also unclear why only episodic-memory-like brain-wide activation is eventually discussed further, even though 16 of the 50 feature terms from the NeuroSynth v3 dataset were significant (episodic memory, declarative memory, working memory, task representation, language, learning, faces, visuospatial processing, category recognition, cognitive control, reading, cued attention, inhibition, and action).

      (3) Hippocampal activation during SO-spindles is stated as a main hypothesis of the paper - for good reasons - however, other regions (e.g., several cortical as well as thalamic) would be equally expected given the known origin of both oscillations and the existing sleep-EEG-fMRI literature. However, this focus on the hippocampus contrasts with the focus on investigating the key role of the thalamus instead in the Results section.

      (4) The study included an impressive number of 107 subjects. It is surprising, though, that only 31 subjects had to be excluded under these difficult recording conditions, especially since no adaptation night was performed. Since only subjects who slept less than 10 min (or had excessive head movements) were excluded, there are likely several included datasets with comparably short durations and only a small number of SOs and spindles, and even fewer combined SO-spindle events. A comprehensive table should be provided (supplement) listing, for each subject (included and excluded), the duration of included NREM sleep and the numbers of SOs, spindles, and SO-spindle events. Some descriptive statistics (mean/SD/range) would also be helpful.

      (5) Was the 20-channel head coil dedicated to EEG-fMRI measurements? How were the electrode cables guided through/out of the head coil? Usually, the 64-channel head coil is used for EEG-fMRI measurements in a Siemens PRISMA 3T scanner, as it has a cable duct at the back that allows the cables to be guided straight out of the head coil (to minimize MR-related artifacts). The choice of the 20-channel head coil should be motivated. Photos of the recording setup would also be helpful.

      (6) Was the EEG sampling synchronized to the MR scanner (gradient system) clock (the 10 MHz signal; not referring to the volume TTL triggers here)? This is a requirement for stable gradient artifact shape over time and thus accurate gradient noise removal.

      (7) The TR is quite long and the voxel size is quite large in comparison to state-of-the-art EPI sequences. What was the rationale behind choosing a sequence with relatively low temporal and spatial resolution?

      (8) The anatomically defined ROIs are quite large. The manuscript should elaborate on how this might reduce sensitivity to sleep rhythm-specific activity within sub-regions, especially for the thalamus, which has distinct nuclei involved in sleep functions.

      (9) The study reports SO and spindle amplitudes and densities, as well as SO-spindle coupling, to be larger during N2/3 sleep than during N1 and REM sleep, which is trivial but can serve as a sanity check of the data. However, the number of SOs and spindles reported for N1 and REM sleep is concerning, as by definition there should be hardly any (if SOs or spindles occur in N1, it becomes by definition N2, and the interval between spindles has to be considerably long in REM for them still to be scored as such). Thus, on the one hand, the report of these comparisons takes up too much space in the main manuscript, as it is trivial; on the other hand, it raises concerns about the validity of the scoring.

      (10) Why was electrode F3 used to quantify the occurrence of SOs and spindles? Why not a midline frontal electrode like Fz (or a number of frontal electrodes for SOs) and Cz (or a number of centroparietal electrodes) for spindles to be closer to their maximum topography?

      (11) Functional connectivity (hippocampus -> thalamus -> cortex (mPFC)) is reported to be increased during SO-spindle coupling and interpreted as evidence for coordination of hippocampo-neocortical communication likely by thalamic spindles. However, functional connectivity was only analysed during coupled SO+spindle events, not during isolated SOs or isolated spindles. Without the direct comparison of the connectivity patterns between these three events, it remains unclear whether this is specific for coupled SO+spindle events or rather associated with one or both of the other isolated events. The PPIs need to be conducted for those isolated events as well and compared statistically to the coupled events.

      (12) The limited temporal resolution of fMRI indeed does not allow fMRI activation patterns related to SO up-states vs. SO down-states to be easily distinguished. For this, one could try to extract the amplitudes of the SO up- and down-states separately for each SO event and model them as two separate parametric modulators (with the risk of collinearity, as they are likely correlated).
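      The two-parametric-modulator approach suggested here can be sketched as follows. This is a minimal, self-contained illustration with synthetic event amplitudes and an assumed canonical double-gamma HRF, TR, and run length (none of these come from the paper under review); the final correlation coefficient makes the mentioned collinearity risk explicit before both regressors would enter a GLM:

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0        # assumed repetition time (s)
N_SCANS = 300   # assumed run length in volumes

def hrf(tr, duration=32.0):
    """Canonical double-gamma HRF sampled at the TR (SPM-style shape)."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def modulated_regressor(onsets, amplitudes, tr, n_scans):
    """Event sticks scaled by mean-centred per-event amplitudes, convolved with the HRF."""
    sticks = np.zeros(n_scans)
    amps = np.asarray(amplitudes, dtype=float)
    amps -= amps.mean()  # centre so the modulator separates from the unmodulated event regressor
    for onset, a in zip(onsets, amps):
        sticks[int(round(onset / tr))] += a
    return np.convolve(sticks, hrf(tr))[:n_scans]

# Synthetic SO events with correlated up- and down-state amplitudes (illustration only)
rng = np.random.default_rng(0)
n_events = 40
onsets = np.sort(rng.uniform(0, (N_SCANS - 20) * TR, n_events))
down_amp = rng.normal(-80.0, 10.0, n_events)               # trough amplitude per event (uV)
up_amp = -0.6 * down_amp + rng.normal(0.0, 8.0, n_events)  # peak amplitude, correlated with trough

x_down = modulated_regressor(onsets, down_amp, TR, N_SCANS)
x_up = modulated_regressor(onsets, up_amp, TR, N_SCANS)

# Quantify the collinearity risk before including both modulators in the GLM
r = np.corrcoef(x_down, x_up)[0, 1]
print(f"correlation between the two modulator regressors: r = {r:.2f}")
```

      If |r| is high, the two modulators' parameter estimates become unstable and should be interpreted with care, which is exactly the caveat raised above.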

      (13) L327: "It is likely that our findings of diminished DMN activity reflect brain activity during the SO DOWN-state, as this state consistently shows higher amplitude compared to the UP-state within subjects, which is why we modelled the SO trough as its onset in the fMRI analysis." This conclusion is not justified as the fact that SO down-states are larger in amplitude does not mean their impact on the BOLD response is larger.

      (14) Line 77: "In the current study, while directly capturing hippocampal ripples with scalp EEG or fMRI is difficult, we expect to observe hippocampal activation in fMRI whenever SOs-spindles coupling is detected by EEG, if SOs- spindles-ripples triple coupling occurs during human NREM sleep". Not all SO-spindle events are associated with ripples (Staresina et al., 2015), but hippocampal activation may also be expected based on the occurrence of spindles alone (Bergmann et al., 2012).

      References:

      Bergmann TO, Molle M, Diedrichs J, Born J, Siebner HR (2012) Sleep spindle-related reactivation of category-specific cortical regions after learning face-scene associations. Neuroimage 59:2733-2742.
      Caporro M, Haneef Z, Yeh HJ, Lenartowicz A, Buttinelli C, Parvizi J, Stern JM (2011) Functional MRI of sleep spindles and K-complexes. Clin Neurophysiol.
      Czisch M, Wehrle R, Stiegler A, Peters H, Andrade K, Holsboer F, Samann PG (2009) Acoustic oddball during NREM sleep: a combined EEG/fMRI study. PLoS One 4:e6749.
      Fogel S, Albouy G, King BR, Lungu O, Vien C, Bore A, Pinsard B, Benali H, Carrier J, Doyon J (2017) Reactivation or transformation? Motor memory consolidation associated with cerebral activation time-locked to sleep spindles. PLoS One 12:e0174755.
      Hale JR, White TP, Mayhew SD, Wilson RS, Rollings DT, Khalsa S, Arvanitis TN, Bagshaw AP (2016) Altered thalamocortical and intra-thalamic functional connectivity during light sleep compared with wake. Neuroimage 125:657-667.
      Horovitz SG, Fukunaga M, de Zwart JA, van Gelderen P, Fulton SC, Balkin TJ, Duyn JH (2008) Low frequency BOLD fluctuations during resting wakefulness and light sleep: a simultaneous EEG-fMRI study. Hum Brain Mapp 29:671-682.
      Ilhan-Bayrakci M, Cabral-Calderin Y, Bergmann TO, Tuscher O, Stroh A (2022) Individual slow wave events give rise to macroscopic fMRI signatures and drive the strength of the BOLD signal in human resting-state EEG-fMRI recordings. Cereb Cortex 32:4782-4796.
      Laufs H (2008) Endogenous brain oscillations and related networks detected by surface EEG-combined fMRI. Hum Brain Mapp 29:762-769.
      Laufs H, Walker MC, Lund TE (2007) 'Brain activation and hypothalamic functional connectivity during human non-rapid eye movement sleep: an EEG/fMRI study'--its limitations and an alternative approach. Brain 130:e75; author reply e76.
      Moehlman TM, de Zwart JA, Chappel-Farley MG, Liu X, McClain IB, Chang C, Mandelkow H, Ozbay PS, Johnson NL, Bieber RE, Fernandez KA, King KA, Zalewski CK, Brewer CC, van Gelderen P, Duyn JH, Picchioni D (2018) All-Night Functional Magnetic Resonance Imaging Sleep Studies. J Neurosci Methods.
      Picchioni D, Horovitz SG, Fukunaga M, Carr WS, Meltzer JA, Balkin TJ, Duyn JH, Braun AR (2010) Infraslow EEG oscillations organize large-scale cortical-subcortical interactions during sleep: A combined EEG/fMRI study. Brain Res.
      Schabus M, Dang-Vu TT, Albouy G, Balteau E, Boly M, Carrier J, Darsaud A, Degueldre C, Desseilles M, Gais S, Phillips C, Rauchs G, Schnakers C, Sterpenich V, Vandewalle G, Luxen A, Maquet P (2007) Hemodynamic cerebral correlates of sleep spindles during human non-rapid eye movement sleep. Proc Natl Acad Sci U S A 104:13164-13169.
      Spoormaker VI, Schroter MS, Gleiser PM, Andrade KC, Dresler M, Wehrle R, Samann PG, Czisch M (2010) Development of a large-scale functional brain network during human non-rapid eye movement sleep. J Neurosci 30:11379-11387.
      Staresina BP, Bergmann TO, Bonnefond M, van der Meij R, Jensen O, Deuker L, Elger CE, Axmacher N, Fell J (2015) Hierarchical nesting of slow oscillations, spindles and ripples in the human hippocampus during sleep. Nat Neurosci 18:1679-1686.

    1. Detailed briefing based on excerpts from the ARTE video "Notre microbiote nous domine-t-il ?" ("Does our microbiota rule us?"):

      Briefing document: "Notre microbiote nous domine-t-il ?"

      Source: Excerpts from the ARTE video "Notre microbiote nous domine-t-il ? | 42 - La réponse à presque tout" (February 14, 2025).

      Main themes:

      • The importance of the microbiota: The documentary highlights the crucial role of the microbiota (the community of micro-organisms) in human health. It stresses that we are totally dependent on these bacteria: "We are totally dependent on these bacteria; without its microbiota, the human being would not survive." More micro-organisms live in and on a single adult than there are humans on the planet.

      • Diversity and health: Greater microbial diversity correlates with better health: "the more diverse it is, the better our health." This diversity is essential for building complex networks within the microbiota, compared to a construction site where different specialized workers are needed to put up a solid building. Microbiota diversity benefits the organism and the immune system: "great diversity is beneficial to our organism and our immune system." A drop in diversity is observed in many chronic diseases.

      • Threats to microbial biodiversity: Urbanization, hygiene rules, and changes in diet and lifestyle lead to a loss of microbial biodiversity: "Urbanization, hygiene rules, and changes in diet and lifestyle nevertheless lead to a loss of microbial biodiversity." The environment we live in directly influences our microbiota. Pesticide use is identified as a major threat.

      • Urbanization and the microbiota: The video addresses the impact of urbanization on the microbiota, suggesting that a city dweller's microbiota is very different from a country dweller's. With the growth of the urban population, understanding these changes is crucial: "the impoverishment of the microbiota is assumed to be linked to urbanization."

      • The world microbiota bank: Faced with this loss of diversity, a Swiss bank keeps frozen samples of fecal matter from around the world in order to preserve humanity's microbiota: "our vault contains microbial communities, more precisely frozen samples of fecal matter from around the world." The initiative is compared to a Noah's Ark for microbes, aiming to preserve species potentially important for future health.

      • The gut-brain axis: The documentary highlights the connection between the gut and the brain, noting that the gut is the main producer of serotonin, which influences our mood: "thanks to its microbiota, the gut is the main producer of serotonin, which influences our mood." The gut-brain axis is described as a biochemical information road from belly to head and back, travelled by the metabolites the bacteria produce.

      • Influence of diet: Diet plays a crucial role in the composition of the microbiota. Bacteria thrive on the fibre found in fruit, vegetables, nuts, and whole grains. On a low-fibre diet, bacteria may start nibbling at the protective intestinal mucosa: "our gut microbiota depends on our diet; what we eat, our bacteria eat too."

      • Microbial colonization: The first three years of life are decisive for establishing the microbiota. Vaginal birth favours better microbial colonization than caesarean section.

      • Impact of antibiotics: Antibiotic use, especially broad-spectrum antibiotics, can have marked and lasting effects on the microbiota, above all when administered in early childhood.

      • Fecal transplantation: Fecal transplantation is presented as a way to restore a healthy microbiota by reintroducing the missing microbes.

      • Reconnecting with the environment: The documentary suggests ways to stimulate microbial diversity, such as keeping pets and house plants, opening windows, and living among trees. It recommends rubbing shoulders with the outside world and avoiding excessive sterilization.

      Key ideas and facts:

      • The human body hosts around 30 trillion micro-organisms.
      • Microbial diversity is a key indicator of health.
      • Urbanization and modern lifestyles impoverish the microbiota.
      • The "microbiota bank" in Switzerland is an attempt to preserve microbial diversity.
      • Diet, environment, and antibiotic use shape the composition of the microbiota.
      • Fecal transplantation is a promising therapeutic approach.
      • It is possible to change one's microbiota through healthy lifestyle habits.

    1. Here is a time-stamped summary of the conference-debate "Cosmétiques : Lever le voile sur les perturbateurs endocriniens" ("Cosmetics: lifting the veil on endocrine disruptors"), highlighting its key ideas:

      • 0:03-0:31 Introduction to the conference on endocrine disruptors (EDs) in cosmetics, and announcement of a bar where the discussion can continue.

      • 0:37-2:01 Definition of endocrine disruptors by the World Health Organization (WHO) as substances that alter the endocrine system; they are present in many everyday products, including cosmetics. The global cosmetics market is booming, with France as its leader. EDs are commonly used to preserve products, despite their potential harmful effects on health and the environment.

      • 2:06-3:07 Presentation of the speakers: Aurélie Portefaix (pediatrician), Luc Jugla (chemist), Céline de Laurens (deputy mayor of Lyon for health), and Edouard Raffin (environmental lawyer).

      • 3:14-8:52 Definition of the endocrine system and explanation of how it works by Aurélie Portefaix. EDs are substances, or mixtures of heterogeneous substances. Some EDs have a short lifespan, while others persist in the body for a long time. The Esteban study revealed widespread exposure to parabens, phthalates, and pesticides. EDs can mimic or block the action of hormones, alter protein synthesis, or cause epigenetic mutations transmissible to offspring.

      • 8:58-10:14 Why EDs in cosmetics matter: their frequent use, the permeability of the skin, and the interaction between container and contents.

      • 10:14-11:25 Effects of EDs on the environment: the jurist relies on scientific analyses in order to legislate. The legal mechanism governs the placing on the market of products containing EDs, through identification, restriction, or prohibition.

      • 11:25-14:02 The environmental effects are as numerous as the types of EDs. The risk is extremely widespread and serious, with ever stronger implications uncovered over the years. EDs disrupt the hormonal system and are invisible and insidious.

      • 14:02-14:42 Effects of EDs on wildlife: reproductive disorders.

      • 15:25-17:39 Reasons for using EDs in cosmetics: preservation, resistance, absorption of solar rays. Parabens are used as inexpensive, broad-spectrum preservatives. The industry is adapting by reducing its use of certain families of EDs.

      • 17:39-20:55 Political stakes linked to EDs, both as environmental pollution and as individual exposure. Regulations aim to protect health, but marketing authorizations often rest on estimates negotiated with manufacturers.

      • 20:55-23:35 The need to rebuild a link with science and to strengthen citizens' trust in how society functions.

      • 23:35-25:36 Categorization of substances as endocrine disruptors: proven, presumed, or suspected EDs. Scientific evidence of a harmful effect, a compatible mode of action, and an established causal link are required.

      • 25:36-29:32 Existing legislative and regulatory mechanisms governing the use of EDs. The law is influenced by lobbies, requiring a balance between protective regulation and competitiveness. The REACH regulation governs the registration, evaluation, and authorization of chemical substances.

      • 29:32-34:45 The goal is to protect consumers and their health by banning or restricting substances according to their classification. France was a pioneer with its national strategies on endocrine disruptors (SNPE). The 2020 anti-waste and circular-economy law requires better product information. A 2023 decree on information about certain intimate hygiene products is a concrete example.

      • 34:45-37:29 Role of ANSES: a state agency that responds to commissions, sometimes lagging behind the risks. ANSES opinions are sometimes timid, and its timescale is not that of academic researchers. ANSES has to take economic realities into account and treads carefully on certain subjects.

      • 37:29-46:37 Actions by the city of Lyon: participation in coordination bodies and signing of the charter of cities and territories free of endocrine disruptors. A local public policy has been set up, with an action plan focused on schools and nurseries. Concrete actions: organic food in canteens, phasing out plastic, cleaning without chemicals. The city concentrates on the first 1,000 days of life and on children.

      • 46:37-51:33 Choosing not to use EDs: an awareness of environmental issues. Wherever there is suspicion, the decision was taken not to use them. Transparency about the composition of cosmetic products. The profession adapts quickly to customer demand.

      • 51:33-54:03 Scientific progress in identifying and proving the effects of EDs: randomized controlled trials are difficult to set up. Long-term epidemiological studies, biomarkers, in vitro and in vivo tests. Research on the exposome, i.e. the totality of exposures over a lifetime.

      • 54:03-58:22 Alternatives to EDs: use of essential oils (with caution), reformulation of preservatives. Difficulties in replacing fluorinated derivatives in make-up and organic sunscreen filters.

      • 58:22-1:03:26 Consumer adaptation, harmonization of practices, updating of REACH. Free will and individual responsibility.

      • 1:03:26-1:04:46 Public authorities seek a balance between protection, economic competitiveness, and reasonableness.

      • 1:04:46-1:11:30 Growing public awareness, and apps such as Yuka.

      • 1:04:46-1:17:20 The risks of non-compliance and their legal consequences: before the European or French agencies, offenders face both administrative and criminal sanctions.

      • 1:17:20-1:35:27 Consumers suffering from symptoms they do not understand will see their doctors, then look for answers, and may bring administrative or criminal proceedings against a manufacturer or distributor.

      • 1:35:27-1:43:23 Epidemiological studies are lacking, so such harm is very difficult to demonstrate.

      • 1:43:23-1:46:12 Work on improving this scientific evaluation, which requires transparency about the products we consume.

      • 1:46:12-1:48:14 Discussion with the audience.

    1. Of course. Here is a summary of the video with the key ideas highlighted:

      • [00:00:06] Introduction of the programme and its participants. François Dubet, sociologist and research director, is the guest, discussing his book "Tous égaux, tous singuliers", which deals with inequalities and solidarity.

      • [00:00:39] Evolution of the notion of social inequality: Inequalities are measurable, but the central question is understanding why some are accepted while others provoke indignation. Over the past thirty years or so, two major changes have marked our societies: the break-up of social classes and the shift from a conception of social justice focused on reducing inequalities of condition to one focused on equality of opportunity.

      • [00:01:44] Social justice vs. equality of opportunity: Social justice rests on the claim that people are born free and equal yet live in unequal societies. Two models exist: a European model focused on reducing inequalities of position and an American model focused on equality of opportunity. The second model tends to dominate, because the social supports of the first (trade unions, the working class) are weakening. The major social injustice is now perceived to be discrimination rather than exploitation.

      • [00:02:28] Regime of inequalities vs. inequalities: Inequalities are measurable, whereas the regime of inequalities is the way those inequalities are constructed and justified. The first regime of inequalities was that of castes, in which inequalities were considered natural. The democratic revolutions abolished that regime, but traces remain, notably in inequalities between the sexes. Industrial societies then structured inequalities around social functions and the capitalist market.

      • [00:03:15] The experience of inequalities: Under the old regime, inequalities were experienced collectively, with class pride and a class culture. Today, the mutations of capitalism have individualized the experience of inequality. People feel unequal according to multiple factors (level of education, place of residence, age, etc.) rather than as members of a social class.

      • [00:03:44] The "gilets jaunes" movement: This movement is characterized by collective anger, but without clear demands or a sense of class belonging. There is hostility toward the elites and difficulty formulating demands. The dominant feeling is contempt: everyone feels devalued and unrecognized.

      • [00:04:18] Disconnect between measured and perceived inequalities: The experience of inequality is the meeting point between objective inequalities, the representations we have of them, and the conceptions of justice we mobilize. Inequalities between men and women have decreased, but the feeling of injustice is stronger, because women are more directly confronted with inequalities and have a greater awareness of equality. Likewise, school inequalities are perceived as more intolerable, because school has become a competition everyone must play, in which small inequalities become decisive.

      • [00:05:16] Equality of opportunity and its limits: Americans believe in equality of opportunity, yet social mobility is lower there than in France or Scandinavia. It is crucial to understand how actors experience and represent inequalities in order to fight them effectively. The inequalities that provoke the least indignation are the least important to address.

      • [00:06:05] Rethinking solidarity: The challenge is to build a democratic representation in which anger finds political expression. Solidarity transfers must be made legible, because the current mechanism is illegible and creates a feeling of being short-changed. It is urgent to simplify redistribution mechanisms and to return to universal policies rather than targeting particular groups. More active feelings of solidarity must also be cultivated, building on local associative life and integrating the learning of solidarity into the school experience.

      • [00:07:39] The mechanism of small inequalities adding up: School careers were once a destiny fixed at birth. Today they result from an accumulation of small differences (grades, choice of options, school attended, etc.). None of these inequalities is very strong on its own, but they accumulate and give the impression that individuals are responsible for their own destiny.

      • [00:08:28] Avenues for political action: To rethink solidarity, anger must find political expression. Solidarity transfers must be made legible and redistribution mechanisms simplified. More active feelings of solidarity must be cultivated, building on associative life and integrating the learning of solidarity into school.

      • [00:09:16] Sociology facing multiple inequalities: Sociology has scattered itself across a multitude of objects and theories, losing its global vision of society. It should rebuild an image of society and attend both to actors and to the mechanisms that structure them. Individual ordeals and collective stakes must be held together.

      • [00:10:13] Sociology and stigmatization: There is sometimes a self-serving posture in denouncing stigmatization and discrimination. It is important to listen to social actors and not over-interpret what they say. Sociologists of the current generation may be more technical and professionalized, but have perhaps lost some sociological imagination. Research laboratories should favour collective work rather than merely being platforms for individual services.

    1. Here is a summary of the video with the key ideas highlighted:

      • Introduction
        • Presentation of the guests: Bruno Humbeeck, psycho-pedagogue and specialist in hyper-parenting and positive education, and Béatrice Kammerer, journalist specializing in positive education.
        • The programme explores positive education, an increasingly widespread but controversial topic, both in families and at school.
        • The guests are neither for nor against positive education; they propose a measured approach.
      • Definition and history of positive education
        • Positive education began as a research field studying the positive aspects of the human being, then became a pedagogical approach.
        • The term appeared in 2006 with a Council of Europe recommendation defining a form of parenting that aims to raise and empower the child non-violently, fostering their full development.
        • This broad definition has allowed various educational currents to attach themselves to it, making the notion hard to pin down precisely.
      • Confusion with the new pedagogies and critique of the "recipe" approach
        • Positive education is sometimes confused with new pedagogies such as Montessori, which grew out of the post-war period.
        • The idea of letting children grow comes from Rousseau, but he also advocated a negative education, for example letting the child suffer the consequences of their actions.
        • Positive education sometimes has the flaw of assuming there is only one way to educate, whereas all pedagogical approaches have a benevolent intention.
        • Positive education is criticized for its simplistic "recipes" and injunctions such as "be zen parents".
      • The changing view of the child and the values of positive education
        • Positive education reflects an evolution in how the child is seen, notably the recognition of the child as a person in their own right.
        • Corporal punishment is no longer acceptable, and the aim is the child's fulfilment, taking their emotions and needs into account.
        • Care must be taken not to centre children on themselves to the point of neglecting concern for the collective, which could be a danger for democracy.
      • The excesses of positive education: "zen", hyper-communicative, and hyper-tolerant parents
        • Positive pedagogy can produce child tyrants if it is not tempered by empathy.
        • One excess is the "zen" parent who represses their emotions, even when the child misbehaves.
        • The hyper-communicative parent is constantly at the child's disposal, without filtering what is worth attention.
        • The hyper-tolerant parent no longer sets limits, forgetting any principle of authority, which can lead to a denial of democratic life.
      • Principles and practices recommended by positive education
        • Prohibition of all forms of educational violence and coercion.
        • No punishments or rewards, since these undermine the child's intrinsic motivation.
        • Valuing the expression of emotions and making measured, non-violent requests.
        • Attending to the child's self-esteem: judge the child's behaviour, not the child.
        • Being attentive to the child's needs from infancy onward, drawing on attachment theory.
      • The importance of nuance: not rejecting positive education wholesale
        • It would be a shame to reject positive education wholesale because of its excesses.
        • A binary view declaring positive education simply good or bad should be avoided.
        • The attention paid to ordinary violence is an achievement of positive pedagogy that must be preserved.
      • Critiques of the "benevolent" school and the need for a framework and rules
        • The school of benevolence is criticized when it turns into a school of complacency, where children do whatever they want without limits.
        • A form of authority must be exercised over the child, who needs rules and laws in order to live in society.
      • The various criticisms of positive education: the question of authority
        • Some parents want to abolish all forms of constraint, considering any imposition to be violence.
        • Positive education challenged authoritarianism, but has sometimes asked parents to give up authority altogether, which can lead to other objectionable behaviours.
      • Benevolence: a concept to be qualified
        • The discourse around educational benevolence needs nuancing, avoiding turning assessments into an "école des fans".
        • School is a permanent laboratory: innovations deserve attention, but they need to be framed.
      • The child's (and the adult's) need for frameworks and limits
        • Children need a framework and markers, and a deregulated education benefits no one.
        • Rules are needed, along with a system of sanctions so that those rules are respected.
      • Renouncing the triple perfection and accepting the world's imperfection
        • We must renounce the triple perfection: accept not being perfect ourselves, not having perfect children, and that the world is imperfect.
        • Climate anxiety and an uncertain global context make positive pedagogy hard to live by.
        • We must accept uncertainty and the fact that our children will not be perfect.
      • Les injonctions paradoxales faites aux parents et la difficulté d'être parent aujourd'hui
        • Les parents sont soumis à des injonctions paradoxales : être attentif à son enfant sans l'envahir, répondre à ses besoins tout en lui permettant d'explorer.
        • Il est difficile d'être parent aujourd'hui, car il y a des attentes très diverses de toute la société.
        • Les parents sont tiraillés entre la peur du déclassement social et le désir de voir leur enfant s'épanouir.
      • Les inégalités entre pères et mères et la charge mentale des mères
        • L'éducation positive a plutôt tendance à renforcer les inégalités entre pères et mères.
        • Les femmes ont souvent la mission d'être les "chargées de développement" du foyer, et l'éducation positive se dirige énormément vers elles.
        • On demande aux mères de prendre en charge les compétences de communication et l'expression des émotions.
        • Dès la naissance, on adresse aux femmes des injonctions sur l'accouchement, l'allaitement, la disponibilité, etc..
      • Le danger du "coaching" parental et l'importance du soutien
        • Le "coaching" est dangereux car il donne l'idée qu'il y aurait une performance à atteindre, alors qu'il faut accepter d'être des parents imparfaits.
        • La parentalité est le mécanisme par lequel chacun de nos enfants transforme l'éducation qu'il reçoit en quelque chose de personnel.
        • La pédagogie positive a sans doute été dans le mur quand elle s'est voulue prescriptive.
      • L'enjeu de la coéducation et de la complémentarité des expertises

        • Il faut accepter l'idée qu'on n'éduque pas seul ses enfants et qu'on doit utiliser tous les supports que nos sociétés mettent à notre disposition.
        • La coéducation repose sur la complémentarité des expertises : celles des parents et celles des spécialistes.
        • Il faut redonner de la valeur aux pratiques parentales, même si elles ne suivent pas forcément les manuels d'éducation positive.
        • Les techniques de la coéducation ne sont pas les mêmes que celles du soutien à la parentalité.
      • Conclusion

        • Présentation du numéro de Sciences Humaines sur l'éducation positive et d'autres ressources sur le sujet.
        • Remerciements aux invités et aux participants.
    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Reply to the Reviewers

      I would like to thank the reviewers for their comments and interest in the manuscript and the study.

      Referee #1

      1. I would assume that there are RNA-seq and/or ChIP-seq data out there produced after knockdown of one or more of these DBPs that show directional positioning.

      Response: The directional positioning of CTCF-binding sites at chromatin interaction sites was analyzed by a CRISPR experiment (Guo Y et al. Cell 2015). We found that the machine learning and statistical analyses showed the same directional bias of the CTCF-binding motif sequence at chromatin interaction sites as the experimental analysis of Guo Y et al. (lines 229-245, Figure 3b, c, d and Table 1). Since CTCF is involved in different biological functions (Braccioli L et al. Essays Biochem. 2019), the directional bias may be reduced when all binding sites are considered, including those at chromatin interaction sites (lines 68-73). In our study, we investigated the DNA-binding sites of proteins using the ChIP-seq data of DNA-binding proteins and DNase-seq data. We also confirmed computationally that the DNA-binding sites of SMC3 and RAD21, which tend to be found in chromatin loops with CTCF, show the same directional bias as CTCF.

      1. Figure 6 should be expanded to incorporate analysis of DBPs not overlapping CTCF/cohesin in chromatin interaction data that is important and potentially more interesting than the simple DBPs enrichment reported in the present form of the figure.

      Response: Following the reviewer's advice, I performed the same analysis with the DNA-binding sites that do not overlap with the DNA-binding sites of CTCF and cohesin (RAD21 and SMC3) (Fig. 6 and Supplementary Fig. 4). The result showed the same tendency in the distribution of DNA-binding sites. The height of a peak on the graph became lower for some DNA-binding proteins after removing the DNA-binding sites that overlapped with those of CTCF and cohesin. I have added the following sentence on lines 427 and 817: For the insulator-associated DBPs other than CTCF, RAD21, and SMC3, the DNA-binding sites that do not overlap with those of CTCF, RAD21, and SMC3 were used to examine their distribution around interaction sites.
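      As an illustrative aside (not code from the manuscript), the filtering step described in this response amounts to excluding binding sites that overlap any CTCF/RAD21/SMC3 site before recomputing the distribution; a minimal sketch with toy coordinates:

```python
# Hedged sketch of the filtering step: drop binding sites that overlap
# any CTCF/RAD21/SMC3 site. Coordinates below are toy values, not data
# from the study.

def exclude_overlaps(sites, ctcf_cohesin_sites):
    """Keep only sites that overlap no CTCF/cohesin interval."""
    def overlaps(a, b):
        # closed intervals (start, end) overlap iff each starts before
        # the other ends
        return a[0] <= b[1] and a[1] >= b[0]
    return [s for s in sites
            if not any(overlaps(s, c) for c in ctcf_cohesin_sites)]

sites = [(10, 60), (500, 540), (900, 940)]
ctcf_cohesin_sites = [(50, 120), (880, 1000)]
print(exclude_overlaps(sites, ctcf_cohesin_sites))  # [(500, 540)]
```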

      1. Critically, I would like to see use of Micro-C/Hi-C data and ChIP-seq from these factors, where insulation scores around their directionally-bound sites show some sort of an effect like that presumed by the authors - and many such datasets are publicly-available and can be put to good use here.

      Response: As suggested by the reviewer, I have added the insulator scores and boundary sites from the 4D nucleome data portal as tracks in the UCSC genome browser. The insulator scores seem to correspond to some extent to the H3K27me3 histone marks from ChIP-seq (Fig. 4a and Supplementary Fig. 3). The direction of DNA-binding sites on the genome can be shown with different colors (e.g. red and green), but the directionality of insulator-associated DNA-binding sites is their overall tendency, and it may be difficult to notice the directionality from each binding site because the directionality may be weaker than that of CTCF, RAD21, and SMC3 as shown in Table 1 and Supplementary Table 2.

      I found that the CTCF-binding sites examined by wet-lab experiments in the previous study may not always overlap with the boundary sites of chromatin interactions from the Micro-C assay (Guo Y et al. Cell 2015). The chromatin interaction data do not include all interactions due to the high sequencing cost of the assay. The number of boundary sites may be smaller than that of CTCF-binding sites acting as insulators, and/or some of the CTCF-binding sites may not be located in the boundary sites. It may also be difficult for the boundary-calling algorithm to identify a short boundary. Due to these limitations of the chromatin interaction data, I planned to search for insulator-associated DNA-binding proteins without using chromatin interaction data in this study. I have added the statistical summary of the analysis in lines 364-387 as follows: Overall, among 20,837 DNA-binding sites of the 97 insulator-associated proteins found at insulator sites identified by H3K27me3 histone modification marks (type 1 insulator sites), 1,315 (6%) overlapped with 264 of 17,126 5-kb long boundary sites, and 6,137 (29%) overlapped with 784 of 17,126 25-kb long boundary sites in HFF cells. Among 5,205 DNA-binding sites of the 97 insulator-associated DNA-binding proteins found at insulator sites identified by H3K27me3 histone modification marks and transcribed regions (type 2 insulator sites), 383 (7%) overlapped with 74 of 17,126 5-kb long boundary sites, and 1,901 (37%) overlapped with 306 of 17,126 25-kb long boundary sites. Although CTCF-binding sites separate active and repressive domains, only a limited number of the DNA-binding sites of insulator-associated proteins found at type 1 and 2 insulator sites overlapped boundary sites identified by chromatin interaction data.
Furthermore, by analyzing the regulatory regions of genes, the DNA-binding sites of the 97 insulator-associated DNA-binding proteins were found (1) at the type 1 insulator sites (based on H3K27me3 marks) in the regulatory regions of 3,170 genes, (2) at the type 2 insulator sites (based on H3K27me3 marks and gene expression levels) in the regulatory regions of 1,044 genes, and (3) at insulator sites as boundary sites identified by chromatin interaction data in the regulatory regions of 6,275 genes. The boundary sites showed the highest number of overlaps with the DNA-binding sites. Comparing the insulator sites identified by (1) and (3), 1,212 (38%) genes have both types of insulator sites. Comparing the insulator sites between (2) and (3), 389 (37%) genes have both types of insulator sites. From the comparison of insulator and boundary sites, we found that (1) or (2) types of insulator sites overlapped or were close to boundary sites identified by chromatin interaction data.
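      The overlap tallies quoted in this response boil down to counting binding sites that fall within at least one boundary interval; a minimal illustrative sketch (toy coordinates, not the study's data):

```python
# Hedged sketch of the overlap counting: how many binding sites overlap
# at least one boundary interval. All coordinates are toy values.

def count_overlapping(sites, boundaries):
    """sites, boundaries: lists of (start, end) closed intervals.
    Returns the number of sites overlapping >= 1 boundary."""
    def overlaps(a, b):
        return a[0] <= b[1] and a[1] >= b[0]
    return sum(any(overlaps(s, b) for b in boundaries) for s in sites)

sites = [(100, 150), (400, 420), (900, 950)]
boundaries = [(0, 200), (800, 1000)]
print(count_overlapping(sites, boundaries))  # 2 of the 3 sites overlap
```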

      1. The suggested alternative transcripts function, also highlighted in the manuscript's abstract, is only supported by visual inspection of a few cases for several putative DBPs. I believe this is insufficient to support what looks like one of the major claims of the paper when reading the abstract, and a more quantitative and genome-wide analysis must be adopted, although the authors mention it as just an 'observation'.

      Response: According to the reviewer's comment, I performed a genome-wide analysis of alternative transcripts whose splice sites lie near the DNA-binding sites of insulator-associated proteins. The DNA-binding sites of insulator-associated DNA-binding proteins were found within 200 bp centered on splice sites significantly more often than those of the other DNA-binding proteins (Fig. 4e and Table 2). I have added the following sentences on lines 397-404: We performed a statistical test to estimate the enrichment of insulator-associated DNA-binding sites relative to the other DNA-binding proteins, and found that the insulator-associated DNA-binding sites were significantly more abundant at splice sites than the DNA-binding sites of the other proteins (Fig. 4e and Table 2; Mann‒Whitney U test).

      1. Figure 1 serves no purpose in my opinion and can be removed, while figures can generally be improved (e.g., the browser screenshots in Figs 4 and 5) for interpretability from readers outside the immediate research field.

      Response: I believe that Figure 1 would help researchers in other fields who are not familiar with the biological phenomena and functions to understand the study. More explanation has been included in the figures and legends of Figs. 4 and 5 to help readers outside the immediate research field understand them.

      1. Similarly, the text is rather convoluted at places and should be re-approached with more clarity for less specialized readers in mind.

      Response: Reviewer #2's comments would be related to this comment. I have introduced a more detailed explanation of the method in the Results section, as shown in the responses to Reviewer #2's comments.

      Referee #2

      1. Introduction, line 95: CTCF appears two times, it seems redundant.

      Response: On lines 91-93, I deleted the latter CTCF from the sentence "and examined the directional bias of DNA-binding sites of CTCF and insulator-associated DBPs, including those of known DBPs such as RAD21 and SMC3".

      1. Introduction, lines 99-103: Please stress better the novelty of the work. What is the main focus? The new identified DPBs or their binding sites? What are the "novel structural and functional roles of DBPs" mentioned?

      Response: Although CTCF is known to be the main insulator protein in vertebrates, we found that 97 DNA-binding proteins including CTCF and cohesin are associated with insulator sites by modifying and developing a machine learning method to search for insulator-associated DNA-binding proteins. Most of the insulator-associated DNA-binding proteins showed the directional bias of DNA-binding motifs, suggesting that the directional bias is associated with the insulator.

      I have added the sentence in lines 96-99 as follows: Furthermore, statistical testing of the contribution scores between the directional and non-directional DNA-binding sites of insulator-associated DBPs revealed that the directional sites contributed more significantly to the prediction of gene expression levels than the non-directional sites. I have revised the statement in lines 101-110 as follows: To validate these findings, we demonstrate that the DNA-binding sites of the identified insulator-associated DBPs are located within potential insulator sites, and some of the DNA-binding sites at insulator sites are found without nearby DNA-binding sites of CTCF and cohesin. Homologous and heterologous insulator-insulator pairing interactions are orientation-dependent, as suggested by the insulator-pairing model based on experimental analysis in flies. Our method and analyses contribute to the identification of insulator- and chromatin-associated DNA-binding sites that influence EPIs and reveal novel functional roles and molecular mechanisms of DBPs associated with transcriptional condensation, phase separation and transcriptional regulation.

      1. Results, line 111: How do the SNPs come into the procedure? From the figures it seems the input is ChIP-seq peaks of DNBPs around the TSS.

      Response: On lines 121-124, to explain the procedure for the SNP of an eQTL, I have added the sentence in the Methods: "If a DNA-binding site was located within a 100-bp region around a single-nucleotide polymorphism (SNP) of an eQTL, we assumed that the DNA-binding proteins regulated the expression of the transcript corresponding to the eQTL".
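      As a hedged illustration of this rule (a hypothetical helper, not the manuscript's code), a binding site is linked to an eQTL when it overlaps the 100-bp window centered on the eQTL SNP:

```python
# Sketch of the 100-bp eQTL window rule described above: a DNA-binding
# site is associated with an eQTL transcript when the site overlaps the
# 100-bp region centered on the eQTL SNP. Toy coordinates only.

def overlaps_eqtl_window(peak_start, peak_end, snp_pos, window=100):
    """True if [peak_start, peak_end] overlaps the `window`-bp region
    centered on snp_pos."""
    half = window // 2
    win_start, win_end = snp_pos - half, snp_pos + half
    return peak_start <= win_end and peak_end >= win_start

# illustrative coordinates, not from the study
print(overlaps_eqtl_window(1000, 1200, 1240))  # True: window reaches 1190
print(overlaps_eqtl_window(1000, 1200, 1500))  # False: window starts at 1450
```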

      1. Again, are those SNPs coming from the different cell lines? Or are they from individuals w.r.t some reference genome? I suggest a general restructuring of this part to let the reader understand more easily. One option could be simplifying the details here or alternatively including all the necessary details.

      Response: On line 119, I have included the explanation of the eQTL dataset of GTEx v8 as follows: " The eQTL data were derived from the GTEx v8 dataset, after quality control, consisting of 838 donors and 17,382 samples from 52 tissues and two cell lines". On lines 681 and 865, I have added the filename of the eQTL data "(GTEx_Analysis_v8_eQTL.tar)".

      1. Figure 1: panel a and b are misleading. Is the matrix in panel a equivalent to the matrix in panel b? If not please clarify why. Maybe in b it is included the info about the SNPs? And if yes, again, what is then difference with a.

      Response: The reviewer presumably means Figure 2, not Figure 1. If so, the matrices in panels a and b of Figure 2 are equivalent. I have indicated this in the figure: the matrix in panel a is rotated 90 degrees to the right in panel b. The green boxes in the matrix show the regions with the ChIP-seq peak of a DNA-binding protein overlapping with a SNP of an eQTL. I used eQTL data to associate a gene with a ChIP-seq peak that was more than 2 kb upstream and 1 kb downstream of the transcriptional start site of a gene. For each gene, the matrix was produced, and the gene expression levels in cells were learned and predicted using the deep learning method. I have added the following sentences to explain the method in lines 133-139: Through the training, the tool learned to select the binding sites of DNA-binding proteins from ChIP-seq assays that were suitable for predicting gene expression levels in the cell types. The binding sites of a DNA-binding protein tend to be observed in common across multiple cell and tissue types. Therefore, ChIP-seq data and eQTL data from different cell and tissue types were used as input data for learning, and the tool then selected the data suitable for predicting gene expression levels in the cell types, even if the data were not obtained from the same cell types.
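      For readers outside the field, the per-gene input matrix described in this response can be sketched as a binary DBP-by-bin array (toy sizes and hypothetical hits; the actual matrices are much larger):

```python
import numpy as np

# Minimal sketch of the per-gene input matrix: rows are DNA-binding
# proteins, columns are genomic bins around the TSS; an entry is 1 when
# that DBP has a ChIP-seq peak overlapping an eQTL SNP in that bin.
# Shapes and hits below are hypothetical.

n_dbps, n_bins = 3, 6
matrix = np.zeros((n_dbps, n_bins), dtype=np.int8)

# hypothetical (dbp_index, bin_index) hits for one gene
hits = [(0, 1), (0, 4), (2, 3)]
for dbp, b in hits:
    matrix[dbp, b] = 1

print(int(matrix.sum()))  # 3 marked cells
```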

      1. Line 386-388: could the author investigate in more detail this observation? Does it mean that loops driven by other DBPs independent of the known CTCF/Cohesin? Could the author provide examples of chromatin structural data e.g. MicroC?

      Response: As suggested by the reviewer, to help readers understand the observation, I have added Supplementary Fig. S4c to show the distribution of DNA-binding sites of "CTCF, RAD21, and SMC3" and "BACH2, FOS, ATF3, NFE2, and MAFK" around chromatin interaction sites. I have modified the following sentence to indicate the figure on line 493: Although a DNA-binding-site distribution pattern around chromatin interaction sites similar to those of CTCF, RAD21, and SMC3 was observed for DBPs such as BACH2, FOS, ATF3, NFE2, and MAFK, less than 1% of the DNA-binding sites of the latter set of DBPs colocalized with CTCF, RAD21, or SMC3 in a single bin (Fig. S4c).

      In Aljahani A et al. Nature Communications 2022, the authors find that depletion of cohesin causes a subtle reduction in longer-range enhancer-promoter interactions and that CTCF depletion can cause rewiring of regulatory contacts. Together, their data show that loop extrusion is not essential for enhancer-promoter interactions but contributes to their robustness and specificity and to precise regulation of gene expression. Goel VY et al. Nature Genetics 2023 mentioned in the abstract: Microcompartments frequently connect enhancers and promoters, and though loss of loop extrusion and inhibition of transcription disrupt some microcompartments, most are largely unaffected. These results suggest that chromatin loops can be driven by other DBPs independently of the known CTCF/cohesin.

      The FOXA1 pioneer factor functions as an initial chromatin-binding and chromatin-remodeling factor and has been reported to form biomolecular condensates (Ji D et al. Molecular Cell 2024). CTCF has also been found to form transcriptional condensates and undergo phase separation (Lee R et al. Nucleic Acids Research 2022). FOS was found to be an insulator-associated DNA-binding protein in this study and is potentially involved in chromatin remodeling, transcriptional condensation, and phase separation with other factors such as BACH2, ATF3, NFE2 and MAFK. I have added the following sentence on line 548: FOXA1 pioneer factor functions as an initial chromatin-binding and chromatin-remodeling factor and has been reported to form biomolecular condensates.

      1. In general, how the presented results are related to some models of chromatin architecture, e.g. loop extrusion, in which it is integrated convergent CTCF binding sites?

      Response: Goel VY et al. Nature Genetics 2023 identified highly nested and focal interactions through region capture Micro-C, which resemble fine-scale compartmental interactions and are termed microcompartments. In the section titled "Most microcompartments are robust to loss of loop extrusion," the researchers noted that a small proportion of interactions between CTCF- and cohesin-bound sites exhibited significant reductions in strength when cohesin was depleted. In contrast, the majority of microcompartmental interactions remained largely unchanged under cohesin depletion. Their findings indicate that most P-P and E-P interactions, aside from a few CTCF- and cohesin-bound enhancers and promoters, are likely facilitated by a compartmentalization mechanism that differs from loop extrusion. They suggest that nested, multiway, and focal microcompartments correspond to small, discrete A-compartments that arise through a compartmentalization process, potentially influenced by factors upstream of RNA Pol II initiation, such as transcription factors, co-factors, or active chromatin states. It follows that if active chromatin regions at microcompartment anchors exhibit selective "stickiness" with one another, they will tend to co-segregate, leading to the development of nested, focal interactions. This microphase separation, driven by preferential interactions among active loci within a block copolymer, may account for the striking interaction patterns they observe.

      The authors of the paper proposed several mechanisms potentially involved in microcompartments. These mechanisms may be involved in looping with insulator function. Another group reported that enhancer-promoter interactions and transcription are largely maintained upon depletion of CTCF, cohesin, WAPL or YY1. Instead, cohesin depletion decreased transcription factor binding to chromatin. Thus, cohesin may allow transcription factors to find and bind their targets more efficiently (Hsieh TS et al. Nature Genetics 2022). Among the identified insulator-associated DNA-binding proteins, Maz and MyoD1 form loops without CTCF (Xiao T et al. Proc Natl Acad Sci USA 2021 ; Ortabozkoyun H et al. Nature genetics 2022 ; Wang R et al. Nature communications 2022). I have added the following sentences on lines 563-567: Another group reported that enhancer-promoter interactions and transcription are largely maintained upon depletion of CTCF, cohesin, WAPL or YY1. Instead, cohesin depletion decreased transcription factor binding to chromatin. Thus, cohesin may allow transcription factors to find and bind their targets more efficiently. I have included the following explanation on lines 574-576: Maz and MyoD1 among the identified insulator-associated DNA-binding proteins form loops without CTCF.

      As for the directionality of CTCF, if chromatin loop anchors have some structural conformation, as shown in the paper entitled "The structural basis for cohesin-CTCF-anchored loops" (Li Y et al. Nature 2020), directional DNA binding would occur similarly to CTCF binding sites. Moreover, cohesin complexes that interact with convergent CTCF sites, that is, the N-terminus of CTCF, might be protected from WAPL, but those that interact with divergent CTCF sites, that is, the C-terminus of CTCF, might not be protected from WAPL, which could release cohesin from chromatin and thus disrupt cohesin-mediated chromatin loops (Davidson IF et al. Nature Reviews Molecular Cell Biology 2021). Regarding loop extrusion, the 'loop extrusion' hypothesis is motivated by in vitro observations. The experiment in yeast, in which cohesin variants that are unable to extrude DNA loops but retain the ability to topologically entrap DNA, suggested that in vivo chromatin loops are formed independently of loop extrusion. Instead, transcription promotes loop formation and acts as an extrinsic motor that extends these loops and defines their final positions (Guerin TM et al. EMBO Journal 2024). I have added the following sentences on lines 535-539: Cohesin complexes that interact with convergent CTCF sites, that is, the N-terminus of CTCF, might be protected from WAPL, but those that interact with divergent CTCF sites, that is, the C-terminus of CTCF, might not be protected from WAPL, which could release cohesin from chromatin and thus disrupt cohesin-mediated chromatin loops. I have included the following sentences on lines 569-574: The 'loop extrusion' hypothesis is motivated by in vitro observations. The experiment in yeast, in which cohesin variants that are unable to extrude DNA loops but retain the ability to topologically entrap DNA, suggested that in vivo chromatin loops are formed independently of loop extrusion. 
Instead, transcription promotes loop formation and acts as an extrinsic motor that extends these loops and defines their final positions.

      Another model for the regulation of gene expression by insulators is the boundary-pairing (insulator-pairing) model (Bing X et al. Elife 2024) (Ke W et al. Elife 2024) (Fujioka M et al. PLoS Genetics 2016). Molecules bound to insulators physically pair with their partners, either head-to-head or head-to-tail, with different degrees of specificity, at the termini of TADs in flies. Although the experiments do not reveal how partners find each other, the mechanism is unlikely to require loop extrusion. Homologous and heterologous insulator-insulator pairing interactions are central to the architectural functions of insulators. The manner of insulator-insulator interactions is orientation-dependent. I have summarized the model on lines 551-559: Other types of chromatin regulation are also expected to be related to the structural interactions of molecules. In the boundary-pairing (insulator-pairing) model, molecules bound to insulators physically pair with their partners, either head-to-head or head-to-tail, with different degrees of specificity, at the termini of TADs in flies (Fig. 7). Although the experiments do not reveal how partners find each other, the mechanism is unlikely to require loop extrusion. Homologous and heterologous insulator-insulator pairing interactions are central to the architectural functions of insulators. The manner of insulator-insulator interactions is orientation-dependent.

      1. Do the authors think that the identified DBPs could work in that way as well?

      Response: The boundary-pairing (insulator-pairing) model would be applied to the insulator-associated DNA-binding proteins other than CTCF and cohesin that are involved in the loop extrusion mechanism (Bing X et al. Elife 2024) (Ke W et al. Elife 2024) (Fujioka M et al. PLoS Genetics 2016).

      Liquid-liquid phase separation was shown to occur through CTCF-mediated chromatin loops and to act as an insulator (Lee, R et al. Nucleic Acids Research 2022). Among the identified insulator-associated DNA-binding proteins, CEBPA has been found to form hubs that colocalize with transcriptional co-activators in a native cell context, which is associated with transcriptional condensate and phase separation (Christou-Kent M et al. Cell Reports 2023). The proposed microcompartment mechanisms are also associated with phase separation. Thus, the same or similar mechanisms are potentially associated with the insulator function of the identified DNA-binding proteins. I have included the following information on line 546: CEBPA in the identified insulator-associated DNA-binding proteins was also reported to be involved in transcriptional condensates and phase separation.

      1. Also, can the authors comment about the mechanisms those newly identified DBPs mediate contacts by active processes or equilibrium processes?

      Response: Snead WT et al. Molecular Cell 2019 mentioned that protein post-translational modifications (PTMs) facilitate the control of molecular valency and strength of protein-protein interactions. O-GlcNAcylation as a PTM inhibits CTCF binding to chromatin (Tang X et al. Nature Communications 2024). I found that the identified insulator-associated DNA-binding proteins tend to form a cluster at potential insulator sites (Supplementary Fig. 2d). These proteins may interact and actively regulate chromatin interactions, transcriptional condensation, and phase separation through PTMs. I have added the following explanation on lines 576-582: Furthermore, protein post-translational modifications (PTMs) facilitate control over the molecular valency and strength of protein-protein interactions. O-GlcNAcylation as a PTM inhibits CTCF binding to chromatin. We found that the identified insulator-associated DNA-binding proteins tend to form a cluster at potential insulator sites (Fig. 4f and Supplementary Fig. 3c). These proteins may interact and actively regulate chromatin interactions, transcriptional condensation, and phase separation through PTMs.

      1. Can the author provide some real examples along with published structural data (e.g. the mentioned micro-C data) to show the link between protein co-presence, directional bias and contact formation?

      Response: A structural molecular model of cohesin-CTCF-anchored loops has been published by Li Y et al. Nature 2020. The structural conformation of CTCF and cohesin in these loops would be the cause of the directional bias of CTCF-binding sites, which I mentioned in lines 531-535 as follows: These results suggest that the directional bias of DNA-binding sites of insulator-associated DBPs may be involved in insulator function and chromatin regulation through structural interactions among DBPs, other proteins, DNAs, and RNAs. For example, the N-terminal amino acids of CTCF have been shown to interact with RAD21 in chromatin loops. To investigate the principles underlying the architectural functions of insulator-insulator pairing interactions, two insulators, Homie and Nhomie, flanking the Drosophila even skipped locus were analyzed. Pairing interactions between the transgene Homie and the eve locus are directional. The head-to-head pairing between the transgene and endogenous Homie matches the pattern of activation (Fujioka M et al. PLoS Genetics 2016).

      Referee #3

      1. Some of these TFs do not have specific direct binding to DNA (P300, Cohesin). Since the authors are using binding motifs in their analysis workflow, I would remove those from the analysis.

      Response: When a protein complex binds to DNA, one protein of the complex binds to the DNA directly, and the other proteins may not bind to DNA. However, the DNA motif sequence bound by one protein may be registered as the DNA-binding motif of all the proteins in the complex. The molecular structure of the complex of CTCF and cohesin showed that both CTCF and cohesin bind to DNA (Li Y et al. Nature 2020). I think there is a possibility that, as the molecular structure of a protein complex becomes available, the previous understanding of the DNA-binding ability of a protein may change. Therefore, I searched the Pfam database for the 99 insulator-associated DNA-binding proteins identified in this study. I found that 97 are registered as DNA-binding proteins and/or have a known DNA-binding domain, whereas EP300 and SIN3A do not directly bind to DNA, which was also checked by a Google search. I have added the following explanation in line 249 to indicate direct and indirect DNA-binding proteins: Among the 99 insulator-associated DBPs, EP300 and SIN3A do not directly interact with DNA, and thus 97 insulator-associated DBPs directly bind to DNA. I have updated the sentence in line 20 of the Abstract as follows: We discovered 97 directional and minor nondirectional motifs in human fibroblast cells that corresponded to 23 DBPs related to insulator function, CTCF, and/or other types of chromosomal transcriptional regulation reported in previous studies.

      2. I am not sure if I understood correctly, but why do the authors consider enhancers spanning 2Mb (200 bins of 10Kb around eSNPs)? This seems wrong. Enhancers are relatively small regions (100bp to 1Kb) and only a very small subset form super enhancers.

      Response: As the reviewer mentions, I recognize that enhancers are relatively small regions. In the paper, I intended to examine regions further upstream and downstream of promoters, where enhancers are found. Therefore, I have modified the sentence in lines 917 - 919 of the Fig. 2 legend as follows: Enhancer-gene regulatory interaction regions consist of 200 bins of 10 kbp between the -1 Mbp and +1 Mbp regions from the TSS, not including the promoter.
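      The binning described in the revised legend can be sketched in a few lines (a minimal illustration, not the actual pipeline; half-open coordinates and a promoter window of 2 kb upstream to 1 kb downstream of the TSS are assumptions, under which two of the 200 bins are excluded):

```python
def enhancer_bins(tss, promoter=(-2000, 1000), bin_size=10_000, span=1_000_000):
    """Generate 10-kbp bins covering [-1 Mbp, +1 Mbp] around a TSS,
    skipping bins that overlap the assumed promoter window."""
    bins = []
    for start in range(tss - span, tss + span, bin_size):
        end = start + bin_size
        # drop bins overlapping the promoter window around the TSS
        if start < tss + promoter[1] and end > tss + promoter[0]:
            continue
        bins.append((start, end))
    return bins

bins = enhancer_bins(tss=5_000_000)  # 198 bins: 2 of 200 overlap the promoter
```

      A real analysis would additionally clip bins at chromosome boundaries.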

      3. I think the H3K27me3 analysis was very good, but I would have liked to see constitutive heterochromatin as well, so maybe repeat the analysis for H3K9me3.

      Response: Following the reviewer's advice, I have added the ChIP-seq data of H3K9me3 as a track in the UCSC Genome Browser. The distribution of the H3K9me3 signal differed from that of H3K27me3 in some regions. I also found insulator-associated DNA-binding sites close to the edges of H3K9me3 regions and took screenshots of the UCSC Genome Browser around these sites in Supplementary Fig. 3b. I have modified the following sentence on lines 962 - 964 in the legend of Fig. 4: a Distribution of histone modification marks H3K27me3 (green color) and H3K9me3 (turquoise color) and transcript levels (pink color) in upstream and downstream regions of a potential insulator site (light orange color). I have also added the following result on lines 348 - 352: The same analysis was performed using H3K9me3 marks instead of H3K27me3 (Fig. S3b). We found that the distribution of the H3K9me3 signal was different from that of H3K27me3 in some regions, and discovered insulator-associated DNA-binding sites close to the edges of H3K9me3 regions (Fig. S3b).

      4. I was not sure I understood the analysis in Figure 6. The binding site is within 500bp of the interaction site, but Micro-C interactions are at best at 1Kb resolution. They say they chose the centre of the interaction site, but we don't know exactly where the actual interaction is. Also, it is not clear what they measure. Is it the number of binding sites of a specific or multiple DBP insulator proteins at a specific distance from this midpoint that they recover in all chromatin loops? Maybe I am missing something. This analysis was not very clear.

      Response: The resolution of the Micro-C assay is considered to be 100 bp and above, as the human nucleosome core particle contains 145 bp of DNA (193 bp with linker). Moreover, internucleosomal DNA is cleaved by endonuclease into fragments that are multiples of 10 nucleotides (Pospelov VA et al. Nucleic Acids Research 1979). Highly nested focal interactions have been observed (Goel VY et al. Nature Genetics 2023), base-pair resolution was reported using Micro Capture-C (Hua P et al. Nature 2021), and sub-kilobase (20 bp resolution) chromatin topology was reported using an MNase-based chromosome conformation capture (3C) approach (Aljahani A et al. Nature Communications 2022). By contrast, Hi-C data were analyzed at 1 kb resolution (Gu H et al. bioRxiv 2021). If the resolution of Micro-C interactions were at best 1 kb, the binding sites of a DNA-binding protein would not show a peak around the center of the genomic locations of interaction edges. Each panel shows the number of binding sites of a specific DNA-binding protein at a specific distance from the midpoint of all chromatin interaction edges. I have modified and added the following sentences in lines 585-589: High-resolution chromatin interaction data from a Micro-C assay indicated that most of the predicted insulator-associated DBPs showed DNA-binding-site distribution peaks around chromatin interaction sites, suggesting that these DBPs are involved in chromatin interactions and that the chromatin interaction data have a high degree of resolution. Base-pair resolution was reported using Micro Capture-C.
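      The per-panel computation described here — counting the binding sites of one DBP at each signed distance from the midpoints of interaction edges — can be sketched as follows (an illustration with assumed input structures, not the study's code):

```python
from collections import Counter

def binding_site_histogram(edges, sites, bin_size=100, max_dist=500):
    """Count binding sites of one DBP at binned signed distances from the
    midpoints of chromatin interaction edges (same chromosome assumed)."""
    counts = Counter()
    for start, end in edges:      # genomic span of one interaction edge
        mid = (start + end) // 2
        for pos in sites:         # center positions of the DBP's binding sites
            d = pos - mid
            if -max_dist <= d < max_dist:
                counts[(d // bin_size) * bin_size] += 1
    return dict(counts)

# toy example: one edge, two nearby sites, one site beyond max_dist
hist = binding_site_histogram(edges=[(1000, 1200)], sites=[1050, 1110, 1600])
```

      A peak of such counts near distance 0 across many edges is what the response interprets as evidence of high-resolution interaction data.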

      Minor comments:

      1. PIQ does not consider TF concentration. Other methods do that and show that TF concentration improves predictions (e.g., https://www.biorxiv.org/content/10.1101/2023.07.15.549134v2 or https://pubmed.ncbi.nlm.nih.gov/37486787/). The authors should discuss how that would impact their results.

      Response: The directional bias of CTCF binding sites was identified from ChIA-PET interactions between CTCF binding sites. The analysis of the contribution scores of the DNA-binding sites of proteins, considering CTCF binding sites as insulators, showed the same directional bias of CTCF binding sites. In that analysis, to remove false-positive predictions of DNA-binding sites, I used only the binding sites that overlapped a ChIP-seq peak of the same DNA-binding protein. This result suggests that the CTCF DNA-binding sites obtained in the current analysis are of sufficient quality. Therefore, even if the accuracy of DNA-binding-site prediction is improved, although the number of DNA-binding sites may differ, the overall tendency of the directionality of DNA-binding sites will not change, and the results of this study will not change significantly.
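      The false-positive filter mentioned above — retaining only predicted motif sites that overlap a ChIP-seq peak of the same protein — can be sketched as (half-open genomic intervals are an assumption):

```python
def filter_by_chipseq(motif_sites, chip_peaks):
    """Keep predicted motif sites (start, end) that overlap at least one
    ChIP-seq peak (start, end) of the same DBP; intervals are half-open."""
    kept = []
    for ms, me in motif_sites:
        # linear scan is fine for a sketch; two half-open intervals
        # overlap iff each starts before the other ends
        if any(ms < pe and me > ps for ps, pe in chip_peaks):
            kept.append((ms, me))
    return kept

kept = filter_by_chipseq([(100, 120), (500, 520)], [(90, 300)])
```

      For genome-scale data, sorted peaks with binary search or an interval tree would replace the linear scan.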

      As for the first reference in the reviewer's comment, chromatin interaction data from a Micro-C assay do not include all chromatin interactions in a cell or tissue, because it is expensive to cover all interactions; it would therefore be difficult to predict all chromatin interactions by machine learning. As for the second reference, pioneer factors such as FOXA are known to bind closed chromatin regions, whereas transcription factors and DNA-binding proteins involved in chromatin interactions and insulators generally bind open chromatin regions, so a search for DNA-binding motifs in closed chromatin regions is not required.

      2. DeepLIFT is a good approach to interpret complex structures of CNNs, but it is not truly explainable AI. I think the authors should acknowledge this.

      Response: In the DeepLIFT paper, the authors explain that DeepLIFT is a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input (Shrikumar A et al. ICML 2017). DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. DeepLIFT calculates a metric to measure the difference between an input and the reference of the input.

      Truly explainable AI would be able to identify causes and reasons and to make choices and decisions like humans; DeepLIFT does not perform causal inference. I did not use the term "Explainable AI" in the manuscript, but I briefly explain it in the Discussion. I have added the following explanation in lines 615-620: AI (artificial intelligence) is considered a black box, since the reasons and causes of a prediction are difficult to know. To solve this issue, tools and methods have been developed to reveal them; these technologies are called Explainable AI. DeepLIFT is considered a tool for Explainable AI. However, DeepLIFT does not answer the reason and cause for a prediction; it calculates scores representing the contribution of the input data to the prediction.

      Furthermore, to improve the readability of the manuscript, I have included the following explanation in lines 159-165: we computed DeepLIFT scores of the input data (i.e., each binding site from the ChIP-seq data of DNA-binding proteins) in the deep learning analysis of gene expression levels. DeepLIFT compares the importance of each input for predicting gene expression levels to its 'reference or background level' and assigns contribution scores according to the difference. DeepLIFT calculates a metric to measure the difference between an input and the reference of the input.
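      The reference-difference idea can be made concrete with DeepLIFT's linear rule: for a purely linear model, each input's contribution is its weight times its deviation from the reference, and the contributions sum exactly to the output change ('summation to delta'). This is a toy sketch of the principle only, not the manuscript's network:

```python
def deeplift_linear(weights, x, x_ref):
    """DeepLIFT contributions for a linear model y = w . x:
    C_i = w_i * (x_i - x_ref_i), so that sum(C) = y(x) - y(x_ref)."""
    return [w * (xi - ri) for w, xi, ri in zip(weights, x, x_ref)]

w = [2.0, -1.0, 0.5]
x = [1.0, 1.0, 4.0]     # e.g., observed binding / signal values
ref = [0.0, 1.0, 2.0]   # background: no binding / mean signal
contrib = deeplift_linear(w, x, ref)
```

      For deep networks, DeepLIFT backpropagates such reference differences through every layer rather than applying a single linear rule.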

    2. Note: This response was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.



      Reply to the reviewers

      I would like to thank the reviewers for their comments and interest in the manuscript and the study.

      Reviewer #1

      1) I would assume that there are RNA-seq and/or ChIP-seq data out there produced after knockdown of one or more of these DBPs that show directional positioning.

      As the reviewer pointed out, a wet experimental validation of the results of this study would give an opportunity for more biological researchers to have an interest in the study. I plan to promote the wet experimental analysis in collaboration with biological experimental researchers as a next step of this study. The same analysis in this study can be performed in immortalized cells for CRISPR experiment (e.g. Guo Y et al. Cell 2015).

      2) Figure 6 should be expanded to incorporate analysis of DBPs not overlapping CTCF/cohesin in chromatin interaction data that is important and potentially more interesting than the simple DBPs enrichment reported in the present form of the figure.

      Following the reviewer's advice, I performed the same analysis with the DNA-binding sites that do no overlap with the DNA-binding sites of CTCF and cohesin (RAD21 and SMC3) (Fig. 6 and Supplementary Fig. 4). The result showed the same tendency in the distribution of DNA-binding sites. The height of a peak on the graph became lower for some DNA-binding proteins after removing the DNA-binding sites that overlapped with those of CTCF and cohesin. I have added the following sentence on lines 427 and 817: For the insulator-associated DBPs other than CTCF, RAD21, and SMC3, the DNA-binding sites that do not overlap with those of CTCF, RND21, and SMC3 were used to examine their distribution around interaction sites.

      3) Critically, I would like to see use of Micro-C/Hi-C data and ChIP-seq from these factors, where insulation scores around their directionally-bound sites show some sort of an effect like that presumed by the authors - and many such datasets are publicly-available and can be put to good use here.

      As suggested by the reviewer, I have added the insulator scores and boundary sites from the 4D nucleome data portal as tracks in the UCSC genome browser. The insulator scores seem to correspond to some extent to the H3K27me3 histone marks from ChIP-seq (Fig. 4a and Supplementary Fig. 3). The direction of DNA-binding sites on the genome can be shown with different colors (e.g. red and green), but the directionality is their overall tendency, and it may be difficult to notice the directionality from each binding site.

      I found that the CTCF binding sites examined by a wet experiment in the previous study may not always overlap with the boundary sites of chromatin interactions from Micro-C assay (Guo Y et al. Cell 2015). The chromatin interaction data do not include all interactions due to the high sequencing cost of the assay. The number of the boundary sites may be smaller than that of CTCF binding sites acting as insulators and/or some of the CTCF binding sites may not be locate in the boundary sites. It may be difficult for the boundary location algorithm to identify a short boundary location. Due to the limitations of the chromatin interaction data, I planned to search for insulator-associated DNA-binding proteins without using chromatin interaction data in this study. I have added the statistical summary of the analysis in lines 364-387 as follows: Overall, among 20,837 DNA-binding sites of the 97 insulator-associated proteins found at insulator sites identified by H3K27me3 histone modification marks (type 1 insulator sites), 1,315 (6%) overlapped with 264 of 17,126 5kb long boundary sites, and 6,137 (29%) overlapped with 784 of 17,126 25kb long boundary sites in HFF cells. Among 5,205 DNA-binding sites of the 97 insulator-associated DNA-binding proteins found at insulator sites identified by H3K27me3 histone modification marks and transcribed regions (type 2 insulator sites), 383 (7%) overlapped with 74 of 17,126 5-kb long boundary sites, 1,901 (37%) overlapped with 306 of 17,126 25-kb long boundary sites. Although CTCF-binding sites separate active and repressive domains, the limited number of DNA-binding sites of insulator-associated proteins found at type 1 and 2 insulator sites overlapped boundary sites identified by chromatin interaction data. 
Furthermore, by analyzing the regulatory regions of genes, the DNA-binding sites of the 97 insulator-associated DNA-binding proteins were found (1) at the type 1 insulator sites (based on H3K27me3 marks) in the regulatory regions of 3,170 genes, (2) at the type 2 insulator sites (based on H3K27me3 marks and gene expression levels) in the regulatory regions of 1,044 genes, and (3) at insulator sites as boundary sites identified by chromatin interaction data in the regulatory regions of 6,275 genes. The boundary sites showed the highest number of overlaps with the DNA-binding sites. Comparing the insulator sites identified by (1) and (3), 1,212 (38%) genes have both types of insulator sites. Comparing the insulator sites between (2) and (3), 389 (37%) genes have both types of insulator sites. From the comparison of insulator and boundary sites, we found that (1) or (2) types of insulator sites overlapped or were close to boundary sites identified by chromatin interaction data.

      4) The suggested alternative transcripts function, also highlighted in the manuscripts abstract, is only supported by visual inspection of a few cases for several putative DBPs. I believe this is insufficient to support what looks like one of the major claims of the paper when reading the abstract, and a more quantitative and genome-wide analysis must be adopted, although the authors mention it as just an 'observation'.

      According to the reviewer's comment, I performed the genome-wide analysis of alternative transcripts where the DNA-binding sites of insulator-associated proteins are located near splicing sites. The DNA-binding sites of insulator-associated DNA-binding proteins were found within 200 bp centered on splice sites more significantly than the other DNA-binding proteins (Fig. 4e and Table 2). I have added the following sentences on lines 397 - 404: We performed the statistical test to estimate the enrichment of insulator-associated DNA-binding sites compared to the other DNA-binding proteins, and found that the insulator-associated DNA-binding sites were significantly more abundant at splice sites than the DNA-binding sites of the other proteins (Fig 4e and Table 2; Mann‒Whitney U test, p value < 0.05). The comparison between the splice sites of both ends of first and last introns and those of other introns showed the similar statistical significance of enrichment and number of splice sites with the insulator-associated DNA-binding proteins (Table 2 and Table S9).

      5) Figure 1 serves no purpose in my opinion and can be removed, while figures can generally be improved (e.g., the browser screenshots in Figs 4 and 5) for interpretability from readers outside the immediate research field.

      I believe that the Figure 1 would help researchers in other fields who are not familiar with biological phenomena and functions to understand the study. More explanation has been included in the Figures and legends of Figs. 4 and 5 to help readers outside the immediate research field understand the figures.

      6) Similarly, the text is rather convoluted at places and should be re-approached with more clarity for less specialized readers in mind.

      Reviewer #2's comments would be related to this comment. I have introduced a more detailed explanation of the method in the Results section, as shown in the responses to Reviewer #2’s comments.

      Reviewer #2

      1) Introduction, line 95: CTCF appears two times, it seems redundant.

      On lines 91-93, I deleted the latter CTCF from the sentence "We examine the directional bias of DNA-binding sites of CTCF and insulator-associated DBPs, including those of known DBPs such as RAD21 and SMC3".

      2) Introduction, lines 99-103: Please stress better the novelty of the work. What is the main focus? The new identified DPBs or their binding sites? What are the "novel structural and functional roles of DBPs" mentioned?

      Although CTCF is known to be the main insulator protein in vertebrates, we found that 97 DNA-binding proteins including CTCF and cohesin are associated with insulator sites by modifying and developing a machine learning method to search for insulator-associated DNA-binding proteins. Most of the insulator-associated DNA-binding proteins showed the directional bias of DNA-binding motifs, suggesting that the directional bias is associated with the insulator.

      I have added the sentence in lines 96-99 as follows: Furthermore, statistical testing the contribution scores between the directional and non-directional DNA-binding sites of insulator-associated DBPs revealed that the directional sites contributed more significantly to the prediction of gene expression levels than the non-directional sites. I have revised the statement in lines 101-110 as follows: To validate these findings, we demonstrate that the DNA-binding sites of the identified insulator-associated DBPs are located within potential insulator sites, and some of the DNA-binding sites in the insulator site are found without the nearby DNA-binding sites of CTCF and cohesin. Homologous and heterologous insulator-insulator pairing interactions are orientation-dependent, as suggested by the insulator-pairing model based on experimental analysis in flies. Our method and analyses contribute to the identification of insulator- and chromatin-associated DNA-binding sites that influence EPIs and reveal novel functional roles and molecular mechanisms of DBPs associated with transcriptional condensation, phase separation and transcriptional regulation.

      3) Results, line 111: How do the SNPs come into the procedure? From the figures it seems the input is ChIP-seq peaks of DNBPs around the TSS.

      On lines 121-124, to explain the procedure for the SNP of an eQTL, I have added the sentence in the Methods: "If a DNA-binding site was located within a 100-bp region around a single-nucleotide polymorphism (SNP) of an eQTL, we assumed that the DNA-binding proteins regulated the expression of the transcript corresponding to the eQTL".

      4) Again, are those SNPs coming from the different cell lines? Or are they from individuals w.r.t some reference genome? I suggest a general restructuring of this part to let the reader understand more easily. One option could be simplifying the details here or alternatively including all the necessary details.

      On line 119, I have included the explanation of the eQTL dataset of GTEx v8 as follows: " The eQTL data were derived from the GTEx v8 dataset, after quality control, consisting of 838 donors and 17,382 samples from 52 tissues and two cell lines”. On lines 681 and 865, I have added the filename of the eQTL data "(GTEx_Analysis_v8_eQTL.tar)".

      5) Figure 1: panel a and b are misleading. Is the matrix in panel a equivalent to the matrix in panel b? If not please clarify why. Maybe in b it is included the info about the SNPs? And if yes, again, what is then difference with a.

      The reviewer would mention Figure 2, not Figure 1. If so, the matrices in panels a and b in Figure 2 are equivalent. I have shown it in the figure: The same figure in panel a is rotated 90 degrees to the right. The green boxes in the matrix show the regions with the ChIP-seq peak of a DNA-binding protein overlapping with a SNP of an eQTL. I used eQTL data to associate a gene with a ChIP-seq peak that was more than 2 kb upstream and 1 kb downstream of a transcriptional start site of a gene. For each gene, the matrix was produced and the gene expression levels in cells were learned and predicted using the deep learning method. I have added the following sentences to explain the method in lines 133 - 139: Through the training, the tool learned to select the binding sites of DNA-binding proteins from ChIP-seq assays that were suitable for predicting gene expression levels in the cell types. The binding sites of a DNA-binding protein tend to be observed in common across multiple cell and tissue types. Therefore, ChIP-seq data and eQTL data in different cell and tissue types were used as input data for learning, and then the tool selected the data suitable for predicting gene expression levels in the cell types, even if the data were not obtained from the same cell types.

      6) Line 386-388: could the author investigate in more detail this observation? Does it mean that loops driven by other DBPs independent of the known CTCF/Cohesin? Could the author provide examples of chromatin structural data e.g. MicroC?

      As suggested by the reviewer, to help readers understand the observation, I have added Supplementary Fig. S4c to show the distribution of DNA-binding sites of "CTCF, RAD21, and SMC3" and "BACH2, FOS, ATF3, NFE2, and MAFK" around chromatin interaction sites. I have modified the following sentence to indicate the figure on line 493: Although a DNA-binding-site distribution pattern around chromatin interaction sites similar to those of CTCF, RAD21, and SMC3 was observed for DBPs such as BACH2, FOS, ATF3, NFE2, and MAFK, less than 1% of the DNA-binding sites of the latter set of DBPs colocalized with CTCF, RAD21, or SMC3 in a single bin (Fig. S4c).

      In Aljahani A et al. Nature Communications 2022, we find that depletion of cohesin causes a subtle reduction in longer-range enhancer-promoter interactions and that CTCF depletion can cause rewiring of regulatory contacts. Together, our data show that loop extrusion is not essential for enhancer-promoter interactions, but contributes to their robustness and specificity and to precise regulation of gene expression. Goel VY et al. Nature Genetics 2023 mentioned in the abstract: Microcompartments frequently connect enhancers and promoters and though loss of loop extrusion and inhibition of transcription disrupts some microcompartments, most are largely unaffected. These results suggested that chromatin loops can be driven by other DBPs independent of the known CTCF/Cohesin.

      I added the following sentence on lines 561-569: The depletion of cohesin causes a subtle reduction in longer-range enhancer-promoter interactions and that CTCF depletion can cause rewiring of regulatory contacts. Another group reported that enhancer-promoter interactions and transcription are largely maintained upon depletion of CTCF, cohesin, WAPL or YY1. Instead, cohesin depletion decreased transcription factor binding to chromatin. Thus, cohesin may allow transcription factors to find and bind their targets more efficiently. Furthermore, the loop extrusion is not essential for enhancer-promoter interactions, but contributes to their robustness and specificity and to precise regulation of gene expression.

      FOXA1 pioneer factor functions as an initial chromatin-binding and chromatin-remodeling factor and has been reported to form biomolecular condensates (Ji D et al. Molecular Cell 2024). CTCF have also found to form transcriptional condensate and phase separation (Lee R et al. Nucleic acids research 2022). FOS was found to be an insulator-associated DNA-binding protein in this study and is potentially involved in chromatin remodeling, transcription condensation, and phase separation with the other factors such as BACH2, ATF3, NFE2 and MAFK. I have added the following sentence on line 548: FOXA1 pioneer factor functions as an initial chromatin-binding and chromatin-remodeling factor and has been reported to form biomolecular condensates.

      7) In general, how the presented results are related to some models of chromatin architecture, e.g. loop extrusion, in which it is integrated convergent CTCF binding sites?

      Goel VY et al. Nature Genetics 2023 identified highly nested and focal interactions through region capture Micro-C, which resemble fine-scale compartmental interactions and are termed microcompartments. In the section titled "Most microcompartments are robust to loss of loop extrusion," the researchers noted that a small proportion of interactions between CTCF and cohesin-bound sites exhibited significant reductions in strength when cohesin was depleted. In contrast, the majority of microcompartmental interactions remained largely unchanged under cohesin depletion. Our findings indicate that most P-P and E-P interactions, aside from a few CTCF and cohesin-bound enhancers and promoters, are likely facilitated by a compartmentalization mechanism that differs from loop extrusion. We suggest that nested, multiway, and focal microcompartments correspond to small, discrete A-compartments that arise through a compartmentalization process, potentially influenced by factors upstream of RNA Pol II initiation, such as transcription factors, co-factors, or active chromatin states. It follows that if active chromatin regions at microcompartment anchors exhibit selective "stickiness" with one another, they will tend to co-segregate, leading to the development of nested, focal interactions. This microphase separation, driven by preferential interactions among active loci within a block copolymer, may account for the striking interaction patterns we observe.

      The authors of the paper proposed several mechanisms potentially involved in microcompartments. These mechanisms may be involved in looping with insulator function. Another group reported that enhancer-promoter interactions and transcription are largely maintained upon depletion of CTCF, cohesin, WAPL or YY1. Instead, cohesin depletion decreased transcription factor binding to chromatin. Thus, cohesin may allow transcription factors to find and bind their targets more efficiently (Hsieh TS et al. Nature Genetics 2022). Among the identified insulator-associated DNA-binding proteins, Maz and MyoD1 form loops without CTCF (Xiao T et al. Proc Natl Acad Sci USA 2021 ; Ortabozkoyun H et al. Nature genetics 2022 ; Wang R et al. Nature communications 2022). I have added the following sentences on lines 563-567: Another group reported that enhancer-promoter interactions and transcription are largely maintained upon depletion of CTCF, cohesin, WAPL or YY1. Instead, cohesin depletion decreased transcription factor binding to chromatin. Thus, cohesin may allow transcription factors to find and bind their targets more efficiently. I have included the following explanation on lines 574-576: Maz and MyoD1 among the identified insulator-associated DNA-binding proteins form loops without CTCF.

      As for the directionality of CTCF, if chromatin loop anchors have some structural conformation, as shown in the paper entitled "The structural basis for cohesin-CTCF-anchored loops" (Li Y et al. Nature 2020), directional DNA binding would occur similarly to CTCF binding sites. Moreover, cohesin complexes that interact with convergent CTCF sites, that is, the N-terminus of CTCF, might be protected from WAPL, but those that interact with divergent CTCF sites, that is, the C-terminus of CTCF, might not be protected from WAPL, which could release cohesin from chromatin and thus disrupt cohesin-mediated chromatin loops (Davidson IF et al. Nature Reviews Molecular Cell Biology 2021). Regarding loop extrusion, the ‘loop extrusion’ hypothesis is motivated by in vitro observations. The experiment in yeast, in which cohesin variants that are unable to extrude DNA loops but retain the ability to topologically entrap DNA, suggested that in vivo chromatin loops are formed independently of loop extrusion. Instead, transcription promotes loop formation and acts as an extrinsic motor that extends these loops and defines their final positions (Guerin TM et al. EMBO Journal 2024). I have added the following sentences on lines 535-539: Cohesin complexes that interact with convergent CTCF sites, that is, the N-terminus of CTCF, might be protected from WAPL, but those that interact with divergent CTCF sites, that is, the C-terminus of CTCF, might not be protected from WAPL, which could release cohesin from chromatin and thus disrupt cohesin-mediated chromatin loops. I have included the following sentences on lines 569-574: The ‘loop extrusion’ hypothesis is motivated by in vitro observations. The experiment in yeast, in which cohesin variants that are unable to extrude DNA loops but retain the ability to topologically entrap DNA, suggested that in vivo chromatin loops are formed independently of loop extrusion. 
Instead, transcription promotes loop formation and acts as an extrinsic motor that extends these loops and defines their final positions.

      Another model for the regulation of gene expression by insulators is the boundary-pairing (insulator-pairing) model (Bing X et al. Elife 2024) (Ke W et al. Elife 2024) (Fujioka M et al. PLoS Genetics 2016). Molecules bound to insulators physically pair with their partners, either head-to-head or head-to-tail, with different degrees of specificity at the termini of TADs in flies. Although the experiments do not reveal how partners find each other, the mechanism unlikely requires loop extrusion. Homologous and heterologous insulator-insulator pairing interactions are central to the architectural functions of insulators. The manner of insulator-insulator interactions is orientation-dependent. I have summarized the model on lines 551-559: Other types of chromatin regulation are also expected to be related to the structural interactions of molecules. As the boundary-pairing (insulator-pairing) model, molecules bound to insulators physically pair with their partners, either head-to-head or head-to-tail, with different degrees of specificity at the termini of TADs in flies (Fig. 7). Although the experiments do not reveal how partners find each other, the mechanism unlikely requires loop extrusion. Homologous and heterologous insulator-insulator pairing interactions are central to the architectural functions of insulators. The manner of insulator-insulator interactions is orientation-dependent.

      8) Do the authors think that the identified DBPs could work in that way as well?

      The boundary-pairing (insulator-pairing) model would be applied to the insulator-associated DNA-binding proteins other than CTCF and cohesin that are involved in the loop extrusion mechanism (Bing X et al. Elife 2024) (Ke W et al. Elife 2024) (Fujioka M et al. PLoS Genetics 2016).

      Liquid-liquid phase separation was shown to occur through CTCF-mediated chromatin loops and to act as an insulator (Lee, R et al. Nucleic Acids Research 2022). Among the identified insulator-associated DNA-binding proteins, CEBPA has been found to form hubs that colocalize with transcriptional co-activators in a native cell context, which is associated with transcriptional condensate and phase separation (Christou-Kent M et al. Cell Reports 2023). The proposed microcompartment mechanisms are also associated with phase separation. Thus, the same or similar mechanisms are potentially associated with the insulator function of the identified DNA-binding proteins. I have included the following information on line 546: CEBPA in the identified insulator-associated DNA-binding proteins was also reported to be involved in transcriptional condensates and phase separation.

      9) Also, can the authors comment about the mechanisms those newly identified DBPs mediate contacts by active processes or equilibrium processes?

      Snead WT et al. Molecular Cell 2019 mentioned that protein post-transcriptional modifications (PTMs) facilitate the control of molecular valency and strength of protein-protein interactions. O-GlcNAcylation as a PTM inhibits CTCF binding to chromatin (Tang X et al. Nature Communications 2024). I found that the identified insulator-associated DNA-binding proteins tend to form a cluster at potential insulator sites (Supplementary Fig. 2d). These proteins may interact and actively regulate chromatin interactions, transcriptional condensation, and phase separation by PTMs. I have added the following explanation on lines 576-582: Furthermore, protein post-transcriptional modifications (PTMs) facilitate control over the molecular valency and strength of protein-protein interactions. O-GlcNAcylation as a PTM inhibits CTCF binding to chromatin. We found that the identified insulator-associated DNA-binding proteins tend to form a cluster at potential insulator sites (Fig. 4f and Supplementary Fig. 3c). These proteins may interact and actively regulate chromatin interactions, transcriptional condensation, and phase separation through PTMs.

      10) Can the author provide some real examples along with published structural data (e.g. the mentioned micro-C data) to show the link between protein co-presence, directional bias and contact formation?

      A structural molecular model of cohesin-CTCF-anchored loops was published by Li Y et al. (Nature 2020). The structural conformation of CTCF and cohesin in the loops would be the cause of the directional bias of CTCF binding sites, which I mentioned in lines 531-535 as follows: These results suggest that the directional bias of DNA-binding sites of insulator-associated DBPs may be involved in insulator function and chromatin regulation through structural interactions among DBPs, other proteins, DNAs, and RNAs. For example, the N-terminal amino acids of CTCF have been shown to interact with RAD21 in chromatin loops.

      To investigate the principles underlying the architectural functions of insulator-insulator pairing interactions, two insulators, Homie and Nhomie, flanking the Drosophila even skipped locus were analyzed. Pairing interactions between the transgene Homie and the eve locus are directional. The head-to-head pairing between the transgene and endogenous Homie matches the pattern of activation (Fujioka M et al. PLoS Genetics 2016).

      Reviewer #3

      1. Some of these TFs do not have specific direct binding to DNA (P300, Cohesin). Since the authors are using binding motifs in their analysis workflow, I would remove those from the analysis.

      When a protein complex binds to DNA, one protein of the complex binds to the DNA directly, and the other proteins may not bind to DNA. However, the DNA motif sequence bound by the complex may be registered as the DNA-binding motif of all the proteins in the complex. The molecular structure of the complex of CTCF and Cohesin showed that both CTCF and Cohesin bind to DNA (Li Y et al. Nature 2020). There is a possibility that, when the molecular structure of a protein complex becomes available, the previous understanding of the DNA-binding ability of a protein may change. Therefore, I searched the Pfam database for the 99 insulator-associated DNA-binding proteins identified in this study. I found that 97 are registered as DNA-binding proteins and/or have a known DNA-binding domain, whereas EP300 and SIN3A do not bind DNA directly, which was also verified by a web search. I have added the following explanation in line 249 to indicate direct and indirect DNA-binding proteins: Among the 99 insulator-associated DBPs, EP300 and SIN3A do not directly interact with DNA, and thus 97 insulator-associated DBPs directly bind to DNA. I have updated the sentence in line 22 of the Abstract as follows: We discovered 97 directional and minor nondirectional motifs in human fibroblast cells that corresponded to 23 DBPs related to insulator function, CTCF, and/or other types of chromosomal transcriptional regulation reported in previous studies.

      2. I am not sure if I understood correctly, by why do the authors consider enhancers spanning 2Mb (200 bins of 10Kb around eSNPs)? This seems wrong. Enhancers are relatively small regions (100bp to 1Kb) and only a very small subset form super enhancers.

      As the reviewer mentioned, I recognize that enhancers are relatively small regions. In the paper, I intended to examine regions further upstream and downstream of promoters, where enhancers are found. Therefore, I have modified the sentence in lines 917-919 of the Fig. 2 legend as follows: Enhancer-gene regulatory interaction regions consist of 200 bins of 10 kbp spanning the -1 Mbp to +1 Mbp region around the TSS, not including the promoter.

      3. I think the H3K27me3 analysis was very good, but I would have liked to see also constitutive heterochromatin as well, so maybe repeat the analysis for H3K9me3.

      Following the reviewer's advice, I have added the ChIP-seq data of H3K9me3 as a track in the UCSC Genome Browser. The distribution of the H3K9me3 signal differed from that of H3K27me3 in some regions. I also found insulator-associated DNA-binding sites close to the edges of H3K9me3 regions and took screenshots of the UCSC Genome Browser around these sites in Supplementary Fig. 3b. I have modified the following sentence on lines 962-964 in the legend of Fig. 4: a Distribution of histone modification marks H3K27me3 (green color) and H3K9me3 (turquoise color) and transcript levels (pink color) in upstream and downstream regions of a potential insulator site (light orange color). I have also added the following result on lines 348-352: The same analysis was performed using H3K9me3 marks instead of H3K27me3 (Fig. S3b). We found that the distribution of the H3K9me3 signal was different from that of H3K27me3 in some regions, and discovered insulator-associated DNA-binding sites close to the edges of H3K9me3 regions (Fig. S3b).

      4. I was not sure I understood the analysis in Figure 6. The binding site is with 500bp of the interaction site, but micro-C interactions are at best at 1Kb resolution. They say they chose the centre of the interaction site, but we don't know exactly where there is the actual interaction. Also, it is not clear what they measure. Is it the number of binding sites of a specific or multiple DBP insulator proteins at a specific distance from this midpoint that they recover in all chromatin loops? Maybe I am missing something. This analysis was not very clear.

      The resolution of the Micro-C assay is considered to be 100 bp and above, as the human nucleosome core particle contains 145 bp of DNA (193 bp with linker). However, internucleosomal DNA is cleaved by endonuclease into fragments of multiples of 10 nucleotides (Pospelov VA et al. Nucleic Acids Research 1979). Highly nested focal interactions were observed (Goel VY et al. Nature Genetics 2023). Base-pair resolution was reported using Micro Capture-C (Hua P et al. Nature 2021). Sub-kilobase (20 bp resolution) chromatin topology was reported using an MNase-based chromosome conformation capture (3C) approach (Aljahani A et al. Nature Communications 2022). On the other hand, Hi-C data were analyzed at 1 kb resolution (Gu H et al. bioRxiv 2021). If the resolution of Micro-C interactions were at best 1 kb, the binding sites of a DNA-binding protein would not show a peak around the center of the genomic locations of interaction edges. Each panel shows the number of binding sites of a specific DNA-binding protein at a specific distance from the midpoint of all chromatin interaction edges. I have modified and added the following sentences in lines 585-589: High-resolution chromatin interaction data from a Micro-C assay indicated that most of the predicted insulator-associated DBPs showed DNA-binding-site distribution peaks around chromatin interaction sites, suggesting that these DBPs are involved in chromatin interactions and that the chromatin interaction data have a high degree of resolution. Base-pair resolution was reported using Micro Capture-C.
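      The distance profile described above (counting binding sites of one DBP at each offset from the midpoints of chromatin interaction edges) can be sketched as follows; this is a minimal illustration under assumed inputs, using plain lists of genomic positions rather than the actual Micro-C and ChIP-seq files analyzed in the study:

      ```python
      from bisect import bisect_left

      def distance_profile(binding_sites, edge_midpoints, window=500, bin_size=10):
          """Count binding sites in offset bins around interaction-edge midpoints.

          binding_sites  : sorted list of binding-site center positions (bp)
          edge_midpoints : midpoints of chromatin interaction edges (bp)
          Returns {bin_start_offset: count} over [-window, +window).
          """
          n_bins = (2 * window) // bin_size
          counts = {-window + i * bin_size: 0 for i in range(n_bins)}
          for mid in edge_midpoints:
              # Locate the first binding site inside the window via binary search.
              i = bisect_left(binding_sites, mid - window)
              while i < len(binding_sites) and binding_sites[i] < mid + window:
                  offset = binding_sites[i] - mid
                  bin_start = -window + ((offset + window) // bin_size) * bin_size
                  counts[bin_start] += 1
                  i += 1
          return counts

      # Toy data: sites clustered near two midpoints yield a central peak.
      sites = sorted([995, 1000, 1010, 4990, 5005, 9000])
      mids = [1000, 5000]
      profile = distance_profile(sites, mids)
      print(sum(profile.values()))  # 5 sites fall within +/-500 bp of a midpoint
      ```

      A DBP involved in chromatin interactions would show counts concentrated near offset 0, which is the peak shape referred to in the response.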

      1. PIQ does not consider TF concentration. Other methods do, and show that TF concentration improves predictions (e.g., https://www.biorxiv.org/content/10.1101/2023.07.15.549134v2 or https://pubmed.ncbi.nlm.nih.gov/37486787/). The authors should discuss how that would impact their results.

      The directional bias of CTCF binding sites was identified from ChIA-PET interactions of CTCF binding sites. The analysis of the contribution scores of DNA-binding sites of proteins, considering the binding sites of CTCF as an insulator, showed the same tendency in the directional bias of CTCF binding sites. In this analysis, to remove false-positive predictions of DNA-binding sites, I used binding sites that overlapped with a ChIP-seq peak of the corresponding DNA-binding protein. This result suggests that the DNA-binding sites of CTCF obtained in the current analysis are of sufficient quality. Therefore, if the accuracy of the prediction of DNA-binding sites is improved, although the number of DNA-binding sites may differ, the overall tendency of the directionality of DNA-binding sites will not change, and the results of this study will not change significantly.

      As for the first reference in the reviewer's comment, chromatin interaction data from a Micro-C assay do not include all chromatin interactions in a cell or tissue, because it is expensive to cover all interactions. Therefore, it would be difficult to predict all chromatin interactions by machine learning. As for the second reference in the reviewer's comment, pioneer factors such as FOXA are known to bind to closed chromatin regions, but transcription factors and DNA-binding proteins involved in chromatin interactions and insulators generally bind to open chromatin regions. Searching for DNA-binding motifs in closed chromatin regions is therefore not required.

      2. DeepLIFT is a good approach to interpret complex structures of CNN, but is not truly explainable AI. I think the authors should acknowledge this.

      In the DeepLIFT paper, the authors explain that DeepLIFT is a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input (Shrikumar A et al. ICML 2017). DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. DeepLIFT calculates a metric to measure the difference between an input and the reference of the input.

      Truly explainable AI would be able to identify causes and reasons, and to make choices and decisions like humans. DeepLIFT does not perform causal inference. I did not use the term "Explainable AI" in the manuscript, but I briefly explained it in the Discussion. I have added the following explanation in lines 615-620: AI (Artificial Intelligence) is considered a black box, since the reasons and causes behind a prediction are difficult to know. To address this issue, tools and methods have been developed to reveal them; these technologies are called Explainable AI. DeepLIFT is considered a tool for Explainable AI. However, DeepLIFT does not answer the reason and cause for a prediction; it calculates scores representing the contribution of the input data to the prediction.

      Furthermore, to improve the readability of the manuscript, I have included the following explanation in lines 159-165: we computed DeepLIFT scores of the input data (i.e., each binding site in the ChIP-seq data of DNA-binding proteins) in the deep learning analysis of gene expression levels. DeepLIFT compares the importance of each input for predicting gene expression levels to its 'reference or background level' and assigns contribution scores according to the difference. DeepLIFT calculates a metric to measure the difference between an input and its reference.
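      The difference-from-reference idea behind these contribution scores can be illustrated with a toy linear model; this is a minimal sketch with made-up weights and inputs, not the actual DeepLIFT implementation, which backpropagates contributions through every layer of the network:

      ```python
      # Toy illustration of difference-from-reference attribution.
      # For a linear model y = sum(w_i * x_i) + b, DeepLIFT-style scores are
      # exact: C_i = w_i * (x_i - x_i_ref), and they sum to y(x) - y(x_ref)
      # (the "summation-to-delta" property).

      def contributions(weights, x, x_ref):
          """Per-feature contribution scores relative to a reference input."""
          return [w * (xi - ri) for w, xi, ri in zip(weights, x, x_ref)]

      weights = [0.5, -2.0, 1.0]   # hypothetical model weights
      x = [1.0, 0.5, 3.0]          # input (e.g., binding-site features)
      x_ref = [0.0, 0.0, 0.0]      # reference/background input

      scores = contributions(weights, x, x_ref)
      delta_out = sum(w * xi for w, xi in zip(weights, x)) - sum(
          w * ri for w, ri in zip(weights, x_ref))
      assert abs(sum(scores) - delta_out) < 1e-9  # scores explain the output change
      print(scores)  # [0.5, -1.0, 3.0]
      ```

      The scores rank features by how much each one moves the prediction away from its reference value, which is what the per-binding-site scores in the analysis represent.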

  2. resu-bot-bucket.s3.ca-central-1.amazonaws.com
    1. I used Python as well as knowledge about context-free languages to create this project

      Make sure to use strong action statements here (accomplished X through Y, resulting in Z). Instead of saying 'I used', you could rephrase it to something along the lines of 'Developed the front-end of a JSON compiler, implementing lexical, syntactic, and semantic analysis, and ensuring proper error handling and recovery'.

    1. Here are summary documents for a briefing on the Cadre d'action et de coopération pour la transformation écologique (CACTÉ), drawing on the sources and our previous conversation:

      Title: Briefing on the Cadre d'action et de coopération pour la transformation écologique (CACTÉ)

      Introduction (0:00-1:31):

      • The CACTÉ is a framework for action and cooperation for ecological transformation.
      • It was developed by the Direction générale de la création artistique (DGCA) of the Ministry of Culture.
      • Main objective: integrate ecological issues into the artistic creation sector.
      • The key speakers are Frédérique Sarre, head of the Ecological Transformation of Creation mission at the Ministry of Culture, and Maxime Gueudet, ecological transition officer for creation at the Ministry of Culture.

      Context and Overall Strategy (1:43-3:22):

      • The CACTÉ is a measure of the DGCA's action plan for the ecological transition of the creation sector.
      • It is part of the "mieux produire, mieux diffuser" (produce better, distribute better) strategy.
      • It is aligned with international agreements and the ministerial roadmap.
      • It constitutes the ecological component of "mieux produire, mieux diffuser".

      Development of the Action Plan (3:29-4:26):

      • The action plan was developed starting in 2020.
      • It involved a cross-cutting approach with the DGCA's departments and the DRACs.
      • A dedicated ecological transformation mission was created.

      Objectives of the Action Plan and the CACTÉ (4:32-5:22):

      • "Avoid the unmanageable and manage the unavoidable."
      • Contribute to reducing the environmental impacts of the creation sector.
      • Ensure the sector's resilience in the face of ecological crises.
      • Respect the sector's values, in particular freedom of creation.

      Key Principles of the CACTÉ (6:28-7:26):

      • Flexibility: adaptation to local realities and priorities.
      • Structure: mandatory commitment and imposed methodology.
      • Pedagogy: presented in the form of a guide.

      Application of the CACTÉ (7:32-8:22):

      • Mandatory for structures with a multi-year objectives agreement (CPO) of 3 years or more with the DGCA.
      • Recommended for structures funded on a regular basis for more than 3 years.
      • Usable by any other public- or private-sector structure.

      Timeline (8:29-9:06):

      • Pilot phase in 2024 in five regions.
      • Publication of the revised version.
      • General rollout planned for 2025-2026 when CPOs are renewed.

      Assessment of the Pilot Phase (9:11-10:13):

      • Positive reception and adaptability to local realities.
      • Involvement of financial partners.
      • Need for support and concern about a lack of staff availability.

      Responses to Concerns (10:13-11:25):

      • Support from the DRACs and the DGCA.
      • Webinars and support tools.

      CACTÉ Documents (11:31-12:21):

      • A document presenting the scheme.
      • A thematic action guide with action sheets.
      • A resources section (regulations, websites, guides, reports).

      Detailed Operation of the CACTÉ (12:26-13:53):

      • Two types of commitments:
        • A mandatory methodological commitment.
        • Optional thematic commitments.
      • Mandatory methodological commitment:
        • Define a strategy based on objective data (assessment of environmental impacts).
        • Training (at least one day for the whole team).
        • Internal and external cooperation.

      Importance of Training (13:59-14:23):

      • Training is essential for advancing ecological transition issues.
      • It must cover the issues of the ecological transition and their application to the creation sector.

      Need for Cooperation (14:23-15:18):

      • Internal and external cooperation is crucial for an effective eco-responsible approach.
      • Involve the whole team in developing and implementing the action plan.
      • Partner with local actors, including those outside the cultural sector.

      Thematic Commitments (15:25-17:06):

      • Ten thematic commitments to choose from.
      • Structures choose which commitments they wish to implement.
      • Commitments are chosen according to estimated impacts, the structure's circumstances, its local context, and in consultation with financial partners.
      • The minimum number of commitments varies according to the type and size of the structure.
      • Examples of thematic commitments: mobility, reducing utility consumption, food, eco-design of works, digital technology, communication, waste management, buildings and sites, biodiversity.

      Action Levers (17:31-19:57):

      • For each commitment, action levers are proposed to ensure the commitment is fulfilled in full.

      Action Sheets (20:02-20:46):

      • The action sheets are thematic guides and self-assessment aids.
      • They make it possible to see at a glance which actions have been implemented.
      • They detail levers and examples of actions for each commitment.

      Evaluation and Certification (20:46-21:47):

      • The evaluation leads to a certification (levels 1, 2, 3, 3+) corresponding to the number of thematic commitments implemented.
      • Level 3+ includes integrating ecological questions into programming or EAC activities.

      No Mandatory Quantitative Indicators (21:47-23:53):

      • The evaluation is not based on mandatory quantitative indicators but on an obligation of means.
      • This is due to the difficulty of defining relevant indicators and the limited capacity of structures to report them.

      Evaluation Process (23:53-26:22):

      • The evaluation involves local authorities and allows regional monitoring.
      • Self-assessment by the structure.
      • Meeting of a regional cooperative evaluation group.

      Regional Cooperative Evaluation Group (26:22-27:31):

      • The group gives an opinion on the certification level.
      • It recommends actions to the structures and to public authorities.

      Monitoring Bodies and Award of Certification (27:31-28:27):

      • The discussion returns to the monitoring bodies (monitoring committee, board of directors).
      • The DRAC awards the certification.

      Frequently Asked Questions and Answers:

      • Role of local authorities: application by the DRACs in liaison with local authorities, dialogue within the monitoring committees.
      • Examples of rollout: diversity of the structures concerned (national centres, FRACs, drama venues, independent companies).
      • Timeline for festivals: contact the DRAC; Coprog as a possible tool; no "carrot" or "stick", but a soft eco-conditionality.
      • Possibility for a non-contracted company to be certified: yes, by submitting a request to the DRAC.
      • Support: favour collective support, drawing on eco-advisers, existing schemes (Avdas), or the DLAs.
      • Stacking of schemes: the CACTÉ must adapt to local policy priorities and to approaches already in place.

      These documents should provide a solid basis for a complete briefing on the CACTÉ.

    2. Here is a timestamped summary of the key ideas from the webinar on the Cadre d'action et de coopération pour la transformation écologique (CACTÉ):

      • 0:00-1:05: Introduction by Antoine Dunan of the FNADAC, thanking the speakers and stressing the importance of the CACTÉ for the associations.
      • 1:12-1:31: Introduction of Frédérique Sarre and Maxime Gueudet of the Ministry of Culture, expressing their enthusiasm for sharing the results of their work on the CACTÉ.
      • 1:43-3:22: Context and overall strategy of the CACTÉ, positioning it as a measure of the DGCA's action plan for the ecological transition of the creation sector, integrated into the "mieux produire, mieux diffuser" strategy and aligned with international agreements and the ministerial roadmap.
      • 3:29-4:26: Development of the DGCA's action plan since 2020, highlighting the cross-cutting approach, training in ecological issues, and the creation of a dedicated ecological transformation mission.
      • 4:32-5:22: Objectives of the action plan and the CACTÉ: avoid the unmanageable and manage the unavoidable, and ensure the sector's resilience in the face of ecological crises, while respecting the sector's values, in particular freedom of creation.
      • 5:28-6:21: The measures of the action plan are based on a preliminary survey, hearings, and a mapping of existing initiatives, with an iterative, evolving approach.
      • 6:28-7:26: The CACTÉ is built on three pillars: flexibility (adaptation to local realities and priorities), structure (mandatory commitment and imposed methodology), and pedagogy (presented in the form of a guide).
      • 7:32-8:22: Application of the CACTÉ: mandatory for structures with a contractual agreement of 3 years or more with the DGCA, recommended for the others, and usable by any structure.
      • 8:29-9:06: Timeline: pilot phase in 2024 in five regions, revision and publication of the new version, general rollout planned for 2025-2026 when CPOs are renewed.
      • 9:11-10:13: Assessment of the pilot phase: positive reception, adaptability to local realities, involvement of financial partners, but a need for support and concern about a lack of staff availability.
      • 10:13-11:25: Responses to the concerns: support from the DRACs and the DGCA, webinars, etc.
      • 11:31-12:21: Presentation of the three CACTÉ documents: a document presenting the scheme, a thematic action guide with action sheets, and a resources section.
      • 12:26-13:53: Details of how the CACTÉ works: two types of commitments, a mandatory methodological commitment (define a strategy based on objective data, training, internal and external cooperation), and thematic commitments.
      • 13:59-14:23: Importance of training for advancing ecological transition issues, with a minimum of one day for the whole team.
      • 14:23-15:18: Need for internal and external cooperation for an effective eco-responsible approach.
      • 15:25-17:06: Presentation of the 10 optional thematic commitments, to be selected according to estimated impacts, the structure's circumstances, its project, its local context, and in dialogue with financial partners.
      • 17:06-17:31: The minimum number of commitments varies according to the type and size of the structure.
      • 17:31-19:57: For each commitment, action levers are proposed to ensure the commitment is fulfilled in full.
      • 20:02-20:46: The action sheets are thematic guides and self-assessment aids, making it possible to see at a glance which actions have been implemented.
      • 20:46-21:47: The evaluation leads to a certification (levels 1, 2, 3, 3+) corresponding to the number of thematic commitments implemented, with level 3+ including the introduction of ecological questions into programming or EAC activities.
      • 21:47-23:53: The evaluation is not based on mandatory quantitative indicators but on an obligation of means, owing to the difficulty of defining relevant indicators and the limited capacity of structures to report them.
      • 23:53-26:22: The evaluation involves local authorities and allows regional monitoring, with a self-assessment by the structure and a meeting of a regional cooperative evaluation group.
      • 26:22-27:31: The regional cooperative evaluation group gives an opinion on the certification level and recommends actions to the structures and to public authorities.
      • 27:31-28:27: The discussion then returns to the monitoring bodies (monitoring committee, board of directors) to confirm the certification level and consider solutions to any problems encountered, followed by the award of the certification by the DRAC.
      • 28:46-32:30: Questions on the role of local authorities and the regional rollout of the CACTÉ (PACA); answers on its application by the DRACs in liaison with local authorities, and on the dialogue within monitoring committees and boards of directors.
      • 32:30-34:55: Questions on experience reports from the pilot phase; answers on observing implementation and getting the process started, and on the main commitments chosen (biodiversity chosen less often; mobility of professionals and of works more frequent).
      • 34:55-38:41: Questions on working with carbon frameworks (FRACs, art centres); answers on the link between these frameworks and the CACTÉ, and on the creation of a tool to easily estimate structures' emission profiles.
      • 38:41-40:03: Questions on examples of CACTÉ rollout in labelled structures and festivals; answers on the diversity of the structures concerned and the possibility of obtaining details by email.
      • 40:03-45:35: Questions on the CACTÉ implementation timeline for a circus festival; answers on the absence of any obligation in that case, but an invitation to contact the DRAC, on the integration of Coprog as a possible tool, and on the absence of a carrot or a stick in favour of a soft eco-conditionality, with the evaluation forming part of the criteria for continued funding.
      • 45:35-47:31: Questions on setting up the regional cooperative evaluation group and other venues for cooperation; answers on an actual rollout in 2027, a possible prefiguration in 2026, and on discussions within the CLTCs and COREPS.
      • 47:31-50:19: Questions on whether a non-contracted company can be certified (yes, with a request submitted to the DRAC), and on communication aimed at all the sector's structures (a planned webinar and presentations by the DRACs).
      • 50:19-54:27: Questions on the stacking of layers of local-authority involvement; answers on the adaptability of the CACTÉ and the importance of choosing commitments that match local policy priorities, and on progressive synchronization with DRAC advisers.
      • 54:27-59:10: Questions from Olivier, who represents a music company. He asks whether the DRAC contact adviser for the CACTÉ would be the music adviser. The answer is yes.
      • 59:10-1:03:27: There will be no mandatory training imposed by the Ministry, but the AVDAS has integrated the CACTÉ into its training programme.
      • 1:03:27-1:05:53: Question on the component concerning the integration of transition issues into programming and EAC projects; the answer is that the principle of the CACTÉ is above all to require structures to commit to an eco-responsible approach to their activities.
      • 1:05:53-1:07:20: A support organization asks whether it can be accredited to deliver the mandatory training under the CACTÉ; the answer is that, for the time being, every organization is in effect accredited.
      • 1:07:20-1:08:46: A question about the most and least popular thematic commitments; the answer is that the biodiversity commitment is chosen rather rarely, but caution remains in order.
      • 1:09:05-1:12:09: Antoine Dunan ran the pilot in PACA and finds that there are mountains to move to address certain issues; he asks how to finance and set up new infrastructure. The FNADAC has set up a working group called crisealide.
      • 1:12:16-1:16:41: Are there eco-advisers? The answer is that ownership of the subject by everyone is essential, and they can support a certain number of structures.
      • 1:16:54-1:18:12: The speakers thank the participants and remain at their disposal.
    1. Here is a timestamped summary of the key ideas from Gérald Bronner's lecture:

      • 0:09-1:36: Introduction and thanks. Gérald Bronner introduces himself and thanks the Sorbonne and the Fondation Descartes for their support, noting that the event is free thanks to their contribution. He highlights the importance of the Fondation Descartes, the only organization in France focused on the quality of information and on disinformation.

      • 1:41-3:30: Presentation of the seminar and a moral contract. Bronner explains that the seminar runs in four parts, encouraging participants to reproduce it for the people around them so as to spread critical thinking. He announces that the course materials will be available online.

      • 3:36-6:01: Why a seminar on critical thinking? Bronner justifies the need for this seminar in the face of disinformation and fake news, a problem recognized by many international institutions.

      • 6:08-8:05: The World Economic Forum and risks to humanity. He cites the World Economic Forum, which identifies misinformation and disinformation as the main short-term risks to humanity, because they prevent agreement on the reality of problems. The risk is the fracturing of our common epistemic base and the polarization of societies.

      • 8:10-9:36: The democracy of the gullible and metacognitive conditions. Bronner refers to his earlier work, notably "La démocratie des crédules", and stresses that we now live in that reality. He explains that the next sessions will cover the metacognitive conditions for developing critical thinking.

      • 9:42-10:31: Critical-thinking exercises and our common humanity. The final sessions will be devoted to critical-thinking exercises on fun, neutral problems, encouraging us to remember our common humanity in the face of diverging interpretations.

      • 10:31-12:29: Availability of information and competitive pressure. Bronner addresses the explosion in the availability of information brought by the Internet, which has created competitive pressure on the marketplace of ideas. Contrary to initial hopes, rationality does not prevail naturally.

      • 12:29-14:05: The information paradox and confirmation bias. He explains the information paradox: the more information there is, the easier it is to find information that confirms our prior beliefs, reinforcing confirmation bias.

      • 14:13-16:11: Everyone can take part, and super-spreaders. Anyone can intervene in the information market, but some individuals (super-spreaders) speak louder and more often than others. 1% of accounts produce 33% of the information.

      • 16:11-17:44: Radicality and anti-vaxxers. The most active are often carriers of a form of radicality. The example of anti-vaxxers shows how an overactive minority can seed its arguments throughout the public sphere.

      • 17:44-19:38: Proselytism and apathy. Anti-vaxxers proselytize more than pro-vaxxers. Bronner stresses that "evil needs nothing more to prevail than the apathy of good people", calling on us not to stand idly by.

      • 19:38-21:33: Speed of diffusion and the primacy effect. Conspiracy theories spread very quickly, exploiting "data voids". The primacy effect means that the first piece of information encountered leaves a lasting impression, even when later denied.

      • 21:33-23:01: Strategies of conspiracy theorists and exploitation of data voids. The most radical actors strategically exploit data voids to spread their alternative theories.

      • 23:01-25:56: Argumentative power and the argumentative millefeuille. The argumentative power of these lines of reasoning should not be underestimated; they form an impressive "argumentative millefeuille" thanks to the swarm work enabled by digital technology. People who believe these theories should not be taken for fools.

      • 25:56-28:18: Social variables and intellectual intimidation. People who feel downgraded are more susceptible to conspiracy theories. Intellectual intimidation and the asymmetric visibility of viewpoints are consequences of this phenomenon.

      • 28:18-31:01: Brandolini's law. Bronner explains Brandolini's law: it takes less time to spread nonsense than to debunk it. He gives the example of a conspiracy theory around the film Captain America and Corona beer.

      • 31:01-33:51: Possible levers and moderation. An article in Science shows that false stories spread faster and deeper than true ones. Bronner mentions moderation by social networks, but points out that their economic interests do not always align with the interests of democracy.

      • 33:51-35:36: The brain as the best regulator, and mental illusions. The best regulators are our brains. We must learn to tame certain cognitive mechanisms. Bronner speaks of mental illusions, which he prefers to call mental temptations that can be resisted.

      • 35:36-38:12: Lazy thinking and cognitive miserliness. "Lazy thinking", or cognitive miserliness, is the strongest predictor of the spread of false information. We must learn to recognize the situations in which we are likely to be wrong.

      • 38:12-40:22: Cognitive biases and the example of COVID-19 and 5G. The literature has catalogued around 150 cognitive biases. Bronner gives the example of the spurious correlation between COVID-19 and 5G antennas.

      • 40:22-43:02: Correlation is not causation. He recalls that correlation is not causation, citing the example of storks and babies.

      • 43:02-47:21: Metaphysical questions and the Fermi paradox. Bronner expresses his concern about the future of our civilization. He mentions the Fermi paradox (where are the extraterrestrials?) and pessimistic interpretations of how long civilizations last.

      • 47:21-50:11 : Croyance aux extraterrestres et soucoupes volantes. Il aborde le regain de la croyance aux extraterrestres et l'histoire de Kenneth Arnold, à l'origine du terme "soucoupe volante".

      • 50:11-52:51 : Modes et observations. Il explique comment les modes et les prédictions peuvent susciter des observations et des mésinterprétations.

      • 52:51-54:40 : Arguments rationnels et platistes. Bronner souligne l'importance de présenter des arguments rationnels et cite l'exemple des platistes pour illustrer l'ambivalence de la croyance.
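
      The correlation-vs-causation point above (40:22-43:02, the storks-and-babies example) can be sketched numerically; the two series below are invented purely for illustration:

      ```python
      import statistics

      # Two series that both trend upward over time but are causally
      # unrelated (hypothetical yearly counts, for illustration only):
      storks = [100, 110, 125, 140, 160, 175]    # stork sightings
      births = [900, 940, 990, 1050, 1120, 1180] # human births

      def pearson(xs, ys):
          """Plain Pearson correlation coefficient, no external libraries."""
          mx, my = statistics.mean(xs), statistics.mean(ys)
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = sum((x - mx) ** 2 for x in xs) ** 0.5
          sy = sum((y - my) ** 2 for y in ys) ** 0.5
          return cov / (sx * sy)

      print(round(pearson(storks, births), 3))  # close to 1.0
      ```

      A near-perfect correlation here reflects only the shared upward trend (the confounder is time), not any causal link, which is exactly Bronner's point.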

    1. Here is a timestamped summary of the transcript concerning the impact of salt on health, based on the information from the sources provided:

      • 0:08-0:20: Salt is presented as essential and indispensable to quality cooking.
      • 0:27-0:34: Salt is potentially dangerous to health, even deadly, when overconsumed.
      • 0:41-0:47: It is suggested that these fears may be unfounded.
      • 0:47-0:53: Sodium is indispensable to the normal functioning of the human body.
      • 1:52-2:06: Importance of salt in cooking, notably for seasoning fish.
      • 2:19-2:25: Importance of balanced seasoning, neither too subtle nor too strong.
      • 2:45-2:52: The need to salt the water generously when blanching green vegetables.
      • 3:47-3:55: An estimated several hundred grams of salt are used for 50 guests, which can seem striking compared with nutritional guidelines.
      • 4:00-4:25: Salt has long been demonized, notably in books aimed at the general public, and is the target of awareness campaigns about excessive consumption.
      • 5:01-5:18: Table salt is sodium chloride, essential for fluid retention and for the functioning of the heart, nerves, muscles, and blood pressure.
      • 5:23-5:46: Salt is an electrolyte carrying an electric charge, enabling the heart to beat and powering the brain, muscles, and nervous system.
      • 5:46-6:05: Salt maintains fluid balance and keeps the body hydrated.
      • 6:13-6:19: A 70 kg person carries roughly 62 teaspoons of salt in the body.
      • 6:19-6:33: Salt is filtered and eliminated by the kidneys, preventing the buildup of toxins.
      • 6:33-6:38: Salt is indispensable to our organism.
      • 6:52-7:16: Salt fell from grace in the United States in 1977, leading to a recommendation of 3 g of salt per day.
      • 7:16-8:15: That recommendation rests on questionable studies, notably one in which rats ingested enormous quantities of salt and another comparing the salt intake and blood pressure of isolated populations.
      • 8:15-8:35: Salt became the great enemy, accused of causing fluid retention, high blood pressure, and heart attacks.
      • 8:35-9:00: These conclusions rest on uncertain hypotheses and ignore the fact that salt intake is not the only cause of hypertension.
      • 9:05-9:13: Risk factors for hypertension include tobacco, alcohol, stress, obesity, and salt.
      • 9:13-9:32: Salt is the easiest element of our diet to change, which makes it the ideal culprit for hypertension.
      • 9:45-10:02: Salt intake raises blood pressure only in some people.
      • 10:02-10:10: Unlike tobacco and alcohol, salt is indispensable to our organism.
      • 10:10-10:16: A theory holds that we crave salt because we descend from marine creatures.
      • 10:35-10:56: Comparison of the composition of seawater and our internal environment, explaining our need for salt.
      • 11:08-11:13: The kidney is the body's most important organ for managing salt.
      • 11:18-11:40: The kidneys filter salt and reinject it into the body, maintaining a balanced internal environment.
      • 11:40-11:46: We need salt to live.
      • 11:53-12:19: High salt intake could be harmful by raising blood pressure in salt-sensitive people (about a quarter of the population).
      • 13:31-13:39: The required dose of salt is estimated at 1.5 g per day, yet most Britons consume about 20 times that.
      • 13:53-14:17: Finland managed to reduce its salt consumption through awareness campaigns.
      • 14:50-15:02: The American Heart Association recommends no more than 1500 mg of sodium per day.
      • 15:14-15:20: The link between sodium intake and cardiovascular disease was never questioned and came to be accepted as truth.
      • 15:56-16:02: According to some studies, nothing indicates that salt restriction is good for the body.
      • 16:12-16:25: Patients with heart failure should not be systematically required to reduce their sodium intake without concrete evidence.
      • 17:11-17:17: The difficulties faced by certain patients, such as the elderly, people on limited incomes, or members of minorities, must be taken into account.
      • 17:36-17:43: Tangible evidence that the change will improve their condition is needed before asking a patient with cardiovascular disease to change their diet.
      • 19:01-19:07: Human beings are hardwired to like salt.
      • 20:54-21:07: Most of the salt we eat comes from industrially processed foods.
      • 21:40-21:47: Some nutritionists advise their clients to consume more of it.
      • 21:53-22:10: The example of Miguel, whose symptoms indicated a salt deficiency and who was given electrolyte-rich drinks.
      • 22:22-22:34: Salt deficiency can be fatal, as in the case of a marathon runner who drank too much water and lost too much salt (hyponatremia).
      • 23:26-23:44: Several organizations advocate a very low sodium intake, but no convincing evidence has been provided to justify that figure.
      • 23:44-24:11: A growing number of studies show that controlling sodium intake brings no benefit and might even increase risks, except in China, where consumption is very high.
      • 24:49-25:08: Low sodium intake is associated with elevated levels of certain hormones, with harmful effects on the vascular system.
      • 25:13-25:26: Every essential nutrient must be consumed within a certain range: above it the dose is toxic, below it we develop deficiencies.
      • 26:52-27:06: Salt has many uses: beyond seasoning, it controls yeast activity in bread and tempers the intensity of sugar in desserts.
      • 30:28-30:44: In the 1920s, iodine was added to salt to offset a deficiency among American consumers.
      • 32:09-32:16: Chemically, sea salt and mined salt contain the same amount of sodium for a given weight.
      • 32:58-33:11: It is because of this diversity that studies on salt contradict one another so much.
      • 33:11-33:40: Studies on human nutrition are very complex to conduct, because it is difficult to control what the participants eat.
      • 33:40-34:32: Presentation of the Mars 500 program, an isolation experiment that made it possible to study the effects of salt under controlled conditions.
      • 34:32-35:28: A surprising discovery: the amount of sodium stored or eliminated by the body does not depend on the subject's diet.
      • 35:47-36:00: The missing sodium had dispersed throughout the body.
      • 36:00-36:10: If we don't know where this salt goes, one may wonder whether reducing our intake is really necessary.
      • 36:28-36:34: Soy sauce was created to stretch salt, which was scarce at the time.
      • 37:44-37:50: Umami is a Japanese word for the fifth basic taste, after sour, bitter, sweet, and salty.
      • 38:30-38:38: All over the world, human beings are crazy about salt, because this craving stems from a genuine need.
      • 38:38-39:09: Bars often serve salty snacks because salt makes you want to drink.
      • 39:15-39:26: Salt causes short-term thirst, but during the Mars 500 program, the cosmonauts who ate more salt also had a bigger appetite.
      • 39:26-39:58: According to a study of cosmonauts, increasing salt intake would reduce thirst and increase hunger.
      • 39:58-40:09: Water reserves are stored in salt deposits.
      • 40:09-40:17: More digging is needed to learn about sodium stores.
      • 40:24-40:32: Use of MRI to detect sodium in the body.
      • 40:37-40:51: Salt appears white on the screen; the whiter the area, the higher the concentration.
      • 41:10-41:17: Our heart sends 4.5 g of salt per minute through our body.
      • 41:23-41:42: The older the patient, the greater the amount of sodium stored in the muscles.
      • 41:42-42:07: The older we get, the more salt our body stores, which could be linked to age-related health problems.
      • 42:18-42:25: Stored salt may help the body retain water, like a moisturizing cream.
      • 42:30-42:44: Hydration is not just about drinking water; the water must stay in the body, a step managed by our metabolism.
      • 42:50-42:56: These mechanisms go beyond kidney function.
      • 43:20-43:32: With age, the skin becomes more permeable and loses elasticity.
      • 43:32-43:46: Researchers are investigating whether patients whose skin lets more water through store more sodium in the skin.
      • 44:17-44:23: It is too early to assert that salt storage plays a role in our organism, but these observations raise questions.
      • 44:23-44:35: If an undetermined quantity of ingested salt is stored in our body, do we need to control our intake to the milligram?
      • 45:44-45:55: Salt is at once an essential preservative and a highly corrosive substance, present in water yet capable of causing dehydration.
      • 45:55-46:02: It is essential to animal and human life, yet has been branded a deadly food.
      • 46:02-46:09: It is the condiment of every contradiction.
      • 46:09-46:40: We like simple models, but our biology is a bit more complicated than that.
      • 47:53-47:58: Let us try to spare our kidneys and help them do their work under normal conditions.
      • 48:28-48:40: People need to be aware of what they eat, and health agencies should recommend a balanced diet rather than focusing on salt.
      • 48:55-49:01: A balanced diet based on fruits and vegetables is beneficial.
      • 49:01-49:08: Moderation is a good rule of conduct.
      • 49:08-49:14: We do not yet understand all the effects salt has on our body.
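
      One recurring source of confusion in the summary above is that some figures are given as salt (NaCl) and others as sodium; the two scales differ by a factor of about 2.5, since NaCl is only ~39% sodium by mass. A minimal conversion sketch (the function name is ours):

      ```python
      # Molar masses (g/mol) of sodium and chlorine:
      NA_MOLAR = 22.99
      CL_MOLAR = 35.45
      SODIUM_FRACTION = NA_MOLAR / (NA_MOLAR + CL_MOLAR)  # ~0.393

      def sodium_to_salt_g(sodium_mg):
          """Convert a sodium intake in mg to the equivalent mass of table salt (NaCl) in g."""
          return sodium_mg / 1000 / SODIUM_FRACTION

      # The AHA figure quoted at 14:50-15:02 (1500 mg of sodium per day)
      # corresponds to roughly 3.8 g of table salt:
      print(round(sodium_to_salt_g(1500), 1))  # 3.8
      ```

      So the 1977 "3 g of salt per day" recommendation (6:52-7:16) and the AHA's "1500 mg of sodium per day" are expressed on different scales but are of comparable magnitude once converted.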
    1. Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      The authors have provided a mechanism by which the presence of truncated p53 can inactivate the function of the full-length p53 protein. The authors propose that this happens through sequestration of full-length p53 by the truncated p53.

      The experiments performed in the study are well described.

      My area of expertise is molecular biology/gene expression, and I have tried to provide suggestions within it. The study has been done mainly with an overexpression system, and I have included a few comments that I think could help clarify the effect of truncated p53 on the endogenous wild-type full-length protein. Performing experiments along these lines would, in this reviewer's view, add value to the observations.

      Major comments:

      1. What happens to endogenous wild-type full-length p53 in the context of the mutant/truncated isoforms is not clear. Using a p53 antibody that can detect endogenous wild-type p53, can the authors check whether the endogenous full-length p53 protein is aggregated as well? It is hard to tell whether aggregation of full-length p53 happens only in the overexpression scenario, where much more of both proteins is expressed. Under normal physiological conditions p53 expression is usually low and tightly controlled, and it is induced under altered cellular conditions such as DNA damage. So it is important to understand the physiological relevance of such aggregation, which could be assessed if the authors investigated the effect on endogenous full-length p53 following overexpression of the mutant isoforms.

      Response: Thank you very much for your insightful comments. 1) To address "what happens to endogenous wild-type full-length p53 in the context of mutant/truncated isoforms," we employed the human A549 cell line, which expresses endogenous wild-type p53, under DNA damage conditions such as etoposide treatment1. We chose the A549 cell line since, like H1299, it is a lung cancer cell line (www.atcc.org). For comparison, we also transfected the cells with 2 μg of V5-tagged plasmids encoding FLp53 and its isoforms Δ133p53 and Δ160p53. As shown in Figure R1A, lanes 1 and 2, endogenous p53 expression remained undetectable in A549 cells despite etoposide treatment, which limits our ability to assess the effects of the isoforms on endogenous wild-type FLp53. We could, however, detect the V5-tagged FLp53 expressed from the plasmid using an anti-V5 (rabbit) as well as an anti-DO-1 (mouse) antibody (Figure R1). The latter detects both endogenous wild-type p53 and V5-tagged FLp53, since the antibody epitope lies within the N-terminus (aa 20-25).
      This result supports the reviewer's comment regarding the low level of expression of endogenous p53, which was insufficient for detection in our experiments. (Figure R1 is included in the file "RC-2024-02608 Figures of Response to Reviewer".)

      In summary, in line with the reviewer's comment that 'under normal physiological conditions p53 expression is usually low,' we could not detect p53 with an anti-DO-1 antibody. Thus, we proceeded with V5/FLAG-tagged p53 for detection of the effects of the isoforms on p53 stability and function. We also found that protein expression in H1299 cells was more easily detectable than in A549 cells (Compare Figures R1A and B). Thus, we decided to continue with the H1299 cells (p53-null), which would serve as a more suitable model system for this study.

      2) We agree with the reviewer that "it is hard to differentiate if aggregation of full-length p53 happens only in an overexpression scenario". However, it is plausible that such aggregation of FLp53 occurs under conditions in which p53 and its isoforms are over-expressed in the cell. Although the exact physiological context is not known and is beyond the scope of the current work, our results indicate that at higher expression levels, the p53 isoforms drive aggregation of FLp53. Given the challenges of detecting endogenous FLp53, we had to rely on the results obtained with plasmid-mediated expression of p53 and its isoforms in p53-null cells.

      2. Can the presence of mutant p53 isoforms cause functional impairment of wild-type full-length endogenous p53? That could be tested using a ChIP assay similar to the one the authors performed, but instead of an antibody against the tagged protein, the authors could check endogenous p53 enrichment at target gene promoters such as p21 following overexpression of the mutant isoforms. Introducing a condition such as DNA damage, under which endogenous p53 is induced and more prone to bind p53 targets such as p21, might help in such an experiment.

      Response: Thank you very much for your valuable comments and suggestions. To investigate the potential functional impairment of endogenous wild-type p53 by p53 isoforms, we initially utilized A549 cells (p53 wild-type), aiming to monitor endogenous wild-type p53 expression following DNA damage. However, as mentioned and demonstrated in Figure R1, endogenous p53 expression was too low to be detected under these conditions, making the ChIP assay for analyzing endogenous p53 activity unfeasible. Thus, we decided to utilize plasmid-based expression of FLp53 and focus on the potential functional impairment induced by the isoforms.

      3. On similar lines, authors described:

      "To test this hypothesis, we escalated the ratio of FLp53 to isoforms to 1:10. As expected, the activity of all four promoters decreased significantly at this ratio (Figure 4A-D). Notably, Δ160p53 showed a more potent inhibitory effect than Δ133p53 at the 1:5 ratio on all promoters except for the p21 promoter, where their impacts were similar (Figure 4E-H). However, at the 1:10 ratio, Δ133p53 and Δ160p53 had similar effects on all transactivation except for the MDM2 promoter (Figure 4E-H)."

      Again, in this assay the authors used ratios of 1:5 to 1:10 full-length vs. mutant. How do the authors justify this result in the (more relevant) context where one allele is wild-type (functional p53) and the other allele is mutated (truncated, able to induce aggregation)? In that case one would expect a 1:1 ratio of full-length to mutant protein, unless some other regulation induces expression of the mutant isoforms above that of the wild-type full-length protein. Discussing these points might lend more physiological relevance to the observed data.

      Response: Thank you for raising this point regarding the physiological relevance of the ratios used in our study. 1) In the revised manuscript (lines 193-195), we added the following along these lines: "The elevated Δ133p53 protein modulates p53 target genes such as miR34a and p21, facilitating cancer development2, 3. To mimic conditions where isoforms are upregulated relative to FLp53, we increased the ratios to 1:5 and 1:10." This approach aims to simulate scenarios where the isoforms accumulate at higher levels than FLp53, which may be relevant in specific contexts, as also elaborated above.

      2) Regarding protein expression where one allele is wild-type and the other encodes an isoform, this assumption is not valid in most contexts. First, human cells have two copies of the TP53 gene (one from each parent). Second, the TP53 gene has two distinct promoters: the proximal promoter (P1) primarily drives FLp53 and ∆40p53, whereas the second promoter (P2) drives ∆133p53 and ∆160p534, 5. Additionally, ∆133TP53 is a p53 target gene6, 7 and the expression of Δ133p53 and FLp53 is dynamic in response to various stimuli. Third, the expression of p53 isoforms is regulated at multiple levels, including transcriptional, post-transcriptional, translational, and post-translational processing8. Moreover, different degradation mechanisms modulate the protein levels of the p53 isoforms and FLp538. These regulatory mechanisms respond to various stimuli, and therefore a 1:1 ratio of FLp53 to ∆133p53 or ∆160p53 may hold only under certain physiological conditions. In line with this, varied expression levels of FLp53 and its isoforms, including ∆133p53 and ∆160p53, have been reported in several studies3, 4, 9, 10.

      3) In our study, using the pcDNA 3.1 vector under the human cytomegalovirus (CMV) promoter, we observed moderately higher expression levels of ∆133p53 and ∆160p53 relative to FLp53 (Figure R1B). This overexpression scenario provides a model for studying conditions where isoform accumulation might surpass physiological levels, impacting FLp53 function. By employing elevated ratios of these isoforms to FLp53, we aim to investigate the potential effects of isoform accumulation on FLp53.

      4. Finally, does this altered function of full-length p53 (preferably the endogenous one) in the presence of truncated p53 have any phenotypic consequence for the cells (if the authors choose a cell type with wild-type functional p53)? Assays such as apoptosis or cell-cycle analysis could help visualize this.

      Response: Thank you for your insightful comments. In the experiment with A549 cells (p53 wild-type), endogenous p53 levels were too low to be detected, even after DNA damage induction. The evaluation of the function of endogenous p53 in the presence of isoforms is hindered, as mentioned above. In the revised manuscript, we utilized H1299 cells with overexpressed proteins for apoptosis studies using the Caspase-Glo® 3/7 assay (Figure 7). This has been shown in the Results section (lines 254-269). "The Δ133p53 and Δ160p53 proteins block pro-apoptotic function of FLp53.

      One of the physiological read-outs of FLp53 is its ability to induce apoptotic cell death11. To investigate the effects of the p53 isoforms Δ133p53 and Δ160p53 on FLp53-induced apoptosis, we measured caspase-3 and -7 activities in H1299 cells expressing different p53 isoforms (Figure 7). Caspase activation is a key biochemical event in apoptosis, with the activation of effector caspases (caspase-3 and -7) ultimately leading to apoptosis12. The caspase-3 and -7 activities induced by FLp53 expression were approximately 2.5 times higher than those of the control vector (Figure 7). Co-expression of FLp53 and the isoforms Δ133p53 or Δ160p53 at a ratio of 1:5 significantly diminished the apoptotic activity of FLp53 (Figure 7). This result aligns well with our reporter gene assay, which demonstrated that elevated expression of Δ133p53 and Δ160p53 impaired the expression of the apoptosis-inducing genes BAX and PUMA (Figure 4G and H). Moreover, a reduction in the apoptotic activity of FLp53 was observed irrespective of whether the Δ133p53 or Δ160p53 protein was expressed with or without a FLAG tag (Figure 7). This result, therefore, also suggests that the FLAG tag does not affect the apoptotic activity or other physiological functions of FLp53 and its isoforms. Overall, the overexpression of the p53 isoforms Δ133p53 and Δ160p53 significantly attenuates FLp53-induced apoptosis, independent of tagging the proteins with the FLAG antibody epitope."

      **Referees cross-commenting**

      I think the comments from the other reviewers are very much reasonable and logical.

      In particular, all three reviewers have indicated that a better way to visualize the aggregation of full-length wild-type p53 by truncated p53 (looking at endogenous p53, raised by reviewer 1; using a fluorescent tag, raised by reviewer 2; and reviewer 3's concern about the FLAG tag) would add value to the observation.

      Response: Thank you for these comments. The endogenous p53 protein was undetectable in A549 cells induced with etoposide (Figure R1A). Therefore, we conducted experiments using FLAG/V5-tagged FLp53. To rule out any potential side effects of the FLAG tag on p53 aggregation, we introduced untagged p53 isoforms into the H1299 cells and performed subcellular fractionation. Our revised results, consistent with our previous findings with FLAG-tagged p53 isoforms, demonstrate that co-expression of the untagged isoforms with FLAG-tagged FLp53 significantly induced the aggregation of FLAG-FLp53, while no aggregation was observed when FLAG-tagged FLp53 was expressed alone (Supplementary Figure 6). These results clearly indicate that the FLAG tag itself does not contribute to protein aggregation.

      Additionally, we utilized the A11 antibody to detect protein aggregation, providing additional validation (Figure R3). Given that fluorescent proteins (~30 kDa) are substantially bigger than the tags used here (~1 kDa) and may influence the oligomerization (especially GFP), stability, localization, and function of p53 and its isoforms, we avoided conducting these key experiments with such large artificial fusions.

      Reviewer #1 (Significance (Required)):

      The work is significant, since it provides more mechanistic insight into how wild-type full-length p53 can be inactivated in the presence of truncated isoforms; this might offer new opportunities to restore p53 function as a treatment strategy against cancer.

      Response: Thank you for your insightful comments. We appreciate your recognition of the significance of our work in providing mechanistic insights into how wild-type FLp53 can be inactivated by truncated isoforms. We agree that these findings have potential for exploring new strategies to restore p53 function as a therapeutic approach against cancer.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      The manuscript by Zhao and colleagues presents a novel and compelling study on the p53 isoforms, Δ133p53 and Δ160p53, which are associated with aggressive cancer types. The main objective of the study was to understand how these isoforms exert a dominant negative effect on full-length p53 (FLp53). The authors discovered that the Δ133p53 and Δ160p53 proteins exhibit impaired binding to p53-regulated promoters. The data suggest that the predominant mechanism driving the dominant-negative effect is the co-aggregation of FLp53 with Δ133p53 and Δ160p53.

      This study is innovative, well-executed, and supported by thorough data analysis. However, the authors should address the following points:

        1. Introduction on Aggregation and Co-aggregation: Given that the focus of the study is on the aggregation and co-aggregation of the isoforms, the introduction should include a dedicated paragraph discussing this issue. There are several original research articles and reviews that could be cited to provide context.

      Response: Thank you very much for the valuable comments. We have added the following paragraph in the revised manuscript (lines 74-82): "Protein aggregation has become a central focus of modern biology research and has documented implications in various diseases, including cancer13, 14, 15. Protein aggregates range from amorphous aggregates to highly structured amyloid or fibrillar aggregates, each with different physiological implications. In the case of p53, whether protein aggregation, and in particular co-aggregation with large N-terminal deletion isoforms, plays a mechanistic role in its inactivation is as yet underexplored. Interestingly, the Δ133p53β isoform has been shown to aggregate in several human cancer cell lines16. Additionally, the Δ40p53α isoform exhibits a high aggregation tendency in endometrial cancer cells17. Although no direct evidence exists for Δ160p53 yet, these findings imply that p53 isoform aggregation may play a major role in their mechanisms of action."

      2. Antibody Use for Aggregation: To strengthen the evidence for aggregation, the authors should consider using antibodies that specifically bind to aggregates.

      Response: Thank you for your insightful suggestion. We addressed protein aggregation using the A11 antibody, which specifically recognizes amyloid-like protein aggregates. We analyzed insoluble nuclear pellet samples prepared under conditions identical to those described in Figure 6B. To confirm the presence of p53 proteins, we employed the anti-p53 M19 antibody (Santa Cruz, Cat No. sc-1312) to detect bands corresponding to FLp53 and its isoforms Δ133p53 and Δ160p53. The FLp53 monomer was not detected (Figure R3, lower panel), which may be attributed to the lower binding affinity of the anti-p53 M19 antibody for it. These samples were also immunoprecipitated using the A11 antibody (Thermo Fisher Scientific, Cat No. AHB0052) to detect aggregated proteins. Interestingly, FLp53 and its isoforms, Δ133p53 and Δ160p53, were clearly visible with the A11 antibody when co-expressed at a 1:5 ratio, suggesting that they underwent co-aggregation. However, no FLp53 aggregates were observed when it was expressed alone (Figure R2). These results support the conclusion in our manuscript that Δ133p53 and Δ160p53 drive FLp53 aggregation.

      (Figure R2 is included in the file "RC-2024-02608 Figures of Response to Reviewer".)

      3. Fluorescence Microscopy: Live-cell fluorescence microscopy could be employed to enhance visualization by labeling FLp53 and the isoforms with different fluorescent markers (e.g., EGFP and mCherry tags).

      Response: We appreciate the suggestion to use live-cell fluorescence microscopy with EGFP and mCherry tags for visualization of FLp53 and its isoforms. While we understand the advantages of live-cell imaging with EGFP/mCherry tags, we refrained from making such fusions, as GFP and comparable protein tags are very large (~30 kDa) relative to the p53 isoform variants (~30 kDa). Other studies have shown that EGFP and mCherry fusions can alter protein oligomerization, solubility, and aggregation18, 19. Moreover, many fluorescent proteins are prone to dimerization (e.g., EGFP) or form obligate tetramers (DsRed)20, 21, 22, potentially interfering with the oligomerization and aggregation properties of the p53 isoforms, particularly Δ133p53 and Δ160p53.

      Instead, we utilized FLAG- or V5-tag-based immunofluorescence microscopy, a well-established and widely accepted method for visualizing p53 proteins. This method provided precise localization and reliable quantitative data, which we believe meet the needs of the current study. We believe our chosen method is both appropriate and sufficient for addressing the research question.

      Reviewer #2 (Significance (Required)):

      The manuscript by Zhao and colleagues presents a novel and compelling study on the p53 isoforms, Δ133p53 and Δ160p53, which are associated with aggressive cancer types. The main objective of the study was to understand how these isoforms exert a dominant negative effect on full-length p53 (FLp53). The authors discovered that the Δ133p53 and Δ160p53 proteins exhibit impaired binding to p53-regulated promoters. The data suggest that the predominant mechanism driving the dominant-negative effect is the co-aggregation of FLp53 with Δ133p53 and Δ160p53.

      Response: We sincerely thank the reviewer for the thoughtful and positive comments on our manuscript and for highlighting the significance of our findings on the p53 isoforms, Δ133p53 and Δ160p53.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      In this manuscript entitled "Δ133p53 and Δ160p53 isoforms of the tumor suppressor protein p53 exert dominant-negative effect primarily by co-aggregation", the authors suggest that the Δ133p53 and Δ160p53 isoforms have high aggregation propensity and that by co-aggregating with canonical p53 (FLp53), they sequestrate it away from DNA thus exerting a dominant-negative effect over it.

      First, the authors should make it clear throughout the manuscript, including the title, that they are investigating Δ133p53α and Δ160p53α since there are 3 Δ133p53 isoforms (α, β, γ), and 3 Δ160p53 isoforms (α, β, γ).

      Response: Thank you for your suggestion. We understand the importance of clearly specifying the isoforms under study. Following your suggestion, we have added α in the title, abstract, and introduction and added the following statement in the Introduction (lines 57-59): "For convenience and simplicity, we have written Δ133p53 and Δ160p53 to represent the α isoforms (Δ133p53α and Δ160p53α) throughout this manuscript."

      One concern is that the authors only consider and explore the Δ133p53α and Δ160p53α isoforms as exclusively oncogenic and FLp53 dominant-negative, while not discussing evidence of different activities. Indeed, other manuscripts have shown that Δ133p53α is non-oncogenic and non-mutagenic, does not antagonize every single FLp53 function, and is sometimes associated with good prognosis. To cite a few examples:

      • Hofstetter G. et al. Δ133p53 is an independent prognostic marker in p53 mutant advanced serous ovarian cancer. Br. J. Cancer 2011, 105, 1593-1599.
      • Bischof, K. et al. Influence of p53 Isoform Expression on Survival in High-Grade Serous Ovarian Cancers. Sci. Rep. 2019, 9, 5244.
      • Knezović F. et al. The role of p53 isoforms' expression and p53 mutation status in renal cell cancer prognosis. Urol. Oncol. 2019, 37, 578.e1-578.e10.
      • Gong, L. et al. p53 isoform Δ113p53/Δ133p53 promotes DNA double-strand break repair to protect cell from death and senescence in response to DNA damage. Cell Res. 2015, 25, 351-369.
      • Gong, L. et al. p53 isoform Δ133p53 promotes efficiency of induced pluripotent stem cells and ensures genomic integrity during reprogramming. Sci. Rep. 2016, 6, 37281.
      • Horikawa, I. et al. Δ133p53 represses p53-inducible senescence genes and enhances the generation of human induced pluripotent stem cells. Cell Death Differ. 2017, 24, 1017-1028.
      • Gong, L. p53 coordinates with Δ133p53 isoform to promote cell survival under low-level oxidative stress. J. Mol. Cell Biol. 2016, 8, 88-90.

      Response: Thank you very much for your comment and for highlighting these important studies.

      We agree that Δ133p53 isoforms exhibit complex biological functions, with both oncogenic and non-oncogenic potentials. However, our aim here was primarily to reveal the molecular mechanism of the dominant-negative effects exerted by the Δ133p53α and Δ160p53α isoforms on FLp53, for which these isoforms are suitable model systems. Exploring the oncogenic potential of the isoforms is beyond the scope of the current study, and we have not claimed anywhere to be reporting on it. We have carefully revised the manuscript and replaced terms such as 'pro-oncogenic activity' with 'dominant-negative effect' where relevant (e.g. line 90). We have now also added a paragraph with suitable references that introduces the oncogenic and non-oncogenic roles of the p53 isoforms.

      After reviewing the papers you cited, we are not certain that they establish an oncogenic or non-oncogenic role of the Δ133p53α isoform in different cancer cases. Although our study is not about the oncogenic potential of the isoforms, we have summarized the key findings below:

      • Hofstetter et al., 2011: Demonstrated that Δ133p53α expression improved recurrence-free and overall survival in p53-mutant advanced serous ovarian cancer, suggesting a potential protective role in this context.
      • Bischof et al., 2019: Found that Δ133p53 mRNA can improve overall survival in high-grade serous ovarian cancers. However, out of 31 patients, only 5 belonged to the TP53 wild-type group, while the others carried TP53 mutations.
      • Knezović et al., 2019: Reported downregulation of Δ133p53 in renal cell carcinoma tissues with wild-type p53 compared to normal adjacent tissue, indicating a potential non-oncogenic role, but not conclusively demonstrating it.
      • Gong et al., 2015: Showed that Δ133p53 antagonizes p53-mediated apoptosis and promotes DNA double-strand break repair by upregulating RAD51, LIG4, and RAD52 independently of FLp53.
      • Gong et al., 2016: Demonstrated that overexpression of Δ133p53 promotes the efficiency of cell reprogramming through its anti-apoptotic function and by promoting DNA DSB repair. The authors hypothesize that this mechanism increases RAD51 foci formation and decreases γH2AX foci formation and chromosome aberrations in induced pluripotent stem (iPS) cells, independent of FLp53.
      • Horikawa et al., 2017: Indicated that induced pluripotent stem cells derived from fibroblasts overexpressing Δ133p53 formed non-cancerous tumors in mice, in contrast to induced pluripotent stem cells derived from fibroblasts with complete p53 inhibition. Thus, Δ133p53 overexpression is "non- or less oncogenic and mutagenic" compared to complete p53 inhibition, but it still compromises certain p53-mediated tumor-suppressing pathways. "Overexpressed Δ133p53 prevented FL-p53 from binding to the regulatory regions of p21WAF1 and miR-34a promoters, providing a mechanistic basis for its dominant-negative inhibition of a subset of p53 target genes."
      • Gong, 2016: Suggested that Δ133p53 promotes cell survival under low-level oxidative stress, but its role under different stress conditions remains uncertain.

      We have revised the Introduction to provide a more balanced discussion of Δ133p53's dual role (lines 62-73):

      "The Δ133p53 isoform exhibits complex biological functions, with both oncogenic and non-oncogenic potentials. Recent studies demonstrate the non-oncogenic yet context-dependent role of the Δ133p53 isoform in cancer development. Δ133p53 expression has been reported to correlate with improved survival in patients with TP53 mutations23, 24, where it promotes cell survival in a non-oncogenic manner25, 26, especially under low oxidative stress27. Alternatively, other recent evidence emphasizes the notable oncogenic functions of Δ133p53, as it can inhibit p53-dependent apoptosis by directly interacting with FLp534, 6. The oncogenic function of the newly identified Δ160p53 isoform is less well known, although it is associated with p53 mutation-driven tumorigenesis28 and with melanoma cell aggressiveness10. Whether the Δ160p53 isoform also impedes FLp53 function in a similar way as Δ133p53 is an open question. However, these p53 isoforms can certainly compromise p53-mediated tumor suppression by interfering with FLp53 binding to target genes such as p21 and miR-34a2, 29 through a dominant-negative effect whose exact mechanism is not known."

      On the figures presented in this manuscript, I have three major concerns:

      1- Most results in the manuscript rely on the overexpression of the FLAG-tagged or V5-tagged isoforms. The validation of these constructs entirely depends on Supplementary Figure 3, which the authors claim "rules out the possibility that the FLAG epitope might contribute to this aggregation". However, I am not entirely convinced by that conclusion. Indeed, the ratio between the "regular" isoform and the aggregates is much higher in the FLAG-tagged constructs than in the V5-tagged constructs. We can visualize the aggregates easily in the FLAG-tagged experiment, but the imaging clearly had to be overexposed (given the white coloring demonstrating saturation of the main bands) to visualize them in the V5-tagged experiments. Therefore, I am not convinced that an effect of the FLAG-tag can be ruled out, and more convincing data should be added.

      Response: Thank you for raising this important concern. We have carefully considered your comments and have made several revisions to clarify and strengthen our conclusions.

      First, to address the potential influence of the FLAG and V5 tags on p53 isoform aggregation, we have revised Figure 2 and removed the previous Supplementary Figure 3, in which non-specific antibody binding and higher-molecular-weight aggregates were not clearly interpretable. In the revised Figure 2, we have removed these potential aggregates, improving the clarity and accuracy of the data.

      To further rule out any tag-related artifacts, we conducted a co-immunoprecipitation assay with FLAG-tagged FLp53 and untagged Δ133p53 and Δ160p53 isoforms. The results (now shown in the new Supplementary Figure 3) fully agree with our previous results with the FLAG-tagged and V5-tagged Δ133p53 and Δ160p53 isoforms and show interaction between the partners. This indicates that the FLAG and V5 tags do not interfere with the interaction between FLp53 and the isoforms. We still used FLAG-tagged FLp53 because the endogenous p53 was undetectable, and the FLAG-tagged FLp53 did not aggregate alone.

      In the revised paper, we added the following sentences (Lines 146-152): "To rule out the possibility that the observed interactions between FLp53 and its isoforms Δ133p53 and Δ160p53 were artifacts caused by the FLAG and V5 antibody epitope tags, we co-expressed FLAG-tagged FLp53 with untagged Δ133p53 and Δ160p53. Immunoprecipitation assays demonstrated that FLAG-tagged FLp53 could indeed interact with the untagged Δ133p53 and Δ160p53 isoforms (Supplementary Figure 3, lanes 3 and 4), confirming formation of hetero-oligomers between FLp53 and its isoforms. These findings demonstrate that Δ133p53 and Δ160p53 can oligomerize with FLp53 and with each other."

      Additionally, we performed subcellular fractionation experiments to compare the aggregation and localization of FLAG-tagged FLp53 when co-expressed either with V5-tagged or untagged Δ133p53/Δ160p53. In these experiments, the untagged isoforms also induced FLp53 aggregation, mirroring our previous results with the tagged isoforms (Supplementary Figure 5). We've added this result in the revised manuscript (lines 236-245): "To exclude the possibility that FLAG or V5 tags contribute to protein aggregation, we also conducted subcellular fractionation of H1299 cells expressing FLAG-tagged FLp53 along with untagged Δ133p53 or Δ160p53 at a 1:5 ratio. The results showed (Supplementary Figure 6) a similar distribution of FLp53 across cytoplasmic, nuclear, and insoluble nuclear fractions as in the case of tagged Δ133p53 or Δ160p53 (Figure 6A to D). Notably, the aggregation of untagged Δ133p53 or Δ160p53 markedly promoted the aggregation of FLAG-tagged FLp53 (Supplementary Figure 6B and D), demonstrating that the antibody epitope tags themselves do not contribute to protein aggregation."

      We've also discussed this in the Discussion section (lines 349-356): "In our study, we primarily utilized an overexpression strategy involving FLAG/V5-tagged proteins to investigate the effects of p53 isoforms Δ133p53 and Δ160p53 on the function of FLp53. To address concerns regarding potential overexpression artifacts, we performed the co-immunoprecipitation (Supplementary Figure 6) and caspase-3 and -7 activity (Figure 7) experiments with untagged Δ133p53 and Δ160p53. In both experimental systems, the untagged proteins behaved very similarly to the FLAG/V5 antibody epitope-containing proteins (Figures 6 and 7 and Supplementary Figure 6). Hence, the C-terminal tagging of FLp53 or its isoforms does not alter the biochemical and physiological functions of these proteins."

      In summary, the revised data set and newly added experiments provide strong evidence that neither the FLAG nor the V5 tag contributes to the observed p53 isoform aggregation.

      2- The authors demonstrate that to visualize the dominant-negative effect, Δ133p53α and Δ160p53α must be "present in a higher proportion than FLp53 in the tetramer", and they need at least a 1:5 transfection ratio, since the 1:1 ratio shows no effect. However, in almost every single cell type, FLp53 is far more highly expressed than the isoforms, which makes it very unlikely to reach such stoichiometry under physiological conditions and makes me wonder whether this mechanism naturally occurs at the endogenous level. This limitation should at least be discussed.

      Response: Thank you for your insightful comment. However, evidence suggests that the expression levels of these isoforms, such as Δ133p53, can be significantly elevated relative to FLp53 under certain physiological conditions3, 4, 9. For example, in some breast tumors, Δ133p53 mRNA is expressed at much higher levels than FLp53, suggesting a distinct expression profile of p53 isoforms compared to normal breast tissue4. Similarly, in non-small cell lung cancer and the A549 lung cancer cell line, the expression level of the Δ133p53 transcript is significantly elevated compared to non-cancerous cells3. Moreover, in specific cholangiocarcinoma cell lines, the Δ133p53/TAp53 expression ratio has been reported to increase to as high as 3:19. These observations indicate that the dominant-negative effect of the Δ133p53 isoform on FLp53 can occur under certain pathological conditions, where the relative amounts of FLp53 and the isoforms vary widely. Since data on the Δ160p53 isoform are scarce, we infer that the long N-terminally truncated isoforms may share a similar mechanism.

      Figure 5C: I am concerned by the subcellular localization of Δ133p53α and Δ160p53α, as they are commonly considered nuclear and not cytoplasmic as shown here, particularly since they retain the 3 nuclear localization sequences like FLp53 (Bourdon JC et al. 2005; Mondal A et al. 2018; Horikawa I et al, 2017; Joruiz S. et al, 2024). However, Δ133p53α can form cytoplasmic speckles (Horikawa I et al, 2017) when it colocalizes with autophagy markers for its degradation.

      3- The authors should discuss this issue. Could this discrepancy be due to the high overexpression level of these isoforms? A co-staining with autophagy markers (p62, LC3B) would rule out (or confirm) activation of autophagy due to the overwhelming expression of the isoform.

      Response: Thank you for your thoughtful comments. We have thoroughly reviewed all the papers you recommended (Bourdon JC et al., 2005; Mondal A et al., 2018; Horikawa I et al., 2017; Joruiz S. et al., 2024)4, 29, 30, 31. Among these, only the study by Bourdon JC et al. (2005) provided data regarding the localization of Δ133p534. Interestingly, their findings align with our observations, indicating that the protein does not exhibit predominantly nuclear localization (see the Figure below). The discrepancy may be caused by a potentially confusing statement in that paper4.

      (The Figure from Bourdon JC et al. (2005) is included in the file "RC-2024-02608 Figures of Response to Reviewer".)

      The localization of p53 is governed by multiple factors, including its nuclear import and export32. The isoforms Δ133p53 and Δ160p53 contain three nuclear localization sequences (NLSs)4. However, these isoforms were potentially trapped in the cytoplasm by aggregation, which masks the NLSs and thereby prevents nuclear import.

      Further, we acknowledge that Δ133p53 co-aggregates with the autophagy substrate p62/SQSTM1 and the autophagosome component LC3B in the cytoplasm during autophagic degradation in replicative senescence33. We agree that high overexpression of these aggregation-prone proteins may induce endoplasmic reticulum (ER) stress and activate autophagy34. This could explain the cytoplasmic localization in our experiments. However, it is also critical to consider that we observed aggregates in both the cytoplasm and the nucleus (Figures 6B and E and Supplementary Figure 6B). While the cytoplasmic localization may involve autophagy-related mechanisms, the nuclear aggregates likely arise from intrinsic isoform properties, such as altered protein folding, independent of autophagy. These dual localizations reflect the complex behavior of the Δ133p53 and Δ160p53 isoforms under our experimental conditions.

      In the revised manuscript, we discussed this in Discussion (lines 328-335): "Moreover, the observed cytoplasmic isoform aggregates may reflect autophagy-related degradation, as suggested by the co-localization of Δ133p53 with autophagy substrate p62/SQSTM1 and autophagosome component LC3B33. High overexpression of these aggregation-prone proteins could induce endoplasmic reticulum stress and activate autophagy34. Interestingly, we also observed nuclear aggregation of these isoforms (Figure 6B and E and Supplementary Figure 6B), suggesting that distinct mechanisms, such as intrinsic properties of the isoforms, may govern their localization and behavior within the nucleus. This dual localization underscores the complexity of Δ133p53 and Δ160p53 behavior in cellular systems."

      Minor concerns:

      - Figure 1A: the initiation of the "Δ140p53" is shown instead of "Δ40p53"

      Response: Thank you! Figure 1A has been corrected in the revised paper.

      • Figure 2A: I would like to see the images cropped a bit higher, so the cut does not happen just above the aggregate bands

      Response: Thank you for this suggestion. We've changed the image, and the new Figure 2 is shown in the revised paper.

      • Figure 3C: what ratio of FLp53/Delta isoform was used?

      Response: We have added the ratio in the figure legend of Figure 3C (lines 845-846): "Relative DNA-binding of the FLp53-FLAG protein to the p53-target gene promoters in the presence of the V5-tagged protein Δ133p53 or Δ160p53 at a 1:1 ratio."

      • Figure 3C suggests that the "dominant-negative" effect is mostly senescence-specific as it does not affect apoptosis target genes, which is consistent with Horikawa et al, 2017 and Gong et al, 2016 cited above. Furthermore, since these two references and the others from Gong et al. show that Δ133p53α increases DNA repair genes, it would be interesting to look at RAD51, RAD52 or Lig4, and maybe also induce stress.

      Response: Thank you for your thoughtful comments and suggestions. In Figure 3C, the presence of Δ133p53 or Δ160p53 only significantly reduced the binding of FLp53 to the p21 promoter. However, the isoforms Δ133p53 and Δ160p53 themselves demonstrated a significant loss of DNA-binding activity at all four promoters: p21, MDM2, and the apoptosis target genes BAX and PUMA (Figure 3B). This result suggests that Δ133p53 and Δ160p53 have the potential to influence FLp53 function through their ability to form hetero-oligomers with FLp53 or their intrinsic tendency to aggregate. To further investigate this, we increased the isoform-to-FLp53 ratio in Figure 4, which demonstrates that the isoforms Δ133p53 and Δ160p53 exert dominant-negative effects on the function of FLp53.

      These results demonstrate that the isoforms can compromise p53-mediated pathways, consistent with Horikawa et al. (2017), which showed that Δ133p53α overexpression is "non- or less oncogenic and mutagenic" compared to complete p53 inhibition, but still affects specific tumor-suppressing pathways. Furthermore, as noted by Gong et al. (2016), Δ133p53's anti-apoptotic function under certain conditions is independent of FLp53 and unrelated to its dominant-negative effects.

      We appreciate your suggestion to investigate DNA repair genes such as RAD51, RAD52, or Lig4, especially under stress conditions. While these targets are intriguing and relevant, we believe that our current investigation of p53 targets in this manuscript sufficiently supports our conclusions regarding the dominant-negative effect. Further exploration of additional p53 target genes, including those involved in DNA repair, will be an important focus of our future studies.

      • Figure 5A and B: directly comparing the level of FLp53 expressed in cytoplasm or nucleus to the level of Δ133p53α and Δ160p53α expressed in cytoplasm or nucleus does not mean much since these are overexpressed proteins and therefore depend on the level of expression. The authors should rather compare the ratio of cytoplasmic/nuclear FLp53 to the ratio of cytoplasmic/nuclear Δ133p53α and Δ160p53α.

      Response: Thank you very much for this valuable suggestion. In the revised paper, Figure 5B has been recreated. Changes have been made in lines 214-215: "The cytoplasm-to-nucleus ratio of Δ133p53 and Δ160p53 was approximately 1.5-fold higher than that of FLp53 (Figure 5B)."

      **Referees cross-commenting**

      I agree that the system needs to be improved to be more physiological.

      Just to be precise, the Δ133 and Δ160 isoforms are not truncated mutants; they are naturally occurring isoforms expressed in almost every normal human cell type from an internal promoter within the TP53 gene.

      Using overexpression always raises concerns, but in this case I am even more careful, because the isoforms are almost always expressed at lower levels than FLp53, and here they have to be pushed to 5 to 10 times higher expression than FLp53 to see the effect, which makes me fear an artifact due to the overwhelming overexpression (which even seems to change the normal localization of the protein).

      To visualize the endogenous proteins, they will have to change cell line as the H1299 they used are p53 null.

      Response: Thank you for these comments. We've addressed the motivation for using overexpression in the responses above. We needed to use plasmid constructs in the p53-null cells to detect the proteins, but the expression level was certainly not 'overwhelmingly high'.

      First, we tried the A549 cells (p53 wild-type) under DNA damage conditions, but the endogenous p53 protein was undetectable. Second, several studies have reported increased Δ133p53 levels relative to wild-type p53, with implications for tumor development2, 3, 4, 9. Third, the apoptosis activity of H1299 cells overexpressing p53 proteins was analyzed in the revised manuscript (Figure 7). The apoptotic activity induced by FLp53 expression was approximately 2.5 times higher than that of the control vector under identical plasmid DNA transfection conditions (Figure 7). These results rule out the possibility that the plasmid-based expression of p53 and its isoforms introduced artifacts into the results. We've discussed this in the Results section (lines 254-269).

      Reviewer #3 (Significance (Required)):

      Overall, the paper is interesting particularly considering the range of techniques used which is the main strength.

      The main limitation to me is the lack of contradictory discussion, as all the argumentation presents Δ133p53α and Δ160p53α exclusively as oncogenic and strictly dominant-negative over FLp53 when, particularly for Δ133p53α, a fairly extensive literature suggests a less clear-cut activity.

      The aggregation mechanism is reported for the first time for Δ133p53α and Δ160p53α, although it was already published for Δ40p53α, Δ133p53β, and mutant p53.

      This manuscript would be a good basic research addition to the p53 field to provide insight in the mechanism for some activities of some p53 isoforms.

      My field of expertise is the p53 isoforms, which I have been working on for 11 years in cancer and neurodegenerative diseases.

      Response: Thank you very much for your positive and critical comments. We've included a fair discussion on the oncogenic and non-oncogenic function of Δ133p53 in the Introduction following your suggestion (lines 62-73).

      References

      1. Pitolli C, Wang Y, Candi E, Shi Y, Melino G, Amelio I. p53-Mediated Tumor Suppression: DNA-Damage Response and Alternative Mechanisms. Cancers 11, (2019).

      2. Fujita K, et al. p53 isoforms Delta133p53 and p53beta are endogenous regulators of replicative cellular senescence. Nature cell biology 11, 1135-1142 (2009).

      3. Fragou A, et al. Increased Δ133p53 mRNA in lung carcinoma corresponds with reduction of p21 expression. Molecular medicine reports 15, 1455-1460 (2017).

      4. Bourdon JC, et al. p53 isoforms can regulate p53 transcriptional activity. Genes & development 19, 2122-2137 (2005).

      5. Ghosh A, Stewart D, Matlashewski G. Regulation of human p53 activity and cell localization by alternative splicing. Molecular and cellular biology 24, 7987-7997 (2004).

      6. Aoubala M, et al. p53 directly transactivates Δ133p53α, regulating cell fate outcome in response to DNA damage. Cell death and differentiation 18, 248-258 (2011).

      7. Marcel V, et al. p53 regulates the transcription of its Delta133p53 isoform through specific response elements contained within the TP53 P2 internal promoter. Oncogene 29, 2691-2700 (2010).

      8. Zhao L, Sanyal S. p53 Isoforms as Cancer Biomarkers and Therapeutic Targets. Cancers 14, (2022).

      9. Nutthasirikul N, Limpaiboon T, Leelayuwat C, Patrakitkomjorn S, Jearanaikoon P. Ratio disruption of the ∆133p53 and TAp53 isoform equilibrium correlates with poor clinical outcome in intrahepatic cholangiocarcinoma. International journal of oncology 42, 1181-1188 (2013).

      10. Tadijan A, et al. Altered Expression of Shorter p53 Family Isoforms Can Impact Melanoma Aggressiveness. Cancers 13, (2021).

      11. Aubrey BJ, Kelly GL, Janic A, Herold MJ, Strasser A. How does p53 induce apoptosis and how does this relate to p53-mediated tumour suppression? Cell death and differentiation 25, 104-113 (2018).

      12. Ghorbani N, Yaghubi R, Davoodi J, Pahlavan S. How does caspases regulation play role in cell decisions? apoptosis and beyond. Molecular and cellular biochemistry 479, 1599-1613 (2024).

      13. Petronilho EC, et al. Oncogenic p53 triggers amyloid aggregation of p63 and p73 liquid droplets. Communications chemistry 7, 207 (2024).

      14. Forget KJ, Tremblay G, Roucou X. p53 Aggregates penetrate cells and induce the co-aggregation of intracellular p53. PloS one 8, e69242 (2013).

      15. Farmer KM, Ghag G, Puangmalai N, Montalbano M, Bhatt N, Kayed R. P53 aggregation, interactions with tau, and impaired DNA damage response in Alzheimer's disease. Acta neuropathologica communications 8, 132 (2020).

      16. Arsic N, et al. Δ133p53β isoform pro-invasive activity is regulated through an aggregation-dependent mechanism in cancer cells. Nature communications 12, 5463 (2021).

      17. Melo Dos Santos N, et al. Loss of the p53 transactivation domain results in high amyloid aggregation of the Δ40p53 isoform in endometrial carcinoma cells. The Journal of biological chemistry 294, 9430-9439 (2019).

      18. Mestrom L, et al. Artificial Fusion of mCherry Enhances Trehalose Transferase Solubility and Stability. Applied and environmental microbiology 85, (2019).

      19. Kaba SA, Nene V, Musoke AJ, Vlak JM, van Oers MM. Fusion to green fluorescent protein improves expression levels of Theileria parva sporozoite surface antigen p67 in insect cells. Parasitology 125, 497-505 (2002).

      20. Snapp EL, et al. Formation of stacked ER cisternae by low affinity protein interactions. The Journal of cell biology 163, 257-269 (2003).

      21. Jain RK, Joyce PB, Molinete M, Halban PA, Gorr SU. Oligomerization of green fluorescent protein in the secretory pathway of endocrine cells. The Biochemical journal 360, 645-649 (2001).

      22. Campbell RE, et al. A monomeric red fluorescent protein. Proceedings of the National Academy of Sciences of the United States of America 99, 7877-7882 (2002).

      23. Hofstetter G, et al. Δ133p53 is an independent prognostic marker in p53 mutant advanced serous ovarian cancer. British journal of cancer 105, 1593-1599 (2011).

      24. Bischof K, et al. Influence of p53 Isoform Expression on Survival in High-Grade Serous Ovarian Cancers. Scientific reports 9, 5244 (2019).

      25. Gong L, et al. p53 isoform Δ113p53/Δ133p53 promotes DNA double-strand break repair to protect cell from death and senescence in response to DNA damage. Cell research 25, 351-369 (2015).

      26. Gong L, et al. p53 isoform Δ133p53 promotes efficiency of induced pluripotent stem cells and ensures genomic integrity during reprogramming. Scientific reports 6, 37281 (2016).

      27. Gong L, Pan X, Yuan ZM, Peng J, Chen J. p53 coordinates with Δ133p53 isoform to promote cell survival under low-level oxidative stress. Journal of molecular cell biology 8, 88-90 (2016).

      28. Candeias MM, Hagiwara M, Matsuda M. Cancer-specific mutations in p53 induce the translation of Δ160p53 promoting tumorigenesis. EMBO reports 17, 1542-1551 (2016).

      29. Horikawa I, et al. Δ133p53 represses p53-inducible senescence genes and enhances the generation of human induced pluripotent stem cells. Cell death and differentiation 24, 1017-1028 (2017).

      30. Mondal AM, et al. Δ133p53α, a natural p53 isoform, contributes to conditional reprogramming and long-term proliferation of primary epithelial cells. Cell death & disease 9, 750 (2018).

      31. Joruiz SM, Von Muhlinen N, Horikawa I, Gilbert MR, Harris CC. Distinct functions of wild-type and R273H mutant Δ133p53α differentially regulate glioblastoma aggressiveness and therapy-induced senescence. Cell death & disease 15, 454 (2024).

      32. O'Brate A, Giannakakou P. The importance of p53 location: nuclear or cytoplasmic zip code? Drug resistance updates : reviews and commentaries in antimicrobial and anticancer chemotherapy 6, 313-322 (2003).

      33. Horikawa I, et al. Autophagic degradation of the inhibitory p53 isoform Δ133p53α as a regulatory mechanism for p53-mediated senescence. Nature communications 5, 4706 (2014).

      34. Lee H, et al. IRE1 plays an essential role in ER stress-mediated aggregation of mutant huntingtin via the inhibition of autophagy flux. Human molecular genetics 21, 101-114 (2012).

    1. This calibration looks very good: no obvious under- or over-fitting, nor clear L-shaped patterns.

      I don't know if I agree with this; it does look like there are some calibration issues. I also imagine the plot would look worse if the x and y axes had the same scales.

      One question - is this run for ALL batter/seasons in the training set? I would be interested in what this looks like if we restrict the population to player/seasons above some sample-size threshold (maybe \(PA > 50\)). I don't know if that's the right way to evaluate the model, but it's something I'm curious about. My prior is that it would make the calibration look even worse, since the model will be more confident about those players' true talent, and their larger sample size makes the expected noise drop.

      Regardless - we need to think more about what this is telling us. In my mind, it's saying that the model is overconfident. It's estimating true talent too close to the observed values in some cases (too much coverage of low probabilities), and that's likely what's hurting the top end as well (not enough coverage of high probabilities).
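      The threshold check described above can be sketched in a few lines. This is a minimal illustration only, not the model's actual evaluation code; the record fields (`p_pred`, `y`, `pa`) and the function name are hypothetical placeholders:

```python
# Minimal sketch of a reliability (calibration) check with an optional
# plate-appearance filter. All field names are hypothetical placeholders.

def reliability_curve(records, n_bins=10, min_pa=0):
    """Group predictions into equal-width probability bins and compare
    the mean predicted probability to the observed outcome rate.

    records: iterable of dicts with keys
        'p_pred' - model-predicted probability (0..1)
        'y'      - observed binary outcome (0 or 1)
        'pa'     - plate appearances (sample size) for the player/season
    Returns a list of (mean_predicted, mean_observed, count) tuples,
    one per non-empty bin; perfect calibration puts them on the diagonal.
    """
    kept = [r for r in records if r["pa"] >= min_pa]
    bins = [[] for _ in range(n_bins)]
    for r in kept:
        # Clamp p_pred == 1.0 into the last bin.
        i = min(int(r["p_pred"] * n_bins), n_bins - 1)
        bins[i].append(r)
    curve = []
    for b in bins:
        if b:
            mean_pred = sum(r["p_pred"] for r in b) / len(b)
            mean_obs = sum(r["y"] for r in b) / len(b)
            curve.append((mean_pred, mean_obs, len(b)))
    return curve
```

      Plotting mean observed against mean predicted for, say, `min_pa=0` versus `min_pa=50` would show directly whether restricting to larger samples worsens the apparent overconfidence.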

    Annotators

    1. ~~l:4 50 ft ravishments of the surnrnit of our isyrian daw/ :If-burledi ... s and when'the;book Waes' rnlan spun rne astern lii11,;1an PaphianJrf:8"' ' • d • h c osed h round b '_J "dism1sse ,me wit but mist , w en th a out in bru ' y rern. . e Spell a we of• ofhim, \1 ,

      When I read this it makes me think of illusions, obsession, and a lingering presence of something or someone powerful. The steps in the sentence go from being deeply captivated by someone--the story coming to an end--to it being a dream? Similar to Moby-Dick regarding the presence of the whale and Ahab's obsession with finding it. Tangled up with Ishmael and following Ahab through the journey, but when the book ended, the "spell" was over and everything was gone, but Ishmael.

    2. …by the "Earth's Holocaust"…

      Why utilize the description of "Earth's Holocaust"? Might this not seem insensitive, connecting literary elements with traumatic historic events? When I read this passage, I found it quite inappropriate, to be honest.

    3. …a new work; and I write you now to propose its publication in England… a romance of adventure, founded upon certain wild legends in the Southern Sperm Whale Fisheries, and illustrated by the author's own personal experience, of two years & more, as a harpooneer… Should you be inclined to undertake the book…

      It has no romance, and the main character has no experience as a harpooneer. The book is not a personal experience either. He is trying to make it sound good so that he can get it published.

    4. As for patronage, … it is the American author who now patronizes his country, and not his country him.

      I knew Melville was patriotic about America, but not to this extent.

    1. The journal Recherches en éducation, issue 49, explores authority and power in the contemporary educational context. Several articles in this thematic issue highlight the role of students, parents, and families in the education system, whether directly or indirectly.

      References to students:

      • Several articles deal directly with students, particularly with regard to their experiences and their learning.
      • Sylvain Fabre's article examines how students are accompanied in their artistic experiences, exploring the notion of rhythmic milieus. The author highlights the importance of considering the student in dialogue with cultural and school environments, underlining how the subject becomes embedded in milieus and determines there the conditions of their activity.
      • Emmanuel Sander, Géry Marcoux, et al. present the results of a study on middle-school students' intuitive conceptions of justice and freedom, conducted in the moral and civic education course. The study aims to identify the predominant conceptions of social justice and individual freedom held by middle-school students in priority education networks in France, and to assess their cognitive flexibility with respect to these notions.

      References to parents and families:

      • Although parents are not always the central subject, they are mentioned in the context of the relations between school and society.
      • Cécile Roaux's article points out that primary-school head teachers must organize the school around managerial principles while remaining subject to a professional culture of equality among peers upheld by the teachers. In this context, head teachers must negotiate constantly for a genuine collective of professionals to emerge, which also involves taking relations with families into account.
      • The study by Emmanuel Sander, Géry Marcoux, et al. takes into account the influence of the family in the construction of students' political identity. The study by Wilfried Lignier and Julie Pagis (2017) tends to show that political identity is constructed through the interplay of the family, peer groups, and the school.

      In short, the journal addresses the themes of power and authority in education, treating students as central actors in learning and recognizing the influence of parents and families in the educational context. The articles bring out the complexity of the relations between the various actors in the education system and underline the importance of taking each party's conceptions and experiences into account.

    2. Issue 49 of the journal Recherches en éducation (2022) is a thematic dossier on questions of the authority and power of management, supervisory, and training staff within contemporary public education policies. The dossier examines how relations of power and authority are being recomposed within the school system, focusing on the relations between the various actors responsible for defining, framing, and implementing public policies. The articles in this issue concentrate on management, supervisory, or training personnel and highlight the multi-referentiality of the sciences of education and training. The aim of the dossier is to analyze both the global implications and the concrete professional consequences.

      Here is an overview of the articles included in this thematic issue:

      • Editorial by Camille Roelens and Stéphan Mierzejewski, introducing the challenges of educational authority and power as tested by education policies.
      • "Rapports de force et crise de l’autorité dans le mouvement Freinet entre 1945 et 1968 : quand l’horizontalité questionne la verticalité": Xavier Riondet analyzes the power struggles and the crisis of authority in the Freinet movement between 1945 and 1968, examining how horizontality calls verticality into question. The article seeks to think through the challenges facing today's school in light of the crises that marked the history of the New Education movement between 1945 and 1967, in which relations of power and authority were disrupted early on, particularly around questions of leadership and training. The author questions and spells out certain influences of the New Education experiments on the French national education system, revisiting the transformative power of the dynamic of equalization and individualization at the heart of educational collectives.
      • "L’évolution des rapports hiérarchiques entre directeurs et adjoints dans le champ de l’enseignement primaire de la Seine sous la IIIe République : aux origines d’une autonomie professionnelle": Jérôme Krop studies the evolution of the hierarchical relations between head teachers and assistant teachers in primary education in the Seine department under the Third Republic, exploring the origins of a professional autonomy. The article goes back to the historical origins of the head teacher's condition under the Third Republic, a period when the field of primary education and the power relations between head teachers and assistant teachers were being reconfigured. The study rests on data from the exhaustive analysis of a representative corpus of personnel files of the first generation of public-school teachers, men and women who entered primary teaching in the Seine between 1870 and 1886. The analysis of the conflicts between teachers and head teachers shows how the history of the social relations constituting this field of primary education produced schemes of perception and practical dispositions hostile to relations of authority founded on hierarchical subordination.
      • "De l’engagement dans la fonction d’adjoint d’établissement scolaire à l’exercice partagé du pouvoir et de l’autorité": Simon Mallard, Gwénola Réto, and Rozenn Décret-Rouillard look at commitment to the role of deputy head of a school and the shared exercise of power and authority. The article focuses on the commitment of education and teaching professionals to the role of deputy head. Analysis of semi-structured interviews with deputy heads in public education brought out three salient axes: committing oneself in order to take on responsibilities, going through ordeals, and holding the role in order to become the head. These axes make it possible to characterize commitment to the deputy's role and to understand what appears to be the backbone of the function: the exercise of power and authority shared with the head of the school.
      • "La direction d’école primaire, une question de pouvoir ?": Cécile Roaux questions the notion of power in primary-school leadership.
      • "Le leadership : la fin d’un tabou et le début d’un mythe. Évolution des métiers de l’encadrement scolaire et de leur formation en Suisse romande": Laetitia Progin examines the evolution of school-management professions and their training in French-speaking Switzerland, placing leadership in perspective between expectations, myth, and reality. The article presents this evolution empirically, analyzing how school leaders' stance toward the call for leadership has shifted (between taboo and the emergence of a myth), notably under the effect of their training, and concludes with some elements of synthesis.
      • "Pression temporelle et situation de porte-à-faux. Regard socio-didactique sur les positionnements professionnels des conseillers pédagogiques de circonscription - « tuteurs terrain »": Stéphan Mierzejewski and Abdelkarim Zaid analyze the time pressure and awkward in-between positions experienced by district pedagogical advisers ("field tutors"), adopting a socio-didactic approach. The article tackles the dossier's general problem from the twofold standpoint of time pressure and the structural contradictions that characterize the evolution of the district pedagogical adviser's missions.

      The "Varia" section also offers articles that broaden the scope of reflection:

      • "Rythmes scolaires et éducation artistique : l’expérience de la danse à l’école": Sylvain Fabre reflects on how to accompany students in their artistic experiences, exploring the notion of rhythmic milieus.
      • "Valeurs de l’éducation et capitalisme contemporain : l’exemple de l’idéal d’autonomie": Renaud Hétier examines how capitalism transforms the values of education.
      • "Conceptions intuitives des notions de justice et de liberté : résultats d'une étude au collège dans le cours d'enseignement moral et civique": Emmanuel Sander, Géry Marcoux, et al. present the results of a study on middle-school students' intuitive conceptions of justice and freedom, conducted in the moral and civic education course. The study aims to identify the predominant conceptions of social justice and individual freedom held by middle-school students in priority education networks in France, and to assess their cognitive flexibility with respect to these notions.
    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer 1 (Public Review):

      O’Neill et al. have developed a software analysis application, miniML, that enables the quantification of electrophysiological events. They utilize a supervised deep learning-based method to optimize the software. miniML is able to quantify and standardize the analyses of miniature events, using both voltage and current clamp electrophysiology, as well as optically driven events using iGluSnFR3, in a variety of preparations, including in the cerebellum, calyx of Held, Golgi cells, human iPSC cultures, zebrafish, and Drosophila. The software appears to be flexible, in that users are able to hone and adapt the software to new preparations and events. Importantly, miniML is an open-source software free for researchers to use and enables users to adapt new features using Python.

      Overall this new software has the potential to become widely used in the field and an asset to researchers. However, the authors fail to discuss or even cite a similar analysis tool recently developed (SimplyFire), and determine how miniML performs relative to this platform. There are a handful of additional suggestions to make miniML more user-friendly, and of broad utility to a variety of researchers, as well as some suggestions to further validate and strengthen areas of the manuscript:

      (1) miniML relative to existing analysis methods: There is a major omission in this study, in that a similar open source, Python-based software package for event detection of synaptic events appears to be completely ignored. Earlier this year, another group published SimplyFire in eNeuro (Mori et al., 2024; doi: 10.1523/eneuro.0326-23.2023). Obviously, this previous study needs to be discussed and ideally compared to miniML to determine if SimplyFire is superior or similar in utility, and to underscore differences in approach and accuracy.

      We thank the reviewer for bringing this interesting publication to our attention. We have included SimplyFire in our benchmarking for comprehensive comparison with miniML. The approach taken by SimplyFire differs from miniML in a number of ways. Our results show that miniML provides higher recall and precision than SimplyFire (revised Figure 3). We appreciate that SimplyFire provides a user-interface similar to the commonly used MiniAnalysis software. In addition, the peak-finding-based approach of SimplyFire makes it relatively robust to event shape, which facilitates analysis of diverse data. However, we noted a strong threshold-dependence and long run time of SimplyFire (revised Figure 3 and Figure 3—figure supplement 1). In addition, SimplyFire is not robust against various types of noise typically encountered in electrophysiological recordings. Our extended benchmark analysis thus indicates that AI-based event detection is superior to existing algorithmic approaches, including SimplyFire.

      (2) The manuscript should comment on whether miniML works equally well to quantify current clamp events (voltage; e.g. EPSP/mEPSPs) compared to voltage clamp (currents, EPSC/mEPSCs), which the manuscript highlights. Are rise and decay time constants calculated for each event similarly?

      miniML works equally well for current-clamp and voltage-clamp events (Figure 5, Figure 9). In general, events of opposite polarity can be analyzed by simply inverting the data. Transfer learning models may further improve the detection.

      For each detected event, independent of data/recording type, rise times are calculated as 10–90% times (baseline–peak), and decay times are calculated as time to 50% of the peak. In addition, event decay time constants are calculated from a fit to the event average. With miniML being open-source, researchers can adapt the calculations of event statistics to their needs, if desired. In the revised manuscript, we have expanded the Methods section that describes the quantification of event statistics (Methods, Quantification).
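Those definitions can be illustrated on a synthetic event. This is a sketch only, not miniML's actual quantification code; the event shape and sampling rate are invented:

```python
import numpy as np

def event_kinetics(trace, sampling_rate, baseline_samples=10):
    """10-90% rise time (baseline to peak) and time to 50% decay of an
    upward-going event. Illustrative sketch, not miniML's implementation."""
    baseline = trace[:baseline_samples].mean()
    peak_idx = int(np.argmax(trace))
    amp = trace[peak_idx] - baseline

    # Rise: first samples crossing 10% and 90% of the amplitude before the peak.
    rise = trace[:peak_idx + 1] - baseline
    t10 = np.argmax(rise >= 0.1 * amp)
    t90 = np.argmax(rise >= 0.9 * amp)
    rise_time = (t90 - t10) / sampling_rate

    # Decay: first sample after the peak falling below 50% of the amplitude.
    decay = trace[peak_idx:] - baseline
    t50 = np.argmax(decay <= 0.5 * amp)
    decay_time = t50 / sampling_rate
    return rise_time, decay_time

# Synthetic event: 2 ms linear rise starting at 5 ms, then exponential
# decay with tau = 5 ms, sampled at 20 kHz.
fs = 20_000.0
t = np.arange(0, 0.05, 1 / fs)
rise_phase = np.clip((t - 0.005) / 0.002, 0.0, 1.0)
decay_phase = np.where(t > 0.007, np.exp(-(t - 0.007) / 0.005), 1.0)
event = rise_phase * decay_phase
rise_time, decay_time = event_kinetics(event, fs)
```

For this event the 10-90% rise time comes out near 1.6 ms (80% of the 2 ms ramp) and the 50% decay time near tau x ln 2 ≈ 3.5 ms, as expected.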

      (3) The interface and capabilities of miniML appear quite similar to Mini Analysis, the free software that many in the field currently use. While the ability and flexibility for users to adapt and adjust miniML for their own uses/needs using Python programming is a clear potential advantage, can the authors comment, or better yet, demonstrate, whether there is any advantage for researchers to use miniML over Mini Analysis or SimplyFire if they just need the standard analyses?

      Following the reviewer’s suggestion, we developed a graphical user interface (GUI) for miniML to enhance its usability (Figure 2—figure supplement 2), which is provided on the GitHub repository. Our comprehensive benchmark analysis demonstrated that miniML outperforms existing tools such as MiniAnalysis and SimplyFire. The main advantages are (i) increased reliability of results, which eliminates the need for visual inspection; (ii) fast runtime and easy automation; (iii) superior detection performance as demonstrated by higher recall in both synthetic and real data; (iv) open-source Python-based design. We believe that these advantages make miniML a valuable tool for researchers recording various types of synaptic events, offering a more efficient and reliable solution compared to existing methods.

      (4) Additional utilities for miniML: The authors show miniML can quantify miniature electrophysiological events both current and voltage clamp, as well as optical glutamate transients using iGluSnFR. As the authors mention in the discussion, the same approach could, in principle, be used to quantify evoked (EPSC/EPSP) events using electrophysiology, Ca2+ events (using GCaMP), and AP waveforms using voltage indicators like ASAP4. While I don’t think it is reasonable to ask the authors to generate any new experimental data, it would be great to see how miniML performs when analysing data from these approaches, particularly to quantify evoked synaptic events and/or Ca2+ (ideally postsynaptic Ca2+ signals from miniature events, as the Drosophila NMJ have developed nice approaches).

      In the revised manuscript, we have extended the application examples of miniML. We applied miniML to detect mEPSPs recorded with the novel voltage-sensitive indicator ASAP5 (Figure 9 and Figure 9—figure supplement 1). We performed simultaneous recordings of membrane voltage through electrophysiology and ASAP5 voltage imaging in rat cultured neurons at physiological temperature. Data were analyzed using miniML, with electrophysiology data being used as ground-truth for assessing detection performance in imaging data. Our results demonstrate that miniML robustly detects mEPSPs in current-clamp, and can localize corresponding transients in imaging data. Furthermore, we observed that miniML performs better than template matching and deconvolution on ASAP5 imaging data (Figure 9 and Figure 9—figure supplement 2).

      Reviewer 2 (Public Review):

      This paper presents miniML as a supervised method for the detection of spontaneous synaptic events. Recordings of such events are typically of low SNR, where state-of-the-art methods are prone to high false positive rates. Unlike current methods, training miniML requires neither prior knowledge of the kinetics of events nor the tuning of parameters/thresholds.

      The proposed method comprises four convolutional networks, followed by a bi-directional LSTM and a final fully connected layer which outputs a decision event/no event per time window. A sliding window is used when applying miniML to a temporal signal, followed by an additional estimation of events’ time stamps. miniML outperforms current methods for simulated events superimposed on real data (with no events) and presents compelling results for real data across experimental paradigms and species.

      Strengths:

      The authors present a pipeline for benchmarking based on simulated events superimposed on real data (with no events). Compared to five other state-of-the-art methods, miniML leads to the highest detection rates and is most robust to specific choices of threshold values for fast or slow kinetics. A major strength of miniML is the ability to use it for different datasets. For this purpose, the CNN part of the model is held fixed and the subsequent networks are trained to adapt to the new data. This Transfer Learning (TL) strategy reduces computation time significantly and more importantly, it allows for using a substantially smaller data set (compared to training a full model) which is crucial as training is supervised (i.e. uses labeled examples).

      Weaknesses:

      The authors do not indicate how the specific configuration of miniML was set, i.e. number of CNNs, units, LSTM, etc. Please provide further information regarding these design choices, whether they were based on similar models or if chosen based on performance.

      The data for the benchmark system was augmented with equal amounts of segments with/without events. Data augmentation was undoubtedly crucial for successful training.

      (1) Does a balanced dataset reflect the natural occurrence of events in real data? Could the authors provide more information regarding this matter?

      In a given recording, the event frequency determines the ratio of event-containing vs. nonevent-containing data segments. Whereas many synapses have a skew towards non-events, high event frequencies as observed, e.g., in pyramidal cells or Purkinje neurons, can shift the ratio towards event-containing data.

      For model training, we extracted data segments from mEPSC recordings in cerebellar granule cells, which have a low mEPSC frequency (about 0.2 Hz, Delvendahl et al. 2019). Unbalanced training data may complicate model training (Drummond and Holte 2003; Prati et al. 2009; Tyagi and Mittal 2020). We therefore decided to balance the training dataset for miniML by down-sampling the majority class (i.e., non-event segments), so that the final datasets for model training contained roughly equal amounts of events and non-events.
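The balancing step could look like this in outline (array sizes and the degree of imbalance are invented for illustration; this is not miniML's exact training code):

```python
import numpy as np

def balance_by_downsampling(segments, labels, seed=0):
    """Down-sample the majority class so that event and non-event
    segments occur in roughly equal numbers."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(labels == 1)  # event-containing segments
    neg = np.flatnonzero(labels == 0)  # non-event segments
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    keep = rng.choice(majority, size=len(minority), replace=False)
    idx = rng.permutation(np.concatenate([minority, keep]))
    return segments[idx], labels[idx]

# Hypothetical imbalanced data: only 50 of 1000 windows contain an event,
# as expected for a low mEPSC frequency.
rng = np.random.default_rng(1)
segments = rng.standard_normal((1000, 600))
labels = (np.arange(1000) < 50).astype(int)
bal_segments, bal_labels = balance_by_downsampling(segments, labels)
```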

      (2) Please provide a more detailed description of this process as it would serve users aiming to use this method for other sub-fields.

      We thank the reviewer for raising this point. In the revised manuscript, we present a systematic analysis of the impact of imbalanced training data on model training (Figure 1—figure supplement 2). In addition, we have revised the description of model training and data augmentation in the Methods section (Methods, Training data and annotation).

      The benchmarking pipeline is indeed valuable and the results are compelling. However, the authors do not provide comparative results for miniML for real data (Figures 4-8). TL does not apply to the other methods. In my opinion, presenting the performance of other methods, trained using the smaller dataset would be convincing of the modularity and applicability of the proposed approach.

      Quantitative comparison of synaptic detection methods on real-world data is challenging because the lack of ground-truth data prevents robust, quantitative analyses. Nevertheless, we compared miniML to common template-based and finite-threshold based methods on four different types of synapses. We noted that miniML generally detects more events, whereas other methods are susceptible to false-positives (Figure 4—figure supplement 1). In addition, we analyzed the performance of miniML on voltage imaging data (Figure 9). Simultaneous recordings of electrophysiological and imaging data allowed a quantitative comparison of detection methods in this dataset. Our results demonstrate that miniML provides higher recall for optical minis recorded using ASAP5 (Figure 9 and Figure 9—figure supplement 2; F1 score, Cohen’s d 1.35 vs. template matching and 5.1 vs. deconvolution).
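The matching step behind such recall/precision comparisons can be sketched as follows (illustrative event times and tolerance, not the paper's actual benchmark pipeline):

```python
import numpy as np

def detection_scores(true_times, detected_times, tolerance=0.001):
    """Greedily match detections to ground-truth events within a time
    tolerance, then compute recall, precision, and F1."""
    true_times = np.sort(np.asarray(true_times, dtype=float))
    detected = np.sort(np.asarray(detected_times, dtype=float))
    matched = np.zeros(len(detected), dtype=bool)
    tp = 0
    for t in true_times:
        # The nearest unmatched detection within tolerance counts as a hit.
        candidates = np.flatnonzero(~matched & (np.abs(detected - t) <= tolerance))
        if len(candidates):
            matched[candidates[np.argmin(np.abs(detected[candidates] - t))]] = True
            tp += 1
    recall = tp / len(true_times) if len(true_times) else 0.0
    precision = tp / len(detected) if len(detected) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return recall, precision, f1

# Hypothetical ground truth vs. detections (seconds): all four events
# found, plus one spurious detection at 1.80 s.
recall, precision, f1 = detection_scores(
    true_times=[0.10, 0.50, 0.90, 1.30],
    detected_times=[0.1002, 0.4999, 0.9008, 1.2995, 1.80],
)
```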

      Impact:

      Accurate detection of synaptic events is crucial for the study of neural function. miniML has a great potential to become a valuable tool for this purpose as it yields highly accurate detection rates, it is robust, and is relatively easily adaptable to different experimental setups.

      Additional comments:

      Line 73: the authors describe miniML as "parameter-free". Indeed, miniML does not require the selection of pulse shape, rise/fall time, or tuning of a threshold value. Still, I would not call it "parameter-free" as there are many parameters to tune, starting with the number of CNNs, and number of units through the parameters of the NNs. A more accurate description would be that as an AI-based method, the parameters of miniML are learned via training rather than tuned by the user.

      We agree that a deep learning model is not parameter-free, and this term may be misleading. We have therefore changed this sentence in the introduction as follows: "The method is fast, robust to threshold choice, and generalizable across diverse data types [...]"

      Line 302: the authors describe miniML as "threshold-independent". The output trace of the model has an extremely high SNR so a threshold of 0.5 typically works. Since a threshold is needed to determine the time stamps of events, I think a better description would be "robust to threshold choice".

      To detect event localizations, a peak search is performed on the model output, which uses a minimum peak height parameter (or threshold). Extreme values for this parameter do indeed have a small impact on detection performance (Figure 3J). We have changed the description in the introduction and discussion according to the reviewer’s suggestion.
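As a toy illustration of why this step is robust to the threshold choice (invented numbers; a minimal stand-in for a peak-finding routine such as scipy.signal.find_peaks):

```python
import numpy as np

def find_event_peaks(prediction, min_height=0.5):
    """Locate events as local maxima of the model output trace that
    exceed a minimum peak height."""
    p = np.asarray(prediction, dtype=float)
    is_local_max = (p[1:-1] > p[:-2]) & (p[1:-1] >= p[2:])
    peaks = np.flatnonzero(is_local_max) + 1
    return peaks[p[peaks] >= min_height]

# The model output is near 0 or near 1, so the exact threshold barely matters.
prediction = np.zeros(200)
prediction[48:53] = [0.2, 0.8, 0.99, 0.8, 0.2]    # one event
prediction[120:125] = [0.1, 0.5, 0.97, 0.5, 0.1]  # another event
peaks_lo = find_event_peaks(prediction, min_height=0.3)
peaks_hi = find_event_peaks(prediction, min_height=0.7)
```

Because event peaks sit close to 1 and the baseline close to 0, both thresholds recover the same two peaks, mirroring the flat region of Figure 3J.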

      Reviewer 3 (Public Review):

      miniML as a novel supervised deep learning-based method for detecting and analyzing spontaneous synaptic events. The authors demonstrate the advantages of using their methods in comparison with previous approaches. The possibility to train the architecture on different tasks using transfer learning approaches is also an added value of the work. There are some technical aspects that would be worth clarifying in the manuscript:

      (1) LSTM Layer Justification: Please provide a detailed explanation for the inclusion of the LSTM layer in the miniML architecture. What specific benefits does the LSTM layer offer in the context of synaptic event detection?

      Our model design choice was inspired by similar approaches in the literature (Donahue et al. 2017; Islam et al. 2020; Passricha and Aggarwal 2019; Tasdelen and Sen 2021; Wang et al. 2020). Convolutional and recurrent neural networks are often combined for time-series classification problems as they allow learning spatial and temporal features, respectively. Combining the strengths of both network architectures can thus help improve the classification performance. Indeed, a CNN-LSTM architecture proved to be superior in both training accuracy and detection performance (Figure 1—figure supplement 2). Further, this architecture requires fewer free parameters than comparable model designs using fully connected layers instead. The revised manuscript shows a comparison of different model architectures (Figure 1—figure supplement 2), and we added the following description to the text (Methods, Deep learning model architecture):

      "The combination of convolutional and recurrent neural network layers helps to improve the classification performance for time-series data. In particular, LSTM layers allow learning temporal features."

      (2) Temporal Resolution: Can you elaborate on the reasons behind the lower temporal resolution of the output? Understanding whether this is due to specific design choices in the model, data preprocessing, or post-processing will clarify the nature of this limitation and its impact on the analysis.

      When running inference on a continuous recording, we choose to use a sliding window approach with stride. Therefore, the model output has a lower temporal resolution than the raw data, which is determined by the stride length (i.e., how many samples to advance the sliding window). While using a stride is not required, it significantly reduces inference time (cf. Figure 2—figure supplement 1). We recommend a stride of 20 samples, which does not impact the detection of events. Any subsequent quantification of events (amplitude, area, risetimes, etc.) is performed on raw data. Based on the reviewer’s comment, we have adapted the code to resample the prediction trace to the sampling rate of the original data. This maintains temporal precision and avoids confusion.

      The Methods now include the following statement:

      "To maintain temporal precision, the prediction trace is resampled to the sampling frequency of the raw data."
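A minimal sketch of strided sliding-window inference plus that resampling step (`model_fn` is a hypothetical stand-in for the trained network; window and stride sizes are illustrative):

```python
import numpy as np

def predict_with_stride(data, model_fn, win=600, stride=20):
    """Score overlapping windows advanced by `stride` samples, then
    resample the strided prediction back to the raw sampling grid."""
    starts = np.arange(0, len(data) - win + 1, stride)
    scores = np.array([model_fn(data[s:s + win]) for s in starts])
    # The strided output has len(starts) points; interpolate to len(data).
    centers = starts + win // 2
    return np.interp(np.arange(len(data)), centers, scores)

# Toy stand-in model: windows whose mean exceeds a cutoff score high.
toy_model = lambda w: float(w.mean() > 0.5)
data = np.zeros(5000)
data[2000:2600] = 1.0  # one "event"
prediction = predict_with_stride(data, toy_model)
```

The prediction trace has one value per raw sample again, so detected peak positions index directly into the original recording.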

      (3) Architecture optimization: how was the architecture CNN+LSTM optimized in terms of a number of CNN layers and size?

      We performed a Bayesian optimization over a defined range of hyperparameters in combination with empirical hyperparameter tuning. We now describe this in the Methods section as follows:

      "To optimise the model architecture, we performed a Bayesian optimisation of hyperparameters. Hyperparameter ranges were chosen for the free parameters of all layers. Optimisation was then performed with a maximum number of trials of 50. Models were evaluated using the validation dataset. Because higher number of free parameters tended to increase inference times, we then empirically tuned the chosen hyperparameter combination to achieve a trade-off between number of free parameters and accuracy."

      Recommendations For The Authors

      Reviewing Editor (Recommendations For The Authors):

      Overall suggestions to the authors:

      (1) Directly compare miniML with SimplyFire (which was not cited or discussed in the original manuscript), with both idealized and actual data. Discuss the pros/cons of each software.

      We have conducted an extensive comparison between miniML and SimplyFire using both simulated and actual experimental data. This analysis is now presented in the revised Figure 3, Figure 3—figure supplement 1, and Figure 4—figure supplement 1. In addition, we have included relevant citations for SimplyFire in our manuscript. These additions provide a more comprehensive and balanced view of the available tools in the field, positioning our work within the broader context of existing solutions.

      (2) Generate a better user interface akin to MiniAnalysis or SimplyFire.

      We thank the editor and reviewers for the suggestion to improve the user interface. We have created a user-friendly graphical user interface (GUI) for miniML that is available on our GitHub repository. This GUI is now showcased in Figure 2—figure supplement 2 of the manuscript. The new interface allows users to load and analyze data through an intuitive point-and-click system, visualize results in real-time, and adjust parameters easily without coding knowledge. We have incorporated user feedback to refine the interface and improve user experience. These improvements significantly enhance the accessibility of miniML, making it more user-friendly for researchers with varying levels of programming expertise.

      Reviewer 1 (Recommendations For The Authors):

      Related to point (1) of the Public Review, we have taken the liberty to compare electrophysiological data using miniAnalysis, SimplyFire, and miniML. In our comparison, we note the following from our experience:

      (1.1) In contrast to both SimplyFire and miniAnalysis, miniML does not currently have a user-friendly interface where the user can directly control or change the parameters of interest, nor does miniML have a user control center, so the user cannot simply type in or select a mini manually. Rather, if any parameter needs to be changed, the user needs to read, understand, and change the original source code to generate the preferred change. This level of "activation energy", and the coding expertise it requires, which many researchers do not have, renders miniML much less accessible when directly compared to SimplyFire and miniAnalysis. Hence, unless miniML’s interface can be made more user-friendly, this is a major disadvantage, especially when compared to SimplyFire, which has many of the same features as miniML but with a much easier interface and user controls.

      As suggested by the reviewer, we have created a graphical user interface (GUI) for miniML. The GUI allows easy data loading, filtering, analysis, event inspection, and saving of results without the need for writing Python code. Figure 2—figure supplement 2 illustrates the typical workflow for event analysis with miniML using the GUI and a screenshot of the user interface. Code to use miniML via the GUI is now included in the project’s GitHub repository. The GUI provides a simple and intuitive way to analyze synaptic events, whereas running miniML as Python script allows for more customization and a high degree of automatization.

      (1.2) We compared electrophysiological miniature events between miniML, SimplyFire, and miniAnalysis. All three achieved similar mean amplitudes in "wild type" conditions and in conditions in which mini events were enhanced or diminished, so the overall means and utilities are similar, with miniML and SimplyFire being preferred given their flexibility and much faster analysis. We did note a few differences, however. SimplyFire tends to capture a higher number of mini events than miniML, especially in conditions of diminished mini amplitude (e.g., miniML found 76 events, while SimplyFire found 587). The mean amplitudes, however, were similar. It seems that in data with low SNR, SimplyFire captures many more events as real minis that are probably noise, while miniML is more selective, which might be an advantage of miniML. That being said, we found SimplyFire to be superior in many respects, not least the user interface and experience.

      We appreciate the reviewer’s thorough comparison of miniML, SimplyFire, and MiniAnalysis. While we acknowledge SimplyFire’s user-friendly interface, our study highlights several advantages of AI-based event analysis over conventional algorithmic approaches. Our updated benchmark analysis revealed better detection performance of miniML compared with SimplyFire (revised Figure 3), which had similar performance to deconvolution. As already noted by the reviewer, high false positive rates are a major issue of the SimplyFire approach. Although a minimum amplitude cutoff can partially resolve this problem, detection performance is highly sensitive to threshold setting (revised Figure 3). Another apparent disadvantage of SimplyFire is its relatively slow runtime (Figure 3—figure supplement 1). Finally, we have enhanced miniML’s accessibility by providing a graphical user interface that is easy to use and provides additional functionality.

      Some technical comments:

      (1) Improvements to miniML's dependency versions: There is a need to clarify the versions of Python and TensorFlow used in this study and on GitHub. We used Python version 3.8.19 to load the miniML model. However, with Python versions >= 3.9, as described on the GitHub page, it is difficult to install a matching h5py version. It is also inaccurate to state that Python >= 3.9 works, because the TensorFlow version for this framework needs to be around 2.13, whereas Python >= 3.10 only allows TensorFlow 2.16 to be installed. Therefore, as a Python framework, the dependency versions need to be specified on GitHub so that researchers can access the model and use the entire work.

      Thank you for highlighting this issue. We have now included specific version numbers in the requirements to avoid version conflicts and to ensure proper functioning of the code.
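      For illustration, a pinned environment of the kind discussed might look as follows; the exact pins below are assumptions extrapolated from the version constraints mentioned here (Python 3.9 with TensorFlow around 2.13), so the project's GitHub README remains the authoritative source:

```text
# requirements.txt -- illustrative pins only (assumed versions, not the
# official miniML list; consult the GitHub repository for the real one).
# Assumes a Python 3.9 interpreter.
tensorflow==2.13.0   # Python >= 3.10 would instead pull TensorFlow 2.16
h5py==3.8.0          # an h5py build compatible with this TF/Python pair
```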

      (2) Due to the intrinsic characteristics of the trained model, every model is only suitable for analyzing data with similar attributes. It is hard for researchers without a strong computer science background to train a new model themselves for their specific data. Therefore, it would be preferred if there were more available transfer learning models on GitHub accessible for researchers to adapt to their data.

      We would like to thank the reviewer for this feedback. Trained models (such as the default model) can often be used on different data (see, e.g., Figure 4, where data from four distinct synaptic preparations were analyzed with the base model, and Figure 5—figure supplement 1). However, changes in event waveform and/or noise characteristics may necessitate transfer learning to obtain optimal results with miniML. We have revised the description and tutorial for model training on the project’s GitHub repository to provide more guidance in this process. In addition, we now provide a tutorial on how to use existing models on out-of-sample data with distinct kinetics, using resampling. We hope these updates to the miniML GitHub repository will facilitate the use of the method.

      Following the suggestion by the reviewer, we have provided the transfer learning models used for the manuscript on the project’s GitHub repository to increase the number of available machine learning models for event detection. In addition, users of miniML are encouraged to supply their custom models. We hope that this will facilitate model exchange between laboratories in the future.

      Reviewer 3:

      I congratulate all authors for the convincing demonstration of their methodology; I do not have additional recommendations.

      We would like to thank the reviewer for the positive assessment of our manuscript.

      References

      Delvendahl, I., Kita, K., & Müller, M. (2019). Rapid and sustained homeostatic control of presynaptic exocytosis at a central synapse. Proceedings of the National Academy of Sciences, 116(47), 23783–23789. https://doi.org/10.1073/pnas.1909675116

      Donahue, J., Hendricks, L. A., Rohrbach, M., Venugopalan, S., Guadarrama, S., Saenko, K., & Darrell, T. (2017). Long-term recurrent convolutional networks for visual recognition and description. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4), 677–691. https://doi.org/10.1109/tpami.2016.2599174

      Drummond, C., & Holte, R. C. (2003). C4.5, class imbalance, and cost sensitivity: Why under-sampling beats over-sampling. https://api.semanticscholar.org/CorpusID:204083391

      Islam, M. Z., Islam, M. M., & Asraf, A. (2020). A combined deep CNN-LSTM network for the detection of novel coronavirus (COVID-19) using x-ray images. Informatics in Medicine Unlocked, 20, 100412. https://doi.org/10.1016/j.imu.2020.100412

      Passricha, V., & Aggarwal, R. K. (2019). A hybrid of deep CNN and bidirectional LSTM for automatic speech recognition. Journal of Intelligent Systems, 29(1), 1261–1274. https://doi.org/10.1515/jisys-2018-0372

      Prati, R. C., Batista, G. E. A. P. A., & Monard, M. C. (2009). Data mining with imbalanced class distributions: Concepts and methods. Indian International Conference on Artificial Intelligence. https://api.semanticscholar.org/CorpusID:16651273

      Tasdelen, A., & Sen, B. (2021). A hybrid CNN-LSTM model for pre-miRNA classification. Scientific Reports, 11(1). https://doi.org/10.1038/s41598-021-93656-0

      Tyagi, S., & Mittal, S. (2020). Sampling approaches for imbalanced data classification problem in machine learning. In P. K. Singh, A. K. Kar, Y. Singh, M. H. Kolekar, & S. Tanwar (Eds.), Proceedings of ICRIC 2019 (pp. 209–221). Springer International Publishing.

      Wang, H., Zhao, J., Li, J., Tian, L., Tu, P., Cao, T., An, Y., Wang, K., & Li, S. (2020). Wearable sensor-based human activity recognition using hybrid deep learning techniques. Security and Communication Networks, 2020, 1–12. https://doi.org/10.1155/2020/2132138

    1. Absolutely! Here is a detailed briefing document based on the provided text, highlighting the main themes and key ideas, with relevant quotations:

      Briefing Document: A Critique of the Ideology of Well-Being in Education

      Source: Philippe Meirieu, "Pourquoi il faut rompre avec l’idéologie du bien-être en éducation," Recherches en éducation, 57 (2025).

      Central Thesis:

      Philippe Meirieu's article criticizes the omnipresence of the ideology of well-being in contemporary education, arguing that it is both futile and potentially dangerous for children's development and emancipation.

      He proposes an alternative: a "pedagogy of bien-devenir" (becoming well) that recognizes the necessity of frustration, challenge, and confrontation with reality for authentic growth.

      Main Arguments:

      The Critique of Well-Being as an Absolute:

      Meirieu questions the idea that education should be primarily oriented toward the pursuit of well-being at any cost. He points out that this approach can lead to a form of hedonism and individualism in which other important values, such as effort, responsibility, and consideration for others, are sacrificed.

      Quotation: "Well-being has been turned into a kind of religion to which everything is sacrificed: without well-being, it now seems that life is impossible or unbearable."

      He adds that the exclusive pursuit of well-being can infantilize the child, depriving them of the experiences needed to develop resilience and the ability to face adversity.

      Quotation: "This shows how vain and dangerous is the quest for a well-being that would exempt our children from every trial and guarantee them a bliss untroubled by any setback."

      The Necessity of Frustration and Trials:

      The author argues that frustration is an inevitable, and indeed necessary, component of growth.

      Learning to face the resistance of things and of other people is essential to developing autonomy and critical thinking.

      Quotation: "For in order to grow, one must scale back one's expectations: things and people rarely bend to the whims and desires of the newcomer to the world; and entering this world is, always and inevitably, an apprenticeship in frustration."

      "Bien-Devenir" as an Alternative:

      Meirieu proposes replacing the ideology of well-being with a "pedagogy of bien-devenir." This approach emphasizes emancipation, the ability to project oneself into the future, to make informed choices, and to assume one's responsibilities.

      Quotation: "The Holy Grail of education is not, and cannot be, well-being: it is bien-devenir. It is what enables a subject to come to terms with what has made them, while also giving them the courage and the means not to be confined by it."

      The Importance of "Portage" (Holding) and of the Promise:

      To foster bien-devenir, educators must provide constant "portage" (support), offering children a safe space in which to explore, take risks, and learn from their mistakes.

      This also means keeping a "promise": not to abandon them, and to accompany them in their development.

      Quotation: "Every pedagogy of bien-devenir therefore requires that educators provide this holding (which is also, fundamentally, a sharing of humanity) from earliest childhood and throughout the child's development."

      The Child as a Being Both Unfinished and Complete:

      Meirieu stresses the importance of seeing the child as a being who is at once "unfinished" (requiring protection and accompaniment) and "complete" (entitled to be heard and to have their opinions respected).

      Quotation: "Here we touch on the very heart of any pedagogy of bien-devenir: a vision of the child as a being at once unfinished and complete."

      Pedagogical Implications:

      Educational practices must be rethought so that they are not focused solely on immediate well-being, but instead prepare children to face life's challenges.

      Educators should encourage risk-taking, experimentation, and learning from error, while providing constant support and a "promise" not to abandon the child.

      It is crucial to regard the child as capable of thinking for themselves, making choices, and assuming responsibilities, while offering them the necessary protection and accompaniment.

      Conclusion:

      Philippe Meirieu's article offers a nuanced and thought-provoking critique of the ideology of well-being in education.

      By arguing for a "pedagogy of bien-devenir," it invites educators to rethink their practices and to focus on emancipation, responsibility, and the capacity to face adversity, rather than on the mere pursuit of immediate happiness.

    1. However it remains low enough to guarantee optical transparency for cell observation by confocal microscopy, and our molds exhibit similar roughness to what was previously shown for similar mold printing techniques [23]. (bioRxiv preprint, posted January 29, 2025; https://doi.org/10.1101/2025.01.29.632980)

      I'm curious at what orientation you printed these molds? This paper showed a reduction in surface roughness when tilting multiple axes of the model: https://doi.org/10.1038/s41378-023-00607-y. Not sure if this same technique will translate between DLP and SLA though.

    1. Table 3. Stimuli Selections.

      | Stimuli Pair | Total Number of Selections | Square Selections | Kanizsa Selections | Control Selections | Significance at p = .05 |
      | --- | --- | --- | --- | --- | --- |
      | Kanizsa vs. Control | 5 | – | 5 | 0 | p < .05* |
      | Control vs. Square | 4 | 3 | – | 1 | p = .40 |
      | Kanizsa vs. Square | 7 | 5 | 2 | – | p = .29 |

      Note. Summed across the nine cats, each stimuli pair was presented 18 times. Asterisk (*) indicates significance at the p = .05 level. (G.E. Smith et al.)

      This table is essential to previewing the text because it shows what the authors used for their data: a straightforward table laying out the number of selections for each stimulus pairing.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Joint Public Review:

      Summary:

      The behavioral switch between foraging and mating is important for resource allocation in insects. This study investigated the role of the neuropeptide, sulfakinin, and of its receptor, the sulfakinin receptor 1 (SkR1), in mediating this switch in the oriental fruit fly, Bactrocera dorsalis. The authors use genetic disruption of sulfakinin and of SkR1 to provide strong evidence that changes in sulfakinin signaling alter odorant receptor expression profiles and antennal responses and that these changes mediate the behavioral switch. The combination of molecular and physiological data is a strength of the study. Additional work would be needed to determine whether the physiological and molecular changes observed account for the behavioral changes observed.

      Strengths:

      (1) The authors show that sulfakinin signaling in the olfactory organ mediates the switch between foraging and mating, thereby providing evidence that peripheral sensory inputs contribute to this important change in behavior.

      (2) The authors' development of an assay to investigate the behavioral switch and their use of different approaches to demonstrate the role of sulfakinin and SkR1 in this process provides strong support for their hypothesis.

      (3) The manuscript is overall well-organized and documented.

      Weaknesses:

      (1) The authors claim that sulfakinin acts directly on SkR1-positive neurons to modulate the foraging and mating behaviors in B. dorsalis. The authors also indicated in the schematic that satiation suppresses SkR1 expression. Additional experiments and a more detailed discussion of the results would help support these claims.

      (2) The findings reported could be strengthened with additional experimental details regarding time of day versus duration of starvation effects and additional genetic controls, amongst others.

      Recommendations for the authors:

      Major issues

      (1) As written the introduction is somewhat fragmented and does not lay out a clear rationale for the current study in the species used by the authors. Others, including Guo et al. (2021) and Wang et al. (2022), have previously shown that sulfakinin signaling pathways are important for feeding and receptivity regulation in D. melanogaster. Thus, the novelty of this study should be more clearly articulated.

      The introduction is significantly changed in the revision to improve the description of the rationale for the study (lines 60-66 in the revision).

      (2) In addition, the Introduction should provide more specific background information on the pheromonal activity of oriental fruit fly body extract, the odor-preferences, and the sex pheromone of this species compared to that of model insects such as Drosophila melanogaster.

      The revision contains an introductory paragraph on the chemical ecology of the oriental fruit fly as it relates to this study (lines 67-75).

      (3) It isn't clear what the first image in Figure 1C represents - is this a schematic of the area or does it represent data?

      Figure 1C and the associated figure caption are revised. The figure is more legible now that the track colors have been changed. The figure caption is revised as “Representative foraging trajectories in the 100 mm diameter arenas within a 15-min observation period of flies starved for different durations.”

      (4) The authors should include examples of the EAG recordings following the stimulation with food volatiles or pheromones, not only the results of their analyses. This could be included in the main figures or even in supporting information.

      As suggested, we added the examples of the EAG recordings following the stimulation with food odors and body extracts in the Figure 1 and Figure 3.

      (5) The demonstration that removal of the antennae severely impairs mating is dispensable because the antennae are required for other functions in addition to olfaction.

      We agree that the roles of the antennae are likely more than the olfactory function. As suggested, we removed the data.

      (6) It is currently difficult to understand how the authors measured the success rates of foraging. Please provide more details.

      In the revision, we added a sentence describing the measurement method in detail. See lines 269-273.

      (7) The expression of sulfakinin does not change significantly in the antennae following starvation (Figure 2A). Do the authors know whether they change in the central nervous system under these conditions? Have the authors (or has anyone else) checked the expression pattern of sulfakinin in the antennae? This information would help determine whether the sulfakinin signal that acts on SkR1 is released from neurons in the central nervous system (Figure S4C) or whether it is also released from the neurons in the olfactory organs. Based on the immunochemistry results shown in Figure S4C, it would also be interesting to determine whether the intensity of anti-sulfakinin immunoreactivity changes before versus after starvation. This could help establish whether sulfakinin is released during starvation.

      We added expression data showing that the mRNA level of Sk in the head is higher after refeeding in Fig. S3. The change in the expression of Sk is also added in the text (lines 107-110). We were unable to identify Sk neurons in the antennae, suggesting the possibility that humoral Sk acts directly on the antennae.

      (8) In Figure 2A, the authors show that the expression levels of some neuropeptides system components change during starvation. However, it would be helpful if the authors could include more detailed information on how the results are shown in the figure legends (e.g., the expression level of each candidate in fed flies was set as 1, etc).

      We revised the figure caption to explain the Figure 2 with the expression values in the figure legend.

      (9) In Figure 2D, null mutant males of sulfakinin and SkR1 consume more food at all times compared to the wild type. However, the corresponding mutant females consume more food only at night. Is this because the wild-type female flies eat more food during the day? In a related issue, Figure 2D shows differences in food consumption measured at different times of day, however, this is not directly addressed in the text, which instead mentions that "the amount of excess food consumed by the mutants was dependent on the duration of the starvation period in both sexes".

      Thank you for the important suggestions. We speculate that the difference in female feeding amounts occurring only at night is due to the high basal feeding rate of females during the daytime, which masks the increase in feeding in the knockouts of Sk signaling. As suggested, we have added a relevant description of the difference in food consumption. In addition, we changed the Y-axis scale in the figure for a justified comparison between males and females. See lines 123-128.

      (10) It isn't clear how the time of day relates to the duration of starvation. This suggests that mutant females only consume more at 21:00 (presumably at night) whereas males consume more throughout the day. Does this suggest an interaction with the circadian system? What is the duration of starvation in Figure 3A? In a related issue, in Figure 4 it would be useful to know what time of day the EAG analysis was done because the data shown in Figure 2D suggests that the time of day significantly impacts behavioral responses. And does the red versus blue color scheme of the OR subunits represent up/downregulated levels in wild-type animals? Please define this for the reader.

      This extends our response to point 9 on the issue of female feeding amounts. As the reviewer noted, there was indeed a diurnal difference in the amount of food consumed by B. dorsalis. However, whether this is related to circadian rhythms is something we have not studied in further depth. When measuring food intake at these three times of day, we ensured that the duration of starvation was always the same 12 h. The duration of starvation in Figure 3A is 12 h. We have mentioned this in the manuscript. See lines 267-268.

      The EAG responses to sex pheromones and body surface extracts were measured from 21:00-23:00, and responses to food odor were measured from 9:00-11:00. The times of the experiments are described in the revision. See lines 309-311.

      Accordingly, we revised the figure caption to explain the colored fonts. Red represents a set of ORs related to foraging and blue a set of ORs related to mating. Therefore, the ORs in red were upregulated in starved wild-type animals and the ORs in blue were downregulated in starved wild-type flies. We have defined this in the revised manuscript. See lines 672-673.

      (11) The authors convincingly show that SKR1 is present in the antennae and is co-expressed with orco. It would be useful to discuss whether this receptor is also expressed in other tissues where there may be additional sites of action of this pathway.

      Indeed, SkR1 is also expressed in the Drosophila brain. We added a discussion of the expression and additional sites of action of SkR1 within the central nervous system. See lines 200-205.

      (12) It isn't clear what the dotted arrows in the model shown in Figure 5 represent.

      Dashed arrows represent additional possible pathways that were not tested in this study but are not excluded by the model. Please see the discussion for details of additional possible factors modulating odorant sensitivity relevant to satiety. See lines 210-229.

      (13) In Figure 5, the authors indicate that satiation suppresses SkR1 expression. It would be helpful if the authors tested the expression level of SkR1 in re-fed flies (by feeding the flies after 12h starvation) to see whether levels of expression are rapidly restored to the levels seen in satiated animals. Such a result could further support the claims made by the authors.

      Thank you for your suggestions. Indeed, refeeding after 12 h of starvation significantly decreased SkR1. We added the result in the supporting information (Fig. S3). See line 713. For the results, see lines 107-110.

      (14) The authors show that locomotor activity is unaffected in the mutants but body size comparison would be more useful here since this could also contribute to baseline differences in meal size.

      In the revision, we provided a comparison between WT and Sk-/- flies in the supplementary data. The results show that mutant flies have the same body size as the WT flies (Fig. S7). See line 742; for the results, see lines 120-121.

      (15) Have the authors tested the behavioral phenotypes of heterozygotes mutant of both Sk and SkR1 flies? This may reveal whether a reduced expression of Sk-SkR1 will also cause significant changes in the foraging and mating behaviors seen during starvation.

      We tested the behavioral phenotypes of heterozygous Sk knockout flies. The results showed that foraging and mating behaviors of Sk heterozygous mutants were unaffected during starvation, suggesting the mutation is completely recessive. We have added the results in the supporting information (Fig. S8). See line 746; for the results, see lines 132-135.

      (16) It would be useful to provide information about which SK peptide is detected by the antibody used in Figure S4C. In Figures S4C and S5D, it would be useful to include a counterstain to show that the general morphology is unaffected in the mutants.

      As suggested, we added a detailed description of the rabbit anti-BdSk antibody. See lines 362-363. We have improved the background of the image so that the general structure is visible; counterstaining would therefore not be essential.

      (17) The figure legends for supporting figures need to be improved as they are currently difficult to understand. For example, in S2: what is the meaning of "different removal of antennae"? In S3: it isn't clear how the authors evaluated the responses in EAG experiments; in S4A: there are several DNA sequences that do not appear in the main text of the manuscript; in S4C: the meaning of the boxes and the dots is unclear, as is the figure to the left; in S5D, the authors explain only the suppression of SKR1, yet the figure indicates some images for SKR IHC. These are only a few examples; we ask that the authors revise and improve the legends for supporting figures.

      For S2, we removed the data as suggested. For S3, we added a sentence describing the method for the measurement in detail. See lines 707-709. For S4, the figure in the revision is significantly changed and a detailed description is added in the legend (lines 717-724 in the revision). For S5, we have improved our description. See lines 731-734. In addition, we have checked all the figure legends of our manuscript; the changes are displayed in the tracked-changes version.

      Minor issues

      (1) It isn't clear what the meaning of "the complexity of sulfakinin pathways" is. Please explain.

      We have rewritten the sentence in the revised manuscript by adding the description as “…complexity of Sk pathways, spatial and temporal dynamics and multiple ligands and receptors, is…”. See lines 61-65.

      (2) Please double-check the calls to the various figures in the text.

      We have double-checked the calls to all the figures in the text to make sure they were correct.

      (3) L125: What is the meaning of "olfactory reprogramming"? Please explain.

      We rephrased it to “alteration of olfactory sensitivities”. See line 145.

      (4) L135: After mentioning qRT-PCR the authors should include a call to a figure that shows these results.

      Thank you for your suggestion; the qRT-PCR results are shown in Figure 4B, and we have added the call as suggested. See line 154.

      (5) L270: Details are provided for the extraction of the pheromone. However, more details are needed on how the EAG and other functional assays were done.

      We have described the assay procedures in detail in the Materials and Methods section. See lines 298-311.

      (6) Figure 2B. Please remove the period(".") at the C-terminal end of WT sk.

      We are sorry for our mistake. We have corrected it.

    1. $x_k \leftarrow \underset{x \in P_y}{\arg\min}\,\Big\| \mu - \frac{1}{k}\Big[\varphi(x) + \sum_{j=1}^{k-1} \varphi(x_j)\Big] \Big\|$

      First, for all samples of class y (drawn from Dt ∪ Et−1), extract the features φ(x) and compute their mean feature vector µ. Iterative selection: next, m_y exemplars must be chosen for class y. In each iteration, one sample is selected from P_y so that the average feature of the exemplars chosen so far is as close as possible to µ. Concretely, at the k-th iteration, the selected sample x_k satisfies the expression above. This step means: taking the previously selected samples into account, pick the new sample such that, once it is added, the mean feature of all selected samples best approximates the overall mean µ.

      Building the exemplar set: repeat the steps above until m_y exemplars have been selected for class y, then collect these exemplars as pairs (x, y) to form the set E_y.
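      As a sketch, the iterative rule above (a herding-style selection) can be implemented directly; the function and variable names here are illustrative, not from the source:

```python
import numpy as np

def select_exemplars(features, m):
    """Herding-style exemplar selection (a sketch of the rule above).

    features: (n, d) array of feature vectors phi(x) for one class y.
    m: number of exemplars m_y to keep.
    Returns the indices of the selected exemplars, in selection order.
    """
    mu = features.mean(axis=0)           # class mean feature vector
    selected = []                        # indices of chosen exemplars
    running_sum = np.zeros_like(mu)      # sum of phi(x_j) chosen so far
    for k in range(1, m + 1):
        # distance of the candidate mean (1/k)[phi(x) + sum] to mu
        dists = np.linalg.norm(mu - (features + running_sum) / k, axis=1)
        dists[selected] = np.inf         # exclude already-selected samples
        best = int(np.argmin(dists))
        selected.append(best)
        running_sum += features[best]
    return selected
```

      Each iteration greedily keeps the running average of the selected features as close to µ as possible, which is why the early exemplars are the most "representative" ones.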

    1. Author response:

      The following is the authors’ response to the original reviews.

      We thank the reviewers for their valuable feedback and comments. Based on the feedback, we revised the manuscript and believe that we have addressed most of the points raised by the reviewers. Below we include a summary of key revisions and point-by-point responses to the reviewers' comments.

      Abstract/Introduction

      We further emphasized EP-GAN's strength in the inference of detailed neuron parameters vs. specialized models with reduced parameters.

      Results

      We further elaborated on the method of training EP-GAN on synthetic neurons and validating on both synthetic and experimental neurons.

      We added a new section Statistical Analysis and Loss Extension which includes:

      - Statistical evaluation of baseline EP-GAN and other methods on neurons with multiple recordings of membrane potential responses/steady-state currents: AWB, URX, HSN

      - Evaluation of EP-GAN with added resting potential loss + longer simulations to ensure stability of membrane potential (EP-GAN-E)

      Methods

      We added a detailed explanation of the "inverse gradient process".

      We added detailed current/voltage-clamp protocols for both synthetic and experimental validation and prediction scenarios (Table 6).

      Supplementary

      We added error distribution and representative samples for synthetic neuron validations (Fig S1)

      We added membrane potential response statistical analysis plots for existing methods for AWB, URX, HSN (Fig S6)

      We added steady-state currents statistical analysis plots on EP-GAN + existing methods for AWB, URX, HSN (Fig S7)

      We added mean membrane potential errors for AWB, URX, HSN normalized by empirical standard deviations for all methods (Table S4)

      Please see our point-by-point responses to specific feedback and comment below.

      Reviewer 1:

      First, at the methodological level, the authors should explain the inverse gradient operation in more detail, as the reconstructed voltage will not only depend on the evaluation of the right-hand side of the HH-equations, as they write but also on the initial state of the system. Why did the authors not simply simulate the responses?

      We thank the reviewer for the feedback regarding the need for further explanation. We have revised the Methods section to provide a more detailed description of the inverse gradient process. The process uses a discrete integration method, similar to Euler's formula, which takes the system's initial conditions into account. For the EP-GAN baseline, the initial states were picked soon after the start of the stimulus to reconstruct the voltage during the stimulation period. For EP-GAN with extended loss (EP-GAN-E), introduced in this revision in the sub-section Statistical Analysis and Loss Extension, initial states before/after stimulation were also taken into account to incorporate resting voltage states into the target loss.

      Since EP-GAN is a neural network and we want the inverse gradient process to be part of the training process (i.e., making EP-GAN a "model-informed network"), the process must be implemented as a differentiable function of the generated parameters p. This enables the derivatives of the reconstructed voltages to be traced back to all network components via the back-propagation algorithm.

      Computationally, this requires implementing the process as a combination of discrete array operations with "auto-differentiation", which allows automatic computation of derivatives for each operation. While explicit simulation of the responses using ODE solvers provides more accurate solutions, the algorithms used by these solvers typically support neither such specialized arrays nor neural network training. We thus utilized PyTorch tensors [54], which support both auto-differentiation and vectorization, to implement the process.
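      As a toy illustration of this discrete integration, the sketch below rolls out a single leaky-membrane equation with forward Euler steps; the parameterization (a leak conductance, reversal potential, and capacitance) is an illustrative stand-in for the full HH parameter set, and in the actual pipeline the same array operations would run on PyTorch tensors so that the rollout is differentiable with respect to the generated parameters p:

```python
import numpy as np

def euler_reconstruct(p, i_stim, v0, dt=1e-4):
    """Forward-Euler rollout of a simple leaky-membrane equation.

    p: (g_L, E_L, C) -- an illustrative reduced parameter set.
    i_stim: stimulus current at each time step.
    v0: initial state of the system (initial membrane voltage).
    Returns the voltage trace, one entry per step plus the initial state.
    """
    g_L, E_L, C = p
    v = np.empty(len(i_stim) + 1)
    v[0] = v0                        # initial condition enters here
    for t, i_t in enumerate(i_stim):
        dv = (-g_L * (v[t] - E_L) + i_t) / C
        v[t + 1] = v[t] + dt * dv    # one discrete Euler step
    return v
```

      Because the rollout is just elementwise arithmetic, swapping the NumPy arrays for autodiff-capable tensors lets the loss on the reconstructed trace propagate gradients back to the parameters.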

      The authors did not allow the models time to equilibrate before starting their reconstruction simulations, as testified by the large transients observed before stimulation onset in their plots. To get a sense of whether the models reproduce the equilibria of the measured responses to a reasonable degree, the authors should allow sufficient time for the models to equilibrate before starting their stimulation protocol.

      In the added sub-section Statistical Analysis and Loss Extension under the Results section, we added results for EP-GAN-E, for which we simulate the voltage responses with 5 seconds of added stabilization period at the beginning of the simulations. The added period mitigates the voltage fluctuations observed during the initial simulation phase, and we observe that the simulated voltage responses indeed reach a stable equilibrium both prior to stimulation and for the zero-stimulus current-clamp protocol (Figure 5 bottom, Column 3).

      In fact, why did the authors not explicitly include the equilibrium voltage as a target loss in their set of loss functions? This would be an important quantity that determines the opening level of all the ion channels and therefore would influence the associated parameter values.

      The EP-GAN baseline does include the equilibrium voltage as a target loss: all current-clamp protocols used in the study (both synthetic and experimental) include a membrane potential trace in which the stimulus amplitude is zero throughout the entire recording (see the added Table 6 for the current-clamp protocols). This forces EP-GAN to optimize the resting membrane potential alongside the non-zero-stimulus current-clamp scenarios.

      To further study EP-GAN’s accuracy on the resting potential, we supplemented EP-GAN with a resting-potential target loss and evaluated its performance in the sub-section Statistical Analysis and Loss Extension. The added loss, combined with 5 seconds of additional stabilization period, improved accuracy in predicting resting potentials by mitigating voltage fluctuations during the early simulation phase, and significantly improved predictions of AWB membrane potential responses, where the EP-GAN baseline overshot the resting potential.

      The authors should provide a more detailed evaluation of the models. They should explicitly provide the IV curves (this should be easy enough, as they compute them anyway), and clearly describe the time-point at which they compute them, as their current figures suggest there might be strong transient changes in them.

      We included predicted IV-curve vs ground truth plots in addition to the voltages in the supplementary materials (Figure S2, S5) in the original submitted version of the manuscript. In this revision, we added additional IV-curve plots with statistical analysis for the neurons with multi-recording data (AWB, URX, HSN) in the supplementary materials (Figure S7).

      For the evaluation of predicted membrane potential responses, we added further details in Validation Scenarios (Synthetic) under the Results section so that it clearly explains the current-clamp protocols used for both synthetic and experimental neurons and the time interval over which the RMSE evaluations were performed.

      In the sub-section Statistical Analysis and Loss Extension, we introduced new statistical metrics in addition to RMSE, applied to the neurons with multi-recording data (AWB, URX, HSN): the percentage of predicted voltages that fall within the empirical range (i.e., mean ± 2 std) and the voltage error normalized by the empirical standard deviation (Table S4).

      The authors should assess the stability of the models. Some of the models exhibit responses that look as if they might be unstable if simulated for sufficiently long periods of time. Therefore, the authors should investigate whether all obtained parameter sets lead to stable models.

      In the sub-section Statistical Analysis and Loss Extension, we included individual voltage traces generated by both the EP-GAN baseline and EP-GAN-E (extended) with longer simulations (+5 seconds) to assess stability. EP-GAN-E produces equilibrium voltages that are indeed stable and within empirical bounds throughout the simulations for the zero-stimulus current-clamp scenario (Column 3) for the 3 tested neurons (AWB, URX, HSN).

      Minor:

      The authors should provide a description of the model and its trainable parameters. At the moment, it is unclear which parameters of the ion channels are actually trained by the methodology.

      A detailed description of the model and its ion channels can be found in [7]. The supplementary materials also include an Excel table, predicted parameters, which lists all EP-GAN-fitted parameters for the 9 neurons included in the study (+3 new parameter sets for AWB, URX, HSN using EP-GAN-E), the labels for trainability, and their respective lower/upper bounds used during training data generation. In the revised manuscript, we further elaborate on this information in the second paragraph of the Results section.

      Reviewer 2:

      Major 1: While the models generated with EP-GAN reproduce the average voltage during current injections reasonably well, the dynamics of the response are not well captured. For example, for the neuron labeled RIM (Figure 2), the most depolarized voltage traces show an initial 'overshoot' of depolarization, i.e. they depolarize strongly within the first few hundred milliseconds but then fall back to a less depolarized membrane potential. In contrast, the empirical recording shows no such overshoot. Similarly, for the neuron labeled AFD, all empirically recorded traces slowly ramp up over time. In contrast, the simulated traces are mostly flat. Furthermore, all empirical traces return to the pre-stimulus membrane potential, but many of the simulated voltage traces remain significantly depolarized, far outside of the ranges of empirically observed membrane potentials. While these deviations may appear small in the Root mean Square Error (RMSE), the only metric used in the study to assess the quality of the models, they likely indicate a large mismatch between the model and the electrophysiological properties of the biological neuron.

      EP-GAN’s main contribution is targeted at parameter inference for detailed neuron model parameters in a compute-efficient manner. This is a difficult problem to address even with current state-of-the-art fitting algorithms. While EP-GAN is not perfect in capturing the dynamics of the responses, and RMSE does not fully reflect the quality of predicted electrophysiological properties, RMSE is a generic error metric for time series that is easily interpretable and applicable to all methods. Using this metric, our studies show that EP-GAN’s overall prediction quality exceeds that of existing methods when given identical optimization goals in a compute-normalized setup.

      In our revised manuscript, we included a new section, Statistical Analysis and Loss Extension, under the Results section, where we performed additional statistical evaluations (e.g., % of predicted responses within the empirical range) of EP-GAN’s predictions for neurons with multi-recording data. The results show that predicted voltage responses from the EP-GAN baseline (introduced in the original manuscript) are, in general, within the empirical range, with ~80% of its responses falling within ± 2 empirical standard deviations, which is higher than existing methods: DEMO (57.9%), GDE3 (37.9%), NSDE (38%), NSGA2 (60.2%).

      Major 2: Other metrics than the RMSE should be incorporated to validate simulated responses against electrophysiological data. A common approach is to extract multiple biologically meaningful features from the voltage traces before, during and after the stimulus, and compare the simulated responses to the experimentally observed distribution of these features. Typically, a model is only accepted if all features fall within the empirically observed ranges (see e.g. https://doi.org/10.1371/journal.pcbi.1002107). However, based on the deviations in resting membrane potential and the return to the resting membrane potential alone, most if not all the models shown in this study would not be accepted.

      In our original manuscript, because each of our neurons had only a single set of recording data, RMSE was chosen as the most generic and interpretable error metric. We conducted additional electrophysiological recordings for 3 neurons in prediction scenarios (AWB, URX, HSN) and performed a statistical analysis of the generated models in the sub-section Statistical Analysis and Loss Extension. Specifically, we evaluated the percentage of predicted voltage responses that fall within the empirical range (empirical mean ± 2 std, p ~ 0.05), encompassing the responses before, during, and after the stimulus (Figure 5, Table 5), and the mean membrane potential error normalized by the empirical standard deviation (Table S4).

      The results show that the EP-GAN baseline achieves an average of ~80% of its predicted responses falling within the empirical range, which is higher than the other methods: DEMO (57.9%), GDE3 (37.9%), NSDE (38%), NSGA2 (60.2%). Supplementing EP-GAN with an additional resting-potential loss (EP-GAN-E) increased the percentage to ~85%, with noticeable improvements in reproducing dynamical features for AWB (Figure 5). Evaluations of membrane potential errors normalized by empirical standard deviations showed similar results: the EP-GAN baseline and EP-GAN-E have average errors of 1.0 std and 0.7 std respectively, outperforming DEMO (1.7 std), GDE3 (2.0 std), NSDE (3.0 std), and NSGA (1.5 std) (Table S4).
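The two statistics described above can be sketched as follows. This is an illustration on toy numbers, not data from the paper, and the function name is ours:

```python
# Sketch of the two statistics discussed above, on toy data:
# (1) percentage of predicted voltage samples inside the empirical mean ± 2 std band,
# (2) mean absolute voltage error normalized by the empirical std.
# All values are illustrative, not taken from the paper.

def coverage_and_norm_error(predicted, emp_mean, emp_std):
    inside = sum(1 for p, m, s in zip(predicted, emp_mean, emp_std)
                 if m - 2 * s <= p <= m + 2 * s)
    coverage = 100.0 * inside / len(predicted)
    norm_err = sum(abs(p - m) / s
                   for p, m, s in zip(predicted, emp_mean, emp_std)) / len(predicted)
    return coverage, norm_err

pred     = [-40.0, -38.0, -30.0, -44.0]   # predicted voltages (toy)
emp_mean = [-41.0, -40.0, -39.0, -42.0]   # empirical mean trace (toy)
emp_std  = [  2.0,   2.0,   2.0,   2.0]   # empirical std trace (toy)
cov, err = coverage_and_norm_error(pred, emp_mean, emp_std)
print(cov, round(err, 2))  # 75.0 1.75
```

Here the third sample falls outside its ± 2 std band, so coverage is 75% and the normalized error averages the per-sample deviations in units of the empirical std.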

      Major 3: Abstract and introduction imply that the 'ElectroPhysiome' refers to models that incorporate both the connectome and individual neuron physiology. However, the work presented in this study does not make use of any connectomics data. To make the claim that ElectroPhysiomeGAN can jointly capture both 'network interaction and cellular dynamics', the generated models would need to be evaluated for network inputs, for example by exposing them to naturalistic stimuli of synaptic inputs. It seems likely that dynamics that are currently poorly captured, like slow ramps, or the ability of the neuron to return to its resting membrane potential, will critically affect network computations.

      In the paper, EP-GAN is introduced as a parameter estimation method that can aid the development of the ElectroPhysiome, which is a network model; these are two different types of methods, and we do not claim that EP-GAN is a model that can capture network dynamics. To avoid possible confusion, we made further clarifications in the abstract and introduction that EP-GAN is a machine learning approach for neuron HH-parameter estimation.

      I find it hard to believe that the methods EP-GAN is compared to could not perform any better. For example, multi-objective optimization algorithms are often successful in generating models that match empirical observations very well, but features used as target of the optimization need to be carefully selected for the optimization to succeed. Likely, each method requires extensive trial and error to achieve the best performance for a given problem. It is therefore hard to do a fair comparison. Given these complications, I would like to encourage the authors to rethink the framing of the story as a benchmark of EP-GAN vs. other methods. Also, the number of parameters does not seem that relevant to me, as long as the resulting models faithfully reproduce empirical data. What I find most interesting is that EP-GAN learns general relationships between electrophysiological responses and biophysical parameters, and likely could also be used to inspect the distribution of parameters that are consistent with a given empirical observation.

      We thank the reviewer for providing this perspective. While it is indeed difficult to have a completely fair comparison between existing optimization methods vs EP-GAN due to the fundamental differences in their algorithms, we believe that the current comparisons with other methods are justified as they provide baseline performance metrics to test EP-GAN for its intended use cases.

      The main strength of EP-GAN, as previously mentioned, is its ability to efficiently navigate large, detailed HH-models with many parameters, so that it can aid in the development of nervous system models such as the ElectroPhysiome, potentially fitting hundreds of neurons in a time-efficient manner.

      While EP-GAN’s ability to learn the general relationship between electrophysiological responses and the parameter distribution is indeed interesting and warrants a more careful examination, it is not the main focus of the paper, since in this work we concentrate on introducing EP-GAN as a methodology for parameter inference.

      In this context, we believe the comparisons with other methods, conducted in a compute-normalized manner (i.e., each method is given the same number of simulations) and with identical optimization targets, provide an adequate framework for evaluating the aforementioned EP-GAN aim. Indeed, while EP-GAN excels with larger HH-models, it performs slightly worse than DE for smaller models such as the one used by [16], despite being more compute efficient (Table S2).

      To emphasize the EP-GAN aim, we revised the main manuscript description to focus on its intended use in parameter inference of detailed neuron parameters vs specialized models with reduced parameters.

      I could not find important aspects of the methods. What are the 176 parameters that were targeted as trainable parameters? What are the parameter bounds? What are the remaining parameters that have been excluded? What are the Hodgkin-Huxley models used? Which channels do they represent? What are the stimulus protocols?

      A detailed description of the HH-model that we use, its development, and its ion channel list can be found in [7]. The supplementary materials also include an Excel table, predicted parameters, which lists all EP-GAN-fitted parameters for the 9 neurons (+3 new parameter sets for AWB, URX, HSN using EP-GAN-E), the labels for trainability, and the parameter bounds used during the generation of training data.

      We also added a new table detailing the current/voltage clamp protocols used for the 9 neurons, including those used for evaluating EP-GAN-E, which was supplemented with longer simulation times to ensure voltage stability (please see Table 6).

      I could not assess the validation of the EP-GAN by modeling 200 synthetic neurons based on the data presented in the manuscript since the only reported metric is the RMSE (5.84mV and 5.81mV for neurons sampled from training data and testing data respectively) averaged over all 200 synthetic neurons. Please report the distribution of RMSEs, include other biologically more relevant metrics, and show representative examples. The responses should be carefully investigated for the types of mismatches that occur, and their biological relevance should be discussed. For example, is the EP-GAN biased to generate responses with certain characteristics, like the 'overshoot' discussed in Major 1? Is it generally poor at fitting the resting potential?

      We thank the reviewer for the feedback regarding the need for additional supporting data for the synthetic neuron validations. In the revised supplementary materials (Figure S1), we included the distribution of RMSE errors for both groups of synthetic neuron validations (validation/test set) and representative samples for both the EP-GAN baseline and EP-GAN-E. Notably, the inaccuracies observed in the experimental neuron predictions (e.g., resting potential, voltage overshoot) do not necessarily generalize to synthetic neurons, indicating that such mismatches could stem from the differences between the synthetic neurons used for training and the experimental neurons used for predictions. While synthetic neurons are generated according to empirically determined parameter bounds, some experimental neuron types are rarer than others and may also involve channels that have not been recorded or modeled in [7], which can affect the quality of the predicted parameters (see the 2nd and 4th paragraphs of the Discussion section for more detail). Also, properties such as recording error/noise, which are often present in experimental neurons, are not fully accounted for in synthetic neurons.

      To further study how these mismatches can be mitigated, in the revision we added an extended version of EP-GAN in which the target loss was supplemented with an additional resting-potential loss and 5 seconds of stabilization period during simulations (EP-GAN-E, described in Statistical Analysis and Loss Extension). With these extensions, EP-GAN-E improved its accuracy on both resting potentials and dynamical features, with the most notable improvements on AWB, where the predicted voltage responses closely match the slowly rising voltage response during stimulation. EP-GAN-E is an example of further extensions of the loss function that account for additional experimental features.

      Furthermore, the conclusion of the ablation study ('EP-GAN preserves reasonable accuracy up to a 25% reduction in membrane potential responses') does not seem to be justified given the voltage traces shown in Figure 3. For example, for RIM, the resting membrane potential stays around 0 mV, but all empirical traces are around -40mV. For AFD, all simulated traces have a negative slope during the depolarizing stimuli, but a positive slope in all empirically observed traces. For AIY, the shape of hyperpolarized traces is off.

      Since the EP-GAN baseline optimizes voltage responses during the stimulation period, RMSE was also evaluated with respect to this period. From these errors, we evaluated whether the predicted voltage error for each ablation scenario fell within 2 standard deviations of the mean error obtained from the synthetic neuron test data (i.e., the baseline performance). We found that for input ablation of voltage responses, the error was within this range up to a 25% reduction, whereas for steady-state current input ablation, the 25%, 50%, and 75% reductions all resulted in errors within the range.

      We extended the “Ablation Studies” sub-section so that the above reasoning is better communicated to the readers.

      Additionally, I found a number of minor issues:

      Minor 1: Table 1 lists the number of HH simulations as '32k (11k · 3)'. Should it be 33k, since 11,000 times 3 is 33,000? Please specify the exact number of samples.

      Minor 2: x- and y-ticks are missing in Fig 2, Fig 3, Fig S1, Fig S2, Fig S3 and Fig S4.

      Minor 3: All files in the supplementary zip file should be listed and described.

      Minor 4: Code for training the GAN, generation of training datasets and for reproducing the figures should be provided.

      Minor 5: In the reference (Figure 3A, Table 1 Row 2): should this refer to Table 2?

      Minor 6: 'the ablation is done on stimulus space where a 50% reduction corresponds to removing half of the membrane potential responses traces each associated with a stimulus.' - which half is removed?

      We thank the reviewer for pointing out these errors in the original manuscript. The revised manuscript includes corrections for these items. We will publish the Python code reproducing the results in a public repository in the near future.

    1. Here is a summary of the interview with Grégoire Borst, with the corresponding timestamps:

      • Introduction []
        • The host introduces Grégoire Borst, professor of developmental psychology and cognitive neuroscience, and director of the laboratory of developmental psychology and child education (Lapsid), a CNRS laboratory.
        • Borst completed his PhD in psychology in 2005, then spent 4 years as a postdoctoral researcher at Harvard before returning to France in 2010.
        • Lapsid is the first French laboratory of scientific psychology, founded 135 years ago.
      • The laboratory's research []
        • The laboratory studies the role of control mechanisms, automatisms, conflict detection, and doubt in the cognitive and socio-emotional development of children and adolescents, as well as in school learning, combining behavioral approaches and neuroimaging.
        • The goal is to integrate different levels of explanation, from genetics to social and cultural contexts.
        • Borst explains that he is interested in differences between individuals, and that adolescence is the period when heterogeneity is greatest.
        • He stresses the importance of combining approaches from psychology, neuroscience, linguistics, computer science, sociology, economics, and didactics to understand child development.
      • Outreach and publications []
        • Borst works with the education sector and is a member of the International Bureau of Education at UNESCO.
        • He is the author of numerous scientific articles and books, including "Le cerveau et les apprentissages" and "Explore ton cerveau" with Olivier Houdé.
        • He also mentions "Mon cerveau questions-réponses" for children under 10 and "C'est pas moi, c'est mon cerveau", written with Mathieu Cassoti, for adolescents.
        • The latter book explains how the adolescent brain works through 14 everyday situations, using games, quizzes, and stories.
      • Brain development during adolescence []
        • The brain develops very early, from the first days after fertilization, and keeps transforming long after birth.
        • The brain is not structured like an adult brain until age 20 to 25.
        • Brain plasticity allows brains to transform throughout life.
        • A study on learning to juggle shows how acquiring new skills transforms the brain.
        • The human brain contains about 86 billion neurons connected by a million billion connections.
        • Adolescence is a period of strong brain plasticity that lasts 10 to 12 years.
        • The onset of puberty marks the beginning of this period, with the reopening of the perineuronal net, which increases brain plasticity.
        • Brain development is asynchronous: the limbic system (emotions, reward) matures earlier than the prefrontal cortex (regulation).
        • This lag makes emotion regulation difficult and drives impulsivity in adolescents.
      • Delayed gratification and risk-taking []
        • The delayed gratification task (the marshmallow test) shows children's capacity for self-control.
        • The ability to delay gratification predicts future educational success better than social background or IQ.
        • Adolescence is a period of higher mortality than childhood, owing to suicide risk and risky behavior.
        • Adolescents are more sensitive to rewards than children and adults, as shown by an MRI experiment.
        • Adolescents weigh the cost-benefit ratio differently, giving more weight to reward, especially social reward.
        • Social bonds with peers become paramount during adolescence.
        • Risky behaviors, such as the fire challenge, are a way of maximizing one's social standing.
        • Adolescence is a period when regulating emotions is difficult, like an emotional pressure cooker without a valve.
        • Adolescents are more oriented toward immediate rewards, especially in groups.
        • A simulated driving experiment shows that adolescents take more risks when observed by their peers.
        • Prevention campaigns must take adolescent psychology into account.
      • Social influence and altruism []
        • Adolescents are highly susceptible to influence from others and more prone to social conformity.
        • Altruism becomes more strategic during adolescence, with resources shared mainly with friends.
        • The adolescent brain has a great capacity for learning but is also very vulnerable.
        • Drinking alcohol before age 15 has irreversible effects on brain development, as does cannabis, which can cause a loss of IQ points.
        • Since COVID, 40% of adolescents show depressive symptoms.
      • Screens and adolescents []
        • The idea that screens are responsible for falling IQ, attention disorders, concentration difficulties, psychological problems, and addiction is false.
        • There is no addiction to screens, nor a direct link between social media and depression.
        • Screens can even have a positive effect on the development of intelligence between ages 8 and 10 and on the development of empathy.
        • Screens mainly have a negative impact on sleep and sedentary behavior.
        • The artificial light from screens disrupts the secretion of melatonin, the sleep hormone.
        • It is recommended to turn screens off at least an hour before going to bed.
        • School schedules are not suited to adolescents' sleep rhythm, which is phase-shifted by 2 hours.
        • The first class of the day should be pushed back an hour to respect their physiological rhythm.
        • An Élysée report on children and screens offers 29 recommendations.
      • Questions and answers []
        • Hormones and physical transformations partly explain adolescents' fatigue and clumsiness.
        • Studies on screens are not yet conclusive enough, and the question of content is complex.
        • Cerebral adolescence is universal, but cultural and social differences may exist.
        • The impact of screens on social relationships is nuanced.
        • Rewards should not be used to motivate teenagers; instead, their intrinsic motivation should be developed.
        • The 3-6-9-12 guidelines are useful reference points but are not based on solid scientific studies.
        • Above all, France needs a parenting-education pathway explaining how children and adolescents develop.
        • It is essential to develop psychosocial skills from childhood onward to prevent mental health problems.
    1. By the late 1980s, however, acupuncture anesthesia had largely disappeared from both biomedical and traditional Chinese hospitals. The reason for this decline, I was told by acupuncturists and surgeons who once worked together to perform the procedure, was that it was “less effective” than biomedical techniques.7 More importantly, according to these acupuncturists and surgeons, two decades of laboratory and clinical research had failed to produce any conclusive “scientific” explanation,

      Acupuncture anesthesia had largely disappeared from both biomedical and traditional Chinese hospitals by the late 1980s. The reasons given for this decline were that it was reportedly "less effective" than biomedical techniques and that two decades of laboratory and clinical research had failed to produce a conclusive scientific explanation. Acupuncture's exclusion from mainstream medical practice as China modernized and prioritized "advanced" technologies in line with Western standards reflected the nation's growing emphasis on biomedicine and globally recognized scientific achievement over traditional practices.

    1. Challenge 1: Time-Series Verification. Description: You have a time-series dataset with many columns. The challenge is to make sure that all the data contain exactly the same columns. This

      Data from esolmet, and 3 extra points if anyone comes up with an extra check.
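A minimal, stdlib-only sketch of the column check described in the challenge; the function names are ours, and it only reads each file's header row:

```python
# Sketch: verify that several time-series CSV files share exactly the same columns.
# csv.reader only needs the header row, so even large files are checked cheaply.
import csv

def column_sets(paths):
    """Return {path: set of column names} by reading only each file's header row."""
    cols = {}
    for path in paths:
        with open(path, newline="") as f:
            cols[path] = set(next(csv.reader(f)))
    return cols

def check_same_columns(paths):
    """Report files whose columns differ from the first file's columns."""
    cols = column_sets(paths)
    reference = cols[paths[0]]
    mismatches = {p: {"missing": sorted(reference - c), "extra": sorted(c - reference)}
                  for p, c in cols.items() if c != reference}
    return mismatches  # an empty dict means all files match
```

For the extra points, one additional check could verify that the timestamp column is strictly increasing with a constant step, which catches gaps and duplicated rows.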

    1. No more guessing about the capacity you need: if you opt for too little capacity when deploying a workload, you can end up with expensive idle resources or dealing with the performance implications of limited capacity. With cloud computing services, these problems can go away. You can use as much capacity as you need and scale to

      hello