RRID:Addgene_12259
DOI: 10.1007/s00424-024-03030-y
Resource: RRID:Addgene_12259
Curator: @scibot
SciCrunch record: RRID:Addgene_12259
RRID:Addgene_12260
DOI: 10.1007/s00424-024-03030-y
Resource: RRID:Addgene_12260
Curator: @scibot
SciCrunch record: RRID:Addgene_12260
RRID:Addgene_24219
DOI: 10.1007/s00424-024-03030-y
Resource: RRID:Addgene_24219
Curator: @scibot
SciCrunch record: RRID:Addgene_24219
the ending pulls the accent ahead with it: MO-dern, but mo-DERN-ity, not MO-dern-ity. That doesn’t happen with WON-der and WON-der-ful, or CHEER-y and CHEER-i-ly. But it does happen with PER-sonal, person-AL-ity.
This is one of the most irritating things in the English language to me. Studying Japanese, one of the first things you're taught is that no syllable is stressed more than any other when saying a word. It's a mora-timed language, whereas English is a stress-timed language. Mora-timed languages don't put stress on particular syllables the way we do in English. By changing the way we use a word, and thereby changing our intonation, the language just keeps getting more confusing; stress one syllable wrong and everyone in a three-mile radius will go, "Why did you say that like that?"
Here is a time-stamped summary of the topics covered in the video, based on the transcript provided:
n1(a, b, abn), n2(a, abn, aa), n3(abn, b, bb), n4(aa, bb, y)
n1: a mynand2 connected to a, b, and abn
instances of a variable
module mynand2(input logic a, b, output logic y);
  assign y = ~(a & b);
endmodule
module for mynand2:n4
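A quick sanity check of the four-NAND network listed above (a sketch, not from the source material; it assumes mynand2's positional ports are the two inputs followed by the output, matching the module definition above, and that the network should compute y = a XOR b):

# Hypothetical check of the n1..n4 mynand2 network described above.
def nand(x, y):
    return 1 - (x & y)

for a in (0, 1):
    for b in (0, 1):
        abn = nand(a, b)    # n1(a, b, abn)
        aa = nand(a, abn)   # n2(a, abn, aa)
        bb = nand(abn, b)   # n3(abn, b, bb)
        y = nand(aa, bb)    # n4(aa, bb, y)
        assert y == (a ^ b), (a, b, y)
print("n1..n4 network matches a XOR b for all inputs")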
y = s ? d1 : d0;
endmodule
[Schematic residue: a 2-to-1 multiplexer with data inputs d0[3..0] and d1[3..0], select input s, and output y[3..0].]
When s = 1, y = d1; when s = 0, y = d0.
"No se puede aprender filosofía, sólo se puede aprender a filosofar" Immanuel Kant
Que grande eres Daniel! Muchas gracias por haber dado esta forma tan perfecta a este espacio. Espero poder ayudar y que me ayudeis. Os leo con cariño. Siempre dispuesto al debate, Vicente
Ese vecino simpático y agradable que siempre saluda en el descansillo y de quién sus vecinas aún no pueden creer que hubiese raptado, asesinado o violado a alguien.
Decía Basilio Martín Patiño en una entrevista hacia el final de su vida que recordaba que cuando los nazis llegaron exiliados a Salamanca tras la guerra mundial las señoras con hijas solteras los colmaban de regalos e invitaciones.
Vivimos a través del consumo
Y desde Debord ya están los cimientos para entender la democracia como un bien de consumo. De hecho, Alicia Valdés y Curtis Yarvin, uno de los filósofos más importantes de la Ilustración Oscura, comparten homónimo para definir el espectáculo debordiano - la Realidad Política. Valdés se alinea en la izquierda lacaniana; Yarvin en el aceleracionismo monárquico anti-woke. Sin embargo, a pesar de sus diferencias, ambos entienden la democracia como un objeto de consumo simbólico propio de la Realidad, que contraponen a lo Real, a lo no simbolizable, siguiendo a Lacan.
marcaba la aceptación acrítica de órdenes de sus superiores para la masacre de millones de personas y que hoy ha seguido fermentado en el contexto contemporáneo en la obediencia ciega a preceptos del capitalismo.
Sobre esto, Milanovic habla sobre la tensión entre vivir en una democracia a nivel espectacular/político/mediático/simbólico y, al tiempo, en una dictadura, ya que las empresas para las que uno trabaja todos los días no son democráticas. Quizá debido a esa tensión más de la mitad de los británicos de menos de 30 años desean una dictadura https://www.elperiodico.com/es/internacional/20250128/mitad-jovenes-britanicos-muestran-favor-dictadura-113777020
Here is a detailed synthesis document covering the main themes and key ideas from the excerpts you provided, with relevant quotations:
Briefing Document: Analysis of the Conseil d'État Colloquium on the Notion of the General Interest
Introduction
This document synthesizes the main points addressed during the Conseil d'État colloquium on the notion of the general interest, drawing on the excerpts provided.
The colloquium explored the nature, evolution, and challenges of this central notion of public law, examining its application by the administrative judge and its interaction with private interests, civil liberties, and the action of the various actors involved, including local authorities and civil society.
Main Themes and Key Ideas
The general interest as the transcendence of private interests, and the role of the State
Synthesis and perspectives: The general interest is presented as a principle that transcends individual and sectional interests. It is the mission of the State, in particular through public authority, to pursue ends that are binding on everyone, including taking future generations into account. However, the general interest is not a monolithic notion opposed to fundamental rights and private interests, which it in fact encompasses.
Quotation: "transcending private interests by entrusting the State, the public authority, with the mission of pursuing ends that are binding on all individuals... in this conception the State alone is capable not only of achieving, where necessary, a synthesis of the interests expressed within civil society, but also of helping to overcome sectional egoisms and to take the interests of future generations into account"
The general interest as the foundation of administrative action and of judicial review
Basis of public power: The general interest justifies the administration's prerogatives of public power. It is the foundation of the judge's jurisdiction and an essential reference in the judicial review of administrative acts, notably in the area of public contracts.
Review by the judge: The administrative judge plays a central role in ensuring respect for the general interest, both upstream (verifying legality) and downstream (reviewing the consequences of decisions). The judge weighs the various interests at stake (public and private), taking proportionality, financial cost, and so on into account.
Quotation: "the general interest lies behind the possibilities offered to the judge in the proceedings... before pronouncing its decision of termination or of total or partial annulment, the judge of the contract is required... to verify that it will not cause excessive harm to the general interest"
Evolution and Plasticity of the Notion of the General Interest
Variability: The general interest is by its very nature variable and adaptable; it is not fixed and is progressively enriched. Administrative case law has played an important role in shaping its definition, in particular through the balance-sheet (bilan) doctrine.
Quotation: "the general interest is characterized above all by its variability and its plasticity... the administrative judge frequently identifies a public interest attached to the protection of the environment, to regional planning, or to the conduct of economic projects..."
Introduction of the balance-sheet doctrine: The judge weighs the various public and private interests against the drawbacks of a project in order to decide on its usefulness. This doctrine marks an evolution in the assessment of the general interest.
Quotation: "the evolution of its review has led the administrative judge to refine its conception of the general interest through a progressive enrichment of the notion, up to the introduction of the balance-sheet doctrine... the judge weighs the various public and private interests and the drawbacks of a project in order to accept or reject its usefulness"
Explicit statement of grounds: The judge often spells out the grounds of general interest at stake so that the legislature can modify the framework of assessment if necessary.
L’Intérêt Général dans le Contexte Européen
Influence du droit européen : Le contrôle de conformité des normes internes au droit européen conduit le juge à examiner l'objet des dispositions nationales et sa proportionnalité, ainsi que l’impératif de respecter les objectifs fixés par le législateur européen (ex : lutte contre la pollution, réduction des émissions de gaz à effet de serre).
Citation : «le juge administratif est conduit à jouer un rôle de plus en plus marqué pour contraindre les les autorités nationales à respecter les objectifs d'intérêt général fixé par le législateur européen»
L'Intérêt Général et l'Office du Juge
Intégration de l'intérêt général : Le juge administratif intègre de manière croissante la recherche de l'intérêt général dans sa propre démarche, notamment en se prononçant comme juge de plein contentieux.
Extension des pouvoirs du juge : L'intérêt général est de plus en plus pris en compte dans la définition de l'office du juge, notamment dans les cas où une atteinte grave à l'intérêt général permet de ne pas prononcer une suspension même si les conditions légales sont remplies.
Citation : «la monté de l'intérêt général dans l'ofice du juget administratif est en quelque sorte le reflet de l'extension de ces pouvoirs discrétionnaires pour s'autoréguler dans leur usage »
Diversité des Formes de l’Intérêt Général et Expressions Afférentes
Nuances terminologiques : L’intérêt général est distingué d'autres notions telles que les intérêts fondamentaux de la nation, l'intérêt national, régional, local ou public et l'utilité publique. Ces distinctions témoignent de la complexité de la notion d'intérêt général.
Citation : « Il arrive dans la jurisprudence de parler...de l'intérêt général en tant que tel mais on voit apparaître des précisions sur l'intérêt général et on peut citer de jurisprudence sur l'intérêt national, l'intérêt local... » Rôle des collectivités locales : L'intérêt général n'est pas l'apanage de l'État, les collectivités territoriales jouent également un rôle clé dans sa mise en œuvre au niveau local.
Citation : « l'intérêt général s'il si on veut qu'il soit comment dire accepté par les intérêts particuliers notamment les intérêts individuels euh passe à la fois par un intérêt général appliqué plus localement et avec une décision locale plus forte... »
Les expressions de l'intérêt particulier : L’intérêt particulier n'est pas toujours incompatible avec l'intérêt général.
Les Défis et les Tensions autour de l'Intérêt Général Défis opérationnels : Il est difficile de concilier l'intérêt général avec les contraintes opérationnelles des différents acteurs, par exemple dans le domaine de la sécurité publique (police). Les problématiques liées à la protection des données, au terrorisme et aux libertés individuelles sont soulevées.
Citation : « l'intérêt général n'a sans doute pas été apprécié de la même façon en 2010 et en 2013 avant et après les attentats commis par Mohamed Merin... » Attentes citoyennes : Il est souligné que les citoyens ne comprennent plus les délais importants dans les procédures. De plus, certains territoires (ruralité, outre-mer) sont confrontés à des défis spécifiques en termes d'accès aux services publics.
L’évolution des droits individuels : L'exacerbation des droits individuels peut conduire à une opposition et une incompréhension de l'intérêt général.
Citation : « les intérêts particuliers s'expriment aujourd'hui avec une évolution de des droits individuels qui qui s'exacerb non plus sous la forme d'expression de droit mais mais sous la forme... d'une expression parfois parfois violente et qui non seulement comment dire s'oppose à l'intérêt général mais même le conteste et au fond ne le comprend plus »
Place de la société civile : La question de la légitimité de la société civile à contribuer à la définition de l'intérêt général est soulevée, notamment face aux risques de partialité. Le rôle des associations et autres acteurs non-étatiques est abordé.
L’Intérêt Général et la Protection des Consommateurs
L’intérêt du consommateur comme intérêt général : L’accès à des prix raisonnables pour les consommateurs est considéré comme un intérêt général, notamment en matière de fourniture d’énergie et de santé.
Citation : « l'intérêt du consommateur d'avoir accès à des prix raisonnables se trouve érigé comme intérêt général » Protection des consommateurs : L’intérêt général peut justifier des limitations à la liberté d'entreprendre pour protéger la santé et la sécurité des consommateurs.
Conclusion
Le colloque met en lumière la complexité et l'évolution de la notion d'intérêt général, en insistant sur son rôle central dans le droit administratif et l'action publique.
La discussion souligne la nécessité d'une approche nuancée, tenant compte des multiples enjeux et des divers acteurs impliqués dans sa mise en œuvre.
La nature dynamique de l’intérêt général, son adaptation aux nouveaux défis (environnement, sécurité, économie, etc.) et la légitimité des acteurs contribuant à sa définition restent des questions centrales.
Ce document de briefing fournit une base solide pour la compréhension des discussions du colloque.
Il permet d'appréhender les tensions et les enjeux liés à l'intérêt général dans le contexte contemporain.
Here is a detailed synthesis document covering the main themes and key ideas from the sources you provided, with relevant quotations:
Synthesis Document: Conseil d'État Colloquium on the Notion of the General Interest
Introduction
This document synthesizes the discussions and reflections arising from the Conseil d'État colloquium on the notion of the general interest.
The aim is to capture the complexity of this notion, its role in administrative action and litigation, and its interaction with individual rights and liberties.
The document explores the different perspectives on the general interest, from its definition and application by the administration to its interpretation and review by the judge.
I. The Central Role of the General Interest
Compass of Administrative Action: The general interest is presented as the "compass of administrative action" (quotation from the introduction). It is the fundamental justification for the intervention of public authorities.
Role of the Administrative Judge: The administrative judge is the guarantor of the administration's respect for the general interest.
The judge verifies that administrative action is justified by a relevant objective of general interest and that powers are used for the purpose for which they were conferred (thus avoiding "misuse of power").
The judge is the one who, in the last resort, identifies the "substance of the general interest".
Upstream Identification: Identifying the general interest is not limited to the judge's intervention.
The administration must itself, upstream, take a position on the general interest it is pursuing.
The legal affairs directorates of the ministries play an advisory role in this identification.
European and International Perspective: The notion of the general interest is shaped by domestic law but also by European and WTO law, which creates "difficulties" owing to the differences between these conceptions.
II. The Complexity and Evolution of the Notion
A Poorly Defined Notion: The general interest is described as a "notion that is difficult to pin down" and "poorly defined in domestic law". This imprecision makes it delicate to handle.
Grounds Invoked: The administration invokes general grounds (safeguarding the fundamental interests of the nation, economic public order, proper use of public funds) or sectoral ones (protection of health, consumer protection, protection of the environment).
Case law and practice encourage invoking "several grounds" to justify a measure.
Proportionality Review: Proportionality review is central to examining the legality of administrative measures. It involves verifying that the interference with a right or liberty is not disproportionate to the general interest pursued.
Hierarchy of Interests: The question of a hierarchy among different general interests is raised, notably with the example of the coastal-protection law (loi littoral). There is a "work of reconciliation" between these interests.
III. General Interest and Fundamental Rights: Tension and Balance
Pre-eminence of the General Interest: The administrative judge reserves the right to give the collective interest precedence over private interests through "the balancing method".
Growing Tension: There is a growing tension between the general interest and individual rights and liberties, with the latter taking an ever more important place. It is becoming "more and more difficult to have a pre-eminent general interest prevail before the administrative judge".
Reluctance of the Judge: The administrative judge seems less inclined than before to favor the general interest over individual interests. "As if their supreme objective were no longer... to give priority to an insurmountable general interest, but to offer greater guarantees to citizens."
A More Demanding Review: The judge is led to examine "more minutely" the characterization of general interest put forward by the administration, "the fact that the general interest can no longer be an authoritative pronouncement draped ex ante in the garments of truth".
IV. Taking the General Interest into Account in the Act of Judging
An Actor of the General Interest: The administrative judge is itself an actor of the general interest, its action contributing to the defense of that interest. The act of judging itself is part of this logic.
Concrete Definition: The judge defines the general interest "in a very concrete way" and "case by case". It sometimes takes "short-term considerations" into account.
Balance and Proportionality: Once the general interest has been identified, the judge seeks a "reasonable balance" with private interests. The review is increasingly "demanding" and "refined".
Plasticity of the Notion: The plasticity of the notion of the general interest allows the judge to bring forth "new and fruitful legal constructions".
Objective of Legal Certainty: The modulation of the temporal effects of decisions is founded on a requirement of "legal certainty".
V. Consequences of the Judge's Decisions
Modulation of Effects: The administrative judge modulates the effects of its decisions over time in order to limit the impact on the general interest. This power has become less exceptional.
General-Interest Reservations: The judge can limit the execution of its decisions in the name of the general interest, granting the administration a "kind of immunity from execution".
Regularization During Proceedings: Regularization during proceedings, although somewhat apart, contributes to taking the consequences of decisions into account by making it possible to avoid annulment for an illegality that can be corrected.
A Pragmatic Judge: The administrative judge is an "increasingly pragmatic judge", mindful of the consequences of its decisions.
VI. The General Interest and Fundamental Rights: An Entanglement
Fundamental Rights and Limits: Fundamental rights set limits on the notion of the general interest, but they can also give rise to positive obligations on the State for the safeguarding of a general interest.
Fundamental Rights as a Factor of Normativity: The integration of fundamental rights entails formal and substantive requirements in the drafting of norms, as well as flexibility in their application.
Evolution of the Office of the Judge: The rise of fundamental rights has led to an evolution of the judge's role, which increasingly involves arbitrating between fundamental rights and the general interest; this draws criticism of that role, notably from politicians.
VII. General Interest versus Economic Liberties and Prisoners' Rights
Pre-eminence of the General Interest: The general interest often prevails over economic liberties (the right to property and the freedom to conduct a business), which is characteristic of the French model and its interventionism.
A Delicate Balance with Prisoners' Rights: The administrative judge must strike a delicate balance between the general interest (prison public order) and respect for prisoners' fundamental rights (dignity, private life).
The case law is marked by a concrete examination of situations and an adaptation to circumstances.
Rigorous Review: In the case of prisoners, the case law reveals a rigorous application of the balancing of interests, especially where important stakes such as human dignity are involved.
VIII. The European Convention on Human Rights and the General Interest
No Head-On Opposition: The European Court of Human Rights conceives of the general interest as including respect for fundamental rights, which avoids a head-on opposition.
Balance and Proportionality: The Court verifies the existence of a fair balance between individual rights and the general interest, taking the context of each case into account.
The General Interest as a Collective Framework: The general interest must be viewed as a framework that goes beyond mere individual interest and seeks to guarantee a balance within society.
Subsidiarity: The Court intervenes on a subsidiary basis, after domestic remedies have been exhausted, in order to verify that a fair balance has been struck; where it finds a violation, it recognizes harm both to the fundamental right and to the general interest.
Conclusion
The Conseil d'État colloquium highlights the complexity of the notion of the general interest.
It shows that this notion is constantly evolving, shaped by societal issues, legal developments, and judicial interpretation.
The balance between the general interest and individual rights remains a permanent challenge for the administration and for the judge, in a context where the consideration of fundamental rights is increasingly prominent.
Far from being a fixed notion, the general interest is a dynamic concept that is constantly being redefined and must always be approached in a concrete and contextual manner.
the Great Acceleration, a second stage of the Anthropocene Age that they dated to the mid-twentieth century. Writing in 2007, Steffen et al. noted how "nearly three-quarters of the anthropogenically driven rise in CO2 concentration has occurred since 1950 (from about 310 to 380 ppm), and about half of the total rise (48 ppm) has occurred in just the last 30 y
global modernisms temporally more in line with GA
A detailed synthesis document covering the main themes and key ideas from the interview with Emmanuelle Piquet, including direct quotations to illustrate her points:
Synthesis Document: "How to reduce conflicts with teenagers?" - Analysis of the interview with Emmanuelle Piquet
Introduction:
The interview with Emmanuelle Piquet, a psychotherapist specializing in adolescence, explores the complex dynamics of conflicts between parents and teenagers, emphasizing the need to rethink the parental approach.
Far from a negative view of adolescence, Piquet proposes a perspective centered on autonomy, communication, and adaptation.
Main Themes:
Adolescence as a Quest for Autonomy:
Piquet emphasizes that adolescence is above all a period of transition between childhood and adulthood, in which the aspiration to autonomy is central.
Quotation: "The most interesting definition is to say that it is a moment when, indeed, they are moving from childhood to adulthood and when [...] they want autonomy."
This quest for autonomy is often perceived as a challenge to parental authority, leading to tensions.
Quotation: "I think something about our authority is being challenged. Precisely because, as they are in this search for autonomy [...], well, they throw us a little off balance."
Suffering as a Key Indicator:
Piquet insists on the importance of suffering as an indicator of a dysfunctional parent-teenager relationship.
Quotation: "I think that as soon as someone in the relationship is suffering, it means that the relationship is not satisfactory."
She takes a non-normative approach, considering that if a relationship works for everyone involved, there is no reason to intervene, even if the behaviors may seem odd.
Quotation: "If we find that people are doing extremely odd things but it nevertheless seems entirely satisfactory on both sides, then we don't touch it."
Exercising Authority vs. Being an Authority:
Piquet distinguishes two ways of exercising authority: through force and domination, or by being an authority, that is, by becoming a trusted interlocutor for the teenager.
Quotation: "There is a first way, which consists of imposing a certain number of things by force [...], and then there is another way, which is the one I call 'being an authority', that is, being the adult the teenager wants to talk to."
Being an authority means not imposing one's point of view, but offering a space for exchange and support.
Rigidity as a Cause of Conflict:
The psychotherapist observes that the relationships that generate the most suffering are often characterized by rigidity, where parents struggle to adapt their rules and expectations to the teenager's development.
Quotation: "Often, what I see in relationships that create suffering is that one of the two parties locks itself into a rigidity [...], and when you are that rigid in a relationship you are like a kind of glass statue."
She insists on the need for a flexible relationship, in which the rules keep evolving to accompany the progression toward autonomy.
"Helicopter" Parents and Responsibility:
Piquet criticizes "helicopter" parents, overprotective and controlling, who paradoxically send their children a double message: "I love you" and "you are not capable".
Quotation: "The first is: I love you. [...] And the second is: you are not capable."
She emphasizes responsibility: "I will always be there for you, but I am not going to do it for you."
The Importance of Unconditional Listening to Emotions:
Piquet stresses that it is crucial for parents to welcome their teenagers' emotions, even negative ones, without minimizing or judging them.
Quotation: "I think they know better than we do what they are feeling. And telling someone 'you are not feeling things correctly' is extremely violent."
She advises sharing one's own experiences to normalize the teenager's emotions.
The Brick Wall of Conflict:
Parent-teenager conflicts often build a "brick wall" that blocks communication.
It is essential that parents take the first step by removing a few bricks, showing an openness to communication without reproach, in order to create a space for dialogue.
Quotation: "It is up to you to remove a few little bricks of your own [...] and to say, through the hole you have made: whatever happens, if at some point you ever feel like talking to me again, there will be no reproaches."
Parental Fears:
Parents of teenagers are often overwhelmed by fears: drugs, delinquency, prostitution, social exclusion.
Piquet points out that these fears, often projected, do not always correspond to reality.
Quotation: "The homeless thing, my son is going to end up homeless [...], that is really something that is terrifying."
The Therapeutic Approach: the "180-Degree Turn" and "Biodegradable Therapists"
Piquet and her team use the approach of the Palo Alto school, which consists of helping people stop doing what feeds the problem and try the opposite.
This is a "180-degree turn".
Quotation: "The Palo Alto school consists of helping people stop doing what they are doing that feeds the problem. And sometimes, [...] it is the parent of the teenager who, listening only to their worry and their love, does a number of things that do not work."
They see themselves as "biodegradable therapists", seeking to have as little direct impact on the teenager as possible and supporting the parents so that they become the agents of change.
Quotation: "What we really like is to leave practically no trace. That is why our first intention is really to work with the parent, without seeing the child."
Key Ideas:
Adolescence is not an illness but a necessary period of transformation.
Autonomy is the key to accompanying the teenager toward adulthood.
The parent-teenager relationship must be flexible and constantly evolving.
Parents must learn to trust their child.
Parents must welcome their children's emotions and not minimize their suffering.
Conflict can be overcome if parents take the first step.
Communication is essential, even when it is difficult.
Parents should focus on who the teenager is becoming rather than on their current appearance or behavior.
It is important that parents also get support to help them through this difficult phase.
Conclusion:
The interview with Emmanuelle Piquet offers a refreshing perspective on adolescence and the conflicts it generates.
By emphasizing autonomy, listening, and adaptation, she proposes an approach that aims to turn parent-teenager relationships into calmer, more rewarding experiences.
She reminds us that rigidity, control, and the denial of the teenager's emotions are often the main drivers of conflict.
This document can be used to inform, raise awareness, and provide concrete guidance for parents of teenagers, educators, and anyone interested in this phase of life.
A detailed analysis of the sources you provided to me, in the form of a briefing document.
BRIEFING DOCUMENT: Analysis of the debate on social mixing in schools
Introduction
This document synthesizes the main themes and arguments of a debate on social and school mixing, which brought together political figures, experts, and practitioners.
The debate, moderated by a journalist specializing in the social sciences, addressed the issues of school segregation in France, drawing on sociological and economic analyses.
Key themes and main ideas
The importance of diversity and openness
Quotation: "even so there were far fewer of them, but I find it very important for having different points of view, for opening up to the world, opening up to others, and well, I think that's a good thing" (19:19-19:25).
The debate stresses that diversity of viewpoints is essential to the development of individuals and to the functioning of a harmonious society. Social mixing is seen as a source of enrichment and openness to the world.
Isolation in homogeneous environments is criticized. It is necessary not to "be only among, well, in the same milieu all the time, in order to be able to open up to something else and be more open to, uh, to what is going on" (23:23-23:37).
School as a political and social issue
Quotation: "this is about politics in the noble sense, it is about the future, it is about society, or more precisely about building a society together" (30:46-30:54)
Education is presented as a major political issue, even as "the most powerful weapon to change the world" (31:10-31:16).
School is a place where society is built and where fundamental collective questions arise: "opening the door of a school is always, in a way, taking the pulse of our social contract" (36:28-36:39).
The goal is not only to achieve baccalaureate pass rates, but to ensure that "every child can find their place in society" (32:09).
The reality of school segregation and its consequences
French schools face a reality of different "speeds", even of "ghettoization", with schools "for the poor" and schools "for the rich" (37:20-37:33).
School segregation produces "school avoidance" (37:38), a tendency to "stay among one's own" (37:38), and "parental stress" (37:38) linked to the stakes of diplomas and academic tracking.
This phenomenon affects the whole of France (54:07). There is a "school separatism, a separatism of destiny" (54:07) that undermines "the whole of France" (54:13).
The causes of school segregation
Segregation is not only geographical.
It is also linked to social and historical factors and to mechanisms of avoidance and parental strategies. It is not enough to look at the statistics; one must "analyze precisely what lies behind them socially, I would even say historically" (46:04).
The choice of school is a "very individual decision" (1:42:53) but one with collective consequences, and the tendency to "stay among one's own" (1:13:17) is found in other areas of society.
The school catchment map on its own is ineffective: "we stuck with the idea that it was the public authorities, at the top of the State, who would find the solution for all territories, and we settled for a single lever, the school catchment map, and in fact that doesn't work" (56:05-56:23).
Experiments and possible solutions
Local experiments, launched from 2015 onward, aimed to adapt social-mixing solutions to the realities of each territory: "we are going to launch a series of experiments with a number of local authorities... with a panel of solutions to be implemented at the territorial level" (56:34-56:47).
These experiments included closing "ghetto" middle schools in order to redistribute pupils across other, more socially mixed schools (57:50-58:15).
The attractiveness of schools, particularly struggling ones, is an important lever (1:16:25-1:16:37), via "sport-study, charm, theater options, etc." (1:16:55-1:17:00) that must benefit all pupils (1:22:57-1:23:04).
The importance of "educational cities" (cités éducatives) was emphasized, to get "all the actors of a territory cooperating around what can be called the extracurricular factors of school success" (1:04:21-1:04:39).
We must "take care to genuinely mix populations, including in the way schools themselves are built" (55:03-55:16).
The stability of teaching teams is also important (1:10:36). The need to strengthen work with parents and to improve pupil guidance is also highlighted (1:04:39-1:05:42, 2:07:30-2:07:42).
The role of private schools and the need for regulation
The debate raises the question of the public funding of private education and its role in school segregation (1:13:31-1:13:37, 1:26:17-1:26:23).
There is talk of "quid pro quos" (1:16:13-1:16:19) to be demanded of private schools financed by public funds, the argument being that private actors must be associated with public-service missions.
It is also pointed out that school heads make individual choices in recruiting their pupils, without any transparency about the criteria.
The need for regulation and for an assessment of private schools is emphasized, in particular the need to ask what families are financing through their enrollment fees and whether that money is genuinely put to the service of the public missions of teaching and training (2:28:42-2:29:34).
Private education cannot be regarded as a "miracle solution". "The state of socio-educational separatism is not solely the doing of private education" (1:26:35-1:26:39).
The importance of a scientific, objective approach
The contribution of researchers and of the scientific council was essential in the experiments carried out (1:45:14-1:45:36).
We must rely on objective data and analyses to understand the dynamics at play and propose suitable solutions. Objectifying the parameters (2:22:39) makes it possible to "move forward" (2:22:44).
It is important to "bring in science with an objective eye that both listens to people's lived reality and at the same time is able to put words on it" (1:45:08-1:45:19).
Points of tension and disagreement
Tensions emerge over the question of private education and its regulation.
There are disagreements over the evaluation of the public policies pursued and the effectiveness of the experiments.
The question of the resources allocated to the schools in greatest difficulty is a source of debate.
Recommendations and outlook
Political will: It is necessary to show political will and to state clearly the ambition of a more inclusive school system.
Observatories and tools: Setting up local observatories and measurement tools (IPS) is necessary to better understand the dynamics of segregation and to adapt public policies.
Policy evaluation: Rigorous monitoring and evaluation of the policies implemented are essential in order to measure their impact and adjust the measures.
Quantified objectives: It is proposed to set quantitative social-mixing targets, for public and private schools alike.
Territories and experiments: The importance of a territorial approach and of suitably adapted experiments is emphasized.
Closing ghetto schools: The need to close "ghetto" schools is raised as one way to break the logic of segregation.
Resources and attractiveness: The need for additional resources is underlined, particularly in struggling schools, as is the need to develop those schools' attractiveness.
Working with families: It is essential to work closely with families in order to remove the obstacles to social mixing.
Going beyond ideological divides: The debate showed that it is necessary to move beyond ideological divides in order to focus on the solutions best suited to each local situation.
Acting together: Social mixing is an issue that concerns society as a whole. All actors must be mobilized: politicians, teachers, parents, and citizens, for a fairer, more inclusive school.
Conclusion
The debate underscores the urgency of acting against school segregation, which stands in the way of building a fairer, more egalitarian society. It highlights the complexity of the problem, the importance of a multi-factor approach, and the need for a strong political commitment to build a school system in which social mixing is a reality.
Summary: This text transcribes a round table on social and school mixing in France, organized by the Conseil Économique, Social et Environnemental (CESE).
The debate, moderated by a journalist, highlights the finding of growing school segregation, with schools increasingly separated along social lines, creating a widening gap between advantaged and disadvantaged pupils.
Several participants, including experts and local elected officials, discuss the causes of this phenomenon and possible solutions, emphasizing the need for a precise, territorialized diagnosis and proposing concrete measures such as creating observatories of school mixing, implementing policies of pedagogical attractiveness, and involving local authorities.
The discussion also addresses the complex role of private education in this segregation and the need for cooperation between public and private actors.
The goal is to identify levers for action toward a fairer, more inclusive school that contributes to greater social cohesion.
Here is a time-stamped summary based on the transcription of the sources:
This time-stamped summary highlights the main points addressed during this discussion on social and school mixing, the disagreements, and the possible solutions.
https://emcore.ucsf.edu/ucsf-software
Traceback (most recent call last):
  File "/home/ubuntu/dashboard/py/create_release_tables.py", line 54, in format_anno_for_release
    parsedanno = HypothesisAnnotation(anno)
  File "/home/ubuntu/dashboard/py/hypothesis.py", line 231, in __init__
    self.links = row['document']['link']
TypeError: string indices must be integers
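The error indicates that row['document'] is sometimes a plain string (or missing) rather than the dict the parser expects. A minimal defensive sketch, assuming row is the raw annotation record handled by HypothesisAnnotation; safe_document_links is a hypothetical helper, not part of the existing hypothesis.py:

# Hypothetical guard against annotations whose 'document' field is a string
# or absent instead of the expected dict, which triggers the TypeError above.
def safe_document_links(row):
    document = row.get('document') if isinstance(row, dict) else None
    if isinstance(document, dict):
        # 'link' may be missing; fall back to an empty list
        return document.get('link', [])
    return []

# Inside HypothesisAnnotation.__init__ one could then write:
# self.links = safe_document_links(row)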
My parents are named Joyce and Shi
I am not sure this person is Mei (in the image above). She looks different from the previous "Meis".
Constructing Self-efficacy measurements
This section is part of the previous one. It does not develop measurements; rather, it proposes a way of evaluating, and some criticisms are developed here.
Whereas the motivation and affect elements capture the self-reactiveness dimension of agentic self-regulation, self-reflectiveness is principally explained by the individual's level of self-efficacy. As part of the intentionality of human agency, this concept presupposes forethought in its structure. In that sense, self-efficacy is not merely a self-perception of the ability to execute an action but an orchestration, or continued improvisation, of multiple skills to manage the ever-changing situations encountered in mastering an activity. Perceived self-efficacy is concerned with judgments of how well one can execute the courses of action required to deal with prospective situations.
Reorder the paragraph. It starts with what the concept is not and only then says what it is. I would define it first, and then develop the other details.
Self-efficacy definitions
In this part I think there is a lack of diversity of authors. Bandura appears, and then Rosa. One is left with the impression that no one in between has developed the concept of self-efficacy.
Briefing Document:
Olivier Maulini's lecture on the teaching profession.
Maulini analyzes the tension between teachers' real work and the imaginary that shapes it, underlining the difficulty of reconciling ideal expectations (socialization vs. subjectivation, transmission of significations vs. the search for meaning) with the complex realities of the field.
He explores this tension through a fictional classroom scene, highlighting the transactions between teachers and pupils over meaning and recognition.
Finally, he stresses the importance of collective reflection on these tensions, warning against the risks of pure idealism or pure realism and arguing for a pragmatic compromise and a collective renormalization of the profession in order to restore trust.
Introduction
This document analyzes an oral presentation centered on the teaching profession, its challenges, and the tensions that characterize it.
The speaker explores the status of the teacher, teachers' experience, and the dynamics at play between teachers and pupils.
He also addresses the issues involved in the transmission of knowledge, the compromises required, and the tensions between different visions of education.
Main Themes
The Teacher's Dual Status: Objectification and Experience
The analysis begins with a distinction between the teacher's "objectifiable" status (the way teaching is perceived and normalized by society) and the teacher's personal experience (their feeling of being socially recognized or not).
It is pointed out that these two aspects are not always aligned, and tensions can emerge.
Quotation: "this is the side of the teacher's status that could be called the objectifiable status, that is, there is research that looks at how teachers, how teaching, is regarded or even normalized in society on the basis of injunctions coming from outside, and then there is also, of course, the whole side of teachers' experience, of their feeling of being socially recognized or not, and these two things are not ipso facto correlated"
The Teacher's Self-Questioning and the Role of the Pupil
The speaker uses a scene in which a teacher, after receiving honest criticism from a pupil about boredom in class, questions himself excessively. The teacher blames himself instead of blaming the pupil.
The importance of questioning oneself before questioning the pupil is emphasized. There is a shift in the power dynamics between pupils and teachers.
Citation : "mes étudiants si ça fonctionne pas il faut d'abord vous remettre en question avant de remettre en question l'élève"
Le Contrat Implicite Enseignant-Élève et son Évolution
Il y a une évolution du contrat implicite entre les enseignants et les élèves.
Autrefois, l'accent était mis sur la discipline et le respect de l'autorité. L'enseignant détenait le savoir et l'élève était passif.
Aujourd'hui, les enseignants veulent que les élèves s'intéressent et participent, et un manque d'intérêt est perçu comme un échec.
Citation: "tous les enseignants veulent que les élèves s'intéressent ils veulent que les élèves participent si les élèves ne participent pas ça veut dire que c'est pas intéressant si c'est pas intéressant ça veut dire que j'ai échoué comme enseignant"
La Transaction sur le Sens et la Reconnaissance
L'analyse introduit les concepts de "transaction sur le sens" (l'élève exprime son ennui, l'enseignant doit faire face à ce manque de sens) et de "transaction sur la reconnaissance" (l'élève cherche à être reconnu pour son potentiel, pas seulement comme élève).
L'enseignant doit valider l'élève, mais aussi le jeune.
Citation: "ici il y a deux transactions principales il y a une transaction sur le sens et une autre sur la reconnaissance c'est-à-dire que la TA transaction sur le sens c'est grosso modo clus dit tu ne participes pas et l'élève dit je m'ennuie"
Les Logiques de Socialisation et de Subjectivation
Deux logiques sont opposées : la socialisation (transmission des connaissances statutaires) et la subjectivation (reconnaissance de l'individu). L'enseignant oscille entre le maintien des normes et l'ouverture à l'individualité de l'élève.
Citation: "dans une logique de de socialisation ou dans une logique de de subjectivation et bien ici on va euh mettre l'accent sur la la transmission des des connaissances telles qu'elles sont instaurées statutairement par le programme par exemple hein"
Les Compromis Opératoires
L'intervenant souligne la nécessité pour les enseignants de faire des "compromis opératoires" entre ces différentes logiques et attentes.
Il met en lumière la différence d'état d'esprit entre les enseignants suisses et français, les premiers étant fiers des compromis qu'ils font, tandis que les seconds peuvent se sentir tiraillés.
Citation: "la grande différence entre le cor-enseignant l'État par exemple d'esprit du cor-enseignant en Suisse et l'état d'esprit du corp-enseignant en France c'est que quand les enseignants suisses font des compromis opératoires ils sont très fiers de faire des compromis"
La Tension entre les Significations et le Sens
L'intervenant questionne la tension entre la transmission des significations (savoirs codifiés, non négociables) et la recherche de sens par les élèves. Il souligne le danger d'abandonner les significations au nom du sens et vice-versa.
Citation: "si l'élève ne trouve pas de sens à la signification ben on lui enseigne quand même la signification moi c'est comme ça que je forme les enseignants aujourd'hui parce que je trouve qu'ils sont très très fragilisés quand quand il ils ont le sentiment que quand ils sanctionnent un élève ils ont échoué"
Pedagogies of Control and Pedagogies of Trust
The speaker distinguishes two pedagogical approaches: control (obedience, authority) and trust (autonomy, responsibility).
He suggests that trust includes control and that the two are not mutually exclusive.
Quotation: "here, you see, trust is not opposed to control, since, as I was saying earlier, trust here would include control"
Thwarted Work and the Stakes of the Imaginary
The speaker introduces the concept of "thwarted work" (travail empêché), developed by ergonomists, to describe the feeling of powerlessness teachers may experience in the face of the constraints of reality and the expectations of the imaginary.
The imaginary of the profession is made up of expressed ideals (idealistic discourse) and operative expectations (unexpressed ideals).
Quotation: "the feeling of being thwarted is proportional to the difficulties that reality imposes on you and to the ideals that your imaginary produces"
Other Key Ideas
Teachers' fragility: Teachers are weakened by contradictory injunctions and the constant changes in education policy.
The search for validation: Teachers depend on pupils' validation to assess the quality of their work, even though these pupils are not always regarded as "credible judges".
The importance of comprehension and empathy:
The speaker stresses that teachers must be both comprehensible (clear in their explanations) and understanding (attentive to pupils' needs and difficulties).
The compromises required in raising children:
Parents play an important role in building children's autonomous thinking. They must find a compromise between letting the child come to terms with authority and intervening at every problem.
The role of comparison: Comparing teaching with work in other fields, such as medicine or business, sheds light on the expectations society places on the education system.
Conclusion
This analysis reveals the complexity of the teaching profession, torn between logics and expectations that are sometimes contradictory.
The speaker highlights the need for teachers to develop their own autonomy while assuming their responsibility for the transmission of knowledge.
Reflection on "thwarted work" and on managing the profession's imaginary are major challenges for the teaching profession.
Here is a time-stamped summary of the transcript, based on the information provided:
these three ILSA studies
the studies described in the previous chapter [and cross-reference]
It is advisable for the initial diagnostic assessment and classification of patients with acute chest pain to be framed around three categories: 1) myocardial ischemia; 2) other cardiopulmonary causes (pericardial disease, aortic emergencies, and lung disease); and 3) non-cardiopulmonary causes.
Diagnostic assessment
Important Skills to improve
Important topics for the development of the proposed tool
Community context: 55% have not established relationships with other workers, and 68% would like to have a way to do so.
Important skills to improve
Online marketing; programming, IT, and web development; English; text interpretation, writing, reading comprehension, transcription, Excel; Artificial Intelligence and advanced technological tools; finance; soft skills and leadership; habits of perseverance and concentration for online work; effective communication and project management; teamwork
Topics of greatest importance for Toloka crowd workers in the development of a tool
Sharing experiences and tips to improve earnings; completing more tasks and increasing productivity; training for professional and skills development; finding the best tasks; clarification of instructions, ambiguities, and possible errors; financial matters, including better pay for tasks
Women respondents
Recurring and important themes
Personal care: balancing work and caregiving responsibilities. They may be single and have a higher economic level than the men surveyed.
Educational capital: use of YouTube as a tool to find tutorials on how to complete tasks. Use of tools to translate English and to build skills in that language.
Independence: crowd work is perceived as a supplementary income that can promote financial independence.
Alienation: they have no contact with other women crowd workers. They might value contact with other crowd workers. Perception that gender does not affect crowd work. Respect for, or neutrality toward, crowd work.
Research can be considered feminist when it is grounded in a set of theoretical traditions that privilege women’s issues and experiences.
Feminist research focuses on analyzing power imbalances both in the topics studied and in the relationships between researchers and communities. This is particularly relevant for Colombian Indigenous communities, since it combines ethical tools such as reflexivity and intersectionality to understand how factors such as gender, ethnicity, and class shape their experiences.
A key component of feminist research is the integration of gender analysis, which goes beyond disaggregation by sex to examine attitudes, norms, and structural inequalities. In the context of crowd work on Artificial Intelligence platforms, pilot surveys were conducted on Toloka to explore the experiences of Latin American women, including variables such as time poverty, the balance between work and family responsibilities, and external attitudes toward their work.
Colombian Indigenous communities, like other marginalized populations, face unique challenges in accessing crowd work and technology. Research designs that are sensitive to their cultural and social contexts, together with ethical mechanisms such as fair pay and respect for their autonomy, make it possible to better understand their needs, motivations, and challenges. This can contribute to developing more inclusive policies and tools that promote gender equality, community strengthening, and equity in access to technological opportunities.
Data labeling is the product of AI models and the manual work of people who monitor, correct and augment the predictions of the former, thus improving their accuracy. This form of labor is known as crowd work.
The production of labeled data is fundamental to the development and training of AI systems, a technology that depends on the manual work of people (crowd workers) who supervise and improve the accuracy of the algorithms.
In Colombia, this type of work could grow as a source of income, especially for women facing social challenges such as gender inequality and the burden of family responsibilities.
For Colombian Indigenous women, crowd work on platforms such as Toloka could offer economic opportunities, but it also poses particular challenges.
These communities often combine earning a livelihood with strong family and cultural roles, and because they are located in rural areas their access to technical tools and training is limited. In addition, many women seek safe spaces to share experiences, learn new skills, and organize around labor issues such as lack of transparency and fair pay.
A system design adapted to these needs could integrate an intelligent chatbot inspired by Latin American female figures, facilitating communication and access to resources.
Such an approach, centered on their cultural and social values, could foster empowerment, the creation of support networks, and improved working conditions. Initiatives like this would not only benefit Colombian Indigenous women but would also strengthen the impact of Artificial Intelligence in local contexts, promoting justice and equity in technological development.
Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public Review):
Summary:
By using biophysical chromosome stretching, the authors measured the stiffness of chromosomes of mouse oocytes in meiosis I (MI) and meiosis II (MII). This study was the follow-up of previous studies in spermatocytes (and oocytes) by the authors (Biggs et al. Commun. Biol. 2020; Hornick et al. J. Assist. Rep. and Genet. 2015). They showed that MI chromosomes are much stiffer (~10 fold) than mitotic chromosomes of mouse embryonic fibroblast (MEF) cells. MII chromosomes are also stiffer than the mitotic chromosomes. The authors also found that oocyte aging increases the stiffness of the chromosomes. Surprisingly, the stiffness of meiotic chromosomes is independent of the meiotic chromosome components Rec8, Stag3, and Rad21L.
Strengths:
This study provides new insight into a biophysical property of meiotic chromosomes, namely chromosome stiffness. The stiffness of chromosomes in meiotic prophase I is ~10-fold higher than that of mitotic chromosomes and is independent of meiotic cohesin. The increased stiffness during oocyte aging is a novel finding.
Weaknesses:
A major weakness of this paper is that it does not provide any molecular mechanism underlying the difference between MI and MII chromosomes (and/or prophase I and mitotic chromosomes).
We acknowledge that our study does not provide a comprehensive explanation for the stage-related alterations in chromosome stiffness; however, we believe that the observation of these changes is itself of broad interest. Initially, we hypothesized that DNA damage or depletion of meiosis-specific cohesin might contribute to the observed increase in chromosome stiffness. However, our experimental findings did not support these hypotheses, indicating that neither DNA damage nor cohesin depletion is responsible for the stiffness increase. The molecular basis underlying the stage-related stiffness increase remains elusive and requires exploration in future studies. In the Discussion, we propose that factors such as condensin, nuclear proteins, and histone methylation may play a role in regulating meiotic chromosome stiffness. The involvement of these factors in stage-related chromosome stiffening requires future investigation.
Reviewer #2 (Public Review):
This paper reports investigations of chromosome stiffness in oocytes and spermatocytes. The paper shows that prophase I spermatocytes and MI/MII oocytes yield high Young's modulus values in the assay the authors applied. According to the authors, deficiency in each of three meiosis-specific cohesins did not affect this result, and increased stiffness was seen in aged oocytes but not in oocytes treated with the DNA-damaging agent etoposide.
The paper reports some interesting observations which are in line with a 2020 report by the same authors, where increased stiffness of spermatocyte chromosomes was already shown. In that sense, the current manuscript is an extension of that previous paper, and thus novelty is somewhat limited. The paper is also largely descriptive, as it neither proposes a mechanism nor reports factors that determine chromosomal stiffness.
There are several points that need to be considered.
(1) Limitations of the study and the conclusions are not discussed in the "Discussion" section and that is a significant gap. Even more so as the authors rely on just one experimental system for all their data - there is no independent verification - and that in vitro system may be prone to artefacts.
Our experimental system has been used to study different types of chromosome stiffness as well as nuclear stiffness. We have compared our results with previously published data and found that the data are consistent across experiments. To address the reviewer’s concern, we describe the limitations of our in vitro experimental approach in the Discussion section.
(2) It is somewhat unfortunate that they jump between oocytes and spermatocytes to address the cohesin question. Prophase I (pachytene) spermatocytes chromosomes are not directly comparable to MI or MII oocyte chromosomes. In fact, the authors report Young Modulus values of 3700 for MI oocytes and only 2700 for spermatocyte prophase chromosomes, illustrating this difference. Why not use oocyte-specific cohesin deficiencies?
In this study, our goal was to investigate the mechanism underlying the increased chromosome stiffness observed during prophase I. Ideally, we would have compared wild-type and cohesin-deleted mouse oocytes at the metaphase I (MI) stage. However, experimental constraints made this approach unfeasible: spermatocytes and oocytes from Rec8<sup>-/-</sup> and Stag3<sup>-/-</sup> mutant mice cannot reach the MI stage, and Rad21l<sup>-/-</sup> males are sterile while females are subfertile, because cohesin proteins are crucial for germline cell development.
Additionally, collecting prophase I chromosomes from oocytes is exceptionally challenging and requires fetal mice as prophase I oocyte sources because female oocytes progress to the diplotene stage during fetal development. The process is further complicated by the difficulty of genotyping fetal mice, making the study of female prophase I impracticable. By contrast, spermatocytes are continuously generated in males throughout life, with meiotic stages readily identifiable, making them more accessible for analysis.
Our findings consistently showed increased chromosome stiffness in both prophase I spermatocytes and MI oocytes, suggesting that the phenomenon is not sex-specific. This observation implies that similar effects on chromosome stiffness may occur across meiotic stages, from prophase I to MI.
(3) It remains unclear whether the treatment of oocytes with the detergent TritonX-100 affects the spindle and thus the chromosomes isolated directly from the Triton-lysed oocytes. In fact, it is rather likely that the detergent affects chromatin-associated proteins and thus structural features of the chromosomes.
Regarding the use of Triton X-100, it is important to emphasize that the concentration used (0.05%) is very low and unlikely to significantly affect chromosome stiffness. To support this assertion, we have provided additional evidence in the revised manuscript demonstrating that this low concentration of Triton X-100 has a negligible effect on chromosome stiffness (Supplement Fig. 5, Right panel).
(4) Why did the authors use mouse strains of different genetic backgrounds, CD-1, and C57BL/6? That makes comparison difficult. Breeding of heterozygous cohesin mutants will yield the ideal controls, i.e. littermates.
The genetic mutant mice, all in a C57BL/6 background, were generously provided by Dr. Philip Jordan and delivered to our lab. As our lab does not currently maintain a C57BL/6 colony, and given that this strain typically produces small litter sizes - which would have complicated the remainder of the study - we chose CD-1 mice as the control group and used C57BL/6 mice specifically for the cohesin study. To address potential concerns regarding genetic background differences, we compared our results with previously published data from C57BL/6 mice and found no significant differences (2710 ± 610 Pa versus 3670 ± 840 Pa, P = 0.4809) (Biggs et al., 2020). Furthermore, prophase I spermatocytes from CD-1 mice showed no significant difference compared to any of the three cohesin-deleted C57BL/6 mutant mice, suggesting that chromosome stiffness is not significantly influenced by genetic background.
(5) How did the authors capture chromosome axes from STAG3-deficient spermatocytes, which feature very few if any axes? How representative are those chromosomes that could be captured?
We isolated chromosomes from prophase I mutant spermatocytes, which were identified by their large size, round shape, and thick chromosomal threads - characteristics indicative of advanced condensation and a zygotene-like stage during prophase I (Supplemental Fig. 3). The methodology for isolating these chromosomes has been described in detail in our previous publication (Biggs et al., 2020), which is referenced in the current manuscript.
Reviewer #3 (Public Review):
Summary:
Understanding the mechanical properties of chromosomes remains an important issue in cell biology. Measuring chromosome stiffness can provide valuable insights into chromosome organization and function. Using a sophisticated micromanipulation system, Liu et al. analyzed chromosome stiffness in MI and MII oocytes. The authors found that chromosomes in MI oocytes were ten-fold stiffer than mitotic ones. The stiffness of chromosomes in MI mouse oocytes was significantly higher than that in MII oocytes. Furthermore, the knockout of the meiosis-specific cohesin component (Rec8, Stag3, Rad21l) did not affect meiotic chromosome stiffness. Interestingly, the authors showed that chromosomes from old MI oocytes had higher stiffness than those from young MI oocytes. The authors claimed this effect was not due to the accumulated DNA damage during the aging process because induced DNA damage reduced chromosome stiffness in oocytes.
Strengths:
The technique used (isolating the chromosomes in meiosis and measuring their stiffness) is the authors' specialty. The results are intriguing and informative to the chromatin/chromosome and other related fields.
Weaknesses:
(1) How intact the measured chromosomes were is unclear.
Currently, a well-calibrated chromosome mechanics experiment requires the extracellular isolation of chromosomes. In experiments conducted parallel to those in our previous study (Biggs et al., 2020), we obtained quantitatively consistent results, including measurements of the Young modulus for prophase I spermatocyte chromosomes. Our isolation approach is significantly gentler than bulk methods that rely on hypotonic buffer-driven cell lysis and centrifugation. If substantial chromosomal damage had occurred during isolation, we would expect greater variation between experiments, as different amounts or types of damage could influence the results.
(2) Some control data needs to be included.
We used wild-type prophase I spermatocytes and metaphase I (MI) oocytes as controls. To validate our findings, we compared some of our results with those reported in a previous study and observed consistent outcomes (Biggs et al., 2020).
(3) The paper was not well-written, particularly the Introduction section.
We have revised the paper and improved the overall quality of the manuscript.
(4) How intact were the measured chromosomes? Although the structural preservation of the chromosomes is essential for this kind of measurement, the meiotic chromosomes were isolated in PBS with Triton X-100 and measured at room temperature. It is known that chromosomes are very sensitive to cation concentrations and macromolecular crowding in the environment (PMID: 29358072, 22540018, 37986866). It would be better to discuss this point.
As suggested, we investigated the impact of PBS and Triton X-100 on chromosome stiffness. Our findings indicate that neither PBS nor Triton X-100 caused significant changes in chromosome stiffness (Supplemental Fig. 5).
Recommendations For The Authors:
Major points of Reviewers that the Editor indicated should be addressed
(1) Reviewer's point 3, the effect of the high concentration of etoposide: It would be advisable to use lower concentrations of etoposide to observe the effect of DNA damage on chromosome stiffness more accurately.
The effect of etoposide on oocytes is dose-dependent (Collins et al., 2015). Oocytes are generally not highly sensitive to DNA damage, and even at relatively high concentrations, not all may exhibit a response. To ensure sufficient DNA damage in the oocytes we isolated, we used a relatively high concentration of etoposide for the experiment. This concentration (50 μg/ml) falls within the typical range reported in the literature (Marangos and Carroll, 2012; Cai et al., 2023; Lee et al., 2023). As the reviewer suggested, we tested two additional lower concentrations of etoposide (5 μg/ml and 25 μg/ml) (see Fig. 5C). We did not observe any significant difference in chromosome stiffness in 5 µg/ml etoposide-treated oocytes compared to the control. However, the higher concentration of etoposide (25 μg/ml) significantly reduced oocyte chromosome stiffness compared to the control.
Revision to manuscript:
“Results at lower etoposide concentrations revealed that chromosome stiffness in untreated control oocytes was not significantly different from that in oocytes treated with 5 μg/ml etoposide (3780 ± 700 Pa versus 3930 ± 400 Pa, P = 0.8624). However, chromosome stiffness in untreated oocytes was significantly higher than that in oocytes treated with 25 μg/ml etoposide (3780 ± 700 Pa versus 1640 ± 340 Pa, P = 0.015) (Figure 5C).”
(2) Reviewer's point 3, the effect of Triton X-100: This is related to the concern of reviewer #3. It is critical to check whether the detergent indirectly affects the stiffness.
To demonstrate that the low concentration of Triton X-100 does not influence chromosome stiffness, we conducted additional experiments. First, we isolated chromosomes and measured their stiffness. Then, we treated the chromosomes with 0.05% Triton X-100 via micro-spraying and remeasured the stiffness. The results showed no significant difference (see Supplement Fig. 5 right panel).
Revision to manuscript:
“In addition to past experiments indicating that mitotic chromosomes are stable for long periods after their isolation (Pope et al., 2006), we carried out control experiments on mouse oocyte chromosomes where we incubated them for 1 hour in PBS, or exposed them to a flow of Triton X-100 solution for 10 minutes; there was no change in chromosome stiffness in either case (Methods and Supplementary Fig. 5).”
(3) Reviewer's point 1, the effect of the buffer composition: Please describe how the composition affects the stiffness of the chromosomes.
PBS is an economical and effective buffer solution that closely mimics the osmotic conditions of the cytoplasm, which is crucial for maintaining chromosomal structural integrity. Appropriate ion concentrations are essential for preserving chromosome integrity, as imbalances (either too high or too low) can alter chromosome morphology (Poirier and Marko, 2002). When chromosomes are stored in PBS, their stiffness remains relatively stable, even with prolonged exposure, ensuring minimal changes to their physical properties. To confirm this, we isolated chromosomes and measured their stiffness. After a one-hour incubation in PBS, we remeasured stiffness and observed no significant differences, demonstrating that chromosomes remain stable in PBS (see Supplement Fig. 5, left panel).
Revision to manuscript:
“In this study, we developed a new way to isolate meiotic chromosomes and measure their stiffness. However, one concern is that the measurements were conducted in PBS solution, which differs from the intracellular environment. To address this, we monitored chromosome stiffness over time in PBS solution and found that it remained stable over a period of one hour (Supplement Fig. 5, left panel).”
Reviewer #1 (Recommendations For The Authors):
Major points:
(1) Previously, the role of condensin complexes in chromosome stiffness was shown (Sun et al. Chromosome Research, 2018). Thus, the authors should at least describe condensin staining on MI and MII chromosomes.
We have added sentences in the discussion to elaborate on the role of condensin.
Revision to manuscript:
“Several factors, including condensin, have been found to affect chromosome stiffness (Sun et al., 2018). Condensin exists in two distinct complexes, condensin I and condensin II, and both are active during meiosis. Published studies indicate that condensin II is more sharply defined and more closely associated with the chromosome axis from anaphase I to metaphase II (Lee et al., 2011). Additionally, condensin II appears to play a more significant role in mitotic chromosome mechanics compared to condensin I (Sun et al., 2018). Thus, condensin II likely contributes more significantly to meiotic chromosome stiffness than condensin I.”
(2) Although the authors nicely showed the difference in stiffness between MI and MII chromosomes (Figure 2), as is known, MI chromosomes are bivalent (with four chromatids) while MII chromosomes are univalent (with two chromatids). The physical properties of the chromosomes would be affected by the number of chromatids. It would be essential for the authors to measure the physical properties of univalent MI chromosomes from mice defective in meiotic recombination, such as Spo11 and/or Mlh3 KO mice.
The reviewer correctly pointed out that the number of chromatids per chromosome differs between the metaphase I (MI) and metaphase II (MII) stages. We have addressed this difference by calculating Young’s modulus (E), a mechanical property that describes the elasticity of a material independently of its geometry. Young’s modulus reflects the intrinsic properties of the material itself, rather than the specific characteristics of the object being tested. It is calculated as E = (F/A)/(∆L/L0), where F is the force applied to stretch the chromosome, A is the cross-sectional area, ∆L is the change in chromosome length, and L0 is the original chromosome length. An increase in chromosome or chromatid number results in a larger cross-sectional area and therefore a higher doubling force (F), but this variation in chromosome number or cross-sectional area does not affect the calculated chromosome stiffness/Young’s modulus (E). While study of the mutants suggested by the referee would certainly be interesting, it is likely that the absence of these key recombination factors would impact chromosome stiffness in a more complex way than just changing chromosome thickness; this type of study is beyond the scope of the present manuscript and is an exciting direction for future studies.
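For illustration, the calculation above can be written as a short script. This is a minimal sketch only: the force, radius, and length values are hypothetical placeholders rather than measurements from this study, and the doubling-force estimate simply assumes linear elasticity (the force at a strain of 1).

```python
# Minimal sketch of the Young's modulus calculation described above.
# All numerical inputs are illustrative placeholders, not data from the study.
import math

def youngs_modulus(force_pN, radius_um, delta_L_um, L0_um):
    """E = (F/A) / (dL/L0), assuming a roughly circular cross-section."""
    area_um2 = math.pi * radius_um ** 2                    # cross-sectional area A
    stress_Pa = (force_pN * 1e-12) / (area_um2 * 1e-12)    # N / m^2
    strain = delta_L_um / L0_um                            # dimensionless
    return stress_Pa / strain

def doubling_force_pN(E_Pa, radius_um):
    """Force needed to double the length (strain = 1) under linear elasticity: F = E * A."""
    area_m2 = math.pi * (radius_um * 1e-6) ** 2
    return E_Pa * area_m2 * 1e12                           # convert N back to pN

E = youngs_modulus(force_pN=300, radius_um=0.5, delta_L_um=1.0, L0_um=10.0)
print(f"E ~ {E:.0f} Pa, doubling force ~ {doubling_force_pN(E, 0.5):.0f} pN")
```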
(3) In Figure 5, the authors measure the stiffness of etoposide-treated MI chromosomes. The concentration of the drug was 50 ug/ml, which is very high. The authors should analyze the different concentrations of the drug to check the chromosome stiffness. Moreover, etoposide is an inhibitor of Topoisomerase II. The effect of the drug might be caused by the defective Top2 activity, rather than Top2-adducts, thus DNA damage. It is very important to check the other Top2 inhibitors or DNA-damaging agents to generalize the effect of DNA damage on chromosome stiffness. Moreover, DNA damage induces the DNA damage response. It is important to check the effect of DDR inhibitors on the damage-induced change of stiffness.
The reviewer is correct in noting that etoposide can induce DNA damage and inhibit Top2 activity. To address this concern, our previous DNase experiment provides further clarity and supports the results of this study (Biggs et al., 2020). That experiment was conducted in vitro, where DNase treatment caused DNA damage on chromosomes without affecting Top2 activity or triggering a DNA damage response. The results demonstrated that DNase treatment led to reduced chromosome stiffness, which aligns with the findings presented in our manuscript.
(4) In the same line as the #3 point, the authors also need to check the effect of etoposide on the stiffness of mitotic chromosomes from MEF.
Experiments on MEF mitotic chromosomes were designed to serve as a reference for the meiotic chromosome studies. The etoposide experiments on meiotic chromosomes specifically aimed to investigate how DNA damage affects meiotic chromosome structure. While it would be interesting to explore the effects of etoposide-induced DNA damage on mitotic chromosomes, it represents a distinct research question that falls outside the scope of the current study.
Minor points:
(1) Line 141-142: Previous studies by the author analyzed the stiffness of mitotic chromosomes from pro-metaphase. Which stage of cell cycles did the authors analyze here?
To ensure consistency in our experiments, we also measured the stiffness of mitotic chromosomes at the prometaphase stage. The precise stage used is very near to metaphase, at the very end of the prometaphase stage. We have modified the manuscript to clarify this point.
Revision to manuscript:
“For comparison with the meiotic case, we measured the chromosome stiffness of Mouse Embryonic Fibroblasts (MEFs) at late pro-metaphase (just slightly before their attachment to the mitotic spindle) and found that the average Young’s modulus was 340 ± 80 Pa (Figure 2B). The value is consistent with our previously published data, where the modulus for MEFs was measured to be 370 ± 70 Pa (Biggs et al., 2020).”
(2) Line 157: Here, the doubling force of MI (and MII) oocytes should be described in addition to those of spermatocytes.
The purpose of this paragraph is to demonstrate the reproducibility and consistency of our experiments. In this section, we compared our data with previously published findings. Published data do not include chromosome stiffness measurements from MI mouse oocytes; our experiment is the first to assess this. Therefore, we did not include MI mouse oocytes in that comparison. To clarify this, we have added sentences to highlight the comparison of doubling forces.
Revision to manuscript:
“Here, we found that the doubling forces of chromosomes from MI and MII oocytes are 3770 ± 940 pN and 510 ± 50 pN, respectively. We conclude that chromosomes from MI oocytes are much stiffer than those from both mitotic cells and MII oocytes (Supplement Fig. 2), in terms of either Young’s modulus or doubling force.”
(3) Line 202: What stage of prophase I do the authors mean by the spermatocyte stage here? Diakinesis, Metaphase I or prometaphase I? I am not sure how the authors can determine a specific stage of prophase I by only looking at the thickness of the chromosomes. Please show the thickness distribution of WT and Rec8<sup>-/-</sup> chromosomes.
We have reworded the sentence and clarified that the spermatocyte stage is the prophase I stage. Since Rec8<sup>-/-</sup> spermatocytes cannot progress beyond the pachytene stage of prophase I, the isolated chromosomes must be in prophase I rather than diakinesis, prometaphase I, metaphase I, or any later stage (Xu et al., 2005). Based on the cell size and degree of chromosome condensation (Biggs et al., 2020), it is most likely that the measured chromosomes are at a zygotene-like stage. However, as we cannot definitively determine the exact substage of prophase I, we have referred to them simply as prophase I.
Revision to manuscript:
“We isolated chromosomes from Rec8<sup>-/-</sup> prophase I spermatocytes, which displayed large and round cell size and thick chromosomal threads, indicative of advanced chromosome compaction after stalling at a zygotene-like prophase I stage (Supplement Fig. 3). The combination of large cell size and degree of chromosome compaction allowed us to reliably identify Rec8<sup>-/-</sup> prophase I chromosomes. Using micromanipulation, we measured chromosome stiffness by stretching the chromosomes (Supplement Fig. 3) (Biggs et al., 2019).”
Reviewer #2 (Recommendations For The Authors):
(1) Line 135: that statement is not substantiated; better to show retraction data and full reversibility.
We added a figure of oocyte chromosome stretching, which shows that the oocyte chromosome is elastic and that the stretching process is reversible (Supplement Fig. 1).
(2) Line 144: the authors claim that the Young Modulus of MII oocytes is "slightly" higher than that of mitotic cells (MEFs). Well, "slightly" means it is rather similar, and therefore the commonly used statement that MII is similar to mitosis is OK - contrary to the authors' claim.
We have removed the word “slightly” in the manuscript. The difference is statistically significant.
Revision to manuscript:
“Surprisingly, despite this reduction, the stiffness of MII oocyte chromosomes was still significantly higher than that for mitotic cells (Figure 2B).”
(3) There are a lot of awkward sentences in this text. Some sentences lack words, are not sufficiently precise in wording and/or logic, and there are numerous typos. Some examples can be found in lines 89 (grammar), 94, 95 ("looked"), 98, 101 ("difference" - between what?), and some are commonplaces or superficial (lines 92/93, 120..., ). Occasionally the present and past tense are mixed (e.g. in M&M). Thus the manuscript is quite poorly written.
We thank the reviewer for these comments. We have revised all the sentences highlighted by the reviewer and polished the entire manuscript.
Reviewer #3 (Recommendations For The Authors):
(1) Line 48. "We then investigated the contribution of meiosis-specific cohesin complexes to chromosome stiffness in MI and MII oocytes." There is no data on oocytes with meiosis-specific cohesin KO. This part should be corrected.
We have corrected this error.
Revision to manuscript:
“We examined the role of meiosis-specific cohesin complexes in regulating chromosome stiffness.”
(2) Lines 155-157. The result of MI mouse oocyte chromosomes should also be mentioned here (Supplementary Figure 1).
Please see our response to Reviewer 1 – Minor Point 2.
(3) Line 163. "The stiffness of chromosomes in MI mouse oocytes is significantly higher compared to MII oocytes." Is this because two homologs are paired in MI chromosomes (but not in MII chromosomes)? The authors may want to discuss the possible mechanism.
Please see our response to Reviewer 1 – Major Point 2.
(4) Line 188: "We hypothesized that MI oocytes... would have higher chromosome stiffness than MII oocytes." Why did the authors measure chromosomes from spermatocytes but not MI oocytes?
Both spermatocytes and oocytes from Rec8<sup>-/-</sup>, Stag3<sup>-/-</sup>, and Rad21l<sup>-/-</sup> mutant mice cannot reach the MI stage because cohesin proteins are crucial for germline-cell development. We chose to use spermatocytes in our study because collecting fetal meiotic oocytes is extremely difficult, and genotyping fetal mice adds another layer of complexity to the experiments. In females, all oocytes complete prophase I and progress to the dictyotene stage during fetal development, and obtaining individual oocytes at this stage is challenging. In contrast, spermatocytes at all meiotic stages are continuously generated in males.
(5) To support the authors' conclusion, verifying the KO of REC8, STAG3, and RAD21L by immunostaining or other methods is essential.
These mice were provided by one of the authors, Dr. Philip Jordan, who has published several papers using these knockout mice (Hopkins et al., 2014; Ward et al., 2016). The immunostaining of these models has already been well characterized in those previous studies. In addition to performing double genotyping, we also used the size of the collected testes as an additional verification of the mutant genotype. These knockout mice have significantly smaller testes than their wild-type counterparts, providing a clear physical indicator of the mutation.
(6) Some of the cited papers and descriptions in the Introduction are not appropriate and confusing. This part should be improved:
Line 79. Recent studies have revealed that the 30-nm fiber is not considered the basic structure of chromatin (e.g., review, PMID: 30908980; original papers, PMID: 19064912, 22343941, 28751582). This point should be included.
We have corrected the references as needed. Additionally, thank you for the updated information regarding the 30-nm fiber. We have removed all the descriptions about the 30-nm fiber to ensure the information is accurate and up to date.
(7) Line 83. Reviews on mitotic chromosomes, rather than Ref. 9, should be cited here. For instance, PMID: 33836947, 31230958.
We have corrected it and added references according to the reviewer’s suggestion.
(8) Line 85. Refs. 10 and 11 are not on the "Scaffold/Radial-Loop" model. For instance, PMID: 922894, 277351, 12689587. The other popular model is the hierarchical helical folding model (PMID: 98280, 15353545).
We have corrected it and added appropriate references according to the reviewer’s suggestion. Regarding the hierarchical helical folding model, our experiments do not provide data that either support or refute this model. Thus, we have opted not to include any discussion of this model in our manuscript.
(9) Figure legends. There is no description of the statistical test.
We have added the description of the statistical test at the end of the figure legends for clarity.
(10) Line 156. The authors should mention which stages in spermatocyte prophase I (pachytene?) were used for their measurement.
We cannot precisely determine the substage of prophase I in the spermatocytes, although it is most likely the pachytene stage.
(11) Line 241. "DNA damage reduces chromosome stiffness in oocytes." It would be better to show how much damage was induced in aged and etoposide-treated chromosomes, for example, by gamma-H2AX immunostaining. In addition, there are some papers that show DNA damage makes chromatin/chromosomes softer (e.g., PMID: 33330932). The authors need to cite these papers.
The effects of etoposide and age on meiotic oocytes have been published (Collins et al., 2015; Marangos et al., 2015; Winship et al., 2018).
We are grateful for the citation information provided by the reviewer and have added it to our manuscript.
Revision to manuscript:
“Overall, these findings suggest that DNA damage reduces chromosome stiffness in oocytes instead of increasing it, which aligns with other studies showing that DNA damage can make chromosomes softer (Dos Santos et al., 2021). These results suggest that the increased chromosome stiffness observed in aged oocytes is not due to DNA damage.”
(12) Line 328. Senescence?
This error is corrected in the revised manuscript.
Revision to manuscript:
“Defective chromosome organization is often related to various diseases, such as cancer, infertility, and senescence (Thompson and Compton, 2011; Harton and Tempest, 2012; He et al., 2018).”
References:
Biggs, R., P.Z. Liu, A.D. Stephens, and J.F. Marko. 2019. Effects of altering histone posttranslational modifications on mitotic chromosome structure and mechanics. Mol. Biol. Cell. 30:820–827. doi:10.1091/mbc.E18-09-0592.
Biggs, R.J., N. Liu, Y. Peng, J.F. Marko, and H. Qiao. 2020. Micromanipulation of prophase I chromosomes from mouse spermatocytes reveals high stiffness and gel-like chromatin organization. Commun. Biol. 3:1–7. doi:10.1038/s42003-020-01265-w.
Cai, X., J.M. Stringer, N. Zerafa, J. Carroll, and K.J. Hutt. 2023. Xrcc5/Ku80 is required for the repair of DNA damage in fully grown meiotically arrested mammalian oocytes. Cell Death Dis. 14:1–9. doi:10.1038/s41419-023-05886-x.
Collins, J.K., S.I.R. Lane, J.A. Merriman, and K.T. Jones. 2015. DNA damage induces a meiotic arrest in mouse oocytes mediated by the spindle assembly checkpoint. Nat. Commun. 6. doi:10.1038/ncomms9553.
Harton, G.L., and H.G. Tempest. 2012. Chromosomal disorders and male infertility. Asian J. Androl. 14:32–39. doi:10.1038/aja.2011.66.
He, Q., B. Au, M. Kulkarni, Y. Shen, K.J. Lim, J. Maimaiti, C.K. Wong, M.N.H. Luijten, H.C. Chong, E.H. Lim, G. Rancati, I. Sinha, Z. Fu, X. Wang, J.E. Connolly, and K.C. Crasta. 2018. Chromosomal instability-induced senescence potentiates cell non-autonomous tumourigenic effects. Oncogenesis. 7. doi:10.1038/s41389-018-0072-4.
Hopkins, J., G. Hwang, J. Jacob, N. Sapp, R. Bedigian, K. Oka, P. Overbeek, S. Murray, and P.W. Jordan. 2014. Meiosis-Specific Cohesin Component, Stag3 Is Essential for Maintaining Centromere Chromatid Cohesion, and Required for DNA Repair and Synapsis between Homologous Chromosomes. PLoS Genet. 10:e1004413. doi:10.1371/journal.pgen.1004413.
Lee, C., J. Leem, and J.S. Oh. 2023. Selective utilization of non-homologous end-joining and homologous recombination for DNA repair during meiotic maturation in mouse oocytes. Cell Prolif. 56:1–12. doi:10.1111/cpr.13384.
Lee, J., S. Ogushi, M. Saitou, and T. Hirano. 2011. Condensins I and II are essential for construction of bivalent chromosomes in mouse oocytes. Mol. Biol. Cell. 22:3465–3477. doi:10.1091/mbc.E11-05-0423.
Marangos, P., and J. Carroll. 2012. Oocytes progress beyond prophase in the presence of DNA damage. Curr. Biol. 22:989–994. doi:10.1016/j.cub.2012.03.063.
Marangos, P., M. Stevense, K. Niaka, M. Lagoudaki, I. Nabti, R. Jessberger, and J. Carroll. 2015. DNA damage-induced metaphase i arrest is mediated by the spindle assembly checkpoint and maternal age. Nat. Commun. 6:1–10. doi:10.1038/ncomms9706.
Poirier, M.G., and J.F. Marko. 2002. Mitotic chromosomes are chromatin networks without a mechanically contiguous protein scaffold. Proc. Natl. Acad. Sci. U. S. A. 99:15393–15397. doi:10.1073/pnas.232442599.
Pope, L.H., C. Xiong, and J.F. Marko. 2006. Proteolysis of Mitotic Chromosomes Induces Gradual and Anisotropic Decondensation Correlated with a Reduction of Elastic Modulus and Structural Sensitivity to Rarely Cutting Restriction Enzymes. Mol. Biol. Cell. 17:104. doi:10.1091/MBC.E05-04-0321.
Dos Santos, Á., A.W. Cook, R.E. Gough, M. Schilling, N.A. Olszok, I. Brown, L. Wang, J. Aaron, M.L. Martin-Fernandez, F. Rehfeldt, and C.P. Toseland. 2021. DNA damage alters nuclear mechanics through chromatin reorganization. Nucleic Acids Res. 49:340–353. doi:10.1093/nar/gkaa1202.
Sun, M., R. Biggs, J. Hornick, and J.F. Marko. 2018. Condensin controls mitotic chromosome stiffness and stability without forming a structurally contiguous scaffold. Chromosom. Res. 26:277–295. doi:10.1007/s10577-018-9584-1.
Thompson, S.L., and D.A. Compton. 2011. Chromosomes and cancer cells. Chromosom. Res. 19:433–444. doi:10.1007/s10577-010-9179-y.
Ward, A., J. Hopkins, M. Mckay, S. Murray, and P.W. Jordan. 2016. Genetic Interactions Between the Meiosis-Specific Cohesin Components, STAG3, REC8, and RAD21L. G3 (Bethesda). 6:1713–24. doi:10.1534/g3.116.029462.
Winship, A.L., J.M. Stringer, S.H. Liew, and K.J. Hutt. 2018. The importance of DNA repair for maintaining oocyte quality in response to anti-cancer treatments, environmental toxins and maternal ageing. Hum. Reprod. Update. 24:119–134. doi:10.1093/humupd/dmy002.
Xu, H., M.D. Beasley, W.D. Warren, G.T.J. van der Horst, and M.J. McKay. 2005. Absence of Mouse REC8 Cohesin Promotes Synapsis of Sister Chromatids in Meiosis. Dev. Cell. 8:949–961. doi:10.1016/j.devcel.2005.03.018.
if there is not a little dissatisfaction, a little healthy sadness, a healthy capacity to dwell in solitude and to be with ourselves without fleeing, we run the risk of always remaining on the surface of things and never making contact with the center of our existence.
key
The difference between the age predicted by the model and the actual age in the post-COVID-19 lockdown test sample was 4.2 y
a single lockdown aging someone 4 years
One study of 9- to 13-y-old subjects reported accelerated maturation of the medial prefrontal cortex, as reflected by a reduction in cortical thickness over what would be expected from normal aging, and accelerated development of the hippocampus, as reflected in an increase in hippocampal volume (13). A study of 16-y-old adolescents reported reduced average brain cortical thickness and larger bilateral hippocampal and amygdala volumes
This has also been shown to happen in adolescents' brains in cases of stress or trauma
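For clarity, the "brain-age gap" quoted above is simply the average difference between model-predicted age and chronological age. The sketch below is illustrative only, with made-up numbers rather than data from the cited study:

```python
# Illustrative brain-age gap calculation (hypothetical values, not the study's data).
import numpy as np

predicted_age = np.array([17.9, 16.4, 18.8, 17.1])      # model output, in years
chronological_age = np.array([13.5, 12.9, 14.2, 13.0])  # actual ages, in years

brain_age_gap = np.mean(predicted_age - chronological_age)
print(f"mean brain-age gap: {brain_age_gap:.1f} y")      # ~4.2 y for these made-up numbers
```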
Alt text for image below: A high-mileage Mazda vehicle on a jack getting a brake inspection.
sickle cell crisis
Remember: what is this? And why must rapid analgesia be given in this case?
One of these circuits has connections with the hypothalamus, the midbrain, and the medulla oblongata; it selectively controls the spinal neurons that transmit pain along a descending pathway
Pain modulation through a descending pathway
Most of the spinal neurons that receive input from primary afferent nociceptors send their axons to the contralateral thalamus. These axons form the contralateral spinothalamic tract, which occupies the anterolateral white matter of the spinal cord, the outer edge of the medulla oblongata, and the lateral portion of the pons and midbrain
Primary afferent nociceptor -> ipsilateral spinal neuron -> **contralateral thalamus** -> contralateral spinothalamic tract -> outer edge of the medulla and lateral portion of the pons and midbrain
Convergence-projection hypothesis of referred pain. According to this hypothesis, visceral afferent nociceptors converge on the same pain-projection neurons as the afferents from the somatic structures in which the pain is perceived. The brain has no way of knowing the true point of origin of the stimuli it receives and mistakenly "projects" the sensation onto the somatic structure
Referred pain hypothesis!!!!!
Factors that contribute significantly to sensitization include a decrease in pH, prostaglandins, leukotrienes, and other inflammatory mediators such as bradykinin.
Factor contributing to sensitization
Deep structures such as joints or hollow viscera, if affected by a pathological process with an inflammatory component, characteristically become extraordinarily sensitive to mechanical stimulation.
Sensitization >> when inflammatory mediators are present, the activation threshold for a stimulus drops, so a tissue that is not especially sensitive to a stimulus becomes extremely sensitive to mechanical stimulation
Sensitization occurs at the nerve terminal (peripheral sensitization) and also in the dorsal horn of the spinal cord (central sensitization)
Types of sensitization: peripheral and central
Primary afferent nociceptors can respond to different classes of noxious stimuli. For example, most nociceptors respond to heat; to intense cold; to strong mechanical stimuli such as a pinch; to changes in pH, especially an acidic environment; and to the application of chemical irritants, including adenosine triphosphate (ATP), serotonin, bradykinin (BK), and histamine
Noxious stimulus
Prototype for E.D.I.A
The E.D.I.A project (Estereotipos y Discriminación en Inteligencia Artificial, i.e., Stereotypes and Discrimination in Artificial Intelligence) developed an accessible tool for evaluating bias in natural language processing (NLP), specifically in word embeddings and large language models. This prototype, hosted on Huggingface, allows discrimination experts without advanced technical skills to participate in bias detection from the earliest stages of language-technology development, preventing downstream social harm.
The project seeks to empower discrimination specialists to audit bias in NLP tools that shape everyday applications. These technologies tend to replicate and perpetuate the biases present in the data they learn from. Traditional evaluations, however, require advanced programming and statistics skills, excluding discrimination experts from the core of the process.
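For context, the kind of raw embedding bias probe that E.D.I.A abstracts away typically boils down to comparing cosine similarities between target words and attribute words. The sketch below is purely illustrative, using toy vectors and a toy metric; it is not E.D.I.A's code or data:

```python
# Toy illustration of a word-embedding bias probe (not E.D.I.A's implementation).
# Real audits would load pretrained embeddings; here we use made-up 3-d vectors.
import numpy as np

toy_embeddings = {
    "ingeniera": np.array([0.9, 0.1, 0.3]),
    "enfermera": np.array([0.2, 0.8, 0.4]),
    "él":        np.array([1.0, 0.0, 0.2]),
    "ella":      np.array([0.1, 0.9, 0.3]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def gender_association(word):
    """Positive = closer to 'él', negative = closer to 'ella' (toy metric)."""
    return cosine(toy_embeddings[word], toy_embeddings["él"]) - \
           cosine(toy_embeddings[word], toy_embeddings["ella"])

for w in ("ingeniera", "enfermera"):
    print(w, round(gender_association(w), 3))
```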
Prototype Innovations
Intuitive visualizations: These replace technical metrics with graphical representations, making analysis more accessible and easier to communicate.
Simplified concepts: Complex mathematical and statistical terminology is avoided in favor of intuitive concepts.
Contextual relationships: Interactions between words, contexts of use, and multi-word expressions are represented graphically, enriching the analysis.
Implementation and Testing
The prototype was evaluated in interdisciplinary workshops in Argentina with 100 attendees from academia, industry, public institutions, and civil society. These sessions confirmed the tool's usefulness for:
Validating intuitions about discrimination.
Planning corrective actions.
Facilitating well-grounded debates with diverse stakeholders.
Impact and Possible Application in Colombia
In Colombia, where diverse bodies and linguistic realities converge, this tool could address specific challenges such as:
Biases related to gender, race, and class in artificial intelligence applications.
Exclusion of Indigenous and Afro-descendant languages from language models.
Harmful representations in digital media and social platforms.
The prototype and its methodology could be integrated into research practices, public policy, and technological development in Colombia. Moreover, its inclusive features could encourage greater participation by diverse communities in the creation of ethical and culturally relevant technologies.
Available Resources
Interactive prototype: Huggingface.
Source code and documentation: GitHub repository.
Introductory video: E.D.I.A: Estereotipos y Discriminación en Inteligencia Artificial.
With this approach, E.D.I.A shows that lowering technical barriers is essential for building inclusive and culturally sensitive artificial intelligence, and highlights the potential of collaborative methodologies in Colombia and across the Global South.
Note: This response was posted by the corresponding author to Review Commons. The content has not been altered except for formatting.
Learn more at Review Commons
We have not tested whether HAC1 mRNA is processed in S. cerevisiae. To address this question, we will perform RT-PCR to test it.
In addition, as requested by the reviewers, we will further test the involvement of Ire1 in the HU/DIA-induced phenotype in S. pombe. For that, we will reassess our RNA-seq data and compare it with data from Kimmig et al. (2012) (UPR activation in S. pombe). We will test the levels and splicing of Bip1 mRNA upon HU/DIA treatments by RT-PCR, and finally we will test the levels of Gas2p, which has been described to decrease upon Ire1/UPR activation in S. pombe.
We are confident that the results of these experiments and the re-analysis of our RNA-seq data will help us to infer the mechanisms that modulate the ER response to HU or DIA treatment.
We agree with the reviewer that it is important to determine the redox and the functional state of the secretory pathway in our conditions to fully understand the cellular consequences of these treatments, especially in the case of HU, as it is routinely used in clinics.
In this context, we have already included new data showing that HU or DIA treatment leads to alterations in the Golgi apparatus and in the distribution of secretory proteins (Figures 3A-B).
In addition, we plan to perform mass spectrometry experiments to detect protein glutathionylation under our conditions, as it has been previously shown that DIA treatment leads to glutathionylation of key ER proteins such as Bip1, Pdi, or Ero1 (Lind et al., 2002; Wang & Sevier, 2016), which might be reproduced upon HU treatment. We will specifically test the redox state of Bip1, Pdi, and/or Ero1 by immunoprecipitation and western blot.
Finally, we plan to test the folding and processing of specific secretory cargoes by western blot in our experimental conditions (See below, Reviewer 2, Major issue #1).
We have tested whether the addition of this antioxidant could prevent and/or revert the N-Cap phenotype. We found that NAC in combination with HU increased N-Cap incidence (Figure 5H). As NAC is a GSH precursor and we find that GSH is required to develop the phenotype of N-Cap (Figure 5A-B, D, G), this result further supports that the HU-induced cellular damage might involve ectopic glutathionylation of proteins.
Unfortunately, we have not tested NAC in combination with DIA, as NAC seems to reduce DIA as soon as the two come into contact, as judged by the change in the characteristic orange color of DIA, the same as happens when we combine GSH and DIA (Supplementary Figure 5A-B).
In this regard, the following information has been added to the manuscript (page 32-33, highlighted in blue):
"We also tested GSH addition to the medium in combination with either HU or DIA. When mixed with DIA, we noticed that the color of the culture changed after GSH addition (Figure S5A), which suggests that GSH and DIA can interact extracellularly, thus preventing us from being able to draw conclusions from those experiments. On the other hand, combining GSH with HU increased N-Cap incidence (Figure 5G), as expected based on our previous observations. Additionally, we checked whether the addition of the antioxidant N-acetyl cysteine (NAC), a GSH precursor, impacted upon the N-Cap phenotype. The results were the same as with GSH addition: when combined with HU, NAC increased N-Cap incidence (Figure 5H), whereas in combination, the two compounds interacted extracellularly (Figure S5B). These data align with NAC being a precursor of GSH, as incrementing GSH levels augments the penetrance of the HU-induced phenotype".
DIA is a strong oxidant, and HU treatment results in the production of reactive oxygen species (ROS). Therefore, one hypothesis would be that cytoplasmic chaperone foci represent oxidized and/or misfolded soluble proteins. Indeed, this hypothesis is supported by the appearance of cytoplasmic foci containing the guk1-9-GFP and Rho1.C17R-GFP soluble reporters of misfolding upon HU or DIA treatment (Figure 4I-J). We have already tested if they contain Vgl1, which is one of the main components of heat shock induced stress granules in S. pombe (Wen et al., 2010). However, we found that HU or DIA-induced foci lacked this stress granule marker, and indeed Vgl1 did not form any foci in response to these treatments. Therefore, our aggregates differ from the canonical stress-induced granules. We have yet to include this data in the manuscript, but we plan to do that for the final version.
To further explore the nature of the cytoplasmic aggregates induced by HU and DIA, we will test whether Hsp104-containing foci colocalize with guk1-9-GFP and/or Rho1.C17R-GFP foci.
To test whether these cytosolic aggregates result from retrotranslocation from the ER, we plan to use the vacuolar Carboxypeptidase Y mutant reporter CPY*, which is misfolded. This misfolded protein is imported into the ER lumen but does not reach the vacuole. Instead, it is retrotranslocated to the cytoplasm, where it is ubiquitinated and degraded by the proteasome (Mukaiyama et al., 2012). We will analyze by fluorescence microscopy the localization of CPY*-GFP and Hsp104-containing aggregates upon HU or DIA treatment, with or without proteasome inhibitors. We can also test the levels, processing and ubiquitination of CPY*-GFP by western blot, as ubiquitination of retrotranslocated proteins occurs once they are in the cytoplasm.
Our results suggest that these aggregates are not bound to ER membranes, as they do not appear in close proximity to the ER area marked by mCherry-AHDL in fluorescence microscopy images.
To fully rule out this possibility, we will test whether these Hsp104-aggregates colocalize with ER transmembrane proteins such as Rtn1 or Yop1, with Gma12-GFP that marks the Golgi apparatus and with the dye FM4-64 that stains endosomal-vacuole membranes.
We have tested whether deletion of key genes involved in autophagy affected the N-Cap phenotype. To this end, we used deletions of ypt1, vac8 and atg8 in strains expressing Cut11-GFP and/or mCherry-AHDL and found that none of them affected N-Cap formation. These data suggest that the core machinery of autophagy is not critical for HU/DIA-induced ER expansion. We plan to include this data in the final version of the manuscript along with the rest of experiments proposed.
To get deeper insights and to fully rule out a possible contribution of macro-autophagy to the HU- and DIA-induced phenotypes, we plan to analyze by western blot whether GFP-Atg8 is induced and cleaved upon HU or DIA treatments which would be indicative of macroautophagy activation.
To test whether the cytoplasmic aggregates are the result of an imbalance between ER-expansion and ER-phagy we plan to analyze the localization of GFP-Atg8 and Hsp104-RFP in the atg7Δ mutant, impaired in the core macro-autophagy machinery. In these conditions, the number or size of the cytoplasmic aggregates might be impacted.
On the other hand, it has been recently shown that an ER-selective microautophagy occurs in yeasts upon ER stress (Schäfer et al., 2020; Schuck et al., 2014). This micro-ER-phagy involves the direct uptake of ER membranes into lysosomes, is independent of the core autophagy machinery, depends on the ESCRT system, and is influenced by the Nem1-Spo7 phosphatase. ESCRT functions directly in scission of the lysosomal membrane to complete the uptake of the ER membrane. Interestingly, N-Caps are fragmented in the absence of cmp7 and especially in the absence of vps4 or lem2, the nuclear adaptor of the ESCRT (Figure 3E). We had initially interpreted these results as reflecting the need to maintain nuclear membrane identity during the process of ER expansion (Kume et al., 2019); however, the appearance of fragmented ER upon HU treatment in the absence of ESCRT might also be due to an inability to complete microautophagic uptake of ER membranes. To test this hypothesis, we plan to analyze whether the fragmented ER in these conditions co-localizes with lysosome/vacuole markers.
As stated in Taricani et al. (2001), hsp16 expression is strongly induced in a cdc22-M45 mutant background. We performed experiments in this mutant that were included in the original version of the manuscript and remain in the current version (Sup. Fig. 2C), and, under restrictive conditions, we do not see spontaneous N-Cap formation. If Hsp16 overexpression and nucleotide depletion were key to the mechanism triggering N-Cap appearance, we would expect this mutant to eventually form N-Caps when placed at the restrictive temperature. Furthermore, Taricani et al. show that Hsp16 expression was abolished in a Δatf1 mutant background in the presence of HU, and we found that this mutant is still able to produce N-Caps in HU; therefore, our results strongly suggest that the N-Cap phenotype is independent of the MAPK pathway and of the expression of hsp16.
We have addressed the status of secretion in cells treated with HU or DIA by assessing the morphology of the Golgi apparatus and the localization of several secretory proteins by fluorescence microscopy and found that both HU and DIA treatments impact the secretion system. In addition, we plan on addressing the redox status of ER proteins (Bip1, Pdi or Ero1) by biochemical approaches. Please see the answer to major issue #2 from reviewer 1.
We will also analyze by western blot the biogenesis and processing of the wild-type vacuolar Carboxypeptidase Y (Cpy1-GFP) and alkaline phosphatase (Pho8-GFP), two widely used markers to test the functionality of the ER/endomembrane system.
This same issue arose with reviewer 3, so we decided to replace the western blot image with one at a lower exposure and added a quantification showing that Bip1-GFP levels remain mostly constant between control conditions and treatments with HU and DIA.
We have also performed the suggested photobleaching experiment to analyze potential changes in crowding and mobility in Bip1-GFP upon HU treatment. We found that Bip1-GFP signal recovers after photobleaching the perinuclear ER in HU-treated cells that had not yet expanded the ER, showing that Bip1-GFP is dynamic in these conditions. However, Bip1-GFP signal did not recover after photobleaching the whole N-Cap in cells that had fully developed the expanded perinuclear ER phenotype, whereas it did recover when only half of the N-Cap region was bleached. This suggests that Bip1-GFP is mobile within the expanded perinuclear ER but cannot freely diffuse between the cortical and the perinuclear ER once the N-Cap is formed.
These data have been included in the revised version of the manuscript, in Figure 4B, Supplementary Figures 4A-B, and on page 23.
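As an aside, fluorescence recovery in this kind of photobleaching experiment is commonly summarized by fitting a single-exponential recovery curve to the post-bleach intensities. The sketch below is illustrative only, with synthetic data and a generic model; it is not the quantification pipeline used for the figures cited above.

```python
# Illustrative single-exponential FRAP recovery fit (generic model, synthetic data;
# not the analysis used in the study).
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, mobile_fraction, tau):
    """Normalized post-bleach intensity: I(t) = mobile_fraction * (1 - exp(-t/tau))."""
    return mobile_fraction * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 60, 31)                        # seconds after bleaching
rng = np.random.default_rng(0)
data = recovery(t, 0.8, 12.0) + rng.normal(0, 0.02, t.size)  # synthetic noisy trace

(mobile, tau), _ = curve_fit(recovery, t, data, p0=(0.5, 10.0))
print(f"mobile fraction ~ {mobile:.2f}, recovery time ~ {tau:.1f} s")
```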
As all three reviewers had comments about the CHX and Pm-related data, we revised those experiments and noticed a phenotype occurring upon HU+CHX treatment that had gone unnoticed previously and that changed our understanding about the effect of these drugs on the ER. Briefly, we noticed that, although CHX treatment decreases the HU-induced expansion of the perinuclear ER, it indeed induced expansion but in this case in the cortical area of the ER. This means that the phenotype of ER expansion in HU is not being suppressed by addition of CHX, but rather taking place in another area of the ER (cortical ER). We do not understand why this happens; however, these results show that ER expansion is exacerbated both in DIA and HU when combined with CHX. We have included this data in Figures 3C-D and in page 22.
We also examined the trafficking of secretory proteins that travel from the ER to the cell tips and noticed that this transit was affected under both drug treatments (Figures 3A-B). This suggests that, although protein synthesis continues when cells are exposed to the drugs (as can be seen from the higher levels of chaperones induced by both stresses (Figure 4C-E)), their protein synthesis capacity is likely reduced to a certain degree. All this information is now included in the manuscript (page 19).
Although we have only included experiments using one redox sensor in the manuscript, we had tested the oxidation of several biosensors during HU and DIA exposure, monitoring cytoplasmic, mitochondrial, and glutathione-specific probes. We have tried to use ER-directed probes; however, we have not been successful due to oversaturation of the probe in the highly oxidative environment of the ER lumen.
Although so far we have not been able to directly test the redox status of the ER with optical probes, we plan to test the folding and redox status of several ER proteins and secretory markers by biochemical approaches, so hopefully these experiments will give us more information on this question (See answer to Reviewer 1, Main Issue #2 and Reviewer 2, Main issue #1).
Pm causes premature termination of translation, leading to the release of truncated, misfolded, or incomplete polypeptides into the cytosol and the re-engagement of ribosomes in a new cycle of unproductive translation, as puromycin does not block ribosomes (Aviner, 2020; Azzam & Algranati, 1973). This is likely to decrease the number of peptides entering the ER that can be targeted by either HU or DIA, decreasing in turn ER expansion. Indeed, we have found that Pm treatment alone results in the formation of multiple cytoplasmic protein aggregates marked by Hsp104-GFP (Figure 4K), consistent with a continuous release of incomplete and misfolded nascent peptides to the cytoplasm. This would explain why Pm treatment suppresses N-Cap formation when cells are treated with either HU or DIA.
To further test this idea, we plan to carefully analyze the number, size and dynamics of Hsp104-containing cytoplasmic aggregates in cells treated with HU or DIA and Pm, where N-Caps are suppressed. We expect to find an increase in the accumulation of proteotoxicity in the cytoplasm in these conditions.
On the other hand, CHX inhibits translation elongation by stalling ribosomes on mRNAs, preventing further peptide elongation but leaving incomplete polypeptides tethered to the blocked ribosomes. This reduces overall protein load entering the ER by blocking new protein synthesis and stabilizes misfolded proteins bound to ribosomes. Accordingly, it has been shown previously that blocking translation with CHX abolishes protein aggregation (Cabrera et al., 2020; Zhou et al., 2014). Similarly, we have found that Hsp104 foci are not observed when we add CHX alone or in combination with HU or DIA (Figures 4K-L). These results suggest that cytoplasmic foci that we observe upon HU or DIA treatment likely contain misfolded proteins derived from ongoing translation.
As this question has also been raised by reviewer 1, we have decided to further explore the nature of these cytoplasmic foci (please see answer to Reviewer 1, Issue 3). Briefly:
We agree with the reviewer. We have toned down our statements about the relationship between thiol stress, the cytoplasmic chaperone foci and their relationship with ER expansion. We have removed from the text the statement that cytoplasmic foci are independent from ER expansion and thiol stress and have further revised our claims about CHX and Pm in the main text and the discussion to address these and the other reviewers' concerns.
To address this issue, we plan to analyze the localization of proteins involved in iron-sulfur cluster assembly and/or containing iron-sulfur clusters by in vivo fluorescence microscopy, such as DNA polymerase Dna2 or Grx5, during HU or DIA treatments.
Related to this, we have found that a subunit of the ribonucleotide reductase (RNR) aggregated in the cytoplasm upon HU exposure (Figure S2B). It is worth noting that RNR is an iron-containing protein whose maturation requires cytosolic Grxs (Cotruvo & Stubbe, 2011; Mühlenhoff et al., 2020). The catalytic site, the activity site (which governs overall RNR activity through interactions with ATP) and the specificity site (which determines substrate choice) are located in the R1 (Cdc22) subunits, which are the ones that aggregate, while the R2 subunits (Suc22) contain the di-nuclear iron center and a tyrosyl radical that can be transferred to the catalytic site during RNR activity (Aye et al., 2015). The aggregation of an RNR subunit could therefore reflect impaired synthesis and/or maturation caused by defects in iron-sulfur cluster formation: it has recently been reported that RNR cofactor biosynthesis shares components with cytosolic iron-sulfur protein biogenesis and that the iron-sulfur cluster assembly machinery is essential for iron loading and cofactor assembly in RNR in yeast (Li et al., 2017). This information has been added to the discussion.
We modified the language used to describe the experiment in the manuscript, as suggested by the reviewer, to clarify that N-Caps never form while DTT is kept in the medium. In addition, we have also performed a pre-treatment with DTT: 1 mM DTT was added one hour beforehand, the reducing agent was then washed out, and HU was added to the medium. The result indicates that pre-treating cells with DTT significantly reduces N-Cap formation after a 4-hour incubation with HU, which suggests that triggering reducing stress "protects" cells from the oxidative damage induced by HU and DIA. This information has also been added to the manuscript (Figure 2J).
We have revised and expanded our discussion. In addition, in the final revision of our work we will increase the discussion in the context of the new results obtained.
__ It would be helpful to the reader to explain what some of the reporters are in brief. For example, Guk1-9-GFP and Rho1.C17R-GFP reporters__.
Guk1-9-GFP and Rho1.C17R-GFP are thermosensitive mutants of guanylate kinase and the Rho1 GTPase, respectively, that have previously been used in S. pombe as soluble reporters of misfolding under heat stress. During mild heat shock, both mutants aggregate into reversible protein aggregate centers (Cabrera et al., 2020). This information has now been added to the manuscript.
__ Supplementary Figure 3. The main text suggests panel 3A is focused on diamide treatment. The figure legend discusses this in terms of HU treatment. Which is correct?__
We thank the reviewer for pointing out this mistake. The experiment was performed in 75 mM HU, the legend was correct. It has now been corrected in the manuscript.
__ The authors use ref 110 and 111 to suggest the importance of UPR-independent signaling. However, they do not point out that this UPR-independent signaling referred to in these papers is dependent on the UPR transmembrane kinase IRE1.__
We have included pertinent clarification in the new discussion.
Regarding the levels of Bip1, we now show in Figure 4 a less exposed image of the western blot, and a quantification of Bip1-GFP intensity from three independent experiments. We find that, in our experimental conditions, neither HU nor DIA treatments significantly altered Bip1 levels.
With respect to the RNA-Seq, as we mentioned in the major issue 1 from reviewer 1, we plan to reassess our data to further clarify and add information about ER stress markers induced or repressed by HU and DIA. We also will test the levels of Bip1 and several UPR targets by RT-PCR and by western blot.
We have found that puromycin treatment alone results in the formation of cytoplasmic foci containing Hsp104, suggesting that puromycin indeed increases folding stress in the cytoplasm. We have now included these data in Figure 4K (please see Main Issue #5 from Reviewer 2). Pm suppresses the formation of N-Caps induced by HU or DIA; however, we have not addressed cell survival or fitness under these conditions and therefore cannot conclude whether this effect is protective.
In addition, upon the reevaluation of our data, we have realized that CHX treatment suppresses HU-induced perinuclear expansion, although it does not suppress but instead enhances ER expansion in the cortical region. This data has been added to the present version of the manuscript in Figure 3C-D (page 22).
As the reviewer requested, we plan to test the effect of anisomycin (thapsigargin has been reported not to work in yeast, as they lack the SERCA-type Ca2+ pump that this drug targets (Strayle et al., 1999)).
Regarding the downstream effects of HU or DIA treatment on ER proteostasis, we plan to further explore the effect of these drugs on the secretory system (please see major issue #2 from Reviewer 1) and to evaluate the redox state and processing of several key ER and secretory proteins. We will further explore the nature of the aggregates that appear in the cytoplasm under our experimental conditions, which will also shed light on the downstream effects of these drugs on cytoplasmic proteostasis (please see answer to issue #5 from Reviewer 2).
We plan to readdress this topic by analyzing the genes that have been described as differentially expressed during UPR activation in S. pombe and comparing them with our data: first by reevaluating our transcriptomic data, and second by choosing Bip1 and some of the other differentially expressed genes in Kimmig et al. (2012) (for example, Gas2, Pho1 or Yop1) and assessing their mRNA levels by RT-PCR under our experimental conditions. As an alternative approach, we will also analyse the levels of UPR targets by western blot upon HU or DIA treatment.
We are confident that the results of these experiments and the re-analysis of our RNA-Seq data will allow us to infer the mechanisms that modulate the ER response to HU or DIA treatment.
We thank the reviewer for pointing this out. We forgot to include this information which now appears in the M&M section as follows:
"A gene was considered as differentially expressed when it showed an absolute value of log2FC(LFC){greater than or equal to}1 and an adjusted p-valueIn this regard, we plan to perform proteome-wide mass spectrometry experiments to detect protein glutathionylation in our conditions, as it has been previously shown that DIA treatment leads to glutathionylation of key ER proteins such as Bip1, Pdi or Ero1 (Lind et al., 2002; Wang & Sevier, 2016), which might by reproduced upon HU treatment. We will also test specifically the redox state of Bip1, Pdi and/or Ero1 by immunoprecipitation and western blot. We also plan to test the folding and processing of specific secretory cargoes by western blot in our experimental conditions (see below, and Reviewer 2, Major issue #1).
We have already tested whether mutant strains with deletions of key enzymes in both cytoplasmic and ER redox systems are able to expand the ER upon HU or DIA treatment. We have found that only pgr1Δ (glutathione reductase), gsa1Δ (glutathione synthetase) and gcs1Δ (glutamate-cysteine ligase) mutants fully suppressed N-Cap formation, which suggests that glutathione has an important role in the phenotype of ER expansion. We have now added the pgr1Δ mutant strain to the main text of the manuscript (Figure 5C, page 31).
We investigated the effects on the ER in mammalian cells not only of HU but also of DIA. The results of this experiment mimicked the effect of HU (an increase in ER-ID fluorescence intensity with DIA). We initially excluded this information from the manuscript because we were focusing on HU at that point, given its importance as a drug currently used in the clinic. In this new version of the manuscript, we have included an extra panel in supplementary figure 5 to show the results from DIA in mammalian cells.
1) Figure 1A should show individual data points (i.e. 3 averages of independent experiments) in the bar graph.
Although we initially changed the graph, we believe the bar-plot layout makes it easier to read, so we reverted to the original format. Moreover, as the other graphs similar to 1A are all shown as bar plots, changing this one would mean changing all of them to avoid visual inconsistency. We therefore preferred to keep the figure as it was in the original version. However, we include here the graph showing the averages of each independent experiment.
2) It is argued that Figure 1B demonstrates that the SPB is clustered with the NPC cluster. However, a single image is not enough to support this claim, as the association could be coincidental.
We have changed the image to show a whole population of cells, with several of them having NPC clusters, and we have indicated the position of SPB in each of them (all colocalizing with the N-Cap).
3) Figures 1B through 1D do not indicate the HU concentration.
We thank the reviewer for pointing out this mistake. Figures 1B and 1C represent cells exposed to 15 mM HU for 4 hours, while the graph in 1D shows the results from cells exposed to 75 mM HU over a 4-hour period. This information has been now added to the corresponding figure legend.
4) I was confused by the photobleaching experiments of Figure S1. How do the authors know that there is complete photobleaching of the cytoplasm or nucleus in the absence of a positive control? If photobleaching is incomplete, they could be measuring motility without compartments rather than transport between compartments, and hence the conclusion that trafficking is unaffected could be wrong.
Our control is the background of each microscopy image: after the laser bleaches a cell, we verify that the intensity of the bleached area coincides with the background noise. In this way, we ensure that fluorescence from any remaining GFP is completely removed from the bleached area.
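For readers who want to see what such a check can look like in practice, here is a minimal sketch, not the authors' pipeline; the file name, ROI coordinates and tolerance are invented for illustration. It simply compares the mean intensity of the bleached region with a cell-free background region in the first post-bleach frame:

```python
import numpy as np
from tifffile import imread  # assumes the tifffile package is installed

# Illustrative post-bleach frame and regions of interest (slices are made up).
frame = imread("post_bleach_frame.tif").astype(float)
bleached_roi = frame[40:60, 40:60]    # area hit by the laser
background_roi = frame[0:20, 0:20]    # cell-free background region

# The bleached area should be indistinguishable from background noise.
tolerance = 1.5  # arbitrary fold-change over background considered "complete" bleaching
ratio = bleached_roi.mean() / background_roi.mean()
print("complete bleaching" if ratio <= tolerance else "residual fluorescence detected")
```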
5) On page 8, they say "exposure to DIA" when they intend HU.
This has been corrected in the manuscript.
6) In Figure S3A, the colocalization of INM proteins with the ER are presented. It is not clearly explained what conclusions are meant to be drawn from this figure, but it seems it would have been more useful to compare INM and Cut11, to see whether the NPCs are localizing at the INM or ONM.
We have added an explanation in the main text to clarify the main conclusions derived from this figure. We think that NPCs localize in a section of the nucleus where the two membranes (INM and ONM) are still bound together.
7) I had to read Figure 2C's description and caption several times to understand the experiment. A schematic would be helpful. 20 mM HU is low compared to most conditions used. Does repositioning eventually take place for 75 mM HU or 3 mM DIA treatment, or do the cells just die before they get a chance?
20 mM HU was used in this experiment to provide a time frame suitable for analysis after HU addition, as higher HU concentrations increase the repositioning time. We found that both HU-induced (75 mM, 4 h) and DIA-induced (3 mM, 4 h) ER expansions are reversible upon drug washout. If HU is kept in the medium, ER expansions are eventually resolved. However, DIA is a strong oxidant, and if it is kept in the medium ER expansions are not resolved and cells do not survive.
8) Figure 2D shows little oxidative consequence from 75 mM HU treatment until 40 min., the same time that phenotypes are observed (Figure 1D). Is this relationship consistent with the kinetics of other concentrations of HU, or of DIA? Seems like a pretty important mechanistic consideration that can rationalize the effects of the two oxidants.
Thanks to this comment, we realized that the notation underneath Figure 1D (1E in the new version of the manuscript) could be misleading, as the timings there looked arbitrary. We have now clarified this panel: the timings are normalized to the moment when NPCs cluster. The fact that this moment previously coincided with "40 minutes" does not mean that N-Caps appear at that time point; quite the opposite, as most of them start to appear after more than 2 hours in HU. We hope this is clearer now.
9) Figure S4 is missing the asterisk on the lower left cell.
Fixed in the corresponding figure.
10) How is roundness determined in Figure S4B?
Roundness in Figure S4B (now S2E) is determined the same way as in Figure 1D, and as is described in the Method section (copied below). A clarification has been added to the legend to address that.
The 'roundness' parameter in the 'Shape Descriptors' plugin of Fiji/ImageJ was used after applying a threshold to the image in order to select only the more intense regions and subtract background noise (Schindelin et al., 2012). Roundness descriptor follows the function:
Roundness = 4 × [Area] / (π × [Major axis]²)
where [Area] is the area of an ellipse fitted to the selected region in the image and [Major axis] is the diameter of the round shape that, in this case, would fit the perimeter of the nucleus.
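As a cross-check, the same descriptor can be recomputed outside Fiji from the measured area and major axis. The sketch below assumes measurements exported to CSV with "Area" and "Major" columns, which depends on the export settings and is an assumption here:

```python
import math
import pandas as pd

# Measurements exported from Fiji/ImageJ; "Area" and "Major" column names are assumed.
m = pd.read_csv("nucleus_measurements.csv")

# Roundness = 4 * Area / (pi * Major_axis^2), as in the Shape Descriptors plugin.
m["roundness"] = 4 * m["Area"] / (math.pi * m["Major"] ** 2)
print(m[["Area", "Major", "roundness"]].head())
```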
11) What threshold is used to determine whether cells analyzed in Figures S4C have "small ER" or "large ER"?
Cells were considered to have "large ER" when the ER area in the projection of a 3-Z section exceeded 4 μm² (more than twice the mean ER area of cells with N-Caps under milder conditions). This has now been clarified in the legend of the corresponding figure.
__12) The authors interpret Figure 4K as indicating that ER expansion is not involved in the generation of punctal misfolded protein aggregates. However, the washout occurs only after the proteins have already aggregated. The proper interpretation is that the aggregates are not reversible by resolution of the stress, and hence are not physically reliant on disulfide bonds. __
We agree with the reviewer and have modified the interpretation of the indicated figure accordingly (page 30).
The speculation that these proteins are iron dependent is a stretch; there is no reason to believe that losses of iron metabolism are the most important stress in these cells. It seems at least as likely that oxidizing cysteine-containing proteins in the cytosol or messing with the GSH/GSSG ratio in the cytosol would make plenty of proteins misfold; oxidative stress in budding yeast does activate hsf1. However, this point could be addressed by centrifugation and mass spectrometry to identify the aggregated proteome. It is also surprising that the authors did not investigate ER protein aggregation, perhaps by looking at puncta formation of chaperones beyond BiP. By contrast, the fact that gcs1 deletion prevents ER expansion but does not prevent Hsp104 puncta does support the idea that cytoplasmic aggregation is not dependent on ER expansion.
To address this suggestion, we plan to analyze the localization of other chaperones and components of the protein quality control such as the ER Hsp40 Scj1 or the ribosome-associated Hsp70 Sks2.
13) Figure 4L is cited on page 28 when Figure 4K is intended.
This has been corrected in the text, although new panels have been added and now it is 4N.
At that time, the whole town took an interest in the hunger artist; interest grew from one fasting day to the next; everyone wanted to see the hunger artist at least once a day; toward the end there were subscribers who sat all day long in front of the small barred cage; even at night visits were organized, held by torchlight to heighten the spectacle; when the weather was fine, the cage was carried outside, and it was then above all the children who were brought to see the hunger artist [6].
Quotation from "Ein Hungerkünstler" to support the notion of an audience. The hunger artist puts himself on display and "feeds" on the village's attention. Comparison with the "performance" of fasting in "ana-mia" communities.
These web contents thus illustrate a staging of bodies that is specific to digital environments. The manifestation of identity and agency there is eminently declarative and performative [3]. On the Internet, the self is named, inscribed in narratives, or represented by images and traces. The presence of users in digital interaction environments "thus has a performative character insofar as we must assume that the interlocutor is what they claim to be [4]
Epistemic argument: the performative character of self-image in digital environments, particularly of the body.
– Because I am capable of it. – Because I am the hunger artist. – Because I want to. – Because I can do it, I can do anything! – Because the others die of jealousy when they look at me. – Because every day I feel brand new! – Because I am not going to give up. – Because I have no time to waste on food. – Because I can achieve anything I set out to do! – Because I have the willpower. – Because it is my life. – Because it is my choice. – Because I want to be perfect.
Example text illustrating the content available on the eating-disorder web.
Technical, Social, and Ethical Considerations of the System Workflow
The archetypal profile for this system is a woman who is active on social media and suffers digital violence, typically an opinion leader or activist. This focus takes into account that many of the women victims are opinion leaders, communicators, academics, or activists with political, social, and human-rights influence. The design of the interaction with the chatbot must therefore be empathetic, inclusive, and sensitive to the embodied realities of women in Colombia.
Artificial Intelligence in the System
Artificial Intelligence will be key to processing and analyzing the reported cases. The automated system will classify the information obtained, identifying the following elements:
Attack types: typologies of digital violence.
Keywords associated with the harassment: to identify recurring patterns.
Aggressor profiles: to identify possible harasser profiles.
Frequency and recurrence: to track the appearance of attacks in specific contexts (for example, during socio-political crises).
The system will also generate automatic alerts when coordinated attack patterns or recurrent aggressor profiles are identified. This information will be stored in a database that will feed publicly accessible data visualizations, which can be used by researchers, journalists, and other stakeholders to develop public policy or advocacy actions.
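To make the classification step concrete, here is a minimal, purely illustrative sketch of how a reported text could be tagged with attack types and matched keywords using simple pattern matching; the categories and keyword lists are assumptions, not the project's taxonomy:

```python
import re

# Illustrative keyword lists per attack type (Spanish terms kept as examples; not exhaustive).
ATTACK_KEYWORDS = {
    "harassment": ["acoso", "hostigamiento", "insulto"],
    "threat": ["amenaza", "te voy a", "cuidado"],
    "doxxing": ["dirección", "teléfono", "datos personales"],
    "non_consensual_images": ["foto íntima", "difundir imagen"],
}

def classify_report(text: str) -> dict:
    """Tag a free-text report with attack types and the keywords that matched."""
    text_low = text.lower()
    matches = {
        attack: [kw for kw in kws if re.search(re.escape(kw), text_low)]
        for attack, kws in ATTACK_KEYWORDS.items()
    }
    matches = {attack: kws for attack, kws in matches.items() if kws}
    return {"attack_types": list(matches), "keywords": matches}

# Example usage with a made-up report.
print(classify_report("Recibí una amenaza y publicaron mi dirección en redes."))
```

A real system would of course need richer linguistic processing than keyword matching, but the same record structure (attack types plus matched evidence) could feed the alerting and visualization layers described above.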
Data Protection and Privacy
The data requested when reporting digital violence will be limited and confidential. Victims will be asked to provide:
Approximate date of the attack.
Social media platform where the attack occurred.
Evidence of the attack (screenshot, link, details of the aggressor's profile).
Optional data such as name (not necessarily real), email address, age, occupation, and city.
The system will include a clear and accessible privacy policy explaining how the data will be used for follow-up and reporting. This will ensure that the process is transparent and respectful of users' privacy.
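A minimal sketch of how this limited, mostly optional data could be modeled; the field names are assumptions introduced for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ViolenceReport:
    # Minimal required fields
    approximate_date: str                               # e.g. "2024-03"; only an approximate date is requested
    platform: str                                       # social network where the attack happened
    evidence: List[str] = field(default_factory=list)   # screenshots, links, aggressor profile details
    # Optional, privacy-preserving fields
    name: Optional[str] = None                          # not necessarily the real name
    email: Optional[str] = None
    age: Optional[int] = None
    occupation: Optional[str] = None
    city: Optional[str] = None

# Example usage with made-up data.
report = ViolenceReport(approximate_date="2024-03", platform="X/Twitter",
                        evidence=["https://example.org/screenshot.png"])
print(report)
```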
Impact and Scope
The implementation of this system in Colombia will seek to generate social impact through an outreach campaign informing women that this platform is available for reporting DGV cases and receiving guidance. The system aims to:
Develop a database model that categorizes and quantifies cases of digital violence in Colombia.
Generate data visualizations that are downloadable and useful for various stakeholders.
Produce reports that serve as tools for public-policy decision-making and for supporting feminist and human-rights organizations.
Future Development
The development of the chatbot prototype would be grounded in feminist principles, and feminist Artificial Intelligence guidelines would be used to ensure that the system design is not only functional but also ethical and respectful toward women. This chatbot will not be a stand-alone solution, but part of a comprehensive support system that includes emotional, legal, and digital resources and assistance. In addition, collaborations with feminist organizations in Colombia and with the public sector will be sought to strengthen the system's impact and implementation.
Securing funding for the prototype development and testing phase.
Development and refinement of the chatbot with the inclusion of a team of specialized programmers.
Establishment of alliances with national and international organizations and bodies to support the implementation phase.
Publication of the final report and technical documentation for dissemination in academia and open media.
Presentation of the proposal: towards a feminist chatbot prototype
The design could incorporate possibilities sensitive to embodiment, translation, and Artificial Intelligence in order to adapt it to the needs of women in Colombia.
Stages of the Interaction Process
Initial Report (Step 1)
The affected woman connects with the chatbot and is invited to recount her experience, using empathetic language that neither victimizes nor blames. She is asked for details about the attack (platform, type of violence, time, the possibility of uploading evidence, etc.).
Artificial Intelligence is used to classify the cases based on keywords and patterns of digital violence, generating a database for deeper analysis.
Guidance for Reporting on Social Platforms (Step 2)
The chatbot guides the victim on how to file a report on the platform where the attack occurred, providing direct links to forms and tutorials.
Guidance for Reporting to the Police (Step 3)
Information would be offered on how to report the case to the Colombian police, providing relevant links and contact numbers.
Legal Support (Step 4)
The chatbot provides guidance on the Colombian legal framework, even though digital violence may not yet be fully codified as a crime, and offers links to organizations that provide legal advice.
Emotional Support (Step 5)
The chatbot offers access to information on emotional and psychological support, including organizations working in mental health and accompaniment for women victims of violence.
Digital Security (Step 6)
Guidance is offered on how to improve digital security, providing downloadable guides and recommendations on specialized cybersecurity platforms adapted to the Colombian context.
Case Monitoring (Step 7)
The chatbot follows up on the case, asks whether the violence persists, and offers the option of receiving information about workshops related to digital violence and online protection.
Connection with a Community (Step 8)
The possibility is offered of joining a community of women who have experienced similar situations, creating a safe space for mutual support.
Closing the Dialogue (Step 9)
The chatbot closes the conversation with a message of support and provides ongoing access to the available resources and services.
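To illustrate the nine-step flow above, here is a minimal sketch of the dialogue as an ordered sequence of steps; the step identifiers and prompt texts are placeholders, not the actual chatbot script:

```python
# Each step is (identifier, prompt shown to the user); all texts are placeholders.
STEPS = [
    ("initial_report", "Tell us what happened; you can share the platform, type of attack and evidence."),
    ("platform_reporting", "Here is how to report the attack on the platform where it occurred."),
    ("police_reporting", "Here is how to report the case to the Colombian police."),
    ("legal_support", "These organizations offer legal advice on digital violence."),
    ("emotional_support", "These organizations offer psychological and emotional support."),
    ("digital_security", "These guides can help you improve your digital security."),
    ("case_monitoring", "Is the violence still ongoing? Would you like information about workshops?"),
    ("community", "Would you like to join a community of women with similar experiences?"),
    ("closing", "Thank you for trusting us; these resources remain available to you."),
]

def run_dialogue(answers: dict) -> None:
    """Walk through the steps in order, printing each prompt (a stand-in for the real chatbot engine)."""
    for step_id, prompt in STEPS:
        print(f"[{step_id}] {prompt}")
        # In a real system the user's answer would be stored here and could branch the flow.
        _ = answers.get(step_id)

run_dialogue(answers={})
```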
Embodiment (Corporalidades)
The chatbot must understand the diverse ways in which women experience digital violence, considering not only the emotional and psychological consequences but also how these attacks can affect their body, well-being, and safety. The chatbot must offer an interaction that is sensitive to these aspects, ensuring that the victim feels understood and not judged.
Translation
The language and available resources must be adapted to the Colombian context, taking into account the country's diverse sociocultural realities, such as differences in dialects and social classes and the specificity of the communication platforms most used by Colombian women. The chatbot can offer Spanish versions with regional terminology to ensure better understanding.
Artificial Intelligence
Artificial Intelligence plays a fundamental role in analyzing the collected data, identifying patterns of abuse, and helping to classify the types of digital violence. In addition, Artificial Intelligence can optimize the system to offer faster, more personalized responses, learning from each interaction to improve assistance in real time.
Methodology: Applying feminist principles in the research with women who have experienced DGV situations
Creating a response system for women who have suffered digital gender-based violence from a feminist perspective means grounding the entire design and creation process in feminist principles. This approach, founded on participatory co-creation, pluralism, user agency, and the incorporation of embodiment, seeks technological solutions that respect and amplify the experiences and needs of the affected women.
Key Principles for Feminist Design
Pluralism and Participation
Actively involve affected women and feminist organizations throughout the design process to ensure that the solutions reflect their specific experiences and needs.
Situated Knowledge
Recognize power dynamics and avoid reproducing structural inequalities. The methodology must be inclusive and ethical, giving space to historically marginalized voices.
Embodiment (Corporalidad)
Incorporate the emotional and bodily dimension into the research, understanding how women live through and process episodes of digital violence.
User Agency
Design systems in which women are protagonists and agents of their own process, rather than delegating power solely to designers or institutions.
Research Methodology
The system design was structured in two main phases:
Co-creation with Affected Women and Feminist Organizations
Through in-depth interviews and participatory exercises (such as emotional journey maps), the experiences, needs, and wishes of affected women were explored.
Key Findings
A feeling of loneliness and disorientation when facing digital violence.
Self-imposed restrictions on social media, such as making accounts private and limiting posts.
The need for support communities to share experiences and avoid revictimization.
A desire for technological systems that offer clear and rapid guidance.
Interviews with Institutions and Experts
Strategic actors, such as public institutions and specialized organizations, were consulted to validate and complement the identified needs.
Technological Proposal: Incorporating Artificial Intelligence and Translation
Using AI for Detection and Analysis
Patterns of violence: identify trends in the use of keywords, emojis, or recurring behaviors.
Preventive alerts: implement systems that indicate risk levels and suggest immediate actions.
Multilingual Support
Implement machine translation to guarantee accessibility for women from different regions and linguistic contexts in Colombia.
Community and Care Approach
Create virtual support networks where women can share experiences and receive guidance in real time.
Specific Recommendations for Applying It in Colombia
Context and Localization
Adapt the system to the specific needs of Colombian women, considering barriers to technological access and the limited institutional support in certain DGV cases.
Guidance Protocol
Design a protocol that allows users to understand what digital violence is, how to proceed, and whom to contact for support.
Confidentiality and Privacy
Ensure that the system does not require unnecessary personal information and respects users' privacy, especially in contexts of violence.
Collaboration and Sustainability
Foster alliances between feminist organizations, local institutions, and Artificial Intelligence experts to ensure the project's sustainability.
Summary of feminist principles' framework for AI
Artificial Intelligence can be a key tool for addressing DGV in Colombia through the development of chatbots or conversational agents:
Advise and guide, providing information on rights, reporting channels, and access to legal, psychological, and emotional support.
Prevent and detect risk patterns by analyzing keywords, emojis, or interactions to identify possible crises of violence and generate alerts.
Empower communities by allowing victims to access support networks and resources anonymously and safely, respecting privacy and data principles.
Key principles for developing feminist AI
In line with the principles proposed by Juliana Guerra (2022) and based on previous experiences with chatbots in other countries, AI solutions should:
Be collaborative and participatory, co-designed with communities, activists, and experts to reflect the specific needs of the Colombian context.
Incorporate situated knowledge, recognizing the sociocultural particularities and diverse embodiments of the users.
Guarantee privacy and consent, using data transparently and protecting victims' identities.
Foster autonomy by creating open-source, accessible tools and avoiding exclusive dependence on public institutions.
A chatbot inspired by initiatives such as Maruchatbot or Soy Violetta could be designed in Colombia to:
Provide guidance in Spanish and Indigenous languages.
Incorporate intersectional approaches that recognize the realities of rural, Afro-descendant, and LGBTIQ+ women.
Detect risks through Artificial Intelligence, but without storing unnecessary sensitive information.
Build alliances with local and academic organizations to ensure sustainability and contextualization.
Chilean context
In Colombia, as in Chile, given the absence of systematized data, specific public policies, and institutional support mechanisms, women face this violence on an individual basis, without consistent access to support networks or adequate resources. The situation becomes more complex when diverse embodiments and social contexts are considered, such as those of rural, Afro-descendant, Indigenous, and LGBTQ+ women, who face forms of violence exacerbated by their intersectionality.
The country lacks a solid regulatory framework for confronting DGV, despite recent legislative initiatives that partially address the problem. Reporting on social media platforms, the main mechanism used by victims, has limitations such as the lack of follow-up, the continuation of attacks, and the opacity of the platforms' procedures.
Incorporating Artificial Intelligence could transform how DGV is addressed in Colombia through:
Creation of systematized data systems
Integrated, centralized databases that make it possible to identify patterns, trends, and aggressor profiles.
Predictive analysis to anticipate risks and improve protection mechanisms for women.
Development of chatbots with a feminist approach
Prototypes such as conversational assistants that provide legal, psychological, and emotional guidance, adapted to the country's regional and cultural contexts.
Incorporation of machine translation for Indigenous languages and dialects, guaranteeing accessibility in diverse communities.
Strengthening of virtual support networks
Promotion of initiatives led by feminist collectives and technology activists to design tools that expand community response capacity.
Creation of safe spaces to share experiences and seek help without fear of reprisals.
Prevention through Artificial Intelligence
Automated educational campaigns to inform about DGV and empower women in the safe use of digital technologies.
Feminist principles for AI design
The design of these tools must incorporate feminist principles that question data extractivism and prioritize users' privacy and security. In addition, they must consider the diverse embodiments and experiences of women in Colombia, ensuring that the solutions do not perpetuate structural inequalities.
The development of solutions based on Artificial Intelligence, together with adequate public policies and the active participation of women in their design, can be a crucial step toward addressing digital gender-based violence in Colombia. This would contribute not only to the prevention of and response to cases, but also to the creation of a safer and more inclusive digital environment for all women.
Summary of Gender Digital Violence
In Colombia, digital gender-based violence (DGV) does not only affect women because of their mere presence on digital platforms; it worsens when they participate actively in public debate, political leadership, or the defense of human rights and gender equality. This violence, an extension of offline gender-based violence, has profound consequences for women's personal, emotional, and public lives, affecting their identity, dignity, physical and psychological integrity, and their right to freedom of expression.
Political violence against women, defined by the Organization of American States (OAS) as any gender-based action that seeks to limit or nullify the exercise of their political rights, manifests itself repeatedly on social media. These digital spaces, strategic for communicators, activists, and women leaders, are used for harassment, hate speech, symbolic attacks, and threats, with the aim of silencing their voices or inhibiting their public participation.
The impact of DGV and digital political violence is evident in self-censorship, the deletion of social media profiles, and withdrawal from public debate, perpetuating existing gender barriers. This especially affects Indigenous, Afro-descendant, rural, and LGBTQ+ women, whose embodiments and experiences of violence are crossed by multiple forms of discrimination.
In Colombia, where social inequalities and gender-based violence converge with high rates of political violence, Artificial Intelligence could play an essential role.
Monitoring of digital violence
Use of Artificial Intelligence to detect patterns of hate speech, harassment, and threats directed at women on social media.
Mapping of the dynamics of violence across different regions and platforms.
Personalized guidance
Creation of chatbots that provide immediate support to DGV victims, including machine translation into Indigenous and regional languages, adapting to the country's pluricultural realities.
Provision of information on legal and psychological resources specific to women at risk.
Prevention and awareness-raising
Implementation of automated, personalized campaigns to educate about digital gender-based violence and its consequences, using social media to counter hate narratives.
Ethics and inclusion
Any technological solution must integrate an intersectional approach that considers the diverse embodiments and contexts of women in Colombia, respecting privacy and avoiding extractivist data practices. It is also crucial to include the active participation of affected women in the design and implementation of these tools, to guarantee their relevance and effectiveness.
Translation, embodiment, and Artificial Intelligence can transform how DGV is addressed in Colombia, strengthening women's resilience and guaranteeing safer digital spaces. However, for these solutions to be sustainable, they must be accompanied by public policies, inter-institutional collaboration, and social commitment to eradicating the structural roots of gender-based violence.
The f<a+i>r network (Red de Investigación Feminista en Inteligencia Artificial, the Feminist Research Network in Artificial Intelligence)
Digital gender-based violence (DGV) in Colombia reflects the inequalities and power dynamics present in society, adapted to the technological sphere. This phenomenon is not static; it has evolved alongside the development of technologies and their social use, transforming from the beginnings of the Internet in the 1990s to the current context of social media, mobile devices, and mass interconnectivity. DGV encompasses any conduct, action, or behavior that involves aggression against women, girls, and adolescents, with a strong gender dimension that perpetuates inequalities.
DGV has been recognized by international bodies such as the United Nations and the Spotlight Initiative, which highlight the use of information and communication technologies (ICT) as a means that facilitates, aggravates, or amplifies acts of gender-based violence.
It is identified as any gender-based action that causes physical, psychological, economic, or symbolic harm, instigated or assisted by technologies such as cell phones, the Internet, and social media.
In Colombia, as in other Latin American countries, between 10 and 12 types of DGV have been identified, including:
Unauthorized access: intrusion into or control of personal accounts or devices.
Manipulation of information: alteration or dissemination of personal data.
Harassment and surveillance: constant online monitoring.
Non-consensual dissemination of intimate content: publication of images or personal information.
These forms of violence disproportionately affect women because of the gender roles and power dynamics that carry over into the digital space.
Artificial Intelligence can be a crucial tool for addressing DGV, especially in a country like Colombia, where technological and social inequalities complicate the identification of and response to these cases. From a feminist, inclusive standpoint, the following applications are relevant:
Detection and prevention
Use of natural language processing (NLP) to identify hate speech and threats on social media.
Analysis of patterns in data to prevent recurring cases and map aggressor profiles.
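As a sketch of the NLP component mentioned above, here is a toy supervised text classifier; the training examples and labels are invented for illustration, and a real system would need a curated, context-specific corpus and careful evaluation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = abusive/threatening, 0 = neutral.
texts = [
    "te voy a encontrar y vas a pagar",             # threat
    "cállate, nadie quiere escuchar a una mujer",   # misogynistic harassment
    "gracias por compartir tu análisis",            # neutral
    "muy interesante tu artículo, saludos",         # neutral
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Predicted label for a new, made-up message (0 = neutral, 1 = flagged).
print(model.predict(["vas a pagar por lo que escribiste"]))
```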
Guidance and support for victims
Creation of a chatbot designed to provide initial assistance to women victims of DGV, offering information on legal, psychological, and security resources.
Machine translation adapted to reach women from the country's different linguistic and cultural regions.
Systematization of cases
Generation of secure databases to document incidents and propose evidence-based public policies.
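A minimal sketch of such a case database using Python's standard sqlite3 module; the schema and field names below are assumptions for illustration, not the project's actual data model:

```python
import sqlite3

# Illustrative schema: one row per reported incident.
conn = sqlite3.connect("dgv_cases.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS cases (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        approximate_date TEXT,
        platform TEXT,
        attack_type TEXT,
        keywords TEXT,
        evidence_url TEXT
    )
""")

# Example insertion with made-up data.
conn.execute(
    "INSERT INTO cases (approximate_date, platform, attack_type, keywords, evidence_url) "
    "VALUES (?, ?, ?, ?, ?)",
    ("2024-03", "X/Twitter", "threat", "amenaza", "https://example.org/screenshot.png"),
)
conn.commit()

# Aggregate view that could feed public visualizations (counts per attack type).
for attack_type, count in conn.execute("SELECT attack_type, COUNT(*) FROM cases GROUP BY attack_type"):
    print(attack_type, count)
conn.close()
```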
Ethical and social considerations
It is essential that the use of Artificial Intelligence respect women's privacy and autonomy, avoiding data extractivism and revictimization. Moreover, its implementation must be sensitive to embodiment, understanding that experiences of violence are mediated by factors such as gender, race, class, and geographic location.
Combining technological development with a feminist standpoint can transform how Colombia confronts DGV. This requires not only innovation in Artificial Intelligence but also collaboration among government, civil society, and international bodies to guarantee that the solutions are inclusive, ethical, and effective.
Digital Gender-Based Violence (DGV)
In Colombia, as in other contexts, gender-based violence against women extends into digital spaces as an expression of the continuum of patriarchal domination. Digital gender-based violence (DGV) constitutes a growing problem in an increasingly digitalized society, where virtual environments are an extension of physical reality. This violence manifests itself in harassment, hate speech, threats, and other forms of aggression against women through technological means.
Colombia faces challenges similar to those of countries like Chile: a lack of systematized data on DGV cases and the absence of public policies to measure, prevent, and respond to these situations. The initiatives that do exist are usually driven by feminist collectives and civil-society organizations, but they are not sufficient to address the magnitude of the problem.
Artificial Intelligence can play a key role in the fight against DGV, especially through natural language processing (NLP). These technologies can analyze large amounts of unstructured data, systematize reports, and provide initial guidance to victims. For example, a chatbot designed with feminist principles could:
Collect and organize reports securely.
Offer initial guidance on available legal and psychological resources.
Generate databases to identify patterns, aggressor characteristics, and trends in social media violence.
However, the use of Artificial Intelligence raises ethical and social challenges related to privacy, data extractivism, and the delegation of critical decisions to machines. For this reason, any development in this area must integrate feminist principles and an ethical approach that prioritizes the well-being and safety of the affected women.
Colombian context
Given the Colombian context, where gender inequalities intersect with problems such as armed violence, unequal access to technology, and educational gaps, a project of this nature should be adapted to the country's specific needs. Some key actions would be:
Identification of local and international practices, analyzing other countries' experiences and adapting them to Colombia's cultural, social, and legal realities.
Inclusive design, incorporating the voices of Colombian women who have suffered DGV and of local organizations to guarantee a representative approach.
A territorial approach, recognizing differences in access to and use of technology between urban and rural areas, as well as the specific dynamics of violence in each context.
Inter-institutional collaboration, integrating the chatbot's development with efforts by civil society, state entities, and international bodies working on victim support.
This would not only make it possible to respond to digital gender-based violence but could also help make visible and combat the structural inequalities that perpetuate this problem in Colombia.
Yet smiling is a social behavior intended for communication: it is determined by, and better predicted by, the social context than by inner emotions.
A very interesting claim. On some occasions the smile would be more of a social convention than a simple expression of our emotions. It makes a lot of sense once you think about it for a couple of minutes and observe our own behavior.
Astronomy was born of superstition; eloquence of ambition, hatred, falsehood, and flattery; geometry of avarice; physics of an idle curiosity; and even moral philosophy of human pride. Thus the arts and sciences owe their birth to our vices; we should be less doubtful of their advantages, if they had sprung from our virtues.
Rousseau critiques the origins of the arts and sciences, arguing that they stem from human vices rather than virtues. By suggesting that knowledge arises from flawed motives, he challenges the Enlightenment thought that progress through reason inherently benefits humanity. This perspective aligns with his broader critique of society's moral decline due to its obsession with superficial achievements. The passage also implicitly critiques the Ancien Regime, where the aristocracy and intellectual elite prioritized displays of wealth and power over moral governance.
It is true that in France Socrates would not have drunk the hemlock, but he would have drunk of a potion infinitely more bitter, of insult, mockery, and contempt a hundred times worse than death. Thus it is that luxury, profligacy, and slavery have been, in all ages, the scourge of the efforts of our pride to emerge from that happy state of ignorance, in which the wisdom of providence had placed us.
Marginal note: Rousseau again takes a different path from other philosophes. He argues that rationality does not offer access to all truth, nor ought it. Rather, nature protects men from themselves by hiding her secrets, by making knowledge difficult to acquire.
Here, Rousseau criticizes the Ancien Régime by suggesting that reformers are suppressed not through physical punishment but rather through ridicule and rejection. This aligns with his belief that luxury and corruption are barriers to the pursuit of truth and virtue. In the case of the French Revolution, this truth is meaningful reform.
About ICILS, TIMSS and PISA
It was suggested that this chapter be reorganized so that each study is a sub-section and the chapter closes by comparing their strengths and weaknesses.
Measures of digital self-efficacy
In the table, some items appear in Spanish and others in English.
Addgene
DOI: 10.1186/s12933-025-02586-y
Resource: Addgene (RRID:SCR_002037)
Curator: @olekpark
SciCrunch record: RRID:SCR_002037
26969
DOI: 10.1101/2025.01.09.632151
Resource: RRID:Addgene_26969
Curator: @olekpark
SciCrunch record: RRID:Addgene_26969
104492
DOI: 10.1101/2023.11.03.565534
Resource: RRID:Addgene_104492
Curator: @olekpark
SciCrunch record: RRID:Addgene_104492
116485
DOI: 10.1038/s41588-024-02015-y
Resource: None
Curator: @olekpark
SciCrunch record: RRID:Addgene_116485
98291
DOI: 10.1038/s41588-024-02015-y
Resource: RRID:Addgene_98291
Curator: @olekpark
SciCrunch record: RRID:Addgene_98291
98290
DOI: 10.1038/s41588-024-02015-y
Resource: RRID:Addgene_98290
Curator: @olekpark
SciCrunch record: RRID:Addgene_98290
98293
DOI: 10.1038/s41588-024-02015-y
Resource: RRID:Addgene_98293
Curator: @olekpark
SciCrunch record: RRID:Addgene_98293
12259
DOI: 10.1038/s41467-024-55692-y
Resource: RRID:Addgene_12259
Curator: @olekpark
SciCrunch record: RRID:Addgene_12259
12260
DOI: 10.1038/s41467-024-55692-y
Resource: RRID:Addgene_12260
Curator: @olekpark
SciCrunch record: RRID:Addgene_12260
52961
DOI: 10.1038/s41467-024-55692-y
Resource: RRID:Addgene_52961
Curator: @olekpark
SciCrunch record: RRID:Addgene_52961
RRID:IMSR_JAX:000664
DOI: 10.1038/s41467-024-55692-y
Resource: RRID:IMSR_JAX:000664
Curator: @scibot
SciCrunch record: RRID:IMSR_JAX:000664
RRID:MMRRC_034296-JAX
DOI: 10.1038/s41467-024-55692-y
Resource: RRID:MMRRC_034296-JAX
Curator: @scibot
SciCrunch record: RRID:MMRRC_034296-JAX
196072
DOI: 10.1038/s41467-024-55684-y
Resource: None
Curator: @olekpark
SciCrunch record: RRID:Addgene_196072
196071
DOI: 10.1038/s41467-024-55684-y
Resource: RRID:Addgene_196071
Curator: @olekpark
SciCrunch record: RRID:Addgene_196071
13032
DOI: 10.1038/s41413-024-00384-y
Resource: RRID:Addgene_13032
Curator: @olekpark
SciCrunch record: RRID:Addgene_13032
44028
DOI: 10.1038/s41413-024-00384-y
Resource: RRID:Addgene_44028
Curator: @olekpark
SciCrunch record: RRID:Addgene_44028
plasmid_62
DOI: 10.1186/s13058-024-01954-y
Resource: None
Curator: @olekpark
SciCrunch record: RRID:Addgene_62988
Here is a detailed briefing document covering the main themes, key ideas and relevant quotations from the sources you provided:
Briefing Document: Analysis of the Webinar on Crisis-Management Practices of School Principals in Québec
Introduction
This document aims to synthesize the highlights of the webinar entitled "Réfléchir aux pratiques de gestion de crise des directions d'établissement scolaire au Québec," presented by Olivier and Anne Michel, professors at the Université du Québec à Montréal (UQAM).
The webinar explored the results of a research project on crisis management in the Québec school context, in particular during the COVID-19 pandemic.
Research Context and Objectives
Process: the project took place in several phases:
* Phase 1: semi-structured interviews with school principals to identify the crisis-management practices used during the pandemic.
* Phase 2: an online questionnaire to assess the importance attached to these practices in pandemic and broader crisis contexts, and to identify principals' training needs.
Main Finding: A Sparsely Documented Research Field
Change management is a widely studied topic in education, but studies on crisis management remain rare, especially in the French-speaking world.
"…while change management is a widely documented area in the field of educational administration and policy … risk and crisis management as we define it remains relatively under-documented in the education sector."
The pandemic stimulated interest in this research field, but studies are still mostly conducted in the United States, a consequence of events such as the Columbine shooting.
Defining a Crisis
Characteristics: an unforeseeable, undue event involving danger, a threat and negative effects on the normal functioning of the organization.
Severity scale (Libère): Incident → Accident → Crisis → Catastrophe. A crisis sits between an accident and a catastrophe.
Risks: a crisis is a risk, but there are many internal and external risks in the school environment (violence, mental-health problems, natural disasters, epidemics, etc.).
Crisis Management
Results of Phase 1 (Qualitative Study)
Eight crisis-management principles identified:
Results of Phase 2 (Quantitative Study)
Discussion and Recommendations
Recommendations:
* Develop specific training (initial and continuing).
* Use case studies and crisis scenarios.
* Include concrete examples and crisis classifications.
* Draw on administration and crisis-management practices from other sectors.
* Better document the crises experienced by principals in order to build relevant training scenarios.
"The idea would eventually be to conduct interviews to describe, so that we can catalogue these different types of crises, assess them of course against various parameters, notably their level of severity for example, and arrive at a classification, and hence also a typology, of the crises that have been experienced."
Limitations of the Study
Next Steps
Conclusion
The webinar highlighted the importance of a structured, well-documented approach to crisis management in Québec schools.
The research conducted by Olivier and Anne Michel reveals a pressing need for training among school administrators and underscores the value of drawing on proven practices from other sectors, while taking the specific features of the school context into account.
The next phases of the research should provide further insights and help improve principals' preparedness for crises.
A university symposium in La Réunion, focused on the theme "Interroger les marges en éducation et en formation" (questioning the margins in education and training).
The symposium, the fourth edition of the Journées de la recherche en éducation, brings together researchers and practitioners from various overseas and mainland territories to explore the diversity of educational contexts and inequalities.
The speakers stress the importance of considering local specificities (plurilingualism, socio-economic context), of moving beyond a centralized vision of education, and of valuing the experiences of overseas territories that are often perceived as marginal.
The aim is to foster collaborative and interdisciplinary exchanges to improve equity and inclusion in the education system.
Here is a summary of the main points addressed in the symposium's opening speech, based on the transcript provided:
In short, this symposium aims to explore the margins in education, drawing on the experiences and contexts of the overseas territories, in particular La Réunion.
It promotes collaboration, innovation and a critical approach to educational practices in order to advance equal opportunity for all.
Here is a detailed synthesis document based on the excerpts from the video transcript you provided:
Synthesis Document: Symposium "Interroger les marges en éducation et en formation"
Date and Location: 27 January 2025, Université de La Réunion (INSPE de La Réunion)
Central Theme: exploring the margins in education and training, considering the diversity, the specificities, and the places, spaces and social groups perceived as lying at the margin of a certain centrality.
Introduction et Remerciements (0:15-1:25)
L'administratrice provisoire de l'INSPE de la Réunion ouvre le colloque en remerciant les nombreux partenaires : * l'observatoire des sociétés de l'océan Indien (OSOI), * le laboratoire de recherche ICAR et D.I.R, * les membres du réseau R.L.A.S.S., * ainsi que les comités d'organisation et scientifique.
Elle souligne l'implication intense des co-organisateurs, Séverine Ferrié et Pierre-Éric Fagol. Genèse du Colloque et du Réseau R.L.A.S.S. (1:25-1:55)
Le réseau de recherche interdisciplinaire sur les interactions entre culture, langue et apprentissage scolaire (R.L.A.S.S.) a été créé en 2022 pour pallier l'isolement des chercheurs dans les territoires ultramarins.
L'objectif est d'encourager la recherche transdisciplinaire et collaborative, ainsi que de revaloriser les pratiques éducatives tenant compte des contextes locaux.
Ce colloque s'inscrit dans la continuité des colloques des journées de recherche en éducation organisées en Polynésie française en 2018, 2021 et 2022.
Participants et Diversité Géographique (2:21-3:08)
Le colloque rassemble des chercheurs et praticiens reconnus internationalement, provenant de divers territoires ultramarins (Antilles, Guyane, Mayotte, Nouvelle-Calédonie, Polynésie française, Réunion) et des pays voisins de l'océan Indien (Maurice, Madagascar).
Des chercheurs venus de l'autre côté de la mer (France métropolitaine, Belgique, Canada, Finlande, Maroc, Tunisie) sont également présents, soulignant la portée internationale de l'événement.
Objectifs et Thématiques (3:13-4:20)
Le colloque vise à "interroger les marges en éducation" (3:18), en considérant les diverses populations et lieux considérés comme marginaux.
Le premier symposium se concentrera sur la contextualisation et l'adaptation des programmes.
Les discussions aborderont des thématiques variées, telles que : commandes institutionnelles, ressources pour enseigner, représentations, pratiques, relations famille-école, et méthodologie de la recherche.
L'administratrice souligne son expérience personnelle dans les territoires ultramarins, la rendant particulièrement sensible aux spécificités de ces territoires.
Le colloque est l'occasion de réfléchir à la pluralité des situations éducatives mais aussi aux spécificités communes à ces territoires.
Intervention du Recteur de la Région Académique (5:13-16:03)
Le recteur souligne l'importance de l'INSPE pour la stratégie académique et se félicite du "nouveau départ" de l'Université de La Réunion (5:31-5:52).
Il met en avant que l'université de La Réunion est la première université ultramarine en nombre d'élèves et d'enseignants-chercheurs et fait partie des 15 universités françaises qui ne sont pas en déficit (6:05-6:24).
Il exprime son intérêt pour le sujet des marges en éducation, soulignant que la notion de marge est relative ("on est toujours le marginal de quelqu'un", 6:34). Il prend l'exemple de Paris qui peut être vue comme étant à la marge de l'océan indien (6:40).
Il affirme que les centres de décision en France sont encore très centralisés (7:08-7:15).
Il met en lumière l'importance de cet événement dans le paysage scientifique des outremers (7:21). Il souligne que "la plupart du temps les prix Nobel se situent aux marges et aux complémentarités entre les disciplines" (8:17-8:34).
Pour lui, interroger les marges, c'est questionner notre manière d'aborder les espaces et les groupes sociaux (9:11).
Il rappelle la richesse de la France grâce à ses territoires ultramarins, tant en termes de présence dans le monde que de diversité culturelle.
"La richesse de la France c'est pas seulement d'avoir le deuxième domaine maritime mondial grâce à ses outremers, c'est aussi d'avoir une présence à peu près partout dans le monde" (9:37-9:49)
Le recteur insiste sur la nécessité de briser l'isolement des territoires ultramarins dans la recherche éducative (10:08-10:19) et de rompre avec une vision misérabiliste de ces territoires (10:30-10:35).
Il dresse un portrait de la Réunion comme un territoire unique, avec des défis sociaux importants, notamment la pauvreté (11:00) :
"1/4 de mes élèves vit dans sa famille en dessous du seuil de pauvreté" (11:00), le chômage (11:19) et les familles monoparentales (11:51).
Il souligne le fort taux d'établissements en zone prioritaire et une surreprésentation de la voie professionnelle (12:21-12:27).
Il aborde les problématiques telles que les violences intrafamiliales et les grossesses précoces (12:51).
Le Recteur souligne l'augmentation des élèves venant de Mayotte et l'importance de les scolariser sans discrimination, malgré des niveaux scolaires parfois plus bas.
"Notre seul et unique sujet de préoccupation, c'est de scolariser tous les enfants sans leur demander d'où ils viennent" (15:43-15:56).
Il plaide pour la mise en commun des forces et des bonnes pratiques de chaque territoire pour dépasser les particularismes et créer un espace de confiance où la recherche en éducation est un levier de transformation sociale (16:03-16:41).
Il explique une logique d'équité qui est de "donner plus à ceux qui ont moins" (17:07-17:19).
Il insiste sur l'importance d'adapter les pratiques pédagogiques, notamment en tenant compte du plurilinguisme, le créole étant parlé par 80% de la population.
"Faire en sorte que les élèves maîtrisent très bien à l'écrit et à l'oral à la fois le créole et le français c'est ça qui permet ensuite de prendre son envol et de maîtriser plus facilement d'autres langues" (18:15-18:36)
Il encourage à valoriser la richesse culturelle des territoires ultramarins au lieu de les considérer comme des obstacles (18:54-19:06).
Il souligne que les marges peuvent être une opportunité pour repenser nos modèles éducatifs (20:02-20:09), et appelle à renforcer la coopération entre les acteurs de l'éducation (20:22-20:29).
Il met en avant une approche collaborative et interdisciplinaire (21:17-21:23)
Pour le Recteur, la marge n'est pas un lieu de déficit mais une source d'innovation et de créativité (21:52-22:03).
Il conclut en rappelant l'objectif commun de l'égalité des chances (22:52-23:04) et que l'Etat va jusqu'au "dernier mètre" pour accompagner tous les élèves (23:53-24:05)
Intervention du Président du Conseil Académique (24:51-36:02)
Le président du conseil académique exprime sa joie d'accueillir les participants (25:04-25:11), notamment en tant que géographe car les questions de marge, de frontières disciplinaires et géographiques lui parlent. (25:11-25:24)
Il remercie les organisateurs, les collègues venant de loin et les équipes administratives (25:50-27:31).
Il souligne l'importance du colloque, qui dépasse les frontières géographiques et permet une réflexion sur des thèmes comme l'égalité des chances et le bilinguisme (27:38-28:32).
"Ce bilinguisme apaisé c'est une porte ouverte vers le plurilinguisme" (28:18-28:24).
Il rappelle que les territoires ultramarins sont au cœur de la stratégie française indo-pacifique et que les enjeux indo-pacifiques seront au premier rang des enjeux géopolitiques du 21e siècle (28:45-29:27).
Il souligne que l'innovation naît aux interfaces disciplinaires (29:33-29:52) et que la complexité des sciences humaines et sociales (SHS) est immense (30:06-30:42).
Il note que l'État n'a pas suffisamment accompagné les SHS, en comparaison aux sciences dures (30:51-31:20).
Il parle du groupe de travail sur la recherche dans les outre-mers, mis en place suite à une réunion interministérielle (31:38-32:16).
Il souligne que les grands organismes doivent travailler ensemble et que les universités ultramarines doivent être légitimes à assurer le chef de file de certains projets (33:00-33:19).
Il affirme qu'il y a un nouvel intérêt pour la recherche en outre-mer et qu'il ne faut pas être "un angle mort" de la recherche et de la stratégie nationale (34:04-35:25)
Il conclut en soulignant que les situations éducatives sont plurielles, mais qu'il existe aussi des dénominateurs communs et que le colloque est une occasion de croiser les regards et faire émerger des innovations (35:25-35:55).
Intervention de la Directrice du Laboratoire D.I.R (36:02-42:41)
La directrice présente le centre de recherche D.I.R. (Déplacement, Identité, Regard, Ecriture) (36:09-36:35)
Elle compare le déplacement du colloque vers la périphérie à une exploration poétique des marges (36:35-37:20).
Elle note que le fait de venir de la marge procure une grande richesse (37:20-37:42).
Elle se réjouit du développement des recherches sur l'éducation au sein de l'enseignement supérieur et de la jonction entre les différentes problématiques sociales de l'enseignement primaire, secondaire et supérieur.
Elle rappelle que l'axe 2 du centre D.I.R (identité en contexte pluriel) est central (39:18-39:39).
Elle insiste sur le caractère international de la provenance des intervenants au colloque (40:00-40:14).
Elle souligne que la périphérie peut jouer un rôle de modélisation pour les centres et que la problématique du colloque s'intègre dans la stratégie régionale (40:40-42:02).
Elle conclut en souhaitant des travaux fructueux pour les jours à venir (42:23-42:36)
Intervention de la Directrice Adjointe du Laboratoire ICAR (42:45-46:20)
La directrice adjointe du laboratoire ICAR (Institut Coopératif Austral de Recherche en Éducation) introduit le laboratoire (42:45-44:02).
Elle se réjouit que le colloque aborde les marges comme une "déviance vis à vis de standards de norme et de référence et d'autre part au changement social" (44:46-45:05)
Elle souligne que le laboratoire ICAR travaille sur la prise en compte des différences et sur l'exploration des inégalités (45:05-45:24).
Elle explique que ce travail nécessite de s'interroger sur le rapport aux normes et de viser un changement social vers plus d'inclusion (45:32-45:43)
Elle remercie les organisateurs du colloque et souhaite de belles journées de recherche (45:56-46:13).
Intervention de la Co-organisatrice (46:20-49:51)
La co-organisatrice du colloque remercie les participants et les partenaires (46:20-46:58).
Elle souligne que les journées n'auraient pas été possibles sans le soutien des laboratoires, des institutions universitaires et des formateurs. (47:03-47:35)
Elle met l'accent sur la valorisation des pratiques éducatives tenant compte des contextes locaux et la diversité des territoires (48:04-48:28).
Elle précise que les marges se trouvent au centre de leurs préoccupations (48:28-48:35).
Elle explique que le décentrement du regard permet d'interroger les normes et d'avoir un regard critique sur les questions d'éducation (48:35-49:08).
Elle conclut en souhaitant des échanges fructueux et un renforcement des dynamiques de travail (49:08-49:51).
Conclusion
Ce colloque représente une étape importante pour la recherche en éducation dans les territoires ultramarins.
Il met en lumière la nécessité de considérer les marges non pas comme des lieux de déficit, mais comme des sources d'innovation et de créativité.
Il souligne également l'importance de la coopération et du dialogue entre tous les acteurs de l'éducation pour construire un système éducatif plus inclusif et plus juste.
Il y a 2 traitements sur cette photographie. Traitement 1 : Rocéphine (Ceftriaxone). Traitement 2 : Paracétamol - sirop - 3 mL (pipette graduée) - 3 fois par jour.
cf commentaire précédent
Il y a 1 traitement sur cette photographie.
Il y a plus de 1 traitement ici
e et de subdivision du texte y sont distincts, tandis que dans d’autres modélisations d’un document, comme en HTML, l’élément paragraphe <p> contient à la fois le texte et son niveau hiérarchique.
il faudrait peut-être parler de DeRose et de l'Ordered Hierarchy of Content Objects (OHCO), https://doi.org/10.1007/BF02941632
car le XML de Word n'est pas un vrai XML...
Le cabinet d'avocats Clerc propose des services dans plusieurs domaines liés à l'éducation. Voici un résumé des éléments clés en lien avec l'éducation, basés sur les sources fournies:
En résumé, le cabinet Clerc offre un large éventail de services juridiques liés à l'éducation, allant de la scolarité des élèves aux questions disciplinaires, en passant par les examens, le personnel enseignant et les établissements d'enseignement.
Risks, Limitations and Opportunities
Corporalidades y Perspectiva Feminista
El proyecto AymurAI aborda la justicia con un enfoque feminista, reconociendo las corporalidades de las víctimas de violencia de género (VBG) y la necesidad de anonimizar datos sensibles para protegerlas. En Colombia, donde las desigualdades socioeconómicas y regionales afectan el acceso a la justicia, esta herramienta podría garantizar que los datos judiciales reflejen estas realidades y promuevan soluciones inclusivas y éticas.
Traducción
Dada la diversidad lingüística y cultural en Colombia, AymurAI podría incluir capacidades de traducción y procesamiento de datos multilingües. Esto sería clave para trabajar con lenguas indígenas, dialectos locales (culturemas) y documentos en formatos mixtos (digital y analógico), permitiendo que las decisiones judiciales sean analizadas y publicadas en contextos específicos de cada región.
Inteligencia Artificial para Mitigar Sesgos
Uno de los riesgos identificados es el sesgo en los datos judiciales. En Colombia, donde las jurisdicciones judiciales y los sistemas híbridos (analógico y digital) presentan variabilidad, AymurAI debería adaptarse para identificar y señalar estos sesgos, proporcionando contexto sobre las limitaciones de los datos. Por ejemplo, en zonas rurales con poca densidad poblacional, la anonimización adicional sería crucial para proteger la identidad de las personas.
Estrategias de Implementación
El proyecto plantea un enfoque gradual y adaptativo, priorizando casos piloto en tribunales específicos, con miras a extenderse a otras jurisdicciones. En Colombia, esto podría significar iniciar con tribunales especializados en violencia de género y, progresivamente, incorporar otras ramas judiciales. Las herramientas desarrolladas deben ser maleables para adaptarse a diversas necesidades locales, manteniendo una base tecnológica abierta y colaborativa.
Innovación Social
Transformar prácticas judiciales al crear oficinas especializadas en tribunales para gestionar datos legales con Inteligencia Artificial, valorando la experiencia humana y utilizando la Inteligencia Artificial como una herramienta complementaria.
Fomentar la justicia abierta para facilitar bases de datos públicas accesibles y contextualizadas, útiles para la elaboración de políticas públicas, activismo y generación de conciencia ciudadana.
Mejorar la experiencia del usuario al incorporar visualizaciones interactivas para comunicar datos de forma comprensible y centrarse en aspectos menos visibles de la justicia.
Retos en Colombia
Infraestructura desigual y limitada en zonas rurales.
Resistencia al cambio en sistemas judiciales tradicionales.
Necesidad de entrenamiento para operadores judiciales en el uso de tecnodiversidades como la Inteligencia Artificial.
Oportunidades
Construcción de una justicia más inclusiva y centrada en las personas.
Desarrollo de tecnología que respete las diversidades culturales y lingüísticas del país.
Promoción de la transparencia y la participación ciudadana en los sistemas de justicia.
AymurAI podría ser un catalizador para modernizar y feminizar la justicia en Colombia, integrando traducción, corporalidades y enfoques de Inteligencia Artificial éticos.
Su implementación fortalecería la protección de las víctimas, mejoraría la calidad de los datos judiciales y abriría nuevas oportunidades para construir una justicia más accesible, equitativa y adaptada a las necesidades locales.
Method and Plan
El proyecto AymurAI se basa en principios de ciencia de datos feminista, integrando teorías como Data Feminism (D’Ignazio y Klein, 2020) y guías sobre aprendizaje automático centrado en las personas (Chancellor, 2018). Su enfoque se orienta al usuario y prioriza la colaboración con personal judicial y organizaciones feministas locales, con una constante evaluación ética durante el diseño y desarrollo. Estas características hacen que AymurAI sea adaptable al contexto colombiano, donde la justicia enfrenta desafíos en violencia de género (VBG), desigualdades tecnológicas y diversidad cultural.
Corporalidades
En Colombia, las corporalidades de las víctimas de violencia de género requieren protección especial dentro de procesos judiciales. AymurAI puede garantizar la anonimización de datos sensibles, permitiendo analizar patrones sin comprometer identidades. Esto fortalecería sistemas judiciales como las comisarías de familia y las fiscalías, promoviendo un tratamiento ético y equitativo de las víctimas.
Traducción
El enfoque del proyecto es altamente adaptable al multilingüismo colombiano, considerando lenguas indígenas y variaciones dialectales en diferentes regiones. La herramienta necesitaría incorporar modelos que respeten y trabajen con la diversidad cultural, facilitando la traducción y análisis de documentos en diferentes idiomas locales.
Inteligencia Artificial Centrada en las Personas
AymurAI combina expresiones regulares con técnicas de aprendizaje automático para el procesamiento de lenguaje natural (NLP), con el fin de extraer información estructurada de documentos legales. En el contexto colombiano, esto podría aplicarse para construir bases de datos abiertas que detallen los casos de violencia de género, ayudando a generar políticas públicas informadas. La colaboración con expertos locales y organizaciones feministas garantizaría que los resultados reflejen las necesidades y realidades específicas del país.
Cronograma y Viabilidad
Un cronograma exploratorio, iterativo y de seis meses permitiría:
Primera fase: Definición de requisitos, etiquetado manual de datos judiciales y desarrollo inicial de modelos basados en expresiones regulares y aprendizaje automático.
Segunda fase: Evaluación de métricas (como precisión y sesgo) y construcción del prototipo.
Tercera fase: Pruebas de usabilidad, refinamientos y pilotaje en un tribunal (equivalente a la implementación inicial en el Tribunal Penal 10 de CABA).
Ventajas y Retos en Colombia
Ventajas:
Protección y anonimización de datos sensibles.
Automatización de tareas administrativas judiciales.
Creación de bases de datos accesibles y abiertas, fomentando la transparencia.
Retos:
Infraestructura desigual, especialmente en regiones rurales.
Capacitación en tecnología para operadores judiciales.
Manejo de sesgos y diversidad cultural en los modelos de Inteligencia Artificial.
AymurAI tiene un alto potencial para contribuir a una justicia más ética, transparente y centrada en las personas en Colombia.
La adaptación al contexto local, mediante la inclusión de traducción, protección de corporalidades y enfoque en Inteligencia Artificial ética, puede transformar significativamente la gestión de datos en casos de violencia de género.
Proposed prototype
AymurAI, cuyo nombre se inspira en el término quechua aymuray (tiempo de cosecha), propone un prototipo de Inteligencia Artificial para automatizar parcialmente la publicación y el mantenimiento de datos abiertos en casos de violencia de género (VBG). Aunque fue diseñado originalmente para los tribunales penales de CABA y de México, su enfoque podría adaptarse al contexto colombiano, considerando los desafíos específicos de la justicia en este país, como las disparidades en infraestructura tecnológica, la necesidad de un enfoque sensible al género y las dinámicas socioculturales complejas.
Contexto Colombiano
Corporalidades
La justicia colombiana enfrenta retos particulares en la protección de las corporalidades de las víctimas de VBG. AymurAI podría ser una herramienta clave para garantizar que los datos sensibles sean anonimizados, protegiendo la identidad y contexto de las víctimas, mientras se recopilan datos estructurados sobre los casos para análisis y políticas públicas. Este enfoque fortalecería iniciativas locales como las comisarías de familia, las fiscalías y las líneas de atención a víctimas.
Traducción y Localización Cultural
Dado el multilingüismo y las diferencias culturales en Colombia con las lenguas indígenas y contextos rurales, sería crucial adaptar AymurAI para interpretar documentos en diversos idiomas locales, manteniendo su sensibilidad hacia las especificidades culturales. Además, los formatos comunes en los procesos judiciales colombianos (e.g., actas en Word o PDF) deberían integrarse al sistema para asegurar compatibilidad.
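As a hedged illustration of that compatibility requirement, the sketch below shows one way plain text could be pulled out of the .docx and .pdf rulings mentioned above before any anonymisation step. The helper name and file-handling logic are assumptions for illustration, not AymurAI's actual code; it relies on the python-docx and pdfminer.six packages.

# Illustrative sketch only (not AymurAI code): read the text of a ruling that
# arrives either as a Word document or as a PDF, so a single downstream
# anonymisation pipeline can handle both formats.
from docx import Document
from pdfminer.high_level import extract_text


def read_acta(path: str) -> str:
    """Return the plain text of a ruling stored as .docx or .pdf."""
    lower = path.lower()
    if lower.endswith(".docx"):
        return "\n".join(p.text for p in Document(path).paragraphs)
    if lower.endswith(".pdf"):
        return extract_text(path)
    raise ValueError(f"Unsupported file format: {path}")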
Inteligencia Artificial y Justicia
AymurAI aprovecharía técnicas como el reconocimiento de entidades nombradas (NER) y expresiones regulares para automatizar la extracción de datos relevantes de documentos legales. Este modelo puede capacitarse con datos de fallos judiciales colombianos, como los producidos por los juzgados especializados en VBG, para identificar patrones específicos en contextos nacionales.
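To make the combination of regular expressions and named-entity recognition concrete, here is a minimal Python sketch assuming spaCy's publicly available Spanish model (es_core_news_sm). The regex patterns, entity labels and placeholder scheme are illustrative assumptions, not the project's actual rules.

# Minimal sketch: flag sensitive spans in a Spanish ruling with regex rules
# plus spaCy NER, then replace them with placeholders.
# Requires spaCy and: python -m spacy download es_core_news_sm
import re
import spacy

nlp = spacy.load("es_core_news_sm")

# Illustrative regex rules: case numbers and long-form Spanish dates (assumed formats).
CASE_NUMBER = re.compile(r"\b(?:expediente|causa)\s+n[°º.]?\s*[\w./-]+", re.IGNORECASE)
LONG_DATE = re.compile(r"\b\d{1,2}\s+de\s+\w+\s+de\s+\d{4}\b", re.IGNORECASE)


def sensitive_spans(text: str) -> list[tuple[str, str]]:
    """Collect (label, span) pairs found by the regex rules and by NER."""
    spans = [("EXPEDIENTE", m.group()) for m in CASE_NUMBER.finditer(text)]
    spans += [("FECHA", m.group()) for m in LONG_DATE.finditer(text)]
    doc = nlp(text)
    spans += [(ent.label_, ent.text) for ent in doc.ents if ent.label_ in {"PER", "LOC"}]
    return spans


def anonymise(text: str) -> str:
    """Replace every detected span with a bracketed placeholder."""
    for label, span in sensitive_spans(text):
        text = text.replace(span, f"<{label}>")
    return text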
Análisis y transparencia de datos: El sistema podría ayudar a construir una base de datos abierta sobre casos de VBG en Colombia, promoviendo transparencia y permitiendo el análisis de tendencias que fortalezcan políticas públicas.
Reducción de carga administrativa: AymurAI permitiría a los funcionarios judiciales automatizar tareas repetitivas como la anonimización de datos, mejorando la eficiencia del sistema judicial.
Accesibilidad y equidad: Una interfaz sencilla aseguraría que incluso empleados judiciales sin conocimientos técnicos puedan operar el sistema, mejorando la inclusión en diferentes regiones del país.
Retos en el Contexto Colombiano
Infraestructura desigual: La conectividad limitada en áreas rurales podría ser un obstáculo; por ello, un sistema que funcione offline sería esencial.
Protección de datos: Garantizar la seguridad y confidencialidad de la información judicial es crítico, especialmente en casos sensibles de VBG.
Capacitación: Involucrar a los operadores judiciales en el uso de AymurAI, con énfasis en justicia de género y herramientas tecnológicas, será fundamental para su adopción efectiva.
AymurAI podría ser una herramienta transformadora para el sistema judicial colombiano, combinando Inteligencia Artificial, sensibilidad cultural y un enfoque en la protección de las víctimas para avanzar hacia una justicia más eficiente, inclusiva y transparente.
the current situation of justice data on GBV in Argentina and Mexico
Frente al tema de corporalidades, en ambos países, los datos judiciales reflejan cómo las violencias de género afectan los cuerpos y vidas de las personas involucradas, especialmente mujeres y poblaciones vulnerabilizadas. Sin embargo, la falta de estandarización y transparencia limita la capacidad de analizar estas experiencias de manera integral. Los casos incluyen detalles sensibles como el tipo de violencia sufrida y los contextos socioeconómicos, enfatizando la importancia de las corporalidades en el diseño de políticas públicas basadas en evidencia.
En cuanto a traducción, el proceso de convertir sentencias legales en datos estructurados involucra traducciones significativas, tanto desde el lenguaje natural de los documentos hacia categorías estandarizadas, como desde los sistemas judiciales hacia bases de datos públicas. Herramientas como “IA2” en Argentina y “Mis Aplicaciones” en México permiten anonimizar y adaptar sentencias para su publicación, aunque la traducción de estos datos al dominio público sigue siendo manual y limitada por los criterios subjetivos de los operadores judiciales.
La Inteligencia Artificial juega un papel clave en la anonimización y estructuración de datos judiciales, pero enfrenta limitaciones. En Argentina, herramientas como IA2 automatizan parte del proceso, pero el trabajo manual sigue siendo necesario para agregar contexto y garantizar precisión. En México, el uso de Inteligencia Artificial está restringido a eliminar datos personales y depende de las decisiones de los jueces sobre qué información es de interés público. Estas implementaciones reflejan un potencial subutilizado de la Inteligencia Artificial para apoyar un análisis más amplio y sistemático de los casos de violencia de género (GBV por su sigla en inglés).
Faced with the lack of official statistics in Latin America, individual women and women's organisations made the decision, in recent years, to keep a record of feminicides published in digital and printed media, with the goals of giving visibility to the problem of GBV in their country and of sensitising society and public officials about these occurrences.
La ausencia de estadísticas oficiales sobre violencia de género (GBV por su sigla en inglés) en América Latina ha llevado a mujeres y organizaciones a registrar feminicidios mediante el análisis de medios impresos y digitales. Estos esfuerzos, como los informes de “La Casa del Encuentro” en Argentina y el mapa interactivo de feminicidios de María Salguero en México, no solo dan visibilidad a las víctimas, sino que también sensibilizan a la sociedad y a las autoridades públicas sobre la gravedad del problema.
Desde el punto de vista de las corporalidades, los registros de feminicidios resaltan las historias individuales de las víctimas, mostrando su identidad, contexto y las circunstancias específicas de su muerte. Esto humaniza las estadísticas y visibiliza cómo las violencias machistas afectan de manera particular a los cuerpos de mujeres y personas diversas en diferentes esferas, incluyendo lo doméstico, laboral e institucional.
En cuanto a traducción, la incorporación de herramientas tecnológicas, como los plugins de navegador y sistemas de alerta por correo, automatiza la recopilación de datos a partir de fuentes mediáticas. Estas herramientas permiten capturar y traducir información de textos periodísticos a bases de datos estructuradas, facilitando el análisis y la comunicación de los casos a nivel local e internacional.
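A toy sketch of that media-monitoring idea is given below: it collects matching headlines into a structured table. The RSS feed URL, keyword list and column names are placeholders; the real initiatives described above rely on their own browser-plugin and email-alert tooling.

# Toy sketch only: collect news items whose headline mentions femicide-related
# keywords into a structured table. Requires the feedparser and pandas packages.
import feedparser
import pandas as pd

FEED_URL = "https://example.com/noticias/rss"  # placeholder, not a real source
KEYWORDS = ("femicidio", "feminicidio")


def collect_cases(feed_url: str = FEED_URL) -> pd.DataFrame:
    """Return one row per matching entry, with date, headline and link."""
    feed = feedparser.parse(feed_url)
    rows = [
        {"fecha": entry.get("published", ""), "titulo": entry.title, "enlace": entry.link}
        for entry in feed.entries
        if any(k in entry.title.lower() for k in KEYWORDS)
    ]
    return pd.DataFrame(rows, columns=["fecha", "titulo", "enlace"])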
En cuanto a Inteligencia Artificial, iniciativas como “Datos contra el feminicidio” integran aprendizaje automático (machine learning) para identificar y procesar información relevante sobre feminicidios. Estas tecnologías contribuyen a la sistematización de datos. Es esencial ampliar el enfoque para capturar todas las formas y modalidades de violencia de género. Esto permitirá diseñar políticas públicas más efectivas que aborden la prevención, sanción y erradicación de estas violencias, destacando la necesidad de un enfoque integral y situado en el contexto latinoamericano.
Our project seeks to effect change in the problem of GBV from a feminist, anti-technosolutionist perspective, which we expect to be transformative.
Los riesgos de sesgos, falta de transparencia y consecuencias perjudiciales en la Inteligencia Artificial han sido ampliamente documentados. Frente a esto, el proyecto propone un enfoque feminista y colaborativo, usando la Inteligencia Artificial como herramienta de apoyo, no como sustituto del conocimiento humano, para abordar la violencia de género (GBV por su sigla en inglés) y fomentar la justicia social.
Dentro de las corporalidades, se destaca la importancia de la participación humana, especialmente de expertos con conocimientos sobre desigualdades estructurales, para garantizar un diseño inclusivo y contextualizado. Esto se alinea con principios feministas que priorizan las intersecciones de género, raza y clase, y evita el uso de Inteligencia Artificial para vigilancia o control, optando por enfoques que respeten las diferencias corporales y contextos sociales.
En cuanto al tema de la traducción, el proyecto utiliza modelos de procesamiento de lenguaje natural (NLP) adaptados a contextos hispanohablantes, como BETO, un modelo BERT entrenado en español. Este enfoque permite estructurar información de documentos legales, asegurando que los datos se procesen en su idioma y contexto originales, evitando sesgos asociados con modelos entrenados en inglés.
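As a hedged sketch of what "a BERT model trained in Spanish" means in practice, the snippet below loads the publicly released BETO checkpoint (assumed to be dccuchile/bert-base-spanish-wwm-cased on the Hugging Face hub) and turns a sentence into an embedding. This is only a loading example, not the project's fine-tuned pipeline.

# Loading sketch for BETO. Requires the transformers and torch packages.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "dccuchile/bert-base-spanish-wwm-cased"  # BETO (Spanish BERT), assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)


def embed(sentence: str) -> torch.Tensor:
    """Mean-pool the last hidden state into a single sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)


print(embed("La denuncia describe hechos de violencia económica.").shape)  # torch.Size([768])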
En cuanto a la Inteligencia Artificial, el enfoque consiste en no automatizar decisiones judiciales ni predecir comportamientos, sino en colaborar con expertos para estructurar datos legales y fomentar transparencia. Se inspira en enfoques feministas que abordan dinámicas de poder en sistemas sociotécnicos, subrayando la importancia de datos de alta calidad para informar políticas públicas basadas en evidencia y justicia abierta.
The authors of this paper are four Latin American women that self-identify as intersectional feminists, based in the Global South (Argentina and Mexico) and in the Global North (Sweden), performing work and volunteer tasks in a variety of contexts (education, research, and NGOs — mostly DataGénero5).
Las autoras, feministas de América Latina y Suecia, hablan sobre las desigualdades sociales desde una óptica que combina raza, clase social y género, inspirándose en diversas corrientes del feminismo, incluyendo el transfeminismo, el feminismo negro, indígena y el feminismo contra el capacitismo.
En cuanto a las corporalidades sobre la base de una experiencia situada, las autoras subrayan que ninguna experiencia vital tiene mayor peso que otra, integrando las voces de mujeres y personas LGBTIQ+ desde diversas realidades. Reconocen la pluralidad del feminismo y buscan visibilizar las múltiples luchas dentro de los movimientos feministas, destacando el impacto del género en la vida cotidiana y los sistemas de poder.
Por el lado de la traducción de datos y la justicia abierta, las autoras se inspiran en el feminismo de datos, y proponen el uso de herramientas de Inteligencia Artificial para traducir datos judiciales relacionados con violencia de género en formatos abiertos y contextuales. Esto busca hacer visibles las resoluciones legales sin descontextualizarlas ni comprometer datos sensibles, contribuyendo a la formulación de políticas públicas basadas en evidencia.
En cuanto a la Inteligencia Artificial y el antisolucionismo, las autoras adoptan una postura crítica hacia la idea de que la Inteligencia Artificial puede “resolver” problemas sociales complejos como la violencia de género. En cambio, argumentan que la Inteligencia Artificial puede ser una herramienta para colaborar con actores humanos expertos en estos temas, ayudando a sistematizar datos de calidad. Rechazan la noción de que la Inteligencia Artificial pueda ser feminista por sí misma, pero promueven su uso por parte de feministas para avanzar en causas sociales.
La propuesta, que se desarrolla con organizaciones como DataGénero y el Criminal Court 10 de Buenos Aires, incluye el diseño y prueba de una Inteligencia Artificial en contextos judiciales. Este enfoque colaborativo, nutrido por alianzas con colectivos de desarrollo de software y procesamiento de lenguaje natural, busca integrar perspectivas interseccionales del Sur Global en la creación de tecnologías justas y éticas.
El prototipo propuesto se alinea con la Agenda 2030 de las Naciones Unidas para el Desarrollo Sostenible, especialmente con el ODS 16 (Paz, Justicia e Instituciones Sólidas), promoviendo sociedades justas, pacíficas e inclusivas. El principio de justicia abierta impulsa instituciones transparentes y responsables, garantiza el acceso a la información y protege las libertades fundamentales.
Dos metas clave del ODS 16 son especialmente relevantes: la Meta 3, que fomenta el acceso equitativo a la justicia y el Estado de derecho, y la Meta 7, que promueve la toma de decisiones inclusivas, participativas y representativas. La judicatura es esencial para cumplir estas metas, contribuyendo además a los ODS 5 (Igualdad de género y empoderamiento de las mujeres) y 10 (Reducción de desigualdades dentro y entre países).
harmful acts towards a person or a group of people based on their gender
El texto aborda la intersección entre las corporalidades, la traducción de datos judiciales y el uso de Inteligencia Artificial en casos de violencia de género en América Latina, y muestra la falta de transparencia y de datos accesibles sobre violencia de género contra mujeres y personas LGBTIQ+, lo que dificulta el acceso a la justicia y refuerza la desconfianza en los sistemas judiciales, especialmente en Argentina y México.
Las autoras proponen el desarrollo de AymurAI, un prototipo semiautomatizado que colabora con funcionarios judiciales para estructurar y anonimizar datos judiciales relacionados con la violencia de género antes de que los casos escalen a feminicidios. Este proyecto, desde una perspectiva feminista interseccional y antisolucionista, busca diseñar tecnologías de Inteligencia Artificial que no sustituyan decisiones humanas, sino que apoyen la comprensión y visibilización de los diferentes tipos de violencia de género, incluyendo formas menos visibles como la violencia psicológica o económica.
En cuanto a las corporalidades en el contexto social, la violencia de género afecta a mujeres, personas trans, no binarias y otras identidades de género, al manifestarse en dimensiones físicas, psicológicas, sexuales, económicas y políticas. La recopilación y apertura de datos judiciales sensibles permitiría identificar patrones de violencia, comprender las dinámicas de los sistemas judiciales y fomentar políticas públicas basadas en evidencia.
Con respecto a la Inteligencia Artificial y la traducción de datos, la propuesta de AymurAI incluye el uso de Inteligencia Artificial para automatizar parcialmente el procesamiento de grandes volúmenes de datos judiciales, lo que facilitaría la generación de conjuntos de datos anonimizados que, al ser revisados por expertos, contribuirían a la transparencia judicial, la colaboración intersectorial y el diseño de intervenciones más efectivas.
El proyecto busca desafiar la instrumentalización de la Inteligencia Artificial como solución única, centrándose en garantizar la seguridad de los datos sensibles y en crear herramientas éticas que empoderen a los movimientos feministas del Sur Global.
Document de Synthèse : La Santé Mentale des Jeunes en Europe
Source : Vidéo ARTE Europe l'Hebdo : "La santé mentale des jeunes en Europe" (https://www.youtube.com/watch?v=Zwl8BXb_kkU&rco=1)
Date de Diffusion : 24 janvier 2025
Introduction
Cette vidéo d'ARTE examine la crise de la santé mentale chez les jeunes en Europe, un problème exacerbé par la pandémie de COVID-19, mais dont les racines sont plus profondes. Elle met en lumière l'ampleur du problème, ses causes multiples et les défis d'accès aux soins, tout en explorant le rôle ambivalent des réseaux sociaux.
Thèmes Clés et Points Importants
L'Étendue du Problème : Une Crise de Santé Mentale chez les Jeunes
L'Organisation Mondiale de la Santé (OMS) estime que 150 millions d'Européens ont des problèmes de santé mentale, et les jeunes de moins de 30 ans sont particulièrement touchés.
Les "signaux d'alarme" sont au rouge : la santé mentale des jeunes s'est dégradée partout en Europe. La pandémie de COVID-19 a considérablement aggravé la situation. Un rapport de la Commission européenne et de l'OCDE révèle que le nombre de jeunes touchés par des symptômes dépressifs a doublé, voire triplé, dans plusieurs pays par rapport à 2019.
"D'après une étude de la clinique univ de Hambourg, aujourd'hui 5 ans après le début de la pandémie, deux jeunes sur 10 souffrent toujours de troubles psychiques en Allemagne." (2:00-2:06)
Les Causes Multiples de la Dégradation de la Santé Mentale
Facteurs de vulnérabilité individuels : Les difficultés de vie telles que la violence familiale, la précarité, l'incertitude quant à l'avenir, la difficulté à trouver un emploi et un logement stable jouent un rôle crucial.
C’est décrit comme l’image du vase qui se remplit plus vite (2:18).
L'impact de la pandémie : Les confinements, l'isolement social, l'enseignement à distance ont eu un effet négatif sur le moral des jeunes. Dylan, un étudiant français, témoigne :
"Il y a vraiment... de la déprime quoi et de beaucoup d'isolement." (2:56-3:11)
Facteurs structurels et mondiaux : La crise climatique, les conflits armés (Ukraine, Gaza), l'incertitude politique et la montée des populismes ont également un impact sur la santé mentale des jeunes.
"Les jeunes nous communiquent les signaux d'alarme de notre monde moderne. Il nous montre que notre société et ce monde sont en proie à de graves difficultés." (3:33-3:39)
Réseaux sociaux : La surabondance de fake news, d'images violentes, le cyberharcèlement et la comparaison sociale créent de l'isolement et ont un impact négatif sur l'estime de soi.
Défis d'Accès aux Soins et Stigmatisation
Un quart des Européens ont eu des difficultés à trouver de l'aide professionnelle pour leur santé mentale (enquête Eurobaromètre).
Les principaux obstacles sont les délais d'attente trop longs et les coûts élevés des traitements (6:01-6:22).
L'accès aux soins de santé mentale publics est insuffisant, forçant les personnes à se tourner vers le privé ou à renoncer aux soins (6:22-6:32).
La stigmatisation persiste : parler de ses problèmes de santé mentale peut être perçu comme une faiblesse, bien que ce tabou commence à être levé grâce à des personnalités publiques qui témoignent de leur expérience (6:34-6:51).
Le Rôle Ambivalent des Réseaux Sociaux
Aspects négatifs : Les réseaux sociaux sont une source de fake news, de contenus violents, de cyberharcèlement et contribuent à l'isolement (4:18-4:42).
Ils peuvent aussi alimenter des conduites à risques (troubles alimentaires, conduites suicidaires) (8:09-8:31).
"Sur TikTok ou Instagram par exemple, les adolescents sont massivement exposés aux fake news, aux images violentes ou encore au cyberharcèlement sans modération." (4:28-4:37)
Aspects positifs : Ils permettent aux jeunes de se tenir informés des sujets d'actualité, de s'informer sur la santé mentale et de partager leurs expériences. Le passage par l’écran peut être moins intimidant que les échanges directs (7:08-7:49).
Julie Rolling, pédopsychiatre, explique : "ça leur permet effectivement d'être très au fait de sujets d'actualité... et puis ça leur permet aussi en termes de santé mentale de se renseigner, d'être peut-être sensibilisé par rapport... à ces aspects-là" (7:08-7:28)
La question de l'interdiction des réseaux sociaux aux mineurs est soulevée :
L'Australie a déjà mis en place cette mesure et la France y réfléchit (7:49-8:01).
Réponses et Initiatives
La Commission européenne a adopté une nouvelle stratégie axée sur la prévention, l'éducation, l'accès à l'emploi, la culture et l'environnement (5:04-5:28).
Plus d'un milliard d'euros ont été débloqués pour financer des initiatives dans ce domaine (5:20-5:28).
En France, la santé mentale est une "grande cause nationale" (5:30-5:34).
Conclusion
La vidéo d'ARTE met en évidence une crise majeure de santé mentale chez les jeunes en Europe, un problème complexe avec des causes multiples allant des facteurs individuels aux enjeux mondiaux.
L'accès aux soins est un défi, et les réseaux sociaux représentent une arme à double tranchant.
La prise de conscience est essentielle et des efforts significatifs sont nécessaires pour améliorer la situation.
La vidéo encourage les jeunes à rechercher de l'aide et met en avant les ressources disponibles (lignes d'écoute, associations).
Citation Clé : "Pas besoin de chercher de bouc émissaire, il y a suffisamment de choses qui peuvent affecter notre santé mentale alors autant prendre le sujet au sérieux." (9:22-9:28)
vénements de coupe et de rupture, t
Probablement un détail mais je n'arrive pas à me représenter de quoi il s'agit : s'agit-il de nommer "à quel moment" il y a une coupure dans la vidéo??
RRID:AB_2861306
DOI: 10.1158/0008-5472.CAN-24-1509
Resource: None
Curator: @scibot
SciCrunch record: RRID:AB_2861306
After several hours navigating a series of bumpy roads in blazing equatorial heat, I was relieved to arrive at the edge of the reservation. He cut the motor and I removed my heavy backpack from my tired, swea
This is honestly commendable. These anthropologists (or student anthropologists) are quite literally placing themselves in these situations for the sake of fieldwork and learning more about these communities. I think this is important, as it removes former privileges and luxuries in order to respect the community you are researching.
Esta investigación sugiere que el aprendizaje en IVR conduce a un CL extraño más alto que el aprendizaje en medios menos inmersivos, y destaca la importancia de considerar el CL al diseñar herramientas de aprendizaje de IVR.
Justificación
Tabla 1.1 y Tabla 1.2
Referenciar con hyperlink.
Se crearon funciones para el procesamiento y limpieza de datos que permite realizar un análisis eficiente de los mismos.
Evita voz pasiva.
estudio, fechas y sectores. Una vez que se ha determinado riesgo entomológico en una localidad se analizan los tipos de criaderos y su proporción entre sí, para tomar las medidas pertinentes según sea el caso; para ello se utiliza la siguiente función:
Mover párrafo a la intro y descripción de esta sección.
Generando un dataframe a partir del archivo .csv que se obtuvo con la función clean_raw_data() {.unnumbered} se usa en las siguientes funciones que calculan los índices de riesgo entomológico filtrando por tiempo y lugar según la función. Cada función filtra los datos basándose en detalles del estudio y gestiona advertencias si se encuentra división por cero, asegurando que los datos sean válidos para evitar errores en los cálculos.
Corregir.
El desarrollo e implementación de este paquete de R le permite al Programa Estatal de Vigilancia Entomológica y Control Integral de Enfermedades Transmitidas por Vector la automatiza el cálculo de índices, reduciendo el tiempo de análisis y minimizando errores humanos, pasa de un proceso manual con hojas de cálculo a un flujo de trabajo automatizado y reproducible.
Revisar gramática y volver a redactar.
Paquete de R contiene funciones para carga y limpieza de datos, para calcular índices de estegomía bajo diferentes criterios de filtrado de tiempo y/o área, y para la generación de mapas estáticos e interactivos.
Volver a redactar.
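To make the excerpts above about the R package easier to follow, here is a minimal pandas sketch (illustrative Python, not the package's actual R code) of the same filter-then-compute pattern: load the cleaned CSV, filter by locality and date range, and compute a house (Stegomyia) index with an explicit division-by-zero guard. The column names are assumptions.

# Minimal sketch of the filter-then-compute pattern described in the excerpts
# above. Assumed columns: localidad, fecha, casas_positivas, casas_revisadas.
import warnings

import pandas as pd


def house_index(csv_path: str, localidad: str, inicio: str, fin: str) -> float:
    """House index = 100 * positive houses / inspected houses, for one locality and period."""
    df = pd.read_csv(csv_path, parse_dates=["fecha"])
    subset = df[(df["localidad"] == localidad) & df["fecha"].between(inicio, fin)]
    revisadas = subset["casas_revisadas"].sum()
    if revisadas == 0:
        warnings.warn("No inspected houses in the selected filter; returning NaN.")
        return float("nan")
    return 100 * subset["casas_positivas"].sum() / revisadas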
Join Our Monthly Newsletter [embedded sign-up form: First Name*, Last Name*, Email Address*, Company Name]
Labeled Forms: The forms on Eco Canada are well-labeled, which I think will allow screen readers to correctly identify input fields (name, email, company, etc) and guide users through the process. This is an essential feature that ensures everyone can complete forms without confusion and join the Eco Canada community.
International cooperation and an approach to ADM and machine learning grounded in human rights.
1. Revisión de derechos humanos en la tecnología:
Las agencias de la ONU deben revisar cómo se aplican las leyes y estándares internacionales de derechos humanos en la gestión de datos automatizados, el aprendizaje automático y su impacto en el género. Esto ayudaría a crear enfoques más creativos y adecuados para proteger derechos en esta era digital que avanza rápidamente.
2. Métricas para la inclusión digital:
Es urgente crear y medir indicadores globales de inclusión digital, desglosados por género. Estos datos deben incluirse en los informes anuales de instituciones como la ONU, el FMI, la Unión Internacional de Telecomunicaciones, el Banco Mundial y otros organismos internacionales, para promover igualdad en el acceso y uso de la tecnología.
Advocate for and adopt guidelines that establish accountability and transparency for algorithmic decision making (ADM) in both the public and private sectors.
Estas propuestas buscan garantizar que la Inteligencia Artificial no perpetúe sesgos sistémicos y, en cambio, promueva un acceso equitativo a derechos y oportunidades
Recomendaciones clave:
Transparencia y responsabilidad en la toma de decisiones algorítmicas (ADM por su sigla en inglés):
Instituir lineamientos para garantizar que los sistemas ADM sean responsables y transparentes, tanto en el sector público como privado.
Realizar pruebas rigurosas durante todo el ciclo de vida de los sistemas de IA para identificar y mitigar sesgos y daños potenciales, asegurando que mejoren la calidad de vida y no controlen la experiencia humana.
Crear marcos legales sólidos que supervisen y regulen la Inteligencia Artificial, promoviendo la rendición de cuentas.
Participación activa y diversa:
Garantizar la inclusión de mujeres y niñas, especialmente desde comunidades marginadas, en el diseño, creación y evaluación de ADM. Esto reconoce su experiencia y creatividad como una herramienta crucial para imaginar estructuras más inclusivas.
Fomentar el equilibrio de género en equipos de diseño y toma de decisiones, e incentivar la diversidad interdisciplinaria y feminista para detectar y corregir sesgos en los sistemas.
Datos inclusivos y desagregados:
Desarrollar y abrir conjuntos de datos desagregados por género, clase y raza que permitan comprender y corregir fuentes de sesgo en la Inteligencia Artificial.
Supervisar los procesos de recolección de datos con controles que garanticen que estos no sean obtenidos a expensas de las mujeres y otros grupos tradicionalmente excluidos.
Cooperación internacional y enfoque en derechos humanos:
Revisar las leyes internacionales de derechos humanos aplicadas a ADM para asegurar que sean adecuadas en la era digital y fomenten un enfoque inclusivo.
Establecer métricas globales de inclusión digital, informadas por datos desagregados por sexo, para medir avances en equidad tecnológica.
Impacto en Colombia
Estas acciones permiten que las corporalidades diversas participen activamente en la transformación tecnológica del país. Promueven no solo la igualdad de género, sino también la creación de Inteligencias Artificiales que reflejen las realidades locales y respeten las particularidades de cada comunidad, abriendo camino hacia un futuro inclusivo y sostenible.
we want diverse women proactively involved in all AI processes, shaping the technology that affects every part of our lives.
La exclusión tecnológica y los sesgos en la Inteligencia Artificial tienen un impacto directo en las corporalidades diversas en Colombia, especialmente entre mujeres y niñas en contextos rurales, urbanos y marginalizados.
La iniciativa aboga por incluir experiencias de vida y conocimientos de primera línea para informar el diseño de algoritmos y decisiones tecnológicas. Esto implica no solo abrir espacios en universidades y sectores públicos y privados, sino también asegurar que las corporalidades diversas sean representadas en las mesas donde se toman decisiones sobre intervenciones tecnológicas, incluidas las relacionadas con la Inteligencia Artificial.
Un modelo participativo que conecte a expertas feministas, científicas sociales, ingenieras, economistas y comunidades, creando un ecosistema inclusivo y multidisciplinario para corregir desigualdades de género, raza y clase.
Esto busca evitar que las Inteligencias Artificiales repliquen y amplifiquen los sesgos existentes, promoviendo en cambio algoritmos que reflejen y sirvan a las realidades locales.
En Colombia, donde las dinámicas de exclusión son especialmente complejas, esto es crucial para garantizar que la tecnología no solo sea inclusiva, sino que también respete y valore las corporalidades diversas. Implementar estas propuestas puede contribuir a soluciones más justas y representativas, fomentando un futuro colectivo donde las voces y experiencias de las mujeres sean motor de transformación.
Deploying Feminist AI
La Inteligencia Artificial tiene el potencial de ser una herramienta transformadora para abordar problemas globales como la desigualdad de género, la crisis climática y las injusticias económicas. Sin embargo, las narrativas actuales sobre la Inteligencia Artificial tienden a centrarse en los riesgos y exclusiones, lo que puede desalentar su uso como herramienta positiva para el cambio social.
Para el gremio de la traducción en Colombia, esta posibilidad presenta una oportunidad única: como intermediación lingüística clave en la transferencia cultural, la traducción puede desempeñar un papel fundamental en la democratización del acceso a la tecnología.
Es crucial promover la idea de que las tecnodiversidades, incluida la Inteligencia Artificial, deben ser inclusivas por diseño y capaces de abordar desigualdades de género y sociales.
El prototipado de Inteligencias Artificiales feministas en el sector público es clave para transformar estas narrativas. Este proceso requiere ideas locales, investigación interdisciplinaria y participación activa de diversos actores, incluidas las mujeres y niñas.
En Colombia, el gremio de traducción puede contribuir traduciendo y adaptando contenidos relacionados con tecnologías inclusivas, así como ayudando a desarrollar Inteligencias Artificiales que respeten la diversidad cultural y lingüística.
La construcción de una Inteligencia Artificial más inclusiva y feminista exige pensar en la tecnología como un bien público y colectivo.
El gremio de traducción puede aportar al diseño de políticas y productos tecnológicos que reflejen las realidades locales, ayudando a codificar la igualdad de género y raza en los sistemas. Este enfoque permitirá no solo superar barreras históricas, sino también empoderar a las mujeres en el diseño y uso de tecnologías transformadoras.
Al abrir espacios para que las mujeres participen en el diseño de Inteligencias Artificiales, se fomenta un futuro en el que las tecnologías de interés público tengan un impacto social positivo y sostenible, contribuyendo al avance de una sociedad más equitativa, tanto en Colombia como a nivel global.
The fourth Regional Environmental Health Plan (PRSE4) for Île-de-France pays particular attention to environmental health issues affecting children, young people, and their families, and to the potential involvement of the Éducation Nationale in these matters. The plan stresses that human health, animal health, and the environment are closely linked, and that citizens are increasingly concerned about these questions.
Issues specific to children and young people
Involvement of the Éducation Nationale
PRSE4 actions for children and young people
The PRSE4 proposes several concrete actions to protect the health of children and young people, including:
In summary, the PRSE4 recognizes that children and young people are particularly vulnerable to environmental risks and puts targeted actions in place to protect them.
The Éducation Nationale is identified as a key actor in transmitting knowledge and raising awareness of these issues, and the plan provides for specific actions to involve the education sector in this effort.
An Ed-Tech Tragedy? analyzes the many, entirely unforeseen negative consequences that followed the expansion of educational technologies. The book shows how the various solutions that were proposed, centered mainly on technology, left a large majority of students behind. It also explains in detail the different ways in which education was harmed, even in cases where the technology was accessible and worked normally.
This paragraph is important...
Here is a synthesis document focused on children, young people, and their families, drawing on the information in the sources provided:
Health status and challenges
Health determinants
Priorities for action
Axes of transformation
Means and levers
In summary, the health challenges for children and young people in Île-de-France are numerous and call for a comprehensive, coordinated approach focused on prevention, reducing inequalities, and adapting services to the needs of the population.
Who is the AI innovation economy for?
Implementing Artificial Intelligence in public and private processes has the potential to amplify or to mitigate these inequalities:
Inclusion in AI design, since marginalized communities must participate actively in creating datasets and AI systems that respect their identity, needs, and rights.
Promotion of equity, because public procurement can be used as a tool to correct structural inequalities by requiring the participation of companies that prioritize diversity and social justice in their technological processes.
Colombia's linguistic wealth, which includes Indigenous and Creole languages, is an invaluable resource that must be integrated into AI development:
Building databases that include languages such as Wayuunaiki or Nasa Yuwe can ensure that these technologies do not exclude non-Spanish-speaking communities.
Translating and localizing AI procurement and regulation processes will allow more sectors of the population to understand and take part in them.
Public procurement is a powerful tool for shaping the innovation economy and promoting accountability in the development of Artificial Intelligence.
Transparency and accountability
A lack of transparency in AI procurement can perpetuate inequalities. Colombia can adopt measures such as:
Creating algorithm registries: similar to the initiatives in Amsterdam and Helsinki, recording and publishing information about the algorithms used in public services (an illustrative sketch follows this list).
Publishing AI contracts: making key details of government contracts publicly accessible, such as the ethical standards companies must meet.
Independent audits: ensuring that contracted technologies respect human rights and avoid negative impacts on vulnerable populations.
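By way of illustration only, here is a minimal sketch of what a single record in such an algorithm registry could contain. The field names and example values are assumptions made for this sketch; they are not taken from the Amsterdam or Helsinki registries or from any existing Colombian system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlgorithmRegistryEntry:
    """Hypothetical record for a public algorithm registry (illustrative only)."""
    system_name: str                 # plain-language name of the system
    public_agency: str               # agency that operates or contracted the system
    purpose: str                     # what the system is used to decide or rank
    vendor: str                      # contracted supplier
    data_sources: List[str] = field(default_factory=list)
    human_oversight: str = ""        # how decisions can be reviewed or contested
    audit_reports: List[str] = field(default_factory=list)  # links to independent audits

# Example entry: all values are invented for illustration.
entry = AlgorithmRegistryEntry(
    system_name="Appointment triage model",
    public_agency="Example municipal health office",
    purpose="Orders appointment requests by clinical urgency",
    vendor="Example vendor S.A.S.",
    data_sources=["appointment history", "self-reported symptoms"],
    human_oversight="Every automated ranking can be reviewed by a case worker",
)
print(entry.system_name, "-", entry.public_agency)
```

Publishing even this small set of fields would let citizens see which systems exist, who runs them, and where the independent audits can be read.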
Inclusion and diversity in public procurement
Diversity requirements: requiring companies contracted to develop Artificial Intelligence to demonstrate a commitment to principles of equity, diversity, and inclusion.
Incentives for underrepresented communities: promoting the participation of small businesses led by women, Indigenous people, or Afro-descendants in technology tenders.
Ethical regulation in AI development
Mandatory ethical standards: implementing legal frameworks to regulate the ethical practices of AI providers, such as a Colombian algorithmic impact standard (similar to the AIA in Canada).
International cooperation: participating in global initiatives such as GPAI to foster accountability in the development and deployment of AI, ensuring that companies meet international standards.
Feminist principles and social justice in Colombian Artificial Intelligence
Integrating feminist principles into AI procurement and development can ensure that these technologies benefit all sectors of the population:
Equitable procurement: designing e-procurement systems that prioritize contracting companies led by women and other historically excluded minorities.
Historical reparation: using public procurement to correct structural inequalities, allocating resources to projects that benefit marginalized communities.
Three essential recommendations for building equality from scratch when designing e-procurement systems: civic participation, automation of reparation rules, and the constant improvement of the e-procurement platforms.
In Colombia, Indigenous, Afro-descendant, and campesino communities face structural barriers that limit their access to economic and political participation, aggravated by the unequal distribution of technological resources.
Implementing automated public procurement (e-procurement) systems in Colombia could:
Promote the active participation of women, people with disabilities, and ethnic groups in supplier lists.
Compensate for historical inequalities by applying reparation rules that prioritize marginalized communities in the award of contracts.
For example, mechanisms could be designed to prioritize contracting rural women and small minority-led cooperatives in sectors such as agriculture or technology, as sketched below.
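A minimal sketch of how an automated "reparation rule" could be layered on top of a bid's technical score. The attribute names and weights below are invented for this illustration; they are not drawn from the source text or from any existing procurement regulation.

```python
def score_bid(base_score: float, supplier: dict, weights: dict) -> float:
    """Add hypothetical reparation-rule bonuses on top of a technical score.

    Illustrative sketch only: 'woman_led', 'rural', and 'ethnic_minority_led'
    are assumed supplier attributes, and the weights are placeholder values.
    """
    bonus = 0.0
    if supplier.get("woman_led"):
        bonus += weights.get("woman_led", 0.0)
    if supplier.get("rural"):
        bonus += weights.get("rural", 0.0)
    if supplier.get("ethnic_minority_led"):
        bonus += weights.get("ethnic_minority_led", 0.0)
    return base_score + bonus

# Example usage with placeholder weights and one hypothetical bid.
weights = {"woman_led": 5.0, "rural": 3.0, "ethnic_minority_led": 5.0}
bid = {"woman_led": True, "rural": True, "ethnic_minority_led": False}
print(score_bid(80.0, bid, weights))  # 88.0
```

Because the rule is expressed in code, it can be published, audited, and adjusted publicly, which is precisely the kind of transparency the recommendations above call for.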
Colombia has a rich linguistic diversity that includes Indigenous languages, Creole languages, and Spanish. For Artificial Intelligence to be truly inclusive, it is crucial to develop localized datasets and translate content into languages such as Wayuunaiki, Emberá, or Nasa Yuwe.
Ensure that non-Spanish-speaking communities can participate in public procurement processes.
Reduce bias in Artificial Intelligence by incorporating diverse linguistic and cultural data into algorithm training.
As seen in initiatives such as the Common Voice platform in Africa, Colombia could promote similar projects to collect and digitize local languages, strengthening inclusion in automated governance systems.
Drawing on the approach presented, Colombia can use Artificial Intelligence and e-procurement to improve public procurement processes with an emphasis on equity and inclusion:
Create open platforms where communities can participate actively in the design and improvement of these systems.
Include feedback mechanisms so that decisions are transparent and respond to local needs.
Implement temporary measures that prioritize women, ethnic minorities, and people with disabilities in procurement processes.
Design economic incentives for cooperatives led by women and Indigenous communities, promoting an equitable redistribution of public resources.
Ensure that the platforms are open source, allowing audits and collaborative improvements.
Publicly document the changes made to the systems, ensuring that they respond to citizens' demands.
Feminist principles in government technology
Adopting a feminist approach to implementing emerging technologies in Colombia can:
Promote gender equality by incorporating equity principles into AI design from the outset.
Increase transparency by designing systems that prioritize human rights and avoid discriminatory practices.
Strengthen democratic governance by integrating a gender perspective into public procurement policy.
For example, public procurement systems could automatically evaluate gender representation among suppliers, ensuring a fair distribution of opportunities; a minimal sketch of such a check follows.
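The sketch below illustrates one way such an automated check could be computed, assuming a hypothetical contract record with "woman_led" and "value_cop" fields; a real e-procurement platform would define its own data model and reporting thresholds.

```python
from collections import Counter

def gender_representation(awarded_contracts):
    """Share of contracts (by count and by value) awarded to woman-led suppliers.

    Illustrative sketch with assumed record fields ('woman_led', 'value_cop').
    """
    counts = Counter("woman_led" if c["woman_led"] else "other" for c in awarded_contracts)
    total_value = sum(c["value_cop"] for c in awarded_contracts)
    woman_value = sum(c["value_cop"] for c in awarded_contracts if c["woman_led"])
    return {
        "share_by_count": counts["woman_led"] / len(awarded_contracts),
        "share_by_value": woman_value / total_value,
    }

# Example usage with invented contract values (in Colombian pesos).
contracts = [
    {"woman_led": True, "value_cop": 120_000_000},
    {"woman_led": False, "value_cop": 300_000_000},
    {"woman_led": True, "value_cop": 80_000_000},
]
print(gender_representation(contracts))
```

Reporting the share by value as well as by count matters, since a platform could award many small contracts to women-led firms while concentrating the budget elsewhere.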
The Regional Programme for Access to Prevention and Care (PRAPS) aims to improve access to prevention and care for populations in situations of severe social vulnerability. It pays particular attention to the health of children and their families, and to the potential involvement of the Éducation Nationale.
Early interventions with children and their families
* Developing early interventions: the PRAPS emphasizes developing early interventions with children and their families. This includes specific support for pregnant women to ensure early detection of perinatal depression and mother-infant relationship disorders.
* Coordinating actors: interventions with newly arrived migrants must be better coordinated, taking into account children's specific needs.
* Early childhood: for children aged 0 to 3, it is recommended to sensitize teams to these children's specific difficulties, to develop baby spaces in accommodation facilities, and to support women in their pre- and post-natal follow-up. Rapid referral to specialized child psychiatry teams is also crucial.
* School-age children: for children aged 3 to 12, psychological health rests on a stimulating and secure environment; school and contact with other children are essential. Unstable living conditions can perpetuate or reactivate trauma, leading to disorders that require particular attention.
* Adolescents: this period requires particular attention from frontline workers, especially in general medicine. Cultural mediation consultations in psychiatry can be useful.
* Links with the PMI and school medicine: the links between school medicine, Maternal and Child Protection (PMI) for the youngest children, and care services are decisive, including for later stages.
Mental health
* Psychological disorders among children and families: the PRAPS recognizes that mental health problems are prevalent among populations in precarious situations, including children.
* Importance of reception facilities: for unaccompanied minors (MNA), it is essential to have reception facilities where they can talk with trusted adults and be referred to psychotherapeutic care.
* Support for parents: care for the child must include the parents' role and take their own difficulties into account.
* Including child psychiatry professionals within the Mobile Psychiatry and Precarity Teams (EMPP) is encouraged so that children in precarious situations are better taken into account.
Specific actions for women and families
* Perinatal follow-up: supporting women in their pre- and post-natal follow-up is crucial for the early detection and referral of signs of perinatal depression, psychological disorders, or mother-infant relationship disorders. Mobile teams are dedicated to perinatal interventions.
* Strengthened mental health actions: strengthened mental health actions are needed for isolated women in precarious situations and their children, with a comprehensive somatic, social, and psychological assessment by perinatal psychiatry teams.
* Access to early childhood facilities: the PRAPS supports advocacy for better access to early childhood facilities (crèches) in conjunction with the PMI and child welfare services (ASE). Access to parent-child centres (LAEP) for very deprived families is also encouraged.
* The methodological guide "Agir avec les femmes en périnatalité" supports community health initiatives.
Involvement of the Éducation Nationale
* Role of the school: school plays an essential role in children's psychological and social development, particularly for those living in precarious conditions. School and contact with other children are essential.
* Links between actors: the links between school medicine, the PMI, and care services are decisive.
* Identifying difficulties: Éducation Nationale staff, particularly teachers, can be key actors in the early identification of difficulties faced by children and their families and in referring them to appropriate care services. It is important that Éducation Nationale staff be made aware of the issues of precarity and its impact on children's health.
* Fighting digital exclusion: the Éducation Nationale can help fight digital exclusion in health by supporting the acquisition of the digital skills needed to access health information and services.
* Training professionals: the PRAPS encourages continuing training for Éducation Nationale professionals on health issues related to precarity, so they can better support the children and families in their care.
Other important points:
* Health mediation: health mediators can facilitate communication between families and health professionals, particularly for non-French speakers. Health mediation relies on a third party to facilitate the circulation of information and clarify relationships with the health system.
* Professional interpreting: access to professional interpreting is essential to fight exclusion caused by language barriers, guaranteeing equal access to rights, prevention, and care.
* Outreach ("aller-vers"): outreach approaches carried out by multidisciplinary mobile teams are indispensable for reaching the populations furthest from the health system.
* Health check-ups: the PRAPS encourages structured health check-ups for newly arrived migrants, with the possibility of specialist referral and catch-up vaccination.
* Prevention and health promotion: the PRAPS stresses the need to spread a collective culture of prevention and to build approaches adapted to people without a home. This includes access to vaccination, harm-reduction tools, and screening (tuberculosis, STIs, HIV).
This synthesis document highlights the importance of a comprehensive, coordinated approach to the health of children and their families in precarious situations, with particular attention to mental health, access to care, and prevention. The involvement of the Éducation Nationale is essential to ensure follow-up and support adapted to these populations.
The document contains several elements that particularly concern young people in the Yvelines and their families, although it is not always specific to this department. The most relevant points are:
Mental health: the document stresses that young people's mental health is a regional priority. The increase in depressive episodes and suicidal thoughts among young people is a major concern. Families in the Yvelines can use this information to stay alert to signs of distress in their children and adolescents, and to seek out support resources if needed. The document encourages the development of prevention and early-screening programmes in mental health.
Territorial inequalities: the document highlights territorial disparities in health across Île-de-France. Although it does not detail the specific situation of the Yvelines, some areas of the department may be more affected by these inequalities than others. It is therefore important for families and local actors in the Yvelines to analyze the situation in their own area to identify the population's specific needs.
Access to care: the document indicates that access to care remains unequal across Île-de-France. The Yvelines, like other departments, may face difficulties in accessing health professionals, particularly in underserved areas. Families may be affected, notably regarding access to paediatric care, mental health care, and specialist consultations. The document mentions that the ARS supports the development of coordinated-practice structures (MSP, CDS, CPTS) and incentives to set up practice in underserved areas, which may improve access to care in the Yvelines over time.
Paediatric and perinatal care: the document mentions specific difficulties in the Yvelines regarding paediatric and perinatal care. Some type I maternity units struggle to maintain an on-call list of paediatricians, and some type IIA maternity units handle fewer than 1,500 births; summer closures of maternity units due to a shortage of midwives have also been observed in the Yvelines. Families in the Yvelines should be aware of these potential difficulties, and local actors should work to improve the supply of care in these areas. A territorial team helps maintain on-call obstetric and neonatal care within the GHT, supported by a local perinatal centre without inpatient beds.
Palliative care: the document calls for developing collaboration with volunteer associations and caregivers in palliative care, which may concern families in the Yvelines with relatives needing such care.
Physical activity and sport: the document notes the importance of sport and physical activity for young people's health. The 2024 Olympic and Paralympic Games are an opportunity to encourage physical activity for all residents of Île-de-France, and families in the Yvelines can build on this momentum to promote physical activity among their children.
Housing for health workers: to facilitate access to clinical placements, the "Logement des soignants" scheme could be extended to health students, particularly in the outer suburbs, which could include the Yvelines.
SMR: new facilities are planned in the Yvelines for follow-up and rehabilitation care (SMR) specializing in musculoskeletal conditions, digestive-endocrinology-diabetology-nutrition, and cardiovascular care.
Psychiatry: in the Yvelines, an additional authorization can be granted to bring an existing mixed older-adolescent/young-adult unit into line with the new authorization regime, and partial hospitalization for perinatal psychiatry can be expanded.
Dialysis: the modernization of dialysis facilities will be supported, particularly in the Yvelines, to improve care along the overall chronic kidney disease (MRC) pathway.
Social health inequalities: the document stresses social inequalities in health and their effects on the most disadvantaged populations, with particular attention to priority urban-policy neighbourhoods (QPV) and pockets of rural poverty. Families living in these areas of the Yvelines may be more exposed to difficulties in accessing care and to health problems.
Departmental actions: the document cites a few examples of departmental actions arising from the CNR santé, including an experiment extending appointment-booking hours within the access-to-care service (Yvelines – 78).
In summary, although the Yvelines are not always explicitly mentioned, this document provides essential information on the health challenges facing young people and their families in the department. Parent representatives on the CESCE can use this information to better understand local challenges and to initiate relevant actions to improve students' health and well-being.
Here is a detailed synthesis document, based on the excerpts you provided, covering the main themes and key ideas.
Synthesis Document: Analysis of the Orientations and Objectives of the Health System in Île-de-France
Introduction
The purpose of this document is to summarize the main strategic axes and operational objectives for the health system in Île-de-France, as presented in the source document. It highlights the major challenges, the proposed courses of action, and the monitoring indicators envisaged for improving the health of the region's population.
Key Themes and Important Ideas
Strengthening Citizens' Capacity to Act (Empowerment)
Empowerment is defined as a learning process for gaining access to power; it can be individual, collective, social, and political. The objective is to develop a culture of prevention and give citizens the means to act on their own health. This translates into developing and evaluating peer health-educator programmes in schools and universities, and into involving local authorities and service users in discussions on access to rights. "Empowerment combines two dimensions: power, which is the root of the word, and the learning process for gaining access to it."
Legible, Smooth Care Pathways That Meet People's Needs
Prevention and Public Health
Oral health prevention is identified as a key factor in overall health.
The use of preventive consultations by children, adolescents, young adults, and pregnant women needs to be increased.
Addiction-medicine support for psychiatric services and sector structures must be strengthened.
"Strengthen oral and dental prevention and promote oral health as an essential factor in good general health."
Human Resources in Health
Authorizations for Care Activities and Heavy Medical Equipment
* Authorizations for care activities are organized by levels of care, with specific distribution zones.
* Quantitative targets for the supply of care (OQOS) are set, notably for medicine, psychiatry, follow-up and rehabilitation care (SMR), and certain specialties such as cardiology and oncology.
* The emphasis is on territorial equity, with additional facilities planned in under-resourced areas such as Seine-Saint-Denis and the outer suburbs.
* For certain specialties, a minimum activity threshold must be met to ensure the quality and safety of care.
* The objective is to maintain the number of adult medicine facilities in the region, as the supply there is considered globally sufficient.
On-Call Care in Health Facilities (PDSES)
* Health facilities must provide on-call care, notably for endoscopic emergencies in gastroenterology.
* Specific funding is allocated to facilities to ensure this cover, based on the number of PDSES lines and the gradation of care.
Medical Biology Laboratories (LBM)
* Needs in medical biology were analyzed by department, based on the density of LBM sites per 100,000 inhabitants.
Monitoring Indicators
Several indicators are proposed to assess the implementation of the PRS3 (Regional Health Project), including:
Conclusion
This document reveals a strong commitment by health actors in Île-de-France to improving access to care, strengthening prevention, transforming medico-social provision, and reducing territorial inequalities.
The strategic axes and operational objectives are ambitious and will require significant coordination and mobilization of all the actors involved.
Regular monitoring of the indicators will make it possible to assess progress and adjust actions accordingly.
AI models in Africa
Comparative Study: Africa and Colombia
Africa and Colombia share similar challenges in terms of inequality, limited access to basic services, and cultural diversity. The lessons from the Makerere AI Lab can inspire solutions in Colombia, such as:
Using Artificial Intelligence for the early detection of diseases in humans and crops, combining mobile technologies with localized data.
Generating diverse datasets for speech recognition systems in minority languages.
Promoting a bottom-up approach to data creation, guaranteeing communities' active participation in collecting and using information.
the most critical issues to harness innovation within the AI ecosystem
Bodily diversity in Colombia encompasses a wide range of experiences, shaped by multicultural richness and the interaction of Indigenous, Afro-descendant, campesino, and urban communities. This diversity is also intertwined with unequal access to technology, health, and education, especially in rural areas.
The use of Artificial Intelligence to address social problems, as has been done in Africa, can inspire initiatives in Colombia. For example:
Artificial Intelligence for early diagnosis of diseases such as breast cancer or tuberculosis, adapted to rural Colombian contexts where medical services are limited.
Artificial Intelligence models to identify pests and diseases in crops important to rural communities, such as coffee, plantain, or maize.
Taking bodily diversity into account when designing solutions that are accessible to everyone, regardless of physical ability or social context.
Translation in Colombia can play a fundamental role in creating and using localized data to train Artificial Intelligence. Much as Luganda was included in the Common Voice project in Africa, initiatives can be developed to collect and translate data in Colombian Indigenous languages such as Wayuunaiki, Nasa Yuwe, or Emberá.
Broaden the representation of Indigenous languages in AI applications such as virtual assistants and speech recognition systems.
Help preserve and revitalize these languages by integrating them into modern technologies.
Generate diverse linguistic datasets that foster the development of inclusive, contextualized, and ethically responsible Artificial Intelligence.
The AI-for-social-good approach described in Africa can be adapted to the Colombian context, leveraging the "data-to-impact pipeline" to solve real problems.
Problem identification should be participatory, involving the affected communities.
Solutions to improve food distribution logistics in remote regions.
Artificial Intelligence to identify and mitigate environmental risks in areas affected by illegal mining or deforestation.
It is crucial to develop localized, representative datasets to avoid bias in Artificial Intelligence models.
Agricultural databases that reflect the particularities of Colombian ecosystems.
Health data adapted to the country's genetic and cultural diversity.
AI design must be grounded in an understanding of the local and cultural context.
Adapting models to the specific needs of Indigenous and Afro-descendant communities.
Integrating traditional knowledge into technological solutions, recognizing collective knowledge and ancestral practices.
Education in AI ethics is essential for training professionals who are aware of the social and cultural impacts of what they build. In addition, clear guidelines must be established for implementing ethical principles in technology development, fostering inclusive, non-extractive practices.
Authors’ Response (31 October 2024)
GENERAL ASSESSMENT
Pannexin (Panx) hemichannels are a family of heptameric membrane proteins that form pores in the plasma membrane through which ions and relatively large organic molecules can permeate. ATP release through Panx channels during the process of apoptosis is one established biological role of these proteins in the immune system, but they are widely expressed in many cells throughout the body, including the nervous system, and likely play many interesting and important roles that are yet to be defined. Although several structures have now been solved of different Panx subtypes from different species, their biophysical mechanisms remain poorly understood, including what physiological signals control their activation. Electrophysiological measurements of ionic currents flowing in response to Panx channel activation have shown that some subtypes can be activated by strong membrane depolarization or caspase cleavage of the C-terminus. Here, Henze and colleagues set out to identify endogenous activators of Panx channels, focusing on the Panx1 and Panx2 subtypes, by fractionating mouse liver extracts and screening for activation of Panx channels expressed in mammalian cells using whole-cell patch clamp recordings. The authors present a comprehensive examination with robust methodologies and supporting data that demonstrate that lysophospholipids (LPCs) directly activate Panx1 and Panx2 channels. These methodologies include channel mutagenesis, electrophysiology, ATP release and fluorescence assays, molecular modelling, and cryogenic electron microscopy (cryo-EM). Mouse liver extracts were initially used to identify LPC activators, but the authors go on to individually evaluate many different types of LPCs to determine those that are more specific for Panx channel activation. Importantly, the enzymes that endogenously regulate the production of these LPCs were also assessed along with other by-products that were shown not to promote pannexin channel activation. In addition, the authors used synovial fluid from canine patients, which is enriched in LPCs, to highlight the importance of the findings in pathology. Overall, we think this is likely to be a landmark study because it provides strong evidence that LPCs can function as activators of Panx1 and Panx2 channels, linking two established mediators of inflammatory responses and opening an entirely new area for exploring the biological roles of Panx channels. Although the mechanism of LPC activation of Panx channels remains unresolved, this study provides an excellent foundation for future studies and importantly provides clinical relevance.
We thank the reviewers for their time and effort in reviewing our manuscript. Based on their valuable comments and suggestions, we have made substantial revisions. The updated manuscript now includes two new experiments supporting that lysophospholipid-triggered channel activation promotes the release of signaling molecules critical for the immune response, and it demonstrates that this novel class of agonist activates the inflammasome in human macrophages through endogenously expressed Panx1. To better highlight the significance of our findings, we have excluded the cryo-EM panel from this manuscript. We believe these changes address the main concerns raised by the reviewers and enhance the overall clarity and impact of our findings. Below, we provide a point-by-point response to each of the reviewers' comments.
RECOMMENDATIONS
Essential revisions:
- The authors present a tremendous amount of data using different approaches, cells and assays along with a written presentation that is quite abbreviated, which may make comprehension challenging for some readers. We would encourage the authors to expand the written presentation to more fully describe the experiments that were done and how the data were analysed so that the 2 key conclusions can be more fully appreciated by readers. A lot of data is also presented in supplemental figures that could be brought into the main figures and more thoroughly presented and discussed.
We appreciate and agree with the reviewers' observation. Our initial manuscript may have been challenging to follow due to our use of both wild-type and GS-tagged versions of Panx1 from human and frog origins, combined with different fluorescence techniques across cell types. In this revision, we used only human wild-type Panx1 expressed in HEK293S GnTI- cells, except for activity-guided fractionation experiments, where we used GS-tagged Panx1 expressed in HEK293 cells (Fig. 1). For functional reconstitution studies, we employed YO-PRO-1 uptake assays, as optimizing the Venus-based assay was challenging. We have clarified these exceptions in the main text. We think these adjustments simplify the narrative and ensure an appropriate balance between main and supplemental figures.
- It would also be useful to present data on the ion selectivity of Panx channels activated by LPC. How does this compare to data obtained when the channel is activated by depolarization? If the two stimuli activate related open states then the ion selectivity may be quite similar, but perhaps not if the two stimuli activate different open states. The authors' earlier work in eLife shows interesting shifts in reversal potentials (Vrev) when substituting external chloride with gluconate but not when substituting external sodium with N-methyl-D-glucamine, and these changed with mutations within the external pore of Panx channels. Related measurements comparing channels activated by LPC with membrane depolarization would be valuable for assessing whether similar or distinct open states are activated by LPC and voltage. It would be ideal to make Vrev measurements using a fixed step depolarization to open the channel and then various steps to more negative voltages to measure tail currents in pinpointing Vrev (a so-called instantaneous IV).
We fully agree with the reviewer on the importance of ion selectivity experiments. However, comparing the properties of LPC-activated channels with those activated by membrane depolarization presented technical challenges, as LPC appears to stimulate Panx1 in synergy with voltage. Prolonged LPC exposure destabilizes patches, complicating G-V curve acquisition and kinetic analyses. While such experiments could provide mechanistic insights, we think they are beyond the scope of the current study.
- Data is presented for expression of Panx channels in different cell types (HEK vs HEK293S GnTI-) and different constructs (Panx1 vs Panx1-GS vs other engineered constructs). The authors have tried to be clear about what was done in each experiment, but it can be challenging for the reader to keep everything straight. The labelling in Fig 1E helps a lot, and we encourage the authors to use that approach systematically throughout. It would also help to clearly identify the cell type and channel construct whenever showing traces, like those in Fig 1D. Doing this systematically throughout all the figures would also make it clear where a control is missing. For example, if labelling for the type of cell was included in Fig 1D it would be immediately clear that a GnTI- vector alone control for WT Panx1 is missing as the vector control shown is for HEK cells and formally that is only a control for Panx2 and 3. Can the authors explain why LPC activates Panx1 overexpressed in HEK293 GnTI- cells but not in HEK293 cells? Is this purely a function of expression levels? If so, it would be good to provide that supporting information.
As mentioned above, we believe our revised version is more straightforward to digest. We have improved labeling and provided explanations where necessary to clarify the manuscript. While Panx1 expression levels are indeed higher in GnTI- than in HEK293 cells, we are uncertain whether the absence of detectable currents in HEK293 cells is solely due to expression levels. Some post-translational modifications that inhibit Panx1, such as lysine acetylation, may also impact activity. Future studies are needed to explore these mechanisms further.
- The mVenus quenching experiments are somewhat confusing in the way data are presented. In Fig 2B the y axis is labelled fluorescence (%) but when the channel is closed at time = 0 the value of fluorescence is 0 rather than 100 %, and as the channel opens when LPC is added the values grow towards 100 instead of towards 0 as iodide permeates and quenches. It would be helpful if these types of data could be presented more intuitively. Also, how was the initial rate calculated that is plotted in Fig 2C? It would be helpful to show how this is done in a figure panel somewhere. Why was the initial rate expressed as a percent maximum, what is the maximum and why are the values so low? Why is the effect of CBX so weak in these quenching experiments with Panx1 compared to other assays? This assay is used in a lot of experiments so anything that could be done to bolster confidence in what it reports on would be valuable to readers. Bringing in as many control experiments as have been done, including any that are already published, would be helpful.
We modified the Y-axis in Figure 2 to “Quench (%)” for clarity. The data reflects fluorescence reduction over time, starting from LPC addition, normalized to the maximal decrease observed after Triton-X100 addition (3 minutes), enabling consistent quenching value comparisons. Although the quenching value appears small, normalization against complete cell solubilization provides reproducible comparisons. We do not fully understand why CBX effects vary in Venus quenching experiments, but we speculate that its steroid-like pentacyclic structure may influence the lysophospholipid agonistic effects. As noted in prior studies (DOI: 10.1085/jgp.201511505; DOI: 10.7554/eLife.54670), CBX likely acts as an allosteric modulator rather than a simple pore blocker, potentially contributing to these variations.
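In formula form, the normalization described above can be written as

\[ \mathrm{Quench}(t)\,[\%] = \frac{F_0 - F(t)}{F_0 - F_{\mathrm{Triton}}} \times 100, \]

where \(F_0\) is the fluorescence at the time of LPC addition, \(F(t)\) is the fluorescence at time \(t\) afterwards, and \(F_{\mathrm{Triton}}\) is the fluorescence after complete solubilization with Triton X-100 (about 3 minutes). The symbols are introduced here only for clarity and do not appear in the manuscript itself.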
- Could the authors provide more information to help rationalize how Yo-Pro-1, which has a charge of +2, can permeate what are thought to be anion-favouring Panx channels? We appreciate that the biophysical properties of Panx channels remain mysterious, but it would help to hear a bit more about the authors' thinking. It might also help to cite other papers that have measured Yo-Pro-1 uptake through Panx channels. Was the Strep-tagged construct of Panx1 expressed in GnTI- cells and shown to be functional using electrophysiology?
Our recent study suggests that the electrostatic landscape along the permeation pathway may influence its ion selectivity (DOI: 10.1101/2024.06.13.598903). However, we have not yet fully elucidated how Panx1 permeates both anions and cations. Based on our findings, ion selectivity may vary with activation stimulus intensity and duration. Cation permeation through Panx1 is often demonstrated with YO-PRO-1, which measures uptake over minutes, unlike electrophysiological measurements conducted over milliseconds to seconds. We referenced two representative studies employing YO-PRO-1 to assess Panx1 activity. Whole-cell current measurements from a similar construct with an intracellular loop insertion indicate that our STREP-tagged construct likely retains functional capacity.
- In Fig 5 panel C, data is presented as the ratio of LPC induced current at -60 mV to that measured at +110 mV in the absence of LPC. What is the rationale for analysing the data this way? It would be helpful to also plot the two values separately for all of the constructs presented so the reader can see whether any of the mutants disproportionately alter LPC induced current relative to depolarization activated current. Also, for all currents shown in the figures, the authors should include a dashed coloured line at zero current, both for the LPC activated currents and the voltage steps.
We used the ratio of LPC-induced current to the current measured at +110 mV to determine whether any of the mutants disproportionately affect LPC-induced current relative to depolarization-activated current. Since the mutants that did not respond to LPC also exhibited smaller voltage-stimulated currents than those that did respond, we reasoned that using this ratio would better capture the information the reviewer is suggesting we gauge. Showing the zero current level may be helpful if the goal were to compare basal currents, which in our experience vary significantly from patch to patch. However, since we are comparing LPC- and voltage-induced currents within the same patch, we believe that including basal current measurements would not add useful information to our study.
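For clarity, the quantity plotted in Fig. 5C is, in our notation,

\[ R = \frac{I_{\mathrm{LPC}}(-60\ \mathrm{mV})}{I_{+110\ \mathrm{mV}}}, \]

with both currents measured in the same patch, so that each patch's LPC response is normalized to its own voltage-activated current. The symbol \(R\) is introduced here only for illustration and is not used in the figure itself.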
Given that new experiments were included to further highlight the significance of the discovery of Panx1 agonists, we opted to separate structure-based mechanistic studies from this manuscript and removed this experiment along with the docking and cryo-EM studies.
- The fragmented NTD density shown in Fig S8 panel A may resemble either lipid density or the average density of both NTD and lipid. For example, Class7 and Class8 in Fig.S8 panel D displayed split densities, which may resemble a phosphate head group and two tails of lipid. A protomer mask may not be the ideal approach to separate different classes of NTD because as shown in Fig S8 panel D, most high-resolution features are located on TM1-4, suggesting that the classification was focused on TM1-4. A more suitable approach would involve using a smaller mask including NTD, TM1, and the neighbouring TM2 region to separate different NTD classes.
We agree with the reviewer and attempted 3D classification using multiple smaller masks including the suggested region. However, the maps remained poorly defined, and we were unable to confidently assign the NTD.
- The authors don’t discuss whether the LPC-bound structures display changes in the external part of the pore, which is the anion-selective filter and the narrower part of the pore. If there are no conformational changes there, then the present structures cannot explain permeability to large molecules like ATP. In this context, a plot for the pore dimension will be helpful to see differences along the pore between their different structures. It would also be clearer if the authors overlaid maps of protomers to illustrate differences at the NTD and the "selectivity filter."
Both maps show that the narrowest constriction, formed by W74, has a diameter of approximately 9 Å. Previous steered molecular dynamics simulations suggest that ATP can permeate through such a constriction, implying an ion selection mechanism distinct from a simple steric barrier.
- The time between the addition of LPC to the nanodisc-reconstituted protein and grid preparation is not mentioned. Dynamic diffusion of LPC could result in equal probabilities for the bound and unbound forms. This raises the possibility of finding the Primed state in the LPC-bound state as well. Additionally, can the authors rationalize how LPC might reach the pore region when the channel is in the closed state before the application of LPC?
We appreciate the reviewer’s insight. We incubated LPC and nanodisc-reconstituted protein for 30 minutes, speculating that LPC approaches the pore similarly to other lipids in prior structures. In separate studies, we are optimizing conditions to capture more defined conformations.
- In the cryo-EM map of the “resting” state (EMDB-21150), a part of the density was interpreted as NTD flipped to the intracellular side. This density, however, is poorly defined, and not connected to the S1 helix, raising concerns about whether this density corresponds to the NTD as seen in the “resting” state structure (PDB-ID: 6VD7). In addition, some residues in the C-terminus (after K333 in frog PANX1) are missing from the atomic model. Some of these residues are predicted by AlphaFold2 to form a short alpha helix and are shown to form a short alpha helix in some published PANX1 structures. Interestingly, in both the AF2 model and 6WBF, this short alpha helix is located approximately in the weak density that the authors suggest represents the “flipped” NTD. We encourage the authors to be cautious in interpreting this part as the “flipped” NTD without further validation or justification.
We agree that the density corresponding to the NTD extended into the cytoplasm is relatively weak. In our recent study, we compared two Panx1 structures with or without the mentioned C-terminal helix and found evidence suggesting the likelihood of NTD extension (DOI: 10.1101/2024.06.13.598903). Nevertheless, to prevent potential confusion, we have removed the cryo-EM panel from this manuscript.
- Since the authors did not observe densities of bound LPC in the cryo-EM map, it is important to acknowledge in the text the inherent limitations of using docking and mutagenesis methods to locate where LPC binds.
Thank you for the suggestion. We have removed this section to avoid potential confusion.
Optional suggestions:
- The authors used MeOH to extract mouse liver for reversed-phase chromatography. Was the study designed to focus on hydrophobic compounds that likely bind to the TMD? Panx1 has both ECD and ICD with substantial sizes that could interact with water soluble compounds? Also, the use of whole-cell recordings to screen fractions would not likely identify polar compounds that interact with the cytoplasmic part of the TMD? It would be useful for the authors to comment on these aspects of their screen and provide their rationale for fractionating liver rather than other tissues.
We have added a rationale in line 90, stating: “The soluble fractions were excluded from this study, as the most polar fraction induced strong channel activities in the absence of exogenously expressed pannexins.” Additionally, we have included a figure to support this rationale (Fig. S1A).
- The authors show that LPCs reversibly increase inward currents at a holding voltage of -60 mV (not always specified in legends) in cells expressing Panx1 and 2, and then show families of currents activated by depolarizing voltage steps in the absence of LPC without asking what happens when you depolarize the membrane after LPC activation? If LPCs can be applied for long enough without disrupting recordings, it would be valuable to obtain both I-V relations and G-V relations before and after LPC activation of Panx channels. Does LPC disproportionately increase current at some voltages compared to others? Is the outward rectification reduced by LPC? Does Vrev remain unchanged (see point above)? It's hard to predict what would be observed, but almost any outcome from these experiments would suggest additional experiments to explore the extent to which the open states activated by LPC and depolarization are similar or distinct.
Unfortunately, in our hands, the prolonged application of lysolipids at concentrations necessary to achieve significant currents tends to destabilize the patch. This makes it challenging to obtain G-V curves or perform the previously mentioned kinetic analyses. We believe this destabilization may be due to lysolipids’ surfactant-like qualities, which can disrupt the giga seal. Additionally, prolonged exposure seems to cause channel desensitization, which could be another confounding factor.
- From the results presented, the authors cannot rule out that mutagenesis-induced insensitivity of Panx channels to LPCs results from allosteric perturbations in the channels rather than direct binding/gating by LPCs. In Fig 5 panel A-C, the authors introduced double mutants on TM1 and TM2 to interfere with LPC binding, however, the double mutants may also disrupt the interaction network formed within NTD, TM1, and TM2. This disruption could potentially rearrange the conformation of NTD, favouring the resting closed state. Three double Asn mutants, which abolished LPC induced current, also exhibited lower currents through voltage activation in Fig. S5, raising the possibility that the mutant channels fail to activate in response to LPC due to an increased energy barrier. One way to gain further insight would be to mutate residues in NTD that interact with those substituted by the three double Asn mutants and to measure currents from both voltage activation and LPC activation. Such results might help to elucidate whether the three double Asn mutants interfere with LPC binding. It would also be important to show that the voltage-activated currents in Fig. S5 are sensitive to CBX.
Thank you for the comment, with which we agree. Our initial intention was to use the mutagenesis studies to experimentally support the docking study. Due to uncertainties associated with the presented cryo-EM maps, we have decided to remove this study from the current manuscript. We will consider the proposed experiments in a future study.
- Could the authors elaborate on how LPC opens Panx1 by altering the conformation of the NTDs in an uncoordinated manner, going from the “primed” state to the “active” state? In the “primed” state, the NTDs seem to be ordered by forming interactions with the TMD, thus resulting in the largest (possible?) pore size around the NTDs. In contrast, in the “active” state, the authors suggest that the NTDs are fragmented as a result of uncoordinated rearrangement, which conceivably will lead to a reduction in pore size around the NTDs (won’t it?). It is therefore not intuitive to understand why a conformation with a smaller pore size represents an “active” state.
We believe the uncoordinated arrangement of NTDs is dynamic, allowing for potential variations in pore size during the activated conformation. Alternatively, NTD movement may be coupled with conformational changes in TM1 and the extracellular domain, which in turn could alter the electrostatic properties of the permeation pathway. We believe a functional study exploring this mechanism would be more appropriately presented as a separate study.
- Can the authors provide a positive control for these negative results presented in Fig S1B and C?
The positive results are presented in Fig. 1D and E.
- Raw images in Fig S6 and Fig S7 should contain units of measurement.
Thank you for pointing this out.
- It may be beneficial to show the superposition between primed state and activated state in both protomer and overall structure. In addition, superposition between primed state and PDB 7F8J.
We attempted to superimpose the cryo-EM maps; however, visually highlighting the differences in figure format proved challenging. Higher-resolution maps would allow for model building, which would more effectively convey these distinctions.
- Including particle numbers for each class in Fig S8 panels C and D would help in evaluating the quality of the classification.
Noted.
- A table for cryo-EM statistics should be included.
Thanks, noted.
- n values are often provided as a range within legends but it would be better to provide individual values for each dataset. In many figures you can see most of the data points, which is great, but it would be easy to add n values to the plots themselves, perhaps in parentheses above the data points.
While we agree that transparency is essential, adding n-values to each graph would make some figures less clear and potentially harder to interpret in this case. We believe that the dot plots, n-value range, and statistical analysis provide adequate support for our claims.
- The way caspase activation of Panx channels is presented in the introduction could be viewed as dismissive or inflammatory for those who have studied that mechanism. We think the caspase activation literature is quite convincing and there is no need to be dismissive when pointing out that there are good reasons to believe that other mechanisms of activation likely exist. We encourage you to revise the introduction accordingly.
Thank you for this comment. Although we intended to support the caspase activation mechanism in our introduction, we understand that the reviewer’s interpretation indicates a need for clarification. We hope the revised introduction removes any perception of dismissiveness.
- Why is the patient data in Fig 4F normalized differently than everything else? Once the above issues with mVenus quenching data are clarified, it would be good to be systematic and use the same approach here.
For Fig. 4F, we used a distinct normalization method to account for substantial day-to-day variation in experiments involving body fluids. Notably, we did not apply this normalization to other experimental panels due to their considerably lower day-to-day variation.
- What was the rationale for using the structure from ref 35 in the docking task?
The docking task utilized the human orthologue with a flipped-up NTD. We believe that this flipped-up conformation is likely the active form that responds to lysolipids. As our functional experiments primarily use the human orthologue for biological relevance, this structure choice is consistent. Our docking data shows that LPC does not dock at this site when using a construct with the downward-flipped NTD.
- Perhaps better to refer to double Asn ‘substitutions’ rather than as ‘mutations’ because that makes one think they are Asn in the wt protein.
Done.
- From Fig S1, we gather that Panx2 is much larger than Panx1 and 3. If that is the case, it's worth noting that to readers somewhere.
We have added the molecular weight of each subtype in the figure legend.
- Please provide holding voltages and zero current levels in all figures presenting currents.
We provided holding voltages. However, the zero current levels vary among the examples presented, making direct comparisons difficult. Since we are comparing currents with and without LPC, we believe that indicating zero current levels is unnecessary for this study.
- While the authors successfully establish lysophospholipid-gating of Panx1 and Panx2, Panx3 appears unaffected. It may be advisable to be more specific in the title of the article.
We are uncertain whether Panx3 is unaffected by lysophospholipids, as we have not observed activation of this subtype under any tested conditions.
(This is a response to peer review conducted by Biophysics Colab on version 1 of this preprint.)
Consolidated Peer Review Report (20 December 2023)
GENERAL ASSESSMENT
Pannexin (Panx) hemichannels are a family of heptameric membrane proteins that form pores in the plasma membrane through which ions and relatively large organic molecules can permeate. ATP release through Panx channels during the process of apoptosis is one established biological role of these proteins in the immune system, but they are widely expressed in many cells throughout the body, including the nervous system, and likely play many interesting and important roles that are yet to be defined. Although several structures have now been solved of different Panx subtypes from different species, their biophysical mechanisms remain poorly understood, including what physiological signals control their activation. Electrophysiological measurements of ionic currents flowing in response to Panx channel activation have shown that some subtypes can be activated by strong membrane depolarization or caspase cleavage of the C-terminus. Here, Henze and colleagues set out to identify endogenous activators of Panx channels, focusing on the Panx1 and Panx2 subtypes, by fractionating mouse liver extracts and screening for activation of Panx channels expressed in mammalian cells using whole-cell patch clamp recordings. The authors present a comprehensive examination with robust methodologies and supporting data that demonstrate that lysophospholipids (LPCs) directly activate Panx1 and Panx2 channels. These methodologies include channel mutagenesis, electrophysiology, ATP release and fluorescence assays, molecular modelling, and cryogenic electron microscopy (cryo-EM). Mouse liver extracts were initially used to identify LPC activators, but the authors go on to individually evaluate many different types of LPCs to determine those that are more specific for Panx channel activation. Importantly, the enzymes that endogenously regulate the production of these LPCs were also assessed along with other by-products that were shown not to promote pannexin channel activation. In addition, the authors used synovial fluid from canine patients, which is enriched in LPCs, to highlight the importance of the findings in pathology. Overall, we think this is likely to be a landmark study because it provides strong evidence that LPCs can function as activators of Panx1 and Panx2 channels, linking two established mediators of inflammatory responses and opening an entirely new area for exploring the biological roles of Panx channels. Although the mechanism of LPC activation of Panx channels remains unresolved, this study provides an excellent foundation for future studies and importantly provides clinical relevance.
RECOMMENDATIONS
Essential revisions:
Optional suggestions:
REVIEWING TEAM
Reviewed by:
Jorge Contreras, Professor, University of California, Davis, USA: electrophysiology and ion channel mechanisms
Wei Lü, Associate Professor, Department of Structural Biology, Van Andel Institute, USA: ion channel mechanisms, X-ray crystallography and cryo-EM
Xiaofeng Tan, Research Fellow, NINDS, NIH, USA: structural biology (X-ray crystallography and cryo-electron microscopy) and ion channel mechanisms
Kenton J. Swartz, Senior Investigator, NINDS, NIH, USA: ion channel structure and mechanisms, chemical biology and biophysics, electrophysiology and fluorescence spectroscopy
Curated by:
Kenton J. Swartz, Senior Investigator, NINDS, NIH, USA
(This consolidated report is a result of peer review conducted by Biophysics Colab on version 1 of this preprint. Comments concerning minor and presentational issues have been omitted for brevity.)
Summary Document: The Health of Île-de-France Residents (2023-2027)
Introduction
This document analyzes the main themes and key data drawn from the report "La santé des Franciliens : diagnostic pour le projet régional de santé 2023-2027" by the Île-de-France Regional Health Observatory (ORS Île-de-France).
It is intended as a reference tool for regional decision-makers, the Regional Health Agency (ARS), and all health stakeholders in the territory, as part of the preparation of the third Regional Health Project (PRS3) for the 2023-2027 period.
As Dr. Ludovic Toro, President of the ORS, emphasizes: "Health is a common good.
Acting on the societal and environmental determinants that shape it concerns us all."
Key Themes
Territorial and Social Health Inequalities
Women's Health
Risk Behaviors and Addictions
Environmental Determinants of Health
Mental Health
Child and Adolescent Health
Health of Older People
Sexual Health
Healthcare Provision and Access to Care
Nutrition and Food Security
Food Insecurity: The report highlights the problem of food insecurity, measured using the "Household Food Security Survey Module".
The highest prevalence of food insecurity is observed among low-income households. An inadequate food supply is also a risk factor.
Food insecurity is diagnosed through both a "population" approach and a "supply" approach.
Overweight and Obesity
The report notes that "body mass index (BMI) is a simple measure of weight relative to height commonly used to estimate overweight and obesity in adults."
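For reference, BMI is computed from the familiar ratio below; the formula and the commonly used WHO adult thresholds are standard definitions, not figures taken from the ORS report.

\[
\mathrm{BMI} = \frac{\text{weight (kg)}}{\text{height (m)}^{2}}, \qquad \text{overweight: } 25 \le \mathrm{BMI} < 30, \qquad \text{obesity: } \mathrm{BMI} \ge 30
\]

For example, an adult weighing 85 kg and measuring 1.70 m has a BMI of 85 / 1.70² ≈ 29.4, which falls in the overweight range.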
Perspectives and Challenges
The document concludes by stressing the importance of a comprehensive, coordinated approach to improving the health of Île-de-France residents. It highlights the need for:
Conclusion
Drawing on the ORS Île-de-France report, this summary document provides a comprehensive overview of the health challenges and issues in the region.
It is meant to serve as a solid basis for developing the third Regional Health Project (PRS3), in order to promote equitable and accessible health for all Île-de-France residents.
Case Study in Colombia
The digitization of territories in conservation projects in Colombia can have serious implications if indigenous rights are not respected. For example:
Conservation projects based on Artificial Intelligence that classify protected areas without considering that these territories are inhabited and managed by indigenous communities can generate conflict and dispossession.
The absence of collective perspectives in Artificial Intelligence algorithms reinforces individualistic narratives that do not reflect the indigenous worldview of nature as a living, shared entity.
The ethical position put forward by Indigenous AI highlights the need to redefine how knowledge and digital identity are collectivized, respecting community values and avoiding extractive processes.
We are essentially digitizing trees, animals, and plants and rivers, and boundaries, defining those using satellite imagery.
In Colombia, corporealities are deeply tied to cultural, territorial, and spiritual identity. For many indigenous, Afro-descendant, and campesino communities, the body is not only physical but also a bridge to the land and to nature.
These communities understand territory as a vital element of their collective existence, in contrast to Western visions that separate the individual from the natural environment.
The digitization of territories, as proposed in the use of Artificial Intelligence for conservation, raises important ethical challenges. Classifying and delimiting land and natural resources through satellite imagery and algorithms can strip these communities of their symbolic and material connection to the territory, perpetuating historical inequalities and violating their cultural and bodily rights.
Translation in Colombia could play a key role in mediating between indigenous perspectives and Western practices of conservation and territorial digitization.
Translating not only languages but also cultural concepts, such as relationality with nature and collective knowledge, is essential to avoid misunderstandings and to ensure that community voices are heard.
For example, when conservation projects based on Artificial Intelligence are developed, translation can help ensure that the principles, uses, and risks of these technologies are understood from indigenous worldviews, rather than imposing terminologies and approaches that disregard their practices and knowledge.
The implementation of Artificial Intelligence in conservation and land digitization in Colombia should ensure that:
Indigenous communities are included as principal actors in the design of technologies that affect their territories. This requires free, prior, and informed consultation processes, in line with international human rights standards.
Rather than imposing a digitization model based on the separation of land and person, Artificial Intelligence reflects how these communities perceive their spiritual, cultural, and economic connection with nature.
Artificial Intelligence recognizes and respects the collective knowledge of communities. This includes avoiding the appropriation of data that ignores the communal character of indigenous identity and knowledge, and instead promoting ethical principles such as those put forward by Indigenous AI.
The Oracle for Transfeminist Technologies
Speculative tools such as The Oracle for Transfeminist Technologies could inspire practices and technologies that respect and celebrate the plurality of bodies and subjectivities.
In a country marked by inequality, transfeminist technologies could address issues such as access to health, education, and representation, designing inclusive solutions that challenge structural discrimination based on body, gender, or sexuality.
Translation in Colombia would play a crucial role in preserving and promoting indigenous, Afro-descendant, and creole languages.
From a transfeminist standpoint, translation could go beyond language, integrating values of social justice and respect for diversity. For example, the act of translating should be not only linguistic but also cultural, incorporating sensitivity to experiences of gender and sexuality that challenge hegemonic norms.
The Oracle for Transfeminist Technologies can inspire the creation of tools and methodologies that allow Colombia's marginalized communities to express their narratives and worldviews authentically, respecting their cultural and bodily diversity.
In Colombia, Artificial Intelligence could have the potential to be a transformative tool, but it must be developed with an ethical and transfeminist approach to avoid reproducing dynamics of exclusion.
Using transfeminist values in technology design could guide the development of systems that promote:
Ensuring that Artificial Intelligence does not exclude trans or non-binary people, or members of indigenous and Afro-descendant communities.
Co-creating technodiversities with communities, adapting to local values and needs, as The Oracle for Transfeminist Technologies does in its participatory workshops.
Recognizing that data are not neutral, and fostering data collection and use practices that respect the autonomy and dignity of people and communities.
The Oracle for Transfeminist Technologies demonstrates how technodiversities can be designed from transfeminist values. In the Colombian context, these methodologies could be adapted to address local problems, such as:
Making trans and non-binary experiences visible in access to rights.
Designing platforms that amplify diverse voices, especially those of people marginalized because of their gender, race, or ethnicity.
Creating technologies that foster networks of support and solidarity among diverse communities.
Case study: Papa Reo
Case Studies: Papa Reo and Colombia
Papa Reo, an indigenous innovation initiative, shows how worldviews can shape ethical and sustainable technological solutions. In Colombia, indigenous and Afro-descendant peoples could lead technology projects grounded in their own cultural practices, such as the community stewardship of data and principles of reciprocity.
Translation and indigenous-language preservation initiatives in Colombia, inspired by models such as Papa Reo, could be developed with the aim of preserving linguistic and cultural heritage, avoiding extractive approaches, and promoting community participation.
The feminist principles of autonomy, consent, situated knowledge, and seeded connectivity, proposed by technology communities in Latin America, also apply to the diversities present in Colombia. These values foster the creation of technologies that reflect and respect experiences of gender, race, and class, dismantling oppressive systems.
Non-Western ethics
In Colombia, bodily diversity reflects not only the physical and cultural characteristics of its population, but also the social, economic, and political dynamics that shape inclusion and representation. Indigenous, Afro-descendant, and other minority communities have fought for the recognition of their rights and for the valuing of their worldviews, which include principles similar to the reciprocity and relationality described here.
In community contexts, practices such as the minga are expressions of this reciprocity, in which collective work is not an isolated voluntary act but a mutual responsibility.
Translation in Colombia, as linguistic mediation, makes it possible to preserve and give visibility to indigenous, Afro-descendant, and creole languages by connecting these communities with the nation and the world. From this perspective, translation must respect not only language but also the values and worldviews of the communities.
Initiatives such as Papa Reo promote technological and linguistic development rooted in community principles, avoiding cultural exploitation and fostering authentic representation.
Artificial Intelligence in Colombia could learn from initiatives such as Papa Reo, integrating principles of relationality and reciprocity into its design and use. Rather than imposing technological solutions adapted to communities, the development of Artificial Intelligence in the country should start from those communities' needs, values, and principles.
This is crucial to avoid extractive processes in data management, especially for data from indigenous and rural communities. For example, an Artificial Intelligence system that respects community autonomy could implement licenses similar to kaitiakitanga, in which data are not private property but protected common goods.
Gendered innovation
In Colombia, bodily diversities, encompassing the intersection of gender, sexual orientation, race, disability, and socioeconomic status, reflect historical and structural inequalities. Marginalized women and girls, as well as other vulnerable communities, face barriers to accessing and participating in the design and governance of technologies such as Artificial Intelligence. Yet these people are agents of change and hold practical knowledge and resilience that can be fundamental to developing technologies that respond to their realities.
Incorporating these experiences and sensibilities into Artificial Intelligence would make it possible to design more inclusive tools that contribute to personal and community growth. Such approaches can address algorithmic bias and foster technological applications that promote social justice and dignity.
Colombia, with its rich linguistic diversity that includes 65 recognized indigenous languages, faces challenges similar to those described for Māori in the Papa Reo project. Many indigenous and Afro-descendant communities in the country have their own languages that are essential for expressing their cultural identities, yet these languages are underrepresented in today's technodiversities.
Speech recognition and natural language processing technologies are not sufficiently developed for indigenous languages.
The lack of access to technology in people's mother tongues perpetuates inequalities in access to education, political participation, and other rights.
Projects such as Papa Reo could inspire the development of similar tools in Colombia, promoting technological platforms that incorporate indigenous languages to strengthen cultural identity and inclusion.
Intersectionality and ecofeminism offer valuable tools for questioning power dynamics in the production and use of Artificial Intelligence in Colombia, for example:
Intersectionality: recognizing how multiple oppressions (gender, ethnicity, class) affect access to and use of technologies, and designing solutions that address these interrelated needs.
Likewise, incorporating ancestral worldviews and knowledge into technology design to challenge dominant paradigms and build alternative models of innovation.
Ecofeminism: advocating for technologies that are not only socially just but also sustainable and respectful of the environment.
The creation of technologies that respect the ecological and cultural needs of local communities, such as tools for the sustainable management of natural resources or the preservation of indigenous languages and knowledge.
Artificial Intelligence in Colombia has the potential to address local inequalities if it is developed from an inclusive, alternative logic that:
Promotes the participation of diverse communities, especially women and people with disabilities, at every stage of technology development.
Incorporates local languages and perspectives into databases and algorithms.
Guarantees transparency, inclusive governance, and accountability in the use of technologies.
Looking ahead, it would be possible to:
Implement policies that fund inclusive technology projects and promote community participation in their design.
Create spaces where women, indigenous communities, and other marginalized groups can contribute their experience and creativity to technological development.
Create pilot projects inspired by initiatives such as Papa Reo, developing AI tools for indigenous languages in Colombia that can serve as a model for other regions.
According to Srinivasan (2019),1 the way people in the global South use and experience digital technologies could help bring a different understanding to tech innovation and its applications in the real world and the ways in which they are built for and by users.
Indigenous communities in Colombia are an essential component of the country's cultural, ethnic, and epistemological diversity. These communities hold ancestral knowledge that can contribute innovative solutions to complex problems, such as environmental sustainability, resource management, and coexistence in contexts of diversity. Although historically marginalized, this knowledge has the potential to enrich the way technodiversities, including Artificial Intelligence, are designed and used, so that they are inclusive and contextualized.
Implementing technologies that respect and promote indigenous languages can contribute to the preservation of cultural heritage and facilitate access to fundamental rights for these communities.
Likewise, translation in Colombia, particularly in the context of indigenous languages, is a fundamental tool for inclusion and equity. Given the country's linguistic diversity, translation can act as multilingual mediation between these communities and technological development.
Artificial Intelligence can play a crucial role in machine translation and natural language processing for indigenous languages. However, this requires adequate infrastructure and an ethical approach to ensure that these technologies respect cultural particularities, such as culturemes, and do not reinforce power asymmetries.
Colombia, from the Global South, faces significant challenges in terms of inequality and access to technology. However, it also has the opportunity to lead a model of innovation that is sensitive to local realities and that prioritizes social and environmental justice. Artificial Intelligence could:
Deliver accessible technologies that respond to the specific needs of communities in low-resource contexts.
Develop algorithms and databases that include and prioritize indigenous languages and knowledge.
Address environmental problems in a contextualized and sustainable way.
Public policies should promote the active participation of indigenous communities and other marginalized actors in the creation and use of technologies. This includes:
Funding for projects to translate and preserve indigenous languages using Artificial Intelligence.
Promotion of community participation in technology design.
Ethical regulations to ensure that technologies do not perpetuate forms of structural violence or exclusion.
Cultures of innovation: everyday innovation from the margins
Jugaad is a form of "everyday hacking" in India, in which people, especially from the lower castes, find creative, practical solutions with the limited resources at hand.
It is not just an economical or improvised way of solving problems; it is a way of relating to one's environment and its challenges, using manual skills, intuition, and creativity.
Jugaad does not follow the usual rules of organized or professional innovation; it creates its own logic based on necessity and adaptability.
It is a practice that does not separate thinking from doing, and is not much concerned with the future or the past; instead, it acts in the present to solve problems immediately. This way of working not only produces functional objects and solutions, but also reflects emotions, experiences, and human connections with the environment.
Jugaad is not just a technique; it is a way of life that redefines how we understand innovation and human capabilities.
Author response:
The following is the authors’ response to the original reviews.
Public Reviews:
Reviewer #1 (Public Review):
Summary:
The authors' research group had previously demonstrated the release of large multivesicular body-like structures by human colorectal cancer cells. This manuscript expands on their findings, revealing that this phenomenon is not exclusive to colorectal cancer cells but is also observed in various other cell types, including different cultured cell lines, as well as cells in the mouse kidney and liver. Furthermore, the authors argue that these large multivesicular body-like structures originate from intracellular amphisomes, which they term "amphiectosomes." These amphiectosomes release their intraluminal vesicles (ILVs) through a "torn-bag mechanism." Finally, the authors demonstrate that the ILVs of amphiectosomes are either LC3B positive or CD63 positive. This distinction implies that the ILVs either originate from amphisomes or multivesicular bodies, respectively.
Strengths:
The manuscript reports a potential origin of extracellular vesicle (EV) biogenesis. The reported observations are intriguing.
Weaknesses:
It is essential to note that the manuscript has issues with experimental designs and lacks consistency in the presented data. Here is a list of the major concerns:
(1) The authors culture the cells in the presence of fetal bovine serum (FBS) in the culture medium. Given that FBS contains a substantial amount of EVs, this raises a significant issue, as it becomes challenging to differentiate between EVs derived from FBS and those released by the cells. This concern extends to all transmission electron microscopy (TEM) images (Figure 1, 2P-S, S5, Figure 4 P-U) and the quantification of EV numbers in Figure 3. The authors need to use an FBS-free cell culture medium.
Although FBS indeed contains bovine EVs, the presence of the very large multivesicular EVs (amphiectosomes) on which our manuscript focuses has never been observed or reported in FBS. For reported size distributions of EVs in FBS, please find a few relevant references below:
PMID: 29410778, PMID: 33532042, PMID: 30940830 and PMID: 37298194
All the above publications show that the number of lEVs > 350-500 nm is negligible in FBS. The average diameter of the MV-lEVs (amphiectosomes) described in our manuscript is around 1.0-1.5 micrometers.
Reviewer #1: These papers evaluated the effectiveness of various methods to eliminate EVs from FBS, emphasizing the challenges associated with the presence of EVs in FBS. They also caution against using FBS in EV studies due to these issues. However, I did not find a clear indication regarding the size distributions of EVs in FBS in these papers.
Please provide accurate reference supporting the claim that 'lEVs > 350-500 nm are negligible in FBS.' The papers cited by the authors do not address this specific point.
In the revised manuscript, we address the point that, because FBS is sterile filtered, it cannot contain EVs larger than 0.22 µm.
Our response to Reviewer #1 point 2. When we showed TEM images of isolated EVs, we consistently used serum-free conditioned medium (Fig2 P-S, Fig2S5 J, O), as described previously (Németh et al 2021, PMID: 34665280).
Reviewer #1: This is an important point that is not mentioned in the original main text, figure legend or method. Please address.
We agree and we apologize for it. We added this information to the revised manuscript.
Our response to Reviewer #1 point 3. Our TEM images show cells captured in the process of budding and scission of large multivesicular EVs excluding the possibility that these structures could have originated from FBS.
Reviewer #1: These images may also depict the engulfment of EVs in FBS. Hence, it is crucial to utilize EV-free or EV-depleted FBS.
As we mentioned earlier, we added the information to the revised manuscript that sterile filtering of the FBS presumably removes particles, including EVs, larger than 0.22 µm.
Our response to Reviewer #1 point 4. In addition, in our confocal analysis, we studied Palm-GFP positive, cell-line derived MV-lEVs. Importantly, in these experiments, FBS-derived EVs are non-fluorescent, therefore, the distinction between GFP positive MV-lEVs and FBS-derived EVs was evident.
Reviewer #1: I agree that these fluorescent-labeled assays conclusively indicate that the MV-lEVs are originating from the cells. However, the images of concern are the non-fluorescent-labeled images in (Figure 1, 2P-S, S5, Figure 4 P-U and Figure 3). The MV-lEVs may derive from both the cells and FBS.
Please see above our response to points 1-3.
Our response to Reviewer #1 point 5. In addition, culturing cells in FBS-free medium (serum starvation) significantly affects autophagy. Given that in our study, we focused on autophagy related amphiectosome secretion, we intentionally chose to use FBS supplemented medium.
Reviewer #1: If this is a concern, the authors should use EV-depleted FBS.
As we discussed above, sterile filtration of FBS removes particles >0.22 µm. In addition, based on our preliminary experiments, EV-depleted serum may affect cell physiology.
Our response to Reviewer #1 point 6. Even though the authors of this manuscript are not familiar with the technological details of how FBS is processed before commercialization, it is reasonable to assume that the samples are subjected to sterile filtration (through a 0.22 micron filter), after which MV-lEVs cannot be present in commercial FBS samples.
Reviewer #1: This is a fair comment that needs to be included in the manuscript.
As you suggested, this comment is now included in the revised manuscript.
(2) The data presented in Figure 2 is not convincingly supportive of the authors' conclusion. The authors argue that "...CD81 was present in the plasma membrane-derived limiting membrane (Figures 2B, D, F), while CD63 was only found inside the MV-lEVs (Fig. 2A, C, E)." However, in Figure 2G, there is an observable CD63 signal in the limiting membrane (overlapping with the green signals), and in Figure 2J, CD81 also exhibits overlap with MV-IEVs.
Both CD63 and CD81 are tetraspanins known to be present both in the membrane of sEVs and in the plasma membrane of cells (for references, please see Uniprot subcellular location maps: https://www.uniprot.org/uniprotkb/P08962/entry#subcellular_location https://www.uniprot.org/uniprotkb/P60033/entry#subcellular_location). However, according to the feedback of the reviewer, for clarity, we will delete the sentence in question from the text.
Reviewer #1: Please also justify the statement questioned in (3), as these arguments are interconnected.
We hope you find our above responses to your comment acceptable.
(3) Following up on the previous concern, the authors argue that CD81 and CD63 are exclusively located on the limiting membrane and MV-IEVs, respectively (Figure 2-A-M). However, in lines 104-106, the authors conclude that "The simultaneous presence of CD63, CD81, TSG101, ALIX, and the autophagosome marker LC3B within the MV-lEVs..." This statement indicates that CD63 and CD81 co-localize to the MV-IEVs. The authors need to address this apparent discrepancy and provide an explanation.
There must be a misunderstanding, because we did not claim or imply in the text that "CD81 and CD63 are exclusively located on the limiting membrane and MV-IEVs". Here we studied co-localization of the above proteins in the case of intraluminal vesicles (ILVs). In Fig 2, we did not show any analysis of limiting membrane co-localization.
Reviewer #1: I have indicated that this statement is found in lines 104-106, where the authors argue, 'The simultaneous presence of CD63, CD81, TSG101, ALIX, and the autophagosome marker LC3B within the MV-lEVs...' If the authors acknowledge the inaccuracy of this statement, please provide a justification for this argument.
For clarity, we modified the description of the data shown in Fig 2 in the revised manuscript.
(4) The specificity of the antibodies used in Figure 2 should be validated through knockout or knockdown experiments. Several of the antibodies used in this figure detect multiple bands on western blots, raising doubts about their specificity. Verification through additional experimental approaches is essential to ensure the reliability and accuracy of all the immunostaining data in this manuscript.
We will consider this suggestion during the revision of the manuscript.
Reviewer #1: Please do so.
We carefully considered the suggestion, but we realized that it was not feasible for us to perform gene silencing for all of the antibodies used before resubmission of our revised manuscript. However, we repeated the Western blot for mouse anti-CD81 (Invitrogen MAA5-13548) and replaced the previous Western blot with it in the revised manuscript (Fig.2-S4H).
(5) In Figures 2P-R, the morphology of the MV-IEVs does not resemble those shown in Figures 1-A, H, and D, indicating a notable inconsistency in the data.
EM images in Figure 2 P-R show sEVs separated from serum-free conditioned media, as opposed to MV-lEVs, which were captured in situ in fixed tissue cultures (Fig 1). Therefore, the two EV populations necessarily have different sizes and structures. Furthermore, Fig. 1 shows images of ultrathin sections, while in Figure 2P-R we used negative-positive contrasting of intact sEVs without embedding and sectioning.
(6) There are no loading controls provided for any of the western blot data.
Not even the latest MISEV 2023 guidelines give recommendations for a proper loading control for separated EVs in Western blots (MISEV 2023, DOI: 10.1002/jev2.12404, PMID: 38326288). Here we applied our previously developed method (PMID: 37103858), which, in our opinion, is the most reliable approach for sEV Western blotting. For whole cell lysates, we used actin as a loading control (Fig3-S2B).
Reviewer #1: The blots referenced here (Fig2-S3; Fig2-S4B; Fig3-S2B) were conducted using total cell lysates, not EV extracts. Only one blot in Fig3-S2B includes an actin control. All remaining blots should incorporate actin controls for consistency.
Fig2-S3 (corresponding to Fig2-S4 in the revised manuscript) only shows the reactivity of the antibodies used. This Western blot is not intended to serve as the basis for any quantitative conclusions. Fig2-S4 (corresponding to Fig2-S5 in the revised manuscript) includes the actin control. Fig3-S2B shows the complete membrane, which was cut into 4 pieces, and the immune reactivity of different antibodies was tested. The actin band was included on the anti-LC3B blot. For clarity, we rephrased the figure legend.
Additionally, for Figures 2-S4B, the authors should run the samples from lanes i-iii in a single gel.
Please note that in Figure 2- S4B, we did run a single gel, and the blot was cut into 4 pieces, which were tested by anti-GFP, anti-RFP, anti-LC3A and anti-LC3B antibodies. Full Western blots are shown in Fig.3_S2 B, and lanes “1”, “2” and “3” correspond to “i”, “ii” and “iii” in Fig.2-S4, respectively.
Reviewer #1: In the original Figure 2- S4B, the blots were sectioned into 12 pieces. If lanes "i," "ii," and "iii" were run on the same blot, the authors are advised to eliminate the grids between these lanes.
Grids separating the lanes have been eliminated on Fig.2_S4 (now Fig.2_S5 in the revised manuscript).
(7) In Figure 2-S4, is there co-localization observed between LC3RFP (LC3A?) with other MV-IFV markers? How about LC3B? Does LC3B co-localize with other MV-IFV markers?
In Supplementary Figure 2-S4, we showed successful generation of HEK293T-PalmGFP-LC3RFP cell line. In this case we tested the cells, and not the released MV-lEVs. LC3A co-localized with the RFP signal as expected.
Reviewer #1: Does LC3RFP colocalize with MV-IFV markers in HEK293T-PalmGFP-LC3RFP cell line? This experiment aims to clarify the conclusion made in lines 104-106, where the authors assert that 'The concurrent existence of CD63, CD81, TSG101, ALIX, and the autophagosome marker LC3B within the MV-lEVs...'
In the case of PalmGFP-LC3RFP cells, LC3-RFP is overexpressed. Simultaneous assessment of this overexpressed protein with non-overexpressed, fluorescent antibody-detected molecules proved to be challenging because of spectral overlaps and inappropriate signal-to-noise ratios. Furthermore, in association with EVs, the number of antibody-detected molecules is substantially lower than in cells. Therefore, even though we tried, we could not successfully perform these experiments.
(8) The TEM images presented in Figure 2-S5, specifically F, G, H, and I, do not closely resemble the images in Figure 2-S5 K, L, M, N, and O. Despite this dissimilarity, the authors argue that these images depict the same structures. The authors should provide an explanation for this observed discrepancy to ensure clarity and consistency in the interpretation of the presented data.
As indicated in Materials and Methods, Fig 2-S5 F, G, H and I are conventional TEM images, fixed with 4% glutaraldehyde and 1% OsO4 for 2 h and embedded into Epon resin, with post-contrasting using 3.75% uranyl acetate for 10 min and lead citrate for 12 min. Samples processed this way have very high structure preservation and better image quality; however, they are not suitable for immune detection. In contrast, Fig.2-S5 K, L, M, N shows immunogold labelling of in situ fixed samples. In this case we used milder fixation (4% PFA with 0.1% glutaraldehyde, post-fixed with 0.5% OsO4 for 30 min) and LR-White hydrophilic resin embedding. This special resin enables immunogold TEM analysis. The sections were exposed to H2O2 and NaBH4 to render the epitopes accessible in the resin. Because of the different techniques applied, the preservation of the structure is not the same. In the case of Fig.2 J, O, separated sEVs were visualised by negative-positive contrast and immunogold labelling as described previously (PMID: 37103858).
Reviewer #1: Please include this justification in the revised version.
We included this justification in the revised manuscript.
(9) For Figures 3C and 3-S1, the authors should include the images used for EV quantification. Considering the concern regarding potential contamination introduced by FBS (concern 1), it is advisable for the authors to employ an independent method to identify EVs, thereby confirming the reliability of the data presented in these figures.
In our revised manuscript, we will provide all the images used for EV quantification in Figure 3C. Given that Figures 3C and 3-S1 show MV-lEVs released by HEK293T-PalmGFP cells, possible interference by FBS-derived non-fluorescent EVs can be excluded.
Reviewer #1: Please provide all the images.
Original LASX files are provided (DOI: 10.6019/S-BIAD1456 ).
Reviewer #1: The images raising concerns regarding the contamination of EVs in FBS primarily consist of transmission electron microscopy (TEM) images, namely, Figure 1, 2P-S, S5, and Figure 4 P-U, along with the quantification of EV numbers in Figure 3. These concerns persist despite the use of fluorescent-labeled experiments. While fluorescent-labeled MV-lEVs are conclusively identified as originating from the cells, the MV-lEVs observed in Figure 1, 2P-S, S5, and Figure 4 P-U and Figure 3 may derive from both the cells and FBS.
Large EVs (with diameter >800 nm) derived from FBS were not present in our experiments, as discussed above.
(10) Do the amphiectosomes released from other cell types as well as cells in mouse kidneys or liver contain LC3B positive and CD63 positive ILVs?
Based on our confocal microscopic analysis, in addition to the HEK293T-PalmGFP cells, HT29 and HepG2 cells also release similar LC3B and CD63 positive MV-lEVs. Preliminary evidence shows MV-lEV secretion by additional cell types.
Reviewer #1: Please show these data in the revised manuscript. Moreover, do cells in mouse kidneys or liver contain LC3B positive and CD63 positive ILVs?
We have added new confocal microscopic images to Fig2-S3 showing amphiectosomes released also by the H9c2 (ATCC) cardiomyoblast cell line. To preserve the ultrastructure of MV-lEVs in complex organs like kidney and liver, fixation with 4% glutaraldehyde and 1% OsO4 appears to be essential. This fixation does not allow for immune detection to assess LC3B and CD63 positive MV-lEVs in the ultrathin sections.
Reviewer #2 (Public Review):
Summary:
The authors had previously identified that a colorectal cancer cell line generates small extracellular vesicles (sEVs) via a mechanism where a larger intracellular compartment containing these sEVs is secreted from the surface of the cell and then tears to release its contents. Previous studies have suggested that intraluminal vesicles (ILVs) inside endosomal multivesicular bodies and amphisomes can be secreted by the fusion of the compartment with the plasma membrane. The 'torn bag mechanism' considered in this manuscript is distinctly different because it involves initial budding off of a plasma membrane-enclosed compartment (called the amphiectosome in this manuscript, or MV-lEV). The authors successfully set out to investigate whether this mechanism is common to many cell types and to determine some of the subcellular processes involved.
The strengths of the study are:
(1) The high-quality imaging approaches used seem to show good examples of the proposed mechanism.
(2) They screen several cell lines for these structures, also search for similar structures in vivo, and show the tearing process by real-time imaging.
(3) Regarding the intracellular mechanisms of ILV production, the authors also try to demonstrate the different stages of amphiectosome production and differently labelled ILVs using immuno-EM.
Several of these techniques are technically challenging to do well, and so these are critical strengths of the manuscript.
The weaknesses are:
(1) Most of the analysis is undertaken with cell lines. In fact, all of the analysis involving the assessment of specific proteins associated with amphiectosomes and ILVs are performed in vitro, so it is unclear whether these processes are really mirrored in vivo. The images shown in vivo only demonstrate putative amphiectosomes in the circulation, which is perhaps surprising if they normally have a short half-life and would need to pass through an endothelium to reach the vessel lumen unless they were secreted by the endothelial cells themselves.
Our previous results analyzing PFA-fixed, paraffin-embedded sections from colorectal cancer patients provided direct evidence that MV-lEV secretion also occurs in humans in vivo (PMID: 31007874). Regarding your comment on the presence of amphiectosomes in the circulation despite their short half-lives, we would like to point out that Fig. 1X shows a circulating lymphocyte which releases an MV-lEV within the vessel lumen. Furthermore, in the revised manuscript, an additional Fig.1-S1 is provided. Here, we show the release of MV-lEVs both by an endothelial and a sub-endothelial cell (Fig.1-S1G). In addition, these images show the simultaneous presence of MV-lEVs and sEVs in the circulation (Fig.1-S1 A, C, D, H and I). The transmission electron micrographs of mouse kidney and liver sections provide additional evidence that MV-lEVs are released by different types of cells and that the "torn bag release" also takes place in vivo (Fig. 1V).
(2) The analysis of the intracellular formation of compartments involved in the secretion process (Figure 2-S5) relies on immuno-EM, which is generally less convincing than high-/super-resolution fluorescence microscopy because the immuno-labelling is inevitably very sporadic and patchy. High-quality EM is challenging for many labs (and seems to be done very well here), but high-/super-resolution fluorescence microscopy techniques are more commonly employed, and the study already shows that these techniques should be applicable to studying the intracellular trafficking processes.
As you suggested, in the revised manuscript, we present additional super-resolution microscopy (STED) data. The intracellular formation of amphisomes, the fragmentation of LC3B-positive membranes and the formation of LC3B-positive ILVs were captured (Fig. 3B-F).
(3) One aspect of the mechanism, which needs some consideration, is what happens to the amphisome membrane, once it has budded off inside the amphiectosome. In the fluorescence images, it seems to be disrupted, but presumably, this must happen after separation from the cell to avoid the release of ILVs inside the cell. There is an additional part of Figure 1 (Figure 1Y onwards), which does not seem to be discussed in the text (and should be), that alludes to amphiectosomes often having a double membrane.
We agree with your comment regarding the amphisome membrane and have added a sentence to the Discussion of the revised manuscript. Fig 1Y onwards is now discussed in the manuscript. In addition, we labelled the surface of living HEK293 cells with wheat germ agglutinin (WGA), which binds to sialic acid and N-acetyl-D-glucosamine. After removing the unbound WGA by washes, the cells were cultured for an additional 3 hours, and the release of amphiectosomes was studied. The budding amphiectosome had a WGA-positive membrane, providing evidence that the external limiting membrane had a plasma membrane origin (Fig. 3G).
(4) The real-time analysis of the amphiectosome tearing mechanism seemed relatively slow to me (over three minutes), and if this has been observed multiple times, it would be helpful to know if this is typical or whether there is considerable variation.
Thank you for this comment. In the revised manuscript, we highlight that the first released LC3 positive ILV was detected as early as within 40 sec.
Overall, I think the authors have been successful in identifying amphiectosomes secreted from multiple cell lines and demonstrating that the ILVs inside them have at least two origins (autophagosome membrane and late endosomal multivesicular body) based on the markers that they carry. The analysis of intracellular compartments producing these structures is rather less convincing and it remains unclear what cells release these structures in vivo.
I think there could be a significant impact on the EV field and consequently on our understanding of cell-cell signalling based on these findings. It will flag the importance of investigating the release of amphiectosomes in other studies, and although the authors do not discuss it, the molecular mechanisms involved in this type of 'ectosomal-style' release will be different from multivesicular compartment fusion to the plasma membrane and should be possible to be manipulated independently. Any experiments that demonstrate this would greatly strengthen the manuscript.
We appreciate these comments of the reviewer. Experiments are on their way to elucidate the mechanism of the “ectosomal style” exosome release and will be the topic of our next publication.
In general, the EV field has struggled to link up analysis of the subcellular biology of sEV secretion and the biochemical/physical analysis of the sEVs themselves, so from that perspective, the manuscript provides a novel angle on this problem.
Reviewer #3 (Public Review):
Summary:
In this manuscript, the authors describe a novel mode of release of small extracellular vesicles. These small EVs are released via the rupture of the membrane of so-called amphiectosomes that resemble "morphologically" Multivesicular Bodies.
These structures have been initially described by the authors as released by colorectal cancer cells (https://doi.org/10.1080/20013078.2019.1596668). In this manuscript, they provide experiments that allow us to generalize this process to other cells. In brief, amphiectosomes are likely released by ectocytosis of amphisomes that are formed by the fusion of multivesicular endosomes with autophagosomes. The authors propose that their model puts forward the hypothesis that LC3 positive vesicles are formed by "curling" of the autophagosomal membrane which then gives rise to an organelle where both CD63 and LC3 positive small EVs co-exist and would be released then by a budding mechanism at the cell surface that appears similar to the budding of microvesicles /ectosomes. Very correctly the authors make the distinction from migrasomes because these structures appear very similar in morphology.
Strengths:
The findings are interesting, even though it remains unclear what the functional relevance of such a process would be, or how it could be induced. It points to a novel mode of release of extracellular vesicles.
Weaknesses:
This reviewer has comments and concerns regarding the interpretation of the data and the proposed model. In addition, in my opinion, some of the results, in particular micrographs and immunoblots (even those shown as supplementary data), are not of sufficient quality to support the conclusions.
Recommendations for the authors:
Reviewer #1 (Recommendations For The Authors):
(1) Highlight MV-IEV, ILV and limiting membrane in Figure-1G, N, and U.
Based on the suggestion, we revised Figure 1.
(2) Figure 1-Y-AF are not mentioned in the text.
In the revised manuscript, we discuss Figure 1Y-AF.
(3) The term "IEVs" in Figure 2-S2 is not defined.
We modified the figure legend: we changed MV-lEV to amphiectosome.
(4) Need to quantify co-localization in Figure 2-S2.
As suggested, we carried out the co-localisation analysis (Fig2-S2I), and Fig2-S2 was re-edited.
Reviewer #2 (Recommendations For The Authors):
I have two recommendations for improving the manuscript through additional experiments:
(1) I think the description of the intracellular processes taking place in order to form amphiectosomes would be much stronger if some super-resolution imaging could be undertaken. This should label the different compartments before and after fusion with specific markers that highlight the protein signature of the different limiting and ILV membranes much more clearly than immuno-EM. It will also help in characterising the double-membrane structure of amphiectosomes at the point of budding and reveal whether the patchy labelling of the inner membrane emerges after amphiectosome release (the schematic model currently suggests that it happens before).
Thank you for your suggestion. STED microscopy was applied and results are shown in new Fig3 and the schematic model was modified accordingly.
(2) The implications of the manuscript would be more wide-ranging if the authors could test genetic manipulations that are believed to block exosome or ectosome release, eg. Rab27a or Arrdc1 knockdown. This may allow them to determine whether MV-lEVs can be released independently of the classical exosome release mechanism because they use a different route to be released from the plasma membrane. This experiment is not essential, but I think it would start to address the core regulatory mechanisms involved, and if successful, would easily allow the authors to determine the ratio of CD63-positive sEVs being secreted via classical versus amphiectosome routes.
The suggestion is very valuable for us and these studies are being performed in a separate project.
I think there are several other ways in which the manuscript could be improved to better explain some of the approaches, findings and interpretation:
(1) Include some explanation in the text of certain key tools, particularly:
a. Palm-GFP and whether its expression might alter the properties of the plasma membrane since this is used in a lot of experiments and is the only marker that seems to uniformly label the outer membrane of amphiectosomes. One concern might be that its expression drives amphiectosome secretion.
We found evidence for amphiectosome release also in the case of several different cells not expressing Palm-GFP. We believe this excludes the possibility that Palm-GFP expression is the inducer of amphiectosome release. Both by fluorescence and electron microscopy, the Palm-GFP non-expressing cells showed very similar MV-lEVs. In addition, we made similar observations with non-transduced HEK293 cells and fluorescent WGA binding.
b. Lactadherin - does this label the amphiectosomes after their release or does the wash-off step mean that it only labels cells, which subsequently release amphiectosomes?
Lactadherin labels the amphiectosomes after their release and fixation. Living cells cannot be labelled by lactadherin as PS is absent in the external plasma membrane layer of living cells. We used WGA on HEK293 cells to further support the plasma membrane origin of the external membrane of amphiectosomes.
(2) Explain the EM and confocal imaging approaches more clearly. Most importantly, is a 3D reconstruction always involved to confirm that 'separated' amphiectosomes are not joined to cells in another Z-plane.
Thank you for your suggestion. We have modified the manuscript accordingly
(3) Presenting triple-labelled images with red, green and yellow channels does not allow individual labelling to be determined without single-channel images and even then, it is much more informative to use three distinguishable colours that make a different colour with overlap, eg. CMY? Fig.2_S2D and E do not display individual channels, so definitely need to be changed.
In the case of Fig.2_S2D, we now show the individual channels, and the earlier panel E has been removed. In the case of the STED images, CMY colors were used, as you suggested.
(4) Please discuss in the text the data in Figure 1Y onwards concerning single/double membranes on MV-lEVs.
In the revised manuscript, we discuss the question of single versus double membranes and refer to Figure 1Y-AF.
(5) On line 162, reword 'intraluminal TSPAN4 only' to 'one in which TSPAN4 is only intraluminal' to make it clear that other proteins are also marking the intraluminal region, not TSPAN4 only.
We modified the text accordingly.
(6) Points for further discussion and further conclusions:
a. In vivo experiments - discuss the limitations of this part of the analysis - it seems that none of the amphiectosome markers have been analysed in this part of the study and the MV-lEVs are only in the circulation.
b. Can the authors give any further indication of the levels of MV-lEVs relative to free sEVs from any of their studies?
Using our current approach, it is not possible to determine the levels of MV-lEVs relative to free sEVs. Without analyzing serial ultrathin sections, determination of the relative ratio of MV-lEVs and sEVs would depend on the actual section plane. In future projects, we will determine the ratio of LC3 positive and negative sEVs by single EV analysis techniques (such as SP-IRIS). In the revised manuscript, additional TEM images are included to provide evidence for the simultaneous presence of sEVs and MV-lEVs both inside and outside of the circulation.
c. Please discuss the single versus double membrane issue (relating to experiments proposed above).
We discuss this question in more detail in the revised manuscript.
d. Please point out that the release mechanism (plasma membrane budding) will involve different molecular mechanisms to establish exosome release, and this might provide a route to determine relative importance.
We are currently running a systemic analysis of the release mechanism of amphiectosomes, and this will be the topic of a separate manuscript.
Reviewer #3 (Recommendations For The Authors):
* The model is not supported.
* The data is not of quality.
* The appropriate methods are not exploited.
We are sorry, but we cannot respond to these unsupported critiques.
Briefing Document: Analysis of the Webinar "Scénariser un Enseignement Hybride"
Introduction
This document summarizes the main themes and key information presented during the France Université Numérique webinar of 24 January 2025 on the micro-certification "Scénariser un enseignement hybride" (designing a hybrid course). The speakers were Xavier Moulin, Director of the Digital and Pedagogical Support Service, and Joshua Fonti, Digital Learning Engineer, both from the Université de Nîmes.
1. General Information about the Course
2. Key Points and Main Ideas
The importance of instructional design: Joshua Fonti emphasizes that the course is designed to make it easier to build training modules, starting from the instructors' professional expertise and providing a framework to save time and be more dynamic.
"From there, we sought to build a pathway, based on our professional expertise and our day-to-day work, that could make it easier and perhaps more dynamic for you, and save you time, in building training modules."
A research-based approach: Xavier Moulin stresses that the course is grounded in education-sciences research.
"Indeed, the course is suited to non-university trainers, but what sets it apart is that it is based on, and was built from, research articles and scientific research in the field of education."
Autonomy and skills transfer: The course aims to make participants autonomous and able to support their peers.
"It is also about becoming autonomous and being able to work with one's peers."
Adaptation to context: The course is designed to adapt to participants' different professional contexts and needs.
"All of this is really meant to respond to your needs and enable you to design and deploy suitable teaching content. It can be adapted to the needs of students and learners, but above all to your own context."
Differentiated instruction: Hybridization is seen as a way to offer differentiated instruction and to improve accessibility for all learners.
"One of the strengths of using hybridization is being able to offer differentiated instruction, that is, activities that everyone can complete, as many times as they wish."
A reflective stance: Participants are encouraged to adopt a reflective stance toward their teaching practices, starting from concrete observations in order to adjust learning pathways.
"And finally, there is the idea of adopting a reflective stance toward one's practices."
3. Definitions and Concepts
Hybrid teaching: a combination of face-to-face and distance-learning elements, using digital tools.
HADI: distance learning.
Ouis: learning and assessment situations that fit within a competency-based approach.
Micro-certification: a short certification that validates a specific skill.
4. Questions and Answers
Many questions were asked, covering aspects such as:
5. Key Quotations
On the aim of the course: "the idea is to be able to [build] these resources and activities properly, with tools that are free, and to implement them on a CMS platform."
On pedagogical coherence: "the idea is that there is coherence, being able to weave from one element to the next. On that, I refer you, for example, to the work of Buchon and Tuteur."
On the importance of support: "As subject-matter experts, we will be able to answer questions that are very specific to your context."
On tutoring: "A tutoring arrangement designed to support you according to your needs and the requests you may have, providing day-to-day help with methodological needs, psychosocial or cognitive support, or learning a concept or an action."
On the multimodal approach: "...to develop the ability to deploy what we would call a multimodal approach in order to foster differentiated instruction."
Conclusion
This webinar gave a clear and detailed presentation of the course "Scénariser un enseignement hybride".
The course is designed to be practical, flexible, and focused on developing participants' skills.
It emphasizes the importance of instructional design, pedagogical coherence, differentiation, and adaptation to context, all within a theoretical framework grounded in education-sciences research.
The micro-certification is a significant asset, recognized by several universities.
The tutoring arrangement provides personalized follow-up for each learner.
Author response:
Public Reviews:
Reviewer #1 (Public review):
Summary:
In this manuscript, the authors identified that:
(1) CDK4/6i treatment attenuates the growth of drug-resistant cells by prolongation of the G1 phase;
(2) CDK4/6i treatment results in an ineffective Rb inactivation pathway and suppresses the growth of drug-resistant tumors;
(3) Addition of endocrine therapy augments the efficacy of CDK4/6i maintenance;
(4) Addition of CDK2i to CDK4/6i treatment as a second-line treatment can suppress the growth of resistant cells;
(5) The role of cyclin E as a key driver of resistance to CDK4/6 and CDK2 inhibition.
Strengths:
To prove their complex proposal, the authors employed an orchestration of several kinds of live-cell markers, timed in situ hybridization, IF, and immunoblotting. The authors clearly recognize the problem of resistance to CDK4/6i plus ET therapy and demonstrate how to overcome it.
Weaknesses:
The authors need to clearly distinguish their results from what has already been achieved by themselves and by other researchers.
Thank you for your thoughtful review and for highlighting both the strengths and weaknesses of our manuscript. We appreciate your recognition of the methodological rigor and the significance of our findings in addressing resistance to CDK4/6 inhibitors combined with endocrine therapy.
To address your concern regarding the need to delineate our results from those achieved by other researchers, we will incorporate clarifications in the revised manuscript. Specifically, we will:
(1) Clearly distinguish our novel contributions from prior findings in the field.
(2) Explicitly cite and discuss relevant studies to contextualize our work, ensuring that our contributions are appropriately framed within the broader body of knowledge.
These revisions will enhance the transparency and impact of our manuscript, as well as highlight the originality and significance of our findings. Thank you again for your constructive feedback.
Reviewer #2 (Public review):
Summary:
This study elucidated the mechanism underlying drug resistance induced by CDK4/6i as a single agent and proposed a novel and efficacious second-line therapeutic strategy. It highlighted the potential of combining CDK2i with CDK4/6i for the treatment of HR+/HER2- breast cancer.
Strengths:
The study demonstrated that CDK4/6 induces drug resistance by impairing Rb activation, which results in diminished E2F activity and a delay in G1 phase progression. It suggests that the synergistic use of CDK2i and CDK4/6i may represent a promising second-line treatment approach. Addressing critical clinical challenges, this study holds substantial practical implications.
Weaknesses:
(1) Drug-resistant cell lines: Was a drug concentration gradient treatment employed to establish drug-resistant cell lines? If affirmative, this methodology should be detailed in the materials and methods section.
We greatly appreciate the reviewer for raising this important question. In the revised manuscript, we will update the methods section to include a detailed description of how the drug-resistant cell lines were developed. Specifically, we will clarify whether a drug concentration gradient treatment was employed and provide step-by-step details to ensure reproducibility.
(2) What rationale informed the selection of MCF-7 cells for the generation of CDK6 knockout cell lines? Supplementary Figure 3. A indicates that CDK6 expression levels in MCF-7 cells are not notably elevated.
We appreciate the reviewer’s insightful question about the rationale for selecting MCF-7 cells to generate CDK6 knockout cell lines. This choice was guided by prior studies highlighting the significant role of CDK6 in mediating resistance to CDK4/6 inhibitors (1-4). Moreover, we observed a 4.6-fold increase in CDK6 expression in CDK4/6 inhibitor-resistant MCF-7 cells compared to their drug-naïve counterparts (Supplementary Figure 3A). While we did not detect notable differences in CDK4/6 activity between wild-type and CDK6 knockout cells under CDK4/6 inhibitor treatment, these findings point to a potential non-canonical function of CDK6 in conferring resistance to CDK4/6 inhibitors.
(3) For each experiment, particularly those involving mice, the author must specify the number of individuals utilized and the number of replicates conducted, as detailed in the materials and methods section.
We sincerely thank the reviewer for bringing this to our attention. In the revised manuscript, we will provide explicit details regarding the number of replicates and mice used for each experiment. This information will be included in the materials and methods section, figure legends, and relevant text to ensure transparency and clarity.
(4) Could this treatment approach be extended to triple-negative breast cancer?
We greatly appreciate the reviewer’s inquiry about extending our findings to triple-negative breast cancer (TNBC). Based on our data presented in Figure 1 and Supplementary Figure 2, which include the TNBC cell line MDA-MB-231, we anticipate that the benefits of maintaining CDK4/6 inhibitors could indeed be applied to TNBC with an intact Rb/E2F pathway.
Reviewer #3 (Public review):
Summary:
In their manuscript, Armand and colleagues investigate the potential of continuing CDK4/6 inhibitors or combining them with CDK2 inhibitors in the treatment of breast cancer that has developed resistance to initial therapy. Utilizing cellular and animal models, the research examines whether maintaining CDK4/6 inhibition or adding CDK2 inhibitors can effectively control tumor growth after resistance has set in. The key findings from the study indicate that the sustained use of CDK4/6 inhibitors can slow down the proliferation of cancer cells that have become resistant, and the combination of CDK2 inhibitors with CDK4/6 inhibitors can further enhance the suppression of tumor growth. Additionally, the study identifies that high levels of Cyclin E play a significant role in resistance to the combined therapy. These results suggest that continuing CDK4/6 inhibitors along with the strategic use of CDK2 inhibitors could be an effective strategy to overcome treatment resistance in hormone receptor-positive breast cancer.
Strengths:
(1) Continuous CDK4/6 Inhibitor Treatment Significantly Suppresses the Growth of Drug-Resistant HR+ Breast Cancer: The study demonstrates that the continued use of CDK4/6 inhibitors, even after disease progression, can significantly inhibit the growth of drug-resistant breast cancer.
(2) Potential of Combined Use of CDK2 Inhibitors with CDK4/6 Inhibitors: The research highlights the potential of combining CDK2 inhibitors with CDK4/6 inhibitors to effectively suppress CDK2 activity and overcome drug resistance.
(3) Discovery of Cyclin E Overexpression as a Key Driver: The study identifies overexpression of cyclin E as a key driver of resistance to the combination of CDK4/6 and CDK2 inhibitors, providing insights for future cancer treatments.
(4) Consistency of In Vitro and In Vivo Experimental Results: The study obtained supportive results from both in vitro cell experiments and in vivo tumor models, enhancing the reliability of the research.
(5) Validation with Multiple Cell Lines: The research utilized multiple HR+/HER2- breast cancer cell lines (such as MCF-7, T47D, CAMA-1) and triple-negative breast cancer cell lines (such as MDA-MB-231), validating the broad applicability of the results.
Weaknesses:
(1) The manuscript presents intriguing findings on the sustained use of CDK4/6 inhibitors and the potential incorporation of CDK2 inhibitors in breast cancer treatment. However, I would appreciate a more detailed discussion of how these findings could be translated into clinical practice, particularly regarding the management of patients with drug-resistant breast cancer.
We greatly appreciate this opportunity to further contextualize our findings within clinical practice. In the revised manuscript, we will expand the discussion to explore how the identified mechanisms can inform patient stratification and therapeutic combinations. We will also highlight the potential of integrating CDK2 inhibitors with continued CDK4/6 inhibition as a second-line strategy for HR+ breast cancer patients who exhibit resistance to CDK4/6 inhibitors, leveraging insights from current and ongoing clinical trials. This will provide a clearer framework for translating our findings into actionable therapeutic strategies.
(2) While the emergence of resistance is acknowledged, the manuscript could benefit from a deeper exploration of the molecular mechanisms underlying resistance development. A more thorough understanding of how CDK2 inhibitors may overcome this resistance would be valuable.
Thank you for this insightful suggestion. In the revised manuscript, we will delve deeper into the molecular mechanisms by which CDK2 inhibitors counteract resistance to CDK4/6 inhibitors and endocrine therapy. We will emphasize the role of the non-canonical Rb inactivation pathway and upregulated transcriptional activity in reactivating CDK2, which contribute to resistance under CDK4/6 inhibition. Furthermore, we will discuss how dual inhibition of CDK4/6 and CDK2 effectively suppresses this resistance pathway, offering a mechanistic rationale for the therapeutic potential of this combination strategy.
(3) The manuscript supports the continued use of CDK4/6 inhibitors, but it lacks a discussion on the long-term efficacy and safety of this approach. Additional studies or data to support the safety profile of prolonged CDK4/6 inhibitor use would strengthen the manuscript.
We greatly appreciate the reviewer for raising this important point. To address this, we will incorporate a discussion on the long-term safety and efficacy of CDK4/6 inhibitor maintenance therapy. Drawing from clinical trials and retrospective analyses (5-9), we will highlight data supporting the tolerability of prolonged CDK4/6i treatment, particularly in combination with endocrine therapy. We will also discuss its clinical benefits over chemotherapy or endocrine therapy alone, contextualizing these findings with our proposed therapeutic approach (6,8-11).
References:
(1) Yang C, Li Z, Bhatt T, Dickler M, Giri D, Scaltriti M, et al. Acquired CDK6 amplification promotes breast cancer resistance to CDK4/6 inhibitors and loss of ER signaling and dependence. Oncogene 2017;36:2255-64
(2) Li Q, Jiang B, Guo J, Shao H, Del Priore IS, Chang Q, et al. INK4 Tumor Suppressor Proteins Mediate Resistance to CDK4/6 Kinase Inhibitors. Cancer Discov 2022;12:356-71
(3) Ji W, Zhang W, Wang X, Shi Y, Yang F, Xie H, et al. c-myc regulates the sensitivity of breast cancer cells to palbociclib via c-myc/miR-29b-3p/CDK6 axis. Cell Death & Disease 2020;11:760
(4) Wu X, Yang X, Xiong Y, Li R, Ito T, Ahmed TA, et al. Distinct CDK6 complexes determine tumor cell response to CDK4/6 inhibitors and degraders. Nature Cancer 2021;2:429-43
(5) Martin JM, Handorf EA, Montero AJ, Goldstein LJ. Systemic Therapies Following Progression on First-line CDK4/6-inhibitor Treatment: Analysis of Real-world Data. Oncologist 2022;27:441-6
(6) Xi J, Oza A, Thomas S, Ademuyiwa F, Weilbaecher K, Suresh R, et al. Retrospective Analysis of Treatment Patterns and Effectiveness of Palbociclib and Subsequent Regimens in Metastatic Breast Cancer. J Natl Compr Canc Netw 2019;17:141-7
(7) Basile D, Gerratana L, Corvaja C, Pelizzari G, Franceschin G, Bertoli E, et al. First- and second-line treatment strategies for hormone-receptor (HR)-positive HER2-negative metastatic breast cancer: A real-world study. Breast 2021;57:104-12
(8) Kalinsky K, Accordino MK, Chiuzan C, Mundi PS, Sakach E, Sathe C, et al. Randomized Phase II Trial of Endocrine Therapy With or Without Ribociclib After Progression on Cyclin-Dependent Kinase 4/6 Inhibition in Hormone Receptor–Positive, Human Epidermal Growth Factor Receptor 2–Negative Metastatic Breast Cancer: MAINTAIN Trial. Journal of Clinical Oncology;0:JCO.22.02392
(9) Kalinsky K, Bianchini G, Hamilton EP, Graff SL, Park KH, Jeselsohn R, et al. Abemaciclib plus fulvestrant vs fulvestrant alone for HR+, HER2- advanced breast cancer following progression on a prior CDK4/6 inhibitor plus endocrine therapy: Primary outcome of the phase 3 postMONARCH trial. Journal of Clinical Oncology 2024;42:LBA1001-LBA
(10) Mayer EL, Wander SA, Regan MM, DeMichele A, Forero-Torres A, Rimawi MF, et al. Palbociclib after CDK and endocrine therapy (PACE): A randomized phase II study of fulvestrant, palbociclib, and avelumab for endocrine pre-treated ER+/HER2- metastatic breast cancer. Journal of Clinical Oncology 2018;36:TPS1104-TPS
(11) Llombart-Cussac A, Harper-Wynne C, Perello A, Hennequin A, Fernandez A, Colleoni M, et al. Second-line endocrine therapy (ET) with or without palbociclib (P) maintenance in patients (pts) with hormone receptor-positive (HR[+])/human epidermal growth factor receptor 2-negative (HER2[-]) advanced breast cancer (ABC): PALMIRA trial. Journal of Clinical Oncology 2023;41:1001-
Reviewer #1 (Public review):
Summary:
The authors study the effect of the addition of a synthetic amphiphile on the gating mechanisms of the mechano-sensitive channel MscL. They observe that the amphiphile reduces the membrane stretching and bending moduli, and increases the channel activation pressure. They then conclude that gating is sensitive to these two membrane parameters. This is explained by the effect of the amphiphile on the so-called membrane interfacial tension.
Strengths:
The major strength is that the authors found a way to tune the membrane's mechanical properties in a controlled manner, and find a progressive change of the suction pressure at which MscL gates. If analysed thoroughly, these results could give valuable information.
Weaknesses:
The weakness is the analysis and the discussion. I would like to have answers to some basic questions.
(1) The explanation of the phenomenon involves a difference between interfacial tension and tension, without the difference between these being precisely defined. In the caption of Figure 4, one can read "Under tension, the PEO groups adsorb to the bilayer, suggesting adsorption is a thermodynamically favorable process that lowers the interfacial tension." What does this mean? Under what tension is the interfacial tension lowered? The fact that the system's free energy could be lowered by putting it under mechanical tension would result in a thermodynamic unstable situation. Is this what the authors mean?
(2) From what I understand, a channel would feel the tension exerted by the membrane at its periphery, which is what I would call membrane tension. The fact that polymers may reorganise under membrane stretch to lower the system's free energy would certainly affect the membrane stretching modulus (as measured Figure 2E), but what the channel cares about is the tension (I would say). If the membrane is softer, a larger pipette pressure is required to reach the same level of tension, so it is not surprising that a given channel requires a larger activation pressure in softer membranes. To me, this doesn't mean that the channel feels the membrane stiffness, but rather that a given pressure leads to different tensions (which is what the channel feels) for different stiffnesses.
(3) In order to support the authors' claim, the micropipette suction pressure should be appropriately translated into a membrane tension. One would then see whether the gating tension is affected by the presence of amphiphiles. In the micropipette setup used here, one can derive a relationship between pressure and tension, that involves the shape of the membrane. This relationship is simple (tension=pressure difference times pipette radius divided by 2) only in the limit where the membrane tongue inside the pipette ends with a hemisphere of constant radius independent of the pressure, and the pipette radius is much smaller than the GUV radius. None of these conditions seem to hold in Figure 2C. On the other hand, the authors do report absolute values of tension in the y-axis of Figure 2D. It seems quite straightforward to plot the activation tension (rather than pressure) as a function of the amphiphile volume fraction in Figure 2B. This is what needs to be shown.
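For reference, a minimal sketch of the standard micropipette-aspiration relation under the assumptions the reviewer lists (a hemispherical membrane cap of radius equal to the pipette radius R_p inside the pipette and a spherical vesicle of radius R_v outside); this is the textbook form, not necessarily the one the authors should adopt:

\tau = \frac{\Delta P \, R_p}{2\,\left(1 - R_p/R_v\right)}, \qquad \tau \approx \frac{\Delta P \, R_p}{2} \quad \text{when } R_p \ll R_v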
(4) The discussion needs to be improved. I could not find a convincing explanation of the role of interfacial tension in the discussion. The equation (p.14) distinguishes three contributions, which I understand to be (i) an elastic membrane deformation such as hydrophobic mismatch or other short-range effects, (ii) the protein conformation energy, and (iii) the work done by membrane tension. Apparently, the latter is where the effect is (which I agree with), but how this consideration leads to a gating energy difference (between lipid only and modified membrane) proportional to the interfacial tension is completely obscure (if not wrong).
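As a point of reference for this discussion, a generic decomposition consistent with the three contributions the reviewer describes might read (this is an assumption about the form of the equation on p.14, not a quotation of it):

\Delta G_{\mathrm{gating}} = \Delta G_{\mathrm{deformation}} + \Delta G_{\mathrm{protein}} - \tau \, \Delta A

where \Delta A is the in-plane area change of the channel upon opening and \tau\,\Delta A is the work done by membrane tension.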
(5) I am rather surprised at the very small values of the stretching and bending moduli found at high volume fraction. These quantities are obtained by fitting the stress-strain relationship (Figure 2D). Such a plot should be shown for all amphiphile volume fractions, so one can assess the quality of the fits.
LSM980
DOI: 10.1038/s41440-024-02082-y
Resource: ZEISS LSM 980 with Airyscan 2 Microscope (RRID:SCR_025048)
Curator: @inessasarian
SciCrunch record: RRID:SCR_025048
CRL-1927 ™, ATCC
DOI: 10.1038/s41440-024-02082-y
Resource: (ATCC Cat# CRL-1927, RRID:CVCL_5368)
Curator: @inessasarian
SciCrunch record: RRID:CVCL_5368
RRID:Addgene_81073
DOI: 10.1038/s41440-024-02082-y
Resource: RRID:Addgene_81073
Curator: @scibot
SciCrunch record: RRID:Addgene_81073
RRID:CVCL_6911
DOI: 10.1186/s12885-025-13544-y
Resource: (ATCC Cat# PTA-5077, RRID:CVCL_6911)
Curator: @scibot
SciCrunch record: RRID:CVCL_6911
RRID:AB_2934013
DOI: 10.1038/s42255-024-01177-7
Resource: (Cell Signaling Technology Cat# 91131, RRID:AB_2934013)
Curator: @scibot
SciCrunch record: RRID:AB_2934013
RRID:SCR_014199
DOI: 10.1038/s41598-024-80889-y
Resource: Adobe Photoshop (RRID:SCR_014199)
Curator: @scibot
SciCrunch record: RRID:SCR_014199
SCR_013672
DOI: 10.1038/s41598-024-80889-y
Resource: ZEISS ZEN Microscopy Software (RRID:SCR_013672)
Curator: @scibot
SciCrunch record: RRID:SCR_013672
RRID:SCR_007369
DOI: 10.1038/s41598-024-80889-y
Resource: Image-Pro Plus (RRID:SCR_007369)
Curator: @scibot
SciCrunch record: RRID:SCR_007369
RRID:IMSR_JAX:000664
DOI: 10.1038/s41598-024-80889-y
Resource: RRID:IMSR_JAX:000664
Curator: @scibot
SciCrunch record: RRID:IMSR_JAX:000664
RRID:AB_2536180
DOI: 10.1007/s00429-025-02893-w
Resource: (Thermo Fisher Scientific Cat# A-31570, RRID:AB_2536180)
Curator: @scibot
SciCrunch record: RRID:AB_2536180
Author response:
The following is the authors’ response to the previous reviews.
Reviewer #1 (Public review):
Summary:
In the manuscript, the authors describe a new pipeline to measure changes in vasculature diameter upon optogenetic stimulation of neurons. The work is useful for better understanding the hemodynamic response at a network/graph level.
Strengths:
The manuscript provides a pipeline that allows users to detect changes in vessel diameter and, simultaneously, to locate the neurons driven by stimulation.
The resulting data could provide interesting insights into the graph level mechanisms of regulating activity dependent blood flow.
Weaknesses:
(1) The manuscript contains (new) wrong statements and (still) wrong mathematical formulas.
The symbols in these formulas have been updated to disambiguate them, and the accompanying statements have been adjusted for clarity.
(2) The manuscript does not compare results to existing pipelines for vasculature segmentation (opensource or commercial). Comparing performance of the pipeline to a random forest classifier (illastik) on images that are not preprocessed (i.e. corrected for background etc.) seems not a particularly useful comparison.
We have now included comparisons to Imaris (a commercial package) for segmentation and VesselVio (an open-source tool) for graph extraction.
For the ilastik comparison, the images were preprocessed prior to ilastik segmentation, specifically via intensity normalization.
Example segmentations utilizing Imaris have now been included. Imaris leaves gaps and discontinuities in the segmentation masks, as shown in Supplementary Figure 10. The Imaris segmentation masks also tend to be more circular in cross-section despite irregularities on the surface of the vessels observable in the raw data and identified in manual segmentation. This approach also requires days to months of manual annotation per image stack.
“Comparison with commercial and open-source vascular analysis pipelines
To compare our results with those achievable on these data with other pipelines for segmentation and graph network extraction, we compared segmentation results qualitatively with Imaris version 9.2.1 (Bitplane) and vascular graph extraction with VesselVio [1]. For the Imaris comparison, three small volumes were annotated by hand to label vessels. Example slices of the segmentation results are shown in Supplementary Figure 10. Imaris tended to either over- or under-segment vessels, disregard fine details of the vascular boundaries, and produce jagged edges in the vascular segmentation masks. In addition to these issues with segmentation mask quality, manual segmentation of a single volume took days for a rater to annotate. To compare to VesselVio, binary segmentation masks (one before and one after photostimulation) generated with our deep learning models were loaded into VesselVio for graph extraction, as VesselVio does not have its own method for generating segmentation masks. This also facilitates a direct comparison of the benefits of our graph extraction pipeline to VesselVio. Visualizations of the two graphs are shown in Supplementary Figure 11. VesselVio produced many hairs at both time points, and the total number of segments varied considerably between the two sequential stacks: while the baseline scan resulted in 546 vessel segments, the second scan had 642 vessel segments. These discrepancies are difficult to resolve in post-processing and preclude a direct comparison of individual vessel segments across time. As the segmentation masks we used in graph extraction derive from the union of multiple time points, we could better trace the vasculature and identify more connections in our extracted graph. Furthermore, VesselVio relies on the distance transform of the user-supplied segmentation mask to estimate vascular radii; consequently, these estimates are highly susceptible to variations in the input segmentation masks. We repeatedly saw slight variations between boundary placements of all of the models we utilized (ilastik, UNet, and UNETR) and those produced by raters. Our pipeline mitigates this segmentation method bias by using intensity gradient-based boundary detection from centerlines in the image (as opposed to using the distance transform of the segmentation mask, as in VesselVio).”
(3) The manuscript does not clearly visualize performance of the segmentation pipeline (e.g. via 2d sections, highlighting also errors etc.). Thus, it is unclear how good the pipeline is, under what conditions it fails or what kind of errors to expect.
Following the reviewer's comment, 2D slices have been added in Supplementary Figure 4.
(4) The pipeline is not fully open-source due to use of matlab. Also, the pipeline code was not made available during review contrary to the authors claims (the provided link did not lead to a repository). Thus, the utility of the pipeline was difficult to judge.
All code has been uploaded to Github and is available at the following location: https://github.com/AICONSlab/novas3d
The Matlab code for skeletonization is better at preserving centerline integrity during the pruning of hairs from centerlines than the currently available open-source methods.
- Generalizability: The authors addressed the point of generalizability by applying the pipeline to other data sets. This demonstrates that their pipeline can be applied to other data sets and makes it more useful. However, from the visualizations it is difficult to assess the performance of the pipeline, where the pipeline fails, etc. The 3D visualizations are not particularly helpful in this respect. In addition, the Dice measure seems quite low, indicating that roughly 20-40% of voxels do not overlap between the inferred and ground-truth segmentations. I did not notice this high discrepancy earlier. A thorough discussion of the errors appearing in the segmentation pipeline would be necessary in my view to better assess the quality of the pipeline.
2D slices from the additional datasets have been added in the Supplementary Figure 13 to aid in visualizing the models’ ability to generalize to other datasets.
The Dice range we report (0.7-0.8) is good when compared to those (0.56-0.86) of 3D segmentations of large microscopy datasets [2], [3], [4], [5], [6]. Furthermore, we had two additional raters segment three images from the original training set. We found that the raters had a mean intraclass correlation coefficient (ICC) of 0.73 [7]. Our model performed at or above this baseline on unseen data, with Dice scores from our generalizability tests on C57 mice and Fischer rats on par with or higher than 0.73.
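For context on how the reported overlap values are computed, a minimal sketch of the Dice coefficient on binary 3D masks (illustrative only; this is not the pipeline's actual implementation, and the function name is an assumption):

import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    # Dice = 2|A & B| / (|A| + |B|) for binary volumes A and B.
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denominator = pred.sum() + truth.sum()
    return 2.0 * intersection / denominator if denominator > 0 else 1.0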
Reviewer #2 (Public review):
The authors have addressed most of my concerns sufficiently. There are still a few serious concerns I have. Primarily, the temporal resolution of the technique still makes me dubious about nearly all of the biological results. It is good that the authors have added some vessel diameter time courses generated by their model. But I still maintain that data sampling every 42 seconds - or even 21 seconds - is problematic. First, the evidence for long vascular responses is lacking. The authors cite several papers:
Alarcon-Martinez et al. 2020 show and explicitly state that their responses (stimulus-evoked) returned to baseline within 30 seconds. The responses to ischemia are long lasting but this is irrelevant to the current study using activated local neurons to drive vessel signals.
Mester et al. 2019 show responses that all seem to return to baseline by around 50 seconds post-stimulus.
In Mester et al. 2019, diffuse stimulation with blue light showed a return to baseline around 50 seconds post-stimulus (cf. Figures 1E, 2C, and 2D). However, focal stimulations, where the stimulation light is raster-scanned over a small region in the field of view, show longer-lasting responses (cf. Figure 4) that have not returned to baseline by 70 seconds post-stimulus [8]. Alarcon-Martinez et al. do report that their responses return to baseline within 30 seconds; however, their physiological stimulation may lead to different neuronal and vessel response kinetics than those elicited by the optogenetic stimulation in the current work.
O'Herron et al. 2022 and Hartmann et al. 2021 use opsins expressed in vessel walls (not neurons as in the current study) and directly constrict vessels with light. So this is unrelated to neuronal activity-induced vascular signals in the current study.
We agree that optogenetic activation of vessel-associated cells is distinct from optogenetic activation of neurons, but we do expect the effects of such perturbations on the vasculature to have some commonalities.
There are other papers including Vazquez et al 2014 (PMID: 23761666) and Uhlirova et al 2016 (PMID: 27244241) and many others showing optogenetically-evoked neural activity drives vascular responses that return back to baseline within 30 seconds. The stimulation time and the cell types labeled may be different across these studies which can make a difference. But vascular responses lasting 300 seconds or more after a stimulus of a few seconds are just not common in the literature and so are very suspect - likely at least in part due to the limitations of the algorithm.
The photostimulation in Vazquez et al. 2014 used diffuse photostimulation through a fiberoptic probe, similar to Mester et al. 2019, as opposed to the raster-scanning focal stimulation we used in this study and in the study by Mester et al. 2019, where we observed focal photostimulation to elicit vascular responses lasting longer than a minute. Uhlirova et al. 2016 used photostimulation powers between 0.7 and 2.8 mW, likely lower than our 4.3 mW/mm2 photostimulation. Further, even with focal photostimulation, we see a light-intensity dependence of the duration of the vascular responses. Indeed, in Supplementary Figure 2, 1.1 mW/mm2 photostimulation leads to briefer dilations/constrictions than does 4.3 mW/mm2; the 1.1 mW/mm2 responses are in line, duration-wise, with those in Uhlirova et al. 2016.
Critically, as per Supplementary Figure 2, the analysis of the experimental recordings acquired at 3-second temporal resolution likewise showed responses in many vessels lasting for tens of seconds, and even hundreds of seconds in some vessels.
Another major issue is that the time courses provided show that the same vessel constricts at certain points and dilates later. So where in the time course the data is sampled will have a major effect on the direction and amplitude of the vascular response. In fact, I could not find how the "response" window is calculated. Is it from the first volume collected after the stimulation - or an average of some number of volumes? But clearly down-sampling the provided data to 42 or even 21 second sampling will lead to problems. If the major benefit to the field is the full volume over large regions that the model can capture and describe, there needs to be a better way to capture the vessel diameter in a meaningful way.
In the main experiment (i.e., excluding the additional experiments presented in Supplementary Figure 2, which were collected over a limited FOV at 3 s per stack), we collected one stack every 42 seconds. The first slice of the volume starts following the photostimulation, and the last slice finishes at 42 seconds. Each slice takes ~0.44 seconds to acquire. The data analysis pipeline (as demonstrated by Supplementary Figure 2) is not in any way limited to data acquired at this temporal resolution and - provided a reasonable signal-to-noise ratio (cf. Figure 5) - is applicable, as is, to data acquired at much higher sampling rates.
It still seems possible that if responses are bi-phasic, then depth dependencies of constrictors vs dilators may just be due to where in the response the data are being captured - maybe the constriction phase is captured in deeper planes of the volume and the dilation phase more superficially. This may also explain why nearly a third of vessels are not consistent across trials - if the direction the volume was acquired is different across trials, different phases of the response might be captured.
Alternatively, like neuronal responses to physiological stimuli, the vascular responses elicited by increases in neuronal activity may themselves be variable in both space and time.
I still have concerns about other aspects of the responses but these are less strong. Particularly, these bi-phasic responses are not something typically seen and I still maintain that constrictions are not common. The authors are right that some papers do show constriction. Leaving out the direct optogenetic constriction of vessels (O'Herron 2022 & Hartmann 2021), the Alarcon-Martinez et al. 2020 paper and others such as Gonzales et al 2020 (PMID: 33051294) show different capillary branches dilating and constricting. However, these are typically found either with spontaneous fluctuations or due to highly localized application of vasoactive compounds. I am not familiar with data showing activation of a large region of tissue - as in the current study - coupled with vessel constrictions in the same region. But as the authors point out, typically only a few vessels at a time are monitored so it is possible - even if this reviewer thinks it unlikely - that this effect is real and just hasn't been seen.
Uhlirova et al. 2016 (PMID: 27244241) observed biphasic responses in the same vessel with optogenetic stimulation in anesthetized and unanesthetized animals (cf Fig 1b and Fig 2, and section “OG stimulation of INs reproduces the biphasic arteriolar response”). Devor et al. (2007) and Lindvere et al. (2013) also reported on constrictions and dilations being elicited by sensory stimuli.
I also have concerns about the spatial resolution of the data. It looks like the data in Figure 7 and Supplementary Figure 7 have a resolution of about 1 micron/pixel. It isn't stated so I may be wrong. But detecting changes of less than 1 micron, especially given the noise of an in vivo prep (brain movement and so on), might just be noise in the model. This could also explain constrictions as just spurious outputs in the model's diameter estimation. The high variability in adjacent vessel segments seen in Figure 6C could also be explained the same way, since these also seem biologically and even physically unlikely.
Thank you for your comment. To address this important issue, we performed an additional validation experiment in which we placed a special order of fluorescent beads with a known diameter of 7.32 ± 0.27 µm, imaged them following our imaging protocol, and subsequently used our pipeline to estimate their diameter. Our analysis converged on the manufacturer-specified diameter, estimating it to be 7.34 ± 0.32 µm. The manuscript has been updated to detail this experiment, as below:
Methods section insert
“Second, our boundary detection algorithm was used to estimate the diameters of fluorescent beads of a known radius imaged under similar acquisition parameters. Polystyrene microspheres labelled with Flash Red (Bangs Laboratories, Inc., CAT# FSFR007), with a nominal diameter of 7.32 µm and a specified range of 7.32 ± 0.27 µm as determined by the manufacturer using a Coulter counter, were imaged on the same multiphoton fluorescence microscope set-up used in the experiment (identical light path, resonant scanner, objective, detector, excitation wavelength, and nominal lateral and axial resolutions, with 5x averaging). The images of the beads had a higher SNR than our images of the vasculature, so Gaussian noise was added to the images to degrade the SNR to the same level as that of the blood vessels. The images of the beads were segmented with a threshold, centroids were calculated for individual spheres, and planes with a random normal vector were extracted from each bead and used to estimate the diameter of the beads. The same smoothing and PSF deconvolution steps were applied in this task. We then reported the mean and standard deviation of the distribution of the diameter estimates. A variety of planes were used to estimate the diameters.”
Results Section Insert
“Our boundary detection algorithm successfully estimated the diameter of precisely specified fluorescent beads. The bead images had a signal-to-noise ratio of 6.79 ± 0.16 (about 35% higher than our in vivo images): to match their SNR to that of the in vivo vessel data, following deconvolution, we added Gaussian noise with a standard deviation of 85 SU to the images, bringing the SNR down to 5.05 ± 0.15. The data processing pipeline was kept unaltered except for the bead segmentation, performed via image thresholding instead of our deep learning model (trained on vessel data). The bead boundary was computed following the same algorithm used on vessel data: i.e., by the average of the minimum intensity gradients computed along 36 radial spokes emanating from the centreline vertex in the orthogonal plane. To demonstrate an averaging-induced decrease in the uncertainty of the bead radius estimates on a scale that is finer than the nominal resolution of the imaging configuration, we tested four averaging levels in 289 beads. Three of these averaging levels were lower than that used on the vessels, and one matched that used on the vessels (36 spokes per orthogonal plane and a minimum of 10 orthogonal planes per vessel). As the amount of averaging increased, the uncertainty on the diameter of the beads decreased, and our estimate of the bead's diameter converged upon the manufacturer's Coulter counter-based specifications (7.32 ± 0.27 µm), as tabulated in Table 1.”
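A minimal sketch of the bead-validation steps described above (noise injection, threshold segmentation, and centroid extraction); the function names, threshold value, and usage lines are illustrative assumptions rather than the authors' actual code:

import numpy as np
from scipy import ndimage

def degrade_snr(image, noise_sd, rng=None):
    # Add zero-mean Gaussian noise (in scanner units) to lower the SNR.
    rng = np.random.default_rng() if rng is None else rng
    return image + rng.normal(0.0, noise_sd, size=image.shape)

def bead_centroids(image, threshold):
    # Threshold-segment the beads and return one centroid per connected component.
    mask = image > threshold
    labels, n_beads = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, index=range(1, n_beads + 1))

# Illustrative usage with the 85 SU noise level quoted in the text:
# noisy_stack = degrade_snr(bead_stack, noise_sd=85.0)
# centroids = bead_centroids(noisy_stack, threshold=200.0)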
Reviewer #1 (Recommendations for the authors):
Comments to the authors replies to the reviews:
- Supplementary Figure 13:
As indicated before, the 3D images and their scale make it impossible to judge the quality of the outputs.
As mentioned above, 2D slices have been added to Supplementary Figure 13.
- Supplementary Table 3:
There is a significant increase in the Hausdorff and Mean Surface Distance measures for the new data. Why?
A single vessel being missed by either the rater or the model would significantly affect the Hausdorff distance (HD) and, by extension, the Mean Surface Distance. This is particularly pertinent in the LSFM image, with its much larger FOV and thus the potential for much larger maximum distances resulting from vessels missed in either the prediction or the ground truth data. Large Hausdorff distances may indicate that a vessel was missed in either the ground truth or the segmentation mask.
Of note, these additional datasets were annotated by a different rater than those who labeled the original ground truth data. There is high variability in boundary placement between raters: on a test where three raters segmented the same three images from the original dataset, we computed an ICC of 0.73 across their segmentations. Our model's Dice scores on predictions in out-of-distribution datasets were on par with the inter-rater ICC on the Thy1-ChR2 2PFM data.
- Supplementary Figure 2: The authors provide useful data on the time responses. However, looking at those figures, it is puzzling why certain vessels were selected as responding as there seems almost no change after stimulation. In addition, some of the responses seem to actually start several tens of seconds before the actual stimulus (particularly in A).
Only some traces in C and D (dark blue) seem to be actually responding vessels.
This is not discussed and remains unclear.
Supplementary Figure 2 displays the time courses of vessel calibre for all vessels in the FOV, not just those deemed responders.
The aforementioned effects are due to the loess smoothing filter having been applied across the entire time course in the preliminary version; this has been rectified in the updated figures. In particular, Supplementary Figure 2 has been updated with separate loess smoothing before and after photostimulation. The (pre-stimulation) effect is gone once the loess smoothing has been separated.
- R Point 7: As indicated before and in agreement with the alternative reviewer, the quality of the results in 3d is difficult to judge. No 2d sections that compare 'ground truth' with inferred results are shown in the current manuscript which would enable a much better judgment. The provided video is still 3d and not a video going through 2d slices. Also, in the video the overlap of vasculature and raw data seems to be very good and near 100%, why is the dice measure reported earlier so low ? Is this a particularly good example ?
Some examples, indicating where the pipeline fails (and why) would be helpful to see, to judge its performance better (ideally in 2d slices).
As discussed in the public comments, 2D slices are now included in Supplementary Figures 4 and 13 to facilitate visual assessment. The vessels are long and thin, so slight dilations or constrictions impact the Dice scores without being easily visualizable.
- Author response images 6 and 7: From the presented data, the constrictions measured in the smaller vessels may be a result (at least partly) of noise. This seems to be particularly the case in Author response image 7, left top and bottom, for example. It would be helpful to show the actual estimates of the vessels' radii overlaid on the (raw) images. In some of the pictures the noise level seems to reach higher values than the 10-20% of noise used in the tests by the authors in the revision.
The vessel radii are estimated as averages across all vertices of the individual vessels: it is thus not possible to overlay them meaningfully in 2D slices: in Figure 2B, we do show a rendering of sample vessel-wise radii estimates.
- "We tested the centerline detection in Python, scipy (1.9.3) and Matlab. We found that the Matlab implementation performed better due to its inclusion of a branch length parameter for the identification of terminal branches, which greatly reduced the number of false branches; the Python implementation does not include this feature (in any version) and its output had many more such "hair" artifacts. Clearmap skeletonization uses an algorithm by Palagyi & Kuba(1999) to thin segmentation masks, which does not include hair removal. Vesselvio uses a parallelized version of the scipy implementation of Lee et al. (1994) algorithm which does not do hair removal based on a terminal branch length filter; instead, Vesselvio performs a threshold-based hair removal that is frequently overly aggressive (it removes true positive vessel branches), as highlighted by the authors."
This statement is wrong. The removal of small branches in skeletons is algorithmically independent of the skeletonization algorithm itself. The authors cite a reference concerned with the algorithm they are currently employing for the skeletonization. Careful assessment of that reference shows that this algorithm removes small length branches after skeletonization is performed. This feature is available in open-source packages as well, or could be easily implemented.
We appreciate that skeletonization is distinct from hair removal and have reworded this paragraph for clarity. We are currently working with SciPy developers to implement hair removal in their image processing pipeline so as to render our pipeline fully open-source.
The removal of hairs after skeletonization with length based thresholding leads to the possibility of removing parts of centerlines in the main part of vessels after branch points with hairs. The Matlab implementation does not do this and leaves the main branches intact.
This text has been updated to:
“Hair” segments shorter than 20 μm and terminal on one end were iteratively removed, starting with the shortest hairs and merging the longest hairs at junctions with two terminal branches into the main vessel branch, to reduce false-positive vascular branches and minimize the amount of centerline removed. This iterative hair-removal functionality of the skeletonization algorithm is currently unavailable in Python, but is available in Matlab [9].
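A sketch of length-thresholded "hair" pruning on a skeleton graph, assuming the centerline is represented as a networkx graph whose edges carry a 'length' attribute in micrometres; this illustrates the idea in the updated text and is not the Matlab implementation used by the pipeline:

import networkx as nx

def prune_hairs(skeleton: nx.Graph, max_hair_um: float = 20.0) -> nx.Graph:
    g = skeleton.copy()
    while True:
        # A "hair" is a short edge with at least one terminal (degree-1) endpoint.
        hairs = [(u, v) for u, v, d in g.edges(data=True)
                 if d.get("length", 0.0) < max_hair_um
                 and (g.degree(u) == 1 or g.degree(v) == 1)]
        if not hairs:
            return g
        # Remove the shortest hair first, then re-evaluate the remaining graph.
        u, v = min(hairs, key=lambda e: g.edges[e]["length"])
        g.remove_edge(u, v)
        g.remove_nodes_from([n for n in (u, v) if g.degree(n) == 0])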
- "On the reviewer's comment, we did try inputting normalized images into Ilastik, but this did not improve its results." This is surprising. Reasonable standard preprocessing (e.g. background removal, equalization, and vessel enhancement) would probably restore most of illastik's performance in the indicated panel.
While an improvement may be present in a particular set of images, in our experience the generalizability of such improvements to other patches is often poor, as reflected by the aforementioned results and the widespread uptake of DL approaches to image segmentation. The in vivo datasets also contain artifacts, e.g., from bleeding into the FOV, to which ilastik is highly sensitive. This is an example of noise that is not easily removed by standard preprocessing.
- "Typical pre-processing/standard computer vision techniques with parameter tuning do not generalize on out-of-distribution data with different image characteristics, motivating the shift to DL-based approaches."
I disagree with this statement. DL approaches can generalize typically when trained with sufficient amount of diverse data. However, DL approaches can also fail with new out of distribution data. In that situation they only be 'rescued' via new time intensive data generation and retraining. Simple standard image pre-processing steps (e.g. to remove background or boost vessel structures) have well defined parameter that can be easily adapted to new out of distribution data as clear interpretations are available. The time to adapt those parameters is typically much smaller than retraining of DL frameworks.
We find that standard image processing approaches with parameter tuning work robustly only if fine-tuned on individual images; i.e., the fine-tuning does not generalize across datasets. This approach thus does not scale to experiments yielding large images or to high-throughput experiments. While DL models may not generalize to out-of-distribution data, fine-tuning DL models on a small subset of labels generally produces models superior to parameter tuning, and these can then be applied to entire studies. Moreover, DL fine-tuning is typically efficient due to the very limited labelling and training time required.
- It is still unclear how the authors pipeline performs compared with other (open source or commercially) available pipelines. As indicated before, comparing to illastik, particularly when feeding non preprocessed data, does not seem to be a particularly high bar.
This question has also been raised by the other reviewer who asked to compare to commercially available pipelines.
This question was not answered by the authors; instead, the authors replied by claiming to provide an open-source pipeline. In fact, the use of Matlab in their pipeline means it is not fully open-source either. Moreover, as mentioned before, open-source pipelines for comparison do exist.
As discussed above, the manuscript now includes comparisons to Imaris for segmentation and VesselVio for graph extraction. The pipeline is on GitHub.
-"We agree with the review that this question is interesting; however, it is not addressable using present data: activated neuronal firing will have effects on their postsynaptic neighbors, yet we have no means of measuring the spread of activation using the current experimental model."
Distances to the closest neuron in the manuscript are measured without checking if it's active. Thus, distances to the first set of n neurons could be measured in the same way, ignoring activation effects.
Shorter distances to an entire ensemble of neurons would still be (more) informative of metabolic demands.
This could indeed be done within the existing framework. The connected-components-3d package can be used to extract individual neurons in the FOV from the neuron segmentation mask. Each neuron could then have its distance calculated to each point on the vessel centerlines.
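A hedged sketch of that approach, using the connected-components-3d package (cc3d) to label neurons and a Euclidean distance transform to obtain each neuron centroid's distance to the nearest centerline voxel; the array names and the centroid-based distance are illustrative assumptions, not the authors' implementation:

import numpy as np
import cc3d
from scipy import ndimage

def neuron_to_centerline_distances(neuron_mask, centerline_mask, voxel_size_um):
    # Distance (in micrometres) from every voxel to the nearest centerline voxel.
    dist_map = ndimage.distance_transform_edt(
        ~centerline_mask.astype(bool), sampling=voxel_size_um)
    # Label each neuron as a separate connected component.
    labels = cc3d.connected_components(neuron_mask.astype(np.uint8))
    n_neurons = int(labels.max())
    centroids = ndimage.center_of_mass(
        neuron_mask, labels, index=range(1, n_neurons + 1))
    # Look up the distance map at the (rounded) centroid of each neuron.
    idx = tuple(np.round(np.array(centroids)).astype(int).T)
    return dist_map[idx]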
- model architecture:
It is unclear from the description if any positional encoding was used for the image patches.
It is unclear if the architecture/pipeline can handle arbitrary volume sizes or is trained on a fixed volume shape. In the latter case, how is the pipeline applied?
The model includes positional encoding, as described in Hatamizadeh et al. 2021.
The model can be applied to images of any size, as demonstrated on larger images in Supplementary Figure 9 and on smaller images in Supplementary Figure 2. The pipeline is applied in the same way. It will read in the size of an input image and output an image of the same size.
- transformer models often show better results when using a learning rate scheduler that adjusts the learning rate (typically with up and down ramps). Did the authors test such approaches?
We did not use a learning rate scheduler, as we found we were getting good results without using one.
- formula (4): The 95th percentile of two numbers is the max, and thus (5) is certainly not what the HD95 metric is. The formula is simply wrong as displayed.
Thank you. The formula has been updated.
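For reference, one standard definition of the 95th-percentile Hausdorff distance between boundary point sets X and Y, stated here as an assumption about the intended metric rather than a quotation of the updated formula (P_95 denotes the 95th percentile over the indicated set):

\mathrm{HD}_{95}(X, Y) = \max\left( \mathrm{P}_{95}\{\min_{y \in Y} d(x, y) : x \in X\},\; \mathrm{P}_{95}\{\min_{x \in X} d(x, y) : y \in Y\} \right)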
- formula (5): formula (5) is certainly wrong: n_X, n_y are either integer numbers, as indicated by the sum indices, or sets, when used in the distances, but they cannot be both at the same time.
Thank you for your comment. The formula has been updated.
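Likewise, one common definition of the (average symmetric) Mean Surface Distance, with |X| and |Y| the numbers of boundary points in each set (again an assumption about the intended metric rather than the authors' exact formula):

\mathrm{MSD}(X, Y) = \frac{1}{|X| + |Y|}\left( \sum_{x \in X} \min_{y \in Y} d(x, y) + \sum_{y \in Y} \min_{x \in X} d(x, y) \right)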
- The statement:
"this functionality of the skeletonization algorithm is currently unavailable in any python implementation, but is available in Matlab [56]."
is not correct (see reply above)
Please see the response above. This text has been updated to:
“Hair” segments shorter than 20 μm and terminal on one end were iteratively removed, starting with the shortest hairs and merging the longest hairs at junctions with two terminal branches into the main vessel branch, to reduce false-positive vascular branches and minimize the amount of centerline removed. This iterative hair-removal functionality of the skeletonization algorithm is currently unavailable in Python, but is available in Matlab [9].
- the centerline extraction is performed after taking the union of smoothed masks. The union operation can induce novel 'irregular' boundaries that degrade skeletonization performance. I would expect to apply smoothing after the union?
Indeed the images were smoothed via dilation after taking the union, as described in the previous set of responses to private comments.
- "The radius estimate defined the size of the Gaussian kernel that was convolved with the image to smooth the vessel: smaller vessels were thus convolved with narrower kernels."
It's unclear what image were filtered ?
We have updated this text for clarity:
The radius estimate defined the size of the Gaussian kernel that was convolved with the 2D image slice to smooth the vessel: smaller vessels were thus convolved with narrower kernels.
- Was deconvolution on the raw images applied or after Gaussian filtering ?
The deconvolution was applied before Gaussian filtering.
- ",we extracted image intensities in the orthogonal plane from the deconvolved raw registered image. A 2D Gaussian kernel with sigma equal to 80% of the estimated vessel-wise radius was used to low-pass filter the extracted orthogonal plane image and find the local signal intensity maximum searching, in 2D, from the center of the image to the radius of 10 pixels from the center."
Would it not be better to filter the 3d image before extracting a 2d plane and filter then ?
That could be done, but would incur a significant computational speed penalty. 2D convolutions are faster, and produced excellent accuracy when estimating radii in our bead experiment.
What algorithm was used to obtain the 2D images?
The 2D images were obtained using scipy.ndimage.map_coordinates.
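A minimal sketch of that step — sampling a plane orthogonal to the local centreline tangent with scipy.ndimage.map_coordinates; the basis-vector construction, plane size, and sampling step are illustrative assumptions rather than the pipeline's exact parameters:

import numpy as np
from scipy import ndimage

def orthogonal_plane(volume, centre, tangent, half_size=20, step=1.0):
    # Build two unit vectors spanning the plane orthogonal to the tangent.
    tangent = tangent / np.linalg.norm(tangent)
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, tangent)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(tangent, helper)
    u /= np.linalg.norm(u)
    v = np.cross(tangent, u)
    # Sample a (2*half_size + 1)^2 grid of coordinates in that plane.
    offsets = np.arange(-half_size, half_size + 1) * step
    grid_u, grid_v = np.meshgrid(offsets, offsets, indexing="ij")
    coords = (centre[:, None, None]
              + u[:, None, None] * grid_u
              + v[:, None, None] * grid_v)
    # Trilinear interpolation of the volume at the plane coordinates.
    return ndimage.map_coordinates(volume, coords, order=1, mode="nearest")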
- Figure 2: H is this the filtered image or the raw data ?
Panel H is raw data.
- It would be good to see a few examples of the raw data overlaid with the radial estimates to evaluate the approach (beyond the example in K).
Additional examples are shown in Figure 5.
- Figure 2 K: Why are boundary points greater than 2 standard deviations away from the mean excluded ?
They are excluded to account for irregularities as vessels approach junctions [10], [11].
- Figure 2 L: what exactly is plotted here ? What are vertex wise changes, is that the difference between the minimum and maximum of all the detected radii for a single vertex? Why do some vessels (red) show high values consistently throughout the vessel ?
Figure 2L displays the change in the radius at each vertex in this FOV following photostimulation, relative to baseline.
- Assortativity: to calculate the assortativity, are radius changes binned in any form to account for the fact that otherwise, $e_{xy}$ and related measures will be likely be based on single data points?
Assortativity is not calculated from single data points. It can be calculated either by binning node attributes into categories (e.g., classifying a vessel as a constrictor, dilator, or non-responder) or by computing it on scalar values (e.g., the average radius change across a vessel segment).
We calculated the assortativity using scalar values.
In both cases, all nodes are used and the correlation is computed between each node's attribute and those of its neighbours; binning the value at a given node does not affect the number of nodes in the graph.
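A minimal sketch of the scalar-attribute computation, expressed as a Pearson correlation of the attribute values across the two endpoints of every edge (networkx's numeric assortativity coefficient follows the same idea); the attribute name is an illustrative assumption:

import numpy as np
import networkx as nx

def scalar_assortativity(graph: nx.Graph, attr: str = "radius_change") -> float:
    # Collect the attribute value at both endpoints of every edge, in both
    # orientations, so the measure is symmetric for an undirected graph.
    x, y = [], []
    for u, v in graph.edges():
        x.extend([graph.nodes[u][attr], graph.nodes[v][attr]])
        y.extend([graph.nodes[v][attr], graph.nodes[u][attr]])
    return float(np.corrcoef(x, y)[0, 1])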
- "Ilastik tended to over-segment vessels, i.e. the model returned numerous false positives, having a high recall (0.89{plus minus}0.19) but low precision (0.37{plus minus}0.33) (Figure 3, Supplementary Table 3)."
As indicated before, and looking at Figure 4, the over-segmentation seems to be due to high background. A suggested preprocessing step on the raw images to remove background could have avoided this.
The images were normalized in preprocessing.
- Figure 4: The 3d panels are not much easier to read in the revised version. As suggested by other reviewers, 2d sections indicating the differences and errors would be much more helpful to judge the pipelines quality more appropriately.
As discussed above, 2D sections are now available in a supplementary figure.
- Figure 3: What would be the Dice score (and other measures) between two ground truths extracted from annotations by two humans (assisted, e.g., by ilastik)?
Two additional human raters annotated the images. We observed an ICC of 0.73 across a total of three raters on the three images.
- Figure 5: The authors only provide the absolute value of SU for the sigma noise levels. This only has meaning when compared to the mean or median SU of the images. In the text the maximal intensity of 1023 SU is mentioned, but what are those values in images with weaker/smaller vessels (as provided in the constriction examples in the revision)?
I am unclear why this validation figure should be part of the main manuscript while generalization performance is left out.
The manuscript has been updated with the mean SNR value of 5.05 ± 0.15 to provide context for the quality of our images.
Bibliography
(1) J. R. Bumgarner and R. J. Nelson, “Open-source analysis and visualization of segmented vasculature datasets with VesselVio,” Cell Rep. Methods, vol. 2, no. 4, Apr. 2022, doi: 10.1016/j.crmeth.2022.100189.
(2) G. Tetteh et al., “DeepVesselNet: Vessel Segmentation, Centerline Prediction, and Bifurcation Detection in 3-D Angiographic Volumes,” Front. Neurosci., vol. 14, Dec. 2020, doi: 10.3389/fnins.2020.592352.
(3) N. Holroyd, Z. Li, C. Walsh, E. Brown, R. Shipley, and S. Walker-Samuel, “tUbe net: a generalisable deep learning tool for 3D vessel segmentation,” Jul. 24, 2023, bioRxiv. doi: 10.1101/2023.07.24.550334.
(4) W. Tahir et al., “Anatomical Modeling of Brain Vasculature in Two-Photon Microscopy by Generalizable Deep Learning,” BME Front., vol. 2020, p. 8620932, Dec. 2020, doi: 10.34133/2020/8620932.
(5) R. Damseh, P. Delafontaine-Martel, P. Pouliot, F. Cheriet, and F. Lesage, “Laplacian Flow Dynamics on Geometric Graphs for Anatomical Modeling of Cerebrovascular Networks,” ArXiv191210003 Cs Eess Q-Bio, Dec. 2019, Accessed: Dec. 09, 2020. [Online]. Available: http://arxiv.org/abs/1912.10003
(6) T. Jerman, F. Pernuš, B. Likar, and Ž. Špiclin, “Enhancement of Vascular Structures in 3D and 2D Angiographic Images,” IEEE Trans. Med. Imaging, vol. 35, no. 9, pp. 2107–2118, Sep. 2016, doi: 10.1109/TMI.2016.2550102.
(7) T. B. Smith and N. Smith, “Agreement and reliability statistics for shapes,” PLOS ONE, vol. 13, no. 8, p. e0202087, Aug. 2018, doi: 10.1371/journal.pone.0202087.
(8) J. R. Mester et al., “In vivo neurovascular response to focused photoactivation of Channelrhodopsin-2,” NeuroImage, vol. 192, pp. 135–144, May 2019, doi: 10.1016/j.neuroimage.2019.01.036.
(9) T. C. Lee, R. L. Kashyap, and C. N. Chu, “Building Skeleton Models via 3-D Medial Surface Axis Thinning Algorithms,” CVGIP Graph. Models Image Process., vol. 56, no. 6, pp. 462–478, Nov. 1994, doi: 10.1006/cgip.1994.1042.
(10) M. Y. Rennie et al., “Vessel tortuousity and reduced vascularization in the fetoplacental arterial tree after maternal exposure to polycyclic aromatic hydrocarbons,” Am. J. Physiol.-Heart Circ. Physiol., vol. 300, no. 2, pp. H675–H684, Feb. 2011, doi: 10.1152/ajpheart.00510.2010.
(11) J. Steinman, M. M. Koletar, B. Stefanovic, and J. G. Sled, “3D morphological analysis of the mouse cerebral vasculature: Comparison of in vivo and ex vivo methods,” PLOS ONE, vol. 12, no. 10, p. e0186676, Oct. 2017, doi: 10.1371/journal.pone.0186676.
Author response:
The following is the authors’ response to the original reviews.
Public Reviews:
Reviewer #1 (Public review):
Summary:
In this work, the authors present a cornucopia of data generated using deep mutational scanning (DMS) of variants in MET kinase, a protein target implicated in many different forms of cancer. The authors conducted a heroic amount of deep mutational scanning, using computational structural models to augment the interpretation of their DMS findings.
Strengths:
This powerful combination of computational models, experimental structures in the literature, dose-response curves, and DMS enables them to identify resistance and sensitizing mutations in the MET kinase domain, as well as consider inhibitors in the context of the clinically relevant exon-14 deletion. They then try to use the existing language model ESM1b augmented by an XGBoost regressor to identify key biophysical drivers of fitness. The authors provide an incredible study that has a treasure trove of data on a clinically relevant target that will appeal to many.
We thank Reviewer 1 for their generous assessment of our manuscript!
Weaknesses:
However, the authors do not equally consider alternative possible mechanisms of resistance or sensitivity beyond the impact of mutation on binding, even though the measure used to discuss resistance and sensitivity is ultimately a resistance score derived from the increase or decrease of the presence of a variant during cell growth.
For this resistance screen, Ba/F3 was a carefully chosen cellular selection system due to its addiction to exogenously provided IL-3, its undetectable expression of endogenous RTKs (including MET), and its dependence on kinase transgenes to promote signaling and growth under IL-3 withdrawal. Together, this allows for the readout of variants that alter kinase-driven proliferation without the caveat of bypass resistance. In our previous phenotypic screen (Estevam et al., 2024, eLife), we also carefully examined the impact of all possible MET kinase domain mutations both in the presence and absence of IL-3 withdrawal, but without inhibitors. There, we identified a small group of mutations associated with gain-of-function behavior located at conserved regulatory motifs outside of the catalytic site, yet these mutations were largely sensitive to inhibitors within this screen.
Here, the majority of resistance mutations were located at or near the ATP-binding pocket, suggesting an impact on resistance through direct drug interactions. However, there was also a small population of distal mutations that met our statistical definitions of resistance. Within the crizotinib selection, sites such as T1293, L1272, and T1261, amongst others, demonstrated resistance profiles but were located in the C-lobe, away from the catalytic site. While we did not experimentally validate these specific mutations, it is possible that mutations that do not contact the drug directly instead promote resistance through allosteric or conformational mechanisms that preserve kinase activity and signaling. Indeed, our ML framework explicitly included conformational and stability effects as significant in improving predictions.
We would be happy to further discuss any specific alternative resistance mechanisms Reviewer 1 has in mind! Thank you for highlighting this!
There are also points of discussion and interpretation that rely heavily on docked models of kinase-inhibitor pairs without considering alternative binding modes or providing any validation of the docked pose. Lastly, the use of ESM1b is powerful but constrained heavily by the limited structural training data provided, which can lead to misleading interpretations without considering alternative conformations or poses.
The majority of our interpretations are grounded in the X-ray structures of WT MET bound to the inhibitors studied (or close analogs). The use of docked models (note: docking was onto mutant structures predicted by UMol, not ESM, which can capture conformational changes) is primarily in the ML part of the manuscript. Indeed, in our models, conformational and binding-mode changes are taken into account as features (see Ligand RMSD, Residue RMSD). There are certainly improved methods (AF3 variants) emerging that might have even more power to model these changes, but they come with greater computational costs and are something we will be evaluating in the future.
We added to the results section: “While our features can account for some changes in MET-mutant conformation and altered inhibitor binding pose, the prediction of these aspects can likely be improved with new methods.”
Reviewer #2 (Public review):
Summary:
This manuscript provides a comprehensive overview of potential resistance mutations within MET Receptor Tyrosine Kinase and defines how specific mutations affect different inhibitors and modes of target engagement. The goal is to identify inhibitor combinations with the lowest overlap in their sensitivity to resistant mutations and determine if certain resistance mutations/mechanisms are more prevalent for specific modes of ATP-binding site engagement. To achieve this, the authors measured the ability of ~6000 single mutants of MET's kinase domain (in the context of a cytosolic TPR fusion) to drive IL-3-independent proliferation (used as a proxy for activity) of Ba/F3 cells (deep mutational profiling) in the presence of 11 different inhibitors. The authors then used co-crystal and docked structures of inhibitor-bound MET complexes to define the mechanistic basis of resistance and applied a protein language model to develop a predictive model of inhibitor sensitivity/resistance.
Strengths:
The major strengths of this manuscript are the comprehensive nature of the study and the rigorous methods used to measure the sensitivity of ~6000 MET mutants in a pooled format. The dataset generated will be a valuable resource for researchers interested in understanding kinase inhibitor sensitivity and, more broadly, small molecule ligand/protein interactions. The structural analyses are systematic and comprehensive, providing interesting insights into resistance mechanisms. Furthermore, the use of machine learning to define inhibitor-specific fitness landscapes is a valuable addition to the narrative. Although the ESM1b protein language model is only moderately successful in identifying the underlying mechanistic basis of resistance, the authors' attempt to integrate systematic sequence/function datasets with machine learning serves as a foundation for future efforts.
We thank Reviewer 2 for their thoughtful assessment of our manuscript!
Weaknesses:
The main limitation of this study is that the authors' efforts to define general mechanisms between inhibitor classes were only moderately successful due to the challenge of uncoupling inhibitor-specific interaction effects from more general mechanisms related to the mode of ATP-binding site engagement. However, this is a minor limitation that only minimally detracts from the impressive overall scope of the study.
We agree. We have added to the discussion: “A full landscape of mutational effects can help to predict drug response and guide small molecule design to counteract acquired resistance. The ability to define molecular mechanisms towards that goal will likely require more purposefully chosen chemical inhibitors and combinatorial mutational libraries to be maximally informative.”
Reviewer #3 (Public review):
Summary:
In the manuscript 'Mapping kinase domain resistance mechanisms for the MET receptor tyrosine kinase via deep mutational scanning' by Estevam et al., deep mutational scanning is used to assess the impact of ~5,764 mutants in the MET kinase domain on the binding of 11 inhibitors. Analyses were divided by individual inhibitor and by kinase inhibitor subtype (I, II, I 1/2, and III). While a number of mutants were consistent with previous clinical reports, novel potential resistance mutants were also described. This study has implications for the development of combination therapies, namely which combinations of inhibitors to avoid based on overlapping resistance mutant profiles. While one pair of inhibitors with the least overlapping resistance mutation profiles was suggested, this manuscript presents a proof of concept toward a more systematic approach for improved selection of combination therapeutics. Furthermore, in a final part of this manuscript, the data were used to train a machine learning model, the ESM-1b protein language model augmented with an XGBoost regressor framework, which improved predictions of resistance mutations over the initial ESM-1b model.
Strengths:
Overall this paper is a tour-de-force of data collection and analysis to establish a more systematic approach for the design of combination therapies, especially in targeting MET and other kinases, a family of proteins significant to therapeutic intervention for a variety of diseases. The presentation of the work is mostly concise and clear, with thousands of data points presented neatly. The discovery of novel resistance mutants for individual MET inhibitors, for kinase inhibitor subtypes within the context of MET, and of all resistance mutants across inhibitor subtypes for MET has clinical relevance. However, probably the most promising outcome of this paper is the proposal of the inhibitor combination of crizotinib and cabozantinib as the type I and type II inhibitors, respectively, with the least overlapping resistance mutation profiles and therefore potentially the most successful combination therapy for MET. While this specific combination is not necessarily the point, it illustrates a compelling systematic approach for deciding how to proceed in developing combination therapy schedules for kinases. In an insightful final section of this paper, the authors use their data to train a machine learning model, perhaps understanding that performing these experiments for every kinase and every inhibitor could be prohibitive to applying this method in practice.
We thank Reviewer 3 for their assessment of our manuscript (we are very happy to have it described as a tour-de-force!).
Weaknesses:
This paper presents a clear set of experiments with a compelling justification. The content of the paper is overall of high quality. The points below mostly concern clarifications in presentation.
Two places could use more computational experiments and analysis, however. Both are presented as suggestions, but at least a discussion of these topics would improve the overall relevance of this work. In the first case, while the analyses conducted on this dataset were chosen with care to be the most relevant to human health, further analysis of these results and their implications for our understanding of allosteric interactions and their effects on inhibitor binding would be a relevant addition. For example, for any given residue type found to be a resistance mutant, are there consistent amino acid mutations for which a large or small effect is found? Is a mutation from alanine to phenylalanine always deleterious, even though one can assume the exact location of a residue matters significantly? Some of this analysis is done in dividing resistance mutants into those that are near the inhibitor binding site and those that are not, but more analyses of this type could help the reader understand the large amount of data presented here. A mention at least of the existing literature in this area and the presence or absence of trends would be worthwhile. For example, is there any correlation with a simpler metric like the Grantham score for predicting the effects of mutations (in a way, the ESM-1b model is a better version of this, so this is somewhat implicitly discussed)?
Indeed, we experimented with including these types of features in the XGBoost scheme (particularly residue volume change and distance) to augment the predictive power of the ESM model - see Figure 8 - figure supplement 1; however, we did not find them to be significant. Therefore, the signal is likely very small and/or already incorporated into the baseline ESM model.
Indeed, this discussion relates to the second point this manuscript could improve upon: the machine learning section. The main actionable item here is that this results section seems the least polished and could do a better job describing what was done. In the figure it looks like results for certain inhibitors were held out as test data - was this all mutants for a single inhibitor, or some other scheme? Overall I think the implications of this section could be fleshed out, potentially with more experiments.
Figure 8A and the methods section contain a very detailed explanation of test data. We have thought about it and do not have any easy path to improve the description, which we reproduce here:
“Experimental fitness scores of MET variants in the presence of DMSO and AMG458 were ignored in model training and testing since having just one set of data for a type I ½ inhibitor and DMSO leads to learning by simply memorizing the inhibitor type, without generalizability. The remaining dataset was split into training and test sets to further avoid overfitting (Figure 8A). The following data points were held out for testing - (a) all mutations in the presence of one type I (crizotinib) and one type II (glesatinib analog) inhibitor, (b) 20% of randomly chosen positions (columns) and (c) all mutations in two randomly selected amino acids (rows) (e.g. all mutations to Phe, Ser). After splitting the dataset into train and test sets, the train set was used for XGBoost hyperparameter tuning and cross-validation. For tuning the hyperparameters of each of the XGBoost models, we held out 20% of randomly sampled data points in the training set and used the remaining 80% data for Bayesian hyperparameter optimization of the models with Optuna (Akiba et al., 2019), with an objective to minimize the mean squared error between the fitness predictions on 20% held out split and the corresponding experimental fitness scores. The following hyperparameters were sampled and tuned: type of booster (booster - gbtree or dart), maximum tree depth (max_depth), number of trees (n_estimators), learning rate (eta), minimum leaf split loss (gamma), subsample ratio of columns when constructing each tree (colsample_bytree), L1 and L2 regularization terms (alpha and beta) and tree growth policy (grow_policy - depthwise or lossguide). After identifying the best combination of hyperparameters for each of the models, we performed 10-fold cross validation (with re-sampling) of the models on the full training set. The training set consists of data points corresponding to 230 positions and 18 amino acids. We split these into 10 parts such that each part corresponds to data from 23 positions and 2 amino acids. Then, at each of 10 iterations of cross-validation, models were trained on 9 of 10 parts (207 positions and 16 amino acids) and evaluated on the 1 held out part (23 positions and 2 amino acids). Through this protocol we ensure that we evaluate performance of the models with different subsets of positions and amino acids. The average Pearson correlation and mean squared error of the models from these 10 iterations were calculated and the best performing model out of 8192 models was chosen as the one with the highest cross-validation correlation. The final XGBoost models were obtained by training on the full training set and also used to obtain the fitness score predictions for the validation and test sets. These predictions were used to calculate the inhibitor-wise correlations shown in Figure 8B.“
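For readers who want a concrete picture of this split-and-tune scheme, a minimal sketch is shown below. This is not our actual pipeline: the DataFrame `df`, its column names ("position", "aa", "inhibitor", "fitness"), and the search ranges are illustrative assumptions, and the 10-fold position/amino-acid cross-validation step is omitted for brevity.

```python
# Sketch (assumptions only) of the described train/test split plus Optuna tuning of XGBoost.
import numpy as np
import optuna
import xgboost as xgb
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# `df` is assumed to be a long-format pandas DataFrame with one row per (variant, inhibitor).
TEST_INHIBITORS = {"crizotinib", "glesatinib_analog"}            # held-out drugs (type I / type II)
test_positions = rng.choice(df["position"].unique(),
                            size=int(0.2 * df["position"].nunique()),
                            replace=False)                       # 20% of positions held out
test_aas = rng.choice(df["aa"].unique(), size=2, replace=False)  # 2 amino acids held out

is_test = (df["inhibitor"].isin(TEST_INHIBITORS)
           | df["position"].isin(test_positions)
           | df["aa"].isin(test_aas))
train, test = df[~is_test], df[is_test]
FEATURES = [c for c in df.columns if c not in ("position", "aa", "inhibitor", "fitness")]

def objective(trial):
    # 80/20 split of the training set for Bayesian hyperparameter search
    idx = rng.permutation(len(train))
    cut = int(0.8 * len(train))
    tr, va = train.iloc[idx[:cut]], train.iloc[idx[cut:]]
    params = {
        "booster": trial.suggest_categorical("booster", ["gbtree", "dart"]),
        "max_depth": trial.suggest_int("max_depth", 2, 10),
        "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
        "eta": trial.suggest_float("eta", 1e-3, 0.3, log=True),
        "gamma": trial.suggest_float("gamma", 0.0, 5.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
        "reg_alpha": trial.suggest_float("reg_alpha", 1e-4, 10.0, log=True),    # L1 term
        "reg_lambda": trial.suggest_float("reg_lambda", 1e-4, 10.0, log=True),  # L2 term
        "grow_policy": trial.suggest_categorical("grow_policy", ["depthwise", "lossguide"]),
        "tree_method": "hist",
    }
    model = xgb.XGBRegressor(**params)
    model.fit(tr[FEATURES], tr["fitness"])
    # Objective: mean squared error on the 20% held-out split
    return mean_squared_error(va["fitness"], model.predict(va[FEATURES]))

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
best = xgb.XGBRegressor(**study.best_params, tree_method="hist").fit(train[FEATURES], train["fitness"])
```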
As mentioned in the 'Strengths' section, one of the appealing aspects of this paper is indeed its potential wide applicability across kinases -- could you use this ML model to predict resistance mutants for an entirely different kinase? This doesn't seem far-fetched, and would be an extremely compelling addition to this paper to prove the value of this approach.
This is exactly where we want to go next! But as we see here, it is going to be hard and will require more purposeful selection of chemicals, and likely combinatorial mutations, to be maximally informative (see also the Reviewer 2 response, where we have added text).
Another area in which this paper could improve its clarity is in the description of the caveats of the assay. The exact math used to define resistance mutants and its dependence on the DMSO control is interesting, but it is worth discussing where the failure modes of this procedure might lie. Could it be that the resistance mutants identified in this assay would differ significantly from those found in patients? That the results here are consistent with those seen in the clinic is promising, but discrepancies could remain.
Thank you for pointing this out. The greatest trade-off of probing the intracellular MET kinase (juxtamembrane, kinase domain, C-tail) in the constitutively active TPR system is that while we gain cytoplasmic expression, constitutive oligomerization, and HGF-independent activation, other features like membrane-proximal effects are lost, and the translatability of some mutations in non-proliferative conditions may also be limited. Nevertheless, Ba/F3 cells allow IL-3 withdrawal to serve as an effective readout of transgenic kinase variant effects, due to their undetectable expression of endogenous RTKs and their addiction to exogenous interleukin-3 (IL-3).
In our previous study, we were also interested in comparing the phenotypic results to the available patient populations in cBioPortal. We observed that our DMS captured known oncogenic MET kinase variants, in addition to a population of gain-of-function variants within clinically reported residue positions that have not themselves been clinically reported. Interestingly, the population of possible novel gain-of-function mutant codons was more distant in genetic space (2-3 Hamming distance) from wild type than the clinically reported variant codons (1-2 Hamming distance).
For this inhibitor screen, we also carefully compared previously reported and validated resistance mutations from the referenced publications to those of our inhibitor screen, and observed strong agreement, as noted in the text. While discrepancies could certainly remain, there is precedent for consistency.
Furthermore, a more in-depth discussion of the METΔEx14 results is warranted. For example, why is the DMSO signature in Figure 1 - supplement 4 so different from that of Figure 1?
In our previous study (Estevam et al., 2024), we more directly compared MET and METΔExon14, and while we observed several differences, especially at conserved regulatory motifs, the TPR expression system did not provide a robust differential. Therefore, we hypothesize that a membrane-bound context is likely necessary to obtain a differential that captures juxtamembrane regulatory effects for these two isoforms. For that reason, we did not place heavy emphasis on the differences between MET and METΔExon14 in this study. Nevertheless, we performed a parallel analysis of the METΔExon14 inhibitor DMS and provided all source and analyzed data in our GitHub repository (https://github.com/fraser-lab/MET_kinase_Inhibitor_DMS).
In our analysis of resistance, we used Rosace to score and compare DMSO and inhibitor landscapes. We present the full distribution of raw scores in Figure 1 for each condition. However, to visually highlight resistance mutations as a heatmap, we subtracted the scores of each variant in each inhibitor condition from the raw DMSO score, making the heatmaps in Figure 1 - supplement 4 appear more “blue.”
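For clarity, the arithmetic behind those heatmaps can be sketched as follows; the `scores` table and its column names are hypothetical, and the sign can be flipped to match the DMSO-referenced convention described above.

```python
# Hypothetical long-format table `scores` with columns "variant", "condition", "score".
import pandas as pd

wide = scores.pivot(index="variant", columns="condition", values="score")
# Per-variant difference between each inhibitor condition and the DMSO reference.
delta = wide.drop(columns="DMSO").sub(wide["DMSO"], axis=0)
```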
And finally, there is a lot of emphasis put on the unexpected results of this assay for the tivantinib "type III" inhibitor - could this in fact be because the molecule "is highly selective for the inactive or unphosphorylated form of c-Met" according to Eathiraj et al JBC 2011?
The work presented by Eathiraj et al., JBC 2011, is a key study we reference and is foundational for tivantinib. While the point about tivantinib's selective preference for an inactive conformation is valid, this is also true of type II kinase inhibitors. In our study, regardless of inhibitor conformational preference, tivantinib was the only inhibitor with a landscape nearly identical to DMSO, and it exhibited selection even in the absence of Ba/F3 MET-addiction (Figure 1E). This result is in closer agreement with the MET-agnostic behavior reported by Basilico et al., 2013 and Katayama et al., 2013.
While this paper is crisply written with beautiful figures, the complexity of the data warrants a bit more clarity in how the results are visualized. Namely, clearly highlighting mutants that have been previously reported and those identified by this study across all figures could help significantly in understanding the more novel findings of the work.
To better compare and contrast the novel mutations identified in this study with others, we compiled a list of reported resistance mutations from recent clinical and experimental studies (Pecci et al., 2024; Yao et al., 2023; Bahcall et al., 2022; Recondo et al., 2020; Rotow et al., 2020; Fujino et al., 2019), since, to the best of our knowledge, a direct database with resistance annotations does not exist for MET. In total, this amounted to 31 annotated resistance mutations across crizotinib, capmatinib, tepotinib, savolitinib, cabozantinib, merestinib, and glesatinib, which we have now tabulated in a new figure (Figure 4) and described in new commentary in the main text:
To assess the agreement between our DMS and previously annotated resistance mutations, we compiled a list of reported resistance mutations from recent clinical and experimental studies (Pecci et al., 2024; Yao et al., 2023; Bahcall et al., 2022; Recondo et al., 2020; Rotow et al., 2020; Fujino et al., 2019) (Figure 4A,B). Overall, previously discovered mutations are strongly shifted toward a GOF distribution for the drugs where resistance has been reported from treatment or experiment; in contrast, the distribution is centered around neutral at those sites for other drugs not reported in the literature (Figure 4C). However, even in cases such as L1195V, we observe GOF DMS scores indicative of resistance to previously reported inhibitors. Given this overall strong concordance with prior literature and clinical results, we can also provide hypotheses to clarify the role of mutations that are observed in combination with others. For example, H1094Y is a reported driver mutation that has been linked to resistance in METΔEx14 for glesatinib, either with the secondary L1195V mutation or in isolation (Recondo et al., 2020). However, in our assay H1094Y demonstrated slight sensitivity to glesatinib, suggesting that resistance is linked either to the exon 14 deletion isoform, to the L1195V mutation, or to a cellular factor not modeled well by the Ba/F3 system.
Finally, the potential impacts and follow-ups of this excellent study could be communicated better - it is recommended that the authors better advertise this paper as a resource for the community, both as a dataset and as a proof of concept. In this realm, I would encourage the authors to emphasize the multiple potential uses of this dataset by others to provide answers and insights on a variety of problems.
Please see below
Related to this, the decision to include the METΔEx14 results but not discuss them at all is interesting; do the authors expect future analyses to lead to useful insights? Is it surprising that the trends are broadly the same as in the data discussed?
Our previous paper suggests that Ba/F3 isn't a great model for measuring the differences between MET and METΔEx14, so we have not emphasized these differences beyond pointing to our previous paper. We nonetheless include the full analysis here as a resource. The greatest differences between resistance mutant behaviors would potentially be observed in the full-length, membrane-bound MET and METΔEx14 receptor isoforms. While outside of the scope of this study, there is great potential to use the resistance mutations identified in this study as a filtered group to test and map differential inhibitor sensitivities between receptor isoforms.
And finally it could be valuable to have a small addition of introspection from the authors on how this approach could be altered and/or improved in the future to facilitate the general application of this approach for combination therapies for other targets.
See also reviewer 2 response where we have added text.
Recommendations for the authors:
Reviewer #1 (Recommendations for the authors):
Major points of revision:
(1) It seems like much of the structural interpretation of the inhibitor binding mode, outside of crizotinib binding, appears to come from docked models of the inhibitor to the MET kinase domain. Given the potential variability of the docked structure to the kinase domain, it would be useful for the authors to consider alternative possible binding modes that their docking pipeline may have suggested. It could also be useful to provide some degree of validation or contextualization of their docking models.
All individual figures were very carefully inspected based on existing crystal structures of either the inhibitor or closely related inhibitors (ATP, 3DKC; crizotinib, 2WGJ; tepotinib, 4R1V; tivantinib, 3RHK; AMG-458, 5T3Q; NVP-BVU972, 3QTI; merestinib, 4EEV; savolitinib, 6SDE). In total, four structural interpretations were the result of docking onto reference experimental structures (capmatinib, cabozantinib, glumetinib, glesatinib). As we wrote above, different conformations and binding modes are possible in the predicted mutant structures (which we generated here at scale) and are already included in the ML analysis.
(2) In the first section, the authors classify an inhibitor as type Ia based on docking models, but mention the conflicting literature describing it as type Ib - it would be helpful to provide a contextualization of why this distinction between Ia and Ib matters, and what difference it might make. It would also be useful to know if their docking score only suggested poses compatible with Ia or if other poses were provided as well. Validation using other methods might be beneficial, especially since they acknowledge the conflicting literature for classification. Or, at least, a recontextualization noting that more evidence would be needed.
Kinase inhibitors have several canonical structural definitions on which we base the classifications in this study. Specifically, type I inhibitors are classified in MET by interactions with Y1230, D1228, and K1110, in addition to their conformation in the ATP-binding site. Type I inhibitors are further subdivided as type Ia in MET if they leverage interactions with the solvent front and residue G1163. In the prior literature referenced, tepotinib was classified as type Ib, which would imply it does not have solvent-front interactions, like savolitinib (PDB 6SDE) or NVP-BVU972 (PDB 3QTI). However, in the tepotinib experimental structure (PDB 4R1V), we observed a greater structural resemblance to other type Ia inhibitors as opposed to type Ib (Figure 1 - figure supplement 1b).
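For illustration only, the classification rule described above can be written down as a small sketch; the residue sets and the `contacts` input (residues judged to contact the bound inhibitor) are assumptions for this example, not our analysis code.

```python
# Hypothetical rule-of-thumb classifier following the MET-specific definitions above.
TYPE_I_ANCHORS = {"Y1230", "D1228", "K1110"}   # catalytic-site contacts defining type I
SOLVENT_FRONT = {"G1163"}                      # solvent-front contact distinguishing type Ia

def classify_type_i(contacts: set[str]) -> str:
    """Return a type I subclass label from a set of contacted MET residues."""
    if not TYPE_I_ANCHORS.issubset(contacts):
        return "not type I (by this rule)"
    return "type Ia" if contacts & SOLVENT_FRONT else "type Ib"

# Example: a tepotinib-like contact set including the solvent front -> "type Ia"
print(classify_type_i({"Y1230", "D1228", "K1110", "G1163"}))
```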
(3) The measure used to discuss resistance and sensitivity is ultimately a resistance score derived from the increase or decrease in the presence of a variant during cell growth. This is not a measure of direct binding. It would be helpful if the authors discussed alternative mechanisms through which these variants may impact resistance and/or sensitivity, such as stability, protonation effects, or kinase activity. The score itself may be convolving all of these potential mechanisms to drive the observed GOF and LOF behavior.
See the response to the public review. Indeed, our ML framework explicitly included conformational and stability effects as significant in improving predictions.
(4) While it is promising to try and improve the predictive properties of ESM1b, it is not exactly clear why the authors considered their structural data of 11 inhibitors a sufficient dataset with which to augment the model. It would be useful for the authors to provide some additional context for why they wished to augment ESM1b in particular with their dataset, and provide any metrics indicating that their training data of 11 inhibitors provided an adequate statistical sample.
We don’t understand what this means. Sorry!
(5) The authors use ESM-1b to predict the fitness impact of each mutation and augment it using protein structural data of drug-target interactions. However, using an XGBoost regressor on a single set of 11 kinase-inhibitor interaction pairs is an incredibly sparse dataset to train upon. It would be useful for the authors to consider the limitations of their model, as well as its extensibility in the context of alternate binding poses, alternate conformations, or changes in protonation states of ligand or inhibitor.
On the contrary - this is 11 chemicals across 3000 mutations. We have discussed alternative interpretations above.
Minor points:
(1) It would also be useful for the authors to provide more context around their choice of regressor. XGBoost is a powerful regressor but can easily overfit high dimensional data when paired with language models such as ESM-1b. This would be particularly useful since some of the features to train on were also generated using existing models such as ThermoMPNN.
Yes - we are quite concerned about overfitting and have tried to assess it through the careful design of test and validation sets.
(2) The authors also mention excluding their DMSO and AMG458 scores in the model training and testing due to overfitting issues - it would be useful to have an SI figure pointing to this data.
No - we excluded DMSO because it is the reference (baseline) and AMG458 because it has a different binding mode. This is not related to overfitting.
(3) The authors mention in their docking pipeline that 5 binding modes were used for each ligand docking, but it appears that only one binding mode is considered in the main figures. It would be useful for the authors to provide additional details about what the other binding modes were used for, how different each binding mode was, and how the "primary" mode was selected (and how much better its score was than the others).
The reviewer conflates the poses shown in the figures, which are based mostly on crystal structures or carefully selected templates, with the use of docked models in feature engineering for the ML part of the study. Where crystal structures do not exist, we docked capmatinib, cabozantinib, glumetinib, and glesatinib onto reference structures bound to type I (2WGJ) and type II (4EEV) inhibitors. We selected one representative binding mode based on the reference inhibitor, and while not exact, at a minimum these models provide a basis for structural interpretation.
Reviewer #2 (Recommendations for the authors):
My main suggestion is for the authors to add a few sentences (in non-technical language) to the results section, specifically before the results shown in Figure 3, defining gain-of-function, loss-of-function, resistance, and sensitivity. While these definitions are present in the materials and methods section, explicitly discussing them prior to the relevant results would significantly improve the overall readability of the manuscript.
We defined “gain-of-function” and “loss-of-function” mutations as those with fitness scores statistically greater or lower than wild type, respectively. Within the DMSO condition, gain-of-function and loss-of-function labels describe the mutational perturbation to protein function, whereas within inhibitor conditions, the labels describe the difference in fitness introduced by an inhibitor.
We have also clarified these definitions where the terms are first introduced: “As expected, the DMSO control population displayed a bimodal distribution with mutations exhibiting wild-type fitness centered around 0, with a wider distribution of mutations that exhibited loss- or gain-of-function effects, as defined by fitness scores with statistically significant lower or greater scores than wild-type, respectively.”
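As an illustration only, the labeling logic amounts to the small sketch below; the simple interval threshold is a stand-in for the Rosace-based statistics, and the inputs are assumptions.

```python
def classify_variant(ci_low: float, ci_high: float, wt_score: float = 0.0) -> str:
    """Label a variant from the interval of its fitness score relative to wild type."""
    if ci_low > wt_score:
        return "gain-of-function"
    if ci_high < wt_score:
        return "loss-of-function"
    return "wild-type-like"
```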
Figure 7D. Please add a bit more detail to the legend on how fold change (y-axis) was calculated.
Here, fold change represents the fraction of viable cells at each inhibitor concentration relative to the TKI-free control, measured with the CellTiter-Glo® Luminescent Cell Viability Assay (Promega) as an endpoint readout. We have updated the legend of Figure 7D with calculation details: “Dose-response for each inhibitor concentration is represented as the fraction of viable cells relative to the TKI-free control.”
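In other words (variable names below are hypothetical), the plotted quantity is simply a normalization of the luminescence readouts:

```python
# Luminescence readouts at each inhibitor dose, normalized to the TKI-free control well.
fold_change = [dose_signal / tki_free_signal for dose_signal in dose_signals]
```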
I must admit, I did not understand what "Specific inhibitor fitness landscapes also aid in identifying mutations with potential drug sensitivity, such as R1086 and C1091 in the MET P-loop" means. These are positions where most mutations lead to greater sensitivity to crizotinib. Is the idea that there are potentially clinically-relevant MET mutations that can be targeted over wild type with crizotinib?
Thank you for highlighting this! The P-loop (phosphate-binding loop) is a glycine-rich structural motif conserved in kinase domains. This motif is located in the N-lobe, where its primary role is to gate ATP entry into the active site and stabilize the phosphate groups of ATP when bound. Therefore, the P-loop is a common target region for ATP-competitive inhibitor design, but also a site where resistance can emerge (Roumiantsev et al., 2002). The idea we would like to convey is that identifying residues that offer the potential for drug stabilization, with the added benefit of a lower risk of resistance, is an attractive consideration for novel inhibitor design.
We have added to the text: “Individual inhibitor resistance landscapes also aid in identifying target residues for novel drug design by providing insights into mutability and known resistance cases. This enables the selection of vectors for chemical elaboration with a potentially lower risk of resistance development. Sites with mutational profiles such as R1086 and C1091, located in the commonly drug-targeted P-loop of MET, could be likely candidates for crizotinib.”
Reviewer #3 (Recommendations for the authors):
(1) Suggested Improvements to the Figures:
a) Figure 4A - T1261 seems to be mislabeled
b) In Figure 3A it's suggested to highlight mutants determined to be resistance mutants by this scheme.
c) In Figure 3D it would be informative to highlight which of these resistance mutants have already been previously reported and which are novel to this study
d) Throughout Figures 3A, 3D, and 4G, the graphical choices for how to highlight synonymous mutations and mutations not performed in the assay need improvement.
The green vs. grey 'TRUE' vs. 'FALSE' boxes are confusing. Just a green box indicating synonymous mutations would be sufficient. Additionally, these green boxes are hard to see, and the edges of the green boxes are often missing, making them even more difficult to see and interpret.
* In Figure 4A mutants do not seem to be indicated by a line or plus sign, but this is not explained in the legend or the caption. Please add.
* In 3D and 4G it is not clear if the mutants not performed are indicated at all - perhaps they are indicated in white, making them indistinguishable from scores with 0. Please clarify.
T1261 and G1242 are now correctly labeled.
In text we have also highlighted reported resistance mutations for crizotinib, which are inclusive of clinical reports and in vitro characterization: “These sites, and many of the individual mutations, have been noted in prior reports, such as: D1228N/H/V/Y, Y1230C/H/N/S, G1163R.”
We have adjusted the heatmaps to improve visual clarity. Mutations with a score of 0 are white, as indicated by the scale bar, and mutations not captured by the screen are now in light yellow. The green outline distinguishing WT synonymous mutations has also been adjusted so that its edges are no longer cut off. In our representations, we only distinguished mutations by the score color scale bar and the WT outline. What looked like a “plus” or “line” in the original figure was only the heatmap background, which should now be resolved in the updated figure and legends for Figure 3 and Figure 4.
(2) Some Minor Suggested Improvements to the Text:
a) The abbreviation CBL for 'CBL docking site' is used without being defined.
b) Figure 3G is referenced, but it does not exist.
c) In the sentence 'Beyond these well characterized sites, regions with sensitivity occurred throughout the kinase, primarily in loop-regions which have the greatest mutational tolerance in DMSO, but do not provide a growth advantage in the presence of an inhibitor (Figure 1 - Figure Supplement 1; Figure 1 - Figure Supplement 2).'. It is not clear why these supplemental figures are being referenced.
d) In the supplement section 'Enrich2 Scoring' has what seem like placeholders for citations in [brackets]
Cbl is an E3 ubiquitin ligase that plays a role in MET regulation through engagement with exon 14, specifically at Y1003 when phosphorylated. This mode of regulation was highlighted more extensively in our previous study. However, since Cbl was only mentioned briefly in this study, we have removed the reference to it to simplify the text.
In addition, we have removed the Figure 3G reference and corrected the in-text range. We have also removed references to figure supplements where unnecessary and edited the “Enrich2 scoring” methods section to include the previously missing citations.
Reviewer #2 (Public review):
Summary:
The authors developed a new device to overcome current limitations in the imaging of 3D spheroidal structures. In particular, they created a system to follow, in real time, tumour spheroid formation, fusion, and cell migration without disrupting spheroid integrity. The system has also been exploited to test the effects of a therapeutic agent (chemotherapy) and of immune cells.
Strengths:
The system allows the in situ observation of the 3D structures along the three axes (x, y, and z) without disrupting the integrity of the spheroids; in a time-lapse manner, it is possible to follow the formation of the 3D structure and the fusion of spheroids from multiple angles, allowing a better understanding of cell aggregation/growth and of the kinetics of the cells.
Interestingly, the system allows the analysis of cell migration/escape from the 3D structure, examining morphological changes not only in the periphery of the spheroids but also in the inner region, demonstrating that the proliferating cells in the periphery of the structure are more involved in the migration and dissemination process. The application of the system to the study of the effects of doxorubicin and NK cells would give new insights into the response of tumor 3D structures to killing agents.
RRID:SCR_001905
DOI: 10.1007/s00248-024-02484-y
Resource: R Project for Statistical Computing (RRID:SCR_001905)
Curator: @scibot
SciCrunch record: RRID:SCR_001905
BRIEF: Good Ideas for Educational Innovation according to the CSEN
This document summarizes the main ideas and recommendations of "Quelques Bonnes Idées d'Innovation Pédagogique" ("Some Good Ideas for Educational Innovation"), published by the Conseil scientifique de l'éducation nationale (CSEN) in December 2022.
It draws on international research, in particular that of the English Education Endowment Foundation (EEF), to propose concrete avenues for improving French schools.
I. Introduction:
The document highlights the opportunity offered by the "Notre école, faisons-la ensemble" initiative to introduce effective innovations that benefit students.
It relies on the analyses of the EEF, a unique organization that scientifically evaluates a large number of school interventions.
II. Good ideas for educational innovation:
The document proposes 13 avenues of innovation, each supported by scientific references:
"Explicit instruction generally benefits all students, including strong ones. However, weak or disadvantaged students seem to benefit particularly from this kind of teaching."
"A good night's sleep is essential for learning: it prepares the child to be more attentive the next day, and it is indispensable for newly learned information to become anchored in memory."
"The human brain learns far more easily from another person than from a computer or a book."
Maximize parental engagement:
* Maintain regular contact with parents, focusing on learning and highlighting successes.
* Favor two-way communication and consult parents.
* Encourage parent meetings in an informal setting.
* Help parents support their children in organizing their time, setting goals, and establishing good work and sleep habits.
* Encourage reading at home with appropriate support.
Make sure all students have truly learned to read:
* Prepare students from preschool onward by developing vocabulary, oral language, and letter knowledge.
* Use effective textbooks in the first year of primary school (CP) and ensure adequate teacher training.
* Identify struggling students through the national assessments and give them additional support.
* Check reading proficiency throughout schooling and offer targeted remediation.
* Encourage reading practice and the pleasure of reading.
* Use reading-instruction software with proven effectiveness, such as Kalulu and GraphoGame.
Restore a taste for mathematics and science:
* Re-materialize mathematics by connecting it to concrete objects and manipulations.
* Promote back-and-forth movement between concrete situations and mathematical abstractions.
* Integrate mathematical games, puzzles, and construction activities into learning.
* Create classroom libraries with books on mathematical and scientific topics.
* Invite guests who are passionate about science and mathematics to share their enthusiasm.
* Set up science projects tied to concrete issues such as climate or biodiversity.
Promote long-term memorization, understanding, and the transfer of skills:
* Test students regularly and give them kind, informative feedback.
* Progressively space out tests to promote long-term retention.
* Encourage generative activities that foster deep understanding and the transfer of skills.
Promote attention and concentration:
* Vary teaching activities and avoid long lectures.
* Use interactive digital tools such as multiple-choice quizzes.
* Minimize sources of distraction in the classroom and favor clear, concise presentations.
Help students manage their cognitive load:
* Divide content into "chunks" that are easier to handle.
* Break problems down into subtasks and use step-by-step demonstrations.
* Provide clear aids and instructions.
* Encourage the automation of certain tasks to free up working memory.
* Rigorously evaluate innovations aimed at managing cognitive load.
Promote methodical thinking and critical thinking:
* Explicitly integrate the goals of methodical and critical thinking into lessons.
* Teach students the strategies and tools needed to analyze information, identify biases, and develop critical thinking.
* Encourage metacognition and self-assessment of one's knowledge.
Develop projects for an inclusive school:
* Work in collaboration with external health professionals.
* Adapt school tasks and assessments to each student's specific needs.
* Use assistive technologies to compensate for disabilities.
* Communicate frequently and constructively with parents.
* Put in place human aides trained and supervised by the teachers.
* Involve the entire teaching team and the school principal in creating an inclusive environment.
"The pedagogical approaches recommended in this document, such as explicit instruction, peer tutoring and collaborative work, the learning of metacognitive strategies, and classroom- and school-wide behavior-management approaches, are particularly effective for these students."
III. False good ideas to avoid:
The document warns against certain appealing but ineffective, or even harmful, ideas:
Differentiating students according to their "learning style".
Relying on purely discovery-based pedagogy.
Committing to miracle solutions with no scientific validity.
Betting everything on technology without prior pedagogical reflection.
Betting everything on classroom design at the expense of students' comfort and concentration.
Innovating for innovation's sake without relying on scientific evidence.
IV. Conclusion:
The document encourages educational teams to draw on practices grounded in scientific research to improve student success and well-being.
It recommends rigorously testing the effectiveness of any educational innovation and not hesitating to discontinue it if the results do not materialize.
"What matters most is knowing which approaches have genuinely proven their worth for the benefit of students."
The CSEN and the IDEE program offer a support scheme to help educational teams implement effective educational innovations and evaluate them rigorously.
Author response:
The following is the authors’ response to the original reviews.
We appreciate the positive assessment and agree that the experimental data offer valuable insights into HBV capsid assembly inhibition. Based on the reviewers' suggestions, we have clarified the cryo-EM data and added structural and mechanistic details throughout the manuscript, which we believe significantly enhance its overall clarity and impact. The manuscript now better reflects a promising strategy to interfere with the HBV life cycle. We have carefully addressed all comments to improve both the clarity and quality of the manuscript.
Response to Public Reviews
We greatly appreciate the insightful comments and suggestions from the reviewers. Below, we provide responses to the points raised in the public reviews.
Reviewer #1 (Public Review):
Summary:
In this paper, the authors present an interesting strategy to interfere with the HBV life cycle: the preparation of geranyl and peptide dimers that could impede the correct assembly of the hepatitis B core protein (HBc) into viable capsids. These dimers are of different natures, depending on the HBc site the authors plan to target. A preliminary study with geranyl dimers (targeting a hydrophobic site of HBc) was conducted first. The second series deals with peptide-PEG linker-peptide dimers, targeting the tips of the HBc dimer spikes.
Strengths:
This work is very well conducted, combining ITC experiments (for determination of the dimers' KD), cellular effects (thanks to the grafting of the previously developed dimers with a polyarginine-based cell-penetrating peptide) in HBV-infected HEK293 cells, and Cryo-EM studies.
The findings of these research teams unambiguously demonstrated the value of such dimeric structures in impeding the HBV life cycle and could thus offer solutions for controlling its development. Ultimately, a new class of HBV Capsid Assembly Modulators could arise from this study.
There is no doubt that this work could provide very interesting information for people working on HBV.
Weaknesses:
Some minor corrections must be made, especially for a more precise description of the strategy and of the chemical structures of the newly designed HBV capsid assembly modulators.
We are grateful for the positive feedback on the experimental design, the combination of ITC, cellular effects, and Cryo-EM studies, and the potential for developing new classes of HBV Capsid Assembly Modulators (CAMs). In the revised version we have clarified the design rationale for the choice of the PEG linker length in the Supplementary Information, linking it to the structural measurements of the capsid. Chemical structures and detailed molecular formulas were added and terms have been corrected. A scrambled dimeric peptide served as a negative control, which showed no binding, confirming the specificity of our designed peptide and ruling out non-specific interactions from other elements of the molecules such as the linkers. Finally, we have revised the nomenclature for the geranyl dimers to better reflect the chemical structure. All figures, including Figure 3, have been updated to high-resolution. All mentioned typos have been corrected. Consultation dates have been added to the website references. HPLC terminology was corrected.
Reviewer #2 (Public Review):
Summary:
Vladimir Khayenko et al. discovered two novel binding pockets on HBc through in vitro binding and electron microscopy experiments. While the geranyl dimer targeting a central hydrophobic pocket displays micromolar affinity, the P1-dimer binding to the spike tip of HBc has nanomolar affinity. In the turbidity assay and at the cellular level, HBc aggregation induced by peptide crosslinking was demonstrated.
Strengths:
The study identifies two previously unexplored binding pockets on HBc capsids and develops novel binders targeting these sites with promising affinities.
Weaknesses:
While the in vitro and cellular HBc aggregation effects are demonstrated, the antiviral potential against HBV infection is not directly evaluated in this study.
Thank you for recognizing the innovative approach of our work and the potential for developing novel antivirals targeting HBc. We have now included additional discussion on potential future experiments aimed at evaluating the compounds' effects on cellular physiology and viral infectivity.
Reviewer #3 (Public Review):
Summary:
HBV is a continuing public health problem, and new therapeutics would be of great value. Khayenko et al. examine two sites in the HBc dimer as possible targets for new therapeutics. Older drugs that target HBc bind at a pocket between two HBc dimers. In this study, Khayenko et al. examine sites located in the four-helix bundle at the dimer interface.
The first site is a pocket first identified as a Triton X-100 binding site. The authors suggest it might bind terpenes and use geraniol as an example. They also test a decyl maltoside detergent and a geraniol dimer intended for bivalent binding. The KDs were all in the 100 µM range. Cryo-EM shows that geraniol binds the targeted site.
The second site is at the tip of the spike. Peptides based on a 1995 study (reference 43) were investigated. The authors test a core peptide, two longer peptides, and a dimer of the longest peptide. A deep scan of the longest monomer sequence shows the importance of a core amino acid sequence. The dimeric peptide (P1-dimer) binds almost 100-fold better than the monomer parent (P1). Cryo-EM structures confirm the binding site. The dimeric peptide caused HBc capsid aggregation. When HBc-expressing cells were treated with the active peptide attached to a cell-penetrating peptide, the peptide caused aggregation of HBc antigen, mirroring experiments with purified proteins.
Strengths:
The two sites have not been well investigated. This paper marks a start. The small collection of substrates investigated led to discovery of a dimeric peptide that leads to capsid aggregation, presumably by non-covalent crosslinking. The structures determined could be very useful for future investigations.
Weaknesses:
In this draft, the rationale for targeting the Triton X-100 site is not well laid out. The target molecules bind with KDs weaker than 50 µM. The way the structural results are displayed, one cannot be sure of the important features of the binding site with respect to the substrate. The peptide site and substrates are better developed, but structural and mechanistic details need to be described in greater detail.
We appreciate the reviewer’s positive comments on identifying and targeting previously unexplored sites on HBc, and the potential utility of our dimeric peptides in future studies. We have revised the Results section to better explain the rationale behind targeting the hydrophobic binding site. Additionally, the structures have been revised for clearer presentation, and we now emphasize the key features of the binding site and the role of substrate specificity.
Recommendations For The Authors:
Reviewer #1 (Recommendations For The Authors):
For clarity, the chemical structure of SLLGRM peptide, geraniol and HAP molecules must be indicated, preferably in Fig. 1 (at least in the Supplementary Information section).
We have now included the chemical structures of the SLLGRM peptide, geraniol, and HAP molecules for clarity in Figure 1 and in the main manuscript to ensure they are easily accessible for reference and to provide further detail and context.
In the same idea, in Fig. 1 (and in the text): The molecular formula of heteroaryldihydropyrimidine HAP must be clearly indicated, as the nature of the heteroatom (S, O, N?) in this "heteroaryl" derivative is not indicated.
The full molecular formula of HAP ((2S)-1-[[(4R)-4-(2-chloranyl-4-fluoranyl-phenyl)-5-methoxycarbonyl-2-(1,3-thiazol-2-yl)-1,4-dihydropyrimidin-6-yl]methyl]-4,4-bis(fluoranyl)-pyrrolidine-2-carboxylic acid) is now included in the figure legend.
"with a polyethylene glycol (PEG) linker that could bridge the distance of 38 Å between the two opposing hydrophobic pockets": what is the rationale behind the design of this linker? The authors must briefly explain why/how they chose this linker length and chemistry (please indicate a reference supporting the choice of PEG linker). The same remark applies to the dimers targeting the capsid spike tips, which have 50 Å PEG linkers. So, the choice of the linker length must be clearly explained and not only mentioned in the sentence of the discussion section "Using our structural knowledge of the capsid, particularly the distances between the spikes."
We have now better clarified the rationale for the design of the PEG linker length. The linker lengths were specifically chosen based on structural knowledge of the capsid, particularly the measured distances between the spike tips (60 Å) and the hydrophobic pockets (40 Å). In the Supplementary Information (Supplementary Figure 1), we now clearly explain how these measurements guided the choice of PEG linker length, allowing for optimal bridging and interaction between the binding sites. This supplementary figure now explicitly connects the design rationale to the specific structural features of the capsid.
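For readers who want to see the arithmetic behind this kind of choice, a back-of-the-envelope sketch is given below. The figure of roughly 3.5 Å per extended ethylene glycol unit is a common rule of thumb and an assumption of ours, not a value taken from the manuscript.

```python
# Rough estimate of how many ethylene glycol (EG) units are needed to span the
# measured distances; 3.5 Å per extended EG unit is an assumed rule of thumb.
ANGSTROM_PER_EG_UNIT = 3.5

for site, distance_A in {"hydrophobic pockets": 40, "spike tips": 60}.items():
    n_units = distance_A / ANGSTROM_PER_EG_UNIT
    print(f"{site}: {distance_A} Å ≈ {n_units:.0f} EG units (fully extended)")
```

In practice a flexible PEG chain rarely adopts a fully extended conformation, so a linker somewhat longer than this minimal estimate is typically chosen.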
I do not agree with the authors when they claim a "nanomolar affinity of 312 nM". To me, a nanomolar affinity would require a few tens of nM (not three hundred). Please correct this to "sub-micromolar affinity of 312 nM" here and in all other parts of the manuscript (title and caption of Figure 3, "the peptide dimer (P1dC) with nanomolar affinity", "nanomolar levels", ...).
We thank Reviewer #1 for pointing this out. Since the term "nanomolar affinity" can indeed be interpreted as referring to the lower end of the nanomolar range rather than values close to 300 nM, we have revised the manuscript to refer to "sub-micromolar affinity" where applicable. This change has been made throughout the manuscript, including the subtitles, figure captions, and main text.
The drug design strategy, combining two low-affinity peptides with a PEG linker of appropriate length, appears obvious to me. However, a control experiment is missing: the peptide-PEG linker derivative (not the dimer peptide-PEG linker-peptide) should have been evaluated for an unambiguous proof of concept of these dimeric peptides. In my opinion, these experiments should be provided for publication of this work (e.g., when describing the affinities of the SLLGR dimers). I agree that the cryo-EM experiments provide evidence of dimer binding, but affinity values for the (peptide-PEG linker) derivatives would bring additional proof (as the flexible PEG linkers were not resolved by cryo-EM).
Thank you for your thoughtful comment regarding the use of a monovalent control for the peptide-PEG linker. A scrambled dimeric peptide serves as a negative control; in ITC it showed no binding at all, thereby ruling out possible unspecific interactions mediated by the introduced PEG linker or handle itself.
Given the complete lack of binding with the scrambled dimeric peptide, we believe this excludes the need for an additional monovalent control, as it provides strong evidence that the observed binding is driven specifically by the designed peptide sequence and not by the linker or other structural components. We have now made this clarification more explicit in the revised manuscript to avoid any ambiguity. We hope this addresses your concern, and we appreciate your suggestion to further strengthen the rigor of the work. Despite its identical charge, molecular weight, and atom composition, the scrambled control did not cause HBc aggregation in living cells, indicating sequence-specific action of the aggregating dimer.
The nomenclature of the dimers must be modified because there is no logic connecting the name "long dimer" to the chemical structure. In particular, the number of ethylene glycol units must be indicated: the authors should find a nomenclature indicating both the linker length and the nature (small molecule or peptide) of the bivalent parts (and hence no longer use "short geranyl dimer" and "long geranyl dimer").
Thank you for your valuable suggestion regarding the nomenclature of the dimers. We agree that the terms "short geranyl dimer" and "long geranyl dimer" do not fully reflect the chemical structure of the molecules. In response, we have revised the nomenclature to provide a clearer indication of both the linker length and the nature of the bivalent parts. We now refer to the dimers as (Geranyl)<sub>2</sub>-Lys for the dimer with two geranyl groups attached to lysine and (Geranyl-PEG3)<sub>2</sub>-Lys for the dimer with a PEG3 linker (three ethylene glycol units) between the lysine amine and the geranyl groups. These revised names more accurately describe the structural differences and should avoid any ambiguity.
Lines 198-199: "Among these, the dimerized P1 exhibited a higher 198 occupation of the binding site, as illustrated in Supplementary Figure 9." But in Supp. Fig. 9, dimer P1dC (10) is described. As the text above is describing P1-dimer (9), the Supp. Fig. 9 must be provided, if available. If not, please modify this conclusion accordingly. In the text, when mentioning dimerized P1 peptide, authors must indicate with which compound it deals: (9) or (10)?
Thank you for your careful reading of the manuscript and for pointing out the discrepancy. In Supplementary Figure 9, the dimer described is P1dC, not P1d. The text has been revised to clarify this. We appreciate your attention to detail.
Please note that the graphic quality of Figure 3 is poor, resulting in pixelated drawings (especially for the chemical structures).
Thank you for your feedback regarding the quality of Figure 3. We have now updated all figures, including Figure 3, to high-resolution PNG format at 300-500 dpi to ensure optimal graphic quality. This should resolve the pixelation issue, particularly for the chemical structures.
Minor typos: "clinical studies, a third are CAMs.[6]" "to the spike base hydrophobic pocket" "geraniol affinity to the central hydrophobic pocket, we designed"
We have corrected the punctuation in the mentioned sentences and appreciate your careful review of the manuscript.
Concerning the citation of a website (references 5 and 6), I guess that the consultation date should be mentioned.
We have now updated the references accordingly, including the consultation dates.
In the Materials and Methods part, Peptide synthesis paragraph, the authors must write "semi-preparative HPLC".
This has now been corrected to "semi-preparative HPLC".
In the supplementary information file, 1H and 13C NMR spectra for the small molecule "Short Geranyl Dimer (SGD)" should be provided.
The purity and identity of this geranyl derivative were confirmed through UV detection in LC-MS and supported by the mass spectra, which provide robust and clear evidence of the compound's structure and are a well-accepted method for confirming structures in this context. While we understand the value of NMR in structural analysis, we believe that additional analytical evidence is not critical for this study.
Reviewer #2 (Recommendations For The Authors):
Overall, this study presents an innovative approach to target the HBV core protein and paves the way for developing new classes of antivirals with a distinct mechanism of action. The findings expand the current knowledge of druggable sites on HBc capsids and provide promising lead compounds. Future studies exploring the antiviral effects and optimizing the binders for therapeutic applications would be valuable next steps.
We sincerely thank the reviewer for the positive assessment of our work and for highlighting its innovative approach to targeting the HBV core protein. We appreciate your recognition of the study's potential in paving the way for developing new classes of antivirals with distinct mechanisms of action. Below, we provide responses to each of the points raised.
The significance of the central hydrophobic pocket as a target may require additional experiments for validation. Currently, the substrate binding activity is relatively low and appears to have a non-significant impact on HBc.
We agree that the central hydrophobic pocket exhibits relatively weak binding affinity with the ligands tested in this study. However, we have provided additional structural evidence and affinity data to support its relevance as a druggable site. In recognition of the weak affinity of these small molecules, we expanded our focus to include peptide-based binders, which yielded higher affinities, particularly when dimerized.
It might be more effective to present Figure 1B after summarizing all the results.
We understand the reviewer’s suggestion. However, we decided to highlight and summarize the major findings early in the manuscript. We included Figure 1B at the beginning to allow readers to quickly grasp the core concepts and outcomes of our study.
The labels for P1/P2 are presented in Figure 1A, yet their definitions are not provided until the second part of the Results section.
We appreciate the reviewer's observation. While we see a benefit in showing the three trackable sites on HBV early as an overview, we also agree that the early presentation of P1/P2 could lead to some confusion. To resolve this, we have revised the figure to introduce only the minimal peptide and avoid any ambiguity. The full dimer sequences and names are introduced later.
Further investigation of the cytotoxic potential of peptide-induced HBc aggregation is necessary.
Investigating the cytotoxicity together with infectivity is an important future direction but outside the scope of this study. We now elaborate on this point in the discussion.
Reviewer #3 (Recommendations For The Authors):
Two sites in the dimer interface are shown to bind ligands. It is not shown that filling these regions will change infection. The exhaustive studies by Bruss showed point mutations directly alter infection and would be of value to discuss.
We thank Rev#3 for this very helpful comment. We now highlight how point mutations in these regions were shown to affect HBV infectivity, thereby providing a link between our findings and how ligand binding might influence the viral life cycle.
It is not shown whether the two sites interact. Molecular dynamics by Hadden or Gumbart may be informative. The failure to look for a connection between these sites is an oversight.
We thank Rev#3 for the insightful suggestion to explore potential interactions between the two binding sites. We acknowledge that molecular dynamics (MD) simulations, such as those performed by Gumbart et al. and Hadden et al., could indeed provide valuable insights into the structural dynamics and potential cooperativity between these sites. Indeed, molecular dynamics simulations of the HBV capsid by Perilla and Hadden have demonstrated significant flexibility in the capsid spikes and their interactions with neighboring subunits, suggesting that the dynamics of the binding sites could influence ligand accessibility and potential crosstalk.
We believe that our own previous structural studies, together with the data in this work, provide substantial experimental evidence on this topic. In Makbul et al. 2021a (doi.org/10.3390/microorganisms9050956) we observed that peptide binding (particularly P2) did not stabilize the spikes; instead, the upper part of the spikes exhibited considerable wobbling. This variability mirrored the conformational diversity reported in MD simulations. Using local classification, we noted that the variability in the spike's upper region was greater when P2 was bound than in its absence. Additionally, in Makbul et al. 2021b (doi.org/10.3390/v13112115), we showed that peptide binding had little effect on the hydrophobic pocket beneath the mobile spike region, located in the more rigid part of the capsid. While we observed F97 in the D-monomer adopting two alternate rotamer orientations upon P2 binding, this was not exclusive to P2, as similar changes were noted in the L60V mutant even without bound peptide.
We have updated the manuscript to briefly discuss this crosstalk, which provides additional context for our findings. Interestingly, only TX100, but not geraniol, completely flipped F97 into an alternate orientation, forming a new π-π stacking interaction with the mobile region of the spike. This finding suggests that interactions within the hydrophobic pocket are transmitted to the tips of the spikes in a ligand-specific fashion, thus supporting and refining the concept of crosstalk between binding sites, primarily initiated from the hydrophobic pocket.
The logic for proposing a terpene ligand is strained. Comparisons are made to HBs and the HDV delta antigen. However, HBs is myristoylated, not farnesylated, and the delta antigen binds HBs, not HBc.
We have revised the text to clarify the rationale for testing terpenes as ligands, focusing instead on the specific properties of the hydrophobic pocket targeted by geraniol.
The authors suggest larger terpenes as binding agents, but there does not appear to be room for a longer molecule in the binding site. The authors do not discuss whether a longer molecule could be modeled in the site based on their density.
We appreciate this observation and agree that the potential for larger terpenes to bind this site is not obvious from the structural data presented in this work. We have now included a more detailed visualization (Fig. 2D) and discussion of the hydrophobic binding pocket, based on the density observed in the presented geraniol structure and the previous Triton structure, and we discuss the implications for the binding of larger hydrophobic molecules at this site.
The authors note that the structure could explain molecular details of this site, but these are not discussed. A more complete analysis of the geraniol-protein interaction is necessary, including an estimate of the resolution of that density.
We agree that a more complete analysis of the hydrophobic binding site was warranted. We have now expanded the discussion of the structural details of this binding site based on the geraniol-bound structure and the density and occupancy accounted for by this ligand. These additional details (Fig. 2C,D and Fig. 5) should provide a clearer understanding of the binding interactions observed.
The dimeric geraniol binds only marginally better than the monomer, two-fold, but this could be due to doubling the number of geraniols per ligand or to an undefined interaction of the extended molecule with the surface of the capsid. A geraniol linker should be tested.
The modest improvement in binding may indeed only reflect the doubled number of geraniols rather than linker-mediated avidity effects. Interaction of the linker with the capsid surface is ruled out by the scrambled control, which included the same linkers but did not show any capacity to bind.
Is the enhanced binding of dimer due to bivalent binding of dimer to one capsid? Is it a chance interaction of the linker with the surface of HBc, which is easily tested? Is it an avidity effect due to aggregation of capsids?
Thank you for this insightful question. Our data suggest that the enhanced binding is due to bivalent interactions. To address the possibility of non-specific interactions from either the handle or the linker, we included a scrambled dimeric peptide as a negative control, which showed no binding. This rules out non-specific interactions from the linker or handle. Given this, we believe an additional monovalent control is unnecessary, as the scrambled control confirms that the binding is driven by the geraniol and peptide warheads alone. We have clarified this in the revised manuscript and appreciate your suggestion to strengthen the study.
The experimental point-mutation analysis of P1 is not discussed beyond stating that it shows the importance of the core peptide sequence. Is there a rationale for the effects of the R3-to-E and K10-to-E mutations?
We appreciate the reviewer's curiosity and request for a more detailed discussion of the P1 deep mutational scan data and its implications. The observed low mutation tolerance of the core peptide sequence SLLGRM with respect to HBc binding is highly consistent with our prior structural data and binding studies in solution (https://doi.org/10.3390/microorganisms9050956), with the results of the original phage library screening (M. R. Dyson, K. Murray, Proceedings of the National Academy of Sciences 1995, 92, 2194-2198), and with the binding data presented here. Notably, the data set does not suggest that additional binding interfaces contribute to the aggregation seen with the N-terminally elongated P1 and P2 versus the non-aggregating shorter SLLGRM. While the positional scan largely aligns with the previous phage binding hierarchy and the quantified ligands, we were, like Rev#3, previously struck by surprising affinity gains predicted for positive-to-negative amino acid exchanges in related peptides: specifically, "SLLGEM" has been predicted, both previously and here, to show enhanced affinity over "SLLGRM". Quantification in solution, however, could not confirm this enhanced HBV binding affinity (Makbul et al. 2021, Microorganisms). In the revised version of the manuscript we now highlight the possibly limited predictive power of this assay for positions where positively charged residues are exchanged for negatively charged residues (figure legend of Fig. 3D).
The fluctuations in Figure 3B could be largely a magnification of noise due to the change of the y-axis. The fluctuations should be characterized as a standard deviation, excluding the injections, to allow a quantitative judgment.
Isothermal titration calorimetry heat fluctuations without injections are now shown in the supplementary information scaled to the same y-axis (Supplementary Figure 3D).
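As an illustration of the quantification the reviewer asks for, a minimal sketch is shown below; the power-trace values and injection indices are invented for illustration and are not data from the study.

```python
# Quantify baseline heat fluctuations as a standard deviation, excluding injections.
import numpy as np

dp_trace = np.array([0.02, -0.01, 0.03, 1.85, 0.04, -0.02, 0.01, 2.10, 0.00, -0.03])  # µcal/s, illustrative
injection_idx = np.array([3, 7])                  # hypothetical injection time points
baseline = np.delete(dp_trace, injection_idx)     # keep only inter-injection baseline
print(f"baseline SD = {baseline.std(ddof=1):.3f} µcal/s")
```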
Molecular graphics throughout are too small and poorly labeled.
We have revised the molecular graphics throughout the manuscript to increase their size and improve labeling for clarity. All figures are now provided at 500 dpi.
In Figure 2, compounds 1 and 2 are pyrophosphates. The label in the figure should be corrected.
Thank you for pointing this out. These compounds were removed for clarity.
In the introduction, the phrase "discontinuation frequently leads to relapse" should be changed to something less ambiguous.
Thank you for highlighting this point regarding the phrasing in the introduction. We have revised the statement to more accurately reflect the clinical situation by specifying that stopping treatment often results in viral rebound and disease recurrence in many patients. This adjustment clarifies the intended meaning and addresses the ambiguity you identified. We hope this revision better aligns with the clinical context of HBV management and improves the overall clarity of the manuscript.
Define "functional cure" in the introduction.
Thank you for your suggestion to clarify the term "functional cure." We have revised the manuscript: instead of "functional cure" we now describe the goal of sustained viral suppression without detectable HBV DNA and loss of hepatitis B surface antigen (HBsAg), without the need for continuous therapy. This should provide greater clarity for readers and improve the overall comprehensibility of the introduction.
The sentence beginning line 92 is not clear unless one has already read the paper. Figure 1 is not well described.
Thank you for your valuable feedback regarding the clarity of this sentence and the legend of Figure 1. We have revised the text and legend to provide more context and improve the flow for readers who are unfamiliar with the specifics of the study. The revised version now clearly explains the targeted binding sites and the purpose of the bivalent binders at the beginning of the results section.
In line 235 the meaning is not clear. What is in excess? Is there free CPP in solution? Is it the charge on the CPP?
We have clarified the passage as requested.
When describing peptide-induced aggregation (Figures 5 and 6), Figure 1B is never referred to. Figure 1B would work better as part of Figure 6.
We understand the reviewer’s suggestion. However, we decided to highlight and summarize the major findings and the underlying hypothesis early in the manuscript. We included Figure 1B at the beginning to allow readers to quickly grasp a core concept and outcome of our study.
We now, however, refer to Figure 1B and, together with all the other changes, hope that we have improved the clarity and quality of the manuscript.
We appreciate your constructive feedback and the opportunity to further refine the work.
Similarly, there is a predominance of single or separated mothers who are the main breadwinners of the household, so this situation allows female entrepreneurship to emerge to a large extent in these areas, generating new employment opportunities and contributing to the economic dynamics of these places.
useful for the social and geographic context
Author response:
The following is the authors’ response to the original reviews.
Public Reviews:
Reviewer #1 (Public Review):
Summary:
Rigor in the design and application of scientific experiments is an ongoing concern in preclinical (animal) research. Because findings from these studies are often used in the design of clinical (human) studies, it is critical that the results of the preclinical studies are valid and replicable. However, several recent peer-reviewed published papers have shown that some of the research results in cardiovascular research literature may not be valid because their use of key design elements is unacceptably low. The current study is designed to expand on and replicate previous preclinical studies in nine leading scientific research journals. Cardiovascular research articles that were used for examination were obtained from a PubMed Search. These articles were carefully examined for four elements that are important in the design of animal experiments: use of both biological sexes, randomization of subjects for experimental groups, blinding of the experimenters, and estimating the proper size of samples for the experimental groups. The findings of the current study indicate that the use of these four design elements in the reported research in preclinical research is unacceptably low. Therefore, the results replicate previous studies and demonstrate once again that there is an ongoing problem in the experimental design of preclinical cardiovascular research.
Strengths:
This study selected four important design elements for study. The descriptions in the text and figures of this paper clearly demonstrate that the rate of use of all four design elements in the examined research articles was unacceptably low. The current study is important because it replicates previous studies and continues to call attention once again to serious problems in the design of preclinical studies, and the problem does not seem to lessen over time.
Weaknesses:
The current study uses both descriptive and inferential statistics extensively in describing the results. The descriptive statistics are clear and strong, demonstrating the main point of the study, that the use of these design elements is quite low, which may invalidate many of the reported studies. In addition, inferential statistical tests were used to compare the use of the four design elements against each other and to compare some of the journals. The use of inferential statistical tests appears weak because the wrong tests may have been used in some cases. However, the overall descriptive findings are very strong and make the major points of the study.
We sincerely appreciate the reviewer's comments and detailed feedback and their recognition of the importance of this work in replicating previous studies and calling attention to the problems in preclinical study design. In response to the reviewer's suggestions, we have recalculated our inferential statistics, using an alternative correction for p-values (Holm-Bonferroni corrections) together with median-based linear model analyses and nonparametric Kruskal-Wallis tests, which are more appropriate for analyzing this dataset. The overall trends in our results remain the same.
Reviewer #2 (Public Review):
Summary
This study replicates a 2017 study in which the authors reviewed papers for four key elements of rigor: inclusion of sex as a biological variable, randomization of subjects, blinding of outcomes, and pre-specified sample size estimation. Here they screened 298 published papers for the four elements. Over a 10-year period, rigor (defined as including any of the 4 elements) failed to improve. They could not detect any differences across the journals they surveyed, nor across models. They focused primarily on cardiovascular disease, which helps focus the research but limits the potential generalizability to a broader range of scientific investigation. There is no reason, however, to believe rigor is any better or worse in other fields, and hence this study is a good 'snapshot' of the progress of improving rigor over time.
Strengths
The authors randomly selected papers from leading journals (e.g., PNAS). Each paper was reviewed by 2 investigators. They pulled papers over a 10-year period, 2011 to 2021, and so have a good sample of time over which to look for changes. The analysis followed generally accepted guidelines for a structured review.
Weaknesses
The authors did not use the exact same journals as they did in the 2017 study. This makes comparing the results complicated. Also, they pulled papers from 2011 to 2021, and hence cannot assess the impact of their own prior paper.
The authors write "the proportion of studies including animals of both biological sexes generally increased between 2011 and 2021, though not significantly (R2= 0.0762, F(1,9)= 0.742, p= 0.411 (corrected p=8.2))". This statement is not rigorous because the regression result is not statistically significant. Their data support neither a claim of an increase nor a decrease over time. A similar problem repeats several times in the remainder of their results presentation.
I think the Introduction and the Discussion are somewhat repetitive and the wording could be reduced.
Impact and Context
Lack of reproducibility remains an enormous problem in science, plaguing both basic and translational investigations. With the increased scrutiny on rigor, and requirements at NIH and other funding agencies for more rigor and transparency, one would expect to find increasing rigor, as evidenced by authors including more of the recommended study design elements (SDEs). This review found no such change, and this is quite disheartening. The data imply that journals, that is, editors and reviewers, will have to increase the scrutiny and standards applied to preclinical and basic studies. This work could also serve as a call to action to investigators outside of cardiovascular science to reflect on their own experiences when planning future projects.
We sincerely appreciate the reviewer's insights and comments and their recognition of our work as contributing to the growing body of evidence on the lack of rigor in preclinical cardiovascular research study design. Regarding the weaknesses the reviewer noted: the referenced 2017 publication details a study by Ramirez et al. and was not conducted by our group. Our study aimed to expand upon their findings by using a more recent timeframe and an alternative list of highly respected cardiovascular research journals. We have now better clarified this distinction in the manuscript. We have also addressed our phrasing regarding the lack of statistical significance in the increase in the proportion of studies including animals of both sexes from 2011 to 2021.
Recommendations for the authors:
Reviewer #1 (Recommendations For The Authors):
Many of the methods in this study were strong or adequate. Although the descriptive statistics appear solid, there are significant problems that need to be addressed in the selection and use of inferential statistics.
(1) One of the design elements that was studied was sample size estimation. This is usually done by a power analysis. The authors should consider what group size for the examined journals is adequate for their statistics to be valid. Or they could report the power of their studies to achieve a given meaningful difference.
We thank the reviewer for this excellent observation. We unfortunately failed to conduct an a priori power analysis. Previous research (Gupta et al., 2016) suggests that post-hoc power calculations should not be carried out after the study has been conducted. We acknowledge the importance of establishing a sufficient sample size to draw sound conclusions based on an adequate effect size, and we regret that we did not carry out the appropriate estimations. We are very appreciative of the reviewer's suggestions and aim to implement such a study design element in future studies.
Gupta KK, Attri JP, Singh A, Kaur H, Kaur G. Basic concepts for sample size calculation: Critical step for any clinical trials!. Saudi J Anaesth. 2016;10(3):328-331. doi:10.4103/1658-354X.174918
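For completeness, a minimal sketch of the kind of a priori power calculation the reviewer describes is given below, framed as a two-proportion comparison; the 0.20 vs 0.35 reporting rates are purely illustrative assumptions, not figures from the study.

```python
# A priori sample-size estimate for comparing two proportions (e.g., SDE reporting rates).
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.20, 0.35)   # Cohen's h for two hypothetical reporting rates
n_per_group = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.80, ratio=1.0)
print(f"approximate articles needed per group: {n_per_group:.0f}")
```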
(2) A Bonferroni correction was used extensively. Because of its use, the corrected p values often appear much too high. The Bonferroni test becomes much too conservative for more than 3 or 4 tests. I suggest using a different test for multiple comparisons.
We thank the reviewer for this insightful suggestion. All p-values have now been recalculated using a Holm-Bonferroni correction and updated throughout the manuscript.
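A minimal sketch of this correction is shown below; the raw p-values are illustrative, not the study's. Unlike a plain Bonferroni multiplication, the Holm-adjusted values returned here are capped at 1.

```python
# Holm-Bonferroni adjustment of a set of raw p-values.
from statsmodels.stats.multitest import multipletests

raw_p = [0.011, 0.049, 0.120, 0.411, 0.700]   # illustrative values
reject, p_holm, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for p, ph, r in zip(raw_p, p_holm, reject):
    print(f"raw p = {p:.3f}  Holm-adjusted p = {ph:.3f}  reject H0: {r}")
```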
(3) The use of the chi-square test for categorical data is appropriate. However, the t-test and multiple regression tests are designed for continuous variables. Here, it appears that they were used for the nominal variables (Table 1). For these nominal data, other nonparametric tests should be used.
We thank the reviewer for this valuable insight. We have updated our statistical analysis methods and now use nonparametric Kruskal-Wallis tests to analyze differences in SDE reporting across journals instead of the chi-square test. Our reported p-values have been adjusted accordingly.
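A minimal sketch of this test is shown below, assuming each journal's articles are scored by the number of SDEs reported (0-4); the scores are invented for illustration.

```python
# Kruskal-Wallis test of SDE counts across three journals (illustrative data).
from scipy.stats import kruskal

journal_a = [0, 1, 1, 2, 0, 1]
journal_b = [1, 2, 2, 3, 1, 2]
journal_c = [0, 0, 1, 1, 2, 0]
H, p = kruskal(journal_a, journal_b, journal_c)
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.3f}")
```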
(4) It is not clear exactly when each test is used. The stats section in Methods should better delineate when each test is used. In addition, it would be helpful to include the test used in the figure legends.
We thank the reviewer for bringing up this important point. We have now updated the methods section to better delineate which tests were used, and also included the specific tests in the figure legends.
(5) You will need to rewrite some sections of the text to reflect the changes due to changing your use of statistics.
We have rewritten the sections of the text to reflect the changes in our use of statistics.
Here are a few comments on the presentation.
(1) Some of the figure legends are almost impossible to read. They are too congested.
We thank the reviewer for pointing this out. We have edited the figure legends to make them more readable. We will also attach a pdf with the graphs to allow for easier formatting.
(2) Also, is it possible to drop some of the panels in Figure 1?
The panels in Figure 1 have been rearranged to make them more readable. We believe that each panel provides a valuable visual summary of our data that will aid readers in understanding our results.
(3) It is not mandatory that the y-axis values on the graphs go up to 100% (Figs 2 and 3). Using a maximum value of 100% clumps the lines visually. I suggest a maximum y-axis value of 50% or 60%. That will spread the lines better visually so differences can be seen more clearly.
We thank the reviewer for considering the experience of our paper's readers. The y-axes of Figures 2 and 3 have been truncated to 50%. The trend lines in each figure now appear more separated, and differences can be seen more clearly.
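As a sketch of this kind of change, the matplotlib pattern below caps the y-axis at 50% and exports at high resolution; the yearly percentages are invented placeholders, not the study's data.

```python
# Truncate the y-axis to 50% so trend lines are visually separated.
import matplotlib.pyplot as plt

years = list(range(2011, 2022))
pct_reporting = [8, 9, 11, 10, 12, 13, 12, 14, 15, 16, 17]   # hypothetical % of papers

fig, ax = plt.subplots()
ax.plot(years, pct_reporting, marker="o")
ax.set_ylim(0, 50)                      # 50% cap instead of 100%
ax.set_xlabel("Publication year")
ax.set_ylabel("Papers reporting SDE (%)")
fig.savefig("figure2_truncated.png", dpi=500)
```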
Reviewer #2 (Recommendations For The Authors):
The authors did not use the exact same journals as they did in the 2017 study. This makes comparing the results complicated. Also, they pulled papers from 2011 to 2021, and hence cannot assess the impact of their own prior paper.
We appreciate the reviewer's concern about maintaining consistency with the paper published by Ramirez et al. in 2017. To clarify, our efforts focused on providing a replication study that expanded upon the original Ramirez publication, with which we have no affiliation. For our study, we used different academic journals than those used by Ramirez et al., as well as a different timeframe. We have updated the language in the manuscript to better clarify the purpose and parameters of our study relative to the previous, unaffiliated study.
The authors write "the proportion of studies including animals of both biological sexes generally increased between 2011 and 2021, though not significantly (R2= 0.0762, F(1,9)= 0.742, p= 0.411 (corrected p=8.2))". This statement is not rigorous because the regression result is not statistically significant. Their data support neither a claim of an increase nor a decrease over time. A similar problem repeats several times in the remainder of their results presentation.
Thank you for bringing this to our attention. We agree with the concern regarding the statement "the proportion of studies including animals of both biological sexes generally increased between 2011 and 2021, though not significantly (R2= 0.0762, F(1,9)= 0.742, p= 0.411 (corrected p=8.2))". We have rephrased the statement, and our updated Holm-Bonferroni corrected p-value is now noted in this more appropriately worded description of our results. Lastly, we have addressed the wording and redundancy in both the introduction and discussion and have made both more concise.
I think the Introduction and the Discussion are somewhat repetitive and the wording could be reduced.
We thank the reviewer for bringing this to our attention. We have addressed the redundancy across the Introduction and the Discussion. We have also altered the wording to reflect a more concise explanation of our study.
The 'trends' are not statistically significant. A non-significant trend does not exist and no claim of a 'trend' is justified by the data.
We thank the reviewer for this observation. We have updated the phrasing of ‘trends’ in all areas of the manuscript.
Reviewer #2 (Public review):
Summary:
This work addresses the question of how 'leading' and 'lagging' PGCs differ, molecularly, during their migration to the mouse genital ridges/gonads during fetal life (E9.5, E10.5, E11.5), and how this is regulated by the different somatic environments encountered during migration. E9.5 and E10.5 cells differed in the expression of genes involved in canonical WNT signaling and focal adhesions. Differences in cell adhesion and actin cytoskeletal dynamics were identified between leading and lagging cells at E9.5, before migration into the gonads. At E10.5, when some PGCs have reached the genital ridges, differences in Nodal signaling response genes and reprogramming factors were identified. This last point was verified by whole-mount IF for proteins downstream of Nodal signaling, Lefty1/2. At E11.5, there was upregulation of genes associated with chromatin remodeling and oxidative phosphorylation. Some aspects of the findings were also found to be likely true in human development, established via analysis of a dataset previously published by others.
Strengths:
The work is strong in that a large number of PGCs were isolated and sequenced, along with associated somatic cells. The authors dealt with the problem of the very small number of migrating mouse PGCs by pooling cells from embryos (after ascertaining age matching using somite counting). 'Leading' and 'lagging' populations were separated by anterior and posterior embryo halves, and the well-established Oct4-deltaPE-eGFP reporter mouse line was used.
Weaknesses:
The work seems to have been carefully done, but I do not feel the manuscript is very accessible, and I do not consider it well written. The novel findings are not easy to find. The addition of at least one figure to show the locations of putative signaling etc. would be welcome.
(1) The initial discussion of the CellRank analysis (under the 'Transcriptomic shifts over developmental time...' heading) is somewhat confusing. For example, if CellRank's 'pseudotime analysis' produces a result that seems surprising (some E9.5 cells remain in a terminal state with other E9.5 cells) and the 'real-time analysis' produces something that makes more sense, is there any point in including the pseudotime analysis (since you have cells from known timepoints)? Perhaps the 'batch effects' possible explanation (in the Discussion) should be introduced here. Do we learn anything novel from this CellRank analysis? The 'genetic drivers' identified seem to be genes already known to be key to cell transitions during this period of development.
(2) In Discussion - with respect to Y-chromosome correlation, it is not clear why this analysis would be done at E10.5, when E11.5 data is available (because some testis-specific effect might be more apparent at the later stage).
(3) Figure 2A - it seems surprising that there are two clusters of E9.5 anterior cells.
(4) Figure 5F - there does seem to be more LEFTY1/2 staining in the anterior region, but also more germ cells, as highlighted by GFP.
transfeminist knowledge sharing
Situated knowledge, a central notion in the feminisms of the Global South, reminds us that bodies are deeply rooted in historical, political, and social contexts. In this sense, the design of Artificial Intelligences must move away from universalist abstraction and address the needs of specific bodies, respecting their narratives.
Collective design implies the direct participation of communities that have been excluded from discussions about technology. From a transfeminist standpoint, this includes not only cisgender women but also non-binary and trans people and people of diverse gender experiences. This ensures that Artificial Intelligences reflect multiple realities and do not reinforce normative power structures.
Indigenous communities, with their deep knowledge of conservation and of harmonious interaction with the environment, contribute essential principles for the ethical design of Artificial Intelligence. These communities understand technology as an extension of the collective body, not as a separate tool. Artificial Intelligence could thus be integrated as a tool for ecological and cultural well-being rather than as an extractivist instrument.
Imagine you are deaf and blind
Imagine living with a visual and auditory disability, depending exclusively on other people for a translation of the world around you.
Your perception is shaped by the information others choose to share with you and by how they interpret it. This translation is not neutral; it is permeated with biases, priorities, and limitations.
Artificial Intelligence algorithms act as translators from data to decisions, and they too carry biases. But what happens when those translations fail, or privilege certain perspectives over others?
Algorithms are, in essence, digital bodies that interpret, process, and decide. However, these bodies do not exist in a vacuum. They are created by humans and shaped by their creators' own experiences, limitations, and biases. In this sense, Artificial Intelligence translates not only data but also the priorities and omissions of those who design it.
Algorithmic bias is a direct reflection of how certain bodies are systematically silenced or misread in the data. For example, facial recognition systems have shown significantly higher error rates when identifying the faces of Black people or of women, leading to irreversible harms such as false accusations or excessive surveillance. These errors are not merely technical; they are ethical, because the affected bodies are not just misclassified data points but people who bear the consequences.
Design decisions, such as which categories to include or which differences to ignore, translate people's lives into machine-readable formats, but they often do so reductively. For example, names or cultural features may be transformed or removed because of limitations in the system's structure. These decisions, although apparently technical, shape how bodies are recognized or dismissed in social and legal spaces.
Translation serves as an intermediary not only linguistically but also as a transformer of complex real-world problems into a simplified model that a machine can process. This translation, however, is neither neutral nor universal. It is a process shaped by language, cultural context, and the priorities of the development team.
As with translation between languages, translating social problems into Artificial Intelligence models involves decisions about what to preserve, what to transform, and what to discard. A development team that does not understand the cultural complexities of the context it is modeling can introduce significant biases.
In many cases, Artificial Intelligence systems translate human identities into discrete categories, ignoring the complex intersections of race, gender, class, and other variables. For example, an Artificial Intelligence designed to be fair to women or to Black people could ignore the specific experiences of Black women, perpetuating the exclusion of those at the intersections of these categories.
Algorithms have a physical, tangible impact on human bodies. From credit denials to unjust surveillance, these systems disproportionately affect marginalized groups.
Diversity in development teams must go beyond a metric. It is essential to include the voices and experiences of those most affected by algorithmic systems.
Design decisions must be grounded in a deep cultural and social understanding. This means consulting local experts and affected communities to ensure that Artificial Intelligence reflects their realities instead of distorting them.
Institutions that deploy AI must open their systems to public audits, allowing affected communities to question and review the algorithms that shape their lives.
No Artificial Intelligence is neutral or perfect. Companies must be transparent about the limitations of their models and educate users in identifying and mitigating biases.
Group Fairness vs Individual Fairness
Fairness metrics in artificial intelligence are about balancing the needs and rights of individuals with those of the groups to which they belong. In this interaction, bodies, as subjects of policies, data, and decisions, are both the object of fairness and the place where its failures show. At the same time, translating ethical values into mathematical metrics exposes the limits and risks of relying solely on the quantitative to solve deeply human problems.
Many fairness metrics, such as demographic parity or equality of opportunity, prioritize differences between groups defined by, for example, gender or race. However, this prioritization can create new inequalities within those same groups. For example:
Seeking equal acceptance rates across groups without necessarily guaranteeing that the most qualified individuals are selected; this may include less qualified people in an effort to balance outcomes across genders or races.
Prioritizing the hiring of the most qualified individuals regardless of group, which can exclude marginalized groups.
These inequalities carry costs. The first can increase the perception of injustice among individuals within the same group (a qualified applicant rejected while a less qualified one is accepted). The second perpetuates structural inequalities by prioritizing a logic of merit that does not account for historical barriers. This tension reflects an unavoidable reality: no technical solution can satisfy every demand for fairness at once.
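To make the two group-level notions above concrete, here is a minimal sketch in Python. The labels, predictions, and group assignments are invented, and the metric definitions follow the standard forms: per-group selection rate for demographic parity, per-group true-positive rate for equality of opportunity.

```python
# Compare demographic parity (selection rates) and equality of opportunity (TPRs) by group.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])    # 1 = actually qualified
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])    # 1 = selected by the model
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in ("A", "B"):
    in_group = group == g
    selection_rate = y_pred[in_group].mean()               # demographic parity compares these
    tpr = y_pred[in_group & (y_true == 1)].mean()          # equality of opportunity compares these
    print(f"group {g}: selection rate = {selection_rate:.2f}, TPR = {tpr:.2f}")
```

A model can equalize one of these quantities across groups while leaving the other unequal, which is exactly the tension described above.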
Intersectionality complicates these dynamics further. An algorithm that seems fair in terms of gender (men/women) or race (white/Black) can be unfair to subgroups at the intersections of these categories, such as Afro-descendant or Indigenous women. These identities are not mere combinations of attributes; they are lived experiences that reflect multiple layers of oppression and privilege.
In a hiring system, equitable representation of men and women can hide the systematic exclusion of racialized or Indigenous women. This underscores how translating ethical concepts into mathematical metrics can overlook the complexity of human experience.
In the design of Artificial Intelligence, translation means not only converting data into models but also transcribing ethical principles into operational rules. This translation can be empowering or harmful for the bodies it affects.
Empowering, in the sense that an Artificial Intelligence that integrates ethical principles through participatory, contextualized approaches can make structural inequalities visible and mitigate them.
Harmful, in the sense that if decisions are reduced to isolated metrics, such as maximizing accuracy, Artificial Intelligences can reinforce pre-existing hierarchies, ignoring the marginalized bodies left out of their design.
Prioritizing the measurable over the meaningful, in the sense that the bodies affected by these decisions remind us that behind every data point there are human lives with complex histories.
To achieve a corporeal ethics in Artificial Intelligence and to address these tensions, an ethics is needed that recognizes both the bodies affected and the bodies that assemble and program Artificial Intelligence:
Fairness metrics must be designed participatorily, integrating the experiences of the bodies affected. This requires an interdisciplinary approach that combines ethics, the social sciences, and engineering.
Fairness should be a continuous process, which includes regular audits to identify and mitigate the biases that arise in data and models.
Metrics must go beyond rigid categories and consider the complex intersections that define human experience.
The technical objective of models should be rethought, prioritizing harm minimization over accuracy maximization. Ideally, this means reorienting efficiency toward outcomes that reflect human values.
What is fairness?
Bodies, translation, and Artificial Intelligence, viewed through an ethical lens on fairness, reveal deep tensions between the quantifiable and the ethical, especially when we consider how Artificial Intelligence affects bodies. At the center of these tensions lies a challenge: translating complex ethical concepts, such as fairness, into operational metrics that can be implemented in mathematical models. This translation, however, is neither neutral nor perfect; it is an act loaded with political, ethical, and cultural decisions that directly affect embodied lives.
First, placing the body at the center of the problem shows that bodies are the final subjects of algorithmic decisions. Under demographic parity and equality-of-opportunity metrics, for example, bodies become statistics, such as acceptance and rejection counts and computed probabilities. This depersonalizes individuals and reduces their complex experiences to data points fed into automated systems.
Moreover, the impact on marginalized bodies cannot be separated from their cultural and social contexts. Demographic parity, for example, may correct numerical inequalities, but if the Artificial Intelligence perpetuates stereotypes or misreads cultural features in its similarity metrics, those bodies still face injustices.
Second, translating ethical notions into mathematics means that converting ethical concepts into mathematical models, such as generalized entropy indices or group fairness metrics, faces a fundamental paradox: ethics is intrinsically contextual and fluid, whereas mathematics seeks exactness, consistency, and universality. This gives rise to dilemmas such as the impossibility theorem, whereby multiple fairness metrics cannot all be satisfied simultaneously.
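As an illustration of the generalized entropy index mentioned above, the sketch below follows one common formulation from the algorithmic-fairness literature, in which each individual's "benefit" is derived from prediction and outcome; the data are invented.

```python
# Generalized entropy index GE(alpha) over per-individual benefits b_i = y_pred - y_true + 1.
import numpy as np

def generalized_entropy_index(benefits, alpha=2.0):
    b = np.asarray(benefits, dtype=float)
    mu = b.mean()
    return np.mean((b / mu) ** alpha - 1) / (alpha * (alpha - 1))

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0])
benefits = y_pred - y_true + 1        # 0 = harmed, 1 = treated as deserved, 2 = over-benefited
print(f"GE(2) = {generalized_entropy_index(benefits):.3f}")
```

Higher values indicate a more unequal distribution of benefits across individuals, which is why this index is used as an individual-level complement to group metrics.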
This translation process can hide or amplify biases, depending on how similarity between individuals is defined and measured. For example, when trying to measure similarity between job applicants, how does one translate the work experience of a woman in a cultural context where her contributions have historically been excluded? Converting that experience into a numerical value can distort the realities of the bodies it is meant to represent.
Third, Artificial Intelligence as a translating body means that it translates not only data but also bodies and experiences, reducing them to representations that interact with automated systems. Artificial Intelligence therefore acts as a "translating body" that interprets and reconfigures existing power relations. One example is the difficulty of applying group fairness metrics in contexts where historical inequalities have created deep disparities in educational and economic opportunity.
Finally, the problem arises when Artificial Intelligence perpetuates rather than mitigates these inequalities. For example, if an equality-of-opportunity metric predominantly selects individuals from the majority group because of their higher prior qualification rates, power dynamics are reinforced and the bodies of the minority group remain relegated.
To open ethical possibilities within these cartographies of technodiversity, it is necessary that:
Fairness metrics be constantly reevaluated in light of the social, political, and cultural contexts in which they are applied. This implies an iterative, dynamic approach to algorithmic decision-making.
Bodies not be reduced to statistical data. Participatory methods must be developed that integrate the lived experiences of those affected by algorithmic decisions into the design and evaluation of Artificial Intelligence.
Rather than trying to translate ethics directly into mathematics, we foster interaction among disciplines, including philosophy, the social sciences, and engineering, so that the translation becomes more inclusive and representative.
Similarity metrics be audited from interdisciplinary perspectives to ensure that they neither perpetuate biases nor dehumanize the people they describe.
mind
Here there is a very strict division between the input-output model and the notion of the mind. And the origin of the conception of mind in psychology long predates the computational framing.
Introduction
I propose that we edit and rewrite this part together with @Daniel.
Introduction
Here I would start with something more general for context: The present document ... what it does and why.
Author response:
The following is the authors’ response to the original reviews.
eLife Assessment
This study offers a useful treatment of how the population of excitatory and inhibitory neurons integrates principles of energy efficiency in their coding strategies. The analysis provides a comprehensive characterisation of the model, highlighting the structured connectivity between excitatory and inhibitory neurons. However, the manuscript provides an incomplete motivation for parameter choices. Furthermore, the work is insufficiently contextualized within the literature, and some of the findings appear overlapping and incremental given previous work.
We are genuinely grateful to the Editors and Reviewers for taking time to provide extremely valuable suggestions and comments, which will help us to substantially improve our paper. We decided to do our very best to implement all suggestions, as detailed in the point-by-point rebuttal letter below. We feel that our paper has improved considerably as a result.
Public Reviews:
Reviewer #1 (Public Review):
Summary: Koren et al. derive and analyse a spiking network model optimised to represent external signals using the minimum number of spikes. Unlike most prior work using a similar setup, the network includes separate populations of excitatory and inhibitory neurons. The authors show that the optimised connectivity has a like-to-like structure, leading to the experimentally observed phenomenon of feature competition. They also characterise the impact of various (hyper)parameters, such as adaptation timescale, ratio of excitatory to inhibitory cells, regularisation strength, and background current. These results add useful biological realism to a particular model of efficient coding. However, not all claims seem fully supported by the evidence. Specifically, several biological features, such as the ratio of excitatory to inhibitory neurons, which the authors claim to explain through efficient coding, might be contingent on arbitrary modelling choices. In addition, earlier work has already established the importance of structured connectivity for feature competition. A clearer presentation of modelling choices, limitations, and prior work could improve the manuscript.
Thanks for these insights and for this summary of our work.
Major comments:
(1) Much is made of the 4:1 ratio between excitatory and inhibitory neurons, which the authors claim to explain through efficient coding. I see two issues with this conclusion: (i) The 4:1 ratio is specific to rodents; humans have an approximate 2:1 ratio (see Fang & Xia et al., Science 2022 and references therein); (ii) the optimal ratio in the model depends on a seemingly arbitrary choice of hyperparameters, particularly the weighting of encoding error versus metabolic cost. This second concern applies to several other results, including the strength of inhibitory versus excitatory synapses. While the model can, therefore, be made consistent with biological data, this requires auxiliary assumptions.
We now describe better the ratio of numbers of E and I neurons found in real data, as suggested. The first submission already contained an analysis of how the optimal ratio of E vs I neuron numbers depends in our model on the relative weighting of the loss of E and I neurons and on the relative weighting of the encoding error vs the metabolic cost in the loss function (see Fig. 7E). We revised the text on page 12 describing Fig. 7E.
To allow readers to form easily a clear idea of how the weighting of the error vs the cost may influence the optimal network configuration, we now present how optimal parameters depend on the weighting in a systematic way, by always including this type of analysis when studying all other model parameters (time constants of single E and I neurons, noise intensity, metabolic constant, ratio of mean I-I to E-I connectivity). These results are shown on the Supplementary Fig. S4 A-D and H, and we comment briefly on each of them in Results sections (pages 9, 10, 11 and 12) that analyze each of these parameters.
Following this Reviewer's comment, we have now included a joint analysis of network performance relative to the ratio of E-I neuron numbers and the ratio of mean I-I to E-I connectivity (Fig. 7J). We found a positive correlation between the optimal values of these two ratios. This implies that a lower ratio of E to I neuron numbers, such as the 2:1 ratio in human cortex mentioned by the reviewer, predicts a lower optimal ratio of I-I to E-I connectivity and thus weaker inhibition in the network. We made sure that this finding is suitably described in the revision (page 13).
(2) A growing body of evidence supports the importance of structured E-I and I-E connectivity for feature selectivity and response to perturbations. For example, this is a major conclusion from the Oldenburg paper (reference 62 in the manuscript), which includes extensive modelling work. Similar conclusions can be found in work from Znamenskiy and colleagues (experiments and spiking network model; bioRxiv 2018, Neuron 2023 (ref. 82)), Sadeh & Clopath (rate network; eLife, 2020), and Mackwood et al. (rate network with plasticity; eLife, 2021). The current manuscript adds to this evidence by showing that (a particular implementation of) efficient coding in spiking networks leads to structured connectivity. The fact that this structured connectivity then explains perturbation responses is, in the light of earlier findings, not new.
We agree that the main contribution of our manuscript in this respect is to show how efficient coding in spiking networks can lead to structured connectivity implementing lateral inhibition similar to that proposed in the recent studies mentioned by the Reviewer. We apologize if this was not clear enough in the previous version. We streamlined the presentation to make it clearer in revision. We nevertheless think it useful to report the effects of perturbations within this network because these results give information about how lateral inhibition works in our network. Thus, we kept presenting it in the revised version, although we de-emphasized and simplified its presentation. We now give more emphasis to the novelty of the derivation of this connectivity rule from the principles of efficient coding (pages 4 and 6). We also describe better (page 8) what the specific results of our simulated perturbation experiments add to the existing literature.
(3) The model's limitations are hard to discern, being relegated to the manuscript's last and rather equivocal paragraph. For instance, the lack of recurrent excitation, crucial in neural dynamics and computation, likely influences the results: neuronal time constants must be as large as the target readout (Figure 4), presumably because the network cannot integrate the signal without recurrent excitation. However, this and other results are not presented in tandem with relevant caveats.
We improved the Limitations paragraph in Discussion, and also anticipated caveats in tandem with results when needed, as suggested.
We now mention the assumption of equal time constants between the targets and readouts in the Abstract.
We have now added an analysis of network performance and dynamics as a function of the time constant of the target (t<sub>x</sub>) to Supplementary Fig. S5C-E. These results are briefly discussed in the text on page 13. The only measure sensitive to t<sub>x</sub> is the encoding error of E neurons, with a minimum at t<sub>x</sub> = 9 ms, while the encoding error of I neurons and the metabolic cost show no dependence. Firing rates, variability of spiking, as well as the average and instantaneous balance, also show no dependence on t<sub>x</sub>. We note that t<sub>x</sub> = t, with t = 1/λ the time constant of the population readout (Eq. 9), is an assumption we use when deriving the model from the efficiency objective (Eqs. 18 to 23). In our new and preliminary work (Koren, Emanuel, Panzeri, bioRxiv 2024), we derived a more general class of models in which this assumption is relaxed, yielding a network with E-E connectivity that adapts to the time constant of the stimulus. Thus, the Reviewer is correct in the intuition that the network requires E-E connectivity to better integrate target signals whose time constant differs from the membrane time constant. We now better emphasize this limitation in the Discussion (page 16).
(4) On repeated occasions, results from the model are referred to as predictions claimed to match the data. A prediction is a statement about what will happen in the future – but most of the “predictions” from the model are actually findings that broadly match earlier experimental results, making them “postdictions”.
This distinction is important: compared to postdictions, predictions are a much stronger test because they are falsifiable. This is especially relevant given (my impression) that key parameters of the model were tweaked to match the data.
We now comment on every result from the model as either matching earlier experimental results, or being a prediction for experiments.
In Section “Assumptions and emergent properties of the efficient E-I network derived from first principles”, we report (page 4) that neural networks have connectivity structure that relates to tuning similarity of neurons (postdiction).
In Section “Encoding performance and neural dynamics in an optimally efficient E-I network” we report (page 5) that in a network with optimal parameters, I neurons have higher firing rate than E neurons (postdiction), that single neurons show temporally correlated synaptic currents (postdiction) and that the distribution of firing rates across neurons is log-normal (postdiction).
In Section “Competition across neurons with similar stimulus tuning emerging in efficient spiking networks” we report (page 6) that the activity perturbation of E neurons induces lateral inhibition on other E neurons, and that the strength of lateral inhibition depends on tuning similarity (postdiction). We show that activity perturbation of E neurons induces lateral excitation in I neurons (prediction). We moreover show that the specific effects of the perturbation of neural activity rely on structured E-I-E connectivity (prediction for experiments, but similar result in Sadeh and Clopath, 2020). We show strong voltage correlations but weak spike-timing correlations in our network (prediction for experiments, but similar result in Boerlin et al. 2013).
In Section “The effect of structured connectivity on coding efficiency and neural dynamics”, we report (page 7) that our model predicts a number of differences between networks with structured and unstructured (random) connectivity. In particular, structured networks differ from unstructured ones by showing better encoding performance, lower metabolic cost, weaker variance over time in the membrane potential of each neuron, lower firing rates and weaker average and instantaneous balance of synaptic currents.
In Section “Weak or no spike-triggered adaptation optimizes network efficiency”, we report (page 9) that our model predicts better encoding performance in networks with adaptation compared to facilitation. Our results suggest that adaptation should be stronger in E compared to I (PV+) neurons (postdiction). In the same section, we report (page 10) that our results suggest that the instantaneous balance is a better predictor of model efficiency than average balance (prediction).
In Section “Non-specific currents regulate network coding properties”, we report (page 10) that our model predicts that more than half of the distance between the resting potential and the firing threshold is taken up by external currents that are unrelated to feedforward processing (postdiction). We also report (page 11) that our model predicts that moderate levels of uncorrelated (additive) noise are beneficial for efficiency (prediction for experiments, but similar results in Chalk et al., 2016, Koren et al., 2017, Timcheck et al. 2022).
In Section “Optimal ratio of E-I neuron numbers and of mean I-I to E-I synaptic efficacy coincide with biophysical measurements”, we predict the optimal ratio of E to I neuron numbers to be 4:1 (postdiction) and the optimal ratio of mean I-I to E-I connectivity to be 3:1 (postdiction). Further, we report (page 13) that our results predict that a decrease in the ratio of E-I neuron numbers is accompanied by a decrease in the ratio of mean I-I to E-I connectivity.
Finally, in Section “Dependence of efficient coding and neural dynamics on the stimulus statistics”, we report (page 13) that our model predicts that the efficiency of the network has almost no dependence on the time scale of the stimulus (prediction).
Reviewer #2 (Public Review):
Summary:
In this work, the authors present a biologically plausible, efficient E-I spiking network model and study various aspects of the model and its relation to experimental observations. This includes a derivation of the network into two (E-I) populations, the study of single-neuron perturbations and lateral-inhibition, the study of the effects of adaptation and metabolic cost, and considerations of optimal parameters. From this, they conclude that their work puts forth a plausible implementation of efficient coding that matches several experimental findings, including feature-specific inhibition, tight instantaneous balance, a 4 to 1 ratio of excitatory to inhibitory neurons, and a 3 to 1 ratio of I-I to E-I connectivity strength. It thus argues that some of these observations may come as a direct consequence of efficient coding.
Strengths:
While many network implementations of efficient coding have been developed, such normative models are often abstract and lacking sufficient detail to compare directly to experiments. The intention of this work to produce a more plausible and efficient spiking model and compare it with experimental data is important and necessary in order to test these models.
In rigorously deriving the model with real physical units, this work maps efficient spiking networks onto other more classical biophysical spiking neuron models. It also attempts to compare the model to recent single-neuron perturbation experiments, as well as some longstanding puzzles about neural circuits, such as the presence of separate excitatory and inhibitory neurons, the ratio of excitatory to inhibitory neurons, and E/I balance. One of the primary goals of this paper, to determine if these are merely biological constraints or come from some normative efficient coding objective, is also important.
Though several of the observations have been reported and studied before (see below), this work arguably studies them in more depth, which could be useful for comparing more directly to experiments.
Thanks for these insights and for the kind words of appreciation of the strengths of our work.
Weaknesses:
Though the text of the paper may suggest otherwise, many of the modeling choices and observations found in the paper have been introduced in previous work on efficient spiking models, thereby making this work somewhat repetitive and incremental at times. This includes the derivation of the network into separate excitatory and inhibitory populations, discussion of physical units, comparison of voltage versus spike-timing correlations, and instantaneous E/I balance, all of which can be found in one of the first efficient spiking network papers (Boerlin et al. 2013), as well as in subsequent papers. Metabolic cost and slow adaptation currents were also presented in a previous study (Gutierrez & Deneve 2019). Though it is perfectly fine and reasonable to build upon these previous studies, the language of the text gives them insufficient credit.
We indeed built our work on these important previous studies, and we apologize if this was not clear enough. We thus improved the text to make sure that credit to previous studies is more precisely and more clearly given (see detailed reply for the list of changes made).
To clarify how we built on previous work, we expanded the comparison of our results with those of Boerlin et al. (2013) on voltage correlations and uncorrelated spiking (page 7), the comparison with the derivation of physical units in Boerlin et al. (2013) (page 3), the discussion of how our results on the ratio of the number of E to I neurons relate to Calaim et al. (2022) and Barrett et al. (2016) (page 16), and the comment on the previous work by Gutierrez and Deneve on adaptation (page 8).
Furthermore, the paper makes several claims of optimality that are not convincing enough, as they are only verified by a limited parameter sweep of single parameters at a time, are unintuitive and may be in conflict with previous findings of efficient spiking networks. This includes the following.
Coding error (RMSE) has a minimum at intermediate metabolic cost (Figure 5B), despite the fact that intuitively, zero metabolic cost would indicate that the network is solely minimizing coding error and that previous work has suggested that additional costs bias the output.
Coding error also appears to have a minimum at intermediate values of the ratio of E to I neurons (effectively the number of I neurons) and the number of encoded variables (Figures 6D, 7B). These both have to do with the redundancy in the network (number of neurons for each encoded variable), and previous work suggests that networks can code for arbitrary numbers of variables provided the redundancy is high enough (e.g., Calaim et al. 2022).
Lastly, the performance of the E-I variant of the network is shown to be better than that of a single cell type (1CT: Figure 7C, D). Given that the E-I network is performing a similar computation as to the 1CT model but with more neurons (i.e., instead of an E neuron directly providing lateral inhibition to its neighbor, it goes through an interneuron), this is unintuitive and again not supported by previous work. These may be valid emergent properties of the E-I spiking network derived here, but their presentation and description are not sufficient to determine this.
With regard to the concern that our previous analyses considered optimal parameter sets determined with a sweep of a single parameter at a time, we have addressed this issue in two ways. First, we present (Figs. 6I and 7J and text on pages 11 and 13) results of joint sweeps over pairs of parameters whose covariations are expected to influence optimality in a way that cannot be understood by varying one parameter at a time. These new analyses complement the joint parameter sweep of the time constants of single E and I neurons (t<sub>r</sub><sup>E</sup> and t<sub>r</sub><sup>I</sup>) that was already presented in Fig. 5A (former Fig. 4A). Second, we conducted, within a reasonable and realistic range of possible variations of each individual parameter, a Monte-Carlo random joint sampling (10,000 simulations with 20 trials each) of all 6 model parameters explored in the paper. We present these new results in Fig. 2 and discuss them on pages 5-6.
The Reviewer is correct in stating that the error (RMSE) exhibits a counterintuitive minimum as a function of the metabolic constant, despite the fact that, intuitively, for a vanishing metabolic constant the network solely minimizes the coding error (Fig. 6B). In our understanding, this counterintuitive finding is due to the presence of noise in the membrane potential dynamics. In the presence of noise, a non-vanishing metabolic constant is needed to suppress “inefficient” spikes that are purely induced by noise, do not contribute to coding and increase the error. This gives rise to a form of “stochastic resonance”, where the noise improves detection of the signal coming from the feedforward currents. We note that the metabolic constant and the noise variance both appear in the non-specific external current (Eq. 29f in Methods), and a covariation in their optimal values is thus expected. Indeed, we find that the optimal metabolic constant monotonically increases as a function of the noise variance, with stronger regularization (larger beta) required to compensate for larger variability (larger sigma) (Fig. 6I). Finally, we note that a moderate level of noise in the network (which, in turn, induces a non-trivial minimum of the coding error as a function of beta) is optimal. The beneficial effect of moderate levels of noise on the performance of networks with efficient coding has been shown in different contexts in previous work (Chalk et al. 2016, Koren and Deneve, 2017). The intuition is that the noise prevents excessive synchronization of the network and insufficient single-neuron variability, both of which decrease performance. The points above are now explained in the revised text on page 11.
The Reviewer is also correct in stating that the network exhibits optimal performance for intermediate values of the number of I neurons and of the number of encoded features. In our understanding, the optimal number of encoded features of M=3 arises simply because all the other parameters were optimized for that value of M. The purpose of those analyses was not to state that a network optimally encodes only a given number of features, but to show that a network whose parameters are optimized for a given M performs reasonably well when M is varied. We clarify this on page 13 of Results and in the Discussion on page 16. In the same Discussion paragraph we also refer to the results of Calaim et al. mentioned by the Reviewer.
To address the concern about the comparison of efficiency between the E-I and the 1CT model, we took advantage of the Reviewer’s suggestions to consider this issue more deeply. In revision, we now compare the efficiency of the 1CT model with the E population of the E-I model (Fig. 8H). This new comparison changes the conclusion about which model is more efficient, as it shows the 1CT model is slightly more efficient than the E-I model. Nevertheless, the E-I model performance is more robust to small variations of optimal parameters, e.g., it exhibits biologically plausible firing rates for non-optimal values of the metabolic constant. See also the reply to point 3 of the Public Review of Reviewer 2 for more detail. We added these results and the ensuing caveats for the interpretation of this comparison on Page 14, and also revised the title of the last subsection of Results.
Alternatively, the methodology of the model suggests that ad hoc modeling choices may be playing a role. For example, an arbitrary weighting of coding error and metabolic cost of 0.7 to 0.3, respectively, is chosen without mention of how this affects the results. Furthermore, the scaling of synaptic weights appears to be controlled separately for each connection type in the network (Table 1), despite the fact that some of these quantities are likely linked in the optimal network derivation. Finally, the optimal threshold and metabolic constants are an order of magnitude larger than the synaptic weights (Table 1). All of these considerations suggest one of the following two possibilities. One, the model has a substantial number of unconstrained parameters to tune, in which case more parameter sweeps would be necessary to definitively make claims of optimality. Or two, parameters are being decoupled from those constrained by the optimal derivation, and the optima simply corresponds to the values that should come out of the derivation.
We thank the reviewer for bringing up these important questions.
In the first submission, we presented the encoding error and the metabolic cost separately as a function of the parameters, so that readers could assess how robust the optimal parameters are to changes in the relative weighting of the encoding error and the metabolic cost. We specified this in Results (page 5), and we continue to present the encoding and metabolic terms separately in the revision.
However, we agree that it is important to quantify explicitly how the optimal parameters may depend on g<sub>L</sub>. In the first submission, we showed the analysis for all possible weightings in the case of the two parameters for which we found this analysis most relevant: the ratio of neuron numbers (Fig. 7E, Fig. 6E in the first submission) and the optimal number of input features M (see the last paragraph on page 13 and Fig. 8D). We now show this analysis also for the remaining model parameters in Supplementary Fig. S4 (A-D and H). This is discussed on pages 9, 10, 11 and 12.
With regard to the concern that the scaling of synaptic weights should not be controlled separately for each connection type in the network, we agree and we would like to clarify that we did not control such scaling separately. Apologies if this was not clear enough. From the optimal analytical solution, we obtained that the connectivity scales with the standard deviation of decoding weights (s<sub>w</sub><sup>E</sup> and s<sub>w</sub><sup>I</sup>) of the pre and postsynaptic populations (Methods, Eq. 32). We studied the network properties as a function of the ratio of average I-I to E-I connectivity (Fig. 7 F-I; Supplementary Fig. S4 D-H), which is equivalent to the ratio of standard deviations s<sub>w</sub><sup>I</sup> /s<sub>w</sub><sup>E</sup> (see Methods, Eq. 35). We clarified this in text on page 12.
Next, it is correct that our synaptic weights are an order of magnitude smaller than the metabolic constant. We analysed a simpler version of the network that has coding and dynamics identical to our full model (Methods, Eq. 25) but lacks the external currents. We found that the optimal parameters determining the firing threshold in such a simpler network were biologically implausible (see Supplementary Text 2 and Supplementary Table S1). As another simple solution, we considered rescaling the synaptic efficacy so as to obtain a biologically plausible threshold. However, this gave an implausible mean synaptic efficacy (see Supplementary Text 2). Thus, to define a network with a biologically plausible firing threshold and mean synaptic efficacy, we introduced the non-specific external current. After introducing this current, we were able to shift the firing threshold to biologically plausible values while keeping realistic values of the mean synaptic efficacy. Biologically plausible values for the firing threshold are around 15-20 mV above the resting potential (Constantinople and Bruno, 2013), which is the value we have in our model. A plausible value for the average synaptic strength is between a fraction of a millivolt and a couple of millivolts (Constantinople & Bruno, 2013, Campagnola et al. 2022), which also corresponds to the values our synaptic weights take. The above results are briefly explained in the revised text on page 4.
Finally, to study the optimality of the network when changing multiple parameters at a time, we added a new analysis with Monte-Carlo random joint sampling (10,000 parameter sets with 20 trials for each set) of all 6 model parameters explored in the paper. We compared (Fig. 2) the results of each sampled configuration with those of the optimal configuration obtained from the understanding gained by varying one or two parameters at a time (the optimal parameters reported in Table 1 and used throughout the paper). We found (Fig. 2) that the optimal configuration in Table 1 was never improved upon by any of the simulations we performed, and that the three random configurations that came closest to the optimum of Table 1 (the second-, third- and fourth-best overall) had stronger noise intensity but also stronger metabolic cost than the configuration in Table 1. These second-, third- and fourth-best configurations also had longer time constants of both single E and I neurons (adaptation time constants), and their ratios of E-I neuron numbers and of I-I to E-I connectivity were either jointly increased or jointly decreased with respect to our configuration. These results are reported in Fig. 2 and Tables 2-3 and are discussed in Results (page 5).
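For illustration, the random search we performed can be summarized by the following minimal Python sketch. The parameter ranges and the loss function below are placeholders (the actual ranges are those of Table 2, and the actual loss is the trial-averaged weighted combination of encoding error and metabolic cost obtained from simulating the spiking network); only the sampling logic is meant to reflect our procedure.

```python
# Minimal sketch of the Monte-Carlo random search over the 6 model parameters.
# Ranges and the loss below are placeholders, not the values used in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (min, max) ranges for each sampled parameter.
PARAM_RANGES = {
    "metabolic_constant": (2.0, 30.0),
    "noise_intensity": (0.0, 10.0),
    "tau_r_E": (5.0, 50.0),        # time constant of single E neurons (ms)
    "tau_r_I": (5.0, 50.0),        # time constant of single I neurons (ms)
    "ratio_E_to_I": (1.0, 8.0),    # ratio of E to I neuron numbers
    "ratio_II_to_EI": (0.5, 6.0),  # ratio of mean I-I to E-I connectivity
}

def average_loss(params: dict, n_trials: int = 20) -> float:
    """Placeholder for simulating the E-I network for n_trials trials and
    returning the trial-averaged weighted sum of encoding error and cost.
    A dummy quadratic surrogate stands in for the spiking simulation here."""
    return float(sum((v - np.mean(r)) ** 2 / (r[1] - r[0]) ** 2
                     for v, r in zip(params.values(), PARAM_RANGES.values())))

best_loss, best_params = np.inf, None
for _ in range(10_000):  # 10,000 random configurations
    params = {k: rng.uniform(*r) for k, r in PARAM_RANGES.items()}
    loss = average_loss(params)
    if loss < best_loss:
        best_loss, best_params = loss, params

print(best_loss, best_params)
```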
Reviewer #3 (Public Review):
Summary:
In their paper the authors tackle three things at once in a theoretical model: how can spiking neural networks perform efficient coding, how can such networks limit the energy use at the same time, and how can this be done in a more biologically realistic way than previous work?
They start by working from a long-running theory on how networks operating in a precisely balanced state can perform efficient coding. First, they assume split networks of excitatory (E) and inhibitory (I) neurons. The E neurons have the task to represent some lower dimensional input signal, and the I neurons have the task to represent the signal represented by the E neurons. Additionally, the E and I populations should minimize an energy cost represented by the sum of all spikes. All this results in two loss functions for the E and I populations, and the networks are then derived by assuming E and I neurons should only spike if this improves their respective loss. This results in networks of spiking neurons that live in a balanced state, and can accurately represent the network inputs.
They then investigate in-depth different aspects of the resulting networks, such as responses to perturbations, the effect of following Dale's law, spiking statistics, the excitation (E)/inhibition (I) balance, optimal E/I cell ratios, and others. Overall, they expand on previous work by taking a more biological angle on the theory and showing the networks can operate in a biologically realistic regime.
Strengths:
(1) The authors take a much more biological angle on the efficient spiking networks theory than previous work, which is an essential contribution to the field.
(2) They make a very extensive investigation of many aspects of the network in this context, and do so thoroughly.
(3) They put sensible constraints on their networks, while still maintaining the good properties these networks should have.
Thanks for this summary and for these kind words of appreciation of the strengths of our work.
Weaknesses:
(1) The paper has somewhat overstated the significance of their theoretical contributions, and should make much clearer what aspects of the derivations are novel. Large parts were done in very similar ways in previous papers. Specifically: the split into E and I neurons was also done in Boerlin et al (2008) and in Barrett et al (2016). Defining the networks in terms of realistic units was already done by Boerlin et al (2008). It would also be worth it to discuss Barrett et al (2016) specifically more, as there they also use split E/I networks and perform biologically relevant experiments.
We improved the text to make sure that credit to previous studies is more precisely and more clearly given (see rebuttal to the specific suggestions of Reviewer 2 for a full list).
We apologize if this was not clear enough in the previous version.
With regard to the specific point raised here about the E-I split, we revised the text on page 2. With regard to the realistic units, we revised the text on page 3. Finally, we commented on the relation between our results and those of the study by Barrett et al. (2016) on page 16.
(2) It is not clear from an optimization perspective why the split into E and I neurons and following Dale's law would be beneficial. While the constraints of Dale's law are sensible (splitting the population in E and I neurons, and removing any non-Dalian connection), they are imposed from biology and not from any coding principles. A discussion of how this could be done would be much appreciated, and in the main text, this should be made clear.
We indeed removed non-Dalian connections because Dale’s law is a major constraint for biological plausibility. Our logic was to consider efficient coding within the space of networks that satisfy this and other constraints of biological plausibility. We did not intend to claim that removing the non-Dalian connections was the result of an analytical optimization. We clarified this in the revision (page 4).
(3) Related to the previous point, the claim that the network with split E and I neurons has a lower average loss than a 1 cell-type (1-CT) network seems incorrect to me. Only the E population coding error should be compared to the 1-CT network loss, or the sum of the E and I populations (not their average). In my author recommendations, I go more in-depth on this point.
We carefully considered these possibilities and decided to compare only the E population of the E-I model with the 1-CT model. In Fig. 8G (7C of the first submission), E neurons have a slightly higher error and cost compared to the 1-CT network. In the revision, we compared the loss of E neurons of the E-I model with the loss of the 1-CT model. Using this comparison, we found that the 1-CT network has a lower loss and is more efficient than the E neurons of the E-I model. We revised Figure 8H and the text on page 14 to address this point.
(4) While the paper is supposed to bring the balanced spiking networks they consider in a more experimentally relevant context, for experimental audiences I don't think it is easy to follow how the model works, and I recommend reworking both the main text and methods to improve on that aspect.
We carefully edited the text throughout the revised paper to make the presentation of the model as accessible as possible to a non-computational audience.
Assessment and context:
Overall, although much of the underlying theory is not necessarily new, the work provides an important addition to the field. The authors succeeded well in their goal of making the networks more biologically realistic, and incorporating aspects of energy efficiency. For computational neuroscientists, this paper is a good example of how to build models that link well to experimental knowledge and constraints, while still being computationally and mathematically tractable. For experimental readers, the model provides a clearer link between efficient coding spiking networks to known experimental constraints and provides a few predictions.
Thanks for these kind words. We revised the paper to make sure that these points emerge more clearly and in a more accessible way from the revised paper.
Recommendations for the authors:
Reviewer #1 (Recommendations For The Authors):
Referring to the major comments:
(1) Be upfront about particular modelling choices and why you made them; avoid talk of a "striking/surprising", etc. ability to explain data when this actually requires otherwise-arbitrary choices and auxiliary assumptions. Ideally, this nuance is already clear from the abstract.
We removed all the "striking/surprising" and similar expressions from the text.
We added to the Abstract the assumption of equal time constants of the stimulus and of the membrane of E and I neurons and the assumption of the independence of encoded stimulus features.
In revision, we performed additional analyses (joint parameter sweeps, Monte-Carlo joint sampling of all 6 model parameters) providing additional evidence that the network parameters in Table 1 capture reasonably well the optimal solution. These are reported on Figs. 2, 6I and 7J and in Results (pages 5, 11 and 13). See rebuttal to weaknesses of the public review of the Referee 2 for details.
(2) Make even more of an effort to acknowledge prior work on the importance of structured E-I and I-E connectivity.
We have revised the text (page 4) to better place our results within previous work on structured E-I and I-E connectivity.
(3) Be clear about the model's limitations and mention them throughout the text. This will allow readers to interpret your results appropriately.
We now comment more on model's limitations, in particular the simplifying assumption about the network's computation (page 16), the lack of E-E connectivity (page 3), the absence of long-term adaptation (page 10), and the simplification of only having one type of inhibitory neurons (page 16).
(4) Present your "predictions" for what they are: aspects of the model that can be made consistent with the existing data after some fitting. Except in the few cases where you make actual predictions, which deserve to be highlighted.
We followed the suggestion of the reviewer and distinguished cases where the model is consistent with the data (postdictions) from actual predictions, where empirical measurements are not available or not conclusive. We compiled a list of predictions and postdictions in response to point 4 of Reviewer 1. In the revision, we now comment on every property of the model as either reproducing a known property of biological networks (postdiction) or being a prediction. We improved the text in Results on pages 4, 5, 6, 7, 9, 10, 11, 12 and 13 to accommodate these requests.
Minor comments and recommendations
It's a sizable list, but most can be addressed with some text edits.
(1) The image captions should give more details about the simulations and analyses, particularly regarding sample sizes and statistical tests. In Figure 5, for example, it is unclear if the lines represent averages over multiple signals and, if so, how many. It's probably not a single realization, but if it is, this might explain the otherwise puzzling optimal number of three stimuli. Box plots visualize the distribution across simulation trials, but it's not clear how many. In Figure 7d, a star suggests statistical significance, but the caption does not mention the test or its results; the y-axis should also have larger limits.
All statistical results were computed on 100 or 200 simulation trials, depending on the figure, with a trial duration of 1 second of simulated time. To compute the statistical results in Fig. 1, we used 10 trials of 10 seconds each. Each trial consisted of M independent realizations of Ornstein-Uhlenbeck (OU) processes as stimuli, independent noise in the membrane potential and an independent draw of tuning parameters, so that the results generalize over specific realizations of these random variables. Realizations of the OU processes were independent across stimulus dimensions and across trials. We added this information to the caption of each figure.
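To make the trial structure concrete, the following minimal sketch generates M independent OU stimuli for a set of independent trials. The integration step, OU time constant and stationary standard deviation used here are illustrative assumptions, not necessarily the values given in Methods.

```python
# Minimal sketch of generating one trial of M independent Ornstein-Uhlenbeck
# stimuli via Euler-Maruyama; parameter values are illustrative only.
import numpy as np

def ou_stimuli(M=3, duration_ms=1000.0, dt_ms=0.1, tau_ms=10.0, sigma=1.0, rng=None):
    """Simulate M independent OU processes with time constant tau_ms and
    stationary standard deviation sigma."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(duration_ms / dt_ms)
    x = np.zeros((M, n_steps))
    noise_scale = sigma * np.sqrt(2.0 * dt_ms / tau_ms)
    for t in range(1, n_steps):
        x[:, t] = x[:, t - 1] * (1.0 - dt_ms / tau_ms) \
                  + noise_scale * rng.standard_normal(M)
    return x

# Independent realizations across trials, as in the simulations of the paper.
trials = [ou_stimuli(rng=np.random.default_rng(seed)) for seed in range(100)]
```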
The optimal number of M=3 stimuli is the result of measuring the performance of the network in 100 simulation trials (for each parameter value), thus following the same procedure as for all other parameters. Boxplots in Fig. 8G-H were also generated from results computed in 100 simulation trials, which we have now specified in the caption of the figure, together with the statistical test used for assessing significance (two-tailed t-test). We also enlarged the limits of Fig. 8H (7D in the previous version).
(2) The Oldenburg paper (reference 62) finds suppression of all but nearby neurons in response to two-photon stimulation of small neural ensembles (instead of single neurons, as in Chettih & Harvey). This isn't perfectly consistent with the model's results, even though the Oldenburg experiments seem more relevant given the model's small size, and strong connectivity/high connection probability between similarly tuned neurons. What might explain the potential mismatch?
We sincerely apologize for not having been precise enough on this point when comparing our model against Chettih & Harvey and Oldenburg et al. We corrected the sentence (page 6) to remove the claim that our model reproduces both.
We speculate that the discrepancy between perturbation responses in our model and the Oldenburg data may arise from the lack of E-E connectivity in our model. Synaptic connections between E neurons with similar selectivity could create enhancement instead of suppression between neuronal pairs with very similar tuning. We added a sentence about this in the section with perturbation experiments, “Competition across neurons with similar stimulus tuning emerging in efficient spiking networks” (page 7), where we discuss this limitation of our model. We feel that this example shows the utility of deriving perturbation results from our model, as not all networks with some degree of lateral inhibition show the same perturbation responses. Comparing our model's perturbation responses with experimental perturbation results thus has value for appreciating the strengths and limitations of our approach.
(3) "Previous studies optogenetically stimulated E neurons but did not determine whether the recorded neurons were excitatory or inhibitory " (p. 11). I believe Oldenburg et al. did specifically image excitatory neurons.
The reviewer is correct about Oldenburg et al. imaging specifically excitatory neurons. We have revised this part of the Discussion (page 15).
(4) The authors write that efficiency is particularly achieved where adaptation is stronger in E compared to I neurons (p. 7; Figure 4). Although this would be consistent with experimental data (the I neurons in the model seem akin to fast-spiking Pv+ cells), I struggle to see it in the figure. Instead, it seems like there are roughly two regimes. If either of the neuronal timescales is faster than the stimulus timescale, the optimisation fails. If both are at least as slow, optimisation succeeds.
We agree with the reviewer that the adaptation properties of our inhibitory neurons are compatible with Pv+ cells. What is essential for determining the dynamical regime of the network is not so much the relation to the time constant of the stimulus (t<sub>x</sub>) as the relation between the time constant of the population readout (t, which is also the membrane time constant) and the time constant of the single neuron (t<sub>r</sub><sup>y</sup> for y=E and y=I; see Eqs. 23, 25 or 29e). The relation between t and t<sub>r</sub><sup>y</sup> determines whether single neurons generate spike-triggered adaptation (t<sub>r</sub><sup>y</sup> > t) or spike-triggered facilitation (t<sub>r</sub><sup>y</sup> < t; see Table 4). In regimes with facilitation in either E or I neurons (or both), network performance strongly deteriorates compared to regimes with adaptation (Fig. 5A).
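For reference, the regime classification above can be written compactly; this is only a restatement of the condition given in the text, with $\tau$ denoting the membrane/readout time constant and $\tau_r^{y}$ the single-neuron time constant:

$$
\tau_r^{y} > \tau \;\Rightarrow\; \text{spike-triggered adaptation},
\qquad
\tau_r^{y} < \tau \;\Rightarrow\; \text{spike-triggered facilitation},
\qquad y \in \{E, I\}.
$$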
Beyond adaptation leading to better performance, we also found different effects of adaptation in E and I neurons. We acknowledge that the difference between these effects was difficult to see in Fig. 4B of the first submission. We have now replotted the results from the former Fig. 4B to focus on the adaptation regime only (since Fig. 5A already establishes that this is the regime with better performance). We also added figures showing the differential effect of adaptation in each cell type on the firing rate and on the average loss (Fig. 5C-D). Fig. 5B and C (top plots) show that with adaptation in E neurons, the error and the loss increase more slowly than with adaptation in I neurons. Moreover, the firing rate in both cell types decreases with adaptation in E neurons, while this is not the case with adaptation in I neurons (Fig. 5D). These results are added to the figure panels specified above and discussed in the text on page 9.
To clarify the relation between the neuronal and stimulus timescales, we also added an analysis of network performance as a function of the time constant of the stimulus t<sub>x</sub> (Supplementary Fig. S5C-E). We found that the model's performance is optimal when the time constant of the stimulus is close to the membrane time constant t. This result is expected, because the equality of these time constants was imposed in our analytical derivation of the model (t<sub>x</sub> = t). We see a similar decrease in performance for values of t<sub>x</sub> that are either faster or slower than the membrane time constant (Supplementary Fig. S5C, top). These results are added to the figure panels specified above and discussed in the text on page 13.
(5) A key functional property of cortical interneurons is their lower stimulus selectivity. Does the model replicate this feature?
We think that whether I neurons are less selective than E neurons is still an open question. A number of recent empirical studies reported that the selectivity of I neurons is comparable to that of E neurons (see, e.g., Kuan et al. Nature 2024, Runyan et al. Neuron 2010, Najafi et al. Neuron 2020). In our model, the optimal solution prescribes a precise structure in recurrent connectivity (see Eq. 24 and Fig. 1C(ii)), and this structured connectivity endows I neurons with stimulus selectivity. To show this, we added plots of example tuning curves and the distribution of the selectivity index across E and I neurons (Fig. 8E-F) and described these new results in Results (page 14). Tuning curves in our network were similar to those computed in a previous work that addressed stimulus tuning in efficient spiking networks (Barrett et al. 2016). We evaluated tuning curves using M=3 constant stimulus features, varying one feature while keeping the other two fixed. We provide details on how the tuning curves and the selectivity index were computed in a new Methods subsection (“Tuning curves and selectivity index”) on page 50.
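As an illustration of this procedure, the sketch below computes tuning curves by varying one of the M=3 constant features while fixing the other two, and then computes a selectivity index per neuron. The firing-rate function is a stand-in for the spiking simulation, and the (max - min)/(max + min) index is an illustrative choice rather than necessarily the exact definition given in our Methods subsection.

```python
# Sketch of tuning-curve and selectivity-index computation. The rate function is
# a placeholder for the trial-averaged firing rates obtained from simulating the
# efficient E-I network, and the selectivity index below is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, M = 400, 3
W = rng.normal(size=(n_neurons, M))      # stand-in tuning weights, fixed per network

def trial_averaged_rates(stimulus):
    """Placeholder for trial-averaged firing rates (spikes/s) in response to a
    constant M-dimensional stimulus; replace with the network simulation."""
    return np.maximum(0.0, W @ stimulus) * 5.0

feature_values = np.linspace(-2.0, 2.0, 21)   # values of the varied feature
fixed_features = np.array([0.5, -0.5])        # the two features kept fixed

# Tuning curves: one row per neuron, one column per value of the varied feature.
tuning = np.stack([trial_averaged_rates(np.concatenate(([v], fixed_features)))
                   for v in feature_values], axis=1)

# Illustrative selectivity index in [0, 1] for each neuron.
r_max, r_min = tuning.max(axis=1), tuning.min(axis=1)
selectivity = (r_max - r_min) / (r_max + r_min + 1e-12)
```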
(6) The final panels of Figure 4 are presented as an approach to test the efficiency of biological networks. The authors seem to measure the instantaneous (and time-averaged) E-I balance while varying the adaptation parameter and then correlate this with the loss. If that is indeed the approach (it's difficult to tell), this doesn't seem to suggest a tractable experiment. Also, the conclusion is somewhat obvious: the tighter the single neuron balance, the fewer unnecessary spikes are fired. I recommend that the authors clearly explain their analysis and how they envision its application to biological data.
We indeed measured the instantaneous (and time-averaged) E-I balance while varying the adaptation parameters and then correlated it with the loss. We did not want to imply that the latter panels of Figure 4 are a means to test the efficiency of biological networks, or that we are suggesting new and possibly unfeasible experiments. We see this analysis as a way to better understand conceptually how spike-triggered adaptation helps the network’s coding efficiency, by tightening the E-I balance in a way that reduces the number of unnecessary spikes. We apologize if the previous text was confusing in this respect. We have now removed the initial paragraph of the former Results subsection (including the subsection title) and added new text about the different effects of adaptation in E and I neurons on page 9. We also thoroughly revised Figure 5.
(7) The external stimuli are repeatedly said to vary (or be tracked) across "multiple time scales", which might inadvertently be interpreted as (i) a single stimulus containing multiple timescales or (ii) simultaneously presented stimuli containing different timescales. These scenarios are potential targets for efficient coding through neuronal adaptation (reference 21 in the manuscript and Pozzorini et al. Nat. Neuro. 2013), but they are not addressed in the current model. I recommend the authors clarify their statements regarding timescales (and if they're up for it, acknowledge this as a limitation).
We thank the reviewer for bringing up this interesting point. To address the second point raised by the Reviewer (simultaneously presented stimuli containing multiple timescales), we performed new analyses to test the model with simultaneously presented stimuli that have different timescales. We found that the model encodes such stimuli efficiently. We tested the case of a 3-dimensional stimulus where each dimension is an Ornstein-Uhlenbeck process with a different time constant. More precisely, we kept the time constant of the first dimension fixed (at 10 ms) and varied the time constants of the second and third dimensions such that the time constant of the third dimension was twice that of the second. We plotted the encoding error in every stimulus dimension for E and I neurons (Fig. 8B, left plot) as well as the encoding error and the metabolic cost averaged across stimulus dimensions (Fig. 8B, right plot). The results are briefly described in the text on page 13.
Regarding case (i) (a single stimulus containing multiple timescales), we considered two possibilities. One possibility is that the timescales of the stimulus are separable; in this case, a single stimulus containing several timescales can be decomposed into several stimuli with a single timescale each. As we assign a new set of weights to each dimension of the decomposed stimulus, this case is similar to case (ii) that we already addressed. The other possibility is that the timescales of the stimulus cannot be separated. This case is not covered in the present analysis and we listed it among the limitations of the model. We revised the text (page 13) around the question of multiple timescales and included the citation of Pozzorini et al. (2013).
(8) It is claimed that the model uses a mixed code to represent signals, citing reference 47 (Rigotti et al., Nature 2013). But whereas the model seems to use linear mixed selectivity, the Rigotti reference highlights the virtues of nonlinear mixed selectivity. In my understanding, a linearly mixed code does not enjoy the same benefits since it’s mathematically equivalent to a non-mixed code (simply rotate the readout matrix). I recommend that the authors clarify the type of selectivity used by their model and how it relates to the paper(s) they cite.
The reviewer is correct that our selectivity is a linear mixing of input variables, and differs from the selectivity in Rigotti et al. (2013) which is non-linear. We revised the sentence on page 4 to clarify better that the mixed selectivity we consider is linear and we removed Rigotti’s citation.
(9) Reference 46 is cited as evidence that leaky integration of sensory features is a relevant computation for sensory areas. I don’t think this is quite what the reference shows. Instead, it finds certain morphological and electrophysiological differences between single pyramidal neurons in the primary visual cortex compared to the prefrontal cortex. Reference 46 then goes on to speculate that these are differences relevant to sensory computation. This may seem like a quibble, but given the centrality of the objective function in normative theories, I think it's important to clarify why a particular objective is chosen.
We agree that our reference to Amatrudo et al. was not the best choice and that the previous text was confusing; we have tried to improve its clarity. We looked at the previous theoretical efficient coding papers introducing this leaky integration and could not find in that earlier theoretical work a justification of this assumption based on experimental papers. However, there is evidence that neurons in sensory structures and in cortical association areas respond to time-varying sensory evidence by summing stimuli over time with a weight that decreases steadily going back in time from the time of firing, which suggests that neurons integrate time-varying sensory features. In many cases, these integration kernels decay approximately exponentially going back in time, and several models that successfully explain perceptual readouts of neural activity assume leaky integration. This suggests that the mathematical approximation of leaky integration of sensory evidence, though possibly simplistic, is reasonable. We revised the text in this respect (page 2).
(10) The definition of the objective function uses beta as a tuning parameter, but later parts of the text and figures refer to a parameter g_L which might only be introduced in the convex combination of Eq. 40a.
This is correct. Parameter optimization was performed on a weighted sum of the average encoding error and cost as given by Eq. 39a (Eq. 40a in the first submission), with the weighting g<sub>L</sub> of the error versus the cost, and not with the beta that is part of the objective in Eq. 10. The convex combination in Eq. 39a allowed us to find a set of optimal parameters within biologically realistic parameter ranges, including realistic values for the firing threshold. The average encoding error and metabolic cost (the two terms on the right-hand side of Eq. 39a, without the weighting by g<sub>L</sub>) in our network are of the same order (see Fig. 8G for the E-I model, where these values are plotted separately for the optimal network). Weighting the cost with the optimal beta, which is in the range of ~10, would have yielded a network that optimizes almost exclusively the metabolic cost and would have biased the results towards solutions with poor encoding accuracy.
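As a compact sketch of this criterion (the exact normalization of the two terms is given by Eq. 39a in Methods; the convex-combination form below is written out only for clarity and should be read as an approximation of that equation):

$$
\mathcal{L}(g_L) \;=\; g_L \,\langle \mathrm{RMSE} \rangle \;+\; (1 - g_L)\,\langle \mathrm{MC} \rangle,
\qquad 0 \le g_L \le 1,
$$

where the optimal parameters are those minimizing $\mathcal{L}$ for a given weighting $g_L$.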
To document more fully how the choice of the weighting of the error with the cost (g<sub>L</sub>) affects the optimal parameters, we added a new analysis (Fig. 8D and Supplementary Fig. S4 A-D and H) showing the optimal parameters as a function of this weighting. We commented on these results in the text on pages 9-11 and 12. For further details, please see also the reply to point 1 of Reviewer 1.
(11) Figure 1J: "In E neurons, the distribution of inhibitory and of net synaptic inputs overlap". In my understanding, they are in fact identical, and this is by construction. It might help the reader to state this.
We apologize for an unclear statement. In E neurons, net synaptic current is the sum of the feedforward current and of recurrent inhibition (Eq. 29c and Eq. 42). With our choice of tuning parameters that are symmetric around zero and with stimulus features that have vanishing mean, the mean of the feedforward current is close to zero. Because of this, the mean of the net current is negative and is close to the mean of the inhibitory current. We have clarified this in the text (page 5).
(12) A few typos:
- p1. "Minimizes the encoding accuracy" should be "maximizes..."
- p1: "as well the progress" should be something like "as well as the progress"
- p.11: "In recorded neurons where excitatory or inhibitory", "where" should be "were".
- Fig 3: missing parentheses (B).
- Fig4B: the 200 ticks on the y-scale are cut off.
- Panel Fig. 5a: "stimulus" should be "stimuli".
- Ref 24 "Efficient andadaptive sensory codes" is missing a space.
- p. 26: "requires" should be "required".
- On several occasions, the article "the" is missing.
We thank the reviewer for kindly pointing out the typos that we now corrected.
Reviewer #2 (Recommendations For The Authors):
I would like to give the authors more details about the two main weaknesses discussed above, so that they may address specific points in the paper. First, there is the relation to previous work. Several published articles have presented very similar results to those discussed here, including references 5, 26, 28, 32, 33, 42, 43, 48, and an additional reference not cited by the authors (Calaim et al. 2022 eLife e73276). This includes:
(1) Derivation of an E-I efficient spiking network, which is found in refs. 28, 42, 43, and 48. This is not reflected in the text: e.g., "These previous implementations, however, had neurons that did not respect Dale's law" (Introduction, pg. 1); "Unlike previous approaches (28, 48), we hypothesize that E and I neurons have distinct normative objectives...". The authors should discuss how their derivation compares to these.
We have now fully clarified on page 3 that our model builds on the seminal previous works that introduced E-I networks with efficient coding (Supplementary text in Boerlin et al. 2013, Chalk et al. 2016, Barrett et al. 2016).
(2) Inclusion of a slow adaptation current: I believe this also appears in a previous paper (Gutierrez & Deneve 2019, ref. 33) in almost the exact same form, and is again not reflected in the text: "The strength of the current is proportional to the difference in inverse time constants ... and is thus absent in previous studies assuming that these time constants are equal (... ref. 33). Again, the authors should compare their derivation to this previous work.
We thank the reviewer for pointing this out. We sincerely apologize if our previous version did not recognize sufficiently clearly that the previous work of Gutierrez and Deneve (eLife 2019; ref 33) introduced first the slow adaptation current that is similar to spike-triggered adaptation in our model. We have made sure that the revised text recognizes it more clearly. We also explained better what we changed or added with respect to this previous work (see revised text on page 8).
The work by Gutierrez and Deneve (2019) emphasizes the interplay between a single-neuron property (an adapting current in single neurons) and a network property (network-level coding through structured recurrent connections). They use a network that does not distinguish E and I neurons. Our contribution instead focuses on adaptation in an E-I network. To improve the presentation following the Reviewer’s comment, we now better emphasize the differential effect of adaptation in E and in I neurons in the revision (Fig. 5B-D). Moreover, Gutierrez and Deneve studied the effect of adaptation on slower time scales (1 or 2 seconds), while we study adaptation on a finer time scale of tens of milliseconds. The revised text detailing this is reported on page 8.
(3) Background currents and physical units: Pg. 26: "these models did not contain any synaptic current unrelated to feedforward and recurrent processing" and "Moreover previous models on efficient coding did not thoroughly consider physical units of variables" - this was briefly described in ref. 28 (Boerlin et al. 2013), in which the voltage and threshold are transformed by adding a common constant, and additional aspects of physical units are discussed.
It is correct that Boerlin et al (2013) suggested adding a common constant to introduce physical units. We now revised the text to make clearer the relation between our results and the results of Boerlin et al. (2013) (page 3). In our paper, we built on Boerlin et al. (2013) and assigned physical units to computational variables that define the model's objective (the targets, the estimates, the metabolic constant, etc.). We assigned units to computational variables in such a way that physical variables (such as membrane potential, transmembrane currents, firing thresholds and resets) have the correct physical units. We have now clarified how we derived physical units in the section of Results where we introduce the biophysical model (page 3) and specified how this derivation relates to the results in Boerlin et al. (2013).
(4) Voltage correlations, spike correlations, and instantaneous E/I balance: this was already pointed out in Boerlin et al. 2013 (ref 28; from that paper: "Despite these strong correlations of the membrane potentials, the neurons fire rarely and asynchronously") and others including ref. 32. The authors mention this briefly in the Discussion, but it should be more prominent that this work presents a more thorough study of this well-known characteristic of the network.
We agree that it would be important to comment on how our results relate to these results in Boerlin et al. (2013). It is correct that in Boerlin et al. (2013) neurons have strong correlations in the membrane potentials, but fire asynchronously, similarly to what we observe in our model. However, asynchronous dynamics in Boerlin et al. (2013) strongly depends on the assumption of instantaneous synaptic transmission and time discretization, with a “one spike per time bin” rule in numerical implementation. This rule enforces that at most one spike is fired in each time bin, thus actively preventing any synchronization across neurons. If this rule is removed, their network synchronizes, unless the metabolic constant is strong enough to control such synchronization to bring it back to asynchronous regime (see ref. 36). Our implementation does not contain any specific rule that would prevent synchronization across neurons. We now cite the paper by Boerlin and colleagues and briefly summarize this discussion when we describe the result of Fig. 3D on page 7.
(5) Perturbations and parameters sweep: I found one previous paper on efficient spiking networks (Calaim et al. 2022) which the authors did not cite, but appears to be highly relevant to the work presented here. Though the authors perform different perturbations from this previous study, they should ideally discuss how their findings relate to this one. Furthermore, this previous study performs extensive sweeps over various network parameters, which the authors might discuss here, when relevant. For example, on pg. 8, the authors write “We predict that, if number of neurons within the population decreases, neurons have to fire more spikes to achieve an optimal population readout” – this was already shown in Calaim et al. 2022 Figure 5, and the authors should mention if their results are consistent.
We apologize for not being aware of Calaim et al. (2022) when we submitted the first version of our paper. This important study is now cited in the revised version. We have now, as suggested, performed sweeps of multiple parameters inspired by the work of Calaim et al. (2022). This new analysis is described extensively in the reply to the Weaknesses in the Public Review of Reviewer 2, is found in Figs. 2, 6I and 7J, and is described on pages 5, 11 and 13.
The Reviewer is also correct that the compensation mechanism that applies when changing the ratio of E-I neuron numbers is similar to the one described in Barrett et al. (2016) and related to our claim “if number of neurons within the population decreases, neurons have to fire more spikes to achieve an optimal population readout”. We have now added (page 11) that this prediction is consistent with the finding of Barrett et al. (2016).
With regard to the dependence of optimal coding properties on the number of neurons, we have tried to better describe the similarities and differences between our work and that of Calaim et al., as well as the work of Barrett et al. (2016), which reports highly relevant results. These additional considerations are summarized in a paragraph of the Discussion (page 16).
(6) Overall, the authors should distinguish which of their results are novel, which ones are consistent with previous work on efficient spiking networks, and which ones are consistent in general with network implementations of efficient and sparse coding. In many of the above cases, this manuscript goes into much more depth and study of each of the network characteristics, which is interesting and commendable, but this should be made clear. In clarifying the points listed above, I hope that the authors can better contextualize their work in relation to previous studies, and highlight what are the unique characteristics of the model presented here.
We made a number of clarifications of the text to provide better contextualization of our model within existing literature and to credit more precisely previous publications. This includes commenting on previous studies that introduced separate objective functions of E and I neurons (page 2), spike-triggered adaptation (page 8), physical units (page 3), and changes in the number of neurons in the network (page 16).
Next, there are the claims of optimal parameters. As explained on pg. 35 (criterion for determining optimal model parameters), it appears to me that they simply vary each parameter one at a time around the optimal value. This argument appears somewhat circular, as they would need to know the optimal parameters before starting this sweep. In general, I find these optimality considerations to be the most interesting and novel part of the paper, but the simulations are relatively limited, so I would ask the authors to either back them up with more extensive parameter sweeps that consider covariations in different parameters simultaneously (as in Calaim et al. 2022). Furthermore, the authors should make sure that they are not breaking any of the required relationships between parameters necessary for the optimization of the loss function. Again, some of the results (such as coding error not being minimized with zero metabolic cost) suggests that there might be issues here.
We thank the reviewer for this insightful suggestion. We have now added a joint sweep of all relevant model parameters using a Monte-Carlo parameter search with 10,000 iterations. We randomly drew parameter configurations from predetermined parameter ranges that are detailed in the newly added Table 2; parameters were sampled from a uniform distribution. We varied all six model parameters studied in the paper (metabolic constant, noise intensity, time constants of single E and I neurons, ratio of E to I neurons, and ratio of the mean I-I to E-I connectivity). We present these results in the new Figure 2. We did not find any set of parameters with a lower loss than the parameters in Table 1 when the weighting of the error relative to the cost was in the range 0.4 < g_L < 0.81 (Fig. 2C). While our large but finite Monte-Carlo random sampling does not fully prove that the configuration we selected as optimal (Table 1) is a global optimum, it shows that this configuration is highly efficient. Further, and as detailed in the rebuttal to the Weaknesses of the Public Review of Referee 2, analyses of the near-optimal solutions are compatible with the notion (resulting from the joint parameter sweeps that we added to Figures 6 and 7) that network optimality may be influenced by joint covariations in parameters. These new results are reported in Results (pages 5, 11 and 13) and in Figures 2, 6I and 7J.
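To make the procedure concrete, a minimal sketch of this kind of Monte-Carlo random search is given below. It assumes a hypothetical simulate_network wrapper that runs the network for one parameter configuration and returns its encoding error (RMSE) and metabolic cost; the parameter names correspond to the six quantities varied in the paper, but the ranges shown are illustrative placeholders rather than the values of Table 2, and the weighted-average form of the loss is a simplifying assumption, not the exact objective of the Methods.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter ranges (placeholders, not the actual Table 2 values).
param_ranges = {
    "metabolic_constant": (1.0, 30.0),
    "noise_intensity":    (0.0, 10.0),
    "tau_E":              (5.0, 50.0),   # time constant of single E neurons (ms)
    "tau_I":              (5.0, 50.0),   # time constant of single I neurons (ms)
    "ratio_EI_neurons":   (1.0, 8.0),    # N_E : N_I
    "ratio_II_EI_conn":   (0.5, 6.0),    # mean I-I : E-I connectivity
}

def loss(rmse, cost, g_L):
    # Assumed form: weighted average of encoding error and metabolic cost.
    return g_L * rmse + (1.0 - g_L) * cost

best = None
for _ in range(10_000):                       # 10,000 Monte-Carlo iterations
    params = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
    rmse, cost = simulate_network(**params)   # hypothetical simulation wrapper
    current = loss(rmse, cost, g_L=0.7)
    if best is None or current < best[0]:
        best = (current, params)

print("lowest loss found:", best[0])
print("corresponding parameters:", best[1])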
Some more specific points:
(1) In general, I find it difficult to understand the scaling of the RMSE, cost, and loss values in Figures 4-7. Why are RMSE values in the range of 1-10, whereas loss and cost values are in the range of 0-1? Perhaps the authors can explicitly write the values of the RMSE and loss for the simulation in Figure 1G as a reference point.
The encoding error (RMSE), metabolic cost (MC) and average loss for a well-performing network are within the range of 1-10 (see Fig. 8G, or 7C in the first submission). To ease the visualization of results, we normalized the cost and the loss in Figs. 6-8 in order to plot them in the same figure (while the computation of the optima follows Eq. 39 and is done without normalization). As suggested by the reviewer, we have now explicitly written the values of the RMSE, MC and the average loss (non-normalized) for the simulation in Fig. 1D on page 5. We have also revised Fig. 4 and now show the absolute rather than the relative values of the RMSE and the MC (metabolic cost).
(2) Optimal E-I neuron ratio of 4:1 and efficacy ratio of 3:1: besides being unintuitive in relation to previous work, are these two optimal settings related to one another? If there are 4x more excitatory neurons than inhibitory neurons, won't this affect the efficacy ratio of the weights of the two populations? What happens if these two parameters are varied together?
Thanks for this insightful point. Indeed, the optima of these two parameters are interdependent and positively correlated: if we decrease the E-I neuron ratio, the optimal efficacy ratio decreases as well. To better show this relation, we added figures with a 2-dimensional parameter search (Fig. 7J) in which we varied the two ratios jointly. The red cross on the right-hand figure marks the optimal ratios used as optimal parameters in our study. These findings are discussed on page 13.
(3) Optimal dimensionality of M=[1,4]: Again, previous work (Calaim et al. 2022) would suggest that efficient spiking networks can code for arbitrary dimensional signals, but that performance depends on the redundancy in the network - the more neurons, the better the coding. From this, I don't understand how or why the authors find a minimum in Figure 7B. Why does coding performance get worse for small M?
We optimized all model parameters with M=3 and this is the reason why M=3 is the optimal number of inputs when we vary this parameter. Our network shows a distinct minimum of the encoding error as a function of the stimulus dimensionality for both E and I neurons (Fig. 8C, top). This minimum is reflected in the minimum of the average loss (Fig. 8C, bottom). The minimum of the loss is shifted (or biased) by the metabolic cost, with strong weighting of the cost lowering the optimal number of inputs. This is discussed on pages 13-14.
Here is a list of other, more minor points that the authors can consider addressing to make the results and text clearer:
(1) Feedforward efficient coding models: in the introduction (pg. 1) and discussion (pg. 11) it is mentioned that early efficient coding models, such as that of Olshausen & Field 96, were purely feedforward, which I believe to be untrue (e.g., see Eq. 2 of O&F 96). Later models made this even more explicit (Rozell et al. 2008). Perhaps the authors can either clarify what they meant by this, or downplay this point.
We sincerely apologize for the oversight in the previous version of the text. We agree with the reviewer that the model in Olshausen and Field (1996) indeed defines a network with recurrent connections, and the same type of recurrent connectivity has been used by Rozell et al. (2008, 2013). The structure of the connectivity in Olshausen and Field (as well as in Rozell et al. (2008)) is closely related to the structure of the connectivity that we derived in our model. We have corrected the text in the introduction (page 1) to remove these errors.
(2) Pg. 2 - The authors state: "We draw tuning parameters from a normal distribution...", but in the methods, it states that these are then normalized across neurons, so perhaps the authors could add this here, or rephrase it to say that weights are drawn uniformly on the hypersphere.
We rephrased the description of how weights were determined (page 2).
(3) Pg. 2 - "We hypothesize the time-resolved metabolic cost to be proportional to the estimate of a momentary firing rate of the neural population" - from what I can see, this is not the usual population rate, which would be an average or sum of rates across the population.
Indeed, the time-dependent metabolic cost is not the population rate (in the sense of the sum of instantaneous firing rates across neurons), but is proportional to it by a factor of 1/t_r. More precisely, we can define the instantaneous estimate of the firing rate of a single neuron i as z_i(t) = (1/t_r) r_i(t), with r_i(t) as in Eq. 7. We have clarified this in the revised text on page 3.
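In compact form, reading "proportional to the population rate" as a sum over the single-neuron rate estimates (our paraphrase of the definitions above, not the exact expression in the Methods):

z_i(t) = \frac{1}{t_r}\, r_i(t), \qquad \mathrm{cost}(t) \;\propto\; \sum_{i=1}^{N} z_i(t) = \frac{1}{t_r} \sum_{i=1}^{N} r_i(t).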
(4) Pg. 3: "The synaptic strength between two neurons is proportional to their tuning similarity if the tuning similarity is positive" - based on the figure and results, this appears to be the case for I-E, E-I, and I-I connections, but not for E-E connections. This should be clarified in the text. Furthermore, one reference given in the subsequent sentence (Ko et al. 2011, ref. 51), is specifically about E-E connections, so doesn't appear to be relevant here.
We have now specified that Eq. 24 does not describe E-E connections. We also agree that the reference (Ko et al. 2011) did not adequately support our claim; we thus removed it and revised the text on page 3 accordingly.
(5) Pg. 3: "the relative weight of the metabolic cost over the encoding error controls the operating regime of the network" and "and an operating regime controlled by the metabolic constant" - what do you mean by operating regime here?
We used the expression “operating regime” in the sense of a dynamical regime of the network. However, we agree that this expression may be confusing and we removed it in revision.
(6) Pg. 3: "Previous studies interpreted changes of the metabolic constant beta as changes to the firing thresholds, which has less biological plausibility" - can the authors explain why this is less plausible, or ideally provide a reference for it?
In biological networks, global variables such as brain state can strongly modulate the way neural networks respond to a feedforward stimulus. These variables influence neural activity in at least two distinct ways. One is by changing non-specific synaptic inputs to neurons, which is a network-wide effect (Destexhe and Pare, Nature Reviews Neurosci. 2003). This is captured in our model by changing the strength of the mean and fluctuations in the external currents. Beyond modulating synaptic currents, another way of modulating neural activity is by changing cell-intrinsic factors that modulate the firing threshold in biological neurons (Pozzorini et al. 2013). Previous studies on spiking networks with efficient coding interpreted the effect of the metabolic constant as changes to the firing threshold (Koren and Deneve, 2017, Gutierrez and Deneve 2019), which corresponds to cell-intrinsic factors. Here we instead propose that the metabolic constant modulates the neural activity by changing the non-specific synaptic input, homogeneously across all neurons in the network. Interpreting the metabolic constant as setting the mean of the non-specific synaptic input was necessary in our model to find an optimal set of parameters (as in Table 1) that is also biologically plausible. We revised the text accordingly (page 4).
(7) Pg. 4: Competition across neurons: since the model lacks E-E connectivity, it seems trivial to conclude that there is competition through lateral inhibition, and it can be directly determined from the connectivity. What is gained from running these perturbation experiments?
We agree that a reader with a good understanding of sparse / efficient coding theory can tell that there is competition across neurons with similar tuning already from the equation for the recurrent connectivity (Eq. 24). However, we presume that not all readers can see this from the equations and that it is worth showing this with simulations.
Following the reviewer's comment, we have now downplayed the result about the model manifesting lateral inhibition in general on page 6. We have also removed its extensive elaboration in Discussion.
One reason for running the perturbation experiments was to test to what extent the optimal model qualitatively replicates empirical findings, in particular the single-neuron perturbation experiments of Chettih and Harvey (2019), without specifically tuning any of the model parameters. We found that the model qualitatively reproduces the main empirical findings without being tuned to replicate the data. We revised the text on page 5 accordingly.
A further reason to run these experiments was to refine predictions about the minimal amount of connectivity structure that generates perturbation response profiles qualitatively compatible with empirical observations. To establish this, we performed perturbation experiments while removing the connectivity structure of particular connectivity sub-matrices (E-I, I-I or I-E; Fig. S3F). This allowed us to determine which connectivity matrix has to be structured to observe results that qualitatively match empirical findings. We found that the structure of E-I and I-E connectivity is necessary, but not the structure of I-I connectivity. Finally, we tested a partial removal of the connectivity structure, in which we replaced the precise (and optimal) connectivity structure with a simpler connectivity rule. In the optimal connectivity, the connection strength is proportional to the tuning similarity. The simpler connectivity rule, in contrast, only specifies that neurons with similar tuning share a connection; beyond this, the connection strength is random. Running perturbation experiments in a network obeying this simpler connectivity rule still qualitatively replicated the empirical results of Chettih and Harvey (2019). This is shown in Supplementary Fig. S2F and described on page 8.
(8) Pg. 4: "the optimal E-I network provided a precise and unbiased estimator of the multidimensional and time-dependent target signal" - from previous work (e.g., Calaim et al. 2022), I would guess that the estimator is indeed biased by the metabolic cost. Why is this not the case here? Did you tune the output weights to remove this bias?
Output weights were not tuned to remove the bias. In Fig. 1H of the first submission we plotted the bias for the network that minimizes the encoding error. We forgot to specify this in the text and figure caption, for which we apologize. We have now replaced this figure with a new one (Fig. 1E) in which we plot the bias of the network minimizing the average loss (with parameters as in Table 1). The bias of the network minimizing the error is close to zero, B^E = 0.02 and B^I = 0.03. The bias of the network minimizing the loss is stronger and negative, B^E = -0.15 and B^I = -0.34. In the Results, we now report the bias of both networks (i.e., the one optimizing the encoding error and the one optimizing the loss). We also added a plot showing the trial-averaged estimates and the time-dependent bias in each stimulus dimension (Supplementary Fig. S1F). Note that the network minimizing the encoding error requires a lower metabolic constant (β = 6) than the network optimizing the loss (β = 14); however, the optimal metabolic cost in both networks is nonzero. We revised the text and explained these points on page 5.
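As an illustration, a bias of this kind could be computed as the signed deviation of the readout from the target, averaged over time, stimulus dimensions and trials; the sketch below makes that assumption, and the exact definition used in the paper may differ.

import numpy as np

def estimator_bias(x_hat, x):
    """Average signed deviation of the readout from the target.

    x_hat : array of shape (T, M), population readout (estimate of the target signal)
    x     : array of shape (T, M), target signal
    Returns a scalar; values near 0 indicate an unbiased estimator.
    """
    return float(np.mean(x_hat - x))

# Hypothetical usage with readouts of the E and I populations:
# B_E = estimator_bias(x_hat_E, x_target)
# B_I = estimator_bias(x_hat_I, x_target)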
(9) Pg. 4: "The distribution of firing rates was well described by a log-normal distribution" - I find this quite interesting, but it isn't clear to me how much this is due to the simulation of a finite-time noisy input. If the neurons all have equal tuning on the hypersphere, I would expect that the variability in firing is primarily due to how much the input correlates with their tuning. If this is true, I would guess that if you extend the duration of the simulation, the distribution would become tighter. Can you confirm that this is the stationary distribution of the firing rates?
We have now simulated the network with a longer simulation time (10 seconds of simulated time instead of the 2 seconds used previously) and also repeated the simulation across 10 trials to report a result that is general across random draws of tuning parameters (previously a single set of tuning parameters was used). The reviewer is correct that the distribution of firing rates of E neurons becomes tighter with longer simulation time, but the distributions remain log-normal. We also recomputed the coefficient of variation (CV) using the same procedure. We updated these plots in Fig. 1F.
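A sketch of how the firing-rate distribution and the coefficient of variation could be estimated from simulated spike trains is given below; the data layout and the log-normal fit are illustrative assumptions, not the exact procedure of the Methods.

import numpy as np
from scipy import stats

def rate_distribution_and_cv(spike_trains, duration):
    """spike_trains: list of 1-D arrays of spike times (s), one per neuron;
    duration: simulated time (s). Returns log-normal fit parameters of the
    firing-rate distribution and the per-neuron CV of inter-spike intervals."""
    rates = np.array([len(st) / duration for st in spike_trains])
    shape, loc, scale = stats.lognorm.fit(rates[rates > 0], floc=0)

    cvs = []
    for st in spike_trains:
        isi = np.diff(np.sort(st))
        if isi.size > 1 and isi.mean() > 0:
            cvs.append(isi.std() / isi.mean())
    return (shape, scale), np.array(cvs)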
(10) Pg. 4: "We observed a strong average E-I balance" - based on the plots in Figure 1J, the inputs appear to be inhibition-dominated, especially for excitatory neurons. So by what criterion are you calling this strong average balance?
The reviewer is correct that the net synaptic input to single neurons in our optimal network shows excess inhibition, and that the network is thus inhibition-dominated; we revised this sentence (page 5) accordingly.
(11) Pg. 4: Stronger instantaneous balance in I neurons compared to E neurons - this is curious, and I have two questions: (1) can the authors provide any intuition or explanation for why this is the case in the model? and (2) does this relate to any literature on balance that might suggest inhibitory neurons are more balanced than excitatory neurons?
In our model, I neurons receive excitatory and inhibitory synaptic currents through synaptic connections that are precisely structured. E neurons receive structured inhibition and a feedforward current. The feedforward current consists of M=3 independent OU processes projected onto the tuning vectors of E neurons, w_i^E. We speculate that because the synaptic inhibition and the feedforward current are different processes, and the 3 OU inputs are independent, it is harder for E neurons to achieve an instantaneous balance as precise as that in I neurons. While we think that the feedforward current in our model reflects biologically plausible sensory processing, it is not a mechanistic model of feedforward processing. In biological neurons, real feedforward signals are implemented as a series of complex feedforward synaptic inputs from upstream areas, while the feedforward current in our model is a sum of stimulus features, and is thus a simplification of the biological process that generates feedforward signals. We speculate that a mechanistic implementation of the feedforward current could increase the instantaneous balance in E neurons. Furthermore, the presence of E-E connections could potentially also increase the instantaneous balance in E neurons. We revised the Discussion to address these important questions, which lie on the side of model limitations and could be advanced in future work. We could not find any empirical evidence directly comparing the instantaneous balance in E versus I neurons. We have reported these considerations in the revised Discussion (page 16).
(12) Pg. 5, comparison with random connectivity: "Randomizing E-I and I-E connectivity led to several-fold increases in the encoding error as well as to significant increases in the metabolic cost" and Discussion, pg. 11: "the structured network exhibits several fold lower encoding error compared to unstructured networks": I'm wondering if these comparisons are fair. First, regarding activity changes that affect the metabolic cost - it is known that random balanced networks can have global activity control, so it is not straightforward that randomizing the connectivity will change the metabolic cost. What about shuffling the weights but keeping an average balance for each neuron's input weights? Second, regarding coding error, it is trivial that random weights will not map onto the correct readout. A fairer comparison, in my opinion, would at least be to retrain the output weights to find the best-fitting decoder for the three-dimensional signal, something more akin to a reservoir network.
Thank you for raising these interesting questions. The purpose of comparing networks with and without connectivity structure was to observe causal effects of the connectivity structure on the neural activity. We agree that the effect on the encoding error is close to trivial, because shuffling the connectivity weights decouples the neural dynamics from the decoding weights. We have carefully considered the Reviewer's suggestions on how to better compare the performance of structured and unstructured networks.
In reply to the first point, we followed the reviewer's suggestion and compared the optimal network with a shuffled network that matched the optimal network in its average balance. This was achieved by increasing the metabolic constant, decreasing the noise intensity and slightly decreasing the feedforward stimulus (we did not find a way to match the net current in both cell types by changing a single parameter). When we compared the metabolic cost of the optimal network with that of the shuffled network with matched average balance, we still found a lower metabolic cost in the optimal network, even though the difference was now smaller. We replaced Fig. 3B from the first submission with these new results in Fig. 4B and commented on them in the text (page 7).
In reply to the second point, we followed the reviewer's suggestion and compared the encoding error (RMSE) of the optimal network with that of the network with shuffled connectivity whose decoding weights were trained to optimally reconstruct the target signal. As suggested, we now analyzed the encoding error of the networks using decoding weights trained on the set of spike trains generated by the network, using linear least-squares regression to minimize the decoding error. For a fair and quantitative comparison, and because we did not train the decoding weights of our structured model, we performed this same analysis using spike trains generated by networks with structured and with shuffled recurrent connectivity. We found that the encoding error is smaller in the E population and much smaller in the I population in the structured compared to the random network. The decoding weights found numerically in the optimal network approach the uniform distribution of weights that we used in our model (Fig. 4A, right). In contrast, the decoding weights obtained from the random network do not converge to a uniform distribution, but instead form a much sparser distribution, in particular in I neurons (Supplementary Fig. S3A). These additional results, reported in the above-mentioned figures, are discussed in the text on page 14.
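A minimal sketch of this decoder-retraining comparison, assuming the (filtered) spike trains are arranged as a time-by-neurons matrix; the regression is ordinary least squares, as suggested by the reviewer, though the preprocessing used in the paper may differ.

import numpy as np

def train_decoder(r, x):
    """Fit decoding weights by least squares and report the resulting RMSE.

    r : array (T, N), filtered spike trains of N neurons
    x : array (T, M), M-dimensional target signal
    Returns D of shape (N, M) minimizing ||r @ D - x||^2 and the RMSE of the fit.
    """
    D, *_ = np.linalg.lstsq(r, x, rcond=None)
    rmse = np.sqrt(np.mean((r @ D - x) ** 2))
    return D, rmse

# Applied to spike trains from the structured and the shuffled network,
# the two RMSE values can then be compared directly.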
(13) Pg. 5: "a shift from mean-driven to fluctuation-driven spiking" and Pg. 11 "a network structured as in our efficient coding solution operates in a dynamical regime that is more stimulus-driven, compared to an unstructured network that is more fluctuation driven" - I would expect that the balanced condition dictates that spiking is always fluctuation driven. I'm wondering if the authors can clarify this.
We agree with the reviewer that networks with and without connectivity structure are fluctuation-driven, because in a mean-driven network the mean current must be suprathreshold (Ahmadian and Miller, 2021), which is not the case in either network. We removed the claim of a change from a mean-driven to a fluctuation-driven regime in the revised paper. We are grateful to the Reviewer for helping us tighten the elaboration of our findings.
(14) Pg. 5: "suggesting that variability of spiking is independent of the connectivity structure" - the literature of balanced networks argues against this. Is this not simply because you have a noisy input? Can you test this claim?
We thank the reviewer for the suggestion. We tested this claim by measuring the coefficient of variation in networks receiving a constant stimulus. In particular, we set the same strength in each of the M=3 stimulus dimensions and chose the stimulus amplitude so as to match the firing rate of the optimal network in response to the OU stimulus. We computed the coefficient of variation in 200 simulation trials. The removal of the connectivity structure did not cause a significant change of the coefficient of variation in a network driven by a constant stimulus (Fig. 4E). These additional results are discussed in the text on page 7.
We also took up the suggestion concerning the claim that the variability of spiking is independent of the connectivity structure. We removed this claim in the revision, because we only tested a few specific cases in which the connectivity is structured with respect to tuning similarity (fully structured, fully unstructured and partially unstructured networks). This is not exhaustive of all possible structures that recurrent connectivity may have.
(15) Pg. 6: "we also removed the connectivity structure only partially, keeping like-to-like connectivity structure and removing all structure beyond like-to-like" - can you clarify what this means, perhaps using an equation? What connectivity structure is there besides like-to-like?
In the optimal model, the strength of the synapse between a pair of neurons is proportional to the tuning similarity of the two neurons, J_ij ∝ Y_ij for Y_ij > 0 (see Eq. 24 and Fig. 1C(ii)). Besides networks with optimal connectivity, we also tested networks with a simpler connectivity rule. This simpler rule prescribes a connection if the pair of neurons has similar tuning (Y_ij > 0), and no connection otherwise. The strength of a connection following this simpler connectivity rule is otherwise random (and not proportional to the pairwise tuning similarity Y_ij, as it is in the optimal network). We clarified this in the revision (page 8), also by avoiding the term “like-to-like” for the second type of network, which could indeed be prone to confusion.
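The two rules can be sketched as follows, assuming the tuning similarity Y_ij is the dot product of the (unit-norm) tuning vectors of the two neurons; the range of the random strengths in the simpler rule is an illustrative placeholder.

import numpy as np

def optimal_connectivity(w_pre, w_post):
    # Optimal rule: strength proportional to tuning similarity, no connection
    # for non-positive similarity (cf. Eq. 24; E-E connections are excluded).
    sim = w_post @ w_pre.T              # pairwise tuning similarity Y_ij
    return np.where(sim > 0, sim, 0.0)

def simple_similarity_rule(w_pre, w_post, rng):
    # Simpler rule: connect neuron pairs with positive tuning similarity,
    # but draw the connection strength at random instead of scaling it by Y_ij.
    sim = w_post @ w_pre.T
    J = rng.uniform(0.0, 1.0, size=sim.shape)   # illustrative strength range
    return np.where(sim > 0, J, 0.0)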
(16) Pgs. 6-7: "we indeed found that optimal coding efficiency is achieved with weak adaptation in both cell types" and "adaptation in E neurons promotes efficient coding because it enforces every spike to be error-correcting" - this was not clear to me. First, it appears as though optimal efficiency is achieved without adaptation nor facilitation, i.e., when the time constants are all equal. Indeed, this is what is stated in Table 1. So is there really a weak adaptation present in the optimal case? Second, it seems that the network already enforces each spike to be error-correcting without adaptation, so why and how would adaptation help with this?
We agree with the Reviewer that the network without adaptation in E and I neurons is already optimal. It is also true that most spikes in an optimal network should already be error-correcting (besides some spikes that might be caused by the noise). However, regimes with weak adaptation in E neurons remain close to optimality. Spike-triggered facilitation, meanwhile, adds spikes that are unnecessary and decreases network efficiency. We revised Fig. 5 (Fig. 4 in the first submission) and replaced the 2-dimensional plots in Fig. 4C-F with plots that show the differential effect of adaptation in E neurons (top) and in I neurons (bottom) on the encoding error (RMSE), the efficiency (average loss) and the firing rate (Fig. 5B-D). The new Fig. 5C shows that the loss of the E and I populations grows slowly with adaptation in E neurons (top), while it grows faster with adaptation in I neurons (bottom). These considerations are explained in the revised text on page 9.
(17) Pg. 7: "adaptation in E neurons resulted in an increase of the encoding error in E neurons and a decrease in I neurons" - it would be nice if the authors could provide any explanation or intuition for why this is the case. Could it perhaps be because the E population has fewer spikes, making the signal easier to track for the I population?
We agree that this could indeed be the case. We commented on it in revision (page 9).
(18) Pg. 7: "The average balance was precise...with strong adaptation in E neurons, and it got weaker when increasing the adaptation in I neurons (Figure 4E)" - I found the wording of this a bit confusing. Didn't the balance get stronger with larger I time constants?
By increasing the time constant of I neurons, the average imbalance got weaker (closer to zero) in E neurons (Fig. 5G, left), but stronger (further away from zero) in I neurons (Fig. 5G, right). We have revised the text on page 9 to make this clearer.
(19) Pg. 7: Figure 4F is not directly described in the text.
We have now added text (page 9) commenting on this figure in revision.
(20) Pg. 8: "indicating that the recurrent network dynamics generates substantial variability even in the absence of variability in the external current" -- how does this observation relate to your earlier claim (which I noted above) that "variability of spiking is independent of connectivity structure"?
We agree that the claim about the variability of spiking being independent of the connectivity structure was overstated, and we have thus removed it. The observation that we wanted to report is that structured and unstructured networks have very similar levels of single-neuron spiking variability. The fact that much of the variability of the optimal network is generated by recurrent connections is not incompatible with this observation. We revised the related text (page 11) for clarity.
(21) Pg. 9: "We found that in the optimally efficient network, the mean E-I and I-E synaptic efficacy are exactly balanced" - isn't this by design based on the derivation of the network?
True, the I-E connectivity matrix is the transpose of the E-I connectivity matrix, and their means are the same by the analytical solution. This however remains a finding of our study. We have clarified this in the revised text (page 12).
(22) Pg. 30, eq. 25: the authors should verify if they include all possible connectivity here, or if they exclude EE connectivity beforehand.
In the revised text (page 41), we now specify that the equation for recurrent connectivity (Eq. 24; Eq. 25 in the first submission) does not include E-E connectivity.
Reviewer #3 (Recommendations For The Authors):
Essential
(1) Currently, they measure the RMSE and cost of the E and I population separately, and the 1CT model. Then, they average the losses of the E and I populations, and compare that to the 1CT model, with the conclusion that the 1CT model has a higher average loss. However, it seems to me that only the E population should be compared to the 1CT model. The I population loss determines how well the I population can represent the E population representation (which it can do extremely well). But the overall coding accuracy of the network of the input signal itself is only represented by the E population. Even if you do combine the E and I losses, they should be summed, not averaged. I believe a more fair conclusion would be that the E/I networks have generally slightly worse performance because of needing to follow Dale's law, but are still highly efficient and precise nonetheless. Of course, I might be making a critical error somewhere above, and happy to be convinced otherwise!
We carefully considered the reviewer's comment and tested different ways of combining the losses of the E and I populations. We decided to follow the reviewer's suggestion and to compare the loss of the E population of the E-I model with the loss of the one-cell-type model. As is already evident from Fig. 8G, this comparison indeed changes the result, making the 1CT model more efficient. Summing the losses of E and I neurons also results in the 1CT model being more efficient than the E-I model. Note, however, the robustness of the E-I model to changes in the metabolic constant (Fig. 6C, top): the firing rates of the E-I model stay within physiological ranges for any value of the metabolic constant, while the firing rate of the 1CT model skyrockets for metabolic constants lower than the optimal one (Fig. 8I).
We added to Results (page 14) a summary of these findings.
(2) The methods and main text should make much clearer what aspects of the derivation are novel, and which are not novel (see review weaknesses for specifics).
We specified these aspects, as discussed in more detail in the above reply to point 4 of the public review of Reviewer 1.
Request:
If possible, I would like to see the code before publication and give recommendations on that (is it easy to parse and reproduce, etc.)
We are happy to share the computer code with the reviewer and the community. We added a link to our public repository containing the computer code that we used for simulations and analysis to the preprint and submission (section “Code availability” on page 17).
Suggestions:
(1) I believe that for an eLife audience, the main text is too math-heavy at the beginning, and it could be much simplified, or more effort could be made to guide the reader through the math.
We did our best to improve the clarity of the description of the mathematical expressions in the main text.
(2) Generally vector notation makes network equations for spiking neurons much clearer and easier to parse, I would recommend using that throughout the paper (and not just in the supplementary methods).
We now use vector notation throughout the paper whenever we think that this improves the intelligibility of the text.
(3) In the discussion or at the end of the results adding a clear section summarizing what the minimal requirements or essential assumptions are for biological networks to implement this theory would be helpful for experimentalists and theorists alike.
We have added such a section in Discussion (page 15).
(5) I think the title is a bit too cumbersome and hard to parse. Might I suggest something like 'Efficient coding and energy use in biophysically realistic excitatory-inhibitory spiking networks' or 'Biophysically constrained excitatory-inhibitory spiking networks can efficiently implement efficient coding'.
We followed reviewer’s suggestion and changed the title to “Efficient coding in biophysically realistic excitatory-inhibitory spiking networks.”
(6) How the connections were shuffled exactly was not clear to me in how it was described now. Did they just take the derived connectivity, and shuffle the connections around? I recommend a more explicit methods section on it (I might have missed it).
Indeed, the connections of the optimal network were randomly shuffled, without repetition, among all neuronal pairs of a specific connectivity matrix. This preserves all properties of the distribution of connectivity weights and only removes the structure of the connectivity, which is precisely what we wanted to test. We have now added a section in Methods (“Removal of connectivity structure”, pages 51-52) where we explain how the connectivity structure is removed.
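A minimal sketch of this shuffling step (a single permutation, without repetition, of all entries of one connectivity sub-matrix; whether diagonal entries are excluded, e.g. to avoid I-I autapses, is a Methods detail not handled here).

import numpy as np

def shuffle_connectivity(J, rng):
    # Randomly permute all synaptic weights of one sub-matrix (e.g. E-I, I-E or I-I).
    # The distribution of weights is preserved; only their arrangement is destroyed.
    flat = J.flatten()
    return rng.permutation(flat).reshape(J.shape)

# Example: J_EI_shuffled = shuffle_connectivity(J_EI, np.random.default_rng(0))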
(7) Figure 1 sub-panel ordering was confusing to read (first up-down, then left-right). Not sure if re-arranging is possible, but perhaps it could be A, B, and C at the top, with subsublabels (i) and (ii). Might become too busy though.
We followed this suggestion and rearranged the Fig. 1 as suggested by the reviewer.
(8) Equation 3 in the main text should specify that 'y' stands for either E or I.
This has been specified in the revision (page 3).
(9) Figure 1D shows a rough sketch of the types of connectivities that exist, but I would find it very useful to also see the actual connection strengths and the effect of enforcing Dale's law.
We revised this figure (now Fig. 1B (ii)) and added connection strengths as well as a sketch of a connection that was removed because of Dale’s law.
(10) The main text mentions how the readout weights are defined (normal distributions), but I think this should also be mentioned in the methods.
Agreed. We indeed had a Methods section, “Parametrization of synaptic connectivity” (page 46), where we explain how readout weights are defined. We apologize if the pointer to this section was not salient enough in the first submission. We have made sure that the revised main text contains a clear pointer to this Methods section for details.
(11) The text seems to mix ‘decoding weights’ and ‘readout weights’.
Thanks for this suggestion to use consistent language. We opted for ‘decoding weights’ and removed ‘readout weights’.
(12) The way the paper is written makes it quite hard to parse what are new experimental predictions, and what results reproduce known features. I wonder if some sort of 'box' is possible with novel predictions that experimentalists could easily look at and design an experiment around.
We have now revised the text. We clarified, for every property of the model, whether it is a prediction that has not yet been experimentally tested or whether it accounts for previously observed properties of biological neurons. Please see the reply to point 4 of Reviewer 1.
(13) Typo's etc.:
Page 5 bottom -- ("all") should have one of the quotes change direction (common latex typo, seems to be the only place with the issue).
We thank the reviewer for pointing out this typo that has been removed in revision.
What are our values as a society? And which policies and regulations do we need in order to live up to them? What can we regulate, and in which ways can and should we regulate it?
Regulation should focus on protecting and representing human embodiments in all their diversity. This means designing policies that address the needs and rights of the people affected by Artificial Intelligence. To achieve this, it is crucial that policymakers understand the technological implications and that technologists build social and ethical values into their designs. For example, even if the use of protected attributes in AI predictions is discouraged, collecting that data may be essential for auditing demographic impacts and ensuring that inequities are not perpetuated.
Translation, as a necessary bridge between disciplines, spans everything from technology to public policy. A common language is needed that allows technologists, lawmakers, and communities to collaborate on responsible solutions that can be applied in specific contexts. This process of translation must be dynamic and reflexive, adapting to the ongoing impact of technology on people.
Artificial Intelligence and societies are both dynamic, which means it is not enough to design a technology and deploy it. A process of continuous auditing is needed to assess how systems affect embodiments and change data and social behaviors over time. This underscores the need for trustworthy regulators and independent third parties who can ensure that technologies remain fair and accountable as they evolve.
Additional Ethical Considerations
Embodiments must be at the center of the creation of Artificial Intelligence, since these systems directly affect people. It is crucial to include the voices of vulnerable populations, who are usually ignored in design processes. This means understanding the expectations of justice and fairness held by different social groups and how these vary with factors such as socioeconomic status, technological experience, and the degree of direct impact they may experience.
Translation with Artificial Intelligence is not only about turning data into decisions, but about doing so in a way that everyone can understand. Interpretation goes beyond simple technical explanation, aiming for any citizen to understand how a system operates and to trust its results. This process of translation must also integrate ethical values such as privacy and fairness, balancing these goals so that one is not compromised in favor of the other.
Artificial Intelligences do not operate in isolation; they interact with, and are affected by, the humans who use them. This is evident in scenarios such as parole review, where final decisions are made by judges drawing on the AI's predictions. The interaction between human and algorithmic biases can amplify injustices if the whole system is not carefully designed. For this reason, collaboration between humans and machines must be audited and designed to reduce biases, not perpetuate them.
Classification Rebalancing Ranking Sampling
Embodiments are essential in Artificial Intelligence, since the data used reflect people's lived experiences. However, these experiences are mediated by protected attributes such as gender, race, or disability status, which are often absent or poorly represented in algorithmic systems. This limits the ability of algorithms to address structural inequities. For example, if a hiring pipeline screens out women in its early stages, the system cannot produce fair representation later on.
Translating concepts such as fairness or non-discrimination into actionable metrics is a challenge. Algorithms must handle definitions of justice, yet they cannot always satisfy all of them simultaneously. This calls for solutions that minimize injustice across different contexts. In addition, current systems are beginning to address problems such as the absence of protected attributes in the data, using techniques that work with demographic information implicitly.
Artificial Intelligence is not monolithic; it is a set of interconnected algorithms that make decisions at several stages. Bias can be introduced at any point, from data pre-selection to the final decision stage. For example, in processes such as web search or hiring, biases in early stages limit the ability of downstream algorithms to produce diverse and fair results. It is therefore crucial to consider fairness in every part of the system, not only in its final output.
The Research Landscape of Debiasing AI
Embodiments that have been marginalized by gender, race, or other attributes are fundamental to the design of Artificial Intelligence. The choice of attributes an algorithm takes into account reflects human decisions about which embodiments and identities should be made visible and how they should be represented in the data. For example, in a hiring system, guaranteeing equitable representation of men and women, or of non-binary people, means recognizing and translating those identities into metrics the algorithm can process.
A key challenge in designing fair Artificial Intelligence is translating social concepts such as equity into mathematical definitions that algorithms can implement, and defining fairness metrics that are contextually appropriate. For example, in a candidate-ranking problem, the algorithm might be required to produce a list reflecting a fair demographic distribution based on gender or race. This process of translation is not neutral, since it is shaped by the values and preferences of those responsible for the design.
Recent advances have made it possible to group fairness metrics into families of definitions, facilitating the development of meta-algorithms. These frameworks do not require redesigning an algorithm from scratch for each context; instead, they accept specific definitions of fairness and produce results adjusted to those criteria. For example, by specifying which attributes to protect, such as gender or race, and which fairness metric to use, such as equal representation or similar error rates, the framework generates a solution tailored to a particular case.
if we can strategically intervene algorithmically, we have a powerful tool to help break the cycle of discrimination.
Embodiments (understood as physical bodies and the assemblages that inhabit them) are intrinsic to the data that feed Artificial Intelligence systems. Even when systems are perceived as objective or neutral, the data that train them are deeply rooted in social and cultural contexts originating in the Global North. The decision about which data to collect and how to process them reflects human judgments, which often prioritize certain embodiments over others. This explains why some AI systems can amplify existing inequities, as in the example of algorithms that offer better-paid jobs mainly to men or to white people.
AI design that ignores bodily and identity diversity perpetuates their invisibilization and marginalization. Hence the need to consider how embodiments are recorded, interpreted, and represented in data.
Translation means transforming human experiences, including the lived experiences of diverse embodiments, into data that Artificial Intelligence can process. This translation, however, is not neutral. The selection of metrics, variables, and optimization targets reflects human decisions that can reinforce existing power dynamics.
Which bodies are included or excluded?
If the data do not adequately represent non-binary, racialized, or disabled people, Artificial Intelligence will not be able to address their needs or recognize their realities.
How are differences processed?
Non-normative embodiments are often translated into reductive categories or ignored entirely in algorithm design, which perpetuates their exclusion.
The translation between embodiments and computational models is a political act, in which human biases and priorities shape the representation of reality.
Artificial Intelligence does not only reflect, but also transforms, the relationship between embodiments and societies by influencing opportunities, resources, and visibility. AI can create negative feedback loops in which initial biases in the data reinforce and amplify existing inequalities, affecting future decisions, among them:
Algorithms that perpetuate employment discrimination.
Systems that limit access to resources such as housing, education, or services.
This capacity for influence also opens an opportunity to intervene. By developing and applying algorithmic debiasing strategies, it is possible to design systems that break these cycles of discrimination. This requires integrating a critical awareness of embodiments and their representation in the data.
The perception that Artificial Intelligence is impartial diverts attention from the ways in which embodiments and human experiences are fundamental to its design and operation. AI does not exist apart from human dynamics; on the contrary, it acts as a mirror that amplifies our virtues as well as our biases.
Building fairer Artificial Intelligence requires:
Recognizing the embodiments absent from the data and prioritizing their inclusion.
Redefining translation processes to capture the complexity of human realities rather than simplifying them.
Intervening strategically in Artificial Intelligence to mitigate its negative impact and shift social dynamics toward greater equity.
Coeducation, a responsibility shared between school and families
Coeducation is the idea of pooling and sharing among the different actors involved in a child's education:
first of all the parents, of course, who are the primary educators, but also all the other professionals who work in the school and outside it, in care and in leisure, in short all the different actors.
This idea is relatively new,
at least within the school, since the school, or at any rate the republican school, was built rather on an idea of compartmentalization, with families on one side and the school on the other.
Today we therefore think of a shared responsibility for supporting the child or pupil as well as possible. This sharing is important, since it means thinking together about what is best for that child or pupil,
but it is also limited, since families keep their educational freedom and the school keeps its pedagogical freedom.
Professional stances that foster coeducation
Professionals in schools, and teachers in particular, are not used to this stance of pooling with parents, so they will have to adopt a number of different stances to actually foster this coeducation, first and foremost making things explicit and cooperating, which do not necessarily come naturally. Making things explicit means explaining to parents how the school institution works, since it is not easy for them to decipher. And cooperating means working jointly to support the child, the pupil, along their school path. None of this will be easy, because there is an asymmetry between professionals and parents, which is normal.
Professionals are experts in their profession and parents are experts in their family, but these two kinds of expertise are not on an equal footing; moreover, parents come into the setting of the school, a setting organized by the professionals, so there are sometimes condescending attitudes that are difficult for parents. A very helpful concept here is the idea of parity of esteem: we are not aiming for fusion, we are not here to try to always agree or always go in the same direction, nor to confuse roles, but to achieve mutual recognition of the competence that exists on both sides.
Levers that foster coeducation
Concretely, teachers in the field, but also the other professionals in structures such as after-school programs, will put in place a number of arrangements to foster this communication, and these arrangements serve several objectives. Often these arrangements are not thought through very much, and little by little, as their stance changes, professionals will come to develop professional practices for setting them up, in the same way they prepare their lessons or their after-school activities.
These arrangements can be grouped under four main objectives: * welcoming * informing * dialoguing and * involving. Each of these levers matters and already requires the professionals who implement them to practice, in particular, parity of esteem. The first is welcoming. Welcoming is extremely important. It happens at the time of first enrollment but also every day in the school, and it is where the message is conveyed to parents both that they are welcome and that there are rules that make this dialogue work.
The second lever is information. Information is operational, functional and mandatory, all at once. It is important to ask whether the information actually reaches the pupils' parents, and for that it has to be adapted to parents' needs according to the context in which one works.
The third lever is dialogue, the crucible of parity of esteem. Here, forums are set up to talk with parents, and there are many questions to ask: * have we welcomed them well enough for this dialogue, * do we listen to them, * is there room for their words and for their point of view, which in the educational process is often a different point of view? These different points of view then have to be reconciled. Finally there is involvement, which is often what professionals expect most. It takes different forms: individual involvement, which consists in following one's child in their schoolwork and is of course asked of all parents, and then a whole range of other forms of involvement currently being developed in schools, which remain optional for parents. These may be convivial, for example shared moments in the school; institutional, when parents are represented in governance bodies; pedagogical, when parents are invited to take part, for example, in school activities; or cultural, when parents are invited into the school to contribute something. All these arrangements work together and they are plentiful. At the same time, teachers are sometimes disappointed because parents do not always respond as much as hoped. I think it is important to remember that, in order to get involved, parents must first be welcomed and informed.
Taking ownership of coeducation, a lever for well-being at school?
Coeducation is a real lever for well-being at school, because this responsibility is shared. Ultimately, it is a joint venture to educate children together. It is in fact a real, historic paradigm shift within the school institution. So it does not come naturally and it will demand effort from professionals; it is about opening the door, and at the same time it is clear that opening the door is not enough, because once it is open you enter a new complexity, yet if the door is not opened nothing can happen. I encourage professionals to persevere in this dynamic and to integrate these new professional practices into their work; they will find themselves greatly enriched by it.
Author response:
The following is the authors’ response to the current reviews.
These comments are all valuable and very helpful for revising and improving our paper, and they provide important guidance for our research. We have studied the comments carefully and have made corrections which we hope meet with approval.
Reviewer #3 (Public review):
Summary:
The manuscript by Ma et al. describes a multi-model (pig, mouse, organoid) investigation into how fecal transplants protect against E. coli infection. The authors identify A. muciniphila and B. fragilis as two important strains and characterize how these organisms impact the epithelium by modulating host signaling pathways, namely the Wnt pathway in lgr5 intestinal stem cells.
Strengths:
The strengths of this manuscript include the use of multiple model systems and follow up mechanistic investigations to understand how A. muciniphila and B. fragilis interacted with the host to impact epithelial physiology.
Weaknesses:
As in previous revisions, there remains concerning ambiguity in the methodology used for microbiota sequence analysis and it would be difficult to replicate the analysis in any meaningful way. In this revision, concerns about the rigor and reproducibility of this component of the manuscript have been increased. Readers should be cautious with interpretation of this data.
(1) In previous versions of the manuscript it would appear the correct bioproject accession was listed but, the actual link went to an unrelated project. The updated accession link appears to contain raw data; however, the authors state they used an Illumina HiSeq 2500. This would be an unusual choice for V3-V4 as it would not have read lengths long enough to overlap. Inspection of the first sample (SRR19164796) demonstrates that this is absolutely not the raw data, as there is a ~400 nt forward read, and a 0 length reverse read. All quality scores are set to 30. There is no logical way to go from HiSeq 2500 raw data and read lengths to what was uploaded to the SRA and it was certainly not described in the manuscript.
What we uploaded to the SRA were contig files for each sample; we have modified the description on line 694.
(2) No multiple testing correction was applied to the microbiome data.
The alpha diversity indices were tested using the t-test and the Wilcoxon test, and we show the results of the t-test in Figure S1B. The p-values were corrected for multiple testing using the Benjamini-Hochberg method; we have modified the description on line 322.
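For reference, a minimal sketch of the Benjamini-Hochberg adjustment as it could be applied to the p-values from these tests; it is equivalent to statsmodels.stats.multitest.multipletests(p, method="fdr_bh").

import numpy as np

def benjamini_hochberg(pvals):
    # Returns BH-adjusted p-values (false discovery rate correction).
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest rank downwards
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

# Example: benjamini_hochberg([0.01, 0.04, 0.03, 0.20]) -> approx. [0.04, 0.053, 0.053, 0.20]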
---------
The following is the authors’ response to the previous reviews.
Public Reviews:
Reviewer #2 (Public Review):
Ma X. et al proposed that A. muciniphila was a key strain that promotes the proliferation and differentiation of intestinal stem cells through acting on the Wnt/β-catenin signaling pathway. They used various models, such as piglet model, mouse model and intestinal organoids to address how A. muciniphila and B. fragilis offer the protection against ETEC infection. They showed that FMT with fecal samples, A. muciniphila or B. fragilis protected piglets and/or mice from ETEC infection, and this protection is manifested as reduced intestinal inflammation/bacterial colonization, increased tight junction/Muc2 proteins, as well as proper Treg/Th17 cells. Additionally, they demonstrated that A. muciniphila protected basal-out and/or apical-out intestinal organoids against ETEC infection via Wnt signaling.
Comments on revised version:
Please add proper references to indicate the invasion of ETEC into organoids after 1 h of infection.
We have added references on line 211.
References:
Xiao K, Yang Y, Zhang Y, Lv QQ, Huang FF, Wang D, Zhao JC, Liu YL. 2022. Long-chain PUFA ameliorate enterotoxigenic Escherichia coli-induced intestinal inflammation and cell injury by modulating pyroptosis and necroptosis signaling pathways in porcine intestinal epithelial cells. Br. J. Nutr. 128(5):835-850.
Qian MQ, Zhou XC, Xu TT, Li M, Yang ZR, Han XY. 2023. Evaluation of Potential Probiotic Properties of Limosilactobacillus fermentum Derived from Piglet Feces and Influence on the Healthy and E. coli-Challenged Porcine Intestine. Microorganisms. 11(4).
Reviewer #3 (Public Review):
Summary:
The manuscript by Ma et al. describes a multi-model (pig, mouse, organoid) investigation into how fecal transplants protect against E. coli infection. The authors identify A. muciniphila and B. fragilis as two important strains and characterize how these organisms impact the epithelium by modulating host signaling pathways, namely the Wnt pathway in lgr5 intestinal stem cells.
Strengths:
The strengths of this manuscript include the use of multiple model systems and follow up mechanistic investigations to understand how A. muciniphila and B. fragilis interacted with the host to impact epithelial physiology.
Weaknesses:
After an additional revision, the bioinformatics section of the methods has changed significantly from previous versions and now indicates a third sequencer was used instead: Ion S5 XL. Important parameters required to replicate analysis have still not been provided. Inspection of the SRA data indicates a mix of Illumina MiSeq and Illumina HiSeq 2500. It is now unclear which sequencing technology was used as authors have variably reported 4 different sequencers for these samples. Appropriate metadata was not provided in the SRA, although some groups may be inferred from sample names. These changing descriptions of the methodologies and ambiguity in making the data available create concerns about rigor of study and results.
We apologize for the multiple incorrect modifications of the method description, which arose from confusing the sequencing method of this experiment with that of other experimental samples. We have modified the description of the microbiome sequencing technology on line 304. The sequencing technology is Illumina HiSeq 2500. The SRA metadata can be viewed at https://www.ncbi.nlm.nih.gov/sra/PRJNA837047. The sample names ep1-6 and ef1-6 correspond to the EP and EF groups, respectively.
Recommendations For the Authors:
As in the previous revision:
-provide important parameters required to replicate analysis
-ensure that reporting of sequencing technology is correct as data listed on SRA appears to be derived from Illumina sequencers, and was deposited indicating as such.
-update SRA metadata such that experimental groups are clear and match the nomenclature used in the manuscript (particularly for samples which are labelled [A-Z][0-9])
- The multiple testing correction wasn’t applied.
-We apologize for the multiple incorrect modifications of the method description, which arose from confusing the sequencing method of this experiment with that of other experimental samples. We have modified the description of the microbiome sequencing technology on line 304. The sequencing technology is Illumina HiSeq 2500.
- The SRA metadata can be viewed at https://www.ncbi.nlm.nih.gov/sra/PRJNA837047. The sample names ep1-6 and ef1-6 correspond to the EP and EF groups, respectively.
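For clarity, a small illustrative sketch (not part of the authors' submission) of the sample-name-to-group mapping described above, written in Python with pandas:

import pandas as pd

# hypothetical metadata table: ep1-6 -> EP group, ef1-6 -> EF group
samples = [f"ep{i}" for i in range(1, 7)] + [f"ef{i}" for i in range(1, 7)]
metadata = pd.DataFrame({
    "sample_name": samples,
    "group": ["EP" if name.startswith("ep") else "EF" for name in samples],
})
print(metadata)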
Data Feminism in Action
Embodiment and representation in data
Embodiment is at the heart of data on femicide, since these data seek to make visible the systemic and lethal violence directed at certain bodies, mainly those of women and feminized people. However, the challenges involved in collecting these data reveal the complexity of translating the lived experiences of these bodies into systematic records. The lack of standardization in the definitions and categories related to femicide not only hinders comparative analysis but also renders invisible certain experiences of violence that do not fit traditional or normative definitions.
The accurate recognition and representation of these bodies in the data is a political act: making the affected bodies visible means acknowledging their existence and demanding justice.
Translating lived experiences into data and narratives
Collecting data on femicide is not merely a technical process; it is an act of translation between lived realities and formal data structures. This process requires decisions about what to count, how to classify, and what meanings are assigned to the data collected. The presentations on "femicide data frameworks" and "data standardization" highlighted the challenges of homogenizing diverse realities into a format readable by global systems.
Translation does not happen only at the technical level; it is also reflected in the narrative: when data are presented through visualizations or spatial analyses, they tell stories that humanize the figures and amplify the voices of the victims.
Artificial intelligence as a tool for intervention
Artificial Intelligence plays a crucial role in collecting, analyzing, and making visible data on femicide. For example, Catherine D'Ignazio's presentation on an automated classifier for detecting femicides underlines how algorithms can help process large volumes of information, such as news articles (a minimal sketch of this kind of classifier follows at the end of this section). However, the use of AI raises ethical and technical challenges:
If models are trained on incomplete or biased data, they can perpetuate exclusions and inequalities.
Algorithms must be flexible enough to adapt to specific regional and cultural contexts without forcing homogeneous definitions.
Artificial Intelligence also enables advanced analyses, such as the spatial analysis of femicides presented at the event, opening up new possibilities for understanding geographic and contextual patterns of violence.
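To make the idea of an automated classifier concrete, here is a minimal, hedged sketch of a bag-of-words news classifier; it is not the classifier presented by D'Ignazio's team, and the headlines and labels are invented for illustration only:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# invented training headlines; 1 = possible femicide report, 0 = unrelated news
train_texts = [
    "Woman killed by her partner after months of threats",
    "City council approves new budget for public parks",
    "Femicide reported in the northern district, suspect detained",
    "Local team wins the regional football championship",
]
train_labels = [1, 0, 1, 0]

# TF-IDF bag of words + logistic regression, a common baseline for text classification
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["Man arrested after the killing of his ex-wife"]))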
Community building and visibility
The event promoted not only the use of data and technologies but also the building of an interdisciplinary community of activists, academics, journalists, and public officials. This approach recognizes that neither technology nor data are enough on their own: change requires collaboration, solidarity, and an ethical awareness that prioritizes the human experiences behind the figures.
When thinking about
Embodiment and its representation in data
The importance of the categories used in data collection processes is illustrated by how choosing a gender on a form can exclude millions of non-binary people. This exclusion reflects how bodies and identities are frequently rendered invisible in normative systems. Non-normative bodies face systematic barriers that exclude them from being recognized or counted, perpetuating marginalization at the political and social level.
Femicide data show the importance of considering the particularities of the bodies affected. If the diverse circumstances and contexts of the crimes are not recorded, specific victims and experiences are left out, generating what is known as "missing data".
Translating realities into data
The process of translating complex social realities, such as femicide, into categories readable by databases or statistical systems is a critical act of translation. This process involves not only transferring information from one medium to another but also deciding which aspects of that reality are considered important, how they are classified, and what is omitted. For example, in some cases, classifying a murder as femicide may depend on whether the gender motivation behind the crime is recognized, something that is not always well documented or considered by legal systems.
This imperfect translation between lived experience and numerical representation affects not only the visibility of the problem but also the capacity to design effective policies.
Artificial Intelligence as a tool for intervention
Artificial intelligence has the potential to transform the way these problems are analyzed and addressed. On the one hand, it can help process large volumes of information, such as the femicide reports generated by activists and journalists, detecting patterns and trends. On the other hand, AI systems must be designed carefully so as not to reproduce or amplify existing biases. If AI models are trained on incomplete or biased data, they risk reinforcing the very dynamics of exclusion they aim to combat.
Author response:
Reviewer #1(Public review):
Summary:
This manuscript details the results of a small pilot study of neoadjuvant radiotherapy followed by combination treatment with hormone therapy and dalpiciclib for early-stage HR+/HER2-negative breast cancer.
Strengths:
The strengths of the manuscript include the scientific rationale behind the approach and the inclusion of some simple translational studies.
Weaknesses:
The main weakness of the manuscript is that overly strong conclusions are made by the authors based on a very small study of twelve patients. A study this small is not powered to fully characterize the efficacy or safety of a treatment approach, and can, at best, demonstrate feasibility. These data need validation in a larger cohort before they can have any implications for clinical practice, and the treatment approach outlined should not yet be considered a true alternative to standard evidence-based approaches.
I would urge the authors and readers to exercise caution when comparing results of this 12-patient pilot study to historical studies, many of which were much larger, and had different treatment protocols and baseline patient characteristics. Cross-trial comparisons like this are prone to mislead, even when comparing well powered studies. With such a small sample size, the risk of statistical error is very high, and comparisons like this have little meaning.
We greatly appreciate your evaluation of our study and fully agree with the limitations you have pointed out. We have clearly stated the limitations of the small sample size and emphasized the need for a larger population to validate our preliminary findings in the discussion section (Lines 311-316).
We acknowledge that this small sample size is not powered to characterize this regimen as a promising alternative regimen in the treatment of patients with HR-positive, HER2-negative breast cancer. Therefore, we have revised the description of this regimen to serve as a feasible option for neoadjuvant therapy in HR-positive, HER2-negative breast cancers both in the discussion (Lines 317-320) and the abstract (Lines 71-72).
We agree with you that cross-trial comparisons should be approached with caution due to differences in study designs and patient populations. In the discussion section, we acknowledge that the small sample size limited the comparison of our data with historical data in the literature due to potential bias (Lines 312-313). We clearly state that such comparisons hold limited significance (Lines 313-314) and suggest a larger population to validate our preliminary findings.
• Why was dalpiciclib chosen, as opposed to another CDK4/6 inhibitor?
Thank you for your comments. The rationale for selecting dalpiciclib over other CDK4/6 inhibitors in our study is primarily based on the following considerations:
(1) Clinical Efficacy: In several clinical trials, including DAWNA-1 and DAWNA-2, the combination of dalpiciclib with endocrine therapies such as fulvestrant, letrozole, or anastrozole has been shown to significantly extend the progression-free survival (PFS) in patients with hormone receptor-positive, HER2-negative advanced breast cancer (1-2).
(2) Tolerability and Management of Adverse Reactions: The primary adverse reactions associated with dalpiciclib are neutropenia, leukopenia, and anemia. Despite these potential side effects, the majority of patients are able to tolerate them, and with proper monitoring and management, these reactions can be effectively mitigated (1-2).
(3) Comparable pharmacodynamic with other CDK4/6 inhibitors: The combination of CDK4/6 inhibitors, including palbociclib, ribociclib, and abemaciclib, with aromatase inhibitors has demonstrated an enhanced ability to suppress tumor proliferation and increase the rate of clinical response in neoadjuvant therapy for HR-positive, HER2-negative breast cancer (3-5). Furthermore, preclinical studies have shown that dalpiciclib has comparable in vivo and in vitro pharmacodynamic activity to palbociclib, suggesting its potential effectiveness in similar treatment regimens (6).
(4) Accessibility and Regulatory Approval: Dalpiciclib was granted marketing approval in China on December 31, 2021, which facilitates access to this medication and makes it a more convenient option when considering treatment plans.
References:
(1) Zhang P, Zhang Q, Tong Z, et al. Dalpiciclib plus letrozole or anastrozole versus placebo plus letrozole or anastrozole as first-line treatment in patients with hormone receptor-positive, HER2-negative advanced breast cancer (DAWNA-2): a multicentre, randomised, double-blind, placebo-controlled, phase 3 trial. The Lancet Oncology, 2023, 24(6): 646-657.
(2) Xu B, Zhang Q, Zhang P, et al. Dalpiciclib or placebo plus fulvestrant in hormone receptor-positive and HER2-negative advanced breast cancer: a randomized, phase 3 trial. Nature Medicine, 2021, 27(11): 1904-1909.
(3) Hurvitz SA, Martin M, Press MF, et al. Potent cell-cycle inhibition and upregulation of immune response with abemaciclib and anastrozole in neoMONARCH, phase II neoadjuvant study in HR+/HER2− breast cancer. Clinical Cancer Research, 2020, 26(3): 566-580.
(4) Prat A, Saura C, Pascual T, et al. Ribociclib plus letrozole versus chemotherapy for postmenopausal women with hormone receptor-positive, HER2-negative, luminal B breast cancer (CORALLEEN): an open-label, multicentre, randomised, phase 2 trial. The Lancet Oncology, 2020, 21(1): 33-43.
(5) Ma CX, Gao F, Luo J, et al. NeoPalAna: neoadjuvant palbociclib, a cyclin-dependent kinase 4/6 inhibitor, and anastrozole for clinical stage 2 or 3 estrogen receptor-positive breast cancer. Clinical Cancer Research, 2017, 23(15): 4055-4065.
(6) Long F, He Y, Fu H, et al. Preclinical characterization of SHR6390, a novel CDK 4/6 inhibitor, in vitro and in human tumor xenograft models. Cancer Science, 2019, 110(4): 1420-1430.
• The eligibility criteria are not consistent throughout the manuscript, sometimes saying early breast cancer, other times saying stage II/III by MRI criteria.
Thank you for pointing out the inconsistency in the eligibility criteria in our manuscript. We deeply apologize for any confusion caused by these inconsistencies. We have revised the term from "early-stage HR-positive, HER2-negative breast cancer" to "early or locally advanced HR-positive, HER2-negative breast cancer" (Lines 128 and 150). The term "early or locally advanced" encompasses two different stages of breast cancer, whereas "Stage II/III by MRI criteria" refers to specific stages within the TNM staging system.
• The authors should emphasize the 25% rate of conversion from mastectomy to breast conservation and also report the type and nature of axillary lymph node surgery performed. As the authors note in the discussion section, rates of pathologic complete response/RCB scores are less prognostic for hormone-receptor-positive breast cancer than other subtypes, so one of the main rationales for neoadjuvant medical therapy is for surgical downstaging. This is a clinically relevant outcome.
We appreciate your constructive comments. Based on your suggestions, we have made the following revisions and additions to the article.
The breast conservation rate serves as a secondary endpoint in our study (Line 62 and 179). We have highlighted the significant 25% conversion rate from mastectomy to breast conservation in both the results (Lines 229-230) and discussion sections (Lines 290-292).
In our study, all patients underwent lymph node surgery, including sentinel lymph node biopsy or axillary lymph node dissection. Among them, 58.3% of patients (7/12) underwent sentinel lymph node biopsies.
We agree with your point that the prognostic value of the pathologic complete response/RCB score is lower for hormone receptor-positive breast cancer compared to other subtypes. We have revised the discussion section to clarify that one of the principal objectives of neoadjuvant therapy in this patient population is to facilitate downstaging and enhance the rate of breast conservation (Lines 289-290). We have also emphasized that this neoadjuvant therapeutic regimen appeared to improve the likelihood of pathological downstaging and achieving a margin-free resection, particularly for those with locally advanced and high-risk breast cancer (Lines 293-295).
Reviewer #2 (Public review):
Firstly, as this is a single-arm preliminary study, we are curious about the order of radiotherapy and endocrine therapy. Besides, considering the radiotherapy, we are also concerned about the recovery of the wound after surgery and whether related data were collected.
Thanks for the comments. The treatment sequence in this study is to first administer radiotherapy, followed by endocrine therapy. A meta-analysis has indicated that concurrent radiotherapy with endocrine therapy does not significantly impact the incidence of radiation-induced toxicity or survival rates compared to a sequential approach (1). In light of preclinical research suggesting enhanced therapeutic efficacy when radiotherapy is delivered prior to CDK4/6 inhibitors, we have opted to administer radiotherapy before the combination therapy of CDK4/6 inhibitors and hormone therapy (2).
In our study, we collected data on surgical wound recovery. All 12 patients had Class I incisions, which healed by primary intention. The wounds exhibited no signs of redness, swelling, exudate, or fat necrosis.
References:
(1) Li YF, Chang L, Li WH, et al. Radiotherapy concurrent versus sequential with endocrine therapy in breast cancer: a meta-analysis. The Breast, 2016, 27: 93-98.
(2) Petroni G, Buqué A, Yamazaki T, et al. Radiotherapy delivered before CDK4/6 inhibitors mediates superior therapeutic effects in ER+ breast cancer. Clinical Cancer Research, 2021, 27(7): 1855-1863.
Secondly, in the methodology, please describe the sample size estimation of this study and the follow-up details.
Thanks for pointing out this crucial omission. Sample size estimation for this study and follow-up details have been added to the methodology section. The sample size estimation has been revised in the Statistical analysis section to state: “This exploratory study involves 12 patients, with the sample size determined based on clinical considerations, not statistical factors (Lines 210-211).” The follow-up has been revised in the Procedures section to state: “A 5-year follow-up is conducted every 3 months during the first 2 years, and every 6 months for the subsequent 3 years. Additionally, safety data are collected within 90 days after surgery for subjects who discontinue study treatment (Lines 169-172).”
Thirdly, regarding the HER2 expression item in Table 1, it would be better to categorise HER2 into 0, 1+, 2+ and FISH-.
Thank you very much for pointing out this issue. The item HER2 expression in Table 1 has been revised from “negative, 1+, 2+ and FISH-” to “0, 1+, 2+ and FISH-”.
Reviewer #2 (Public review):
Summary:
The authors are trying to test the hypothesis that ATP bursts are the predominant driver of antibiotic lethality of Mycobacteria
Strengths:
No significant strengths in the current state as it is written.
Weaknesses:
A major weakness is that M. smegmatis has a doubling time of three hours, and the authors are trying to conclude that their data would reflect the physiology of M. tuberculosis, which has a doubling time of 24 hours. Moreover, the authors try to compare OD measurements with CFU counts and thus observe great variability.
Comments on revisions:
The authors confirm they are using CFU counts, but then Figure 1 has 0 as the first data point on the Y-axis. This should be somewhere between 10^5 and 10^6. CFU would not start at 0; your initial inoculum has to be more than 0 to have something to challenge.
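As an aside, a minimal sketch (with invented values, not the authors' data) of how a killing curve is usually shown, on a log axis starting at the inoculum rather than at zero:

import matplotlib.pyplot as plt

time_h = [0, 2, 4, 6, 8]                       # hours after antibiotic challenge
cfu_per_ml = [5e5, 4e5, 8e4, 9e3, 1.2e3]       # invented counts, starting at the inoculum

plt.semilogy(time_h, cfu_per_ml, marker='o')   # log axis: the curve starts near 5e5, not 0
plt.xlabel('Time after challenge (h)')
plt.ylabel('CFU/mL (log scale)')
plt.show()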
# Your answers here
# Compute the temporal and zonal mean of mean sea-level pressure (msl)
slp = ds['msl'] / 100  # conversion from Pa to hPa
slp_avg = slp.mean(dim='time').mean(dim='longitude')

# Compute the temporal and zonal mean of the surface winds (u10, v10)
u10 = ds['u10']
v10 = ds['v10']
u10_avg = u10.mean(dim='time').mean(dim='longitude')
v10_avg = v10.mean(dim='time').mean(dim='longitude')

# Create the plots
fig, axs = plt.subplots(3, 1, figsize=(10, 15))

# Plot of the mean sea-level pressure
axs[0].plot(slp_avg, label='Mean Sea-Level Pressure', color='blue')
axs[0].axhline(y=1013, color='black', linestyle='--', label='Standard Pressure (1013 hPa)')
axs[0].set_title('Zonal and Temporal Average of Sea-Level Pressure')
axs[0].set_xlabel('Latitude')
axs[0].set_ylabel('Pressure (hPa)')
axs[0].legend()

# Plot of the mean wind in the u-direction (zonal)
axs[1].plot(u10_avg, label='Mean u10 Wind', color='green')
axs[1].axhline(y=0, color='black', linestyle='--', label='Zero Line')
axs[1].set_title('Zonal and Temporal Average of u10 Wind')
axs[1].set_xlabel('Latitude')
axs[1].set_ylabel('Wind Speed')
axs[1].legend()

# Plot of the mean wind in the v-direction (meridional)
axs[2].plot(v10_avg, label='Mean v10 Wind', color='red')
axs[2].axhline(y=0, color='black', linestyle='--', label='Zero Line')
axs[2].set_title('Zonal and Temporal Average of v10 Wind')
axs[2].set_xlabel('Latitude')
axs[2].set_ylabel('Wind Speed (m/s)')
axs[2].legend()

# Layout and display of the plots
plt.tight_layout()
plt.show()
Attention: There is a big problem in the x-ticks! Your latitude goes from 0 to 2XX... Interpreting and analysing these values is very difficult (in reality these are entries from 90°N to 90°S in 0.75° steps...). Normally you should have seen that while answering the questions...
The reason is that you just plotted slp_avg, u10_avg, etc. without the correct x-values. You would have needed to plot the latitude values against slp_avg, u10_avg, or plot the "DataArray" directly, e.g. via:
ds_zavg = ds.mean(dim=['time', 'longitude'])
slp_zavg = ds_zavg.msl / 100
slp_zavg.plot()
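Equivalently, the original three-panel style can be kept while passing the latitude coordinate explicitly as the x-values; a minimal self-contained sketch, assuming a hypothetical ERA5 file name:

import xarray as xr
import matplotlib.pyplot as plt

ds = xr.open_dataset('era5_monthly.nc')        # hypothetical file name
slp_avg = (ds['msl'] / 100).mean(dim=['time', 'longitude'])

# pass the latitude coordinate explicitly so the x-axis shows degrees, not the array index
plt.plot(ds['latitude'], slp_avg, color='blue', label='Mean Sea-Level Pressure')
plt.axhline(y=1013, color='black', linestyle='--', label='Standard Pressure (1013 hPa)')
plt.xlabel('Latitude (°N)')
plt.ylabel('Pressure (hPa)')
plt.legend()
plt.show()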
v10_t_avg = ds.v10.mean(dim='time')
v10_avg = v10_t_avg.mean(dim='longitude').plot(label='v10')
u10_t_avg = ds.u10.mean(dim='time')
u10_avg = u10_t_avg.mean(dim='longitude').plot(label='u10')
plt.xlabel('Latitude')
plt.ylabel('Windspeed in m/s')
plt.title('Average windspeed in u and v-component (Time-Longitude Mean)')
plt.axhline(y=0, color='red')
plt.legend()
plt.grid()
plt.show()
Good! To make it even easier to understand, you can add "at 10 m" to the labels, and possibly rename the labels in the legend. Also: the u-component is often called the "zonal wind component", and the v-component the "meridional wind component".
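A small sketch incorporating this feedback ("10 m" in the labels, zonal/meridional naming), again assuming a hypothetical ERA5 file name:

import xarray as xr
import matplotlib.pyplot as plt

ds = xr.open_dataset('era5_monthly.nc')        # hypothetical file name
ds_zavg = ds.mean(dim=['time', 'longitude'])   # temporal and zonal mean

ds_zavg['u10'].plot(label='10 m zonal wind (u10)')
ds_zavg['v10'].plot(label='10 m meridional wind (v10)')
plt.axhline(y=0, color='red')
plt.xlabel('Latitude')
plt.ylabel('Wind speed at 10 m (m/s)')
plt.title('Zonal and temporal mean of the 10 m wind components')
plt.legend()
plt.grid()
plt.show()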
First of all, with regard to human and technical resources, a trade-off must be made between developing public AI systems in-house and using an external provider. However, outsourcing does not mean that the question of the administration's own human resources is resolved. Indeed, outsourcing requires the administration to have internal human resources sufficiently expert in the field to define the specifications and to monitor the provider's execution of the work. To this end, and as things currently stand, the Conseil d'État suggests pooling resources across administrations.
The challenges of AI governance: The rapid evolution of AI poses significant challenges in terms of regulation and governance. Regulatory frameworks need to be developed that make it possible to harness the benefits of this technology while mitigating its risks.
The exploitation of the potential of AI systems in the public sphere is gradual and uneven. The Conseil d'État's study identified several obstacles that explain this situation: the poor quality of the available data, the lack of resources, legal risk, lack of acceptability, and questions around securing the tool. Even if public decision-makers and political will can strongly encourage this use of AI systems, the Conseil d'État recommends vigilance on several points.
The importance of ethics in AI development: The text underlines the need to develop ethical and responsible AI. It is essential to establish clear principles to guide the development and use of these technologies, such as transparency, fairness, and accountability. AI must be designed to serve the common good and not to perpetuate inequalities or discrimination.
It is important that citizens and administrations have a good understanding of AI and AI systems in order to foster trust. To this end, the Conseil d'État takes a pedagogical approach, explaining different terms, notions, and meanings of AI and AI systems. It also reaffirms the need for citizens to develop a culture of AI concepts and issues and to understand how AI systems work, and it stresses the importance of acquiring a critical sense towards AI, based on information about the main operating principles of AI systems and their advantages and disadvantages.
The need for digital literacy: To make the most of AI's potential and to participate in an informed way in the public debate about this technology, it is essential to promote digital literacy. This means developing the skills needed to understand how AI works, assess its impacts, and make informed decisions about its use.
Our team really liked the article on the use of artificial intelligence to improve human morality, and we decided to choose it because it addresses a current and very interesting topic: how technology can influence our ethics. The text analyses the book Más (que) humanos, by Francisco Lara and Julian Savulescu, which explores how AI could help us make more ethical decisions in complicated contexts.
What caught our attention most was the debate about the risks and benefits of these technologies. On the one hand, it is promising that AI could support our decisions on global problems such as climate change or the fair distribution of resources. On the other, it made us reflect on the risks of depending too much on these tools, such as the possibility that our capacity to make moral decisions for ourselves could be diminished.
We chose this article because it connects philosophy with current technological challenges and raises deep questions about our ethical responsibility as a society. It seemed to us a text that not only informs but also invites critical thinking about the future.
Bias in the data and algorithms
In the context of inclusive data collection, the issue can be analyzed from three perspectives: embodiment, translation as a bridge for understanding diverse realities, and the ethical use of Artificial Intelligence to promote inclusion.
Embodiment and bias in data and algorithms
Traditional data collection methodologies have rendered many bodies invisible, such as those of pregnant women and trans and non-binary people. For example, seat-belt designs that ignore bodily differences increase the risk of serious injury for women. Artificial Intelligence, if based on biased data, amplifies these inequalities, perpetuating narratives dominated by male and white parameters.
Classification systems based on the man/woman binary must be rethought. Incorporating inclusive categories into algorithms and data collection designs is crucial to capture intersectional experiences related to gender, ethnicity, disability, and other identities.
Translation as a tool for inclusion
Translating means not only adapting language but also interpreting the realities lived by different bodies in specific social and cultural contexts. For example, women's responses on social media may be conditioned by fear of digital threats, which requires translating these constraints into more sensitive analyses.
Translating the risks and benefits of data collection for the communities involved is essential to guarantee informed consent. This is especially relevant for vulnerable groups who may not fully understand how their data will be used.
Artificial Intelligence as an ally for inclusion and ethics
Artificial Intelligence can combine traditional and non-traditional data to provide information and correct inequalities. However, algorithm design must include parameters that reflect social norms and political realities, taking into account the specific challenges faced by women and girls.
The intersection of big data and Artificial Intelligence raises ethical challenges in terms of privacy. Technologies must guarantee data anonymization and prevent misuse that could harm vulnerable communities. Initiatives such as the UNDG guidelines emphasize the need for regulatory frameworks that protect privacy and promote informed consent.
To achieve real transformation, governments and organizations must invest in trained technical staff and promote public-private collaborations that harness big data for the common good. This includes data collection projects focused on intersectional feminisms and on making historically excluded bodies visible.
Collecting the right data with methods that ensure the right disaggregation is an important first step, but to create a more inclusive data system, these data must also be analyzed and interpreted using appropriate and efficient methods.
Embodiment and non-traditional data sources
Non-traditional sources, such as administrative records and citizen-generated data, can provide more granular information about bodies. This includes indicators related to health, gender, and spatial or demographic inequalities, as seen in the case of Nepal, where geotagged data revealed spatial variations in gender inequalities.
Incorporating data from marginalized populations, such as trans and non-binary people or women in rural communities, can make visible experiences and inequalities that traditional methods ignore. For example, analyses of sexual violence in El Salvador show how examining administrative records can break down patterns that affect specific groups.
Data translation and shifting power dynamics
Collecting citizen-generated data better reflects the realities lived by different bodies, particularly in regions or contexts where traditional methods do not capture their complexity. For example, translating experiences reported through SMS surveys must respect the cultural and linguistic differences of respondents.
Allowing communities to participate directly in data generation, as in the Ghana project for reporting maternal and child health outcomes, shifts the power dynamics of data collection. This feminist approach recognizes lived experiences and translates them into quantifiable evidence.
AI as a tool for data equity
Artificial Intelligence can detect patterns in non-traditional data sources, such as social media, satellite imagery, and mobile records. These patterns can reveal inequalities related to gender, location, or access to services, enabling informed interventions.
AI technologies make it possible to analyze multiple layers of inequality simultaneously. For example, by combining satellite data, mobile data, and demographic surveys, intersectional variations such as the literacy gap by gender and region can be mapped.
Artificial Intelligence can create synthetic data to fill gaps in areas where bodies and their experiences are not represented, always with ethical care to avoid reinforcing biases.
Ethical and methodological challenges
Data collection through non-traditional methods and AI tools raises privacy concerns, especially for bodies in vulnerable situations.
When interpreting and translating data, automated analysis may fail to capture cultural nuances (culturemes) or gender nuances if sensitive and inclusive algorithms are not designed.
Many governments and communities lack the technical and financial resources needed to implement advanced data analysis systems, limiting their capacity to translate data into inclusive policies.
A transformative approach to data collection
Use non-traditional sources to include invisibilized experiences, such as women in rural areas or non-binary people.
Ensure that data capture local perspectives with cultural and linguistic sensitivity.
Design algorithms that not only process data but also prioritize equity and represent intersectional realities.
Implement clear regulations to protect privacy and ensure that technologies benefit marginalized populations.
Addressing problems of traditional data collection
Overcoming the limitations of traditional data collection methods through inclusive and feminist approaches involves considering diverse bodies, cultural and linguistic sensitivity in translating data, and the use of Artificial Intelligence to identify and correct inherent biases.
Embodiment and bias in data collection
Traditional methods tend to homogenize bodily experiences, ignoring differences related to gender, non-binary identity, age, disability status, or socioeconomic situation. This renders invisible the realities of women and other marginalized groups.
Data collected at the household level ignore inequalities within the household, perpetuating the invisibility of individual bodies and experiences. For example, the economic contributions of women and girls are often underestimated or ignored.
Bodies in situations of extreme vulnerability, such as refugee women and trans and non-binary people, face greater risks of being omitted. These exclusions limit the capacity to create policies that respond to their needs.
Translation as an inclusive mediator
The design of survey questions reflects cultural and gender biases, perpetuating inequalities. For example, questions that assume traditional roles, such as identifying a woman as a "housewife" without considering her paid work, render her economic contributions invisible. Translating these terms with feminist sensitivity can help make these realities visible.
Translation must make it possible to incorporate inclusive gender categories, such as "non-binary" or "other", ensuring that data reflect non-normative bodies and local realities.
Artificial intelligence as an inclusive tool
Artificial Intelligence can identify and mitigate biases in survey design and data collection by analyzing patterns of exclusion. For example, it can highlight how certain questions exclude trans or non-binary women by imposing binary gender categories.
Algorithms can disaggregate data by intersectional variables, such as gender, age, income, and ethnicity, to reveal inequalities that are invisible to traditional methods (a minimal sketch follows at the end of this section). This includes measuring inequalities within households and among marginalized groups.
Artificial Intelligence can use non-traditional data sources, such as social media or sensors, to capture the experiences of excluded bodies in conflict settings or contexts that are hard to reach with standard methods.
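As a concrete illustration of the disaggregation mentioned above, a minimal pandas sketch with invented survey data:

import pandas as pd

# invented micro-survey: one row per respondent
survey = pd.DataFrame({
    "gender":    ["woman", "man", "non-binary", "woman", "man", "woman"],
    "age_group": ["15-24", "15-24", "25-54", "25-54", "25-54", "15-24"],
    "income":    ["low", "low", "medium", "low", "high", "medium"],
    "literate":  [1, 1, 1, 0, 1, 0],
})

# literacy rate disaggregated by gender x age group x income
rates = survey.groupby(["gender", "age_group", "income"])["literate"].mean()
print(rates)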
Bias exists in current data collection practices, leaving women and girls invisible in the data.
Biases in data collection and the invisibility of women and girls are deeply connected to embodiment, translation, and Artificial Intelligence.
Embodiment and data collection
Traditional data collection methods render diverse bodies invisible by generalizing the bodies of women and girls, omitting significant differences such as race, ethnicity, gender identity, disability status, or age. A feminist approach would rework these data so that they represent these bodies and their specific experiences.
Biases in biometric technologies and AI sensors: systems that analyze bodily data, such as facial recognition or health monitoring, often fail to capture the diversity of female or marginalized bodies, reinforcing stereotypes and exclusion.
Translation as an inclusive bridge
Inclusive language in data collection: when translating surveys, analyses, or results, there is a risk of erasing culturally specific terms, such as culturemes, that reflect the diversity of bodies and experiences. For example, concepts related to gender or bodily identity in one language may have no exact equivalent in another, which renders key issues invisible.
Interpreting meaning when translating data requires a cultural and feminist sensitivity that respects the linguistic and semantic differences in how bodies and gender are understood in different contexts. Without this sensitivity, translation can reinforce inequities rather than correct them.
Artificial Intelligence (AI) and feminist data collection
Opportunities: Artificial Intelligence has enormous potential to analyze large volumes of data in disaggregated form, identify patterns of exclusion, and expand the use of non-traditional sources (such as social media, sensors, and mobile device data). This could make visible experiences of women and girls that were previously ignored.
Risks of algorithmic bias: if AI algorithms are trained on biased historical data, they will replicate those inequities. This includes under-representing non-normative bodies or ignoring specific cultural contexts when interpreting translated data.
Intersectional design: to overcome these limitations, algorithms must be designed with feminist principles that include explicit parameters to identify and correct biases related to gender and diverse embodiments.
The guiding questions can be adapted to cover embodiment, translation, and AI:
• Who defines which bodies are relevant and how they are represented in the data?
• Who translates, and how do they guarantee that the voices of women and girls are faithfully represented?
• How does AI ensure that disaggregated data reflect intersectional experiences and do not perpetuate exclusion?
• Who decides which data sources are used and under what ethical criteria?
Towards inclusive Artificial Intelligence
A truly inclusive AI-driven data collection process must:
1. Design surveys and collection processes that recognize the diversity of bodies and lived experiences.
2. Incorporate translations that respect the cultural and linguistic context, allowing the data to capture the realities of women and girls in different settings.
3. Use AI to integrate non-traditional data sources, ensuring that models are constantly reviewed to mitigate algorithmic biases.
4. Be grounded in feminist principles that guide every stage of the process, from problem definition to the use of the data.
Acknowledgments: The authors acknowledge the COBE SST2 data provided by the NOAA/OAR/ESRL (PSL, Boulder, Colorado, USA), obtained from their website at https://psl.noaa.gov/data/gridded/data.cobe2.html, and the public IBTrACS database provided by the National Oceanic and Atmospheric Administration. A.P-A. acknowledges the support from UVigo PhD grants. J.C.F-A. and R.S. acknowledge the support from the Xunta de Galicia (Galician Regional Government).
CIT: Possible resources
is made up of the anesthesia machine (see the next section), the anesthesia vaporizers (see Chapter 3), the breathing system (see Chapter 4), the ventilator (see Chapter 6), and the waste gas scavenging system (see Chapter 5).
Components of the anesthesia machine
Table 3. Biophysical and economic accounts for the ecosystem services air purification, urban cooling, and climate regulation. Examples from studies conducted in Europe and the United States. Each row lists: city; biophysical accounts; economic value estimates; valuation model; reference.
Air purification
- Barcelona, Spain: 305.6 t/y; €1,115,908; Avoided costs/UFORE; Chaparro and Terradas (2009)
- Chicago, USA: 5,575 t/y; US$9.2 million; Avoided costs/C-BAT; McPherson et al. (1997)
- Modesto, USA: 154 t/y (3.7 lb/tree); US$1.48 million (US$16/tree); Willingness to pay; McPherson et al. (1999)
- Sacramento, USA: 1,457 t/y; US$28.7 million (US$1,500/ha); Avoided costs; Scott et al. (1998)
- Philadelphia, USA: 802 t/y; US$3.9 million/y; Avoided costs; Nowak et al. (2007)
Urban cooling/heating
- Chicago, USA: 0.5 GJ/tree (cooling), 2.1 GJ/tree (heating); US$15/tree, US$10/tree, US$50–90 per dwelling unit; Avoided costs/C-BAT; McPherson et al. (1997)
- Modesto, USA: 110,133 Mbtu/y (122 kWh/tree); US$870,000 (US$10/tree); Avoided costs; McPherson et al. (1999)
- Sacramento, USA: 157 GWh (cooling), 145 TJ (heating); US$18.5 million/y, US$1.3 million/y; Avoided costs; Simpson (1988)
Climate regulation (t of C/y)
- Barcelona, Spain: storage 113,437 t, sequestration 6,187 t/y (5,422 t/y net); not assessed; Avoided costs/UFORE; Chaparro and Terradas (2009)
- Modesto, USA: 13,900 t (336 lb/tree); US$460,000 (US$5/tree); Avoided costs; McPherson et al. (1999)
- Philadelphia, USA: storage 530,000 t, sequestration 16,100 t/y; US$9.8 million, US$297,000; Avoided costs/UFORE; Nowak et al. (2007)
- Washington, USA: 572 t/y (1.0 t/ha/y); US$13,156; Avoided costs/UFORE; Nowak and Crane (2002)
- Chicago, USA: storage 5.6 million t (14–18 t/ha); not assessed; Avoided costs/C-BAT; McPherson et al. (1997)
Notes: PM: particulate matter. UFORE: Urban Forest Effects model; C-BAT: Cost-Benefit Analysis of Trees. When pollutants are not specified, calculations include NO2, SO2, PM10, O3 and CO. Figures were not converted to net present values and should be taken as illustration only. (E. Gómez-Baggethun, D.N. Barton / Ecological Economics 86 (2013) 235–245)
I find this table really interesting because it assigns monetary value to these services in urban areas. It is striking that different parts of the world would otherwise have to spend more or less money to make up for them.
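Several of the dollar figures in the table rest on an avoided-cost calculation: tons of pollutant removed per year multiplied by an assumed damage (externality) cost per ton. As a rough illustration only, below is a minimal Python sketch of that logic; the pollutant list follows the table's footnote, but the unit costs and the example removal figures are hypothetical placeholders, not values from the cited studies.

```python
# Minimal sketch of an avoided-cost valuation: annual pollutant removal (t/y)
# times an assumed externality cost per ton. All numbers below are
# hypothetical placeholders, not figures from the studies in the table.
POLLUTANT_COST_PER_TON = {  # hypothetical US$ per metric ton avoided
    "NO2": 6_000,
    "SO2": 1_500,
    "PM10": 4_000,
    "O3": 6_500,
    "CO": 900,
}

def avoided_cost(removal_tons_per_year: dict) -> float:
    """Annual avoided cost (US$) implied by a city's pollutant removal estimates."""
    return sum(
        tons * POLLUTANT_COST_PER_TON[pollutant]
        for pollutant, tons in removal_tons_per_year.items()
    )

# Hypothetical removal split for a city removing ~300 t/y in total.
example = {"NO2": 60, "SO2": 30, "PM10": 140, "O3": 55, "CO": 15}
print(f"US${avoided_cost(example):,.0f} per year")
```

The same structure underlies the per-tree figures: divide the total avoided cost by the number of trees in the inventory.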
Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.
Summary and significance in the context of the field:
In this work, the authors conduct a detailed investigation of the 'ectopic'/'bystander' activation of the gene Mnx1 by enhancers of Shh, located in the neighboring TAD. TAD borders have been shown in a number of works to contribute to the remarkable specificity of enhancer-promoter choice, and the current dogma in the field is to view them as perfect boundaries to enhancer-promoter interaction. Notably, this current dogma also highlights a conundrum in our understanding of gene regulation, as available 3D genome data from both sequencing and microscopy show that TAD borders are regions of abrupt decrease in 3D proximity, but far from perfect borders, with numerous cross-TAD interactions detected by Hi-C and its variants and by single-cell microscopy (albeit fewer than the local intra-TAD interactions).
The authors show convincing data that Mnx1 indeed responds transcriptionally to several Shh-enhancers located more than 100 kb away and on the wrong side of the TAD boundary. The data come from developing mouse embryos, span several tissues, and include key controls for the specificity of the method. This provides convincing data with which to challenge the currently widely accepted view of TADs as significant regulatory boundaries, complementing the few examples indicating that such regulation is possible in special cases (see further discussion in 2b below). I believe this work represents an important and substantive contribution to the field and should ultimately be published, after a few notable issues have been addressed.
Major comments:
Does the CTCF degron substantially remove CTCF from the Mnx1/Shh TAD border?
In prior AID-CTCF degron studies 1,2, a considerable fraction of cohesin-dependent TAD borders is retained upon CTCF removal. Moreover, CTCF sites at these retained borders still show clear ChIP-seq peaks, even though the protein is >95% depleted and scarcely detectable by western blot. Thus, while I suspect the authors are correct that the shorter distance of the 35 kb border deletion contributes substantially to the increased crosstalk between Mnx1 and the Shh-enhancers, I suspect that part of the reason for the lack of a similar effect in the CTCF degron is the known difficulty of removing CTCF from this border. To argue that the border, but not CTCF, is important, it would be helpful to show by ChIP-seq that the CTCF signal is sufficiently lost in the degron and/or to show by Hi-C that this TAD border has been lost. Alternatively, the authors could tone down this claim to something more conservative, as I did not find it to be presented as a key conclusion of the paper as a whole.
Minor comments:
I believe the manuscript could be strengthened by some textual revisions of the introduction:

2a) In particular, in my opinion, the authors' description of the existing data on the importance of TAD borders in enhancer-promoter regulation is not sufficiently balanced or complete. The overall impression given by the text is that there is little serious evidence for a role of CTCF-marked borders in developmental enhancer specificity, and that they are perhaps only relevant in cancer. This is doubly unfortunate: it undermines the impact of the authors' work in expanding our view of what TAD borders are in a regulatory sense, and it presents an unbalanced view of work in the field. This is of course easily corrected. In particular, I recommend the following revisions:
The text states that "depletion of CTCF has only a small effect on transcription in cell culture (Nora et al., 2017; Hsieh et al., 2022)." It should be clarified that there is only a small acute effect on transcription (in the first 6-12 hours), which may tell us more about the timescale at which promoters sample, integrate and respond to changes in their enhancer environment than about the roles of CTCF in particular. Notably, this degradation is lethal: it results in massive changes in transcription after 4 days, and I suspect the authors agree that this lethal effect arises from CTCF's role in transcription regulation (if a key cytoskeletal protein or metabolic enzyme is removed, the primary cause of cell death is not transcriptional, but almost all the evidence for CTCF's vital role in the cell is linked in one way or another to transcription).

The discussion of TAD border deletions is more one-sided than ideal. I appreciate that the discussion is usually even more unbalanced when the opposite view is presented in the literature - many works cite only the examples where border deletion does lead to ectopic expression and phenotypes. However, the current text presents a subset of the border-deletion data in such a way as to give the impression that the authors are deeply skeptical that CTCF plays a role as an insulator of E-P interactions in a developmental context (rather than viewing it as only relevant in cancer). For example:
Pennacchio's lab has analyzed a series of TAD border deletions, finding further examples of both lethal effects and deletions with no apparent phenotype 3.
Deletion of TAD borders upstream of the FGF3/4/15 locus in mouse is embryonic lethal (particularly the border that Kim et al. label TB1, which they did not delete in their cancer model): https://www.biorxiv.org/content/10.1101/2024.08.03.606480v1
I appreciate that Bickmore and colleagues found quite phenotypically normal mice upon deletion of CTCF sites from Shh, but a balanced account might still reference the work from Uishiki et al., which indicates that in humans the CTCF site does play a role in Shh - ZRS communication 4.
As the authors are doubtless aware, Andrey and colleagues showed a CTCF-dependent enhancement of a sensitized ZRS enhancer 5.
Zuin et al., in an elegant experiment in which an enhancer is mobilized to different distances from its promoter by transposon induction, reported that enhancers mobilized outside the TAD were never detected to activate gene expression 6.
A balanced presentation of the data on CTCF's role might include some discussion of the above. In light of these earlier works, the findings the authors report about border bypass are all the more surprising.
2b) By contrast, direct evidence for cross-TAD interactions at endogenous loci has not, to my knowledge, been shown as clearly as in the current manuscript.
Recent work from Rocha and colleagues 7 showed evidence that some enhancers upstream of Sox2 can pass ectopically induced boundaries. While recent work has described examples of 'TAD border bypass' at endogenous loci (e.g. for Pitx1 8 and Hoxa regulation 9), these reports really expand the view of regulatory boundaries rather than provide evidence against it. They invoke a 3D stacking of boundaries that allows boundary-proximal enhancers and promoters to stack with (and so bypass) an intervening TAD boundary. Notably, in this view, enhancers and promoters that lie away from the borders of their respective TADs are still kept separate, and indeed the genes intervening between the distal enhancers and Pitx1 or Hoxa appear to follow these rules 2. Mnx1 and the Shh-enhancers, by contrast, do not appear to be an example of border stacking. Given that Sox2 at least is also at a TAD border, and that the position of the bypassing enhancers is not precisely known in the work from Rocha, it is possible that that case is also an example of boundary stacking; this appears less likely for Mnx1, which does not appear to lie at a CTCF-marked border, at least in mESCs.
Statistics
Some of the bar graphs quantifying the % of expressing cells do not have obvious associated n-values, and neither do some of the violin plots of the distances. I think all of these bar graphs would also benefit from error bars (e.g. obtained by bootstrapping from the sampled population), which would help the reader appreciate how sampling error and sample size affect the variation seen in the plots.
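To illustrate the kind of error bars suggested above, here is a minimal sketch (not the authors' code) of bootstrapping a confidence interval for the percentage of expressing cells from per-cell 0/1 calls; the function name and the example counts are hypothetical.

```python
# Minimal sketch: bootstrap error bars for a "% expressing cells" bar.
# Per-cell calls and counts below are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_percent_ci(is_expressing, n_boot=10_000, ci=95):
    """Percent of expressing cells with a bootstrap confidence interval.

    is_expressing: 1D array of 0/1 calls, one entry per scored cell.
    """
    calls = np.asarray(is_expressing)
    point = 100 * calls.mean()
    # Resample cells with replacement and recompute the percentage each time.
    idx = rng.integers(0, len(calls), size=(n_boot, len(calls)))
    boot = 100 * calls[idx].mean(axis=1)
    lo, hi = np.percentile(boot, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return point, (lo, hi)

# Hypothetical example: 37 of 120 scored nuclei show a transcription focus.
cells = np.array([1] * 37 + [0] * 83)
pct, (lo, hi) = bootstrap_percent_ci(cells)
print(f"{pct:.1f}% expressing, 95% CI [{lo:.1f}, {hi:.1f}] (n={len(cells)})")
```

The same resampling could be applied per embryo/tissue to report both the n of cells and the spread expected at that sample size.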
Recommendations for improving the figures
Figure 2
I would have preferred that the authors zoom in more on the FISH spots to help the reader appreciate the proximity. I do appreciate also seeing a field with more than one cell (to give some sense of the variability), but these images mostly contain only one spot pair per panel, and the spots appear exceedingly small because each panel covers parts of more than one nucleus. There is also unnecessary white space in this figure that could have been used for zoomed-in panels.
Figure 3 - image panels
The same applies to the image panels in this figure as for Figure 2: there is considerable unused white space, and the panels capture mostly a single nucleus and its pattern of DAPI-dense heterochromatin (which is not particularly relevant to the narrative), while the fluorescent spots that are the focus of the narrative are quite small. It is nice to have an example of the cell to see that this is not just random background (that there is just one spot per cell); in that sense, though, it is equally helpful to show that it is not just one cell in the field that has the signal-to-noise ratio (SNR) shown.
For this figure and the panels in Figure 2, I recommend showing a zoomed-out view with ~3 nuclei containing transcription foci (at least in the regions where the % transcribing is >60%, it should be fine to show adjacent transcribing nuclei; where it is ~10%, an image with 1 of 3 nuclei transcribing would also help convey the data). These zoomed-out images would give a sense of the SNR, and a zoom-in where the FISH spots are sizable would make it easier to see the neighboring transcripts. Extended Data Fig. 3 does a better job of showing the context of the limb and then zooming in to an image where the RNA spots are appreciable. It looks like the resolution of that zoom-in is lower, such that zooming in further on the spots in these data may not enhance the image.
Figure 3 - DNA FISH
It would be helpful to include, as an inset in the figure, a diagram indicating where the DNA FISH probes are located on the genome and their sizes in kb.
References cited above
Audience: I believe this work will be of general interest to the eukaryotic transcription community, the 4D genome community, and the developmental biology community.
My expertise: developmental biology, 4D genome biology, microscopy
Briefing Doc: Young people and political engagement - an INJEP perspective
Sources: Presentation by Laurent Lardeux, research officer at INJEP (Institut national de la jeunesse et de l'éducation populaire), given at a conference at the INSPÉ de Villeneuve d'Ascq on January 15, 2025.
Main themes:
Key ideas and facts:
1. Definition and dimensions of engagement:
The term "engagement" is polysemous and covers a wide range of approaches, from one-off mobilization to free and autonomous engagement, as well as more conventional forms tied to organizations.
INJEP approaches engagement along three complementary dimensions: * engagement tied to an ideal and to mobilizing values (solidarity, mutual aid, citizenship) * engagement tied to institutional mechanisms (rights, duties, participation schemes) * engagement through actual practices of involvement in collective life.
2. Evolution of young people's relationship to politics and democracy:
3. Young people's engagement in climate movements:
INJEP's survey of 52 young activists from the climate movement highlights the factors that triggered their engagement and their modes of action.
The climate movement is characterized by:
4. The role of political socialization (family, school) in engagement:
5. The factors that trigger young activists' engagement:
6. The relationship between alternative engagement and institutional participation:
Conclusion:
Young people's engagement is changing, marked by increased distrust of institutions and a search for new forms of participation.
Climate engagement is emblematic of this shift, with young people getting involved in more direct and contentious ways.
INJEP's research underscores the importance of better understanding the motivations and aspirations of these new generations, in order to create the conditions for constructive dialogue and renewed civic participation.