649 Matching Annotations
  1. Apr 2024
    1. Stated briefly the work of the intelligence department can be brought under the three heads: filing, indexing and summarising.

      For Kaiser, his "intelligence department" has three broad functions: summarizing, indexing, and filing.

    2. You cannot buy a ready-made intelligence department on which to run your business.
    3. it follows that no purchasable article can supply our individual wants so far as a key to our stock of information is concerned. We shall always be mainly dependent in this direction upon our own efforts to meet our own situation.

      I appreciate his emphasis on "always" here. Given the current rise of artificial intelligence and ChatGPT, though, this is obviously a problem people are attempting to overcome.

      Sadly, AI seems to be designed for the commercial masses in the same way that Google Search is (cross reference: https://hypothes.is/a/jx6MYvETEe6Ip2OnCJnJbg), so without a large enough model of your own interests, can AI solve your personal problems? And if so, how much data will it really need? To solve this problem, you need your own storehouse of personally curated data with which to teach an AI. Even if you have such a store, will the AI still proceed in the direction you would in reality, or will it follow some stochastic or random process from the point at which it leaves your personal data set?

      How do we get around the chicken-and-egg problem here? What else might the solution space look like outside of this sketch?

    4. When our stock of information has been systematically arranged, and is available for use, it has ceased to be a mere note-book, which it may have been at the start; it has gradually developed into the nucleus of an intelligence department, covering all the subjects and their ramifications within the scope of our activity.

      intelligence department!!!

      subtlety in definition of "mere note-book" versus card index

      Kaiser doesn't give a strong definition of the difference between notes (which here take on a fleeting sort of character) and notes indexed and arranged, but he gives the latter a powerful-sounding name and implies that there is useful power in the practice of arranging them.

  2. Mar 2024
    1. We can't use algorithms to filter for quality because they're not designed to. They're designed to steer you towards whatever's most profitable for their creators. That puts the onus on us, as users, to filter out the noise and that is increasingly difficult.
    1. So AI could be a tool to help move people into expression, to move past creative blocks

      To what extent are we using AI in this way in ds106? That is, using it as a starting point to build on rather than an end product?

    1. Video Summary

      The video is a discussion of generative artificial intelligence (AI) and its impact on education. Damien Dubreuil and Benoît introduce the subject by explaining their experience and their roles in using AI. They address educators' responsibility to understand and test these systems, stressing the importance of verifying AI-generated information.

      Highlights:
      1. Introduction to the webinar on generative AI [00:00:09]
        * Presentation of the speakers and the webinar format
        * Discussion of leading a school in the era of generative AI
      2. The speakers' experience and roles in AI [00:01:07]
        * Damien Dubreuil shares his experience as a principal and digital lead
        * Benoît describes his background in AI and his work with ESSEC and the hub France IA
      3. Understanding and defining generative AI [00:04:03]
        * Clarification of key AI terms and concepts
        * Explanation of generative AI and its evolution
      4. Educational responsibility in the face of generative AI [00:15:42]
        * The importance for educators of testing and understanding generative AI
        * Discussion of the reliability of generated information and the need for verification

      Video Summary

      This video explores the use of artificial intelligence (AI) in teaching, in particular for processing large amounts of textual data. It covers the time AI can save in writing self-evaluation reports and planning educational projects. It also highlights the regulatory challenges of using AI, such as the GDPR and the forthcoming European AI Act, while discussing hallucinations, a phenomenon in which an AI can generate false but plausible information.

      Highlights:
      1. Using AI in education [00:23:49]
        * Simplifies managing large amounts of data
        * Saves time in writing reports
        * Requires human verification to ensure accuracy
      2. Regulatory and ethical challenges [00:29:58]
        * Complying with the GDPR and anticipating the European AI Act
        * The importance of protecting personal data
        * The need for a responsible approach to using AI
      3. The phenomenon of AI hallucinations [00:36:47]
        * AI can create false but credible information
        * Hallucinations are inherent to the nature of language-processing systems
        * The importance of user verification to avoid misinformation
      4. Integrating AI into teaching practice [00:42:00]
        * Encouragement to experiment with AI to understand its capabilities and limits
        * Organizing workshops to familiarize staff and students with AI
        * Using AI as an assistive tool, not as a single source of truth

      Video Summary

      Part 3 of the video covers integrating artificial intelligence (AI) into educational tools, notably the rollout of "Mia seconde" to support high-school students. It raises questions about data protection (GDPR) and critical thinking toward these tools. The speaker shares his personal experience with a chatbot based on ChatGPT 3.5 on his school's website, which, though imperfect, serves as a tool for monitoring and improving his work.

      Highlights:
      1. Rollout of Mia seconde [00:47:02]
        * A solution integrating AI for high-school students
        * Already in an experimental phase
        * Raises questions about the GDPR and critical thinking
      2. Using AI in digital work environments [00:47:39]
        * Integration of AI tools in the academies
        * The example of the "W flash," already used by some
      3. Personal experience with a chatbot [00:48:20]
        * Setting up a chatbot on the school's website
        * Based on ChatGPT 3.5; needs improvement
        * Used for monitoring and improving the school's services
      4. Perspectives on the evolution of AI tools [00:55:44]
        * Discussion of upgrading the chatbot to ChatGPT 4
        * The importance of keeping data and answers up to date

    1. Video Summary

      This video explores the challenge of defining intelligence, in particular artificial intelligence (AI), and how our understanding of intelligence is shaped by a human perspective. It discusses how computers and neural networks process information, the narrow nature of current AI, and how large technology companies have shaped AI's development. The video also highlights intelligence in the natural world, such as that of octopuses and of organisms without brains, challenging the notion that human intelligence is unique.

      Highlights:
      1. Defining intelligence [00:01:48]
        * Intelligence is difficult to define
        * Intelligence is often measured against humans
        * AI is perceived as an imaginary finish line
      2. AI and computers [00:02:39]
        * Computers as an ingenious collection of electronics
        * Computer programs make computers useful
        * Current AI is the result of connectionism and neural networks
      3. Corporate AI and its limits [00:05:20]
        * The dominant AI is created by corporations
        * AI is good at classifying and recognizing images
        * Large-scale data collection by companies
      4. Intelligence in nature [00:13:40]
        * The intelligence of octopuses and their recognition of humans
        * Organisms without brains demonstrate intelligence
        * Communication networks among trees in forests

    1. "The Curse of Recursion: Training on Generated Data Makes Models Forget," a recent paper, goes beyond the ick factor of AI that is fed on botshit and delves into the mathematical consequences of AI coprophagia: https://arxiv.org/abs/2305.17493 Co-author Ross Anderson summarizes the finding neatly: "using model-generated content in training causes irreversible defects": https://www.lightbluetouchpaper.org/2023/06/06/will-gpt-models-choke-on-their-own-exhaust/ Which is all to say: even if you accept the mystical proposition that more training data "solves" the AI problems that constitute total unsuitability for high-value applications that justify the trillions in valuation analysts are touting, that training data is going to be ever more elusive.
    2. Botshit can be produced at a scale and velocity that beggars the imagination. Consider that Amazon has had to cap the number of self-published "books" an author can submit to a mere three books per day: https://www.theguardian.com/books/2023/sep/20/amazon-restricts-authors-from-self-publishing-more-than-three-books-a-day-after-ai-concerns
    3. For people inflating the current AI hype bubble, this idea that making the AI "more powerful" will correct its defects is key. Whenever an AI "hallucinates" in a way that seems to disqualify it from the high-value applications that justify the torrent of investment in the field, boosters say, "Sure, the AI isn't good enough…yet. But once we shovel an order of magnitude more training data into the hopper, we'll solve that, because (as everyone knows) making the computer 'more powerful' solves the AI problem"
    4. As the lawyers say, this "cites facts not in evidence." But let's stipulate that it's true for a moment. If all we need to make the AI better is more training data, is that something we can count on? Consider the problem of "botshit," Andre Spicer and co's very useful coinage describing "inaccurate or fabricated content" shat out at scale by AIs: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4678265 "Botshit" was coined last December, but the internet is already drowning in it. Desperate people, confronted with an economy modeled on a high-speed game of musical chairs in which the opportunities for a decent livelihood grow ever scarcer, are being scammed into generating mountains of botshit in the hopes of securing the elusive "passive income": https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week
    1. 3:50 "options are the right but not obligation to buy or sell"<br /> Who on earth is so stupid as to take part in such a gamble?<br /> This is just another intelligence test, exploiting the fact that most people are idiots.

  3. Feb 2024
    1. https://chat.openai.com/g/g-z5XcnT7cQ-zettel-critique-assistant

      Zettel Critique Assistant<br /> By Florian Lengyel<br /> Critique Zettels following three rules: Zettels should have a single focus, WikiLinks indicate a shift in focus, Zettels should be written for your future self. The GPT will suggest how to split multi-focused notes into separate notes. Create structure note from a list of note titles and abstracts.

      ᔥ[[ZettelDistraction]] in Share with us what is happening in your ZK this week. February 20, 2024

    1. Video summary [00:00:02] - [00:26:00]:

      This video presents a lecture on collective intelligence, organized as part of Brain Awareness Week and in partnership with the école nationale supérieure des officiers sapeurs-pompiers in Aix-en-Provence. It covers research on improving group performance, not only in terms of effectiveness and creativity but also of members' well-being.

      Highlights:
      + [00:02:17] Collective intelligence
        * Estelle Michinoff, university professor
        * Research on group performance
      + [00:10:27] Definition of a team
        * Regular communication
        * A common goal
      + [00:14:31] The variety of collectives
        * Permanent, autonomous, and virtual teams
        * Dynamic and temporary teams
      + [00:16:45] Turning a team of experts into an expert team
        * Integrating individual expertise
        * Creating group synergy
      + [00:18:11] Ingredients of an expert team
        * A diversity of skills
        * Managing group processes
      + [00:25:44] Facilitating teamwork
        * Facilitation techniques
        * Facilitators to energize the group

      Video summary [00:26:01] - [00:50:57]: The video addresses the importance of transactive memory and shared mental models in emergency and work teams. It explains how these elements contribute to teams' effectiveness, adaptability, and well-being, stressing the need for training and coordination to develop these skills.

      Highlights:
      + [00:26:01] The Big Five model
        * Effective leadership
        * Team orientation
      + [00:28:00] Communication
        * Closed feedback loops
        * A shared mental model
      + [00:29:00] Transactive memory
        * A shared information-processing system
        * Knowing who holds which expertise within the group
      + [00:38:00] Team training
        * Developing collaborative skills
        * The importance of simulation and debriefing

      Video summary 00:50:59 - 01:15:53:

      The video covers implementing team training in a variety of contexts, such as sport, healthcare, and aeronautics, with an emphasis on non-technical, collaborative skills. It also discusses managing heterogeneous teams and the importance of communication, clear roles, and coordination within teams.

      Highlights:
      + [00:51:12] Examples of team training
        * Mention of advanced methods in sport
        * Reference to coaches such as Claude Onesta
      + [00:52:15] Team training in healthcare
        * Development of methods such as healthcare CRM
        * The importance of specific collaborative skills
      + [00:53:42] Medical and non-medical difficulties
        * Discussion of the challenges of mutual understanding
        * The importance of listening within teams
      + [00:55:00] Managing heterogeneous teams
        * Heterogeneity fosters innovation and creativity
        * The importance of diverse points of view
      + [00:57:08] Predictors of a good team
        * Communication and shared mental models
        * Clear roles and common goals
      + [01:00:33] Practical experience in emergency services
        * The importance of adaptability and stress management
        * The crucial role of leaders' attitude and training

      Video summary 01:15:54 - 01:40:30: The video deals with crisis management and decision-making in complex situations. It stresses the importance of adaptability, collaboration, and innovation in the decision-making process.

      Highlights:
      + [01:16:02] Crisis management
        * Action plans and decision cycles
        * The importance of incremental progress and regular exchange
      + [01:17:39] Adaptability in the face of complexity
        * There are no ready-made solutions
        * The need for adaptation and technical skills
      + [01:19:01] The impact of new technologies
        * An exponential increase in complexity
        * Disruption of classic models of teamwork
      + [01:23:04] Engagement and motivation
        * The search for meaning and contributing to shared reflection
        * The importance of engaging younger generations
      + [01:26:00] Playful methods in project management
        * Using games to improve concentration and collaboration
        * Faster output and participant engagement
      + [01:32:18] Rethinking working methods
        * Questioning traditional management methods
        * Exploring new approaches to managing complex projects

      Video summary [01:40:31] - [02:04:14]:

      The video addresses collective intelligence and team management in complex contexts. It stresses the importance of non-technical skills, such as the capacity to engage and collaborate, which are crucial to successful meetings and effective output.

      Highlights:
      + [01:40:31] The importance of non-technical skills
        * Essential for collaboration
        * More decisive than technical skills
      + [01:46:01] Facilitation methods
        * Breaking social codes to stimulate creativity
        * Using games to encourage participation
      + [01:57:00] Managing complexity
        * The need to adapt to unprecedented situations
        * Collective intelligence as a problem-solving tool

      Video summary 02:04:15 - 02:05:32: Part 6 of the video addresses the need for project-management skills and new management styles to evolve.

      Highlights:
      + [02:04:15] Evolving skills
        * The need for change
        * New managers
      + [02:04:26] Thanks
        * Appreciation of the insights
        * Shared enthusiasm
      + [02:05:03] Conclusion
        * Thanks to the online attendees
        * See you next year

    1. Nicolas's question, which is a bit more practical: he asks whether 00:45:00 the tool (I'm thinking in particular of ChatGPT, though I imagine there are other tools) is capable of synthesizing the answers to satisfaction and quality questionnaires, and of producing 00:45:15 qualitative and quantitative analyses, and so on.
    2. A question from 00:42:01 Gislen, who wondered about copyright in works produced by a generative AI: can the person who wrote the prompt and chose the generation that 00:42:14 seemed satisfactory claim to be the author of the output? In short, what about copyright in the images, or indeed the texts, we obtain: 00:42:27 can we claim ownership of them or not?
    3. To use these generative AIs, we offer you 00:38:23 a few tips; we'll boil them down to three very concise pieces of advice so that you have them in mind on a single slide.
    4. I'll now move on quickly to give you an overview of the risks 00:32:18 in using these tools.
    5. It's time to hand over to Yacopo, who will present some 00:22:37 examples of use from Forgo. Yacopo?
    6. Write me a newsletter email promoting a free program to raise awareness of cybersecurity issues among actors in the social and solidarity economy and to support them.
    7. In a second 00:17:21 example, you can ask ChatGPT to help you with fundraising; the prompt here is: ask me questions that will 00:17:35 help me find the right arguments to convince a funder and obtain a grant.
    8. For example, say you want to organize an evening event, as is sometimes the case in quite a few associations; here 00:18:50 I've taken the example of a foundation that funds cancer-research projects.
    9. A few examples of use that 00:16:03 could be good examples for associations.
    10. Video summary [00:00:00] - [00:22:46]: The video presents a webinar session organized by Solidatech, in which the Cyber Forgo team discusses artificial intelligence (AI) and its use by associations. Elodie from Solidatech introduces the webinar and explains the practical details, followed by a presentation of Solidatech and its services for associations. The Cyber Forgo team then shares examples of using generative AI and advice for using it securely and effectively.

      Highlights:
      + [00:00:00] Introduction and goals of the webinar
        * Presentation by Elodie from Solidatech
        * Discussion of AI by Cyber Forgo
      + [00:02:15] Presentation of Solidatech
        * Digital services for associations
        * Solidatech's history and mission
      + [00:06:19] Examples of using generative AI
        * How AI can be useful to associations
        * Advice for responsible use
      + [00:09:35] Risks and precautions in using AI
        * The importance of security and ethics
        * Tips for avoiding AI's pitfalls

      Video summary [00:22:48] - [00:49:22]: The video presents the association Data for Good, which supports technology projects with social impact. It highlights the use of AI to extract data, analyze images, and accelerate technical development with limited resources.

      Highlights:
      + [00:22:48] How Data for Good is structured
        * Seasons of tech projects
        * Recruitment under consideration
      + [00:24:56] Extracting unstructured data
        * An observatory on tax evasion
      + [00:25:09] Image analysis
        * Environmental projects
        * Detecting fires and plastic pollution
      + [00:25:35] Complex technical development
        * Ambitious projects with small teams
      + [00:30:50] The Carbon Bombs project
        * Visualizing fossil-fuel megaprojects
      + [00:32:16] Risks of generative AI
        * Bias, false information, data leaks
      + [00:38:23] Advice for using generative AI
        * Write clear prompts
        * Do not disclose personal information
        * Verify the answers you get

      Video summary 00:49:23 - 01:04:05: The video covers writing prompts, data confidentiality with generative tools, and the use of AI in associations. It stresses the importance of transparency and data security.

      Key points:
      + [00:49:23] Writing prompts
        * Advice for constructing prompts
      + [00:50:09] Apps for prompts
        * Using apps to create prompts
      + [00:51:13] Advice on prompts
        * Write your own prompts to understand the tool better
      + [00:52:01] Data security
        * Risks in using generative tools
      + [00:54:06] Confidentiality and AI
        * Caution with personal information
      + [00:57:18] AI for FAQs
        * Thinking about using AI to improve FAQs

    1. Video summary [00:00:00] - [00:44:18]:

      This video is a lecture by Jean-Gabriel Ganascia, professor of computer science at Sorbonne Université, on the theme of virtual servitudes: the ways in which digital technologies, and artificial intelligence in particular, can exert forms of constraint and oppression on individuals and societies. He analyzes the ethical, political, and legal stakes of these technologies and proposes lines of thought for freeing ourselves from them.

      Key points:
      + [00:00:07] He pays tribute to Blaise Pascal, a forerunner of the calculating machine and of reflection on artificial intelligence
        * He quotes a text in which Pascal distinguishes machines from animals by their lack of will
        * He criticizes the transhumanists who want to attribute a will to machines
      + [00:02:46] He describes the online world we live in, where our lives are subject to information flows and to artificial intelligence
        * He acknowledges the benefits of digital technology in many fields (the web, healthcare, robotics, ecology, etc.)
        * He denounces the new forms of coercion and oppression being exerted at the cognitive level
        * He compares the current situation to the one described by Étienne de La Boétie in his Discourse on Voluntary Servitude
      + [00:09:01] He lays out the limits of digital-ethics charters, which multiply without being effective
        * He shows that the principles invoked (autonomy, dignity, beneficence, non-maleficence, etc.) are ambiguous, relative, and imprecise
        * He illustrates the social and political consequences of information and communication technologies with concrete examples (facial recognition, social credit, the CLOUD Act, etc.)
        * He argues for a more pragmatic and more critical approach, one that takes into account actual uses and the ways technologies are appropriated
      + [00:20:10] He proposes a digital ethics founded on individual and collective responsibility
        * He distinguishes ethics from morality, professional codes of conduct, and law
        * He defines ethics as a reflection on the values and norms that guide our actions
        * He draws on the thought of Jacques Derrida, who emphasizes openness to the future and to the unforeseeable
        * He quotes a text by Albert Camus offering journalists four pieces of advice: be lucid, refuse to spread hatred, use irony, and be obstinate

    1. Video summary [00:00:00] - [00:54:42]:

      This video is a panel discussion on facilitating a community of practice: its benefits, its challenges, and tips for doing it. It brings together four facilitators from different communities of FADIO, an inter-order distance-education network in Quebec, along with a researcher who has studied participation in a community of practice. The video is moderated by Julie Bélan, responsible for FADIO's communities.

      Highlights:
      + [00:00:36] Reasons to become a community facilitator
        * The desire to move a group forward and develop expertise
        * The chance to keep learning and to network
        * The opportunity to share one's contribution and be called upon
        * The need to reflect on one's practice and draw inspiration from others
      + [00:09:17] The challenges of facilitating a community
        * Managing time, space, and tools
        * Mobilizing and retaining participants
        * Recognition and support from leadership
        * The diversity and complementarity of profiles
      + [00:27:13] Tips for facilitating a community
        * Start from participants' needs and interests
        * Be flexible and adapt to the group's dynamics
        * Vary facilitation techniques and strategies
        * Produce something concrete and transferable
        * Get training and support as a facilitator

    1. Célia Zolynski, "Du calcul du sujet à sa mise en pouvoir d'agir" ("From the computed subject to empowering it to act")

      Video summary [00:00:00] - [01:10:00]:

      This video presents a lecture by Célia Zolynski, professor of digital law at Université Paris 1, on the theme "From the computed subject to empowering it to act." She lays out the stakes of regulating artificial intelligence (AI) systems and the potential impacts of these systems on people's fundamental rights and freedoms. She proposes complementing the regulator's current approach, which protects the computed subject through transparency and accountability obligations, with an approach that aims to empower the subject to act by offering ways to configure, play with, and curate digital content and services.

      Key points:
      + [00:00:00] The general context of regulating AI systems
        * The stakes of protecting fundamental rights against the impacts of AI systems
        * The texts under discussion at the European and international levels
        * The notion of the computed subject as a yardstick for the obligations imposed on providers of AI systems
      + [00:19:45] The complementary approach of empowering the subject to act
        * The limits of the current approach based on transparency and accountability
        * Avenues for strengthening the subject's autonomy and dignity in the face of AI systems
        * Examples of configuring, playing with, and curating digital content and services
      + [00:44:45] Questions and exchanges with the audience
        * How obligations of transparency and data access are implemented
        * The difficulty of reconciling the various texts and levels of regulation
        * Prospects for research and interdisciplinary collaboration on these topics

    1. Despite the opportunities of AI-based technologies for teaching and learning, they have also ethical issues.

      Yes, I agree with this statement. Ethical issues range from academic-integrity concerns to data privacy. AI technology based on algorithmic applications intentionally collects human data from its users, who do not know specifically what kinds of data, or what quantities of it, are collected.

    1. Governments' voluntary commitments to decarbonization fall far short. A report published by the United Nations as a basis for the upcoming COP28 finds that in 2030 roughly 20 to 23 gigatonnes more CO<sub>2</sub> are set to be emitted than would be compatible with the 1.5° target. For the first time, an official UN document calls for an end to the use of fossil fuels. https://www.theguardian.com/environment/2023/sep/08/un-report-calls-for-phasing-out-of-fossil-fuels-as-paris-climate-goals-being-missed

      Report: https://unfccc.int/sites/default/files/resource/EMBARGOED_DRAFT_Sythesis-report-of-the-technical-dialogue-of-the-first-global-stocktake.pdf

      Report: https://unfccc.int/documents/631600

    1. Joy, Bill. “Why the Future Doesn’t Need Us.” Wired, April 1, 2000. https://www.wired.com/2000/04/joy-2/.

      Annotation url: urn:x-pdf:753822a812c861180bef23232a806ec0

      Annotations: https://jonudell.info/h/facet/?user=chrisaldrich&url=urn%3Ax-pdf%3A753822a812c861180bef23232a806ec0&max=100&exactTagSearch=true&expanded=true

    2. The experiences of the atomic scientists clearly show the need to take personal responsibility, the danger that things will move too fast, and the way in which a process can take on a life of its own. We can, as they did, create insurmountable problems in almost no time flat. We must do more thinking up front if we are not to be similarly surprised and shocked by the consequences of our inventions.

      Bill Joy's mention that insurmountable problems can "take on a life of [their] own" is a spectacular reason for having a solid definition of what "life" is, so that we might have better means of subverting it in specific and potentially catastrophic situations.

    3. The GNR technologies do not divide clearly into commercial and military uses; given their potential in the market, it's hard to imagine pursuing them only in national laboratories. With their widespread commercial pursuit, enforcing relinquishment will require a verification regime similar to that for biological weapons, but on an unprecedented scale. This, inevitably, will raise tensions between our individual privacy and desire for proprietary information, and the need for verification to protect us all. We will undoubtedly encounter strong resistance to this loss of privacy and freedom of action.

      While Joy looks at the Biological and Chemical Weapons Conventions as well as nuclear nonproliferation ideas, the entirety of what he's looking at is also embedded in the idea of gun control in the United States as well. We could choose better, but we actively choose against our better interests.

      What role does toxic capitalism have in pushing us towards these antithetical goals? The gun industry and gun lobby have exerted tremendous influence on that front. Surely ChatGPT and other LLM and AI tools will begin pushing on the profit-making levers shortly.

    4. Now, as then, we are creators of new technologies and stars of the imagined future, driven—this time by great financial rewards and global competition—despite the clear dangers, hardly evaluating what it may be like to try to live in a world that is the realistic outcome of what we are creating and imagining.
  4. Jan 2024
    1. How soon could such an intelligent robot be built? The coming advances in computing power seem to make it possible by 2030.

      In 2000, Bill Joy predicted that advances in computing would allow an intelligent robot to be built by 2030.

    2. in his history of such ideas, Darwin Among the Machines, George Dyson warns: “In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.”
    3. Uncontrolled self-replication in these newer technologies runs a much greater risk: a risk of substantial damage in the physical world.

      As a case in point, the self-replication of misinformation on social media networks has become a substantial physical risk in the early 21st century, causing not only swings in elections, but riots, takeovers, swings in the stock market (the GameStop short squeeze of January 2021), and mob killings. It is incredibly difficult to create risk assessments for these sorts of future harms.

      In biology, we see major damage to a wide variety of species as the result of uncontrolled self-replication. We call it cancer.

      We also see programmed processes in biological settings including apoptosis and necrosis as means of avoiding major harms. What might these look like with respect to artificial intelligence?

    4. Moravec’s view is that the robots will eventually succeed us—that humans clearly face extinction.

      Joy contends that one of Hans Moravec's views in his book Robot: Mere Machine to Transcendent Mind is that robots will push the human species into extinction in much the same way that early North American placental species eliminated the South American marsupials.

    5. Our overuse of antibiotics has led to what may be the biggest such problem so far: the emergence of antibiotic-resistant and much more dangerous bacteria. Similar things happened when attempts to eliminate malarial mosquitoes using DDT caused them to acquire DDT resistance; malarial parasites likewise acquired multi-drug-resistant genes.

      Just as mosquitoes can "acquire" (evolve) DDT resistance or bacteria might evolve antibiotic resistance, might not humans evolve AI resistance? How fast might we do this? On what timeline? Will the pressure be slowly built up over time, or will the onset be so quick that extinction is the only outcome?

    1. by far the most illuminating to me is the idea that mental causation works from virtual futures towards the past 00:33:17 whereas physical causation works from the past towards the future and these two streams of causation sort of overlap in the present

      for - comparison - mental vs physical causation - adjacency - Michael Levin's definition of intelligence - Sheldrake's mental vs physical causation

      key insight - comparison - mental vs physical causation - mental causation works from virtual futures to past - physical causation works from past to future - this is an interesting way of seeing things

      adjacency - between - direction of mental vs physical causation - Michael Levin's definition of intelligence (adopting William James's idea) and cognition and cognitive light cones of living organisms:: - having a goal - having autonomy and agency to reach that goal - adjacency statement - Levin adopts a definition of cognition from scientific predecessors that relate to goal activity. - When an organism chooses one specific behavioral trajectory over all other possible ones in order to reach a goal - this is none other than choosing a virtual future that projects back to the present - In our species, innovation and design is based on this future-to-present backwards projection

    1. it's a field of diverse intelligence

      for - definition - diverse intelligence

      definition - diverse intelligence - developing a framework that encompasses the wide field of intelligence of living systems

    1. Use Glaze, a system designed to protect human artists by disrupting style mimicry, to protect what you create from being stolen under the guise of 'training AI'; the term should really be 'thievery'.

  5. Dec 2023
    1. This World Encyclopedia would be the mental background of every intelligent man in the world.

      Who, here, defines intelligence?

      How would comparative anthropology between societies view such an effort? Would all societies support such an endeavor?

    1. Matt Gross (He/Him) • 1st • Vice President, Digital Initiatives at Archetype Media • 4d • So, here's an interesting project I launched two weeks ago: The HistoryNet Podcast, a mostly automated transformation of HistoryNet's archive of 25,000+ stories into an AI-driven daily podcast, powered by Instaread and Zapier. The voices are pretty good! The stories are better than pretty good! The implications are... maybe terrifying? Curious to hear what you think. Listen at https://lnkd.in/emUTduyC or, as they always say, "wherever you get your podcasts."

      https://www.linkedin.com/feed/update/urn:li:activity:7142905086325780480/

      One can now relatively easily use various tools in combination with artificial intelligence-based voices and reading to convert large corpuses of text into audiobooks, podcasts or other spoken media.

    1. there's this broader issue of of being able to get inside other people's heads as we're driving down the road all the time we're looking at other 00:48:05 people and because we have very advanced theories of mind
      • for: comparison - AI - HI - example - driving, comparison - artificial vs human intelligence - example - driving
    2. in my view the biggest the most dangerous phenomenon on the human on our planet is uh human stupidity it's not artificial intelligence
      • for: meme - human stupidity is more dangerous than artificial intelligence

      • meme: human stupidity is more dangerous than artificial intelligence

      • author: Nikola Danaylov
      • date: 2021
  6. Nov 2023
    1. In this article about Round our Way, the organization he founded in Britain, Roger Hardy explains that working-class communities are hit especially hard by global heating and its consequences, and that they know it. Only a climate movement for "ordinary people", he argues, can build the foundation for a societal consensus on climate protection. https://www.theguardian.com/environment/commentisfree/2023/nov/21/working-class-people-climate-crisis-policy

    1. overpopulation is just another intelligence test, and most people are failing, again.<br /> the problem is pacifism, the solution is permanent tribal warfare and legal serial murder.<br /> but first there is depopulation, killing 95% of today's population. fucking useless eaters... byye! no one will miss you.

      Delete The Garbage. World Cure. RD9 Virus. The Brothers Grimsby 2016<br /> https://www.youtube.com/watch?v=HGG0Nq3BwqQ

    1. I use expiration dates and refrigerators to make a point about #AI and over-reliance, and @dajb uses ducks. #nailingit @weareopencoop

      —epilepticrabbit @epilepticrabbit@social.coop on Nov 09, 2023, 11:51 at https://mastodon.social/@epilepticrabbit@social.coop/111382329524902140

    1. In a letter, more than 100 British energy companies intend to warn Prime Minister Rishi Sunak against abandoning the current decarbonization policy. A review has only just shown the dangers posed by Britain's excessive dependence on gas supplies. According to that report, £327 billion of investment is needed to reach the net zero target; so far, however, the government has committed to only about £22.5 billion. https://www.theguardian.com/environment/2023/jul/16/top-uk-energy-firms-to-warn-rishi-sunak-dont-back-off-green-agenda

      Chris Skidmore's net zero review: https://www.gov.uk/government/publications/review-of-net-zero

      Report by the Office for Budget Responsibility: https://obr.uk/frs/fiscal-risks-and-sustainability-july-2023/#:~:text=In%20this%2C%20our%20second%20FRS,on%20the%20UK's%20public%20debt.

    1. As an ex-Viv (w/ Siri team) eng, let me help ease everyone's future trauma as well with the Fundamentals of Assisted Intelligence.<br><br>Make no mistake, OpenAI is building a new kind of computer, beyond just an LLM for a middleware / frontend. Key parts they'll need to pull it off:… https://t.co/uIbMChqRF9

      — Rob Phillips 🤖🦾 (@iwasrobbed) October 29, 2023
  7. Oct 2023
    1. Wang et al. "Scientific discovery in the age of artificial intelligence", Nature, 2023.

      A paper about the current state of using AI/ML for scientific discovery, connected with the AI4Science workshops at major conferences.

      (NOTE: since Springer/Nature don't allow public pdfs to be linked without a paywall, we can't use hypothesis directly on the pdf of the paper, this link is to the website version of it which is what we'll use to guide discussion during the reading group.)

    1. Thank you. Steve, for raising the alarm on this catastrophe! One minor comment. It should be QC'ed, not QA'ed. Quality control is done first. Quality Assurance (QA) comes after QC. QA is basically checking the calculations and the test results in the batch records. I worked in QC and QA for big pharma for decades. I tried to warn people in early 2021 that there's no way the quality control testing could be done at warp speed. Nobody listened to me despite my decades of experience in big pharma!

      "warp speed" sounds fancy, plus "its an emergency, we have no time"...

      it really was just an intelligence test, a global-scale exploit of trust in authorities. (and lets be honest, stupid people deserve to die.)

      problem is, they (elites, military, industry) seem to go for actual forced vaccinations, which would be an escalation from psychological warfare to actual warfare against the 95% "useless eaters".

      personally, i would prefer if they would globally legalize serial murder and assault rifles, then "we the people" would solve the overpopulation. (because: serial murder is the only alternative to mass murder.) but they are scared that we would also kill the wrong people (their servants because they are evil or stupid). (anyone crying about depopulation should suggest better solutions. denying overpopulation is just another failed intelligence test.)

    1. Envisioning the next wave of emergent AI

      Are we stretching too far by saying that AI are currently emergent? Isn't this like saying that card indexes of the early 20th century were computers? In reality they were data storage, and the "computing" took place when humans did the actual data processing/thinking to come up with new results.

      Emergence would actually seem to be the point at which the AI takes its own output and continues processing (successfully) on it.

    1. Hans Bethe, who won the Nobel Prize in Physics in 1967, remarked: “I have sometimes wondered whether a brain like von Neumann’s does not indicate a species superior to that of man.”
  8. Sep 2023
      • for: bio-buddhism, buddhism - AI, care as the driver of intelligence, Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane, care drive, care light cone, multiscale competency architecture of life, nonduality, no-self, self - illusion, self - constructed, self - deconstruction, Bodhisattva vow
      • title: Biology, Buddhism, and AI: Care as the Driver of Intelligence
      • author: Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane
      • date: May 16, 2022
      • source: https://www.mdpi.com/1099-4300/24/5/710/htm

      • summary

        • a trans-disciplinary attempt to develop a framework to deal with a diversity of emerging non-traditional intelligence from new bio-engineered species to AI based on the Buddhist conception of care and compassion for the other.
        • very thought-provoking and some of the explanations and comparisons to evolution actually help to cast a new light on old Buddhist ideas.
        • this is a trans-disciplinary paper synthesizing Buddhist concepts with evolutionary biology
    1. “intelligence as care
      • for: wisdom and compassion, intelligence as care
      • comment
        • the slogan "intelligence as care" seems parallel to the Buddhist slogan of "wisdom and compassion" where:
          • care is analogous to compassion
          • intelligence is analogous to wisdom
    2. Given that intelligent behavior does not require traditional brains [16,18], and can take place in many spaces besides the familiar 3D space of motile behavior (e.g., physiological, metabolic, anatomical, and other kinds of problem spaces), how can we develop rigorous formalisms for recognizing, designing, and relating to truly diverse intelligences?
      • for: key question
      • key question
      • paraphrase
        • Given that
          • intelligent behavior does not require traditional brains, and
          • can take place in many spaces besides the familiar 3D space of motile behavior, for example
            • physiological space,
            • metabolic space,
            • anatomical space, and
            • other kinds of problem spaces,
          • how can we develop rigorous formalisms for
            • recognizing,
            • designing, and
            • relating
          • to truly diverse intelligences?
    1. all intelligence collective intelligence
      • for: quote, quote - intelligence, major evolutionary transition, MET, quote - collective intelligence, quote - Michael Levin
      • quote
        • all intelligence is collective intelligence
      • author: Michael Levin

      • comment

        • Major evolutionary transitions (METs) are milestones in evolution in which collections of distinct individual life forms unite into one cohesive collection due to improved fitness and begin to replicate as a new individual unit
        • hence the Deep Humanity term individual/collective gestalt, developed to deal with the level of human organisms and the societies and groups they belong to, applies to evolutionary biology as well through the MET, where a new higher-level individual is formed out of a collective of lower-level individuals
    2. an overview of the paper
      • for: paper overview, paper overview - the computational boundary of a self
      • paper overview

        • motivated by a 2018 Templeton Foundation conference to present ideas on unconventional and diverse intelligence
        • Levin was interested in any conceivable type of cognitive system and wanted to find a way to universally characterize them all

          • how are they detected
          • how to understand them
          • how to relate to them and
          • how to create them
        • Levin had been thinking about this for years

        • Levin adopts a cybernetic definition of intelligence proposed by William James that focuses on the competency to reach a defined goal by different paths
        • Navigation plays a critical role in this definition.
    1. if one Zooms in you find out that we are all in fact Collective intelligences
      • for: quote, quote - Michael Levin, quote - multicellular organism, quote individual/collective gestalt, individual/collective gestalt
      • quote
        • If one zooms in, you find out that we are all in fact collective intelligences
      • author: Michael Levin
      • date: 2022
      • source: https://www.youtube.com/watch?v=jLiHLDrOTW8
      • for: superorganism, multi-level superorganism, collective intelligence, individual-collective gestalt, Michael Levin,

      • title: Cell Intelligence in Physiological and Morphological Spaces

      • author: Michael Levin
      • date: 2022
      • comment
        • This is a talk on collective intelligence in unconventional spaces
    1. R.U.R.: Rossum’s Universal Robots, drama in three acts by Karel Čapek, published in 1920 and performed in 1921. This cautionary play, for which Čapek invented the word robot (derived from the Czech word for forced labour), involves a scientist named Rossum who discovers the secret of creating humanlike machines. He establishes a factory to produce and distribute these mechanisms worldwide. Another scientist decides to make the robots more human, which he does by gradually adding such traits as the capacity to feel pain. Years later, the robots, who were created to serve humans, have come to dominate them completely.
    1. What do you do then? You can take the book to someone else who, you think, can read better than you, and have him explain the parts that trouble you. ("He" may be a living person or another book—a commentary or textbook.)

      This may be an interesting use case for artificial intelligence tools like ChatGPT which can provide the reader of complex material with simplified synopses to allow better penetration of the material (potentially by removing jargon, argot, etc.)

    2. Active Reading

      He then pushes a button and "plays back" the opinion whenever it seems appropriate to do so. He has performed acceptably without having had to think.

      This seems to be a reasonable argument to make for those who ask, why read? why take notes? especially when we can use search and artificial intelligence to do the work for us. Can we really?

  9. Aug 2023
  10. Jul 2023
    1. Epstein, Ziv, Hertzmann, Aaron, Herman, Laura, Mahari, Robert, Frank, Morgan R., Groh, Matthew, Schroeder, Hope et al. "Art and the science of generative AI: A deeper dive." ArXiv, (2023). Accessed July 21, 2023. https://doi.org/10.1126/science.adh4451.

      Abstract

      A new class of tools, colloquially called generative AI, can produce high-quality artistic media for visual arts, concept art, music, fiction, literature, video, and animation. The generative capabilities of these tools are likely to fundamentally alter the creative processes by which creators formulate ideas and put them into production. As creativity is reimagined, so too may be many sectors of society. Understanding the impact of generative AI - and making policy decisions around it - requires new interdisciplinary scientific inquiry into culture, economics, law, algorithms, and the interaction of technology and creativity. We argue that generative AI is not the harbinger of art's demise, but rather is a new medium with its own distinct affordances. In this vein, we consider the impacts of this new medium on creators across four themes: aesthetics and culture, legal questions of ownership and credit, the future of creative work, and impacts on the contemporary media ecosystem. Across these themes, we highlight key research questions and directions to inform policy and beneficial uses of the technology.

    1. A.G.I.-ism distracts from finding better ways to augment intelligence.
      • There are people who are designing systems to prioritize augmenting human intelligence and use machines to assist us
      • For instance, this was the vision of Doug Engelbart
  11. Jun 2023
    1. Reflection enters the picture when we want to allow agents to reflect upon themselves and their own thoughts, beliefs, and plans. Agents that have this ability we call introspective agents.
    1. In the first year after the invasion of Ukraine in February 2022, Britain bought £19.3 billion of oil and gas from authoritarian petrostates other than Russia. An analysis by Desmog finds that Britain imported £125.7 billion of fossil fuels this year, crossing the £100 billion mark for the first time, even though a reduction in oil and gas consumption is urgently needed. Despite the embargo, Russia also sold a record amount of oil this year. https://www.theguardian.com/environment/2023/jun/09/193bn-of-fossil-fuels-imported-by-uk-from-authoritarian-states-in-year-since-ukraine-war

  12. learn-us-east-1-prod-fleet01-xythos.content.blackboardcdn.com
    1. The problem with that presumption is that people are all too willing to lower standards in order to make the purported newcomer appear smart. Just as people are willing to bend over backwards and make themselves stupid in order to make an AI interface appear smart

      AI has recently become such a big thing in our lives today. For a while I was seeing ChatGPT and Snapchat AI all over the media. I feel like people ask these sites stupid questions that they already know the answer to because they don't want to take a few minutes to think about the answer. I found a website stating how many people use AI, and not surprisingly, it shows that 27% of Americans say they use it several times a day. I can't imagine how many people use it per year.

    1. there is a scenario 00:18:21 uh possibly a likely scenario where we live in a Utopia where we really never have to worry again where we stop messing up our our planet because intelligence is not a bad commodity more 00:18:35 intelligence is good the problems in our planet today are not because of our intelligence they are because of our limited intelligence
      • limited (machine) intelligence

        • cannot help but exist
        • if the original (human) authors of the AI code are themselves limited in their intelligence
      • comment

        • this limitation is essentially what will result in AI progress traps
        • Indeed,
          • progress and its shadow artefacts,
          • progress traps,
          • are the proper framework to analyze the existential dilemma posed by AI
    1. Project Tailwind by Steven Johnson

    2. I’ve also found that Tailwind works extremely well as an extension of my memory. I’ve uploaded my “spark file” of personal notes that date back almost twenty years, and using that as a source, I can ask remarkably open-ended questions—“did I ever write anything about 19th-century urban planning” or “what was the deal with that story about Houdini and Conan Doyle?”—and Tailwind will give me a cogent summary weaving together information from multiple notes. And it’s all accompanied by citations if I want to refer to the original direct quotes for whatever reason.

      This sounds like the sort of personalized AI tool I've been wishing for since the early ChatGPT models if not from even earlier dreams that predate that....

  13. May 2023
    1. Deep Learning (DL): A Technique for Implementing Machine Learning. Subfield of ML that uses specialized techniques involving multi-layer (2+) artificial neural networks. Layering allows cascaded learning and abstraction levels (e.g. line -> shape -> object -> scene). Computationally intensive; enabled by clouds, GPUs, and specialized HW such as FPGAs, TPUs, etc.

      [29] AI - Deep Learning

    1. The object of the present volume is to point out the effects and the advantages which arise from the use of tools and machines ;—to endeavour to classify their modes of action ;—and to trace both the causes and the consequences of applying machinery to supersede the skill and power of the human arm.

      [28] AI - precedents...

    1. Epidemiologist Michael Abramson, who led the research, found that the participants who texted more often tended to work faster but score lower on the tests.

      [21] AI - Skills Erosion

    1. An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms. It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.

      [21] AI Nuances

    1. According to him, there are several goals connected to AI alignment that need to be addressed:

      [20] AI - Alignment Goals

    1. The following table lists the results that we visualized in the graphic.

      [18] AI - Increased sophistication

    1. https://xtiles.app/62e9167a308426236b1d2b91 https://xtiles.app/62c29d1866533a18d0717564

      Presumably this is part of xTiles' planning for various personas and strategy.

    1. https://xtiles.app/6249b3f811d8db0dcd173512

      Fascinating to see an xTiles page named "competitive analysis", but an interesting example of "eating their own dogfood" to make it.

    1. Get some of the lowest ad prices while protecting your brand with a system backed by Verity and Grapeshot. Rest easy that your ads will only show up where you’d like them to.

      Is there a word or phrase in the advertising space which covers the filtering out of websites and networks which have objectionable material one doesn't want their content running against?

      Contextual intelligence seems to be one...

      Apparently the platforms Verity and Grapeshot (from Oracle) protect against this.

    1. Tagging and linking with AI (Napkin.one) by Nicole van der Hoeven

      https://www.youtube.com/watch?v=p2E3gRXiLYY

      Nicole underlines the value of a good user interface for traversing one's notes. She'd had issues with tagging things in Obsidian using their #tag functionality, but never with their [[WikiLink]] functionality. Something about the autotagging done by Napkin's artificial intelligence makes the process easier for her. Some of this may be down to how their user interface makes it easier/more intuitive as well as how it changes and presents related notes in succession.

      Most interesting however is the visual presentation of notes and tags in conjunction with an outliner for taking one's notes and composing a draft using drag and drop.

      Napkin as a visual layer over tooling like Obsidian, Logseq, et al. would be a much more compelling choice for me in terms of taking my pre-existing data and doing something useful with it rather than just creating yet another digital copy of all my things (and potentially needing sync to keep them up to date).

      What is Napkin doing with all of their user's data?

  14. Apr 2023
    1. Unfortunately, building on GWAS studies, many so-called "socio-genomics" studies advance the idea that we are, or are not, genetically predetermined to pursue higher education (the idea being that genetic variations influence the IQ variable, whose limits have just been recalled...). According to this school of thought, our intellectual capacities are written in our genome. These ideas are widely disseminated by the scientific press as well as by general-interest media and certain books, such as those of the psychologists Kathryn Paige Harden or Robert Plomin. They inevitably lead one to wonder what the point is of promoting education for all when some people would be, so to speak, "genetically impervious" to it...
    1. Abstract

      Recent innovations in artificial intelligence (AI) are raising new questions about how copyright law principles such as authorship, infringement, and fair use will apply to content created or used by AI. So-called “generative AI” computer programs—such as Open AI’s DALL-E 2 and ChatGPT programs, Stability AI’s Stable Diffusion program, and Midjourney’s self-titled program—are able to generate new images, texts, and other content (or “outputs”) in response to a user’s textual prompts (or “inputs”). These generative AI programs are “trained” to generate such works partly by exposing them to large quantities of existing works such as writings, photos, paintings, and other artworks. This Legal Sidebar explores questions that courts and the U.S. Copyright Office have begun to confront regarding whether the outputs of generative AI programs are entitled to copyright protection as well as how training and using these programs might infringe copyrights in other works.

    1. It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use.

      This isn't true. The Stochastic Parrots paper outlines other avenues for reining in the harms of language models like GPTs.

  15. Mar 2023
    1. For us the rule of brawn has been broken, and intelligence has become the decisive factor in success. Schools, railroads, factories, and the largest commercial concerns may be successfully managed by persons who are physically weak or even sickly. One who has intelligence constantly measures opportunities against his own strength or weakness and adjusts himself to conditions by following those leads which promise most toward the realization of his individual possibilities.

      I think intelligence has always been a determining factor of success. When someone is smart or intelligent, we tend to assume that they will be successful in life. I think this is important to the history of psychology because we have long been determined to understand intelligence, and we graded it based on the scores people received. We discussed how intelligence differs across people and how people deemed feeble-minded were seen as potential criminals. We discussed how superiors become leaders and lead civilization.

    2. Industrial concerns doubtless suffer enormous losses from the employment of persons whose mental ability is not equal to the tasks they are expected to perform. The present methods of trying out new employees, transferring them to simpler and simpler jobs as their inefficiency becomes apparent, is wasteful and to a great extent unnecessary. A cheaper and more satisfactory method would be to employ a psychologist to examine applicants for positions and to weed out the unfit. Any business employing as many as five hundred or a thousand workers, as, for example, a large department store, could save in this way several times the salary of a well-trained psychologist.

      I think this is interesting because they are saying that intelligence testing could be used to determine job positions. I agree that employing a psychologist to examine applicants for positions would be beneficial because the employer wouldn't have to worry about the things the psychologist would look for. I agree that using a psychologist to weed people out of employment decisions could be effective because many people apply, but employers only want certain people for a given job. I think this is relevant to the history of psychology because some companies use people to determine who is deemed fit for the company, and this is what they wanted to start doing so they could find the best employees for a particular job.

    3. Instead, there are many grades of intelligence, ranging from idiocy on the one hand to genius on the other.

      I think this is interesting because they had thought that, under the right conditions, children would be equally or almost equally capable of making satisfactory school progress, but they discovered that not all children are equal. There are different grades of intelligence because everyone is different. This is important to the history of psychology because it established the idea of grades of intelligence, ranging from idiocy to average to genius. We still use grades of intelligence today, but the categories have been renamed: what was called idiocy is now described as extremely low on the intelligence scale, and genius as very superior.

    4. The Uses of Intelligence Tests

      I think this is interesting because intelligence testing was used back then to try to understand how intelligence is measured. Today we still use many different types of intelligence tests, such as the Stanford-Binet Intelligence Scale and the IQ test. I thought the STAAR test was a way to measure intelligence, but when I looked it up, it states, "No, STAAR tests do not measure a student's intelligence the way they should" (Breuer, 2020). Intelligence testing can help diagnose intellectual disabilities or gauge someone's intellectual potential.

    1. Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3442188.3445922.

      Would the argument here for stochastic parrots also potentially apply to or could it be abstracted to Markov monkeys?

    1. A.I. Is Mastering Language. Should We Trust What It Says?<br /> by Steven Johnson, art by Nikita Iziev

      Johnson does a good job of looking at the basic state of artificial intelligence and the history of large language models and specifically ChatGPT and asks some interesting ethical questions, but in a way which may not prompt any actual change.


      When we write about technology and the benefits and wealth it might bring, do we do too much ethics washing, papering over the problems and letting the bad things come too easily to pass?

    2. We know from modern neuroscience that prediction is a core property of human intelligence. Perhaps the game of predict-the-next-word is what children unconsciously play when they are acquiring language themselves: listening to what initially seems to be a random stream of phonemes from the adults around them, gradually detecting patterns in that stream and testing those hypotheses by anticipating words as they are spoken. Perhaps that game is the initial scaffolding beneath all the complex forms of thinking that language makes possible.

      Is language acquisition a very complex method of pattern recognition?

    3. How do we make them ‘‘benefit humanity as a whole’’ when humanity itself can’t agree on basic facts, much less core ethics and civic values?
    4. Another way to widen the pool of stakeholders is for government regulators to get into the game, indirectly representing the will of a larger electorate through their interventions.

      This is certainly "a way", but history has shown, particularly in the United States, that government regulators are unlikely to get involved until it's far too late, if at all. Typically they regulate not just after an industry has matured, but only when massive failure may cause issues for the wealthy, and then the "regulation" is to bail them out.

      Suggesting this here is so pie-in-the sky that it only creates a false hope (hope washing?) for the powerless. Is this sort of hope washing a recurring part of

    5. OpenAI has not detailed in any concrete way who exactly will get to define what it means for A.I. to ‘‘benefit humanity as a whole.’’

      Who gets to make decisions?

    6. Whose values do we put through the A.G.I.? Who decides what it will do and not do? These will be some of the highest-stakes decisions that we’ve had to make collectively as a society.’’

      A similar set of questions might be asked of our political system. At present, the oligopolistic nature of our electoral system is heavily biasing our direction as a country.

      We're heavily underrepresented on a huge number of axes.

      How would we change our voting and representation systems to better represent us?

    7. Should we build an A.G.I. that loves the Proud Boys, the spam artists, the Russian troll farms, the QAnon fabulists?

      What features would we design society towards? Stability? Freedom? Wealth? Tolerance?

      How might long term evolution work for societies that maximized for tolerance given Popper's paradox of tolerance?

    8. Right before we left our lunch, Sam Altman quoted a saying of Ilya Sutskever’s: ‘‘One thing that Ilya says — which I always think sounds a little bit tech-utopian, but it sticks in your memory — is, ‘It’s very important that we build an A.G.I. that loves humanity.’ ’’
    1. the apocalypse they refer to is not some kind of sci-fi takeover like Skynet, or whatever those researchers thought had a 10 percent chance of happening. They’re not predicting sentient evil robots. Instead, they warn of a world where the use of AI in a zillion different ways will cause chaos by allowing automated misinformation, throwing people out of work, and giving vast power to virtually anyone who wants to abuse it. The sin of the companies developing AI pell-mell is that they’re recklessly disseminating this mighty force.

      Not Skynet, but social disruption

    1. ChatGPT — This is a free research preview. 🔬 Our goal is to get external feedback in order to improve our systems and make them safer. 🚨 While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.
  16. Feb 2023
    1. Sam Matla talks about the collector's fallacy in a negative light, and for many/most, he might be right. But for some, collecting examples and evidence of particular things is crucially important. The key is to have some idea of what you're collecting and why.

      Historians collecting small facts over time may seem this way, but out of their collection can emerge patterns which otherwise would never have been seen.

      cf: Keith Thomas article

      concrete examples of this to show the opposite?

      Relationship to the idea of AI coming up with black box solutions via their own method of diffuse thinking

    1. For years inventions have extended man's physical powers rather than the powers of his mind.
    1. Certainly, computerization might seem to resolve some of the limitations of systems like Deutsch’s, allowing for full-text search or multiple tagging of individual data points, but an exchange of cards for bits only changes the method of recording, leaving behind the reality that one must still determine what to catalogue, how to relate it to the whole, and the overarching system.

      Despite the affordances of recording, searching, tagging made by computerized note taking systems, the problem still remains what to search for or collect and how to relate the smaller parts to the whole.


      customer relationship management vs. personal knowledge management (or perhaps more important knowledge relationship management, the relationship between individual facts to the overall whole) suggested by autocomplete on "knowl..."

    2. One might then say that Deutsch’s index developed at the height of the pursuit of historical objectivity and constituted a tool of historical research not particularly innovative or limited to him alone, given that the use of notecards was encouraged by so many figures, and it crystallized a positivistic methodology on its way out.

      Can zettelkasten be used for methodologies other than positivistic ones?

    1. https://www.cyberneticforests.com/ai-images

      Critical Topics: AI Images is an undergraduate class delivered for Bradley University in Spring 2023. It is meant to provide an overview of the context of AI art making tools and connects media studies, new media art, and data ethics with current events and debates in AI and generative art. Students will learn to think critically about these tools by using them: understand what they are by making work that reflects the context and histories of the tools.

    1. Sloan, Robin. “Author’s Note.” Experimental fiction. Wordcraft Writers Workshop, November 2022. https://wordcraft-writers-workshop.appspot.com/stories/robin-sloan.

      brilliant!

    2. "I have affirmed the premise that the enemy can be so simple as a bundle of hate," said he. "What else? I have extinguished the light of a story utterly.

      How fitting that the amanuensis in a short story written with the help of artificial intelligence has done the opposite of what the author intended!

    1. Wordcraft Writers Workshop by Andy Coenen - PAIR, Daphne Ippolito - Brain Research Ann Yuan - PAIR, Sehmon Burnam - Magenta

      cross reference: ChatGPT

    2. LaMDA was not designed as a writing tool. LaMDA was explicitly trained to respond safely and sensibly to whomever it’s engaging with.
    3. LaMDA's safety features could also be limiting: Michelle Taransky found that "the software seemed very reluctant to generate people doing mean things". Models that generate toxic content are highly undesirable, but a literary world where no character is ever mean is unlikely to be interesting.
    4. A recurring theme in the authors’ feedback was that Wordcraft could not stick to a single narrative arc or writing direction.

      When does using an artificial intelligence-based writing tool make the writer an editor of the computer's output rather than the writer themself?

    5. If I were going to use an AI, I'd want to plug in and give massive priority to my commonplace book and personal notes, followed secondarily by the materials I've read, watched, and listened to.

    6. Several participants noted the occasionally surreal quality of Wordcraft's suggestions.

      Wordcraft's hallucinations can create interesting and creatively surreal suggestions.

      How might one dial up or down the ability to hallucinate or create surrealism within an artificial intelligence used for thinking, writing, etc.?

    7. Writers struggled with the fickle nature of the system. They often spent a great deal of time wading through Wordcraft's suggestions before finding anything interesting enough to be useful. Even when writers struck gold, it proved challenging to consistently reproduce the behavior. Not surprisingly, writers who had spent time studying the technical underpinnings of large language models or who had worked with them before were better able to get the tool to do what they wanted.

      Because one may need to spend an inordinate amount of time filtering through potentially bad suggestions of artificial intelligence, the time and energy spent keeping a commonplace book or zettelkasten may pay off magnificently in the long run.

    8. Many authors noted that generations tended to fall into clichés, especially when the system was confronted with scenarios less likely to be found in the model's training data. For example, Nelly Garcia noted the difficulty in writing about a lesbian romance — the model kept suggesting that she insert a male character or that she have the female protagonists talk about friendship. Yudhanjaya Wijeratne attempted to deviate from standard fantasy tropes (e.g. heroes as cartographers and builders, not warriors), but Wordcraft insisted on pushing the story toward the well-worn trope of a warrior hero fighting back enemy invaders.

      Examples of artificial intelligence pushing toward pre-existing biases based on training data sets.

    9. Wordcraft tended to produce only average writing.

      How to improve on this state of the art?

    10. “...it can be very useful for coming up with ideas out of thin air, essentially. All you need is a little bit of seed text, maybe some notes on a story you've been thinking about or random bits of inspiration and you can hit a button that gives you nearly infinite story ideas.”- Eugenia Triantafyllou

      Eugenia Triantafyllou is talking about crutches for creativity and inspiration, but seems to miss the value of collecting interesting tidbits along the road of life that one can use later. Instead, the emphasis here becomes one of relying on an artificial intelligence doing it for you at the "hit of a button". If this is the case, then why not just let the artificial intelligence do all the work for you?

      This is the area where the cultural loss of mnemonics used in orality or even the simple commonplace book will make us easier prey for (over-)reliance on technology.


      Is serendipity really serendipity if it's programmed for you?

    11. The authors agreed that the ability to conjure ideas "out of thin air" was one of the most compelling parts of co-writing with an AI model.

      Again note the reference to magic with respect to the artificial intelligence: "the ability to conjure ideas 'out of thin air'".

    12. Wordcraft shined the most as a brainstorming partner and source of inspiration. Writers found it particularly useful for coming up with novel ideas and elaborating on them. AI-powered creative tools seem particularly well suited to sparking creativity and addressing the dreaded writer's block.

      Just as using a text for writing generative annotations (having a conversation with a text) is a useful exercise for writers and thinkers, creative writers can stand to have similar textual creativity prompts.

      Compare Wordcraft affordances with tools like Nabokov's card index (zettelkasten) method, Twyla Tharp's boxes, MadLibs, cadavre exquis, et al.

      The key is to have some sort of creativity catalyst so that one isn't working in a vacuum or facing the dreaded blank page.

    13. We like to describe Wordcraft as a "magic text editor". It's a familiar web-based word processor, but under the hood it has a number of LaMDA-powered writing features that reveal themselves depending on the user's activity.

      The engineers behind Wordcraft refer to it as a "magic text editor". For many, this is a cop-out compared to a more concrete description of what is actually happening under the hood of the machine.

      It's also similar, though subtly different, to the idea of the "magic of note taking", by which writers are talking about the emergent creativity and combinatorial creativity which occur in that space.

    14. The application is powered by LaMDA, one of the latest generation of large language models. At its core, LaMDA is a simple machine — it's trained to predict the most likely next word given a textual prompt. But because the model is so large and has been trained on a massive amount of text, it's able to learn higher-level concepts.

      Is LaMDA really able to "learn higher-level concepts", or is it just a large, straightforward information-theoretic prediction engine?
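
      Whatever one concludes about "higher-level concepts", the training objective the quote describes is easy to sketch. Below is a deliberately toy bigram model (the corpus and function name are invented for illustration, and this is in no way LaMDA itself) that does nothing but predict the most likely next word given the previous one:

      ```python
      from collections import Counter, defaultdict

      # Toy corpus; real models train on vastly more text, over tokens.
      corpus = "the cat sat on the mat and the cat slept on the mat".split()

      # Count which word follows which in the corpus.
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def predict_next(word):
          """Return the most frequent successor of `word`, or None if unseen."""
          counts = follows[word]
          return counts.most_common(1)[0][0] if counts else None

      print(predict_next("on"))  # → "the"
      ```

      Scaled up by many orders of magnitude and applied to token sequences rather than adjacent word pairs, this same predict-the-next objective is what large language models optimize; the open question in the annotation above is whether concept learning genuinely emerges from it.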

    15. Our team at Google Research built Wordcraft, an AI-powered text editor centered on story writing, to see how far we could push the limits of this technology.
    1. https://pair.withgoogle.com/

      People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI by doing fundamental research, building tools, creating design frameworks, and working with diverse communities.

    1. Author's note by Robin Sloan<br /> November 2022

    2. I have to report that the AI did not make a useful or pleasant writing partner. Even a state-of-the-art language model cannot presently “understand” what a fiction writer is trying to accomplish in an evolving draft. That’s not unreasonable; often, the writer doesn’t know exactly what they’re trying to accom­plish! Often, they are writing to find out.