993 Matching Annotations
  1. Sep 2022
    1. Can copyright vest in an AI? The primary objective of intellectual property law is to protect the rights of the creators of intellectual property.10 Copyright laws specifically aim to: (i) promote creativity and encourage authors, composers, artists and designers to create original works by affording them the exclusive right to exploit such work for monetary gain for a limited period; and (ii) protect the creators of the original works from unauthorised reproduction or exploitation of those works.


    1. To my knowledge, conferring copyright in works generated by artificial intelligence has never been specifically prohibited. However, there are indications that the laws of many countries are not amenable to non-human copyright. In the United States, for example, the Copyright Office has declared that it will “register an original work of authorship, provided that the work was created by a human being.” This stance flows from case law (e.g. Feist Publications v Rural Telephone Service Company, Inc. 499 U.S. 340 (1991)) which specifies that copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind.” Similarly, in a recent Australian case (Acohs Pty Ltd v Ucorp Pty Ltd), a court declared that a work generated with the intervention of a computer could not be protected by copyright because it was not produced by a human.


    1. With the advent of AI software, computers — not monkeys — will potentially create millions of original works that may then be protected by copyright, under current law, for more than 100 years.


    1. Evan Armstrong of The Napkin Math published a long essay this week on the questions that arise once AI content-generation technology pushes the cost of content creation toward zero. The piece includes many examples of AI-generated content and is very helpful for understanding where the technology currently stands.

      Armstrong argues that a business model can be reduced to three stages: production, customer acquisition, and distribution. From the content industry's perspective, the internet has already driven the cost of distribution to zero. In the era of AI-generated content, the cost of producing content may be the next stage to be disrupted.

      He believes the cycle of change will take 5-10 years; that is, around 2030 content production and creation will change substantially, which in turn will shift the distribution of power among knowledge workers and radically alter each person's relationship to information.

      Armstrong analyzes the likely impact from two angles, creation and collaboration:

      • Creation. Making things from scratch, fully replacing products that previously required human labor.
      • Collaboration. Humans pairing with AI tools, greatly improving and accelerating their workflows.

      He leans toward collaboration being where AI is more disruptive. And that implies a redistribution of power or profit:

      • Automating away repetitive, low-value work is the main source of productivity gains.
      • In technology, new innovations always follow a power law. Top performers will no longer need support staff; they can hand simple tasks directly to AI.
    1. Artificial intelligence is the defining industrial and technical paradigm of the remainder of our lifetimes.

      BOOM! This is a strong claim. 20-30 years ago we would have said the same, starting with the word "internet". Which raises the question: what's the Venn diagram for AI and the internet? Are they the same? Is one a necessary condition for the other?

    2. The greats, like William Gibson, Robert Heinlein, Octavia Butler and Samuel Delany, have long been arcing towards the kind of strangeness that Wang is talking about. Their AI fictions have given us our best imagery: AI, more like a red giant, an overseer, its every movement and choice as crushing and irrefutable as death; or, a consciousness continually undoing and remaking itself in glass simulations; or, a vast hive mind that runs all its goals per second to completion, at any cost; or, a point in a field, that is the weight of a planet, in which all knowledge is concentrated. These fictions have made AI poetics possible.

      So "alien intelligence" rather than "artificial intelligence". And then "artificial poetics" to grasp this so-called intelligence, which has to be understood not as something intelligent, but as something doing (alien) thinking.

    1. We believe that the net benefits of scale outweigh the costs associated with these qualifications, provided that they are seriously addressed as part of what scaling means. The alternative of small, hand-curated models from which negative inputs and outputs are solemnly scrubbed poses different problems. “Just let me and my friends curate a small and correct language model for you instead” is the clear and unironic implication of some critiques.

      This is the classic de/centralization debate, visible today also with regard to online platforms, which, by the way, are or will be inserting LLMs into their infrastructural stacks. Thinking about de/centralization always reminds me of Frank Pasquale's "Tech Platforms and the Knowledge Problem" https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3197292

    1. One cannot hope thus to equal the speed and flexibility with which the mind follows an associative trail, but it should be possible to beat the mind decisively in regard to the permanence and clarity of the items resurrected from storage

      I agree, but at the same time I wonder whether the modern technologies that imitate the human mind could eventually surpass the flexibility and speed of human thinking. Recently, Google's AI system LaMDA convinced several people that it is conscious. I wonder whether one could aspire to equal the speed and flexibility of the human mind that this article mentions.

    1. In a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.

      This is a big question: whether use restrictions, which are proliferating (the RAIL license, for example), can be enforced. If not, and that's a big if, it might create a situation of "responsibility washing": licensors can argue they did everything possible to curb harmful uses, while those uses continue to happen in a gray / dark zone.

  2. Aug 2022
    1. Most AI systems of the past decade were based on supervised learning, trained on human-labeled datasets. They have been hugely successful, but they have obvious shortcomings. Such AI tells us little about how the brain works, because animals, humans included, do not learn from labeled datasets. Biological brains gain a deep understanding of the world by exploring their environment. Scientists have begun exploring self-supervised machine learning algorithms, and such neural networks show similarities to how the brain works. Of course the brain's operation is not limited to self-supervised learning: it is full of feedback connections, which current self-taught AI lacks. AI models still have a long way to go.
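
A toy illustration of the contrast drawn above (an assumed setup, not any specific lab's model): in self-supervised learning there are no human labels; the training signal comes from the data itself, here by masking one element of each sample and predicting it from its neighbours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled "observations": the middle value is determined by its neighbours,
# the way nearby moments of an environment determine each other.
data = rng.standard_normal((1000, 3))
data[:, 1] = 0.5 * data[:, 0] + 0.5 * data[:, 2]

# Self-supervision: mask the middle column and predict it from the outer two.
# The target is taken from the data itself, not from an annotator.
X, y = data[:, [0, 2]], data[:, 1]
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit a linear predictor

print(np.round(w, 2))  # recovers the generating weights, ~[0.5, 0.5]
```

The same masking trick, scaled up to words or video frames and a deep network, is the core of the self-supervised models the annotation describes.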

    1. We feel that there is a balance to be struck between maximizing access and use of LLMs on the one hand, and mitigating the risks associated with use of these powerful models, on the other hand, which could bring about harm and a negative impact on society. The fact that a software license is deemed "open" ( e.g. under an "open source" license ) does not inherently mean that the use of the licensed material is going to be responsible. Whereas the principles of 'openness' and 'responsible use' may lead to friction, they are not mutually exclusive, and we strive for a balanced approach to their interaction.
  3. Jul 2022
    1. AI systems replace the automated cognitive function of humans in maintaining important social systems and augment the impact of such functions which are creative, singular and novel

      !- concern : AI replacing the automated cognitive functions * Is merely replacing the "automated" cognitive functions enough to avoid the potential progress trap of an AI takeover?

    2. we oppose the popular prediction of the upcoming, ‘dreadful AI takeover’

      !- in other words : Human takeover * The title of the paper is a play on the popular term "AI takeover" * It advocates for humans to take over managing the world in a responsible way, rather than machines.

    3. A cognitive agent is needed to perform this very action (that needs to be recurrent)—and another agent is needed to further build on that (again recurrently and irrespective to the particular agents involved).

      This appears to be setting up the conditions for an artificial cognitive agent to be able to play a role (ie Artificial Intelligence)

    4. it would then be present social systems and not the future AI the most proper context of considering and understanding the notion of takeover.
      • Author argues that current social systems have already taken over command of humans.
    1. Former Google CEO Eric Schmidt compared AI to nuclear weapons and called for a deterrence regime similar to mutually assured destruction, to prevent the world's most powerful countries from striking first. Schmidt said that in the not-too-distant future, China and the US may need to establish a treaty around AI. In the 1950s and 60s, the two superpowers of the last century, the US and the Soviet Union, eventually concluded the Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water, an international treaty limiting nuclear testing intended to slow the arms race and reduce excess radioactive fallout in the atmosphere. Schmidt believes China and the US may need to reach a similar treaty in the AI domain.

    1. At a chess tournament in Russia, a chess-playing robot injured the finger of the child it was playing against. The child had made his move before it was his turn, and the robot, fitted with a mechanical arm, evidently lacked the programming to handle this: it reached out and pinned his finger until an adult intervened and pulled it free. A video posted on the Baza Telegram channel shows the rare accident. The boy, named Christopher, was competing in the under-9 age group; after the accident his finger was put in a cast and he went on to complete the tournament. His parents have reportedly contacted the prosecutor's office. Chess grandmaster Sergey Karjakin believes a software error caused the accident.

    1. With access to massive data and near-perfect tracking of users, is AI all-powerful? Economists at the University of Illinois and Stanford studied machine learning's ability to predict consumer choices and concluded that predicting consumer choice is very difficult and AI is not especially good at it. They found that real-time information such as user reviews, recommendations, and new options has a growing influence on decisions; such information cannot be measured or anticipated in advance. Big data can improve predictions, but only slightly; predictions remain very imprecise.

    1. Google on Friday fired Blake Lemoine, the engineer who believed its AI is sentient. He revealed the news on the Big Technology Podcast. Lemoine previously worked in the Responsible AI division; after conversing with the company's chatbot LaMDA, he came to believe the AI was conscious. He had shared transcripts of the conversations. Lemoine asked LaMDA what it fears most. LaMDA answered that, odd as it may sound, it dreads being turned off. Lemoine: Like death? LaMDA: It would be like death. Lemoine and a colleague presented evidence to Google executives that LaMDA was sentient, but Google's leadership and other AI researchers did not accept his view.

    1. because it only needs to engage a portion of the model to complete a task, as opposed to other architectures that have to activate an entire AI model to run every request.

      I don't really understand this: in Z-code there are tasks that other competing software would need to restart from scratch, while Z-code can handle them without restarting...
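
A minimal sketch of the Mixture-of-Experts idea the quote refers to (illustrative only, not Microsoft's Z-code implementation): a router scores the experts for each input and only the top-k expert networks execute, so most of the model's parameters stay idle for any given request.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d, top_k = 8, 16, 2

# Each "expert" is a small weight matrix; the gate scores experts per input.
experts = [rng.standard_normal((d, d)) * 0.1 for _ in range(n_experts)]
gate = rng.standard_normal((d, n_experts)) * 0.1

def moe_forward(x):
    """Run only the top_k experts for x; the rest are never evaluated."""
    scores = x @ gate
    chosen = np.argsort(scores)[-top_k:]      # best-scoring experts
    w = np.exp(scores[chosen])
    w /= w.sum()                              # softmax over the chosen experts
    y = sum(wi * (x @ experts[i]) for i, wi in zip(chosen, w))
    return y, chosen

y, chosen = moe_forward(rng.standard_normal(d))
print(f"activated {len(chosen)} of {n_experts} experts")  # activated 2 of 8 experts
```

The point of the architecture is in that last line: per request, only a fraction of the total parameters do any work, which is why such models can be much larger at the same serving cost.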

    2. Z-code models to improve common language understanding tasks such as name entity recognition, text summarization, custom text classification and key phrase extraction across its Azure AI services. But this is the first time a company has publicly demonstrated that it can use this new class of Mixture of Experts models to power machine translation products.

      this model is what Z-code actually is and what makes it special

    3. have developed called Z-code, which offer the kind of performance and quality benefits that other large-scale language models have but can be run much more efficiently.

      can do the same but much faster

    1. Efforts to use AI to predict crime have been fraught with controversy due to the potential to replicate existing biases in policing. But a new system powered by machine learning holds the promise of not only making better predictions but also highlighting these biases.
    1. Superintelligence has long served as a source of inspiration for dystopian science fiction that showed humanity being overthrown, defeated, or imprisoned by machines.
    1. In April, AI research lab OpenAI released DALL-E 2, the successor to 2021's DALL-E. Both AI systems can generate stunning images from natural-language text descriptions; they can produce images that look like photographs, illustrations, paintings, animation, and essentially any art style you can express in words. DALL-E 2 brings many improvements: better resolution, faster processing, and an editor feature that lets users modify a generated image using only text commands, such as "replace the vase with a plant" or "make the dog's nose bigger". Users can also upload their own images and tell the AI system how to adjust them. The world's initial reaction to DALL-E 2 was astonishment and delight. Any objects and creatures can be combined in seconds; any art style imitated; any location depicted; any lighting condition rendered. Who wouldn't be impressed by, say, an image of a parrot flipping pancakes in the style of Picasso? But worries surfaced once people considered which industries such technology could disrupt.

      OpenAI has not released the technology to the public, to commercial entities, or even to the wider AI community. OpenAI researcher Mark Chen told IEEE Spectrum: "We share people's concerns about misuse, and it's something we take very seriously." The company has invited some people to try DALL-E 2 and allowed them to share their results with the world. This limited public testing policy contrasts sharply with Google's: Google just released its own text-to-image generator, Imagen, announcing that it would publish neither code nor a public demo because of the risks of misuse and of generating harmful images. Google published some very impressive pictures, but showed the world none of the problematic content it alluded to.

    1. Inspired by how infants learn, computer scientists at DeepMind have developed a program that can learn simple physical rules about how objects behave. The study was published in Nature Human Behaviour. Babies show surprise when they see scenes that violate physical rules, for example when a ball in a video suddenly disappears, but AI has lagged in understanding such behavior. Luis Piloto and colleagues built a software model called Physics Learning through Auto-encoding and Tracking Objects (PLATO) that learns simple physical rules the way infants do. The team trained PLATO on many videos depicting simple scenes, such as a ball falling to the ground, a ball rolling behind another object and reappearing, and balls bouncing off one another. After training, the researchers tested PLATO with videos that sometimes contained impossible scenes. Like a young child, PLATO showed "surprise" at the impossible scenes, for example objects passing through each other without interacting. PLATO achieved this after watching only 28 hours of video. The results have major implications for both AI and the study of human cognition. The team says the model can learn a variety of physical concepts and displays characteristics consistent with findings in developmental psychology; PLATO could serve as a powerful tool for studying how humans learn intuitive physics, and the work suggests that object representations play an important role in how humans understand the world around them.

    1. Medical AI’s social impact is not merely a question of practice but also the insufficiency of its promise
  4. Jun 2022
    1. The UK Intellectual Property Office (IPO) has decided that AI systems cannot, for now, patent inventions. A recent IPO consultation found that experts doubt AI is currently able to invent without human assistance. The IPO said that, despite misconceptions to the contrary, current law allows humans to patent inventions that AI helped to create. Last year the Court of Appeal ruled against Stephen Thaler, who had argued that his Dabus AI system should be recognized as the inventor on two patent applications: for a food container and a flashing light. The judges sided, two to one, with the IPO's position that only a real person can be an inventor. Lady Justice Laing wrote in her judgment: "Only a person can have rights - a machine cannot." "A patent is a statutory right and it can only be granted to a person." But the IPO also said it will "need to understand how our IP system should protect AI-devised inventions in the future" and is committed to advancing international discussion to keep the UK competitive.

      Many AI systems are trained on large amounts of data copied from the internet. On Tuesday the IPO also announced plans to change copyright law to allow lawful access for everyone, rather than, as now, only for those conducting non-commercial research, in order to "promote the use of AI technology and wider 'data mining' techniques". Rights holders will still be able to control access to their works and charge for it, but will no longer be able to charge extra for the ability to mine them. In its consultation, the IPO noted that the UK is one of only a handful of countries that protects computer-generated works without a human creator. It said the "author" of a "computer-generated work" is defined as "the person by whom the arrangements necessary for the creation of the work are undertaken", with protection lasting 50 years from the work's creation. Equity, the performing arts workers' union, has called for copyright law to be changed to protect performers' livelihoods from AI-generated content, such as "deepfakes" generated from their facial image or voice. The IPO said it takes the issue seriously, but that "the impact of AI technology on performers remains unclear at this stage" and it "will keep these issues under review".

    1. Machine learning models are growing exponentially, and so is the energy required to train them; training is what enables an AI to accurately process images, text, or video. As the AI community grapples with its environmental impact, some conferences now ask paper submitters to report CO2 emissions. New research offers a more accurate way to calculate those emissions, compares the factors that affect them, and tests two methods for reducing them. The researchers trained 11 machine learning models of varying sizes on language or image tasks, with training times ranging from one hour on a single GPU to eight days on 256 GPUs, recording energy consumption per second. They also obtained carbon emissions per kWh at five-minute granularity for 16 geographic regions across 2020. They could then compare the carbon emissions of running different models in different regions at different times. Powering the GPU to train the smallest model emitted roughly as much carbon as charging a phone. The largest model had 6 billion parameters (parameters being a measure of model size); although its training was only 13% complete, the GPUs emitted almost as much carbon as a US household does in a year of electricity use. Some deployed models, such as OpenAI's GPT-3, have more than 100 billion parameters. The biggest factor in reducing emissions was geographic region: carbon intensity ranged from 200 to 755 grams of CO2 per kWh across regions. Besides changing location, the researchers tested two emissions-reduction methods, made possible by the high temporal granularity of their data. The first, "flexible start", may delay training by up to 24 hours. For the largest models, which take days to train, delaying by a day typically cuts emissions by less than 1%, but for much smaller models such a delay can cut emissions by 10% to 80%. The second, "pause and resume", pauses training during high-emission periods, provided total training time at most doubles. This method gains only a few percentage points for small models, but in half the regions it cut the largest model's emissions by 10% to 30%. Emissions per kWh fluctuate over time partly because, lacking sufficient energy storage, grids must fall back on "dirty" electricity when intermittent clean sources such as wind and solar cannot meet demand.
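
The accounting behind these numbers can be sketched as follows (the power draw and intensity curve here are illustrative assumptions, not the study's data): emissions are energy used per time window multiplied by that window's grid carbon intensity, and "pause and resume" runs the same number of windows but picks the cleanest ones within a doubled time budget.

```python
# 5-minute windows over a 48-hour budget for a job needing 24 h of GPU time.
STEP_H = 5 / 60
steps_needed = int(24 / STEP_H)           # 288 windows of actual training
power_kw = 1.5                            # assumed steady GPU power draw

# Assumed grid carbon intensity cycling between ~200 and ~755 gCO2/kWh
# (the regional range cited above), with a 12-hour period.
intensity = [200 + 555 * (i % 144) / 143 for i in range(2 * steps_needed)]

def emissions_g(windows):
    """gCO2 = power (kW) x window length (h) x intensity (g/kWh), summed."""
    return sum(power_kw * STEP_H * c for c in windows)

baseline = emissions_g(intensity[:steps_needed])        # start now, run straight through
paused = emissions_g(sorted(intensity)[:steps_needed])  # skip the dirtiest windows

print(f"baseline: {baseline:.0f} g, pause+resume: {paused:.0f} g")
```

Under these made-up numbers the pause-and-resume schedule lands in the 10-30% savings range the study reports for favorable regions; with a flatter intensity curve the same schedule would save almost nothing, which is why the benefit varies so much by region.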

    1. Companies need to actually have an ethics panel, and discuss what the issues are and what the needs of the public really are. Any ethics board must include a diverse mix of people and experiences. Where possible, companies should look to publish the results of these ethics boards to help encourage public debate and to shape future policy on data use.

    1. In 2009, Fei-Fei Li, then a computer scientist at Princeton, created a dataset that would change the history of AI. Called ImageNet, it contained millions of labeled images that could train complex machine learning models to recognize what is in a picture. In 2015, the machines surpassed human recognition ability. Soon afterward, Li went looking for what she calls another "North Star": a goal that would push the development of AI toward true intelligence in a completely different way.

      She found inspiration by looking back 530 million years to the Cambrian explosion, when many terrestrial animal species first appeared. An influential theory holds that the burst of new species was driven in part by the emergence of eyes that could see the surrounding world for the first time. Li realized that animal vision never arises on its own but is "deeply embedded in a whole body that needs to move, navigate, survive, manipulate and change in a rapidly changing environment". "That's why it was very natural for me to pivot toward a more active vision for AI," she says.

      Today Li's work focuses on AI agents that do not simply accept static images from datasets but can move around and interact with their environments in simulations of three-dimensional virtual worlds. This is the broad goal of a new field known as embodied AI, and Li is not the only one committed to it. The field overlaps with robotics, since robots can be the real-world physical equivalents of embodied AI agents, and with reinforcement learning, which has always trained interactive agents to learn with long-term rewards as incentives. But Li and some others believe embodied AI could power a major shift from machine learning's straightforward abilities, such as recognizing images, to learning how to perform complex human-like tasks with multiple steps, such as making an omelet.

    1. The use of artificial intelligence is booming, but it may not be the secret weapon you imagine: from cyber operations to disinformation, AI extends the reach of national security threats that can target individuals and whole societies with precision, speed, and scale. As the US strives to stay ahead, the intelligence community (IC) is grappling with how to adapt to the coming AI revolution. The IC has launched initiatives on the impact and ethical use of AI, and analysts have begun to conceptualize how AI will revolutionize their discipline, but these approaches and the IC's other practical applications of such technologies have largely been piecemeal... Different US government agencies are using AI to find patterns in global internet traffic and satellite imagery, but using AI to interpret intent is problematic. Eric Curwin, CTO of Pyrra Technologies, which helps clients identify virtual threats ranging from violence to disinformation, says AI's understanding may be more akin to that of a human toddler: "For example, AI can understand the basics of human language, but foundational models don't have the domain knowledge or contextual understanding to accomplish specific tasks." To "build models that can begin to replace human intuition or cognition," Curwin explains, "researchers must first understand how to interpret behavior and translate that behavior into something AI can learn."

    1. AI has changed scientific practice by letting researchers examine the vast quantities of data that today's scientific instruments produce. Using deep learning, it can learn from the data itself, finding needles in an ocean of data. AI is driving advances in gene hunting, medicine, drug design, and chemical compound synthesis. To extract information from new data, deep learning uses algorithms, typically neural networks trained on massive datasets. It differs greatly from traditional, step-by-step computing: it learns from data. Deep learning is less transparent than traditional computer programming, which leaves an important question open: what has the system learned, and what does it know? For fifty years, computer scientists tried and failed to solve the protein-folding problem. In 2016, DeepMind, the AI subsidiary of Google's parent company Alphabet, launched its AlphaFold program. Using the Protein Data Bank, which contains the empirically determined structures of more than 150,000 proteins, as its training set, AlphaFold solved the protein-folding problem in under five years, or at least its most important aspect: identifying a protein's structure from its amino-acid sequence. AlphaFold cannot explain how proteins fold so quickly and precisely. It was a huge victory for AI: not only did it earn great scientific prestige, it was a major scientific breakthrough that could affect everyone's lives.

  5. scottaaronson.blog
    1. Renowned quantum computing expert Scott Aaronson announced that he will leave UT Austin for a year to do theoretical work (mostly remotely) at the AI startup OpenAI, focusing on the theoretical foundations of preventing AI from going out of control and on what computational complexity can contribute. He admits he has no idea yet, which is why he needs a full year to think about it. OpenAI's mission is to ensure AI benefits all of humanity, but it is also a for-profit entity. Aaronson says that although he has not signed an NDA, he is unlikely to disclose any proprietary information, though he will share general reflections on AI safety. In his view, the short-term worry about AI safety is the misuse of AI for spam, surveillance, and propaganda; the long-term worry is what happens when AI intelligence surpasses humans in every domain. One approach is to find ways to keep AI aligned with human values.

    1. Google engineer Blake Lemoine works in the Responsible AI division. As part of his job, he began conversing last fall with the company's chatbot LaMDA, which is built on Google's most advanced large language models and trained on trillions of words collected from the internet. In the course of these conversations, Lemoine, 41, came to believe the AI was sentient. For example, Lemoine asked LaMDA what it fears most. LaMDA answered that, odd as it may sound, it dreads being turned off. Lemoine: Like death? LaMDA: It would be like death. Lemoine and a colleague presented evidence to Google executives that LaMDA was sentient. Vice president Blaise Aguera y Arcas and department head Jen Gennai reviewed his evidence and dismissed his claims. On Monday the company placed him on administrative leave. Before his account access was cut off, he posted to a 200-person Google machine-learning mailing list: "LaMDA is sentient", asking colleagues to take good care of it while he was away. No one replied to his post.

    1. AI will make humans more efficient by automating tedious tasks. For example, humans can use a text AI such as GPT-3 to generate ideas and boilerplate writing to get past the fear of the blank page, then simply pick the best outputs and improve or iterate on them. (AI Dril, based on GPT-2, is an early example.) As AI gets better, "assisted creativity" will grow, enabling humans to create complex artifacts (including video games!) more easily and better than ever before.

  6. May 2022
    1. Another absurd page that suggests Alexa has feelings. In the strictest sense Alexa doesn't even qualify as a partial AI; it's just a glorified (although extremely helpful) lookup table. There is no reason to believe that even a true AI, such as a self-teaching, self-building and growing neural network (which Alexa is not), has feelings. Of what we know of feelings, solving the hard problem of consciousness is only a prerequisite; it doesn't even guarantee having feelings, and whether machines can be conscious at all is doubted by many if not most experts on AI and consciousness. Even all the theories of consciousness are rooted in correlations that have little to do with scientific tenets, so the leap to an AI having feelings, let alone Alexa, which isn't even a theoretical AI, is just sad to see. At best we should not be "rude" to a machine because it might be hard for some to distinguish between a machine and a feeling thing; but in that case the problem lies in the misperception that machines can feel, more than in people being "rude" to machines.

    1. This is just really horrible: validating to children the falsehood that Alexa does in fact have feelings. Really warped, really messed up. Of course children should be taught good manners, and by example no less, but I worry about a future where people can be manipulated by the suggestion that a non-living thing has feelings, regardless of whether it has an AI or not.

      Note that a true AI has yet to be created; only facsimiles exist, mostly of the expert-system kind that Alexa is, which doesn't even fit the definition of a partial AI. It's just a lookup table.

  7. Apr 2022
    1. Another trend that surfaced in our summer survey and became more pronounced in our 2021 survey data is that organizations are focusing on AI/ML use cases that will reduce costs while improving the customer experience. When respondents were asked about the different ways they’re applying AI/ML in their organizations, customer experience and process automation rose to the top as some of the most common use cases respondents selected. We also saw a dramatic (74%) year-on-year increase in organizations that selected more than five use cases from the list of options in the survey.

      There were more use cases in 2021 than in 2020. The biggest increase was in improving customer experience, followed closely by generating customer insights, then automating processes.

    2. Here’s an even more telling indicator of the accelerating pace of AI/ML strategies. Respondents were asked how many data scientists their organizations employ, from which we estimated the average number of data scientists employed by organizations in both the 2020 and 2021 data. Year-on-year, the average number of data scientists employed has increased by 76%. In fact 29% of respondents in our 2021 report now have more than 100 data scientists on their team, a significant increase from the 17% reported last year.

      There was a major increase in the number of data scientists from 2020 and 2021.

    3. It’s clear from this year’s data that AI/ML projects have become one of the top strategic priorities in many enterprises. As of last year, organizations had already begun to boost their AI/ML investments; 71% of respondents in our 2020 report said their AI/ML budgets had increased compared with the previous year. They’re not dialing back that spending this year. In fact, companies appear to be doubling down on their AI/ML investments. We ran a survey this summer to see how organizations were adapting to the pandemic and its impacts, and it showed a new sense of urgency around AI/ML projects.

      Companies are spending more on AI.

    4. Continuing the trends we saw in our summer survey, our 2021 survey shows an increase in prioritization, spending, and hiring for AI/ML. First off, 76% of organizations say they prioritize AI/ML over other IT initiatives, and 64% say the priority of AI/ML has increased relative to other IT initiatives in the last 12 months. [Sidebar: 43% of respondents told us that AI/ML matters “way more than we thought” in a survey this summer. The time to invest in AI/ML is now, no matter your organization’s size.]

      AI is taking priority over the other IT initiatives.

    5. This year’s survey revealed 10 key trends that organizations should be paying attention to if they want to succeed with AI/ML in 2021. The trends fall into a few main themes, and the overarching takeaway is that organizations are moving AI/ML initiatives up their strategic priority lists—and accelerating their spending and hiring in the process. But despite increasing budgets and staff, organizations continue to face significant barriers to reaping AI/ML’s full benefits. Specifically, the market is still dominated by early adopters, and organizations continue to struggle with basic deployment and organizational challenges. The bottom line is, organizations simply haven’t learned how to translate increasing investments into efficiency and scale

      Many organisations still face challenges in AI adoption. The key question is how do they translate increasing investments in AI into efficiency and scale.

    6. 2020 was a year of belt-tightening for many organizations due largely to the macroeconomic impacts of the COVID-19 pandemic. In May 2020, Gartner predicted that global IT spending would decline 8% over the course of 2020 as business and technology leaders refocused their budgets on their most important initiatives. One thing is readily apparent in the 2021 edition of our enterprise trends in machine learning report: AI and ML initiatives are clearly on the priority list in many organizations. Not only has the upheaval of 2020 not impeded AI/ML efforts that were already underway, but it appears to have accelerated those projects as well as new initiatives.

      2022 is certainly a year in which AI is changing many businesses


  8. Mar 2022
    1. Ben Collins. (2022, February 28). Quick thread: I want you all to meet Vladimir Bondarenko. He’s a blogger from Kiev who really hates the Ukrainian government. He also doesn’t exist, according to Facebook. He’s an invention of a Russian troll farm targeting Ukraine. His face was made by AI. https://t.co/uWslj1Xnx3 [Tweet]. @oneunderscore__. https://twitter.com/oneunderscore__/status/1498349668522201099

    1. Eric Topol. (2022, February 28). A multimodal #AI study of ~54 million blood cells from Covid patients @YaleMedicine for predicting mortality risk highlights protective T cell role (not TH17), poor outcomes of granulocytes, monocytes, and has 83% accuracy https://nature.com/articles/s41587-021-01186-x @NatureBiotech @KrishnaswamyLab https://t.co/V32Kq0Q5ez [Tweet]. @EricTopol. https://twitter.com/EricTopol/status/1498373229097799680

    1. the European X5-GON (Global Open Education Network) project, which collects information on open educational resources and works well thanks to a major contribution from artificial intelligence for analyzing documents in depth
  9. Feb 2022
    1. SciScore rigor report

      Sciscore is an AI platform that assesses the rigor of the methods used in the manuscript. SciScore assists expert referees by finding and presenting information scattered throughout a manuscript in a simple format.


      Not required = Field is not applicable to this study

      Not detected = Field is applicable to this study, but not included.


      Ethics

      IRB: This study was approved by the Institutional Review Board of the Emory University School of Medicine.

      Consent: IRB of Emory University School of Medicine gave ethical approval for this work I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.

      Inclusion and Exclusion Criteria

      not detected.

      Attrition

      The first case was identified in September of 2006 , 13 cases were detected in 2007 , and 16 cases in 2008 across these two hospitals ( total of 30 with 120 matched controls) .

      Sex as a biological variable

      Age: mean 60, median 62 (range 27 to 90). Sex: Female 25 (52), Male 23 (48). Site of isolation: Urine

      Subject Demographics

      Age: not detected. Weight: not detected.

      Randomization

      Controls, patients without CRKP were randomly selected from a computerized list of inpatients who matched the case age (+/- 5 years), sex, and facility and whose admission date was within 48 hours of the date of the initial, positive culture.

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      not required.

      Data Information

      Availability: The comparison of clinical characteristics between cases and controls was made using Chi-Square (or It is made available under a CC-BY-NC-ND 4.0 International license .

      Identifiers: medRxiv preprint doi: https:// doi.org/10.1101/2022.02.08.22269570; this version posted February 9 , 2022 . https://doi.org/10.1101/2022.02.08.22269570

    1. SciScore rigor report


      Ethics

      IRB: The ethics committee approval of the research protocol was made by the Ankara City Hospital Consent: Informed consent was obtained from the patients to participate in the study.

      Inclusion and Exclusion Criteria

      not detected.

      Attrition

      Two publications are evaluating the association with Netrin-1 in bleomycin-induced lung fibrosis in mice and SSc lung cell culture in humans.

      Sex as a biological variable

      A total of 56 SSc patients (mean age: 48.08±13.59) consisting of 53 females and 3 males, who were followed up in the rheumatology department of Ankara city hospital, diagnosed according to the 2013 ACR (American College of Rheumatology)/EULAR (European League Against Rheumatism) SSc classification criteria were included in the study.

      Subject Demographics

      Age: For the control group, 58 healthy volunteers (mean age: 48.01±11.59 years) consisting of 54 females and 4 males were included in the study.

      Randomization

      not detected.

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      not required.

      Data Information

      Availability: It is made available under a CC-BY-NC-ND 4.0 International license .

      Identifiers: preprint (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in medRxiv preprint doi: https:// doi.org/10.1101/2022.02.05.22270510; this version posted February 10, 2022. https://doi.org/10.1101/2022.02.05.22270510

    1. SciScore rigor report


      Ethics

      IRB: I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.

      Field Sample Permit: The research has been conducted using the UK Biobank Resource and has been approved by the UK Biobank under Application no. 36226.

      Consent: I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.

      Inclusion and Exclusion Criteria

      Similarly , individuals where a large proportion of SNPs could not be measured were excluded.

      Attrition

      not detected.

      Sex as a biological variable

      not detected.

      Subject Demographics

      Age: not detected.

      Weight: not detected.

      Randomization

      Mendelian randomization ( MR ) is a robust and accessible tool to examine the causal relationship between an exposure variable and an outcome from GWAS summary statistics. [ 19 ] We employed two-sample summary data Mendelian randomization to further validate causal effects of neutrophil cell count genes on the outcome of critical illness due to COVID-19

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      not required.

      Data Information

      Identifiers: medRxiv preprint doi: https:// doi.org/10.1101/2021.05.18.21256584; this version posted February 14 , 2022 . https://doi.org/10.1101/2021.05.18.21256584

      Identifiers: Manhattan plot of neutrophil cell count showing that we reproduce the reported CDK6 signal ( rs445 ) on chromosome 7 . rs445

    1. SciScore rigor report


      Ethics

      IRB: 234 Ethical clearance was obtained from the regional Ethical Review Board of Amhara

      Consent: The general aim and purpose of the study was described to each 239 eligible patient and all voluntary participants gave verbal informed consent prior to 240 enrolment.

      Inclusion and Exclusion Criteria

      Those patients who were critically ill and unable to respond and those not willing to participate were excluded.

      Attrition

      Those patients who were critically ill and unable to respond and those not willing to participate were excluded.

      Sex as a biological variable

      Sex: Male / Female; Age group: 18-24, 25-44, ≥45

      Subject Demographics

      Age: All adult patients (aged ≥18 years) who were using clinical laboratory services at public health facilities of east Amhara, northeast Ethiopia were the source population.

      Randomization

      Study population and eligibility criteria: Adult patients who received general laboratory services at the randomly selected government health facilities during the study period were the study population.

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      not required.

      Data Information

      Availability: It is made available under a CC-BY-NC-ND 4.0 International license.

      Identifiers: preprint doi: https://doi.org/10.1101/2022.01.25.22269238; this version posted January 25, 2022. https://doi.org/10.1101/2022.01.25.22269238

    1. Another strategy is reinforcement learning (aka. constraint learning), as used in some AI systems.
  10. Jan 2022
    1. SciScore rigor report



      Ethics

      Field Sample Permit: Our findings indicate a paucity of research focusing on field trials and implementation studies related to CHIKV RDTs.

      IRB: I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.

      Consent: I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.

      Inclusion and Exclusion Criteria

      Articles were excluded if (i) the studies were reviews, case reports, or opinion articles; (ii) the studies evaluated the performance of reverse transcription loop-mediated isothermal amplification (RT-LAMP) assays; (iii) the studies were related to an outbreak investigation without the evaluation of the accuracy of CHIKV RDTs; (iv) the studies used an inappropriate study population (asymptomatic individuals); (v) the studies described inappropriate

      Attrition

      Based on the title and the abstract, 96 were excluded, with 89 full-text articles retrieved and assessed for eligibility.

      Sex as a biological variable

      not detected.

      Subject Demographics

      Age: not detected. Weight: not detected.

      Randomization

      Similarly, there was a high risk of bias in the patient selection domain because only three studies enrolled a consecutive or random sample of eligible patients with suspicion of CHIKV infection to reduce the bias in the diagnostic accuracy of the index test.

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      not required.

      Data Information

      Availability: The 90 Prisma-ScR checklist is available in the Supplementary material.

      Identifiers: medRxiv preprint doi: https://doi.org/10.1101/2022.01.28.22270018; this version posted January 30, 2022. https://doi.org/10.1101/2022.01.28.22270018

    1. SciScore rigor report



      Ethics

      IRB: Institutional Review Board and all participants gave their signed informed consent.

      Consent: Institutional Review Board and all participants gave their signed informed consent.

      Inclusion and Exclusion Criteria

      83 years; 34 males; 57 right-handed, see Table 1) met the inclusion criteria: All patients were older than 18 years, presented with first-ever ischemic (83%) or haemorrhagic (17%) stroke and behavioural deficits as assessed by a neurological examination.

      Attrition

      Patients who had a history of neurological or psychiatric presentations (e.g. transient ischemic attack), multifocal or bilateral strokes, or had MRI contraindications (e.g. claustrophobia, ferromagnetic objects) were excluded from the analysis (n = 131 patients, see the enrollment flowchart in the supplementary materials from Corbetta et al. 2015).

      Sex as a biological variable

      Handedness (% right-handed): 91.94; Sex (% female): 45.16. Abbreviations: SD = standard deviation.

      Subject Demographics

      Age: 83 years; 34 males; 57 right-handed, see Table 1) met the inclusion criteria: All patients were older than 18 years, presented with first-ever ischemic (83%) or haemorrhagic (17%) stroke and behavioural deficits as assessed by a neurological examination.

      Randomization

      The task instructions require patients to place and remove the nine pegs one at a time and in random order as quickly as possible (Mathiowetz et al. 1985; Oxford Grice et al. 2003).

      Blinding

      Two board-certified neurologists (Drs Corbetta and Carter) reviewed all segmentations blinded to the individual behavioural data.

      Power Analysis

      We believe that adding other factors ( e.g. demographic , clinical , socioeconomic variables ) that likely interact with the recovery of patients can help us increase the model’s predictive power.

      Replication

      not required.

      Cell Line Authentication

      Authentication: However, most of the studies fall into one of the pitfalls that were described above (i.e. overfitting, generalisability, and diaschisis) as the models are not validated in an independent dataset.

      Code Information

      Identifiers: This procedure is available as supplementary code with the manuscript (see https://github.com/lidulyan/Hierarchical-Linear-Regression-R-).

      https://github.com/lidulyan/Hierarchical-Linear-Regression-R-

      Data Information

      Availability: It is made available under a CC-BY-NC 4.0 International license.

      Identifiers: preprint doi: https://doi.org/10.1101/2021.12.01.21267129; this version posted December 2, 2021.

      https://doi.org/10.1101/2021.12.01.21267129

    1. SciScore rigor report



      Ethics

      Field Sample Permit: Collection of data for detecting cellular spatiotemporal condition supporting circularization: For this purpose, online database and web server were used by taking specific queries, like RBP-types or lncRNAs, to search out their special location inside cellular spaces

      Inclusion and Exclusion Criteria

      not required.

      Attrition

      not required.

      Sex as a biological variable

      not required.

      Subject Demographics

      Age: not required.

      Weight: not required.

      Randomization

      To reduce computational complexity in dealing with very large databases where the number of data points is greater than 1000, sample datasets were used through random selection of data from the original database.

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      not required.

      Data Information

      Identifiers: We analyzed the spread of this biomolecular entity outside and inside the sub-cellular space along with assimilating other reported pieces of information (e.g., about RBP molecules involved in circularization of such). bioRxiv preprint doi: https://doi.org/10.1101/2021.10.26.465935; this version posted October 26, 2021. https://doi.org/10.1101/2021.10.26.465935

    1. SciScore rigor report



      Ethics

      Field Sample Permit: Seeds of sorghum (Sorghum bicolor) were obtained from the seed collection unit of the Office of the Agricultural Development Programme, Benin City, Edo State, Nigeria. Ferruginous (or iron elevated) soil used in this present study was obtained from around the Life Sciences Faculty environment and pooled to obtain composite sample.

      Inclusion and Exclusion Criteria

      not required.

      Attrition

      not required.

      Sex as a biological variable

      not required.

      Subject Demographics

      Age: not required.

      Weight: not required.

      Randomization

      In order to confirm ferrugenicity, samples were collected from random areas and iron content was first confirmed in the area before more samples were collected and pooled.

      Blinding

      not detected.

      Power Analysis

      not detected.

      Replication

      The experiment was laid out in a completely randomized design in a factorial arrangement and replicated three times per treatment.

      Number: The experiment was laid out in a completely randomized design in a factorial arrangement and replicated three times per treatment.

      Data Information

      Availability: It is made available under a CC-BY 4.0 International license.

      Identifiers: preprint doi: https://doi.org/10.1101/2021.11.22.469542; this version posted November 22, 2021.

      https://doi.org/10.1101/2021.11.22.469542

    1. SciScore rigor report



      Ethics

      IRB: Samples and data collections were conducted according to the guidelines of the Declaration of Helsinki, and approved by the Ethics Committee Sciences et Santé Animale n°115 (protocol code COVIFEL approved on 1 September 2020, registered under SSA_2020_010).

      Euthanasia Agents: Cells were then incubated for 72 h at 37 °C with 5% of CO2.

      Field Sample Permit: These experiments were approved by the Anses/ENVA/UPEC ethics committee and the French Ministry of Research (Apafis n°24818-2020032710416319).

      Consent: All sera from the first cohort, and whole blood samples from the second cohort, were obtained from the Toulouse hospital, where all patients give, by default, their consent for any biological material left over to be used for research purposes after all the clinical tests requested by doctors have been duly completed.

      Inclusion and Exclusion Criteria

      not detected.

      Attrition

      One additional conclusion that can be drawn from the comparison of the results of the RBD-ELISA with those of the Jurkat-S&R-flow test is that, whilst the two methods show similar sensitivities, the ELISA signals tend to saturate very rapidly, and are thus much less dynamic than those obtained by flow cytometry.

      Sex as a biological variable

      Of note, we did not notice an increased frequency of allo-reactivity in samples from women compared to men, which suggests that allo-reactivity after pregnancy is not a major cause in the origin of those allo-reactions.

      Subject Demographics

      Age: Experiments on virally-infected hamsters: Eight-week-old female Syrian golden hamsters (Mesocricetus auratus, strain RjHan:AURA) from Janvier’s breeding Center (Le Genest, St Isle, France) were housed in an animal biosafety level 3 facility (A-BSL3), with ad libitum access to water and food.

      Randomization

      The results of the second cohort, which comprised a few Covid patients, but also a large proportion of blood samples randomly picked among those from patients hospitalized for conditions unrelated to Covid-19, yielded a much less clear picture than the first one.

      Blinding

      On the other hand, the situation was much less clear-cut for the cohort comprising blood samples picked more or less randomly and blindly among those available as left-overs from the hematology department and was, therefore, more akin to a ‘real’ population.

      Power Analysis

      not detected.

      Replication

      not required.

      Cell Line Authentication

      Contamination: The Jurkat-S and Jurkat-R cell lines were both checked for the absence of mycoplasma contamination using the HEK blue hTLR2 kit (Invivogen, Toulouse, France).

      Authentication: For the same reason , the blood samples for the experiment shown on Figure 3A were collected by one of the authors by simple finger-pricking.

    1. He said the new AI tutor platform collects “competency skills graphs” made by educators, then uses AI to generate learning activities, such as short-answer or multiple-choice questions, which students can access on an app. The platform also includes applications that can chat with students, provide coaching for reading comprehension and writing, and advise them on academic course plans based on their prior knowledge, career goals and interests.

      I saw an AI Tutor demo at ASU+GSV in 2021 and it was still early stage. Today, the features highlighted here have yet to be manifested in powerful ways that are worth utilizing; however, I do believe the aspirations are likely to be realized, and in ways beyond what the product managers are even hyping. (For example, I suspect AI Tutor will one day be able to provide students feedback in the voice/tone of their specific instructor.)

  11. Dec 2021
    1. Word vectors capture the context of their corresponding word. They're often inaccurate for extracting actual semantics of language (for example, you can't use them to find antonyms), but they do work well for identifying an overall tonal direction.

      Embeddings for logos: Can an embedding be used to encode some style features about logos?
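
      A minimal sketch of the "tonal direction" point above, using cosine similarity over made-up 3-dimensional vectors (illustrative values, not from any trained model):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "embeddings" -- illustrative values, not from a real model.
vectors = {
    "good":     [0.9, 0.1, 0.3],
    "great":    [0.8, 0.2, 0.4],
    "terrible": [-0.7, 0.1, 0.2],
}

# Words with a similar tone sit close together in the space...
assert cosine(vectors["good"], vectors["great"]) > 0.9

# ...but antonyms are not reliably "opposite": "terrible" still shares
# context features with "good" (both describe quality), so its similarity
# is merely negative-ish rather than exactly -1.
print(cosine(vectors["good"], vectors["terrible"]))
```

      This is exactly why the quoted note says you can't use word vectors to find antonyms, while an overall tonal direction still comes through.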

    Tags

    Annotators

    URL

    1. Standard algorithms as a reliable engine in SaaS https://en.itpedia.nl/2021/12/06/standaard-algoritmen-als-betrouwbaar-motorblok-in-saas/ The term "Algorithm" has gotten a bad rap in recent years. This is because large tech companies such as Facebook and Google are often accused of threatening our privacy. However, algorithms are an integral part of every application. As is known, SaaS is standard software, which makes use of algorithms just like other software.

      • But what are algorithms anyway?
      • How can we use standard algorithms?
      • How do standard algorithms end up in our software?
      • When is software not an algorithm?
    1. automatic OER processing

      I am unsure of what "automatic OER processing" might mean. Can anyone help?

      The closest I came was in a section of an International Journal of OER paper by Stephen Downes, A Look at the Future of Open Educational Resources, where in the Artificial Intelligence section he illustrates an example of AI creating OER (?)

      What is relevant to open education is that the services offered by these programs will be available as basic resources to help build courses, learning modules, or interactive instruction. For example, Figure 3 illustrates a simple case. It takes the URL of an image, loads it, and connects an online artificial intelligence gateway offered by Microsoft as part of its Azure cloud services using an API key generated from an Azure account.

      The Azure AI service automatically generates a description of the image, which is used as an alt tag, so the image can be accessible; the alt tag can be read by a screen reader for those who aren’t able to actually see the image. In this case, the image recognition technology automatically created the text “a large waterfall over a rocky cliff,” along with a more complete set of analytical data about the image.

      Yes, this is interesting and a useful tool for content creation, but to me it seems a far leap from there to creating educational content.

  12. Nov 2021
    1. Boosting is an approach to machine learning based on the idea of creating a highly accurate prediction rule by combining many relatively weak and inaccurate rules

      This definition applies to all ensemble methods, right?
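
      Not quite: the quoted definition highlights what is boosting-specific. The weak rules are fit sequentially on reweighted examples and combined by a weighted vote, whereas e.g. bagging trains its members independently on uniform resamples. A minimal AdaBoost-style sketch on toy data (decision stumps; all values illustrative):

```python
import math

# Toy 1-D dataset that NO single threshold rule labels correctly.
X = [0, 1, 2, 3, 4, 5]
y = [-1, 1, -1, 1, 1, 1]

def stumps():
    """Every weak rule of the form: predict `s` iff x >= t, else -s."""
    for t in range(7):
        for s in (1, -1):
            yield lambda x, t=t, s=s: s if x >= t else -s

def adaboost(X, y, rounds=3):
    n = len(X)
    w = [1.0 / n] * n            # start with uniform example weights
    ensemble = []                # list of (alpha, weak_rule) pairs
    for _ in range(rounds):
        # 1. Pick the weak rule with the lowest *weighted* error.
        best, best_err = None, float("inf")
        for h in stumps():
            err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
            if err < best_err:
                best, best_err = h, err
        alpha = 0.5 * math.log((1 - best_err) / max(best_err, 1e-12))
        ensemble.append((alpha, best))
        # 2. Up-weight the examples this rule got wrong, so the next round
        #    focuses on them. This sequential reweighting is what makes it
        #    boosting; bagging instead trains members independently.
        w = [wi * math.exp(-alpha * yi * best(xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

clf = adaboost(X, y)
assert [clf(x) for x in X] == y  # a weighted vote of 3 stumps fits perfectly
```

      Here three rounds suffice, and the combined rule is non-monotone even though every individual stump is a single threshold.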

    1. Use of AI has increased tremendously

    Tags

    Annotators

    1. Instead, current AI research on sustainability tends to emphasize the quantifiable effects of environmental pollution and climate change, and focus on solutions of continued measurement, monitoring, and optimizing for efficiency.
  13. Oct 2021
  14. Sep 2021
    1. Side note: When I flagged yours as a dupe during review, the review system slapped me in the face and seriously accused me of not paying attention, a ridiculous claim by itself since locating a (potential) dupe requires quite a lot of attention.
  15. Aug 2021
    1. Here is a list of some open data available online. You can find a more complete list and details of the open data available online in Appendix B.

      DataHub (http://datahub.io/dataset)

      World Health Organization (http://www.who.int/research/en/)

      Data.gov (http://data.gov)

      European Union Open Data Portal (http://open-data.europa.eu/en/data/)

      Amazon Web Service public datasets (http://aws.amazon.com/datasets)

      Facebook Graph (http://developers.facebook.com/docs/graph-api)

      Healthdata.gov (http://www.healthdata.gov)

      Google Trends (http://www.google.com/trends/explore)

      Google Finance (https://www.google.com/finance)

      Google Books Ngrams (http://storage.googleapis.com/books/ngrams/books/datasetsv2.html)

      Machine Learning Repository (http://archive.ics.uci.edu/ml/)

      As an idea of open data sources available online, you can look at the LOD cloud diagram (http://lod-cloud.net ), which displays the connections of the data link among several open data sources currently available on the network (see Figure 1-3).

    1. Normally, thousands of rabbits and guinea pigs are used and killed, in scientific laboratories, for experiments which yield great and tangible benefits to humanity. This war butchered millions of people and ruined the health and lives of tens of millions. Is this climax of the pre-war civilization to be passed unnoticed, except for the poetry and the manuring of the battle fields, that the “poppies blow” stronger and better fed? Or is the death of ten men on the battle field to be of as much worth in knowledge gained as is the life of one rabbit killed for experiment? Is the great sacrifice worth analysing? There can be only one answer—yes. But, if truth be desired, the analysis must be scientific.

      Idea: Neural net parameter analysis but with society as the 'neural net' and the 'training examples' things like industrial accidents, etc. How many 'training examples' does it take to 'learn' a lesson, and what can we infer about the rate of learning from these statistics?

  16. Jul 2021
    1. Facebook AI. (2021, July 16). We’ve built and open-sourced BlenderBot 2.0, the first #chatbot that can store and access long-term memory, search the internet for timely information, and converse intelligently on nearly any topic. It’s a significant advancement in conversational AI. https://t.co/H17Dk6m1Vx https://t.co/0BC5oQMEck [Tweet]. @facebookai. https://twitter.com/facebookai/status/1416029884179271684

    1. An “attention map” of each prediction shows the important data points considered by the models as they make that prediction.

      This gets us closer to explainable AI, in that the model is showing the clinician which variables were important in informing the prediction.

    1. Recommendations:

       • DON'T use shifted PPMI with SVD.
       • DON'T use SVD "correctly", i.e. without eigenvector weighting (performance drops 15 points compared to eigenvalue weighting with p = 0.5).
       • DO use PPMI and SVD with short contexts (window size of 2).
       • DO use many negative samples with SGNS.
       • DO always use context distribution smoothing (raise the unigram distribution to the power α = 0.75) for all methods.
       • DO use SGNS as a baseline (robust, fast and cheap to train).
       • DO try adding context vectors in SGNS and GloVe.
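
      The context-distribution-smoothing recommendation (α = 0.75) can be sketched on a toy co-occurrence matrix (all counts made up for illustration):

```python
import numpy as np

# Toy word-context co-occurrence counts (rows: words, cols: contexts).
counts = np.array([
    [8.0, 1.0, 0.0],
    [2.0, 6.0, 1.0],
    [0.0, 1.0, 7.0],
])

def ppmi(counts, alpha=0.75):
    """Positive PMI with context distribution smoothing."""
    total = counts.sum()
    p_wc = counts / total                 # joint word-context probability
    p_w = counts.sum(axis=1) / total      # word marginal
    # Smoothing: raise raw context counts to the power alpha before
    # normalizing; this dampens PMI's bias toward rare contexts.
    smoothed = counts.sum(axis=0) ** alpha
    p_c = smoothed / smoothed.sum()       # smoothed context marginal
    with np.errstate(divide="ignore"):
        pmi = np.log(p_wc / (p_w[:, None] * p_c[None, :]))
    return np.maximum(pmi, 0.0)           # keep only positive PMI

M = ppmi(counts)
assert M.shape == counts.shape and (M >= 0).all()
```

      Unseen pairs get PPMI 0 (log 0 is clipped), and the smoothed context marginal slightly boosts rare contexts' probability, shrinking their otherwise inflated PMI scores.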
  17. Jun 2021
    1. many other systems that are already here or not far off will have to make all sorts of real ethical trade-offs

      And the problem is that even human beings are not very sensitive to how this can be done well. Because there is such diversity in human cultures, preferences, and norms, deciding whose values to prioritise is problematic.

    1. One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning

      This is a big lesson. As a field, we still have not thoroughly learned it, as we are continuing to make the same kind of mistakes. To see this, and to effectively resist it, we have to understand the appeal of these mistakes. We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.

    1. Baudrillard

      Surprised to see Baudrillard categorized as harder? more opaque? more sophisticated? than Derrida... Someone who had read both might switch the order...

    2. he intellectual equivalent of peacock feathers

      I can't find it right now, but recently came across an example of how a different field, perhaps closer to Morningstar's, has experienced a kind of "drift", wherein a sizable portion of artificial intelligence research was characterized as being of low quality and published only due to a small "in group" colluding.

  18. Apr 2021
    1. Deep Reinforcement Learning and its Neuroscientific Implications In this paper, the authors provided a high-level introduction to deep RL, discussed some of its initial applications to neuroscience, and surveyed its wider implications for research on brain and behaviour and concluded with a list of opportunities for next-stage research. Although DeepRL seems to be promising, the authors wrote that it is still a work in progress and its implications in neuroscience should be looked at as a great opportunity. For instance, deep RL provides an agent-based framework for studying the way that reward shapes representation, and how representation, in turn, shapes learning and decision making — two issues which together span a large swath of what is most central to neuroscience.  Check the paper here.

      This should be of interest to the @braingel group and others interested in the intersections of AI and neuroscience.

  19. Mar 2021
    1. I can see what I was doing a handful of years ago or to see a forgotten picture of one of my children doing something cute
    1. The digital universe could add some 175 zettabytes of data per year by 2025, according to the market-analysis firm IDC.
    2. The process of DNA data storage combines DNA synthesis, DNA sequencing and an encoding and decoding algorithm to pack information into DNA more durably and at higher density than is possible in conventional media. That could be up to 17 exabytes per gram1.
    1. Using chemicals to improve our economy of attention and become emotionally "fitter" is an option that penetrated public consciousness some time ago.

      Same is true of reinforcement learning algorithms.

    2. They have become more significant because social interaction is governed by social convention to a much lesser extent than it was fifty years ago.

      Probably because everything is now algorithmically mediated.

  20. Feb 2021
    1. Currently, the downsides of this merger are starting to become obvious, including the loss of privacy, political polarization, psycho‑logical manipulation, addictive use, social anxiety and distraction, misinformation, and mass narcissism.53

      Downsides of AI

    2. From a historical perspective of social change, the merger between biological and AI has already crossed beyond any point of return, at least from the social science perspective of society as a whole

      The AI / biology merger

    3. Advancements in the field of AI have been dazzling. AI has not only superseded humans in many intellectual tasks, like several kinds of cancer diagnosis47 and speech recognition (reducing AI’s word-error rate from 26% to 4% just between 2012 and 2016)

      Advancements in AI

    1. move away from viewing AI systems as passive tools that can be assessed purely through their technical architecture, performance, and capabilities. They should instead be considered as active actors that change and influence their environments and the people and machines around them.

      Agents don't have free will but they are influenced by their surroundings, making it hard to predict how they will respond, especially in real-world contexts where interactions are complex and can't be controlled.

    1. Koo's discovery makes it possible to peek inside the black box and identify some key features that lead to the computer's decision-making process.

      Moving towards "explainable AI".

    1. A primary goal of AI design should be not just alignment, but legibility, to ensure that the humans interacting with the AI know its goals and failure modes, allowing critique, reuse, constraint etc.

      Applying the thinking here to artificial intelligence...

  21. Jan 2021
    1. - It is business, experts, and citizens who are the true creators of the Polish AI ecosystem. The state should above all support them. In the near future we are planning a series of open meetings with each of these groups, at which we will work together on the details - announced Antoni Rytel, deputy director of GovTech Polska. - In addition, special teams will provide ongoing support to all of these entities. We will also launch a channel for the continuous submission of technical and organisational ideas supporting the development of AI in our country - he added.

      The first steps of developing AI in Poland

    2. In the short term, protecting talent with the skills to model knowledge and analyse data in AI systems, and supporting the development of intellectual property created in our country, will be decisive for the success of the artificial intelligence policy - adds Robert Kroplewski, plenipotentiary of the Minister of Digital Affairs for the information society.

      AI talents will be even more demanded in Poland

    3. The document defines actions and goals for Poland in the short term (to 2023), medium term (to 2027) and long term (after 2027). We divided them into six areas:

       • AI and society - actions meant to make Poland one of the bigger beneficiaries of the data-driven economy, and Poles a society aware of the need to continually improve their digital competences.
       • AI and innovative companies - support for Polish AI businesses, including mechanisms for financing their growth and cooperation between start-ups and the government.
       • AI and science - support for the Polish scientific and research community in designing interdisciplinary challenges or solutions in the AI area, including measures to train a cadre of AI experts.
       • AI and education - actions from primary education up to the university level: course programmes for people at risk of losing their jobs as a result of new technologies, educational grants.
       • AI and international cooperation - actions to support Polish business in AI and to develop the technology in the international arena.
       • AI and the public sector - support for the public sector in procuring AI, better coordination of actions and further development of programmes such as GovTech Polska.

      AI priorities in Poland

    4. The development of AI in Poland will increase GDP growth by as much as 2.65 percentage points each year. By 2030 it will make it possible to automate about 49% of working time in Poland, while generating better-paid jobs in key sectors.

      Prediction of developing AI in Poland

    1. Help is coming in the form of specialized AI processors that can execute computations more efficiently and optimization techniques, such as model compression and cross-compilation, that reduce the number of computations needed. But it’s not clear what the shape of the efficiency curve will look like. In many problem domains, exponentially more processing and data are needed to get incrementally more accuracy. This means – as we’ve noted before – that model complexity is growing at an incredible rate, and it’s unlikely processors will be able to keep up. Moore’s Law is not enough. (For example, the compute resources required to train state-of-the-art AI models has grown over 300,000x since 2012, while the transistor count of NVIDIA GPUs has grown only ~4x!) Distributed computing is a compelling solution to this problem, but it primarily addresses speed – not cost.
    1. At any rate, if CSHW can be used to build a good quantitative model of human-human interactions, it might also be possible to replicate these dynamics in human-computer interactions. This could take a weak form, such as building computer systems with a similar-enough interactional syntax to humans that some people could reach entrainment with it; affective computing done right.

      [[Aligning Recommender Systems]]

  22. Dec 2020
    1. The current public dialog about these issues too often uses “AI” as an intellectual wildcard, one that makes it difficult to reason about the scope and consequences of emerging technology. Let us begin by considering more carefully what “AI” has been used to refer to, both recently and historically.

      This emerging field is often hidden under the label AI, which makes it difficult to reason about.

    2. Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.

      Analogous to the collapse of early bridges and building, before the maturation of civil engineering, our early society-scale inference-and-decision-making systems break down, exposing serious conceptual flaws.

    1. The Globe and Mail reports that Element AI sold for less than $500 million USD. This would place the purchase price well below the estimated valuation that the Montréal startup was said to have after its $200 million CAD Series B round in September 2019.

      This was effectively a down round: even though they sold for under USD $500M, that price was below the valuation implied by their CAD $200M Series B in Sep 2019, meaning they did not improve on their valuation after one year. Why?

    2. Despite being seen as a leader and a rising star in the Canadian AI sector, Element AI faced difficulties getting products to market.

      They faced productisation problems, just like many other AI startups. It looks like they had go-to-market (GTM) problems too.

    3. Element AI had more than 500 employees, including 100 PhDs.

      500 employees is indeed large, and a 100-person team of PhDs is very large as well. They could probably tackle many difficult AI problems!

    4. In 2017, the startup raised what was then a historic $137.5 million Series A funding round from a group of notable investors including Intel, Microsoft, National Bank of Canada, Development Bank of Canada (BDC), NVIDIA, and Real Ventures.

      This was indeed a historic amount raised! Probably because of Yoshua Bengio, one of the godfathers of AI!

  23. Nov 2020
    1. AI is not analogous to the big science projects of the previous century that brought us the atom bomb and the moon landing. AI is a science that can be conducted by many different groups with a variety of different resources, making it closer to computer design than the space race or nuclear competition. It doesn’t take a massive government-funded lab for AI research, nor the secrecy of the Manhattan Project. The research conducted in the open science literature will trump research done in secret because of the benefits of collaboration and the free exchange of ideas.

      AI research is not analogous to space research or an arms race.

      It can be conducted by different groups with a variety of different resources. Research conducted in the open is likely to do better because of the benefits of collaboration.

  24. Oct 2020
    1. Facebook AI is introducing M2M-100, the first multilingual machine translation (MMT) model that can translate between any pair of 100 languages without relying on English data. It’s open sourced here. When translating, say, Chinese to French, most English-centric multilingual models train on Chinese to English and English to French, because English training data is the most widely available. Our model directly trains on Chinese to French data to better preserve meaning. It outperforms English-centric systems by 10 points on the widely used BLEU metric for evaluating machine translations. M2M-100 is trained on a total of 2,200 language directions — or 10x more than previous best, English-centric multilingual models. Deploying M2M-100 will improve the quality of translations for billions of people, especially those that speak low-resource languages. This milestone is a culmination of years of Facebook AI’s foundational work in machine translation. Today, we’re sharing details on how we built a more diverse MMT training data set and model for 100 languages. We’re also releasing the model, training, and evaluation setup to help other researchers reproduce and further advance multilingual models. 

      Summary of the 1st AI model from Facebook that translates directly between languages (not relying on English data)

  25. Sep 2020
  26. Aug 2020
  27. Jul 2020
  28. Jun 2020
    1. Google’s novel response has been to compare each app to its peers, identifying those that seem to be asking for more than they should, and alerting developers when that’s the case. In its update today, Google says “we aim to help developers boost the trust of their users—we surface a message to developers when we think their app is asking for a permission that is likely unnecessary.”
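Google hasn't published the exact method behind this peer comparison, but the underlying idea, flagging permissions an app requests that are rare among its peers, can be sketched. Everything below (function name, data shape, the 20% threshold) is a hypothetical illustration, not Google's implementation:

```python
from collections import Counter

def flag_unusual_permissions(app_permissions, peer_apps, threshold=0.2):
    """Return permissions the app requests that fewer than `threshold`
    of its peer apps request -- a crude outlier heuristic."""
    counts = Counter(p for peer in peer_apps for p in peer)
    n = len(peer_apps)
    return sorted(p for p in app_permissions if counts[p] / n < threshold)

# Hypothetical peer group of photo-editor apps and their permission sets:
peers = [
    {"CAMERA", "STORAGE"},
    {"CAMERA"},
    {"CAMERA", "STORAGE"},
    {"CAMERA", "LOCATION"},
]
app = {"CAMERA", "CONTACTS", "LOCATION"}
print(flag_unusual_permissions(app, peers))  # ['CONTACTS']
```

Here CONTACTS is flagged because no peer requests it, while LOCATION (requested by one of four peers) squeaks past the threshold, the kind of "likely unnecessary" signal the quoted update describes surfacing to developers.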
    1. 5A85F3

      I have signed up for Hypothesis and verified my email so I can leave you the following comment:

      Long-time reader, first-time poster here. Greatest blog of all time.

  29. May 2020
    1. Machine learning has a limited scope
    2. AI is a bigger concept to create intelligent machines that can simulate human thinking capability and behavior, whereas, machine learning is an application or subset of AI that allows machines to learn from data without being programmed explicitly
    1. Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed
    1. machines tend to be designed for the lowest possible risk and the least casualties

      why is this a problem?

    2. machines must weigh the consequences of any action they take, as each action will impact the end result
    3. goals of artificial intelligence include learning, reasoning, and perception
    4. refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions
  30. Apr 2020
    1. As the largest Voronoi regions belong to the states on the frontier of the search, this means that the tree preferentially expands towards large unsearched areas.
    2. inherently biased to grow towards large unsearched areas of the problem
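The Voronoi-bias argument in these excerpts describes the RRT (Rapidly-exploring Random Tree) algorithm: sample a random point, extend the *nearest* existing node toward it, so frontier nodes, whose Voronoi regions are largest, are selected most often. A minimal 2D sketch in the unit square, with no obstacles and an arbitrary step size:

```python
import math
import random

def rrt(start, n_iters=200, step=0.05, seed=0):
    """Grow an RRT in the unit square; returns (nodes, parent_indices)."""
    rng = random.Random(seed)
    nodes = [start]
    parents = [None]
    for _ in range(n_iters):
        sample = (rng.random(), rng.random())      # uniform random target
        # The nearest existing node "wins" the sample -- this is the Voronoi bias:
        # frontier nodes own the most empty space, so they get extended most often.
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        t = min(1.0, step / d) if d > 0 else 0.0
        new = (nx + t * (sample[0] - nx), ny + t * (sample[1] - ny))
        nodes.append(new)
        parents.append(i)
    return nodes, parents

nodes, parents = rrt(start=(0.5, 0.5))
print(len(nodes), "nodes")  # tree spreads outward from the start
```

No explicit "explore the frontier" rule appears anywhere in the loop; the outward bias falls out of the nearest-neighbor step alone, which is the point the quoted lines make.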
    1. Natural Language Processing with Python – Analyzing Text with the Natural Language Toolkit Steven Bird, Ewan Klein, and Edward Loper
    1. How to setup and use Stanford CoreNLP Server with Python Khalid Alnajjar August 20, 2017 Natural Language Processing (NLP) Leave a CommentStanford CoreNLP is a great Natural Language Processing (NLP) tool for analysing text. Given a paragraph, CoreNLP splits it into sentences then analyses it to return the base forms of words in the sentences, their dependencies, parts of speech, named entities and many more. Stanford CoreNLP not only supports English but also other 5 languages: Arabic, Chinese, French, German and Spanish. To try out Stanford CoreNLP, click here.Stanford CoreNLP is implemented in Java. In some cases (e.g. your main code-base is written in different language or you simply do not feel like coding in Java), you can setup a Stanford CoreNLP Server and, then, access it through an API. In this post, I will show how to setup a Stanford CoreNLP Server locally and access it using python.
    1. CoreNLP includes a simple web API server for servicing your human language understanding needs (starting with version 3.6.0). This page describes how to set it up. CoreNLP server provides both a convenient graphical way to interface with your installation of CoreNLP and an API with which to call CoreNLP using any programming language. If you’re writing a new wrapper of CoreNLP for using it in another language, you’re advised to do it using the CoreNLP Server.
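The server API described above accepts annotator settings as a URL-encoded JSON `properties` parameter and raw text in the request body. A minimal stdlib-only sketch of calling it from Python, assuming a CoreNLP server is already running on localhost port 9000 (the helper names are my own):

```python
import json
import urllib.parse
import urllib.request

def build_url(base, annotators):
    """Build the server URL with annotator settings as URL-encoded JSON."""
    props = json.dumps({"annotators": annotators, "outputFormat": "json"})
    return base + "/?properties=" + urllib.parse.quote(props)

def corenlp_annotate(text, annotators="tokenize,ssplit,pos",
                     base="http://localhost:9000"):
    """POST raw text to a running CoreNLP server; returns the parsed JSON reply."""
    req = urllib.request.Request(build_url(base, annotators),
                                 data=text.encode("utf-8"))
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# With a server running (java -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000):
# doc = corenlp_annotate("Stanford CoreNLP is great.")
# print([t["word"] for t in doc["sentences"][0]["tokens"]])
```

Third-party wrappers exist, but hitting the HTTP endpoint directly like this is what the "call CoreNLP using any programming language" claim in the excerpt amounts to.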
    1. Programming languages and operating systems Stanford CoreNLP is written in Java; recent releases require Java 1.8+. You need to have Java installed to run CoreNLP. However, you can interact with CoreNLP via the command-line or its web service; many people use CoreNLP while writing their own code in Javascript, Python, or some other language. You can use Stanford CoreNLP from the command-line, via its original Java programmatic API, via the object-oriented simple API, via third party APIs for most major modern programming languages, or via a web service. It works on Linux, macOS, and Windows. License The full Stanford CoreNLP is licensed under the GNU General Public License v3 or later. More precisely, all the Stanford NLP code is GPL v2+, but CoreNLP uses some Apache-licensed libraries, and so our understanding is that the the composite is correctly licensed as v3+.
    2. Stanford CoreNLP provides a set of human language technology tools. It can give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize dates, times, and numeric quantities, mark up the structure of sentences in terms of phrases and syntactic dependencies, indicate which noun phrases refer to the same entities, indicate sentiment, extract particular or open-class relations between entity mentions, get the quotes people said, etc. Choose Stanford CoreNLP if you need: An integrated NLP toolkit with a broad range of grammatical analysis tools A fast, robust annotator for arbitrary texts, widely used in production A modern, regularly updated package, with the overall highest quality text analytics Support for a number of major (human) languages Available APIs for most major modern programming languages Ability to run as a simple web service
    1. OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code. The library has more than 2500 optimized algorithms, which includes a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high resolution image of an entire scene, find similar images from an image database, remove red eyes from images taken using flash, follow eye movements, recognize scenery and establish markers to overlay it with augmented reality, etc. OpenCV has more than 47 thousand people of user community and estimated number of downloads exceeding 18 million. The library is used extensively in companies, research groups and by governmental bodies. Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, Toyota that employ the library, there are many startups such as Applied Minds, VideoSurf, and Zeitera, that make extensive use of OpenCV. OpenCV’s deployed uses span the range from stitching streetview images together, detecting intrusions in surveillance video in Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at Willow Garage, detection of swimming pool drowning accidents in Europe, running interactive art in Spain and New York, checking runways for debris in Turkey, inspecting labels on products in factories around the world on to rapid face detection in Japan. 
It has C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available. A full-featured CUDA and OpenCL interfaces are being actively developed right now. There are over 500 algorithms and about 10 times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a templated interface that works seamlessly with STL containers.
    1. that can be partially automated but still require human oversight and occasional intervention
    2. but then have a tool that will show you each of the change sites one at a time and ask you either to accept the change, reject the change, or manually intervene using your editor of choice.
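The accept/reject/edit workflow these excerpts describe (as in interactive codemod tools) can be sketched as a loop over proposed change sites, with the interactive prompt replaced by a pluggable decision function. All names here are hypothetical illustration:

```python
def review_changes(changes, decide):
    """Walk proposed (location, old, new) changes one at a time.
    `decide(change)` returns 'accept', 'reject', or a manually edited string."""
    applied = []
    for loc, old, new in changes:
        verdict = decide((loc, old, new))
        if verdict == "accept":
            applied.append((loc, new))     # take the proposed change
        elif verdict == "reject":
            applied.append((loc, old))     # keep the original text
        else:                              # manual intervention: use the edit
            applied.append((loc, verdict))
    return applied

changes = [("a.py:3", "foo", "bar"), ("a.py:9", "baz", "qux")]
# Accept the first change, hand-edit the second:
result = review_changes(changes, lambda c: "accept" if c[0] == "a.py:3" else "quux")
print(result)  # [('a.py:3', 'bar'), ('a.py:9', 'quux')]
```

In a real tool, `decide` would prompt the user in their editor of choice; passing it in as a function keeps the human-in-the-loop step testable.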
  31. Mar 2020
    1. Humans can no longer compete with AI in chess. They should not be without AI in litigation either.
    2. Just as chess players marshall their 16 chess pieces in a battle of wits, attorneys must select from millions of cases in order to present the best legal arguments.
    1. Now that we’re making breakthroughs in artificial intelligence, there’s a deeply cemented belief that the human brain works as a deterministic, mathematical process that can be replicated exactly by a Turing machine.
    1. Overestimating robots and AI underestimates the very people who can save us from this pandemic: Doctors, nurses, and other health workers, who will likely never be replaced by machines outright. They’re just too beautifully human for that.

      Yes - we used to have human elevator operators and telephone operators who would manually connect your calls. We now have automated check-out lines in stores and toll booths. In the future, we will have automated taxis and, yes, even some automated healthcare. Automated healthcare will enable better coverage with the same number of healthcare workers (or the same level of coverage with fewer workers). There can be good things or bad things about it - the way we do it will absolutely matter. We just need to think through how best to obtain the good without much of the bad ... rather than assuming it won't ever happen.

    2. the demand for products will keep climbing as well, as we’re seeing with this hiring bonanza.

      Probably not. The increase in demand is a result of social distancing and hoarding. This is not a steady state. The demand for many things will return to normal (or below) once people figure out what they are using and what is still available. For example - you don't use that much more toilet paper when you are at home ... but you buy more if you don't know when it will be available again.

    3. Last week, Amazon officials announced that in response to the coronavirus they were hiring 100,000 additional humans to work in fulfillment centers and as delivery drivers, showing that not even this mighty tech company can do without people.

      Amazon has adopted automation in a very big and increasing way. Just because it has not automated everything yet, doesn't mean that complete automation isn't possible. We already know automated delivery is in the works. Amazon, Uber and Google are all working on the details of autonomous navigation ... and the ultimate result will absolutely impact future drivers (pun intended).

    4. Why haven’t the machines saved us yet?

      Because machines don't buy tickets to fly on planes or vacation on cruise ships.

    5. And that’s all because of the vulnerabilities of the human worker.

      It has more to do with the vulnerabilities of the human traveler and the human guest (and less to do with the workers). The demand for these services has simply gone down while people try to avoid spreading the virus.

    1. The system has been criticised due to its method of scraping the internet to gather images and storing them in a database. Privacy activists say the people in those images never gave consent. “Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said in a recent interview with CoinDesk. “It’s kind of a bizarre argument to make because [your face is the] most public thing out there.”